KR101801266B1 - Method and Apparatus for image classification - Google Patents

Method and Apparatus for image classification Download PDF

Info

Publication number
KR101801266B1
Authority
KR
South Korea
Prior art keywords
condition
image
value
satisfied
area
Prior art date
Application number
KR1020160012322A
Other languages
Korean (ko)
Other versions
KR20170091824A (en)
Inventor
송창우
정우진
문영식
Original Assignee
한양대학교 에리카산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한양대학교 에리카산학협력단 filed Critical 한양대학교 에리카산학협력단
Priority to KR1020160012322A priority Critical patent/KR101801266B1/en
Publication of KR20170091824A publication Critical patent/KR20170091824A/en
Application granted granted Critical
Publication of KR101801266B1 publication Critical patent/KR101801266B1/en

Links

Images

Classifications

    • G06K9/6267
    • G06K9/6202
    • G06K9/6234
    • G06K9/6269
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and apparatus for classifying images or videos. More particularly, the present invention relates to a technique for determining whether a specific image was captured in a backlight situation, so that it can be classified as a backlight image. In particular, the present invention is characterized by a first condition determination unit that determines whether a first condition is satisfied based on the histogram form of an image to be classified; a second condition determination unit that divides the classification target image into a subject area and a background area and determines whether a second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value; a third condition determination unit that determines whether a third condition is satisfied based on the position of the subject area or the background area of the classification target image; and an image classification unit that classifies the image as a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.


Description

TECHNICAL FIELD [0001] The present invention relates to an image classification apparatus and method.

The present invention relates to a method and apparatus for classifying images or videos. More particularly, the present invention relates to a technique for determining whether a specific image was captured in a backlight situation, so that it can be classified as a backlight image.

A backlight image or video is an image or video captured in a lighting environment in which the light is projected from behind the subject. In particular, a backlight image has a dark region and a bright region with a large brightness difference, and its contrast ratio is poor.

For this reason, detailed information about the subject cannot be obtained in a backlight image, so it is necessary to improve the contrast between the bright region and the dark region.

However, although a variety of image processing techniques have been studied to improve the contrast ratio of backlight images, little research has been conducted on techniques for determining whether an image is a backlight image in the first place. That is, to improve the contrast ratio of a backlight image, a contrast-ratio improvement technique can be applied after a person manually selects the backlight image, but no technology for automatically determining whether a specific image is a backlight image has been proposed.

Accordingly, there is a problem in that an image cannot be automatically determined to be a backlight image, and therefore, in the case of a backlight image, the contrast ratio is not improved and information about the subject cannot be obtained.

SUMMARY OF THE INVENTION [0005] The present invention, devised against the background described above, proposes a method and apparatus for automatically determining whether an image to be classified was captured in a backlight situation and classifying it accordingly.

In addition, the present invention proposes a method and apparatus that can classify more accurately whether a classification target image is a backlight image by using multiple conditions.

According to an aspect of the present invention, there is provided an image classification apparatus including: a first condition determination unit that determines whether a first condition is satisfied based on the histogram form of an image to be classified; a second condition determination unit that divides the classification target image into a subject area and a background area and determines whether a second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value; a third condition determination unit that determines whether a third condition is satisfied based on the position of the subject area or the background area of the classification target image; and an image classification unit that classifies the image as a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.

According to another aspect of the present invention, there is provided an image classification method including: a first condition determination step of determining whether a first condition is satisfied based on the histogram form of an image to be classified; a second condition determination step of dividing the classification target image into a subject area and a background area and determining whether a second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value; a third condition determination step of determining whether a third condition is satisfied based on the position of the subject area or the background area of the classification target image; and an image classification step of classifying the image as a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.

The present invention provides the effect of automatically classifying whether a classification target image is a backlight image.

The present invention also provides the effect of classifying a specific image as a backlight image more accurately by applying multiple overlapping conditions, and of automatically correcting a captured image by improving the contrast ratio of the image so classified.

BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram illustrating an image classification apparatus according to an embodiment of the present invention.
FIG. 2 is a view for explaining a histogram of a backlight image according to an embodiment of the present invention.
FIG. 3 is a diagram for explaining the operation of the first condition determiner according to an embodiment of the present invention.
FIG. 4 is a diagram for explaining an operation of binarizing each pixel of an image by a binarization technique according to an embodiment of the present invention.
FIG. 5 is a diagram for explaining a binarization technique applied to each pixel of an image according to an embodiment of the present invention.
FIG. 6 is a view for explaining an operation of the second condition determiner distinguishing a subject area and a background area according to an embodiment of the present invention.
FIG. 7 is a diagram exemplarily showing a result of the second condition determiner dividing an image into a subject area and a background area by binarization according to an embodiment of the present invention.
FIG. 8 is a diagram for explaining the operation of the third condition determiner according to an embodiment of the present invention.
FIG. 9 is a diagram for explaining an operation of determining whether the third condition is satisfied using a center-of-gravity vector according to an embodiment of the present invention.
FIG. 10 is a diagram illustrating an image classification method according to an embodiment of the present invention.

The present invention relates to an image classification apparatus and a method thereof.

Hereinafter, some embodiments of the present invention will be described in detail with reference to the exemplary drawings. In describing the components of the present invention, terms such as first, second, A, B, (a), and (b) may be used. These terms are intended only to distinguish one constituent element from another, and do not limit the nature, sequence, or order of the constituent elements. When a component is described as being "connected", "coupled", or "linked" to another component, the component may be directly connected or coupled to the other component, but it should also be understood that another element may be "connected", "coupled", or "linked" between them.

In the present invention, a technique for automatically determining and classifying whether an image to be classified was captured in a backlight situation will be described. That is, prior to improving the contrast ratio of a backlight image, a method and apparatus for automatically classifying the image as a backlight image are described, which automate the whole backlight-image improvement process and improve its accuracy.

In the present invention, backlight images are mainly described, but the same contents can be applied to backlight videos. That is, the term image in the present invention is used to mean images, photographs, videos, and the like.

The image classification apparatus includes: a first condition determination unit that determines whether a first condition is satisfied based on the histogram form of an image to be classified; a second condition determination unit that divides the classification target image into a subject area and a background area and determines whether a second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value; a third condition determination unit that determines whether a third condition is satisfied based on the position of the subject area or the background area of the classification target image; and an image classification unit that classifies the image as a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.

FIG. 1 is a block diagram illustrating an image classification apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the image classification apparatus 100 may include a first condition determination unit 110 for determining whether a first condition is satisfied based on a histogram form of an image to be classified.

The first condition determination unit 110 may calculate a histogram of the classification target image and determine whether the first condition is satisfied using the calculated histogram. For example, the first condition determination unit 110 may determine whether the classification target image satisfies the first condition using the shape of the histogram. Specifically, the first condition determination unit 110 exploits one of the characteristic features of a backlight image, namely a pronounced brightness difference between the subject and the background, and determines from the histogram shape whether the first condition is satisfied.

For example, when the histogram has a bimodal form, the first condition determiner 110 may determine that the first condition is satisfied. For this purpose, the first condition determination unit 110 calculates a unimodality discrimination value indicating the degree of unimodality of the histogram of the classification target image, and compares the calculated unimodality discrimination value with a preset reference value to determine whether the first condition is satisfied. The unimodality discrimination value indicates how close the histogram is to a unimodal shape, and can be calculated as a normalized value between 0 and 1 inclusive; the closer the histogram is to a unimodal shape, the closer the value is to 1. As an example, the unimodality discrimination value can be calculated by Hartigan's dip test. That is, the first condition determination unit 110 may determine that the classification target image satisfies the first condition when the unimodality discrimination value of the histogram is equal to or less than the preset reference value. When the classification target image is a backlight image, the bright portion and the dark portion are clearly contrasted, so that one group of pixels forms around a low brightness value of the histogram and another group forms around a high brightness value; that is, the histogram of a backlight image has a bimodal form. Hartigan's dip test is described only as an example of a technique for calculating the unimodality discrimination value, and the value can be calculated by various other methods. For example, when the difference between the first peak value and the second peak value of the histogram is equal to or less than a certain standard, and the positions of the first and second peaks on the X axis are separated by at least a predetermined reference distance, the histogram may be judged to be bimodal. Alternatively, the unimodality discrimination value may be calculated using the first peak value and the second peak value.
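
For illustration only, the following Python sketch implements the peak-based variant described at the end of the preceding paragraph (not Hartigan's dip test itself). The function name, the 50% peak-height ratio, and the 100-level peak separation are assumptions introduced for the example and are not values taken from the patent.

```python
import numpy as np

def first_condition_bimodal(gray, count_ratio_min=0.5, peak_gap_min=100):
    """Rough first-condition check: decide whether the brightness histogram of a
    grayscale image (values 0-255) looks bimodal, using the two-peak heuristic
    described above. The thresholds are illustrative only."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))

    # Take the strongest peak in each half of the brightness axis so the two
    # candidate peaks cannot collapse onto neighbouring bins.
    low_peak = int(np.argmax(hist[:128]))
    high_peak = 128 + int(np.argmax(hist[128:]))

    # Condition A: the two peaks have comparable pixel counts.
    counts = np.array([hist[low_peak], hist[high_peak]], dtype=float)
    similar_height = counts.min() / max(counts.max(), 1.0) >= count_ratio_min

    # Condition B: the peaks are far apart on the brightness (X) axis.
    well_separated = (high_peak - low_peak) >= peak_gap_min

    return similar_height and well_separated
```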

Accordingly, the first condition determination unit 110 can determine whether the classification target image is a backlight image, using the histogram form of the classification target image.

Meanwhile, the image classification apparatus 100 may include a second condition determination unit 120 that divides the classification target image into a subject area and a background area and determines whether the second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value.

The second condition determination unit 120 may divide the classification target image into a subject area and a background area. The second condition determiner 120 may then calculate the average brightness value of the divided subject area and compare it with the predetermined reference brightness value to determine whether the second condition is satisfied. This exploits the feature that, in a backlight image, the subject area appears very dark.

For example, the second condition determination unit 120 may convert each pixel into a binarized value using the brightness information of each pixel of the classification target image, and group one or more pixels having the same binarized value into the same area, thereby distinguishing the subject area and the background area. That is, the second condition determination unit 120 may extract the brightness information of each pixel constituting the classification target image and binarize the brightness of each pixel using a reference value for binarizing the classification target image. For example, the second condition determiner 120 may binarize based on the brightness of each pixel using the Otsu binarization technique. In this way, the classification target image can be converted into pixels having only two index values, and pixels having the same index value can be grouped into the same area to distinguish the background area and the subject area. A specific embodiment for binarizing the classification target image and a method for determining the reference value for binarization will be described in more detail below with reference to the drawings.

If the classification target image is divided into two regions, the second condition determination unit 120 may determine whether the second condition is satisfied by comparing the preset reference brightness value with the average brightness value of the subject region. In one example, the reference brightness value may be set by clustering the average brightness values of the subject areas of a plurality of sample images into two groups and taking the maximum average brightness value in the group that contains the backlight sample images. That is, the reference brightness value may be set to the maximum average brightness value of the subject areas of the sample images that are classified as backlight images. Accordingly, the reference brightness value can be set more precisely as the number of sample images increases, and the image classification apparatus can update the reference brightness value by adding an image classified as a backlight image to the above-described sample images.

The second condition determination unit 120 may determine that the classification target image satisfies the second condition when the average brightness value of the subject area of the classification target image is equal to or less than the reference brightness value. That is, if the subject-area average brightness value of the classification target image is lower than the subject-area average brightness values of the sample images classified as backlight images, it can be determined that the classification target image satisfies the second condition for a backlight image.
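
As a rough illustration of this second-condition check, the sketch below binarizes a grayscale image with Otsu's method (via OpenCV), treats the darker region as the subject area (one of the heuristics described later with reference to FIG. 7), and compares its mean brightness with the reference brightness value. The function name and the darker-region heuristic are assumptions made for the example.

```python
import cv2

def second_condition(gray, reference_brightness):
    """Sketch: Otsu binarization, darker region taken as the subject area,
    subject mean brightness compared with the preset reference value."""
    # Otsu picks the threshold automatically; the explicit 0 is ignored.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    subject_mask = binary == 0            # darker region -> assumed subject area
    if not subject_mask.any():            # degenerate image with no dark pixels
        return False

    subject_mean = float(gray[subject_mask].mean())
    return subject_mean <= reference_brightness
```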

Meanwhile, to calculate the reference brightness value, both backlight image and normal (non-backlight) image samples can be used, and the average brightness values of the subject areas of the sample images can be divided into two groups through neural-network clustering. The maximum value of the group containing the backlight images can then be set as the reference brightness value.
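
The patent refers to neural-network clustering for this calibration step; the sketch below substitutes a plain two-centre (1-D k-means style) clustering as a stand-in, which is an assumption made purely for illustration, as are the function and parameter names.

```python
import numpy as np

def reference_brightness_value(sample_means, sample_is_backlight, iters=50):
    """Sketch: cluster the subject-area mean brightness of labelled sample
    images into two groups and return the maximum mean brightness found in
    the group that contains the backlight samples."""
    x = np.asarray(sample_means, dtype=float)
    is_backlight = np.asarray(sample_is_backlight, dtype=bool)

    # Simple two-centre k-means on the 1-D brightness values.
    centres = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(x[:, None] - centres[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                centres[k] = x[assign == k].mean()

    # The backlight group is the cluster holding most of the backlight samples.
    backlight_cluster = np.bincount(assign[is_backlight], minlength=2).argmax()
    return float(x[assign == backlight_cluster].max())
```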

Meanwhile, the image classification apparatus 100 may include a third condition determination unit 130 that determines whether the third condition is satisfied based on the position of the subject area or the background area of the classification target image.

The third condition determination unit 130 can determine whether the third condition is satisfied based on the positions of the subject area and the background area separated from the classification target image. For example, if the subject area is located in the lower part of the classification target image, the third condition determination unit 130 can determine that the third condition is satisfied. Specifically, the third condition determination unit 130 may determine whether the classification target image satisfies the third condition based on the relative positions of the background region and the subject region. For example, the third condition determiner 130 may calculate the center of gravity of the subject area and the center of gravity of the background area, and determine whether the third condition is satisfied using the vector connecting the center of gravity of the background area to the center of gravity of the subject area. For this purpose, the classification target image can be handled with the upper and lower sides of the image distinguished along the Y axis. That is, in order to determine whether the subject area is positioned lower than the background area, the top and bottom of the classification target image can be distinguished from the start.

The center of gravity of each of the subject area and the background area can be calculated from the coordinates of the pixels constituting the area and the brightness value of each pixel; the center of gravity is the center point of the corresponding area, considering both the position and the brightness of the pixels constituting it. Therefore, the center point of the background area and the center point of the subject area are each determined as a single position. Accordingly, the third condition determiner 130 calculates the vector from the center of gravity of the background area to the center of gravity of the subject area, and can determine whether the third condition is satisfied using the angle formed by this vector with the X axis. The detailed calculation of the center of gravity and the determination of whether the third condition is satisfied will be described in more detail with reference to the drawings below.

Meanwhile, the image classification apparatus 100 may include an image classification unit 140 that classifies the classification target image as a backlight image when the first condition, the second condition, and the third condition are all satisfied. The image classification unit 140 may classify the image as a backlight image when the classification target image satisfies all three of the conditions described above. Alternatively, depending on the setting, the image classification unit 140 may classify the image as a backlight image when two of the three conditions are satisfied. Alternatively, depending on how the criteria of the conditions are defined, the image classification unit 140 may classify the classification target image as a backlight image when none of the three conditions is satisfied. For example, when the criterion of each condition is defined so that a backlight image satisfies the first to third conditions, the image classification unit 140 classifies the classification target image as a backlight image if it satisfies the first to third conditions. Conversely, when the criterion of each condition is defined so that a backlight image fails the first to third conditions, the image classification unit 140 may classify the classification target image as a backlight image if it fails all of the first to third conditions. Alternatively, when the criteria of the first to third conditions are defined differently from one another, the image classification unit 140 may judge the classification target image using the respective condition criteria.
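
A minimal sketch of how the image classification unit 140 might combine the three condition results under the different settings described above is shown below; the function name, the 'mode' parameter, and its values are illustrative assumptions, not part of the patent.

```python
def classify_backlight(cond1, cond2, cond3, mode="all"):
    """Sketch of the image classification unit: combine the three condition
    results into a backlight decision under the policies described above."""
    results = (cond1, cond2, cond3)
    if mode == "all":            # backlight only if every condition is satisfied
        return all(results)
    if mode == "two_of_three":   # relaxed setting: any two conditions suffice
        return sum(results) >= 2
    if mode == "none":           # criteria defined so backlight means all unsatisfied
        return not any(results)
    raise ValueError(f"unknown mode: {mode}")
```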

As described above, according to the present invention, whether the image to be classified is a backlight image is determined automatically through a plurality of condition judgments, and the backlight-image determination can be made more accurate depending on how each condition is set.

Hereinafter, each operation of the image classification apparatus 100 will be described in more detail with reference to the drawings.

FIG. 2 is a view for explaining a histogram of a backlight image according to an embodiment of the present invention.

Referring to FIG. 2, FIG. 2(A) shows a classification target image, and FIG. 2(B) shows the histogram of that image. In FIG. 2(A), the subject area appears dark because the image was captured in a backlight situation. Therefore, the histogram shown in FIG. 2(B) has a bimodal form. Accordingly, when the histogram has a bimodal form, the first condition determiner 110 can determine that the first condition is satisfied.

Specifically, referring to FIG. 2(B), it can be seen that the histogram is concentrated around each of the two peaks 200 and 210 and is therefore bimodal. That is, the brightness of each pixel is concentrated around either the peak 200 at a low brightness intensity or the peak 210 at a high brightness intensity. As described above, in the case of a backlight image, the histogram has a bimodal shape, the numbers of pixels at the peaks 200 and 210 are similar to each other, and a large difference in brightness intensity is observed between them.

The first condition determiner 110 may determine the degree of similarity to a unimodal shape (for example, the unimodality discrimination value) using the histogram to decide whether the classification target image satisfies the first condition. For example, when the unimodality discrimination value is calculated as 0.1 and the reference value is 0.2, the classification target image can be judged to satisfy the first condition.

FIG. 3 is a diagram for explaining the operation of the first condition determiner according to an embodiment of the present invention.

Referring to FIG. 3, the first condition determination unit 110 calculates a histogram of the image to be classified (S300). Thereafter, the first condition determiner 110 may calculate the unimodality discrimination value using the histogram, which indicates the number of pixels having each brightness intensity (S310). For example, Hartigan's dip test can be used to calculate the unimodality discrimination value. When the dip test is used, the unimodality discrimination value can be calculated as a normalized value between 0 and 1 inclusive; the closer the histogram is to a unimodal shape, the closer the value is to 1.

The first condition determination unit 110 compares the calculated unimodality discrimination value with the preset reference value to check whether the discrimination value is equal to or less than the reference value (S320). If the unimodality discrimination value is equal to or less than the reference value, the first condition is determined to be satisfied (S330); if it exceeds the reference value, the first condition is determined to be unsatisfied (S340).

As described above, the first condition determination unit 110 may also calculate the unimodality discrimination value using the number of pixels at the histogram peaks and the degree of separation between the two peaks; in this case, the reference value may vary depending on the calculation method.

FIG. 4 is a diagram for explaining an operation of binarizing each pixel of an image by a binarization technique according to an embodiment of the present invention.

The second condition determination unit 120 may convert each pixel into a binarization value by using the brightness information of each pixel to divide the classification target image into two regions. There is no limitation on the method of converting into the binarization value, and in FIG. 4, a sample image expressed by six gray scales will be mainly described for convenience of explanation.

Referring to FIG. 4, FIG. 4(A) shows an image composed of pixels having six gray scales for convenience of explanation, and FIG. 4(B) shows a histogram of the number of pixels for each gray-scale index.

For example, the second condition determination unit 120 may calculate a reference value for binarization using the histogram of the image. The brightness information of each pixel can then be classified as 0 or 1 based on the reference value, and pixels with the same binarized brightness information can be set as the same area. In the image of FIG. 4(A), the pixels have brightness information divided into indices 0 to 5; for example, the number of pixels at index 0 is 8, and the number of pixels at index 4 is 9.

The second condition determiner 120 may calculate the histogram shown in FIG. 4(B) from this brightness information and use it to calculate the reference value. However, to divide the subject area and the background area accurately, it is necessary to calculate precisely the reference value that serves as the basis for the area division. This will be described below with reference to FIG. 5.

FIG. 5 is a diagram for explaining a binarization technique applied to each pixel of an image according to an embodiment of the present invention.

Referring to FIG. 5, the second condition determiner 120 may use variance to find the reference value for binarizing the image into the subject area and the background area. For example, to divide a particular set into two classes, values that lie close together can be assigned to the same class. Therefore, in order to divide the area more precisely, the variance can be used to find the point (reference value) at which the set is best divided into two classes.

Referring to FIG. 4(B), the total variance can be expressed as the sum of the within-class variance $\sigma_W^2$ and the between-class variance $\sigma_B^2$, as in Equation (1):

$$\sigma^2 = \sigma_W^2(T) + \sigma_B^2(T) \qquad (1)$$

The within-class variance can be calculated as shown in Equation (2):

$$\sigma_W^2(T) = w_1(T)\,\sigma_1^2(T) + w_2(T)\,\sigma_2^2(T) \qquad (2)$$

As shown in Equation (2), the within-class variance is the sum of the variance of class 1, $\sigma_1^2(T)$, weighted by the weight $w_1(T)$, and the variance of class 2, $\sigma_2^2(T)$, weighted by the weight $w_2(T)$. Here, each weight means the probability that a pixel belonging to the corresponding class appears in the entire image.

Since the variance is a measure of how widely a distribution is spread, a larger variance means the values are spread out more broadly, and a smaller variance means the values are concentrated around a single point.

Therefore, the smaller the variance of both classes, the more accurate the binarization of the region. That is, if the threshold that minimizes Equation (2) is found, the optimum reference value for binarization is obtained. Accordingly, the second condition determiner 120 may find the threshold that minimizes Equation (2) and use it as the reference value.

On the other hand, the between-class variance can be used to find the threshold that minimizes the within-class variance more quickly.

The between-class variance can be calculated using Equation (3):

$$\sigma_B^2(T) = w_1(T)\,w_2(T)\,\bigl[\mu_1(T) - \mu_2(T)\bigr]^2 \qquad (3)$$

Here, $\mu_1(T)$ and $\mu_2(T)$ denote the mean of each class. Because the within-class variance and the between-class variance sum to the constant total variance, maximizing the between-class variance is equivalent to minimizing the within-class variance; therefore, the threshold at which the between-class variance is maximized can be determined as the reference value.

FIG. 5 shows the within-class variance for each threshold value, computed from FIG. 4(B). As described above, the within-class variance becomes minimal when the threshold value T is set to 3. Accordingly, the second condition determination unit 120 can determine the threshold value of 3, at which the within-class variance is minimal, as the reference value and binarize the classification target image.
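
For reference, the following sketch reproduces the search illustrated in FIG. 5: it evaluates the within-class variance of Equation (2) for every candidate threshold T and returns the T at which it is minimal (equivalently, the T that maximizes Equation (3)). The function name and the histogram-based probability estimate are assumptions made for the example.

```python
import numpy as np

def otsu_reference_value(gray, levels=256):
    """Sketch: evaluate the within-class variance of Equation (2) for every
    candidate threshold T and return the T at which it is minimal."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    prob = hist / max(hist.sum(), 1)
    values = np.arange(levels)

    best_t, best_var = 1, np.inf
    for t in range(1, levels):
        w1, w2 = prob[:t].sum(), prob[t:].sum()
        if w1 == 0 or w2 == 0:             # skip thresholds with an empty class
            continue
        mu1 = (values[:t] * prob[:t]).sum() / w1
        mu2 = (values[t:] * prob[t:]).sum() / w2
        var1 = ((values[:t] - mu1) ** 2 * prob[:t]).sum() / w1
        var2 = ((values[t:] - mu2) ** 2 * prob[t:]).sum() / w2
        within = w1 * var1 + w2 * var2      # Equation (2)
        if within < best_var:
            best_t, best_var = t, within
    return best_t
```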

Referring to FIG. 6, when the reference value is set to 3 and FIG. 4(A) is binarized, the image is transformed as shown in FIG. 6(A). When this is applied to an actual image, the image is converted into a black-and-white binarized image as shown in FIG. 6(B). That is, the second condition determination unit 120 may convert each pixel into a binarized value based on the brightness of the classification target image, and may group pixels having the same binarized value into the same area.

Meanwhile, once the background area and the subject area have been divided through this binarization, the second condition determiner 120 determines whether the second condition is satisfied by comparing the average brightness value of the subject area with the preset reference brightness value.

Referring to FIG. 7, the second condition determination unit 120 can convert the classification target image of FIG. 7(A) into the binarized image of FIG. 7(B) through the binarization described above. In FIG. 7(B), the white part can be set as one area and the black part as another area. That is, the classification target image of FIG. 7(A) can be divided into two regions as shown in FIG. 7(B).

Then, the second condition determiner 120 can designate each area as the subject area or the background area using the position or brightness information of each area. For example, an area concentrated in the lower part of the classification target image may be designated as the subject area, and an area in the upper part may be designated as the background area. Alternatively, the area whose brightness information is relatively dark may be designated as the subject area.

When the subject area has been designated, the second condition determiner 120 calculates the average brightness value of the subject area from the classification target image of FIG. 7(A), compares it with the reference brightness value, and judges whether the second condition is satisfied. The reference brightness value may be obtained by clustering the average brightness values of the subject areas of a plurality of sample images into two groups and taking the maximum average brightness value of the group that contains the backlight sample images. That is, because the reference brightness value is determined as the maximum of the subject-area average brightness values of images classified as backlight images, it can be used as a criterion for deciding whether the classification target image can be classified as a backlight image.

As described above, when the classification of the image to be classified is finally determined, the determined image can be included in the plurality of sample images, thereby improving the accuracy of the reference brightness value.

FIG. 8 is a diagram for explaining the operation of the third condition determiner according to an embodiment of the present invention.

Referring to FIG. 8, the third condition determiner 130 may determine whether the third condition is satisfied from the relative positions of the subject area and the background area. That is, because the background region is generally located in the upper part of the image and the subject region in the lower part, the third condition is determined to be satisfied when the subject region is positioned in the lower part of the image. For example, if the subject area is located in the lower part of the image as shown in FIG. 8(A), it can be determined that the image satisfies the third condition.

Alternatively, the third condition determination unit 130 may use centers of gravity to determine the relative positions of the subject area and the background area more accurately. For example, the third condition determination unit 130 may divide the image into the subject area and the background area by binarization as shown in FIG. 8(B), and extract the center of gravity B (810) of the subject area and the center of gravity A (800) of the background area. The center of gravity represents the center point of an area in which the coordinates and the brightness information of each pixel are both considered, and can be determined using Equation (4) below.

$$C = (\bar{x}, \bar{y}) \qquad (4)$$

As shown in Equation (4), the center of gravity is determined from the coordinates of the corresponding image and is represented by a coordinate value that reflects both the coordinates and the brightness of the corresponding region.

The components of this coordinate are calculated as shown in Equation (5):

$$\bar{x} = \frac{\sum_{(x,y)\in w} x\,I(x,y)}{\sum_{(x,y)\in w} I(x,y)}, \qquad \bar{y} = \frac{\sum_{(x,y)\in w} y\,I(x,y)}{\sum_{(x,y)\in w} I(x,y)} \qquad (5)$$

In Equation (5), $w$ denotes the set of coordinates of each region, and $I(x, y)$ denotes the brightness value at each coordinate.

Hence, using Equations (4) and (5), the center-of-gravity coordinates of the subject area and the background area can be calculated.
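
A minimal sketch of Equations (4) and (5), assuming the brightness-weighted centroid form reconstructed above, is given below; the function and parameter names are illustrative.

```python
import numpy as np

def center_of_gravity(gray, region_mask):
    """Sketch of Equations (4) and (5): the brightness-weighted centroid of a
    region; region_mask marks the pixels belonging to the region w, and gray
    holds the brightness values I(x, y)."""
    ys, xs = np.nonzero(region_mask)
    weights = gray[ys, xs].astype(float)
    total = max(float(weights.sum()), 1e-9)   # guard against an all-zero region
    x_bar = float((xs * weights).sum()) / total
    y_bar = float((ys * weights).sum()) / total
    return x_bar, y_bar                       # the coordinate pair of Equation (4)
```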

The third condition determiner 130 calculates the centers of gravity 800 and 810 of each area using Equations (4) and (5) described above and, as shown in FIG. 8(B), can determine whether the third condition is satisfied using the direction of the vector from the center of gravity 800 of the background area to the center of gravity 810 of the subject area. For example, if the direction of the vector runs from the upper side toward the lower side of the image, the third condition determination unit 130 can determine that the classification target image satisfies the third condition.

Hereinafter, the determination of whether the third condition is satisfied using the vector direction will be described in more detail with reference to FIG. 9.

FIG. 9 is a diagram for explaining an operation of determining whether the third condition is satisfied using a center-of-gravity vector according to an embodiment of the present invention.

Referring to FIG. 9, the third condition determiner 130 sets a vector from the center of gravity 900 of the background area to the center of gravity 910 of the subject area, and can determine the third condition using the angle 920 that this vector forms with the X axis, taking the center of gravity 900 of the background area as the origin.

For example, the third condition determination unit 130 calculates the vector from the center of gravity 900 of the background area, taken as the reference point, to the center of gravity 910 of the subject area, and can determine that the third condition is satisfied when the angle 920 formed by the vector with the X axis is calculated as a negative value.

Alternatively, the third condition determiner 130 may determine that the third condition is satisfied when the angle 920 formed by the vector with the X axis is negative and the magnitude of that negative angle is equal to or greater than a preset reference angle. That is, if the reference angle is set to 45 degrees and the magnitude of the angle 920 formed by the vector with the X axis is 45 degrees or more, it can be determined that the third condition is satisfied. In this way, the relative position of the background area and the subject area can be checked using the reference angle, so whether the third condition is satisfied can be judged more accurately.
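
The sketch below illustrates the angle test of FIG. 9 under the assumption that image coordinates increase downward (so the sign of the vertical component is flipped to obtain a negative angle when the subject lies below the background); the 45-degree default reference angle is the example value quoted above, and the function name is illustrative.

```python
import numpy as np

def third_condition(bg_center, subject_center, reference_angle_deg=45.0):
    """Sketch of the FIG. 9 test: vector from the background center of gravity
    to the subject center of gravity; image rows grow downward, so the vertical
    component is negated to get a negative angle when the subject lies below."""
    dx = subject_center[0] - bg_center[0]
    dy = subject_center[1] - bg_center[1]      # positive dy = lower in the image
    angle = np.degrees(np.arctan2(-dy, dx))    # negative when subject is below

    return angle < 0 and abs(angle) >= reference_angle_deg
```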

As described above, the present invention provides the effect of automatically classifying whether a classification target image is a backlight image. The present invention also provides the effect of classifying a specific image as a backlight image more accurately by applying multiple overlapping conditions, and of automatically correcting a captured image by improving the contrast ratio of the image so classified.

Hereinafter, an image classification method capable of performing all the operations of the image classification apparatus of the present invention described with reference to Figs. 1 to 9 will be described.

FIG. 10 is a diagram illustrating an image classification method according to an embodiment of the present invention.

The image classification method of the present invention may include: a first condition determination step of determining whether a first condition is satisfied based on the histogram form of an image to be classified; a second condition determination step of dividing the classification target image into a subject area and a background area and determining whether a second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value; a third condition determination step of determining whether a third condition is satisfied based on the position of the subject area or the background area of the classification target image; and an image classification step of classifying the image as a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.

In FIG. 10, the first condition to the third condition are shown and described in order, but this is for convenience of explanation, and the order of judgment of the first condition to the third condition may be changed.

Referring to FIG. 10, the image classification method of the present invention may include a first condition determination step of determining whether a first condition is satisfied based on a histogram form of an image to be classified (S1000).

For example, the first condition determination step may determine whether the classification target image satisfies the first condition using the shape of the histogram. Specifically, the first condition determination step exploits one of the characteristic features of a backlight image, namely a pronounced brightness difference between the subject and the background, and determines from the histogram shape whether the first condition is satisfied.

For example, the first condition determination step may determine that the first condition is satisfied when the histogram has a bimodal form. To this end, the first condition determination step may calculate a unimodality discrimination value indicating the degree of unimodality of the histogram of the classification target image and compare it with a preset reference value to determine whether the first condition is satisfied. The unimodality discrimination value indicates how close the histogram is to a unimodal shape; the closer the histogram is to a unimodal shape, the closer the value is to 1. That is, the first condition determination step may determine that the classification target image satisfies the first condition when the unimodality discrimination value of the histogram is equal to or less than the preset reference value. When the classification target image is a backlight image, the bright portion and the dark portion are clearly contrasted, so that one group of pixels forms around a low brightness value of the histogram and another group forms around a high brightness value; that is, the histogram of a backlight image has a bimodal form.

The image classification method may include a second condition determination step of dividing the classification target image into a subject area and a background area and determining whether the second condition is satisfied by comparing the average brightness value of the subject area with a preset reference brightness value (S1010).

The second condition determination step may divide the classification target image into a subject area and a background area. The second condition determination step may calculate the average brightness value of the divided subject area and compare it with the preset reference brightness value to determine whether the second condition is satisfied. For example, in the second condition determination step, each pixel is converted into a binarized value using the brightness information of each pixel of the classification target image, and one or more pixels having the same binarized value are grouped into the same area, thereby distinguishing the subject area and the background area. That is, in the second condition determination step, the brightness information of each pixel constituting the classification target image is extracted, and the brightness of each pixel can be binarized using a reference value for binarizing the classification target image. For example, the second condition determination step can binarize based on the brightness of each pixel using the Otsu binarization technique.

In the second condition determination step, if the classification target image has been divided into two areas, whether the second condition is satisfied can be determined by comparing the preset reference brightness value with the average brightness value of the subject area. In one example, the reference brightness value may be set by clustering the average brightness values of the subject areas of a plurality of sample images into two groups and taking the maximum average brightness value in the group that contains the backlight sample images. That is, the reference brightness value may be set to the maximum average brightness value of the subject areas of the sample images that are classified as backlight images. Accordingly, the reference brightness value can be set more precisely as the number of sample images increases, and the reference brightness value can be updated by adding an image classified as a backlight image to the above-described sample images.

The second condition determination step may determine that the classification target image satisfies the second condition when the average brightness value of the subject area of the classification target image is equal to or less than the reference brightness value. That is, if the subject-area average brightness value of the classification target image is lower than the subject-area average brightness values of the sample images classified as backlight images, it can be determined that the classification target image satisfies the second condition for a backlight image.

Meanwhile, to calculate the reference brightness value, both backlight image and normal (non-backlight) image samples can be used, and the average brightness values of the subject areas of the sample images can be divided into two groups through neural-network clustering. The maximum value of the group containing the backlight images can then be set as the reference brightness value.

Meanwhile, the image classification method may include a third condition determination step of determining whether the third condition is satisfied based on the position of the subject area or the background area of the classification target image (S1020).

In the third condition determination step, whether the third condition is satisfied can be determined based on the positions of the subject area and the background area separated from the classification target image. For example, the third condition determination step may determine that the third condition is satisfied when the subject area is located in the lower part of the classification target image. Specifically, the third condition determination step may determine whether the classification target image satisfies the third condition based on the relative positions of the background area and the subject area. For example, in the third condition determination step, the center of gravity of the subject area and the center of gravity of the background area are respectively calculated, and whether the third condition is satisfied can be determined using the vector connecting the center of gravity of the background area to the center of gravity of the subject area. For this purpose, the classification target image can be handled with the upper and lower sides of the image distinguished along the Y axis. That is, in order to determine whether the subject area is positioned lower than the background area, the top and bottom of the classification target image can be distinguished from the start.

The center of gravity of each of the subject area and the background area can be calculated from the coordinates of the pixels constituting the area and the brightness value of each pixel; the center of gravity is the center point of the corresponding area, considering both the position and the brightness of the pixels constituting it. Therefore, the center point of the background area and the center point of the subject area are each determined as a single position. Accordingly, in the third condition determination step, the vector from the center of gravity of the background area to the center of gravity of the subject area is calculated, and the angle formed by this vector with the X axis is used to determine whether the third condition is satisfied.

Meanwhile, the image classification method may include an image classification step of classifying the classification target image into a backlight image when all of the first condition, the second condition, and the third condition are satisfied (S1030).

The image classification step may classify the image as a backlight image when the classification target image satisfies all three of the conditions described above. Alternatively, depending on the setting, the image classification step may classify the image as a backlight image when two of the three conditions are satisfied. Alternatively, depending on how the criteria of the conditions are defined, the image classification step may classify the classification target image as a backlight image when none of the three conditions is satisfied. For example, when the criterion of each condition is defined so that a backlight image satisfies the first to third conditions, the image classification step classifies the classification target image as a backlight image if it satisfies the first to third conditions. Conversely, when the criterion of each condition is defined so that a backlight image fails the first to third conditions, the image classification step may classify the classification target image as a backlight image if it fails all of the first to third conditions. Alternatively, when the criteria of the first to third conditions are defined differently from one another, the image classification step may judge the classification target image using the respective condition criteria.

In addition, the image classification method may include some or all of the operations of the present invention described with reference to FIGS. 1 to 9, and the order may be changed or some steps may be omitted as necessary.

While the present invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. That is, within the scope of the present invention, all of the components may be selectively combined and operated in one or more combinations. The foregoing description is merely illustrative of the technical idea of the present invention, and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents thereof should be construed as falling within the scope of the present invention.

Claims (9)

A first condition determiner for determining whether a first condition is satisfied based on a histogram form of an image to be classified;
A second condition determiner for determining whether the second condition is satisfied according to a result of comparing the average brightness value of the subject area with a preset reference brightness value by dividing the classification target image into a subject area and a background area;
A third condition determiner for calculating the center of gravity of the subject area and the center of gravity of the background area of the classification target image, and determining whether a third condition is satisfied using a vector connecting the center of gravity of the background area to the center of gravity of the subject area; and
And an image classifying unit classifying the image into a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.
The apparatus according to claim 1,
wherein the first condition determination unit
judges that the classification target image satisfies the first condition when the histogram has a bimodal form.
The apparatus according to claim 2,
wherein the first condition determination unit
calculates a unimodality discrimination value indicating the degree of unimodality of the histogram and compares the unimodality discrimination value with a preset reference value,
and judges that the classification target image satisfies the first condition when the unimodality discrimination value is equal to or less than the preset reference value.
The apparatus according to claim 1,
wherein the second condition determination unit
converts each pixel into a binarized value using brightness information of each pixel of the classification target image,
and groups one or more pixels having the same binarized value into the same area to distinguish the subject area and the background area.
The apparatus according to claim 1,
wherein the reference brightness value is set by
clustering the average brightness values of the subject areas of a plurality of sample images into two groups,
and is the maximum average brightness value included in the backlight group, that is, the one of the two groups that contains the backlight sample images.
The apparatus according to claim 1,
wherein the third condition determination unit
judges that the third condition is satisfied when the subject area is located in the lower part of the classification target image.
delete
The apparatus according to claim 1,
wherein the third condition determination unit
calculates, with respect to the center of gravity of the background area, the angle formed by the vector with the X axis as a negative angle,
and determines that the third condition is satisfied when the magnitude of the negative angle is equal to or greater than a preset reference angle.
A first condition determining step of determining whether a first condition is satisfied based on a histogram form of an image to be classified;
A second condition determination step of determining whether the second condition is satisfied according to a result of comparing the average brightness value of the subject area with a preset reference brightness value by dividing the classification target image into a subject area and a background area;
A third condition determination step of calculating the center of gravity of the subject area and the center of gravity of the background area of the classification target image, and determining whether a third condition is satisfied using a vector connecting the center of gravity of the background area to the center of gravity of the subject area; and
And classifying the image into a backlight image when the classification target image satisfies all of the first condition, the second condition, and the third condition.
KR1020160012322A 2016-02-01 2016-02-01 Method and Apparatus for image classification KR101801266B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160012322A KR101801266B1 (en) 2016-02-01 2016-02-01 Method and Apparatus for image classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160012322A KR101801266B1 (en) 2016-02-01 2016-02-01 Method and Apparatus for image classification

Publications (2)

Publication Number Publication Date
KR20170091824A KR20170091824A (en) 2017-08-10
KR101801266B1 (en) 2017-11-28

Family

ID=59652135

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160012322A KR101801266B1 (en) 2016-02-01 2016-02-01 Method and Apparatus for image classification

Country Status (1)

Country Link
KR (1) KR101801266B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11049224B2 (en) * 2018-12-05 2021-06-29 Microsoft Technology Licensing, Llc Automated real-time high dynamic range content review system
WO2021125472A1 (en) * 2019-12-16 2021-06-24 김현기 Online mission gaming device, online mission game system, and control method for online mission game system
KR102528196B1 (en) * 2021-09-23 2023-05-03 한국자동차연구원 Camera control system for responding to backlight based on camera angle adjustment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101146015B1 (en) 2010-09-20 2012-05-15 숭실대학교산학협력단 Image processing method for backlight image judgment and image compensation on cell phone camera
JP2013042207A (en) * 2011-08-11 2013-02-28 Canon Inc Imaging apparatus, control method therefor, and control program
JP2015172889A (en) 2014-03-12 2015-10-01 富士通株式会社 detection device, detection method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101146015B1 (en) 2010-09-20 2012-05-15 숭실대학교산학협력단 Image processing method for backlight image judgment and image compensation on cell phone camera
JP2013042207A (en) * 2011-08-11 2013-02-28 Canon Inc Imaging apparatus, control method therefor, and control program
JP2015172889A (en) 2014-03-12 2015-10-01 富士通株式会社 detection device, detection method, and program

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Efficient Color Correction of Backlight Images Using FCM, Proceedings of the Korean Society of Computer and Information Winter Conference, Vol. 19, No. 1, January 2011
Road Sign Detection with Weather/Illumination Classifications and Adaptive Color Models in Various Road Images, KIPS Transactions on Software and Data Engineering, Vol. 4, No. 11, November 2015
Color Restoration of Backlight Images Using Brightness Enhancement and Retinex, Proceedings of the 2015 IEIE Summer Conference, pp. 794-797, June 2015

Also Published As

Publication number Publication date
KR20170091824A (en) 2017-08-10

Similar Documents

Publication Publication Date Title
US11282185B2 (en) Information processing device, information processing method, and storage medium
US10565479B1 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
Jagadev et al. Detection of leukemia and its types using image processing and machine learning
CN107610114B (en) optical satellite remote sensing image cloud and snow fog detection method based on support vector machine
WO2017190574A1 (en) Fast pedestrian detection method based on aggregation channel features
EP3633605A1 (en) Information processing device, information processing method, and program
US9684958B2 (en) Image processing device, program, image processing method, computer-readable medium, and image processing system
CN109101924A (en) A kind of pavement marking recognition methods based on machine learning
US20060204082A1 (en) Fusion of color space data to extract dominant color
US9501823B2 (en) Methods and systems for characterizing angle closure glaucoma for risk assessment or screening
US20100040276A1 (en) Method and apparatus for determining a cell contour of a cell
CN104636754B (en) Intelligent image sorting technique based on tongue body subregion color characteristic
US9552528B1 (en) Method and apparatus for image binarization
CN111242899B (en) Image-based flaw detection method and computer-readable storage medium
CN101882223B (en) Assessment method of human body complexion
CN104318225A (en) License plate detection method and device
KR101801266B1 (en) Method and Apparatus for image classification
CN109492544B (en) Method for classifying animal fibers through enhanced optical microscope
CN108563976A (en) A kind of multi-direction vehicle color identification method based on window locations
KR101343623B1 (en) adaptive color detection method, face detection method and apparatus
Khot et al. Optimal computer based analysis for detecting malarial parasites
KR101199959B1 (en) System for reconnizaing road sign board of image
US10146042B2 (en) Image processing apparatus, storage medium, and image processing method
JP2014186710A (en) Shiitake mushroom grading apparatus using image processing technology
Funt et al. Removing outliers in illumination estimation

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
AMND Amendment
E601 Decision to refuse application
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant