WO2014175477A1 - Apparatus and method for processing image - Google Patents

Apparatus and method for processing image

Info

Publication number
WO2014175477A1
WO2014175477A1 (WO 2014/175477 A1) · PCT/KR2013/003492
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
difference
images
filter
Prior art date
Application number
PCT/KR2013/003492
Other languages
French (fr)
Inventor
Shounan An
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Priority to PCT/KR2013/003492 priority Critical patent/WO2014175477A1/en
Publication of WO2014175477A1 publication Critical patent/WO2014175477A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/248Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2504Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches

Definitions

  • the present disclosure relates to an apparatus and method for processing an image, and more particularly, to an apparatus and method for extracting characteristic points in an image.
  • a method of extracting characteristic points from the image or the video to recognize the object is used.
  • a Difference-of-Gaussian (DoG) based characteristic point extracting method is used in extracting characteristic points from the image and the video.
  • DoG is an abbreviation of Difference-of-Gaussian.
  • the DoG based characteristic point extracting method is based on a Gaussian function in filtering an image or a video.
  • this DoG based characteristic point extracting method performs characteristic point extraction well in a high contrast region of an image, but has a limitation in extracting characteristic points in a low contrast region because edges are not preserved.
  • Embodiments provide an apparatus and method for processing an image which easily extracts characteristic points in a low contrast region of an image.
  • an apparatus for processing an image comprises: a plurality of filters for generating a plurality of filtered images, respectively, wherein the plurality of filters correspond to a plurality of filter intensities, respectively, and each of the plurality of filters filters an input image on the basis of a guide image according to the corresponding filter intensity to generate a corresponding filtered image; a plurality of difference image generators for generating a plurality of difference images on the basis of the plurality of filtered images; and a characteristic point determining unit for determining characteristic points in the input image on the basis of the plurality of difference images.
  • a method of processing an image comprising: filtering an input image by using a plurality of filter intensities on the basis of a guide image to generate a plurality of filtered images, wherein the plurality of filtered images corresponds to the plurality of filter intensities, respectively; generating a plurality of difference images on the basis of the plurality of the filtered images; and determining characteristic points in the input image on the basis of the plurality of difference images.
  • Fig. 1 is a block diagram of a video comparison apparatus according to embodiments.
  • Fig. 2 is a block diagram of a characteristic point detector according to embodiments.
  • Fig. 3 is a block diagram of a characteristic point extractor according to embodiments.
  • Fig. 5 illustrates a portion of difference images according to embodiments.
  • Fig. 1 is a block diagram of a video comparison apparatus according to embodiments.
  • the video comparison apparatus 100 includes a camera 110, a reference image storage 120, two characteristic point detectors 130, two descriptor collecting units 140, a matcher 150, and a matching use unit 160.
  • the elements illustrated in Fig. 1 are not essential, and thus the video comparison apparatus 100 may include more or fewer elements than those in Fig. 1.
  • the video comparison apparatus 100 may obtain a comparison target image C through the camera 110.
  • a video comparison apparatus 100 may obtain a comparison target image C from a video file.
  • the reference image storage 120 stores a reference image R. This reference image R may be generated through the camera 110.
  • the two characteristic point detectors 130 respectively correspond to the comparison image C and the reference image R.
  • the characteristic point detector 130 corresponding to the comparison target image C detects a plurality of characteristic points in the comparison target image C.
  • the characteristic point detector 130 corresponding to the reference image R detects a plurality of characteristic points in the reference image R.
  • the two descriptor collectors 140 respectively correspond to the comparison image C and the reference image R.
  • the descriptor collector 140 corresponding to the comparison target image C collects from the target image C a plurality of descriptors which respectively correspond to the plurality of characteristic points in the comparison target image C.
  • the descriptor collector 140 corresponding to the reference image R collects from the reference image R a plurality of descriptors which respectively correspond to the plurality of characteristic points in the reference image R.
  • the descriptor may include characteristic point coordinates, characteristic point color values, color values of pixels surrounding the characteristic points, and the like.
  • the matcher 150 extracts one or more matching points on the basis of similarities between the plurality of descriptors of the comparison target image C and the plurality of descriptors of the reference image R.
  • the matching use unit 160 may recognize an object on the basis of the matching points. Specifically, the matching use unit 160 may compare the matching points extracted by the matcher 150 with information stored in a memory (not shown). In the memory, matching points of a predetermined object may be stored in advance. The matching use unit 160 compares the matching points extracted by the matcher 150 with the matching points of the predetermined object stored in the memory. The matching use unit 160 may search the memory for matching points of an object that coincide with the matching points extracted by the matcher 150, and thereby determine which object image the comparison target image C corresponds to.
  • Fig. 2 is a block diagram of the characteristic point detector 130 according to embodiments.
  • the characteristic point detector 130 receives the comparison target image C or the reference image R.
  • the characteristic point detector 130 corresponding to the comparison target image C will be described, but the characteristic point detector 130 corresponding to the reference image R may also have identical or similar structure to the characteristic point detector 130 corresponding to the comparison target image C.
  • the characteristic point detector 130 includes an input image obtaining unit 210, a guide image obtaining unit 220, a plurality of characteristic point outputting units 230, and a characteristic point collector 240.
  • the elements illustrated in Fig. 2 are not essential, and thus the characteristic point detector 130 may include more or fewer elements than those in Fig. 2.
  • the input image obtaining unit 210 obtains an input image from the comparison target image C.
  • the input image according to an embodiment may be identical to, or different from the comparison target image C.
  • the guide image obtaining unit 220 obtains a guide image I.
  • the guide image I may be identical to, or different from the input image P.
  • the video comparison apparatus 100 may include one or more cameras 110. When the video comparison apparatus 100 includes a plurality of cameras 110, any one camera 110 may capture the input image P and another camera 110 may capture the guide image I. In this case, the camera 110 capable of capturing the guide image I may have higher performance than the camera 110 capable of capturing the input image P.
  • the plurality of characteristic point outputting units 230 correspond to a plurality of image sizes.
  • the plurality of image sizes may be different from each other.
  • one of the plurality of image sizes may be the same as that of the input image P, and the rest of the plurality of image sizes may be smaller than that of the input image P.
  • the plurality of image sizes may correspond to a plurality of magnifying powers which are equal to or smaller than 1.
  • Each of the characteristic point outputting units 230 includes two resizers 231 and a characteristic point extractor 233.
  • the elements illustrated in Fig. 2 are not essential, and thus the characteristic point outputting unit 230 may include more or fewer elements than those in Fig. 2.
  • Two resizers 231 correspond respectively to the input image and the guide image.
  • the resizer 231 corresponding to the input image resizes the input image size to a corresponding image size to generate a resized input image P R .
  • the resizer 231 corresponding to the guide image I resizes the guide image size to a corresponding image size to generate a resized guide image I R .
  • the characteristic point extractor 233 extracts a characteristic point group for a corresponding image size by using the resized input image P R and the resized guide image I R .
  • the plurality of characteristic point outputting units 230 output a plurality of characteristic point groups respectively corresponding to the plurality of image sizes.
  • the characteristic point collector 240 collects a plurality of characteristic point groups respectively corresponding to the plurality of image sizes to determine the plurality of characteristic points of the comparison target image C.
  • Fig. 3 is a block diagram of the characteristic point extractor 233 according to the embodiment.
  • the characteristic point extractor 233 includes a plurality of filters 310, a plurality of difference image generators 320, and a characteristic point determining unit 330.
  • the characteristic point extractor 233 may include N filters 310, (N-1) difference image generators 320, and one characteristic point determining unit 330.
  • the elements illustrated in Fig. 3 are not essential, and thus the characteristic point extractor 233 may include more or fewer elements than those in Fig. 3.
  • the N filters 310 respectively correspond to N filter intensities, and each of the filters 310 filters the resized input image P R on the basis of the resized guide image I R according to the corresponding filter intensity to generate a filtered image.
  • the N filters 310 generate N filtered images.
  • the N filters 310 are arranged in an ascending order or in a descending order, according to the N filter intensities. The filters 310 will be described later.
  • the plurality of N filter intensities may be determined by a combination of r (a window radius) and ε (a normalization parameter).
  • Table 1 shows the filter intensities according to combination of r and ⁇ .
  • the filter 310 filters the resized input image P R on the basis of the resized guide image I R .
  • the filter 310 performs filtering according to the following equation 1.
  • q i denotes a filtered value of a pixel at position i in the resized input image P R .
  • I i denotes a pixel value at position i in the resized guide image I R .
  • W k may indicate a square window whose window radius is r, namely, whose side length is 2r.
  • W k includes a pixel at position i, and |W| denotes the number of pixels in W k .
  • the number of windows W k including a pixel at position i may be plural.
  • the number of windows W k including a pixel at position i may be the same as the number of pixels in W k .
  • This may be also applied to the resized guide image I R . That is, the number of the resized guide images I R may be the same as the number of pixels in W k .
  • Equation 1 may be represented as equation 2.
  • a k may be represented as equation 3.
  • u k denotes an average of values of all pixels included in k-th window W k in the resized guide image I R .
  • σ k 2 denotes the variance of values of all pixels included in the k-th window W k in the resized guide image I R .
  • ε is a normalization parameter for preventing a k from becoming too large.
  • a k may be obtained by summing, over all pixels included in a predetermined window W k , the products of corresponding pixel values of the resized guide image I R and the resized input image P R , dividing the summed result by |W| (the number of pixels in W k ), subtracting the product of u k and P k from the divided result, and dividing the subtracted result by the sum of σ k 2 and ε.
  • P k in equation 3 may be represented as equation 4.
  • P k is a value obtained by summing values of all pixels included in a predetermined window W k in the resized input image P R and dividing the summed value by |W|, namely, the number of pixels in W k .
  • b k may be represented as equation 5.
  • b k is a value obtained by subtracting the product of a k and u k from P k .
  • the filter 310 may obtain the filtered input image P p by obtaining filtered values q i for all the pixels in the resized input image P R .
  • Each difference image generator 320 generates a difference image between a filtered image from a corresponding filter 310 and a filtered image from a filter 310 following the corresponding filter 310.
  • the first difference image generator 320 receives images from the first and second filters 310
  • the second difference image generator 320 receives images from the second filter 310 and a third filter 310
  • the (N-1)th difference image generator 320 receives images from the (N-1)th filter 310 and an N-th filter 310.
  • the difference image generator 320 generates a difference image by using the two received filtered images, namely, the input images P p filtered by the filter 310 corresponding to the difference image generator 320 and by the filter 310 following it.
  • the difference image generator 320 obtains the difference between the two filtered input images P p : the input image P p filtered by the following filter 310 is subtracted from the input image P p filtered by the corresponding filter 310.
  • each of the plurality of difference image generators 320 generates one difference image.
  • the characteristic point determining unit 330 receives the plurality of difference images output from the plurality of difference image generators 320, determines the presence of characteristic points, and outputs the result.
  • the characteristic point determining unit 330 compares a portion or all of the (N-1) difference images with each other to output the characteristic points.
  • Fig. 5 illustrates a portion of the difference images according to embodiments.
  • the characteristic point determining unit 330 compares the value of a pixel 510 at a position m, for which the presence of a characteristic point is to be confirmed, with the values of the pixels surrounding the pixel 510, and determines whether the value of the pixel 510 is greater than, or smaller than, all of the surrounding pixel values.
  • When the value of the pixel 510 is neither greater than all nor smaller than all of the surrounding pixel values, the characteristic point determining unit 330 does not determine the pixel 510 to be a characteristic point. When the value of the pixel 510 is greater than, or smaller than, all of the surrounding pixel values, the characteristic point determining unit 330 compares the value of the pixel 510 with the values of the pixel at position m and its surrounding pixels in a previous difference image 502.
  • When the value of the pixel 510 is neither greater than all nor smaller than all of the values of the pixel at position m and the surrounding pixels in the previous difference image 502, the characteristic point determining unit 330 does not determine the pixel 510 to be a characteristic point. When the value of the pixel 510 is greater than, or smaller than, all of those values, the characteristic point determining unit 330 compares the value of the pixel 510 again with the values of the pixel at position m and its surrounding pixels in a next difference image 503.
  • When the value of the pixel 510 is neither greater than all nor smaller than all of the values of the pixel at position m and the surrounding pixels in the next difference image 503, the characteristic point determining unit 330 does not determine the pixel 510 to be a characteristic point. When the value of the pixel 510 is greater than, or smaller than, all of those values, the characteristic point determining unit 330 compares the value of the pixel 510 again with the values of the pixel at position m and its surrounding pixels in a difference image prior to the previous difference image 502.
  • the characteristic point determining unit 330 compares the value of the pixel 510 with the values of the pixels at position m and the surrounding pixels in all the difference images. As a result, when the value of the pixel 510 is greater than, or smaller than, all of those values in all the difference images, the characteristic point determining unit 330 determines the pixel 510 to be a characteristic point.
  • the characteristic point determining unit 330 performs this determination for pixels at all positions in the difference images to detect a plurality of characteristic points in the difference images.
  • the configuration and the method according to the embodiments are not applied in a limited manner; rather, a portion or all of the embodiments can be selectively combined to achieve various modifications.
  • characteristic points can be easily extracted in a low contrast region on the basis of a combination of r (a window radius) and ⁇ (a normalization parameter) in an image.
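As a concrete illustration of the neighborhood comparison described in the bullets above, the following sketch checks whether a candidate pixel is strictly greater than, or strictly smaller than, every neighbor at position m and its surroundings. The helper name is hypothetical, difference images are plain 2-D lists, and the check is restricted to three adjacent difference images for brevity (the patent extends it to all difference images).

```python
def is_extremum(prev_img, cur_img, next_img, y, x):
    """Return True when the pixel at (y, x) in cur_img is strictly greater
    than, or strictly smaller than, every pixel in the 3x3 neighbourhoods
    at the same position in the previous, current, and next difference
    images (the candidate pixel itself excluded)."""
    center = cur_img[y][x]
    neighbors = []
    for img in (prev_img, cur_img, next_img):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if img is cur_img and dy == 0 and dx == 0:
                    continue  # skip the candidate pixel itself
                neighbors.append(img[y + dy][x + dx])
    # Neither strictly greater than all nor strictly smaller than all
    # neighbours -> not a characteristic point.
    return all(center > v for v in neighbors) or all(center < v for v in neighbors)
```

A pixel whose value merely ties a neighbor is rejected, which matches the "neither greater nor smaller" branch above.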

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image processing apparatus filters an input image by using a plurality of filter intensities on the basis of a guide image to generate a plurality of filtered images. The plurality of filtered images corresponds to the plurality of filter intensities, respectively. The image processing apparatus generates a plurality of difference images on the basis of the plurality of the filtered images and determines characteristic points in the input image on the basis of the plurality of difference images.

Description

APPARATUS AND METHOD FOR PROCESSING IMAGE
The present disclosure relates to an apparatus and method for processing an image, and more particularly, to an apparatus and method for extracting characteristic points in an image.
Recently, in recognizing an object in an image or a video captured by using a camera or a smartphone, a method of extracting characteristic points from the image or the video to recognize the object is used. In particular, in the related art method, a Difference-of-Gaussian (DoG) based characteristic point extracting method is used in extracting characteristic points from the image and the video.
The DoG based characteristic point extracting method is based on a Gaussian function in filtering an image or a video.
However, this DoG based characteristic point extracting method performs characteristic point extraction well in a high contrast region of an image, but has limitation in extracting characteristic points in a low contrast region due to the nonpreservation of edges.
Embodiments provide an apparatus and method for processing an image which easily extracts characteristic points in a low contrast region of an image.
In one embodiment, an apparatus for processing an image comprises: a plurality of filters for generating a plurality of filtered images, respectively, wherein the plurality of filters correspond to a plurality of filter intensities, respectively, and each of the plurality of filters filters an input image on the basis of a guide image according to the corresponding filter intensity to generate a corresponding filtered image; a plurality of difference image generators for generating a plurality of difference images on the basis of the plurality of filtered images; and a characteristic point determining unit for determining characteristic points in the input image on the basis of the plurality of difference images.
In another embodiment, a method of processing an image comprising: filtering an input image by using a plurality of filter intensities on the basis of a guide image to generate a plurality of filtered images, wherein the plurality of filtered images corresponds to the plurality of filter intensities, respectively; generating a plurality of difference images on the basis of the plurality of the filtered images; and determining characteristic points in the input image on the basis of the plurality of difference images.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Fig. 1 is a block diagram of a video comparison apparatus according to embodiments.
Fig. 2 is a block diagram of a characteristic point detector according to embodiments.
Fig. 3 is a block diagram of a characteristic point extractor according to embodiments.
Fig. 4 illustrates Wk, assuming r=2.
Fig. 5 illustrates a portion of difference images according to embodiments.
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings.
An apparatus and method for processing an image according to an embodiment will be described in detail with reference to the accompanying drawings. The invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, alternate embodiments falling within the spirit and scope of the present disclosure can easily be derived through additions, alterations, and changes, and this disclosure will fully convey the concept of the invention to those skilled in the art.
Fig. 1 is a block diagram of a video comparison apparatus according to embodiments.
Referring to Fig. 1, the video comparison apparatus 100 according to embodiments will be described.
The video comparison apparatus 100 according to embodiments includes a camera 110, a reference image storage 120, two characteristic point detectors 130, two descriptor collecting units 140, a matcher 150, and a matching use unit 160. The elements illustrated in Fig. 1 are not essential, and thus the video comparison apparatus 100 may include more or fewer elements than those in Fig. 1.
The video comparison apparatus 100 according to embodiments may obtain a comparison target image C through the camera 110. A video comparison apparatus 100 according to another embodiment may obtain a comparison target image C from a video file.
The reference image storage 120 stores a reference image R. This reference image R may be generated through the camera 110.
The two characteristic point detectors 130 respectively correspond to the comparison image C and the reference image R. The characteristic point detector 130 corresponding to the comparison target image C detects a plurality of characteristic points in the comparison target image C. The characteristic point detector 130 corresponding to the reference image R detects a plurality of characteristic points in the reference image R.
The two descriptor collectors 140 respectively correspond to the comparison image C and the reference image R. The descriptor collector 140 corresponding to the comparison target image C collects from the target image C a plurality of descriptors which respectively correspond to the plurality of characteristic points in the comparison target image C. The descriptor collector 140 corresponding to the reference image R collects from the reference image R a plurality of descriptors which respectively correspond to the plurality of characteristic points in the reference image R. At this time, the descriptor may include characteristic point coordinates, characteristic point color values, color values of pixels surrounding the characteristic points, and the like.
The matcher 150 extracts one or more matching points on the basis of similarities between the plurality of descriptors of the comparison target image C and the plurality of descriptors of the reference image R.
The matching use unit 160 may recognize an object on the basis of the matching points. Specifically, the matching use unit 160 may compare the matching points extracted by the matcher 150 with information stored in a memory (not shown). In the memory, matching points of a predetermined object may be stored in advance. The matching use unit 160 compares the matching points extracted by the matcher 150 with the matching points of the predetermined object stored in the memory. The matching use unit 160 may search the memory for matching points of an object that coincide with the matching points extracted by the matcher 150, and thereby determine which object image the comparison target image C corresponds to.
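A minimal sketch of the similarity-based extraction performed by the matcher 150 follows. Descriptors are assumed to be plain numeric tuples and Euclidean distance is assumed as the similarity measure; the patent does not fix a particular measure, and `match_descriptors` and `max_dist` are hypothetical names.

```python
import math

def match_descriptors(desc_c, desc_r, max_dist=0.5):
    """Brute-force matching between the descriptor lists of the comparison
    target image C and the reference image R: for each descriptor of C,
    find the nearest descriptor of R and keep the pair if the distance is
    below a threshold (an assumed acceptance criterion)."""
    matches = []
    for i, d in enumerate(desc_c):
        # Index of the reference descriptor nearest to d.
        best_j = min(range(len(desc_r)), key=lambda j: math.dist(d, desc_r[j]))
        if math.dist(d, desc_r[best_j]) <= max_dist:
            matches.append((i, best_j))  # (index in C, index in R)
    return matches
```

In practice a descriptor would bundle the characteristic point coordinates and surrounding color values mentioned above into one such vector.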
Fig. 2 is a block diagram of the characteristic point detector 130 according to embodiments.
Referring to Fig. 2, the characteristic point detector 130 according to embodiments will be described.
As described above, the characteristic point detector 130 receives the comparison target image C or the reference image R. Hereinafter, the characteristic point detector 130 corresponding to the comparison target image C will be described, but the characteristic point detector 130 corresponding to the reference image R may also have identical or similar structure to the characteristic point detector 130 corresponding to the comparison target image C.
The characteristic point detector 130 includes an input image obtaining unit 210, a guide image obtaining unit 220, a plurality of characteristic point outputting units 230, and a characteristic point collector 240. The elements illustrated in Fig. 2 are not essential, and thus the characteristic point detector 130 may include more or fewer elements than those in Fig. 2.
The input image obtaining unit 210 obtains an input image from the comparison target image C. The input image according to an embodiment may be identical to, or different from the comparison target image C.
The guide image obtaining unit 220 obtains a guide image I. In an embodiment, the guide image I may be identical to, or different from the input image P. In another embodiment, the video comparison apparatus 100 may include one or more cameras 110. When the video comparison apparatus 100 includes a plurality of cameras 110, any one camera 110 may capture the input image P and another camera 110 may capture the guide image I. In this case, the camera 110 capable of capturing the guide image I may have higher performance than the camera 110 capable of capturing the input image P.
The plurality of characteristic point outputting units 230 correspond to a plurality of image sizes. At this time, the plurality of image sizes may be different from each other. In addition, one of the plurality of image sizes may be the same as that of the input image P, and the rest of the plurality of image sizes may be smaller than that of the input image P. The plurality of image sizes may correspond to a plurality of magnifying powers which are equal to or smaller than 1.
Each of the characteristic point outputting units 230 includes two resizers 231 and a characteristic point extractor 233. The elements illustrated in Fig. 2 are not essential, and thus the characteristic point outputting unit 230 may include more or fewer elements than those in Fig. 2.
Two resizers 231 correspond respectively to the input image and the guide image. The resizer 231 corresponding to the input image resizes the input image size to a corresponding image size to generate a resized input image PR. The resizer 231 corresponding to the guide image I resizes the guide image size to a corresponding image size to generate a resized guide image IR. The characteristic point extractor 233 extracts a characteristic point group for a corresponding image size by using the resized input image PR and the resized guide image IR.
In this manner, the plurality of characteristic point outputting units 230 output a plurality of characteristic point groups respectively corresponding to the plurality of image sizes.
The characteristic point collector 240 collects the plurality of characteristic point groups respectively corresponding to the plurality of image sizes to determine the plurality of characteristic points of the comparison target image C.
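The resizing performed by the resizers 231 for each image size can be sketched as follows. Nearest-neighbour sampling and the particular scale set are illustrative assumptions; the patent only requires magnifying powers equal to or smaller than 1.

```python
def resize_nearest(img, scale):
    """Nearest-neighbour resize of a 2-D list by a magnifying power <= 1
    (a simple stand-in for whatever interpolation the resizers 231 use)."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    return [[img[int(y / scale)][int(x / scale)] for x in range(nw)]
            for y in range(nh)]

def build_scale_pairs(input_img, guide_img, scales=(1.0, 0.5, 0.25)):
    """For each image size, produce the pair (resized input P_R, resized
    guide I_R) that one characteristic point outputting unit 230 consumes."""
    return [(resize_nearest(input_img, s), resize_nearest(guide_img, s))
            for s in scales]
```

The first scale leaves the input unchanged, matching the statement that one of the image sizes may equal that of the input image P.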
Hereinafter, referring to Fig. 3, the characteristic point extractor 233 according to an embodiment will be described.
Fig. 3 is a block diagram of the characteristic point extractor 233 according to the embodiment.
The characteristic point extractor 233 according to the embodiment includes a plurality of filters 310, a plurality of difference image generators 320, and a characteristic point determining unit 330. For example, the characteristic point extractor 233 may include N filters 310, (N-1) difference image generators 320, and one characteristic point determining unit 330. However, the elements illustrated in Fig. 3 are not essential, and thus the characteristic point extractor 233 may include more or fewer elements than those in Fig. 3.
The N filters 310 respectively correspond to N filter intensities, and each of the filters 310 filters the resized input image PR on the basis of the resized guide image IR according to the corresponding filter intensity to generate a filtered image. The N filters 310 generate N filtered images. The N filters 310 are arranged in ascending or descending order of the N filter intensities. The filters 310 will be described later.
The plurality of N filter intensities may be determined by combination of r (a window radius) and ε(a normalization parameter).
Table 1 shows the filter intensities according to combination of r and ε.
Table 1
Filter intensity   Window radius (r)   Normalization parameter (ε)
0                  (the input is output without change)
1                  2                   0.1²
2                  4                   0.1²
3                  8                   0.1²
4                  2                   0.2²
5                  4                   0.2²
6                  8                   0.2²
7                  2                   0.4²
8                  4                   0.4²
9                  8                   0.4²
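Read as data, Table 1 maps each filter intensity to an (r, ε) pair. In this sketch the ε entries are interpreted as squared constants (0.1², 0.2², 0.4²), which is an assumption about how the printed values are to be read; the dictionary and function names are illustrative.

```python
# Table 1 as data: filter intensity -> (window radius r, normalization
# parameter eps).  Intensity 0 outputs the input without change.
# The eps values are read here as squared constants, an assumption.
FILTER_BANK = {
    0: None,
    1: (2, 0.1 ** 2), 2: (4, 0.1 ** 2), 3: (8, 0.1 ** 2),
    4: (2, 0.2 ** 2), 5: (4, 0.2 ** 2), 6: (8, 0.2 ** 2),
    7: (2, 0.4 ** 2), 8: (4, 0.4 ** 2), 9: (8, 0.4 ** 2),
}

def filter_params(intensity):
    """Return the (r, eps) pair for a filter intensity, or None for the
    pass-through filter of intensity 0."""
    return FILTER_BANK[intensity]
```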
Hereinafter, referring to Fig. 4, the filter 310 will be described.
Fig. 4 shows Wk, assuming r=2.
The filter 310 filters the resized input image PR on the basis of the resized guide image IR.
The filter 310 performs filtering according to the following equation 1.
[equation 1]
q_i = \frac{1}{|W|} \sum_{k : i \in W_k} \left( a_k I_i + b_k \right)
Here, qi denotes a filtered value of a pixel at position i in the resized input image PR.
Ii denotes a pixel value at position i in the resized guide image IR.
According to an embodiment, Wk may indicate a square window whose window radius is r, namely, whose side length is 2r. Wk includes a pixel at position i, and |W| denotes the number of pixels in Wk. Here, a pixel at position i may be included in a plurality of windows Wk. According to an embodiment, the number of windows Wk including a pixel at position i may be the same as the number of pixels in Wk. This may also be applied to the resized guide image IR; that is, the number of windows in the resized guide image IR may be the same as the number of pixels in Wk.
Fig. 4 shows Wk, assuming r=2.
According to Fig. 4, when r=2, there are a total of 16 windows Wk, namely W1 to W16. Thus, according to Equation 1, qi is the value obtained by summing (ak * Ii + bk) over all windows Wk containing the pixel at position i and dividing the sum by |W|.
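Equation 1 can be read as a direct per-pixel loop. A naive sketch, assuming ak and bk are precomputed per window (Equations 3 and 5) and stored in arrays indexed by the window's top-left corner; for interior pixels the loop visits exactly |W| = (2r)² windows, i.e. 16 when r = 2:

```python
import numpy as np

def q_at(i_y, i_x, I, a, b, r):
    """Equation 1 for a single pixel: average a_k * I_i + b_k over every
    window W_k containing pixel i. Windows are indexed by top-left corner
    and have side 2r, as in the text; a and b hold one value per window.
    For interior pixels, len(terms) equals |W| = (2r)**2."""
    side = 2 * r
    H, W = I.shape
    terms = []
    # every window whose top-left corner (ty, tx) covers pixel (i_y, i_x)
    for ty in range(max(0, i_y - side + 1), min(H - side, i_y) + 1):
        for tx in range(max(0, i_x - side + 1), min(W - side, i_x) + 1):
            terms.append(a[ty, tx] * I[i_y, i_x] + b[ty, tx])
    return sum(terms) / len(terms)
```

Near the image border fewer windows contain the pixel, so the sketch averages over however many exist, a boundary choice the text does not specify.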
Equation 1 may be represented as equation 2.
[Equation 2]

$$q_i = \bar{a}_i I_i + \bar{b}_i$$

$$\bar{a}_i = \frac{1}{|W|} \sum_{k \,:\, i \in W_k} a_k, \qquad \bar{b}_i = \frac{1}{|W|} \sum_{k \,:\, i \in W_k} b_k$$
Here, ak may be represented as equation 3.
[Equation 3]

$$a_k = \frac{\dfrac{1}{|W|} \displaystyle\sum_{i \in W_k} I_i p_i \;-\; u_k P_k}{\sigma_k^2 + \varepsilon}$$

Here, pi denotes the value of the pixel at position i in the resized input image PR.
Here, uk denotes the average of the values of all pixels included in the k-th window Wk in the resized guide image IR, and σk² denotes the variance of those values. ε is a normalization parameter for preventing ak from becoming too large.
According to Equation 3, ak may be obtained by summing, over a predetermined window Wk, the products of corresponding pixel values of the resized guide image IR and the resized input image PR, dividing the summed result by |W|, namely, the number of pixels in Wk, subtracting the product of uk and Pk from the divided result, and dividing the subtracted result by the sum of σk² and ε.
Pk in equation 3 may be represented as equation 4.
[Equation 4]

$$P_k = \frac{1}{|W|} \sum_{i \in W_k} p_i$$
According to equation 4, Pk is a value obtained by summing values of all pixels included in a predetermined window Wk in the resized input image PR and dividing the summed value by |W|, namely, the number of pixels in Wk.
In addition, bk may be represented as equation 5.
[Equation 5]

$$b_k = P_k - a_k u_k$$
According to Equation 5, bk is a value obtained by subtracting the product of ak and uk from Pk.
The filter 310 may obtain the filtered input image Pp by obtaining filtered values qi for all the pixels in the resized input image PR.
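The filtering of Equations 1 to 5 is the guided image filter. A minimal NumPy sketch, assuming square averaging windows of side 2r+1 (a common convention; the text above defines the side as 2r) and edge padding at the image borders:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a square window of side 2r+1 at every pixel,
    computed with an integral image; edges handled by edge padding."""
    x = np.asarray(x, dtype=np.float64)
    p = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/column for the subtraction
    s = 2 * r + 1
    H, W = x.shape
    total = c[s:s+H, s:s+W] - c[:H, s:s+W] - c[s:s+H, :W] + c[:H, :W]
    return total / (s * s)

def guided_filter(P, I, r, eps):
    """Filter input P guided by I, following Equations 1-5."""
    u = box_mean(I, r)                        # u_k: window mean of the guide
    Pk = box_mean(P, r)                       # P_k: window mean of the input (Eq. 4)
    var = box_mean(I * I, r) - u * u          # sigma_k^2: window variance of the guide
    a = (box_mean(I * P, r) - u * Pk) / (var + eps)   # Eq. 3
    b = Pk - a * u                            # Eq. 5
    return box_mean(a, r) * I + box_mean(b, r)        # Eq. 2 (averaged Eq. 1)
```

A larger r or ε gives stronger smoothing, which is how the filter intensities of Table 1 scale the filter bank.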
Referring back to Fig. 3, when the N filters 310 are arranged in ascending or descending order, the k-th difference image generator 320 corresponds to the k-th filter 310 (where k = 1, …, N-1). That is, the first filter 310 corresponds to the first difference image generator 320, the second filter 310 corresponds to the second difference image generator 320, and the (N-1)-th filter 310 corresponds to the (N-1)-th difference image generator 320. Each difference image generator 320 generates a difference image between the filtered image from its corresponding filter 310 and the filtered image from the filter 310 following it. For example, the first difference image generator 320 receives images from the first and second filters 310, the second difference image generator 320 receives images from the second and third filters 310, and the (N-1)-th difference image generator 320 receives images from the (N-1)-th and N-th filters 310.
Each difference image generator 320 generates a difference image from the two received filtered images, namely, the input images Pp filtered by its corresponding filter 310 and by the filter 310 following that filter. The difference image generator 320 obtains the difference between the two filtered input images Pp: the input image Pp filtered by the following filter 310 is subtracted from the input image Pp filtered by the corresponding filter 310. As a result, each of the plurality of difference image generators 320 generates one difference image.
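The filter bank and difference image generators can be sketched as follows; `filt` stands in for any filter of the kind described above (the guided filter, for example), and `params` follows Table 1's intensity ordering, with `None` for the pass-through intensity 0:

```python
import numpy as np

def difference_stack(P, I, params, filt):
    """params: (r, eps) pairs sorted by filter intensity (None = pass the
    input through unchanged); filt(P, I, r, eps) -> filtered image.
    Returns the N-1 difference images, one per difference image generator."""
    filtered = [P if rp is None else filt(P, I, *rp) for rp in params]
    # generator k subtracts the image of filter k+1 from the image of filter k
    return [filtered[k] - filtered[k + 1] for k in range(len(filtered) - 1)]
```

The stack is analogous to a difference-of-Gaussians pyramid, with the guided filter bank replacing Gaussian blurs.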
The characteristic point determining unit 330 receives the plurality of difference images output from the plurality of difference image generators 320, determines the presence of characteristic points, and outputs the result.
The characteristic point determining unit 330 compares a portion or all of the (N-1) difference images with each other to output the characteristic points.
Hereinafter, referring to Fig. 5, the characteristic point determining unit will be described.
Fig. 5 illustrates a portion of the difference images according to embodiments.
Here, pixels at predetermined positions in three difference images are taken as an example.
The characteristic point determining unit 330 compares the value of a pixel 510 at a position m, for which presence of a characteristic point is to be confirmed, with the values of the pixels surrounding the pixel 510, and determines whether the value of the pixel 510 is greater than all, or smaller than all, of the surrounding pixel values.
When the value of the pixel 510 is neither greater than all nor smaller than all of the surrounding pixel values, the characteristic point determining unit 330 does not determine the pixel 510 to be a characteristic point. When it is, the characteristic point determining unit 330 next compares the value of the pixel 510 with the values of the pixel at position m and its surrounding pixels in a previous difference image 502.
When the value of the pixel 510 is neither greater than all nor smaller than all of the values of the pixel at position m and its surrounding pixels in the previous difference image 502, the characteristic point determining unit 330 does not determine the pixel 510 to be a characteristic point. Otherwise, the characteristic point determining unit 330 compares the value of the pixel 510 with the values of the pixel at position m and its surrounding pixels in a next difference image 503. When the value of the pixel 510 is neither greater than all nor smaller than all of those values, the characteristic point determining unit 330 does not determine the pixel 510 to be a characteristic point. Otherwise, the characteristic point determining unit 330 compares the value of the pixel 510 with the values of the pixel at position m and its surrounding pixels in the difference image prior to the previous difference image 502.
Repeating in this way, the characteristic point determining unit 330 compares the value of the pixel 510 with the values of the pixels at position m and their surrounding pixels in all the difference images. As a result, when the value of the pixel 510 is greater than all, or smaller than all, of those values in all the difference images, the characteristic point determining unit 330 determines the pixel 510 to be a characteristic point.
In this way, the characteristic point determining unit 330 determines, for every pixel position in the difference images, whether the pixel is a characteristic point, thereby detecting a plurality of characteristic points in the difference images.
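The extremum test just described can be sketched as follows, assuming 3x3 neighbourhoods and comparison against every difference image (many scale-space detectors compare only the two adjacent images; the text above compares all of them):

```python
import numpy as np

def is_characteristic_point(diffs, d, y, x):
    """Return True if pixel (y, x) of difference image diffs[d] is strictly
    greater than, or strictly smaller than, every compared value: its 3x3
    neighbours in diffs[d] and the 3x3 neighbourhoods at the same position
    in the other difference images. Interior pixels only (a simplification;
    the text does not specify border handling)."""
    v = diffs[d][y, x]
    others = []
    for k, img in enumerate(diffs):
        patch = img[y-1:y+2, x-1:x+2].ravel().tolist()
        if k == d:
            patch.pop(4)  # drop the candidate pixel itself (centre of the 3x3)
        others.extend(patch)
    return all(v > w for w in others) or all(v < w for w in others)
```

Scanning every (d, y, x) with this test yields the plurality of characteristic points described above.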
In the video comparison apparatus 100 described above, the configurations and methods of the embodiments are not limitedly applied; rather, a portion or all of the embodiments may be selectively combined to achieve various modifications.
According to the embodiments, characteristic points can be easily extracted in a low-contrast region of an image on the basis of a combination of r (a window radius) and ε (a normalization parameter).
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.

Claims (18)

1. An apparatus for processing an image, comprising:
a plurality of filters for generating a plurality of filtered images respectively, wherein the plurality of filters correspond to a plurality of filter intensities, respectively, and each of the plurality of filters filters an input image on the basis of a guide image according to the corresponding filter intensity to generate a corresponding filtered image;
a plurality of difference image generators for generating a plurality of difference images on the basis of the plurality of filtered images; and
a characteristic point determining unit for determining characteristic points in the input image on the basis of the plurality of difference images.
2. The apparatus according to claim 1, wherein the plurality of filter intensities correspond to a plurality of window radiuses, respectively.
3. The apparatus according to claim 1, wherein the plurality of filter intensities correspond to a plurality of normalization parameters, respectively.
4. The apparatus according to claim 1, wherein the plurality of filter intensities correspond to a plurality of combinations of one or more normalization parameters and one or more window radiuses, respectively.
5. The apparatus according to claim 1, wherein the characteristic point determining unit compares a value of any one pixel in a current difference image among the plurality of difference images with values of surrounding pixels of the one pixel, to determine whether the one pixel is a characteristic point.
6. The apparatus according to claim 1, wherein the characteristic point determining unit determines that any one pixel in a current difference image among the plurality of difference images is not a characteristic point, when a value of the one pixel is neither greater than all nor smaller than all of the values of surrounding pixels of the one pixel.
7. The apparatus according to claim 1, further comprising a camera for capturing the input image or the guide image.
8. The apparatus according to claim 1, wherein the number of the plurality of difference image generators is smaller than the number of the plurality of filters.
9. The apparatus according to claim 1, wherein each of the difference image generators generates each of the difference images as a difference of a first filtered image received from a filter corresponding to each of the difference image generators, and a second filtered image received from a filter following the filter corresponding to each of the difference image generators.
10. The apparatus according to claim 1, further comprising a matching use unit determining to which object's image the input image corresponds on the basis of the characteristic points.
11. A method of processing an image, comprising:
filtering an input image by using a plurality of filter intensities on the basis of a guide image to generate a plurality of filtered images, wherein the plurality of filtered images corresponds to the plurality of filter intensities, respectively;
generating a plurality of difference images on the basis of the plurality of the filtered images; and
determining characteristic points in the input image on the basis of the plurality of difference images.
12. The method according to claim 11, wherein the plurality of filter intensities correspond to a plurality of window radiuses, respectively.
13. The method according to claim 11, wherein the plurality of filter intensities correspond to a plurality of normalization parameters, respectively.
14. The method according to claim 11, wherein the plurality of filter intensities correspond to a plurality of combinations of one or more normalization parameters and one or more window radiuses, respectively.
15. The method according to claim 11, wherein the determining of characteristic points further comprises comparing a value of any one pixel in a current difference image among the plurality of difference images with values of surrounding pixels of the one pixel, to determine whether the one pixel is a characteristic point.
16. The method according to claim 11, wherein the determining of characteristic points further comprises:
comparing a value of any one pixel in a current difference image among the plurality of difference images with values of surrounding pixels of the one pixel; and
determining the one pixel in the current difference image not to be a characteristic point, when the value of the one pixel is neither greater than all nor smaller than all of the values of the surrounding pixels, as a result of the comparison.
17. The method according to claim 11, further comprising:
determining to which object's image the input image corresponds on the basis of the characteristic points.
18. The method according to claim 17, wherein the determining of the input image comprises:
detecting matching points on the basis of the characteristic points; and
searching a memory for matching points of an object, the matching points corresponding to the detected matching points.
PCT/KR2013/003492 2013-04-24 2013-04-24 Apparatus and method for processing image WO2014175477A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2013/003492 WO2014175477A1 (en) 2013-04-24 2013-04-24 Apparatus and method for processing image


Publications (1)

Publication Number Publication Date
WO2014175477A1 true WO2014175477A1 (en) 2014-10-30

Family

ID=51792034

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/003492 WO2014175477A1 (en) 2013-04-24 2013-04-24 Apparatus and method for processing image

Country Status (1)

Country Link
WO (1) WO2014175477A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080247675A1 (en) * 2007-04-04 2008-10-09 Canon Kabushiki Kaisha Image-processing apparatus and image-processing method
US20090324087A1 (en) * 2008-06-27 2009-12-31 Palo Alto Research Center Incorporated System and method for finding stable keypoints in a picture image using localized scale space properties
US20110142286A1 (en) * 2008-08-11 2011-06-16 Omron Corporation Detective information registration device, target object detection device, electronic device, method of controlling detective information registration device, method of controlling target object detection device, control program for detective information registration device, and control program for target object detection device
KR20120089504A (en) * 2010-12-10 2012-08-13 경북대학교 산학협력단 Apparatus for recognizing a subject and method using thereof
KR20120130462A (en) * 2011-05-23 2012-12-03 동아대학교 산학협력단 Method for tracking object using feature points of object

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399627A (en) * 2018-03-23 2018-08-14 云南大学 Video interframe target method for estimating, device and realization device
CN108399627B (en) * 2018-03-23 2020-09-29 云南大学 Video inter-frame target motion estimation method and device and implementation device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 13882723
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 13882723
Country of ref document: EP
Kind code of ref document: A1