WO2008066217A1 - Face recognition method by image enhancement - Google Patents

Face recognition method by image enhancement

Info

Publication number
WO2008066217A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
face image
suspect
extracting
Prior art date
Application number
PCT/KR2007/000154
Other languages
French (fr)
Inventor
Donghoon Jeon
Sungha Shin
Jonghwa Cheon
Original Assignee
Firstec Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Firstec Co., Ltd.
Publication of WO2008066217A1 publication Critical patent/WO2008066217A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/32 Normalisation of the pattern dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level

Abstract

The present invention relates to a face recognition method by image enhancement and, more particularly, to a face recognition method by image enhancement which extracts a face image of a suspect from a moving image or a still image that contains a scene of a crime, processes the extracted face image, and compares the processed face image with criminal face images stored in a database so as to identify the suspect. The face recognition method comprises a face extracting step of extracting a face image from an image that is obtained from a CCD camera and contains the face of a suspect, an image correcting step of correcting the face image, a characteristic extracting step of extracting characteristics of the face, and an identification step of comparing the face image with pictures stored in a database to identify the suspect, wherein the image correcting step enlarges the face image to a predetermined size in order to compare it with the pictures stored in the database, calculates the pixel values of the pixels of the enlarged face image using an interpolation method based on pixel position and contour information, and processes the calculated pixel values according to a statistical analysis.

Description

FACE RECOGNITION METHOD BY IMAGE ENHANCEMENT
Technical Field
[1] The present invention relates to a face recognition method by image enhancement and, more particularly, to a face recognition method by image enhancement which extracts a face image of a suspect from a moving image or a still image that contains a scene of a crime, processes the extracted face image, and compares the processed face image with criminal face images stored in a database so as to identify the suspect.
[2]
Background Art
[3] Recently, crimes such as bank and shop robbery and the extortion of money from intoxicated, elderly or otherwise vulnerable people have occurred frequently. Furthermore, a crime is frequently committed in which cash is drawn from an automated teller machine (ATM) using a card extorted from its owner.
[4] In order to prevent the aforementioned crimes and acquire information on suspects, CCTV cameras are installed in places with a high likelihood of crime. Moreover, camera-equipped cellular phones are now widespread, so pictures of crime scenes captured by witnesses are often provided to the police.
[5] However, a high-resolution camera is required in order to store distinct images from CCTV cameras or cellular phone cameras. Furthermore, installed CCTV cameras are often aging, low-resolution equipment, and identification is difficult because most criminals hide their faces. Moreover, even if the face of a suspect is clearly captured by a camera, the police have no choice but to circulate a search for the suspect and wait for a report from an informant in order to identify the suspect.
[6] In order to solve the aforementioned problems, attempts to use face recognition systems installed in various buildings or offices have been made recently. Most of the face recognition methods under study enlarge a face image, extract the characteristics of the face, and compare the face image with pictures of criminals stored in a database.
[7] Japanese Patent Laid-Open Publication No. Hei05-266173 discloses a face recognition technique that extracts a face image, removes the influence of lighting using a homomorphic filter, and generates rotation and magnification/reduction of the face image according to a recursively called second-order moment segmentation means to represent the face using a characteristic vector. That is, the positions of the eyes and the mouth in a face region are determined, the face is segmented with a line connecting the two eyes and a line that is perpendicular to it and passes through the nose, and second-order moment segmentation is limited to the face region to remove noise that affects the characteristic vector, so that the face can be recognized.
[8] Japanese Patent Laid-Open Publication No. Hei07-302327 discloses a technique that captures face data from various directions, stores the data, and compares it with stored images to detect the image having the highest similarity.
[9] Furthermore, Korean Patent Laid-Open Publication No. 1999-50271 discloses a face recognition method that extracts face images of suspects and identifies the most similar suspect. This face recognition method includes a face extraction step of extracting a face image from a gray or color picture, an image correction step of correcting the face image, a characteristic extraction step of extracting the characteristics of the face, and an identification step of comparing the face image with pictures stored in a database to identify the suspect having that face.
[10] The face extraction step uses a method based on the motion of the eyes (disclosed in Korean Patent No. 361497), a method using grouped face images and a mesh-type search region (disclosed in Korean Patent No. 338807), a method of detecting a face region by its face color using edge/color information (disclosed in Korean Patent No. 427181), or a method using an adaptive boosting algorithm based on rectangular characteristics of a face (disclosed in Korean Patent No. 621883).
[11] The characteristic extraction step uses the face characteristics disclosed in Korean Patent Laid-Open Publication No. 1999-50271; a hierarchical graph matching method using a flexible grid and a principal component analysis method are also known as characteristic extraction techniques.
[12] The identification step, which compares a face image with pictures stored in a predetermined database to identify the suspect having that face, uses a method that determines a hyperplane through a support vector machine and learns a face recognition model to recognize the face (disclosed in Korean Patent Nos. 456619, 571826 and 608595), a hierarchical principal component analysis method (disclosed in Korean Patent No. 571800), and so on.
[13] However, while the face extraction step, the characteristic extraction step and the identification step are all concerned with automating identification, images (including still frames of moving images) acquired by CCD cameras at a crime scene show the suspect's face at a very small size in many cases. In such cases, the images must be magnified and corrected in order to obtain distinct facial characteristics and compare the face with the pictures stored in the database.
[14] The images are mostly magnified using an image processing program such as Photoshop. To automate the magnification of images, the following techniques are used.
[15] In general, an image captured by a camera, as described above, is a digital image, and thus the image is composed of pixels to which RGB values or contrast values are respectively allocated (hereinafter, the information allocated to each pixel, such as an RGB value or a contrast value, is referred to as a 'pixel value'). To magnify the image, a technique that simply repeats the pixel value of each pixel into neighboring pixels according to the magnification factor is generally used. More specifically, to magnify the original image twice, each pixel value is repeated twice and recorded, as illustrated in FIG. 1. FIG. 2(b) illustrates an image of 220x220 pixels obtained by magnifying the 55x55 pixel image of FIG. 2(a) four times. Referring to FIG. 2(b), the enlarged 220x220 pixel image shows a blocky repetition effect compared with the image illustrated in FIG. 2(a).
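For illustration, the replication-based magnification described above can be sketched in a few lines of NumPy. This is an editor's sketch, not code from the patent; the function name and the 4x factor (matching the 55x55 to 220x220 example) are illustrative assumptions.

```python
import numpy as np

def magnify_by_replication(img: np.ndarray, factor: int) -> np.ndarray:
    """Enlarge an image by repeating each pixel value `factor` times along
    both axes: the conventional replication method criticised above."""
    # np.repeat duplicates rows, then columns, producing the blocky
    # "repetition effect" visible in FIG. 2(b).
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Example: a 55x55 grayscale patch becomes 220x220.
low_res = np.random.randint(0, 256, (55, 55), dtype=np.uint8)
high_res = magnify_by_replication(low_res, 4)
assert high_res.shape == (220, 220)
```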
[16] In addition, linear interpolation, which assigns each new pixel an average of the pixel values of its neighboring pixels, is also used to magnify an image. In this case, edge regions are smoothed and blurring occurs, as illustrated in FIG. 2(c), so the face extraction step and the characteristic extraction step cannot produce satisfactory results. Furthermore, an image can be transformed into frequency components and magnified in the Fourier domain; in this case, edge characteristics are maintained, but noise is added to the image in the low-frequency region.
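A hedged sketch of the interpolation-based alternative follows, using OpenCV's bilinear resize; averaging neighbouring pixel values in this way is what produces the edge blurring of FIG. 2(c). The patent does not name a specific library, so the use of cv2.resize here is an assumption for illustration.

```python
import cv2
import numpy as np

def magnify_by_interpolation(img: np.ndarray, factor: int) -> np.ndarray:
    """Enlarge an image with bilinear interpolation: each new pixel value is
    a weighted average of its neighbours, which smooths (blurs) edges."""
    h, w = img.shape[:2]
    return cv2.resize(img, (w * factor, h * factor),
                      interpolation=cv2.INTER_LINEAR)

low_res = np.random.randint(0, 256, (55, 55), dtype=np.uint8)
blurred = magnify_by_interpolation(low_res, 4)   # 220x220, but with soft edges
```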
[17] With the aforementioned conventional methods, it is difficult to obtain satisfactory results in the identification step, because serious low-frequency noise is generated in edge regions or blurring degrades the characteristic extraction step. Furthermore, even when a face image is corrected, the best combination of the various methods known for the face extraction, characteristic extraction and face recognition steps still has to be found in the face recognition field.
[18]
Disclosure of Invention Technical Problem
[19] Accordingly, the present invention has been made to solve the above-mentioned problems occurring in the conventional art, and a primary object of the present invention is to provide a face recognition method capable of efficiently detecting a face region, magnifying the detected face region distinctly, and effectively comparing the detected face region with characteristic parts of a face.
Technical Solution
[20] To accomplish the object of the present invention, there is provided a face recognition method by image enhancement comprising: a face extracting step of extracting a face image from an image that is obtained from a CCD camera and contains the face of a suspect; an image correcting step of correcting the face image; a characteristic extracting step of extracting characteristics of the face; and an identification step of comparing the face image with pictures stored in a database to identify the suspect, wherein the image correcting step enlarges the face image to a predetermined size in order to compare it with the pictures stored in the database, calculates the pixel values (i.e., contrast or color information) of the pixels of the enlarged face image using an interpolation method based on pixel position and contour information, and processes the calculated pixel values according to a statistical analysis.
[21] Low-frequency noise is removed from the pixel values processed according to the statistical analysis by means of a wavelet filter.
[22] The face extracting step uses an AdaBoost method, the characteristic extracting step uses an HGM method, and the identification step uses an SVM method.
Advantageous Effects
[23] According to the present invention, a low-resolution image is magnified without distorting the characteristic of the low-resolution image when the low-resolution image is converted to a high-resolution image, and thus a face region can be detected more efficiently, the detected face region can be magnified more distinctly, and the face region can be compared with face characteristic parts more effectively.
[24]
Brief Description of the Drawings
[25] Further objects and advantages of the invention can be more fully understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
[26] FIG. 1 illustrates pixel values of an image when the image is magnified using a conventional method;
[27] FIG. 2 illustrates an image magnified according to a conventional image magnification technique;
[28] FIG. 3 illustrates edge/contour extraction according to the present invention;
[29] FIG. 4 illustrates interpolation between contours according to the present invention;
[30] FIG. 5 illustrates image enlargement and correction according to the present invention; and
[31] FIG. 6 illustrates an image magnification and correction result according to the present invention.
[32]
Mode for the Invention
[33] In an embodiment of the present invention, a low-resolution image of the face of a suspect, obtained from a CCD camera, is taken as an example.
[34] Edges are extracted from the low-resolution image obtained from the CCD camera, as illustrated in FIG. 3(a). An edge is designated as the set of pixels whose contrast values exceed a predetermined threshold, and is described by the position values (ui, vi) of a series of pixels. A contour function f(ui, vi) that represents contours is obtained from the edges lying within a predetermined range among the extracted edges of the low-resolution image.
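The patent does not specify how the contrast values are computed; the sketch below assumes a Sobel gradient magnitude as the contrast measure and returns the (ui, vi) positions of pixels exceeding the threshold, as in FIG. 3(a). The function name and threshold handling are illustrative.

```python
import cv2
import numpy as np

def extract_edge_positions(gray: np.ndarray, threshold: float) -> np.ndarray:
    """Return the (ui, vi) positions of pixels whose contrast (here, Sobel
    gradient magnitude) exceeds a predetermined threshold value."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    contrast = np.hypot(gx, gy)
    # Each row of the result is one edge-pixel coordinate (row ui, column vi).
    return np.argwhere(contrast > threshold)
```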
[35] The contour function f(ui, vi) can be a linear function or a nonlinear function. The contour function f(ui, vi) includes multiple functions fk(ui, vi) respectively corresponding to multiple sections of the edges of the image such that the contour function f(ui, vi) satisfactorily reflects the position values (ui, vi). Contours and edges obtained according to the contour function are illustrated in FIG. 3(c).
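The patent leaves the form of the section functions fk open (linear or nonlinear). The sketch below assumes the edge pixels have already been ordered along the contour (for example by contour tracing, which the patent does not detail) and fits one low-order polynomial per section; because each fk is continuous, the fitted contour can be sampled at any target resolution.

```python
import numpy as np

def fit_contour_sections(edge_points: np.ndarray, section_len: int = 20,
                         degree: int = 2):
    """Split ordered edge positions (ui, vi) into sections and fit one
    function fk to each, parameterised by an index t along the section.
    degree=1 gives a linear fk, degree>1 a nonlinear fk."""
    sections = []
    for start in range(0, len(edge_points), section_len):
        pts = edge_points[start:start + section_len]
        if len(pts) < degree + 1:
            continue                              # too few points for a fit
        t = np.arange(len(pts))
        cu = np.polyfit(t, pts[:, 0], degree)     # u(t)
        cv = np.polyfit(t, pts[:, 1], degree)     # v(t)
        sections.append((cu, cv, len(pts) - 1))
    return sections

def sample_contour(sections, points_per_section: int = 200) -> np.ndarray:
    """Sample every fk densely; a continuous contour keeps its shape when
    enlarged, which is the property illustrated in FIG. 3(c)."""
    out = []
    for cu, cv, t_max in sections:
        t = np.linspace(0.0, t_max, points_per_section)
        out.append(np.column_stack([np.polyval(cu, t), np.polyval(cv, t)]))
    return np.vstack(out)
```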
[36] A contour represented by the contour function is a continuous model, expressed in the form of a line segment, whereas a pixel is an independent discrete sample; thus the characteristics of the contour are maintained even when the contour is enlarged, as illustrated in FIG. 3(c). That is, even when the low-resolution image is converted to a high-resolution image, the edge characteristics of the low-resolution image are maintained.
[37] For example, when a low-resolution image of 55x55 pixels, as illustrated in FIG. 2(a), is converted to a high-resolution image of 220x220 pixels, as illustrated in FIG. 2(b), a repetition effect is produced, and the pixels producing the repetition effect form a thick edge. This thick edge makes it difficult to represent a distinct contour. However, the characteristics of a contour are maintained in the high-resolution image even though the contour is obtained from the low-resolution image, and thus the edge characteristics are maintained.
[38] According to the contour function, the pixel values of the pixels constituting an edge in the low-resolution image can be maintained in the high-resolution image, and the width of the edge is also maintained. The pixels constituting an edge in the high-resolution image therefore keep their original pixel values, while the remaining pixels are filled by copying the corresponding low-resolution pixel values as many times as the magnification factor, in the same manner in which FIG. 2(a) is converted to FIG. 2(b), to form the high-resolution image.
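One way to read this paragraph is sketched below: every low-resolution pixel is first replicated as in FIG. 2(a) to FIG. 2(b), and a mask marks one thin contour pixel per original edge pixel so that the edge width is not multiplied by the magnification factor; the unmarked pixels are the ones recomputed by the interpolation described next. The mapping of each edge pixel to the centre of its enlarged block is an editor's assumption.

```python
import numpy as np

def upscale_with_contour_mask(low_res: np.ndarray, edge_positions: np.ndarray,
                              factor: int):
    """Replicate each low-resolution pixel `factor` times and mark which
    high-resolution pixels are genuine contour pixels.  Marked pixels keep
    their original values; the rest are later recomputed by interpolation."""
    high = np.repeat(np.repeat(low_res, factor, axis=0), factor, axis=1)
    contour_mask = np.zeros(high.shape, dtype=bool)
    for u, v in edge_positions:
        # one thin contour pixel per low-resolution edge pixel, so the edge
        # stays narrow instead of becoming a factor-wide block
        contour_mask[u * factor + factor // 2, v * factor + factor // 2] = True
    return high, contour_mask
```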
[39] Interpolation based on successive linear or nonlinear analysis is then performed on the pixels lying between the edges that construct the contour function and their neighboring edges, in order to calculate the pixel values of the pixels between the contours. This is explained in more detail below.
[40] FIG. 4(a) illustrates a part of an image including multiple edges. A grid indicated by a solid line (that is, a grid formed by the first and second rows and the first and second columns) represents a pixel of the low-resolution image, and a grid indicated by a solid line and a dotted line (that is, a grid formed by the first row and the first column) represents a pixel of the magnified high-resolution image. Pixel values of pixels corresponding to contours keep their original values, and the pixel values of the enlarged, newly formed pixels are calculated. Referring to FIG. 4(b), the pixel values of the pixels lying between the pixels corresponding to contours are calculated by interpolation. Here, multiple pixel values are obtained for each pixel lying between the contours. For example, for the pixel in the fifth row and the seventh column, seven pixel values are obtained, even though calculation is performed on only the two pixels indicated in FIG. 4(b). Through statistical analysis, either the pixel value with the highest frequency among the multiple interpolated values allocated to a pixel is selected, or the average of the multiple values is taken. When the image illustrated in FIG. 2(a) is processed by the image processing technique according to the present invention, the image illustrated in FIG. 6 is obtained. Referring to FIG. 6, an excellent enlarged image having distinct edge characteristics and no low-frequency noise is obtained.
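The interpolation and statistical selection described in this paragraph might look like the following sketch: candidate values are interpolated along each row and each column between neighbouring contour pixels, and every pixel that receives several candidates is assigned either the most frequent (mode) or the average value. The row/column scanning order and the rounding used for the mode are assumptions made for illustration.

```python
import numpy as np
from collections import defaultdict

def interpolate_between_contours(high: np.ndarray, contour_mask: np.ndarray,
                                 use_mode: bool = True) -> np.ndarray:
    """Fill non-contour pixels by linear interpolation between the nearest
    contour pixels along each row and column; pixels that collect several
    candidate values keep the most frequent one (or, optionally, the mean)."""
    candidates = defaultdict(list)

    def scan(values, mask, register):
        idx = np.flatnonzero(mask)
        for a, b in zip(idx[:-1], idx[1:]):
            if b - a < 2:
                continue
            interp = np.linspace(float(values[a]), float(values[b]), b - a + 1)
            for pos, val in enumerate(interp[1:-1], start=a + 1):
                register(pos, val)

    rows, cols = high.shape
    for r in range(rows):
        scan(high[r], contour_mask[r],
             lambda c, val, r=r: candidates[(r, c)].append(val))
    for c in range(cols):
        scan(high[:, c], contour_mask[:, c],
             lambda r, val, c=c: candidates[(r, c)].append(val))

    out = high.astype(float)
    for (r, c), vals in candidates.items():
        if use_mode:
            out[r, c] = np.bincount(np.round(vals).astype(int)).argmax()
        else:
            out[r, c] = np.mean(vals)
    return out
```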
[41] Accordingly, image correction that produces satisfactory edge characteristics and removes the repetition effect is achieved. An image contains low-frequency noise when the contour function is designated using a considerably small number of edge position values; in this case, it is preferable to remove the low-frequency noise using a wavelet filter.
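The patent only states that a wavelet filter removes the low-frequency noise. One plausible reading, sketched below with the PyWavelets package (an assumption; no library is named in the patent), is to attenuate the coarsest approximation band of a 2-D wavelet decomposition while leaving the detail (edge) bands untouched.

```python
import numpy as np
import pywt

def suppress_low_frequency_noise(img: np.ndarray, wavelet: str = "db2",
                                 level: int = 2, keep: float = 0.8) -> np.ndarray:
    """Decompose the corrected image with a 2-D discrete wavelet transform,
    shrink the coarsest approximation band toward its mean (where slow,
    low-frequency variations live), and reconstruct the image."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    approx = coeffs[0]
    coeffs[0] = keep * approx + (1.0 - keep) * approx.mean()
    return pywt.waverec2(coeffs, wavelet)
```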
[42] When a face region is extracted using the AdaBoost method from an image corrected by the above-described method, face characteristics are extracted using the HGM method, and the face is recognized using the SVM method, the face detection success rate increases from 82% to 95%, and recognition rates higher than 88%, 95% and 95% are obtained even when lighting, expression and pose change.
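A rough end-to-end sketch of such a pipeline is given below. OpenCV's Haar-cascade detector is an AdaBoost-based face detector, and scikit-learn's SVC provides the SVM classifier; hierarchical graph matching (HGM) is not implemented here, so a plain normalised-pixel feature stands in as a placeholder. The reported detection and recognition rates are the patent's own figures and are not reproduced by this sketch.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# AdaBoost-based face detector (OpenCV Haar cascade).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face(gray: np.ndarray, size=(64, 64)):
    """Detect the largest face region and return it resized to a fixed size,
    or None when no face is found."""
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], size)

def features(face: np.ndarray) -> np.ndarray:
    """Placeholder for HGM feature extraction: normalised raw pixels."""
    return face.astype(float).ravel() / 255.0

def train_identifier(gallery_faces, labels) -> SVC:
    """Identification step: fit an SVM on database (gallery) faces so that
    a corrected, enlarged suspect face can later be classified."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(np.stack([features(f) for f in gallery_faces]), labels)
    return clf
```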
[43]
Industrial Applicability
[44] The present invention relates to a face recognition method and, more particularly, to a face recognition method which extracts a face image of a suspect from a moving image or a still image that contains a scene of a crime, processes the extracted face image, and compares the processed face image with criminal face images stored in a database so as to identify the suspect. According to the present invention, a high recognition rate can be secured.

Claims

Claims
[1] A face recognition method by image enhancement comprising: a face extracting step of extracting a face image from an image containing the face of a suspect; an image correcting step of correcting the face image; a characteristic extracting step of extracting characteristics of the face; and an identification step of comparing the face image with pictures stored in a database to identify the suspect, wherein the image correcting step enlarges the face image to a predetermined size in order to compare the face image with the pictures stored in the database, calculates pixel values, i.e., contrast or color information, of pixels of the enlarged face image using an interpolation method according to the position of the enlarged face image and contour information, and processes the calculated pixel values according to a statistical analysis.
[2] The face recognition method by image enhancement according to claim 1, wherein low-frequency noise is removed by means of a wavelet filter from the pixel values processed according to the statistical analysis.
[3] The face recognition method by image enhancement according to claim 1 or 2, wherein the face extracting step uses an AdaBoost method, the characteristic extracting step uses an HGM method, and the identification step uses an SVM method.
PCT/KR2007/000154 2006-11-30 2007-01-10 Face recognition method by image enhancement WO2008066217A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020060119515A KR100843513B1 (en) 2006-11-30 2006-11-30 Face Recognition Method By Image Enhancement
KR10-2006-0119515 2006-11-30

Publications (1)

Publication Number Publication Date
WO2008066217A1 (en) 2008-06-05

Family

ID=39467996

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2007/000154 WO2008066217A1 (en) 2006-11-30 2007-01-10 Face recognition method by image enhancement

Country Status (2)

Country Link
KR (1) KR100843513B1 (en)
WO (1) WO2008066217A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101703355B1 (en) * 2010-10-05 2017-02-06 엘지전자 주식회사 Apparatus and method clearing image
KR101589149B1 (en) * 2015-05-27 2016-02-03 수원대학교산학협력단 Face recognition and face tracking method using radial basis function neural networks pattern classifier and object tracking algorithm and system for executing the same
KR102440490B1 (en) * 2020-04-16 2022-09-06 주식회사 에이비에이치 Apparatus and method for recognizing emotion based on artificial intelligence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10341334A (en) * 1997-06-10 1998-12-22 Dainippon Screen Mfg Co Ltd Image magnification method, device therefor and storage medium recording program
KR19990050271A (en) * 1997-12-16 1999-07-05 구자홍 Method and apparatus for automatic detection of criminal face using face recognition
KR20010081562A (en) * 2000-02-16 2001-08-29 윤덕용 An image scaling method and scaler using the continuous domain filtering and interpolation methods
JP2006179030A (en) * 2006-03-15 2006-07-06 Nissan Motor Co Ltd Facial region detection apparatus

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9113075B2 (en) 2009-02-27 2015-08-18 Samsung Electronics Co., Ltd. Image processing method and apparatus and digital photographing apparatus using the same
CN101859380A (en) * 2010-04-27 2010-10-13 河海大学 Extraction method for brightness characteristic quantity of face image and identification method of same
CN103116404A (en) * 2013-02-25 2013-05-22 广东欧珀移动通信有限公司 Face recognition unlocking method and mobile smart terminal
CN103177263A (en) * 2013-03-13 2013-06-26 浙江理工大学 Image-based automatic detection and counting method for rice field planthopper
CN103177263B (en) * 2013-03-13 2016-03-23 浙江理工大学 A kind of rice field plant hopper based on image detects and method of counting automatically
CN107728115A (en) * 2017-09-11 2018-02-23 电子科技大学 Ambient interferences suppressing method based on SVM after a kind of radar target imaging
CN107728115B (en) * 2017-09-11 2020-08-11 电子科技大学 SVM-based background interference suppression method after radar target imaging
CN111310152A (en) * 2020-03-17 2020-06-19 浙江万里学院 Computer user identity recognition system
CN111310152B (en) * 2020-03-17 2020-11-24 浙江万里学院 Computer user identity recognition system

Also Published As

Publication number Publication date
KR20080049206A (en) 2008-06-04
KR100843513B1 (en) 2008-07-03

Similar Documents

Publication Publication Date Title
WO2008066217A1 (en) Face recognition method by image enhancement
Choi et al. Context-aware deep feature compression for high-speed visual tracking
Sun et al. A novel contrast enhancement forensics based on convolutional neural networks
Li et al. Image recapture detection with convolutional and recurrent neural networks
EP3082065A1 (en) Duplicate reduction for face detection
Deborah et al. Detection of fake currency using image processing
Fanfani et al. PRNU registration under scale and rotation transform based on convolutional neural networks
CN112884657B (en) Face super-resolution reconstruction method and system
Hadis et al. The impact of preprocessing on face recognition using pseudorandom pixel placement
CN115984973B (en) Human body abnormal behavior monitoring method for peeping-preventing screen
Tuba et al. Digital image forgery detection based on shadow HSV inconsistency
Maalouf et al. Offline quality monitoring for legal evidence images in video-surveillance applications
RU2661537C2 (en) Method and system of superresolution by combined sparse approximation
KR102500516B1 (en) A protection method of privacy using contextual blocking
Ambili et al. A robust technique for splicing detection in tampered blurred images
CN113014914B (en) Neural network-based single face-changing short video identification method and system
Preetha A fuzzy rule-based abandoned object detection using image fusion for intelligent video surveillance systems
Kadha et al. A novel method for resampling detection in highly compressed JPEG images through BAR using a deep learning technique
Baniya et al. Spatiotemporal dynamics and frame features for improved input selection in video super-resolution models
Jyothy et al. Texture-based multiresolution steganalytic features for spatial image steganography
Bai et al. Detection and localization of video object removal by spatio-temporal lbp coherence analysis
Arvanitidou et al. Short-term motion-based object segmentation
Mer et al. From traditional to deep: A survey of image forgery detection techniques
Jang et al. Image processing-based validation of unrecognizable numbers in severely distorted license plate images
Biswas et al. Sparse representation based anomaly detection using HOMV in H. 264 compressed videos

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07708453

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07708453

Country of ref document: EP

Kind code of ref document: A1