US20080159600A1 - Iris image data processing for use with iris recognition system - Google Patents
- Publication number
- US20080159600A1 (application Ser. No. 11/933,752)
- Authority
- US
- United States
- Prior art keywords
- pixel
- boundary pixel
- outer boundary
- inner boundary
- iris
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present disclosure relates to processing iris image data, and more particularly, to a method of identifying an outer boundary of an iris image.
- An iris recognition system is an apparatus for identifying a person by distinguishing his or her unique iris pattern.
- the iris recognition system is more accurate for personal identification than other biometric methods, such as voice or fingerprint recognition, and it provides a high degree of security.
- the iris is a region existing between the pupil and the white sclera of an eye.
- the iris recognition method is a technique for identifying persons based on information obtained by analyzing each individual's unique iris pattern.
- the core technique of the iris recognition system is to acquire an accurate eye image using image acquisition equipment and to efficiently extract unique characteristic information on the iris from the input eye image.
- in practice, however, a non-contact type iris recognition system, which acquires the iris image at a certain distance, may acquire an iris image with a variety of deformations. That is, it is unlikely that a complete eye image can be acquired, since the eye is not necessarily directed toward the front face of the camera but may be positioned at a slight angle with respect to it. Thus, there may be cases where the acquired eye-image information is rotated at an arbitrary angle with respect to a centerline of the iris.
- One aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imaginary line passing the center; computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel; computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.
- computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels.
- the method may further comprise computing a first distance between the first inner boundary pixel and the first outer boundary pixel.
- the method may further comprise obtaining a distance between the first inner boundary pixel and the second inner boundary pixel.
- Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel.
- Computing the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel.
- the center of outer boundary may be off the center of the inner boundary.
- the method may further comprise: providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center; computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel, wherein computing to locate the center of the outer boundary further uses the third outer boundary pixel and the fourth outer boundary pixel.
- Another aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary; computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel; computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel; computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel; computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and using the first, second, third and fourth outer boundary pixels for further processing.
- the method may further comprise computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels.
- the first imaginary line may be substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line may be substantially perpendicular to the third and fourth imaginary lines.
- a pixel located on the first imaginary line may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.
- FIG. 1 is a flowchart explaining the procedures of a normalization process of an iris image according to one embodiment of the present invention.
- FIG. 2 a is a view showing a result of detection of a pupillary boundary using a Canny edge detector.
- FIG. 2 b is a view showing center coordinates and diameter of a pupil.
- FIG. 2 c shows an iris image upon obtainment of a radius and center of an outer boundary of an iris according to one embodiment of the present invention.
- the second imaginary line may be substantially perpendicular to the first imaginary line.
- Computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels and a bisectional point of the third and fourth outer boundary pixels.
- Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel, a second distance defined between the second inner boundary pixel and the second outer boundary pixel, a third distance defined between the third inner boundary pixel and the third outer boundary pixel and a fourth distance defined between the fourth inner boundary pixel and the fourth outer boundary pixel.
- Computing to locate the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel.
- the first imaginary line may comprise a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, wherein a pixel located on the second line segment may be determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment.
- Providing information indicative of a center of the inner boundary may comprise performing a Canny edge detection method using the eye image data.
- the method may further comprise: extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and processing the data of the portion to obtain a characteristic vector of an iris pattern. Processing the data may further comprise obtaining a plurality of characteristic vectors for the same eye image.
- FIGS. 3(a) to 3(d) show the procedures of the normalization process of a slanted iris image.
- FIGS. 4(a) and 4(b) show a rotated iris image resulting from the tilting of the user's head.
- FIGS. 5(a) and 5(b) show procedures of a correction process of the rotated iris image shown in FIGS. 4(a) and 4(b).
- FIG. 1 is a flowchart explaining procedures of a normalization process of an iris image according to one embodiment of the present invention.
- an eye image is acquired by image acquisition equipment using an infrared illuminator and a visible light rejection filter.
- a reflective light is caused to be gathered in the pupil of an eye so that information on the iris image is not lost.
- inner and outer boundaries of the iris are detected in order to extract only an iris region from the acquired eye image, and the center of the detected inner and outer boundaries is set.
- Step 120 is performed by a method for detecting the inner and outer boundaries of the iris using a Canny edge detector and differences in pixel values of the image according to one embodiment of the present invention, which will be specifically explained below.
- FIG. 2 a is a view showing a result of detection of a pupillary boundary, i.e. the inner boundary of the iris, using the Canny edge detector.
- the Canny edge detector smoothes an acquired image by using Gaussian filtering and then detects a boundary by using a Sobel operator.
- the Gaussian filtering process can be expressed as the following Equation 1, and the used Sobel operator can be expressed as the following Equation 2.
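The Gaussian smoothing and Sobel steps that Equations 1 and 2 refer to (the equations appear as images in the original and are not reproduced here) can be sketched as follows. This is a minimal illustration under our own assumptions (kernel size, sigma, and all function names are ours), not the patent's implementation:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel; Equation 1 in the text is a standard Gaussian.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

# Sobel operators (Equation 2 in the text is the standard Sobel pair).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def cross_correlate(img, kernel):
    # Naive 'valid' cross-correlation (no kernel flip; fine for these
    # symmetric/antisymmetric kernels), enough for a sketch.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_magnitude(img, sigma=1.0):
    # Smooth with the Gaussian, then take the Sobel gradient magnitude.
    smoothed = cross_correlate(img, gaussian_kernel(5, sigma))
    gx = cross_correlate(smoothed, SOBEL_X)
    gy = cross_correlate(smoothed, SOBEL_Y)
    return np.hypot(gx, gy)
```

A full Canny detector would follow this with non-maximum suppression and hysteresis thresholding; only the two steps named in the text are shown.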
- FIG. 2 b shows the center coordinates and diameter of the pupil. Referring to FIG. 2 b , the pupil's radius is d/2, and the pupil's center coordinates are (x+d/2, y+d/2).
- the outer boundary of the iris in the image can be detected by comparing pixel values while proceeding upward and downward and leftward and rightward from the pupillary boundary, i.e. the inner boundary of the iris, and by finding out maximum values of differences in the pixel values.
- the detected maximum values are Max{I(x, y) − I(x−1, y)}, Max{I(x, y) − I(x+1, y)}, Max{I(x, y) − I(x, y−1)}, and Max{I(x, y) − I(x, y+1)}, where I(x, y) is the pixel value of the image at the point (x, y).
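The maximum-difference rule can be illustrated on a one-dimensional intensity profile sampled outward from the inner boundary; the function name and the use of absolute differences are our choices, not the patent's:

```python
def locate_outer_boundary(profile):
    # profile: pixel values sampled along one direction (e.g. leftward)
    # starting at the inner (pupillary) boundary. The outer boundary is
    # taken at the sample where the difference between two neighboring
    # pixel values is the maximum, per the rule in the text.
    best_i = 1
    best_d = abs(profile[1] - profile[0])
    for i in range(2, len(profile)):
        d = abs(profile[i] - profile[i - 1])
        if d > best_d:
            best_i, best_d = i, d
    return best_i
```

For a typical radial profile the iris-to-sclera transition produces the largest step, so the returned index marks the outer boundary pixel.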
- the differences in the pixel values are obtained while proceeding upward, downward, leftward and rightward from the inner boundary of the iris upon detection of the outer boundary so that the inner and outer centers can be set independently of each other. That is, when a slanted iris image is acquired, the pupil is located slightly upward, downward, leftward or rightward within the image, so the inner and outer centers can be set differently from each other.
- FIG. 2 c shows an iris image upon obtainment of the radius and center of the outer boundary of the iris according to one embodiment of the present invention.
- a process of setting the centers of the inner and outer boundaries of the iris is required. First, the distances R_L, R_R, R_U and R_D from the inner boundary to the left, right, upper and lower portions of the outer boundary, respectively, and the radius R_I of the inner boundary, i.e. the pupillary boundary, are calculated. Then, the center of the outer boundary is obtained by finding the bisection points, upward and downward and leftward and rightward of the image, using the calculated values.
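The bisection construction can be sketched as follows, assuming image coordinates with y increasing downward; the function name and argument order are ours:

```python
def outer_center(inner_center, r_inner, r_left, r_right, r_up, r_down):
    # r_left .. r_down are the distances R_L, R_R, R_U, R_D from the inner
    # boundary to the outer boundary; r_inner is the pupillary radius R_I.
    cx, cy = inner_center
    x_left = cx - r_inner - r_left    # leftmost outer-boundary pixel
    x_right = cx + r_inner + r_right  # rightmost outer-boundary pixel
    y_up = cy - r_inner - r_up        # topmost (y grows downward)
    y_down = cy + r_inner + r_down    # bottommost
    # The outer center is the bisection point of each pair.
    return ((x_left + x_right) / 2.0, (y_up + y_down) / 2.0)
```

With unequal left/right or up/down distances the outer center lands off the pupil center, which is exactly the slanted-image case the text describes.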
- iris patterns are detected only at predetermined portions of the distances from the inner boundary to the outer boundary.
- the detected iris pattern is converted into an iris image in the polar coordinates.
- the converted iris image in the polar coordinates is normalized to obtain an image having predetermined dimensions in its width and height.
- The conversion of the extracted iris patterns into the iris image in the polar coordinates can be expressed as the following Equation 3.
- ⁇ is increased by 0.8 degrees
- r is calculated by using the law of cosines (the "second cosine rule") from the distance between the outer center C_O and the inner center C_I of the iris, the radius R_O of the outer boundary, and the value of θ.
- the iris patterns between the inner and outer boundaries of the iris are extracted using the r and ⁇ .
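A sketch of that computation: the inner center C_I, the outer center C_O, and a boundary point at angle θ form a triangle with sides r, d = |C_O − C_I| and R_O, so the law of cosines gives R_O² = r² + d² − 2rd·cos(θ − φ), where φ (our added symbol) is the direction of C_O as seen from C_I. Solving the quadratic for r:

```python
import math

def outer_radius_at(theta, d, phi, r_outer):
    # Distance from the inner center to the outer boundary along angle
    # theta, given the outer circle (radius r_outer) whose center lies at
    # distance d in direction phi from the inner center.
    a = theta - phi
    return d * math.cos(a) + math.sqrt(r_outer ** 2 - (d * math.sin(a)) ** 2)
```

Sanity checks: with concentric circles (d = 0) the result is R_O for every angle, and looking straight toward the outer center (θ = φ) gives d + R_O.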
- FIG. 3( a ) shows the slanted iris image
- FIG. 3( b ) is the iris image in polar coordinates converted from the slanted iris image. It can be seen from FIG. 3( b ) that a lower portion of the converted iris image in the polar coordinates is curved with an irregular shape.
- FIG. 3( c ) shows an iris image having the dimensions of M pixels in width and N pixels in height, which is normalized from the irregular image of the iris patterns.
- the iris patterns existing at only a portion corresponding to X % of the distance between the inner and outer boundaries of the iris are taken in order to eliminate interference from the illuminator and acquire a large amount of iris patterns. That is, when the inner and outer boundaries of the iris are detected, the iris patterns are taken and then converted into those in the polar coordinates.
- iris patterns existing at only a portion corresponding to 60% of the distance from the inner boundary among the region from the inner boundary (pupillary boundary) of the iris to the outer boundary can be picked up and converted into those in the polar coordinates.
- the value of 60% selected in this embodiment of the present invention was experimentally determined as a range in which a greatest deal of iris patterns can be picked up while excluding the reflective light gathered on the iris.
- the slanted iris image is converted into the iris image in the polar coordinates.
- the lower portion of the converted iris pattern image in the polar coordinates is curved with an irregular shape.
- the irregular image of the iris patterns is normalized to obtain the iris image with the dimensions of M pixels in width and N pixels in height.
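The normalization step can be sketched as per-column resampling of the irregular polar strip to a fixed height; nearest-neighbour interpolation and the function name are our choices, since the text does not specify them:

```python
def normalize_strip(columns, height):
    # columns: one variable-length list of intensities per angular step of
    # the polar image. Each column is resampled to `height` samples, giving
    # the M (angles) x N (height) normalized iris image.
    out = []
    for col in columns:
        n = len(col)
        out.append([col[min(n - 1, i * n // height)] for i in range(height)])
    return out
```

Columns shorter than the target height are stretched and longer ones compressed, which removes the irregular curvature of the lower portion described above.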
- the performance of the iris recognition system is evaluated by two factors: a false acceptance rate (FAR) and a false rejection rate (FRR).
- the FAR means the probability that the iris recognition system incorrectly identifies an impostor as an enrollee and thus allows entrance of the impostor
- the FRR means the probability that the iris recognition system incorrectly identifies the enrollee as an impostor and thus rejects entrance to the enrollee.
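With these definitions, the FAR and FRR can be computed from match scores of impostor and genuine attempts against an acceptance threshold (a sketch; the convention that a higher score means more similar is our assumption):

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    # FAR: fraction of impostor attempts with a score at or above the
    # threshold (incorrectly accepted).
    # FRR: fraction of genuine attempts with a score below the threshold
    # (incorrectly rejected).
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```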
- when pre-processing employs the present method for detecting the boundaries of the iris and the normalization of the slanted iris image, the FAR was reduced from 5.5% to 2.83% and the FRR from 5.0% to 2.0%, as compared with an iris recognition system employing a conventional method for detecting the boundaries of the iris.
- step 160 if the iris in the acquired eye image has been rotated at an arbitrary angle with respect to a centerline of the iris, the arrays of pixels of the iris image information are moved and compared in order to correct the rotated iris image.
- FIGS. 4(a) and 4(b) show a rotated iris image resulting from the tilting of the user's head.
- the user's head may be tilted a little toward the left or right; if the iris image is acquired in this state, a rotated iris image is obtained as shown in FIG. 4(a). That is, if the eye image acquired at step 110 has been rotated at an arbitrary angle with respect to a centerline of the eye, a process of correcting the rotated image is required.
- FIG. 4( a ) shows the iris image rotated by about 15 degrees in a clockwise or counterclockwise direction with respect to the centerline of the eye.
- when the rotated iris image is converted into an image in the polar coordinates, the iris patterns in the converted image are shifted leftward or rightward as shown in FIG. 4(b), as compared with the normal iris pattern.
- FIGS. 5(a) and 5(b) show procedures of the process of correcting the rotated iris image shown in FIGS. 4(a) and 4(b).
- the process of correcting the rotated iris image, which has resulted from the tilting of the user's head, by comparing and moving the arrays of the iris image information will be described below with reference to FIGS. 5( a ) and ( b ).
- a plurality of arrays Array(n) of the iris image are temporarily generated by shifts of an arbitrary angle with respect to Array(0), the converted iris image in the polar coordinates. That is, by shifting the columns of Array(0) leftward or rightward, 20 arrays of image information, from Array(−1) to Array(−10) and from Array(1) to Array(10), are temporarily generated.
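The temporary generation of the shifted arrays can be sketched as cyclic column shifts of the polar image; wrap-around is our assumption, natural because the angular coordinate is periodic:

```python
def shifted_arrays(array0, max_shift=10):
    # array0: the columns of the polar-coordinate iris image Array(0), one
    # column per angular step. Returns Array(n) for n = -max_shift..max_shift,
    # where Array(n) is Array(0) with its columns rotated by n positions.
    w = len(array0)
    return {n: [array0[(j + n) % w] for j in range(w)]
            for n in range(-max_shift, max_shift + 1)}
```

With max_shift = 10 this yields the 20 shifted arrays Array(−10) to Array(−1) and Array(1) to Array(10), plus the unshifted Array(0).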
- wavelet transform is performed.
- the respective characteristic vectors generated by the wavelet transform are compared with previously registered characteristic vectors to obtain similarities.
- a characteristic vector corresponding to the maximum similarity among the obtained similarities is accepted as the characteristic vector of the user.
- the characteristic vectors f_T(n) of the iris corresponding to the temporarily generated plurality of arrays Array(n) of the iris image are then generated.
- the characteristic vectors f_T(n) are generated from f_T(0) to f_T(10) and from f_T(0) to f_T(−10).
- the respective generated characteristic vectors f_T(n) are compared with each of the characteristic vectors f_R of the enrollees, and thus similarities S_n are obtained.
- a characteristic vector f_T(n) corresponding to the maximum similarity among the obtained similarities S_n is considered as a result in which the rotation effect is corrected, and is accepted as the characteristic vector of the user's iris.
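The selection of the maximum-similarity vector can be sketched as follows; the normalized-correlation similarity is a stand-in of ours, as the text does not fix the metric:

```python
def similarity(a, b):
    # Normalized correlation between two characteristic vectors
    # (one plausible choice of similarity measure S_n).
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def best_match(candidates, enrolled):
    # candidates: {shift n: characteristic vector f_T(n)}. The vector with
    # the highest similarity to the enrolled vector f_R is accepted as the
    # rotation-corrected characteristic vector of the user's iris.
    n = max(candidates, key=lambda k: similarity(candidates[k], enrolled))
    return n, candidates[n]
```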
- according to the non-contact type human iris recognition method with correction of the rotated iris image of one embodiment of the present invention, detecting the inner and outer boundaries of the iris using a Canny edge detector and differences in pixel values of the image allows the boundaries of the iris to be detected more accurately from the eye image of the user.
- according to the non-contact type human iris recognition method of one embodiment of the present invention, if the iris in the eye image acquired by the image acquisition equipment has been rotated at an arbitrary angle with respect to the centerline of the iris, the rotated image is corrected into a normal iris image. Likewise, if a lower portion of the converted iris image in the polar coordinates is curved and thus has an irregular shape due to acquisition of a slanted iris image, the iris image is normalized to predetermined dimensions.
- the iris image with a variety of deformation is processed into data on a correct iris image so as to markedly reduce a false acceptance rate and a false rejection rate.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Ophthalmology & Optometry (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
- Image Input (AREA)
- Closed-Circuit Television Systems (AREA)
- Image Processing (AREA)
Abstract
Disclosed is an iris recognition method, which is one of the biometric technologies. According to a non-contact-type human iris recognition method with correction of a rotated iris image, the iris image is acquired by image acquisition equipment using an infrared illuminator. Inner and outer boundaries of the iris are detected by using a Canny edge detector and analyzing differences in pixel values of the input iris image, so that the boundaries of the iris are more accurately detected from the eye image of a user. Thus, an iris image with a variety of deformations can be processed into a correct iris image, markedly reducing the false acceptance rate and the false rejection rate.
Description
- This application is a continuation application of Application No. 10/656,921, which is a continuation application under 35 U.S.C. §365 (c) claiming the benefit of the filing date of PCT Application No. PCT/KR01/01302 designating the United States, filed Jul. 1, 2001. The PCT Application was published in English as WO 02/071316 A1 on Sep. 12, 2002, and claims the benefit of the earlier filing date of Korean Patent Application No. 2001/11441, filed Mar. 6, 2001. The contents of the Korean Patent Application No. 2001/11441, the international application No. PCT/KR01/01302 including the publication WO 02/071316 A1 and application Ser. No. 10/656,921 are incorporated herein by reference in their entirety.
- 1. Field
- The present disclosure relates to processing iris image data, and more particularly, to a method of identifying an outer boundary of an iris image.
- 2. Discussion of the Related Art
- The foregoing discussion in the background section is to provide general background information and does not constitute an admission of prior art.
- One aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imagery line passing the center; computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel; computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.
- In the foregoing method, computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels. The method may further comprise computing a first distance between the first inner boundary pixel and the first outer boundary pixel. The method may further comprise obtaining a distance between the first inner boundary pixel and the second inner boundary pixel. Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel. Computing the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel. The center of outer boundary may be off the center of the inner boundary.
- Still in the foregoing method, the method may further comprise: providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein portion to obtain a characteristic vector of a an iris pattern. Processing the data may further comprise obtaining a plurality of characteristic vectors for the same eye image.
- Another aspect of the invention provides a method of processing iris image data, which comprises: providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary; providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary; computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel; computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel; computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel; computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and using the first, second, third and fourth outer boundary pixels for further processing.
- In the foregoing method, the method may further comprise computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels. The first imaginary line may be substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line may be substantially perpendicular to the third and fourth imaginary lines. A pixel located on the first imaginary line may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.
-
FIG. 1 is a flowchart explaining the procedures of a normalization process of an iris image according to one embodiment of the present invention. -
FIG. 2 a is a view showing a result of detection of a pupillary boundary using a Canny edge detector. -
FIG. 2 b is a view showing center coordinates and diameter of a pupil. -
FIG. 2 c shows an iris image upon obtainment of a radius and center of an outer boundary of an iris according to one embodiment of the present invention. - In the foregoing method, the method may further comprise: providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center; computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel, wherein computing to locate the center of the outer boundary further uses the third outer boundary pixel and the fourth outer boundary pixel. The second imaginary line may be substantially perpendicular to the first imaginary line. Computing to locate the center of the outer boundary may comprise computing a bisectional point of the first and second outer boundary pixels and a bisectional point of the third and fourth outer boundary pixels. Computing to locate the center of the outer boundary may use a first distance defined between the first inner boundary pixel and the first outer boundary pixel, a second distance defined between the second inner boundary pixel and the second outer boundary pixel, a third distance defined between the third inner boundary pixel and the third outer boundary pixel and a fourth distance defined between the fourth inner boundary pixel and the fourth outer boundary pixel. Computing to locate the center of the outer boundary may further use a distance between the first inner boundary pixel and the second inner boundary pixel.
- Further in the foregoing method, the first imaginary line may comprise a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment may be determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, and wherein a pixel located on the second line segment may be determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment. Providing information indicative of a center of the inner boundary may comprise performing a Canny edge detection method using the eye image data. The method may further comprise: extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and processing the data of the portion to obtain a characteristic vector of an iris pattern.
-
FIGS. 3( a) to (d) show the procedures of the normalization process of a slanted iris image. -
FIGS. 4( a) and (b) show a rotated iris image resulting from the tilting of the user's head. -
FIGS. 5( a) and (b) show procedures of a correction process of the rotated iris image shown in FIGS. 4( a) and (b). - Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings.
-
FIG. 1 is a flowchart explaining procedures of a normalization process of an iris image according to one embodiment of the present invention. Referring to FIG. 1, at step 110, an eye image is acquired by image acquisition equipment using an infrared illuminator and a visible light rejection filter. At this time, reflective light is caused to be gathered in the pupil of the eye so that information on the iris image is not lost. At step 120, inner and outer boundaries of the iris are detected in order to extract only an iris region from the acquired eye image, and the centers of the detected inner and outer boundaries are set. Step 120 is performed by a method for detecting the inner and outer boundaries of the iris using a Canny edge detector and differences in pixel values of the image according to one embodiment of the present invention, which will be specifically explained below. -
FIG. 2 a is a view showing a result of detection of a pupillary boundary, i.e. the inner boundary of the iris, using the Canny edge detector. Referring to FIG. 2 a, it is noted that only the pupillary boundary is detected by employing the Canny edge detector. That is, as shown in FIG. 2 a, the inner boundary of the iris is detected by using the Canny edge detector, which is a kind of boundary-detecting filter. The Canny edge detector smoothes an acquired image by using Gaussian filtering and then detects a boundary by using a Sobel operator. The Gaussian filtering process can be expressed as the following Equation 1, and the Sobel operator used can be expressed as the following Equation 2. -
I_G(x, y) = G(x, y) ∗ I(x, y) [Equation 1], where ∗ denotes two-dimensional convolution of the Gaussian kernel G with the image I. -
- In a case where the boundary detecting method employing the Canny edge detector is used, even though a normal eye image is not acquired since the eye of a user is not directed toward the front face of a camera but positioned at a slight angle with respect to the camera, the inner boundary of the iris, i.e. the pupillary boundary, can be correctly detected, and the center coordinates and radius of the pupil can also be easily obtained.
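For illustration only, the smoothing and gradient steps of Equations 1 and 2 can be sketched in Python with NumPy. The function names, the 5×5 kernel size, and the edge-replicating padding (used so the image border does not produce spurious gradients) are choices of this sketch, not of the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel G (cf. Equation 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def convolve2d(image, kernel):
    """Plain 'same'-size 2-D convolution with edge-replicating padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    flipped = kernel[::-1, ::-1]          # flip kernel: true convolution
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Standard Sobel kernels (cf. Equation 2 in the text).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(image, sigma=1.0):
    """Smooth with the Gaussian, then take the Sobel gradient magnitude."""
    smoothed = convolve2d(image, gaussian_kernel(5, sigma))
    gx = convolve2d(smoothed, SOBEL_X)
    gy = convolve2d(smoothed, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a synthetic dark disk, the gradient magnitude peaks along the disk boundary, which is how the dark pupil presents itself to the Canny detector.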
FIG. 2 b shows the center coordinates and diameter of the pupil. Referring to FIG. 2 b, the pupil's radius is d/2, and the pupil's center coordinates are (x+d/2, y+d/2). - On the other hand, the outer boundary of the iris in the image can be detected by comparing pixel values while proceeding upward, downward, leftward and rightward from the pupillary boundary, i.e. the inner boundary of the iris, and by finding the maximum values of the differences in the pixel values. The detected maximum values are Max{I(x, y)−I(x−1, y)}, Max{I(x, y)−I(x+1, y)}, Max{I(x, y)−I(x, y−1)}, and Max{I(x, y)−I(x, y+1)}, where I(x, y) is the pixel value of the image at the point (x, y). The reason why the differences in the pixel values are obtained while proceeding upward, downward, leftward and rightward from the inner boundary of the iris upon detection of the outer boundary is to allow the inner and outer centers to be set differently from each other. That is, in a case where a slanted iris image is acquired, since the pupil is located a little upward, downward, leftward or rightward in the image, the inner and outer centers can be set differently from each other.
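A minimal sketch of this four-direction search, assuming a grayscale NumPy image with (row, column) indexing; for simplicity the scan starts at the inner center rather than at the inner boundary pixels, and all names and the synthetic layout are hypothetical:

```python
import numpy as np

def max_difference_pixel(image, start, step):
    """Walk from `start` (row, col) in direction `step` and return the pixel
    at which the difference between two neighboring pixel values, e.g.
    I(x, y) - I(x+1, y), is largest -- the outer boundary pixel."""
    y, x = start
    dy, dx = step
    best, best_pos = -1.0, start
    while 0 <= y + dy < image.shape[0] and 0 <= x + dx < image.shape[1]:
        diff = abs(float(image[y, x]) - float(image[y + dy, x + dx]))
        if diff > best:
            best, best_pos = diff, (y, x)
        y, x = y + dy, x + dx
    return best_pos

def outer_center(image, inner_center):
    """Find the four outer boundary pixels by scanning up, down, left and
    right, then take the bisection points of the up/down and left/right hits
    to estimate the center of the outer boundary (cf. FIG. 2c)."""
    up    = max_difference_pixel(image, inner_center, (-1, 0))
    down  = max_difference_pixel(image, inner_center, (1, 0))
    left  = max_difference_pixel(image, inner_center, (0, -1))
    right = max_difference_pixel(image, inner_center, (0, 1))
    return ((up[0] + down[0]) / 2.0, (left[1] + right[1]) / 2.0)
```

Because the left/right and up/down scans are independent, the estimated outer center need not coincide with the pupil center, which is exactly the point of the method for slanted images.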
-
FIG. 2 c shows an iris image upon obtainment of the radius and center of the outer boundary of the iris according to one embodiment of the present invention. In a case where an incomplete eye image is acquired since the eye is not directed toward the front face of the camera but positioned at a slight angle with respect to the camera, a process of setting the centers of the inner/outer boundaries of the iris is required. First, the values of the distances RL, RR, RU and RD from the inner boundary to the left, right, upper and lower portions of the outer boundary, respectively, and the value of the radius RI of the inner boundary, i.e. the pupillary boundary, are calculated. Then, the center of the outer boundary is obtained by finding bisection points upward and downward and leftward and rightward of the image using the above calculated values. - At
step 130, iris patterns are detected only at predetermined portions of the distances from the inner boundary to the outer boundary. At step 140, the detected iris pattern is converted into an iris image in polar coordinates. At step 150, the converted iris image in polar coordinates is normalized to obtain an image having predetermined dimensions in its width and height. - The conversion of the extracted iris patterns into the iris image in polar coordinates can be expressed as the following Equation 3.
-
I(x(r, θ), y(r, θ)) → I(r, θ) [Equation 3] - where θ is increased by 0.8 degrees, and r is calculated by using the second cosine rule (law of cosines) from the distance between the outer center CO and the inner center CI of the iris, the radius RO of the outer boundary, and the value of θ. The iris patterns between the inner and outer boundaries of the iris are extracted using r and θ. In order to avoid changes in the features of the iris according to variations in the size of the pupil, the iris image between the inner and outer boundaries of the iris is divided into 60 radial segments while θ is varied by 0.8 degrees to represent 450 angular samples, so that the iris image is finally normalized into a 27,000-pixel iris image (θ×r = 450×60).
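A sketch of this conversion, assuming (x, y) image coordinates with the inner center as the pole and any callable image sampler; the function names and the exposed `fraction` parameter (the 60% portion discussed in connection with FIG. 3) are choices of this sketch, not the patent's exact computation:

```python
import math

def outer_radius_along(theta, inner_center, outer_center, RO):
    """Distance r from the inner (pupil) center to the outer iris boundary
    along direction theta, via the law of cosines: with d the distance
    between the two centers and phi the angle between the ray and the
    center line, RO^2 = r^2 + d^2 - 2*r*d*cos(phi)."""
    dx = outer_center[0] - inner_center[0]
    dy = outer_center[1] - inner_center[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return RO
    phi = theta - math.atan2(dy, dx)
    return d * math.cos(phi) + math.sqrt(RO ** 2 - (d * math.sin(phi)) ** 2)

def unwrap(image_value, inner_center, outer_center, RI, RO,
           n_theta=450, n_r=60, fraction=0.6):
    """Sample the iris ring into an n_theta x n_r polar array (450 x 60 in
    the text), taking only `fraction` of the pupil-to-outer-boundary
    distance. `image_value(x, y)` returns a pixel value."""
    out = [[0.0] * n_r for _ in range(n_theta)]
    for i in range(n_theta):
        theta = math.radians(i * 0.8)       # theta advances by 0.8 degrees
        r_out = outer_radius_along(theta, inner_center, outer_center, RO)
        for j in range(n_r):
            # radial sample between the inner boundary and 60% of the ring
            r = RI + (r_out - RI) * fraction * (j + 0.5) / n_r
            x = inner_center[0] + r * math.cos(theta)
            y = inner_center[1] + r * math.sin(theta)
            out[i][j] = image_value(x, y)
    return out
```

When the two centers coincide (d = 0), r reduces to RO for every θ; when they differ, the sampled ring stretches and shrinks with θ, which produces the curved lower portion seen in FIG. 3(b).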
-
FIG. 3( a) shows the slanted iris image, and FIG. 3( b) is the iris image in polar coordinates converted from the slanted iris image. It can be seen from FIG. 3( b) that a lower portion of the converted iris image in polar coordinates is curved with an irregular shape. In addition, FIG. 3( c) shows an iris image having the dimensions of M pixels in width and N pixels in height, which is normalized from the irregular image of the iris patterns. Hereinafter, the normalization process of the slanted iris image will be described with reference to FIGS. 3( a) to (c). Within the portion corresponding to the distance between the inner and outer boundaries of the iris in FIG. 3( a), only the iris patterns existing at a portion corresponding to X % of that distance are taken, in order to eliminate interference from the illuminator while acquiring a large amount of iris patterns. That is, when the inner and outer boundaries of the iris are detected, the iris patterns are taken and then converted into polar coordinates. However, in a case where reflective light from the illuminator is gathered on the iris, only the iris patterns existing at a portion corresponding to 60% of the distance from the inner boundary (pupillary boundary) of the iris to the outer boundary can be picked up and converted into polar coordinates. The value of 60% selected in this embodiment of the present invention was experimentally determined as the range in which the greatest amount of iris patterns can be picked up while excluding the reflective light gathered on the iris. - In
FIG. 3( b), the slanted iris image is converted into the iris image in polar coordinates. As shown in FIG. 3( b), when the iris patterns are converted into polar coordinates, the lower portion of the converted iris pattern image is curved with an irregular shape. Thus, it is necessary to normalize the irregular iris pattern image. In FIG. 3( c), the irregular image of the iris patterns is normalized to obtain the iris image with the dimensions of M pixels in width and N pixels in height. - For reference, the performance of the iris recognition system is evaluated by two factors: the false acceptance rate (FAR) and the false rejection rate (FRR). The FAR is the probability that the iris recognition system incorrectly identifies an impostor as an enrollee and thus allows entrance of the impostor, and the FRR is the probability that the system incorrectly identifies an enrollee as an impostor and thus rejects entrance of the enrollee. According to one embodiment of the present invention, when preprocessing is performed by employing the method for detecting the boundaries of the iris and the normalization of the slanted iris image, the FAR was reduced from 5.5% to 2.83% and the FRR was reduced from 5.0% to 2.0% as compared with an iris recognition system employing a conventional method for detecting the boundaries of the iris.
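As a quick illustration of how these two rates are computed, assuming a simple similarity threshold rule (the thresholding itself is an assumption of this sketch, not stated in the text):

```python
def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR: share of impostor similarity scores at/above the acceptance
    threshold (impostors wrongly accepted).
    FRR: share of genuine (enrollee) scores below it (wrongly rejected)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Raising the threshold trades FAR against FRR, which is why both figures are reported together.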
- Finally, at
step 160, if the iris in the acquired eye image has been rotated at an arbitrary angle with respect to a centerline of the iris, the arrays of pixels of the iris image information are moved and compared in order to correct the rotated iris image. -
FIGS. 4( a) and (b) show a rotated iris image resulting from the tilting of the user's head. Upon acquisition of an iris image, the user's head may be tilted a little toward the left or right; if the iris image is acquired in this state, a rotated iris image is obtained as shown in FIG. 4( a). That is, if the eye image acquired at step 110 has been rotated at an arbitrary angle with respect to the centerline of the eye, a process of correcting the rotated image is required. FIG. 4( a) shows the iris image rotated by about 15 degrees in a clockwise or counterclockwise direction with respect to the centerline of the eye. When the rotated iris image is converted into an image in polar coordinates, the iris patterns in the converted image are shifted leftward or rightward as shown in FIG. 4( b), as compared with the normal iris pattern. -
FIGS. 5( a) and (b) show procedures of the process of correcting the rotated iris image shown in FIGS. 4( a) and (b). The process of correcting the rotated iris image, which has resulted from the tilting of the user's head, by comparing and shifting the arrays of the iris image information will be described below with reference to FIGS. 5( a) and (b). - Referring to
FIG. 5( a), from the rotated iris image resulting from the tilting of the user's head, a plurality of arrays Array(n) of the iris image are temporarily generated by means of shifts by an arbitrary angle with respect to the Array(0) of the converted iris image in polar coordinates. That is, by shifting the columns of Array(0) leftward or rightward, 20 arrays of image information, from Array(0) to Array(−10) and from Array(0) to Array(10), are temporarily generated. - In order to generate characteristic vectors of the iris corresponding to the plurality of arrays of the iris image that have been temporarily generated, a wavelet transform is performed. The respective characteristic vectors generated by the wavelet transform are compared with previously registered characteristic vectors to obtain similarities. The characteristic vector corresponding to the maximum similarity among the obtained similarities is accepted as the characteristic vector of the user.
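A sketch of the temporary array generation, assuming the polar image is stored with θ along the columns so that a head tilt becomes a cyclic column shift (the columns wrap around, since θ is periodic); names are illustrative:

```python
import numpy as np

def shifted_arrays(array0, max_shift=10):
    """Temporarily generate Array(-max_shift) .. Array(max_shift) by
    shifting the columns of Array(0), the polar-coordinate iris image,
    leftward and rightward. Returns a dict keyed by the shift n."""
    return {n: np.roll(array0, n, axis=1)
            for n in range(-max_shift, max_shift + 1)}
```

With θ sampled every 0.8 degrees, each one-column shift corresponds to a 0.8-degree head rotation, so the search range covers roughly ±8 degrees of tilt per ten shifts.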
- In other words, by generating the arrays Array(n) of image information on the rotated image as mentioned above and performing the wavelet transform for the respective arrays of the image information as shown
FIG. 5( b), the characteristic vectors fT(n) of the iris corresponding to the temporarily generated plurality of arrays Array(n) of the iris image are the generated. The characteristic vectors fT(n) are generated from fT(0) to fT(10) and from fT(0) to fT(−10). The respective generated characteristic vectors fT(n) are compared with each of the characteristic vectors fR of the enrollees and thus similarities Sn are obtained. A characteristic vector fT(n) corresponding to the maximum similarity among the obtained similarities Sn is considered as a resulted value in which the rotation effect is corrected, and is accepted as the characteristic vector of the user's iris. - As described above, according to the non-contact type human iris recognition method by the correction of the rotated iris image of one embodiment of the present invention, there is an advantage in that by detecting the inner and outer boundaries of the iris using the differences in pixels of the Canny edge detector and the image, the boundaries of the iris can be more correctly detected from the eye image of the user.
- Furthermore, according to the non-contact type human iris recognition method of one embodiment of the present invention, if the iris in the eye image acquired by the image acquisition equipment has been rotated at an arbitrary angle with respect to the centerline of the iris, the rotated image is corrected into the normal iris image. Likewise, if a lower portion of the converted iris image in polar coordinates is curved and thus has an irregular shape due to the acquisition of a slanted iris image, the iris image is normalized to predetermined dimensions. Thus, there is another advantage in that an iris image with a variety of deformations is processed into data on a correct iris image, so as to markedly reduce the false acceptance rate and the false rejection rate.
- It should be noted that the above description exemplifies the non-contact type human iris recognition method by the correction of the rotated iris image according to embodiments of the present invention, and the present invention is not limited thereto. A person skilled in the art can make various modifications and changes to the embodiments of the present invention without departing from the technical spirit and scope of the present invention defined by the appended claims.
Claims (20)
1. A method of processing iris image data, comprising:
providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary;
providing information indicative of a center of the inner boundary, a first inner boundary pixel and a second inner boundary pixel, wherein the first and second inner boundary pixels are located on a first imaginary line passing the center;
computing to locate a first outer boundary pixel on the first imaginary line extending outwardly from the first inner boundary pixel;
computing to locate a second outer boundary pixel on the first imaginary line extending outwardly from the second inner boundary pixel; and
computing to locate a center of the outer boundary using the first outer boundary pixel and the second outer boundary pixel.
2. The method of claim 1 , wherein computing to locate the center of the outer boundary comprises computing a bisectional point of the first and second outer boundary pixels.
3. The method of claim 1 , further comprising computing a first distance between the first inner boundary pixel and the first outer boundary pixel.
4. The method of claim 1 , further comprising obtaining a distance between the first inner boundary pixel and the second inner boundary pixel.
5. The method of claim 1 , wherein computing to locate the center of the outer boundary uses a first distance defined between the first inner boundary pixel and the first outer boundary pixel and a second distance defined between the second inner boundary pixel and the second outer boundary pixel.
6. The method of claim 5 , wherein computing the center of the outer boundary further uses a distance between the first inner boundary pixel and the second inner boundary pixel.
7. The method of claim 1 , wherein the center of the outer boundary is offset from the center of the inner boundary.
8. The method of claim 1 , further comprising:
providing information indicative of a third inner boundary pixel and a fourth inner boundary pixel, wherein the third and fourth inner boundary pixels are located on a second imaginary line passing the center;
computing to locate a third outer boundary pixel on the second imaginary line extending outwardly from the third inner boundary pixel; and
computing to locate a fourth outer boundary pixel on the second imaginary line extending outwardly from the fourth inner boundary pixel,
wherein computing to locate the center of the outer boundary further uses the third outer boundary pixel and the fourth outer boundary pixel.
9. The method of claim 8 , wherein the second imaginary line is substantially perpendicular to the first imaginary line.
10. The method of claim 8 , wherein computing to locate the center of the outer boundary comprises computing a bisectional point of the first and second outer boundary pixels and a bisectional point of the third and fourth outer boundary pixels.
11. The method of claim 8 , wherein computing to locate the center of the outer boundary uses a first distance defined between the first inner boundary pixel and the first outer boundary pixel, a second distance defined between the second inner boundary pixel and the second outer boundary pixel, a third distance defined between the third inner boundary pixel and the third outer boundary pixel and a fourth distance defined between the fourth inner boundary pixel and the fourth outer boundary pixel.
12. The method of claim 11 , wherein computing to locate the center of the outer boundary further uses a distance between the first inner boundary pixel and the second inner boundary pixel.
13. The method of claim 1 , wherein the first imaginary line comprises a first line segment extending outwardly from the first inner boundary pixel and a second line segment extending outwardly from the second inner boundary pixel, wherein a pixel located on the first line segment is determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first line segment becomes the maximum among differences of the image information between two neighboring pixels located on the first line segment, wherein a pixel located on the second line segment is determined to be the second outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the second line segment becomes the maximum among differences of the image information between two neighboring pixels located on the second line segment.
14. The method of claim 1 , wherein providing information indicative of a center of the inner boundary comprises performing a Canny edge detection method using the eye image data.
15. The method of claim 1 , further comprising:
extracting data of a portion of the eye image that is located between the inner boundary and the outer boundary; and
processing the data of the portion to obtain a characteristic vector of an iris pattern.
16. The method of claim 15 , wherein processing the data further comprises obtaining a plurality of characteristic vectors for the same eye image.
17. A method of processing iris image data, comprising:
providing data of an eye image comprising an iris defined between an inner boundary and an outer boundary;
providing information indicative of a first inner boundary pixel, a second inner boundary pixel, a third inner boundary pixel and a fourth inner boundary pixel, which are located at different positions on the inner boundary;
computing to locate a first outer boundary pixel on a first imaginary line extending generally radially from the first inner boundary pixel;
computing to locate a second outer boundary pixel on a second imaginary line extending generally radially from the second inner boundary pixel;
computing to locate a third outer boundary pixel on a third imaginary line extending generally radially from the third inner boundary pixel;
computing to locate a fourth outer boundary pixel on a fourth imaginary line extending generally radially from the fourth inner boundary pixel; and
using the first, second, third and fourth outer boundary pixels for further processing.
18. The method of claim 17 , further comprising computing to locate a center of the outer boundary using the first, second, third and fourth outer boundary pixels.
19. The method of claim 17 , wherein the first imaginary line is substantially perpendicular to the third and fourth imaginary lines, and wherein the second imaginary line is substantially perpendicular to the third and fourth imaginary lines.
20. The method of claim 17 , wherein a pixel located on the first imaginary line is determined to be the first outer boundary pixel when the difference of the image information between the pixel and its neighboring pixel which are located on the first imaginary line becomes the maximum among differences of the image information between two neighboring pixels located on the first imaginary line.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/933,752 US20080159600A1 (en) | 2001-03-06 | 2007-11-01 | Iris image data processing for use with iris recognition system |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2001-011441 | 2001-03-06 | ||
KR10-2001-0011441A KR100374708B1 (en) | 2001-03-06 | 2001-03-06 | Non-contact type human iris recognition method by correction of rotated iris image |
PCT/KR2001/001302 WO2002071316A1 (en) | 2001-03-06 | 2001-07-31 | Non-contact type human iris recognition method by correction of rotated iris image |
US10/656,921 US7298874B2 (en) | 2001-03-06 | 2003-09-05 | Iris image data processing for use with iris recognition system |
US11/933,752 US20080159600A1 (en) | 2001-03-06 | 2007-11-01 | Iris image data processing for use with iris recognition system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/656,921 Continuation US7298874B2 (en) | 2001-03-06 | 2003-09-05 | Iris image data processing for use with iris recognition system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080159600A1 true US20080159600A1 (en) | 2008-07-03 |
Family
ID=36754301
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/017,118 Abandoned US20020154794A1 (en) | 2001-03-06 | 2001-12-07 | Non-contact type human iris recognition method for correcting a rotated iris image |
US10/656,921 Expired - Fee Related US7298874B2 (en) | 2001-03-06 | 2003-09-05 | Iris image data processing for use with iris recognition system |
US11/933,752 Abandoned US20080159600A1 (en) | 2001-03-06 | 2007-11-01 | Iris image data processing for use with iris recognition system |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/017,118 Abandoned US20020154794A1 (en) | 2001-03-06 | 2001-12-07 | Non-contact type human iris recognition method for correcting a rotated iris image |
US10/656,921 Expired - Fee Related US7298874B2 (en) | 2001-03-06 | 2003-09-05 | Iris image data processing for use with iris recognition system |
Country Status (6)
Country | Link |
---|---|
US (3) | US20020154794A1 (en) |
EP (1) | EP1374144A4 (en) |
JP (2) | JP2004527033A (en) |
KR (1) | KR100374708B1 (en) |
CN (1) | CN1255756C (en) |
WO (1) | WO2002071316A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9633259B2 (en) | 2014-10-10 | 2017-04-25 | Hyundai Motor Company | Apparatus and method for recognizing iris |
US9684828B2 (en) | 2013-07-04 | 2017-06-20 | Samsung Electronics Co., Ltd | Electronic device and eye region detection method in electronic device |
US10185875B2 (en) | 2013-10-04 | 2019-01-22 | Casio Computer Co., Ltd. | Image processing device, image display device, image processing method, and medium |
US10383515B2 (en) | 2016-12-07 | 2019-08-20 | 3E Co., Ltd. | Method of detecting pupil center |
Families Citing this family (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100374708B1 (en) * | 2001-03-06 | 2003-03-04 | 에버미디어 주식회사 | Non-contact type human iris recognition method by correction of rotated iris image |
DE10140152A1 (en) * | 2001-08-16 | 2003-03-06 | Kurt Staehle | Procedure for creating and evaluating a medical database |
KR20030051963A (en) * | 2001-12-20 | 2003-06-26 | 엘지전자 주식회사 | Detection method of iris rotation data for iris recognition system |
KR100476406B1 (en) * | 2002-12-03 | 2005-03-17 | 이일병 | Iris identification system and method using wavelet packet transformation, and storage media having program thereof |
JP2004260472A (en) * | 2003-02-25 | 2004-09-16 | Toshiba Tec Corp | Image formation device and its method |
US7436986B2 (en) * | 2003-03-25 | 2008-10-14 | Bausch & Lomb Incorporated | Positive patient identification |
US7599524B2 (en) * | 2003-04-04 | 2009-10-06 | Sarnoff Corporation | Method and apparatus for providing a robust object finder |
CN100571624C (en) * | 2003-04-11 | 2009-12-23 | 博士伦公司 | Be used to obtain the system and method for data and aligning and tracking eyes |
EP1647936B1 (en) * | 2003-07-17 | 2012-05-30 | Panasonic Corporation | Iris code generation method, individual authentication method, iris code entry device, individual authentication device, and individual certification program |
US7933507B2 (en) | 2006-03-03 | 2011-04-26 | Honeywell International Inc. | Single lens splitter camera |
US8442276B2 (en) | 2006-03-03 | 2013-05-14 | Honeywell International Inc. | Invariant radial iris segmentation |
US8098901B2 (en) * | 2005-01-26 | 2012-01-17 | Honeywell International Inc. | Standoff iris recognition system |
US7593550B2 (en) | 2005-01-26 | 2009-09-22 | Honeywell International Inc. | Distance iris recognition |
US8064647B2 (en) | 2006-03-03 | 2011-11-22 | Honeywell International Inc. | System for iris detection tracking and recognition at a distance |
US8090157B2 (en) | 2005-01-26 | 2012-01-03 | Honeywell International Inc. | Approaches and apparatus for eye detection in a digital image |
US7756301B2 (en) * | 2005-01-26 | 2010-07-13 | Honeywell International Inc. | Iris recognition system and method |
US8705808B2 (en) | 2003-09-05 | 2014-04-22 | Honeywell International Inc. | Combined face and iris recognition system |
FR2864290B1 (en) * | 2003-12-18 | 2006-05-26 | Sagem | METHOD AND DEVICE FOR RECOGNIZING IRIS |
EP1779064A4 (en) * | 2004-08-09 | 2009-11-04 | Classifeye Ltd | Non-contact optical means and method for 3d fingerprint recognition |
JP4507082B2 (en) * | 2004-08-19 | 2010-07-21 | ノーリツ鋼機株式会社 | Catch light synthesis method |
SE0402749D0 (en) * | 2004-11-11 | 2004-11-11 | Snabbfoto Invest Ab | photo Station |
KR100629550B1 (en) * | 2004-11-22 | 2006-09-27 | 아이리텍 잉크 | Multiscale Variable Domain Decomposition Method and System for Iris Identification |
FR2881546B1 (en) * | 2005-01-31 | 2007-09-14 | Sagem | METHOD FOR DETERMINING A REFERENCE AXIS OF AN EYE |
JP2007011667A (en) * | 2005-06-30 | 2007-01-18 | Matsushita Electric Ind Co Ltd | Iris authentication device and iris authentication method |
JP4664147B2 (en) | 2005-07-29 | 2011-04-06 | 株式会社山武 | Iris authentication device |
KR200404650Y1 (en) * | 2005-08-02 | 2005-12-27 | 주식회사 큐리텍 | Mouse having iris identification system |
US7751598B2 (en) * | 2005-08-25 | 2010-07-06 | Sarnoff Corporation | Methods and systems for biometric identification |
US8260008B2 (en) | 2005-11-11 | 2012-09-04 | Eyelock, Inc. | Methods for performing biometric recognition of a human eye and corroboration of same |
EP1991947B1 (en) | 2006-03-03 | 2020-04-29 | Gentex Corporation | Indexing and database search system |
JP4738488B2 (en) | 2006-03-03 | 2011-08-03 | ハネウェル・インターナショナル・インコーポレーテッド | Iris recognition system with image quality metric |
WO2007101275A1 (en) | 2006-03-03 | 2007-09-07 | Honeywell International, Inc. | Camera with auto-focus capability |
WO2008019168A2 (en) | 2006-03-03 | 2008-02-14 | Honeywell International, Inc. | Modular biometrics collection system architecture |
KR101299074B1 (en) | 2006-03-03 | 2013-08-30 | 허니웰 인터내셔널 인코포레이티드 | Iris encoding system |
US8170293B2 (en) * | 2006-09-15 | 2012-05-01 | Identix Incorporated | Multimodal ocular biometric system and methods |
US7970179B2 (en) | 2006-09-25 | 2011-06-28 | Identix Incorporated | Iris data extraction |
ES2276637B1 (en) * | 2006-11-03 | 2008-11-16 | Jose Antonio Gil Soldevilla | COLOR OBTAINING PROCEDURE FROM IRIS. |
US8063889B2 (en) | 2007-04-25 | 2011-11-22 | Honeywell International Inc. | Biometric data collection system |
JP4974761B2 (en) * | 2007-05-25 | 2012-07-11 | ローレル精機株式会社 | Safety management system |
IL184399A0 (en) * | 2007-07-03 | 2007-10-31 | Yossi Tsuria | Content delivery system |
KR100924232B1 (en) * | 2007-12-10 | 2009-11-02 | 아이리텍 잉크 | Weighted Pixel Interpolation Method for Rectilinear and Polar Image Conversion |
US8436907B2 (en) | 2008-05-09 | 2013-05-07 | Honeywell International Inc. | Heterogeneous video capturing system |
US8213782B2 (en) | 2008-08-07 | 2012-07-03 | Honeywell International Inc. | Predictive autofocusing system |
US8090246B2 (en) | 2008-08-08 | 2012-01-03 | Honeywell International Inc. | Image acquisition system |
US20100278394A1 (en) * | 2008-10-29 | 2010-11-04 | Raguin Daniel H | Apparatus for Iris Capture |
US8317325B2 (en) * | 2008-10-31 | 2012-11-27 | Cross Match Technologies, Inc. | Apparatus and method for two eye imaging for iris identification |
US8280119B2 (en) | 2008-12-05 | 2012-10-02 | Honeywell International Inc. | Iris recognition system using quality metrics |
WO2010129074A1 (en) * | 2009-01-14 | 2010-11-11 | Indiana University Research & Technology Corporation | System and method for identifying a person with reference to a sclera image |
US8630464B2 (en) | 2009-06-15 | 2014-01-14 | Honeywell International Inc. | Adaptive iris matching using database indexing |
US8472681B2 (en) | 2009-06-15 | 2013-06-25 | Honeywell International Inc. | Iris and ocular recognition system using trace transforms |
US20110040740A1 (en) * | 2009-08-15 | 2011-02-17 | Alex Nugent | Search engine utilizing flow networks |
US10216995B2 (en) | 2009-09-25 | 2019-02-26 | International Business Machines Corporation | System and method for generating and employing short length iris codes |
US8742887B2 (en) | 2010-09-03 | 2014-06-03 | Honeywell International Inc. | Biometric visitor check system |
DE102010054168B4 (en) | 2010-12-12 | 2017-09-07 | Chronos Vision Gmbh | Method, device and program for determining the torsional component of the eye position |
KR101492933B1 (en) * | 2014-05-23 | 2015-02-12 | 동국대학교 산학협력단 | Apparatus and method for periocular recognition |
EP3198913A4 (en) | 2014-09-24 | 2018-05-23 | Princeton Identity, Inc. | Control of wireless communication device capability in a mobile device with a biometric key |
CA2969331A1 (en) | 2014-12-03 | 2016-06-09 | Princeton Identity, Inc. | System and method for mobile device biometric add-on |
WO2017123702A1 (en) | 2016-01-12 | 2017-07-20 | Princeton Identity, Inc. | Systems and methods of biometric analysis |
US10373008B2 (en) | 2016-03-31 | 2019-08-06 | Princeton Identity, Inc. | Systems and methods of biometric analysis with adaptive trigger |
WO2017173228A1 (en) | 2016-03-31 | 2017-10-05 | Princeton Identity, Inc. | Biometric enrollment systems and methods |
US10607096B2 (en) | 2017-04-04 | 2020-03-31 | Princeton Identity, Inc. | Z-dimension user feedback biometric system |
US10902104B2 (en) | 2017-07-26 | 2021-01-26 | Princeton Identity, Inc. | Biometric security systems and methods |
CN108287963B (en) * | 2018-01-19 | 2021-07-16 | 东莞市燕秀信息技术有限公司 | Automatic calculation method, device, equipment and medium for size and placement position |
CN108470152A (en) * | 2018-02-14 | 2018-08-31 | 天目爱视(北京)科技有限公司 | Infrared-based 3D four-dimensional iris data acquisition method and system |
CN109086713B (en) * | 2018-07-27 | 2019-11-15 | 腾讯科技(深圳)有限公司 | Eye recognition method, apparatus, terminal and storage medium |
CN108470171B (en) * | 2018-07-27 | 2018-11-02 | 上海聚虹光电科技有限公司 | Two-dimensional asynchronous coding comparison method |
CN110516661B (en) * | 2019-10-21 | 2020-05-05 | 武汉虹识技术有限公司 | Cosmetic contact lens detection method and device applied to iris recognition |
CN111579080B (en) * | 2020-04-30 | 2021-09-14 | 沈阳天眼智云信息科技有限公司 | Self-calibration method of infrared thermal image body temperature monitor |
DE102020002733A1 (en) | 2020-05-07 | 2021-11-11 | Chronos Vision Gmbh | Determination of the static and torsional components of the eye position during eye operations using the pupil center shift (PCS) |
CN115509351B (en) * | 2022-09-16 | 2023-04-07 | 上海仙视电子科技有限公司 | Sensory linkage situational digital photo frame interaction method and system |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5291560A (en) * | 1991-07-15 | 1994-03-01 | Iri Scan Incorporated | Biometric personal identification system based on iris analysis |
US5471542A (en) * | 1993-09-27 | 1995-11-28 | Ragland; Richard R. | Point-of-gaze tracker |
US5953440A (en) * | 1997-12-02 | 1999-09-14 | Sensar, Inc. | Method of measuring the focus of close-up images of eyes |
US6035054A (en) * | 1992-10-29 | 2000-03-07 | Canon Kabushiki Kaisha | Visual axis detection apparatus and optical apparatus provided therewith |
US6081607A (en) * | 1996-07-25 | 2000-06-27 | Oki Electric Industry Co. | Animal body identifying device and body identifying system |
US6229907B1 (en) * | 1997-03-28 | 2001-05-08 | Oki Electric Industry Co., Ltd. | Method and apparatus for identifying individual |
US20020039433A1 (en) * | 2000-07-10 | 2002-04-04 | Sung Bok Shin | Iris identification system and method and computer readable storage medium stored therein computer executable instructions to implement iris identification method |
US20020150281A1 (en) * | 2001-03-06 | 2002-10-17 | Seong-Won Cho | Method of recognizing human iris using daubechies wavelet transform |
US20020154794A1 (en) * | 2001-03-06 | 2002-10-24 | Seong-Won Cho | Non-contact type human iris recognition method for correcting a rotated iris image |
US6526160B1 (en) * | 1998-07-17 | 2003-02-25 | Media Technology Corporation | Iris information acquisition apparatus and iris identification apparatus |
US6542624B1 (en) * | 1998-07-17 | 2003-04-01 | Oki Electric Industry Co., Ltd. | Iris code generating device and iris identifying system |
US6714665B1 (en) * | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4641349A (en) * | 1985-02-20 | 1987-02-03 | Leonard Flom | Iris recognition system |
US5210799A (en) * | 1991-03-28 | 1993-05-11 | Texas Instruments Incorporated | System and method for ranking and extracting salient contours for target recognition |
US5572596A (en) * | 1994-09-02 | 1996-11-05 | David Sarnoff Research Center, Inc. | Automated, non-invasive iris recognition system and method |
JP2002514098A (en) * | 1996-08-25 | 2002-05-14 | センサー インコーポレイテッド | Device for acquiring iris images |
JPH10262953A (en) * | 1997-03-28 | 1998-10-06 | Oki Electric Ind Co Ltd | Image recognizing device |
US6285780B1 (en) * | 1997-03-28 | 2001-09-04 | Oki Electric Industry Co., Ltd. | Apparatus for identifying individual animals and image processing method |
US6373968B2 (en) * | 1997-06-06 | 2002-04-16 | Oki Electric Industry Co., Ltd. | System for identifying individuals |
JP2000185031A (en) * | 1998-12-22 | 2000-07-04 | Oki Electric Ind Co Ltd | Individual identification device |
JP2000194853A (en) * | 1998-12-25 | 2000-07-14 | Oki Electric Ind Co Ltd | Individual identification device |
US6700998B1 (en) * | 1999-04-23 | 2004-03-02 | Oki Electric Industry Co, Ltd. | Iris registration unit |
KR20020065248A (en) * | 2001-02-06 | 2002-08-13 | 이승재 | Preprocessing of Human Iris Verification |
- 2001
- 2001-03-06 KR KR10-2001-0011441A patent/KR100374708B1/en not_active IP Right Cessation
- 2001-07-31 WO PCT/KR2001/001302 patent/WO2002071316A1/en not_active Application Discontinuation
- 2001-07-31 EP EP01953360A patent/EP1374144A4/en not_active Withdrawn
- 2001-07-31 JP JP2002570165A patent/JP2004527033A/en active Pending
- 2001-07-31 CN CNB018229921A patent/CN1255756C/en not_active Expired - Fee Related
- 2001-08-14 JP JP2001246037A patent/JP2002269565A/en active Pending
- 2001-12-07 US US10/017,118 patent/US20020154794A1/en not_active Abandoned
- 2003
- 2003-09-05 US US10/656,921 patent/US7298874B2/en not_active Expired - Fee Related
- 2007
- 2007-11-01 US US11/933,752 patent/US20080159600A1/en not_active Abandoned
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5291560A (en) * | 1991-07-15 | 1994-03-01 | Iri Scan Incorporated | Biometric personal identification system based on iris analysis |
US6035054A (en) * | 1992-10-29 | 2000-03-07 | Canon Kabushiki Kaisha | Visual axis detection apparatus and optical apparatus provided therewith |
US5471542A (en) * | 1993-09-27 | 1995-11-28 | Ragland; Richard R. | Point-of-gaze tracker |
US6714665B1 (en) * | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
US6081607A (en) * | 1996-07-25 | 2000-06-27 | Oki Electric Industry Co. | Animal body identifying device and body identifying system |
US6229907B1 (en) * | 1997-03-28 | 2001-05-08 | Oki Electric Industry Co., Ltd. | Method and apparatus for identifying individual |
US5953440A (en) * | 1997-12-02 | 1999-09-14 | Sensar, Inc. | Method of measuring the focus of close-up images of eyes |
US6542624B1 (en) * | 1998-07-17 | 2003-04-01 | Oki Electric Industry Co., Ltd. | Iris code generating device and iris identifying system |
US6526160B1 (en) * | 1998-07-17 | 2003-02-25 | Media Technology Corporation | Iris information acquisition apparatus and iris identification apparatus |
US20020039433A1 (en) * | 2000-07-10 | 2002-04-04 | Sung Bok Shin | Iris identification system and method and computer readable storage medium stored therein computer executable instructions to implement iris identification method |
US20020154794A1 (en) * | 2001-03-06 | 2002-10-24 | Seong-Won Cho | Non-contact type human iris recognition method for correcting a rotated iris image |
US20020150281A1 (en) * | 2001-03-06 | 2002-10-17 | Seong-Won Cho | Method of recognizing human iris using daubechies wavelet transform |
US20040114781A1 (en) * | 2001-03-06 | 2004-06-17 | Seong-Won Cho | Daubechies wavelet transform of iris image data for use with iris recognition system |
US20040114782A1 (en) * | 2001-03-06 | 2004-06-17 | Seong-Won Cho | Iris image data processing for use with iris recognition system |
US7298874B2 (en) * | 2001-03-06 | 2007-11-20 | Senga Advisors, Llc | Iris image data processing for use with iris recognition system |
US7302087B2 (en) * | 2001-03-06 | 2007-11-27 | Senga Advisors, Llc | Daubechies wavelet transform of iris image data for use with iris recognition system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9684828B2 (en) | 2013-07-04 | 2017-06-20 | Samsung Electronics Co., Ltd | Electronic device and eye region detection method in electronic device |
US10185875B2 (en) | 2013-10-04 | 2019-01-22 | Casio Computer Co., Ltd. | Image processing device, image display device, image processing method, and medium |
US9633259B2 (en) | 2014-10-10 | 2017-04-25 | Hyundai Motor Company | Apparatus and method for recognizing iris |
US10383515B2 (en) | 2016-12-07 | 2019-08-20 | 3E Co., Ltd. | Method of detecting pupil center |
Also Published As
Publication number | Publication date |
---|---|
JP2002269565A (en) | 2002-09-20 |
KR100374708B1 (en) | 2003-03-04 |
US20040114782A1 (en) | 2004-06-17 |
CN1493055A (en) | 2004-04-28 |
WO2002071316A1 (en) | 2002-09-12 |
JP2004527033A (en) | 2004-09-02 |
KR20020071330A (en) | 2002-09-12 |
CN1255756C (en) | 2006-05-10 |
US20020154794A1 (en) | 2002-10-24 |
EP1374144A1 (en) | 2004-01-02 |
EP1374144A4 (en) | 2007-02-07 |
US7298874B2 (en) | 2007-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20080159600A1 (en) | Iris image data processing for use with iris recognition system | |
US7110581B2 (en) | Wavelet-enhanced automated fingerprint identification system | |
US7869626B2 (en) | Iris recognition method and apparatus thereof | |
US11557146B2 (en) | Biological pattern information processing device, biological pattern information processing method, and program | |
US6876757B2 (en) | Fingerprint recognition system | |
Arca et al. | A face recognition system based on automatically determined facial fiducial points | |
Kawaguchi et al. | Detection of eyes from human faces by Hough transform and separability filter | |
US5982912A (en) | Person identification apparatus and method using concentric templates and feature point candidates | |
JP4340553B2 (en) | Biometric information verification device | |
US7206437B2 (en) | Method to conduct fingerprint verification and a fingerprint verification system | |
Min et al. | Eyelid and eyelash detection method in the normalized iris image using the parabolic Hough model and Otsu’s thresholding method | |
WO2005024708A1 (en) | The pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its | |
EP3309742B1 (en) | Dermal image information processing device, dermal image information processing method, and program | |
JP4809155B2 (en) | Back of hand authentication system and back of hand authentication method | |
Julasayvake et al. | An algorithm for fingerprint core point detection | |
Gawande et al. | Improving iris recognition accuracy by score based fusion method | |
JP2011258209A (en) | Back of hand authentication system and back of hand authentication method | |
Noruzi et al. | Robust iris recognition in unconstrained environments | |
Arora et al. | Human identification based on iris recognition for distant images | |
KR100596197B1 (en) | Face Detection Method Using A Variable Ellipsoidal Mask and Morphological Features | |
US11080518B2 (en) | Face image generating method for recognizing face | |
JPH06162174A (en) | Fingerprint matching method | |
KR20050062748A (en) | Authentication method and apparatus using fingerprint | |
El-Calamawy et al. | Fingerprint Recognition | |
Mun'im Ibrahim | Iris Recognition Using Gabor Filters |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |