CN109389033B - Novel pupil rapid positioning method - Google Patents

Novel pupil rapid positioning method

Info

Publication number
CN109389033B
CN109389033B (Application CN201810987673.9A)
Authority
CN
China
Prior art keywords
region
pupil
marking
connected region
window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810987673.9A
Other languages
Chinese (zh)
Other versions
CN109389033A (en)
Inventor
武栋
张佳
金丹萍
顾灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Technology
Original Assignee
Jiangsu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Technology
Priority to CN201810987673.9A
Publication of CN109389033A
Application granted
Publication of CN109389033B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a novel rapid pupil positioning method comprising the following steps: (1) label the connected regions in the binarized iris image; (2) count the pixels of each connected region; (3) using the outermost points of each connected region in four directions, fit a minimum-circumscribed-circle window to the region and compute the window area; (4) select a pixel-count threshold to pre-screen the connected regions; (5) compute the ratio of each connected region's pixel count to the area of its minimum-circumscribed-circle window, and take the region with the largest ratio as the pupil's connected region. The invention provides an improved iris localization algorithm that effectively improves the accuracy and efficiency of iris recognition under non-ideal conditions. The advantage of searching for the maximal connected region is that interference points are eliminated automatically according to the gray-level characteristics of the pupil, avoiding the errors caused by extracting invalid edge points and admitting spurious points when fitting the inner iris boundary.

Description

Novel pupil rapid positioning method
Technical Field
The invention relates to the field of image processing, and in particular to a novel rapid pupil positioning method.
Background
With the rapid development of society, science, and technology, information security has become more important than ever. Traditional identification methods, such as identity tokens and identity credentials, are gradually being replaced by biometric identification because of their inherent deficiencies and vulnerabilities. The classical biometric techniques currently used for identity authentication mainly include face recognition, voice recognition, fingerprint recognition, palm-print recognition, and iris recognition. Iris recognition stands out among these identification technologies because the iris has excellent biological characteristics (stable features, strong anti-counterfeiting, and resistance to theft) and because its acquisition and detection are non-contact; it has become one of the most important, secure, and accurate identification technologies, with broad application prospects and significant academic research value. The idea of using the iris for identification was proposed as early as the 1880s, and after more than twenty years of recent development, iris recognition technology has advanced dramatically and is widely used; its adoption has greatly raised the security level of some application scenarios. A complete iris recognition system comprises four parts: iris image acquisition, preprocessing, feature extraction, and matching/recognition. Preprocessing of the iris image is a key and foundational part of iris recognition; the quality of the preprocessing result directly affects all subsequent operations and thus the recognition performance of the whole system.
In the preprocessing stage of iris recognition, pupil localization is the start of the whole process, and its efficiency and accuracy strongly affect subsequent processing and recognition. Especially when iris images are acquired under poor conditions, quickly removing interference is essential, and accurate pupil localization is a critical step.
In the prior art, pupil localization in noisy images mostly removes interference points from the binarized iris image using empirical values. When many noisy iris images are encountered, this makes the localization method inefficient for the iris recognition system.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a novel rapid pupil positioning method that avoids using prior knowledge to remove interference points from the binarized iris image during iris-recognition preprocessing, locates the pupil region quickly and stably, further determines the inner and outer iris boundaries, and improves the accuracy and efficiency of the iris recognition algorithm overall.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a novel pupil rapid positioning method comprises the following steps:
(1) marking the connected regions in the binarized iris image:
a. marking the binarized foreground pixels with 1 and the background pixels with 0;
b. traversing the original image; on encountering a foreground pixel p(i, j), where p is the original image and i and j are the row and column indices, judging whether it has been marked; if the pixel p(i, j) is unmarked, storing its coordinates in a queue and marking the corresponding coordinate position of the marking matrix;
c. searching the eight-neighborhood of p(i, j); whenever a new unmarked foreground pixel is encountered, enqueueing its coordinates and marking it in the marking matrix; the new foreground pixel is denoted p(i+1, j);
d. when the eight-neighborhood search and marking of p(i, j) is finished, dequeueing p(i, j); the head of the queue becomes p(i+1, j), on which the eight-neighborhood search and marking of step c is repeated;
e. when a connected region is fully marked, incrementing the label count by 1, emptying the queue, and repeating the traversal of steps b-d to mark a new connected region;
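For illustration only, the queue-based eight-neighborhood labeling of steps a-e can be sketched as follows (a minimal Python rendering with hypothetical names, not the patent's implementation):

```python
from collections import deque

import numpy as np


def label_regions(binary):
    """BFS flood-fill labeling of 8-connected foreground regions.

    binary: 2D array of 0/1 (1 = foreground, as in step a).
    Returns (labels, count), where labels[i, j] > 0 identifies the
    connected region of each foreground pixel (the marking matrix).
    """
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] == 1 and labels[i, j] == 0:
                count += 1                      # step e: new region label
                queue = deque([(i, j)])         # step b: seed the queue
                labels[i, j] = count
                while queue:
                    y, x = queue.popleft()      # step d: dequeue, expand head
                    for dy in (-1, 0, 1):       # step c: eight-neighborhood
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] == 1
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = count
                                queue.append((ny, nx))
    return labels, count
```

Each flood fill corresponds to one pass of steps b-e: the deque plays the role of the coordinate queue, and `labels` plays the role of the marking matrix.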
(2) calculating the number of pixels of each connected region: after the labeling of the connected regions in step (1) is complete, accumulating the number of pixels in each connected region;
(3) using the outermost points of each connected region in four directions, fitting a minimum-circumscribed-circle window to the region and calculating the window area:
a. finding the outermost points of each connected region in the four directions at plus and minus 45 degrees, and fitting a minimum circumscribed circle (the window) to the region by least squares;
b. calculating the window area, i.e. the accumulated number of pixels inside the window; the minimum circumscribed circle serves as an important basis for judging whether the connected region is the pupil region;
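A minimal sketch of step (3) follows, with two stated simplifications: the least-squares circle fit named in step a is rendered here as the standard Kasa linear least-squares fit, and the window area is taken analytically as pi*r^2 instead of accumulating pixels inside the window. Function and variable names are hypothetical:

```python
import numpy as np


def circumcircle_window(points):
    """Least-squares (Kasa) circle through candidate boundary points.

    points: (N, 2) array of (x, y) extreme points of a region, e.g. the
    outermost points in the four directions at plus/minus 45 degrees.
    Returns (cx, cy, r, area), where area = pi * r**2 stands in for the
    window's pixel count (an analytic simplification of step b).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Linearized circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + c
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r, np.pi * r ** 2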
(4) selecting a pixel-count threshold to pre-screen the connected regions: based on the circumscribed-circle pixel counts of the different connected regions calculated in step (3), selecting a pixel-count threshold to pre-screen the regions and narrow the search for the pupil;
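Step (4) reduces to a filter on the per-region pixel counts. The sketch below uses a relative threshold of roughly 25% of the largest region's count, a hedged reading of the embodiment (whose example uses the absolute value 2500); the function name and the `frac` parameter are hypothetical:

```python
def prescreen_regions(pixel_counts, frac=0.25):
    """Pre-screen connected regions by pixel count, as in step (4).

    pixel_counts: dict {label: number of pixels in the region}.
    The threshold is frac times the largest region's count, an
    assumed generalization of the embodiment's fixed value of 2500.
    """
    threshold = frac * max(pixel_counts.values())
    return {lab: n for lab, n in pixel_counts.items() if n >= threshold}
```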
(5) calculating the ratio of each connected region's pixel count to the area of its minimum-circumscribed-circle window, and taking the region with the largest ratio as the pupil's connected region: the pupil is an approximately circular area, and the window selected for each connected region is its circumscribed circle; computing the ratio of each region's pixel sum to the area of its corresponding minimum-circumscribed-circle window, the near-circular pupil clearly yields the largest ratio, so the region with the largest ratio can be determined to be the pupil region. Before the pupil boundary is fitted, edges are first extracted with a Canny operator; the extracted edge points are then fitted by least squares to locate the pupil boundary precisely, giving the center (x, y) and radius r of the inner circle.
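The selection rule of step (5) can be sketched as follows (hypothetical names; a sketch of the ratio criterion, not the patent's code). A disc fills nearly its whole circumscribed circle (ratio close to 1), while long, narrow eyelash or eyelid regions fill only a small fraction of theirs, so the pupil wins:

```python
def select_pupil_region(stats):
    """Step (5): pick the region with the largest fill ratio.

    stats: dict {label: (pixel_count, window_area)}, where window_area
    is the area of the region's minimum-circumscribed-circle window.
    Returns the label of the region taken to be the pupil.
    """
    return max(stats, key=lambda lab: stats[lab][0] / stats[lab][1])
```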
The beneficial effects of the invention are as follows: the invention provides an improved iris localization algorithm that effectively improves the accuracy and efficiency of iris recognition under non-ideal conditions. The advantage of searching for the maximal connected region is that interference points are eliminated automatically according to the gray-level characteristics of the pupil, avoiding the errors caused by extracting invalid edge points and admitting spurious points when fitting the inner iris boundary. The method avoids using prior knowledge to remove interference points from the binarized iris image during preprocessing, locates the pupil region quickly and stably, further determines the inner and outer iris boundaries, and improves the accuracy and efficiency of the iris recognition algorithm overall.
Drawings
A more complete understanding of the present invention, and of its attendant advantages and features, may be obtained by reference to the following detailed description when considered in conjunction with the accompanying drawings, wherein:
FIG. 1 is a flow chart of the present invention;
FIG. 2 is an iris image provided in an embodiment of the present invention;
FIG. 3 is an iris image after binarization in an embodiment of the invention;
FIG. 4 is a diagram of the results of connected component labeling in an embodiment of the present invention;
FIG. 5 is a diagram of the pupil region result in an embodiment of the present invention;
FIG. 6 is a diagram of the pupil localization result in an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1 to 6, the novel pupil rapid positioning method of the present invention includes the following steps:
(1) Binarize the input image (FIG. 2) by the Otsu method to obtain the binarized image (FIG. 3), and label the connected regions of FIG. 3; the labeling method comprises the following steps:
a. marking the binarized foreground pixels with 1 and the background pixels with 0;
b. traversing the original image; on encountering a foreground pixel p(i, j) (where p is the original image and i and j are the row and column indices), judging whether it has been marked; if the pixel p(i, j) is unmarked, storing its coordinates in a queue and marking the corresponding coordinate position of the marking matrix;
c. searching the eight-neighborhood of p(i, j); whenever a new unmarked foreground pixel is encountered, enqueueing its coordinates and marking it in the marking matrix, the new foreground pixel being denoted p(i+1, j);
d. when the eight-neighborhood search and marking of p(i, j) is finished, dequeueing p(i, j); the head of the queue becomes p(i+1, j), on which the eight-neighborhood search and marking of step c is repeated;
e. when a connected region is fully marked, incrementing the label count by 1, emptying the queue, and repeating the traversal of steps b-d to mark a new connected region;
after steps a to e are completed, the parts framed by different red frames, represented by different connected regions as shown in fig. 4, are obtained, although some red frames are slightly larger, as shown in fig. 3, in practice, the eyelash region is long and narrow, and thus the part is not easy to observe by human eyes.
(2) Count the pixels of each connected region: after the labeling in step (1) is complete, accumulate the number of pixels in every connected region;
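Given a label matrix from step (1), the per-region pixel counts of step (2) can be accumulated in one pass (a NumPy sketch with hypothetical names, not the patent's code):

```python
import numpy as np


def region_pixel_counts(labels):
    """Count pixels per labeled connected region, as in step (2).

    labels: 2D int array from the connected-region labeling step,
    with 0 for background and positive integers for region labels.
    Returns a dict {label: pixel count}.
    """
    vals, counts = np.unique(labels[labels > 0], return_counts=True)
    return dict(zip(vals.tolist(), counts.tolist()))
```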
(3) Using the outermost points of each connected region in the four directions at plus and minus 45 degrees, fit a minimum-circumscribed-circle window to the region and calculate the window area:
a. find the outermost points of each connected region in the four directions at plus and minus 45 degrees, and fit a minimum circumscribed circle (the window) to the region by least squares;
b. calculate the window area, i.e. the accumulated number of pixels inside the window; the minimum circumscribed circle serves as an important basis for judging whether the connected region is the pupil region;
(4) Based on the circumscribed-circle pixel counts of the different connected regions calculated in step (3), select a pixel-count threshold to pre-screen the connected regions and narrow the search for the pupil. In the example shown in fig. 2, the threshold is 2500, about 25% of the pixel count of the largest connected region;
(5) The pupil is an approximately circular area, and the window chosen for each connected region is its circumscribed circle. Computing the ratio of each region's pixel sum to the area of its corresponding minimum-circumscribed-circle window, the near-circular pupil clearly yields the largest ratio, so the region with the largest ratio is determined to be the pupil region; the result is shown in fig. 5. The pupil localization result is shown in fig. 6.
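The Otsu binarization used at the start of this embodiment can be sketched in pure NumPy as follows. This is the standard textbook formulation (maximize the between-class variance over all thresholds), not code from the patent; treating pixels at or below the threshold as foreground matches the dark pupil but is an assumption of this sketch:

```python
import numpy as np


def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance.

    gray: 2D uint8 grayscale image. Returns threshold t; in this sketch,
    pixels <= t (the dark pupil) become the foreground marked 1 in the
    binarized image consumed by the labeling of step (1).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean up to t
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))
```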
The present invention is not limited to the preferred embodiment described above. Any other embodiment derived by anyone in light of the present disclosure, and any variation in shape or structure that is the same as or similar to the present invention, falls within the protection scope of the present invention.

Claims (1)

1. A novel pupil rapid positioning method is characterized by comprising the following steps:
(1) marking a connected region in the binarized iris image:
a. marking the foreground image with 1 after binarization and marking the background image with 0;
b. traversing an original image; on encountering a foreground pixel p(i, j), where i and j are the row and column indices, judging whether it has been marked; if the pixel p(i, j) of the foreground image is unmarked, storing its coordinates in a queue and marking the corresponding coordinate position of the marking matrix;
c. searching the eight-neighborhood of p(i, j); whenever a new unmarked foreground pixel is encountered, enqueueing its coordinates and marking it in the marking matrix, the new foreground pixel being denoted p(i+1, j);
d. when the eight-neighborhood search and marking of p(i, j) is finished, dequeueing p(i, j); the head of the queue becomes p(i+1, j), on which the eight-neighborhood search and marking of step c is repeated;
e. after one connected region is marked, adding 1 to the label count, emptying the queue, performing the traversing operation of the steps b-d again, and marking a new connected region;
(2) calculating the number of pixels of each connected region: after the labeling of the connected regions in step (1) is complete, accumulating the number of pixels in each connected region;
(3) using the outermost points of each connected region in four directions, fitting a minimum-circumscribed-circle window to the region and calculating the window area:
a. finding the outermost points of each connected region in the four directions at plus and minus 45 degrees, and fitting a minimum circumscribed circle (the window) to the region by least squares;
b. calculating the window area, i.e. the accumulated number of pixels inside the window; the minimum circumscribed circle serves as an important basis for judging whether the connected region is the pupil region;
(4) selecting a pixel-count threshold to pre-screen the connected regions: based on the circumscribed-circle pixel counts of the different connected regions calculated in step (3), selecting a pixel-count threshold to pre-screen the regions and narrow the search for the pupil;
(5) calculating the ratio of each connected region's pixel count to the area of its minimum-circumscribed-circle window, and taking the region with the largest ratio as the pupil's connected region: the pupil is an approximately circular area, and the window selected for each connected region is its circumscribed circle; computing the ratio of each region's pixel sum to the area of its corresponding minimum-circumscribed-circle window, the near-circular pupil clearly yields the largest ratio, so the region with the largest ratio can be determined to be the pupil region; before the pupil boundary is fitted, first extracting edges with a Canny operator, then fitting the extracted edge points by least squares to locate the pupil boundary precisely, giving the center (x, y) and radius r of the inner circle.
CN201810987673.9A 2018-08-28 2018-08-28 Novel pupil rapid positioning method Active CN109389033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810987673.9A CN109389033B (en) 2018-08-28 2018-08-28 Novel pupil rapid positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810987673.9A CN109389033B (en) 2018-08-28 2018-08-28 Novel pupil rapid positioning method

Publications (2)

Publication Number Publication Date
CN109389033A CN109389033A (en) 2019-02-26
CN109389033B (granted) 2022-02-11

Family

ID=65418431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810987673.9A Active CN109389033B (en) 2018-08-28 2018-08-28 Novel pupil rapid positioning method

Country Status (1)

Country Link
CN (1) CN109389033B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109854964B (en) * 2019-03-29 2021-03-19 沈阳天眼智云信息科技有限公司 Steam leakage positioning system and method based on binocular vision
CN111476795A (en) * 2020-02-27 2020-07-31 浙江工业大学 Binary icon notation method based on breadth-first search
CN112162629A (en) * 2020-09-11 2021-01-01 天津科技大学 Real-time pupil positioning method based on circumscribed rectangle
CN112434675B (en) * 2021-01-26 2021-04-09 西南石油大学 Pupil positioning method for global self-adaptive optimization parameters
CN115601825B (en) * 2022-10-25 2023-09-19 扬州市职业大学(扬州开放大学) Method for evaluating reading ability based on visual positioning technology
CN116740068B (en) * 2023-08-15 2023-10-10 贵州毅丹恒瑞医药科技有限公司 Intelligent navigation system for cataract surgery

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844736A (en) * 2016-09-19 2018-03-27 北京眼神科技有限公司 iris locating method and device
CN108171201A (en) * 2018-01-17 2018-06-15 山东大学 Eyelashes rapid detection method based on gray scale morphology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE602008002276D1 (en) * 2007-04-26 2010-10-07 St Microelectronics Rousset Localization method and device of a human iris in a picture


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A New Algorithm for Iris Inner-Edge Localization Based on Region Labeling; Zhang Renyan et al.; Journal of Naval University of Engineering; 2005-08-31; Vol. 17, No. 4; full text *

Also Published As

Publication number Publication date
CN109389033A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN109389033B (en) Novel pupil rapid positioning method
CN110414333B (en) Image boundary detection method and device
KR100826876B1 (en) Iris recognition method and apparatus for thereof
CN102902967B (en) Method for positioning iris and pupil based on eye structure classification
CN101916362A (en) Iris positioning method and iris identification system
CN105913093A (en) Template matching method for character recognizing and processing
US10853967B2 (en) Method and apparatus for determining pupil position
Thalji et al. Iris Recognition using robust algorithm for eyelid, eyelash and shadow avoiding
CN113256580A (en) Automatic identification method for target colony characteristics
CN1885314A (en) Pre-processing method for iris image
Frucci et al. Severe: Segmenting vessels in retina images
CN114359998B (en) Identification method of face mask in wearing state
CN113673460A (en) Method and device for iris recognition, terminal equipment and storage medium
CN114648511A (en) Accurate extraction and identification method for escherichia coli contour
CN108171229A (en) A kind of recognition methods of hollow adhesion identifying code and system
CN109446935B (en) Iris positioning method for iris recognition in long-distance traveling
CN110084587B (en) Automatic dinner plate settlement method based on edge context
CN108764230A (en) A kind of bank's card number automatic identifying method based on convolutional neural networks
CN105740828B (en) A kind of stopping line detecting method based on Fast Labeling connection
CN1141665C (en) Micro image characteristic extracting and recognizing method
Lin A novel iris recognition method based on the natural-open eyes
Lina et al. White blood cells detection from unstained microscopic images using modified watershed segmentation
CN105894489B (en) Cornea topography image processing method
CN114926635B (en) Target segmentation method in multi-focus image combined with deep learning method
CN115100696A (en) Connected domain rapid marking and extracting method and system in palm vein recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant