CN112258569B - Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium - Google Patents

Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Info

Publication number
CN112258569B
CN112258569B
Authority
CN
China
Prior art keywords
pupil
eye image
target eye
target
coordinates
Prior art date
Legal status
Active
Application number
CN202010993486.9A
Other languages
Chinese (zh)
Other versions
CN112258569A (en)
Inventor
季渊
赵浩然
Current Assignee
Wuxi Tanggu Semiconductor Co ltd
Original Assignee
Wuxi Tanggu Semiconductor Co ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Tanggu Semiconductor Co ltd filed Critical Wuxi Tanggu Semiconductor Co ltd
Priority to CN202010993486.9A priority Critical patent/CN112258569B/en
Publication of CN112258569A publication Critical patent/CN112258569A/en
Application granted granted Critical
Publication of CN112258569B publication Critical patent/CN112258569B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis; G06T 7/60 Analysis of geometric attributes
    • G06T 5/00 Image enhancement or restoration; G06T 5/40 using histogram techniques; G06T 5/70 Denoising; Smoothing
    • G06T 7/10 Segmentation; Edge detection; G06T 7/11 Region-based segmentation; G06T 7/13 Edge detection; G06T 7/136 involving thresholding
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/30 Subject of image; G06T 2207/30004 Biomedical image processing; G06T 2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a pupil center positioning method, a pupil center positioning device, pupil center positioning equipment and a computer storage medium. The pupil center positioning method comprises the following steps: acquiring a target eye image; determining a contour of a pupil in the target eye image; determining a circumscribed graph of the contour, and acquiring coordinates of the tangent points of the contour and the circumscribed graph in the target eye image; and determining the position of the center point of the pupil in the target eye image according to the coordinates of the tangent points. The pupil center positioning method, device, equipment and computer storage medium require only a small amount of calculation and can realize rapid pupil center positioning.

Description

Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium
Technical Field
The application belongs to the technical field of image positioning, and particularly relates to a pupil center positioning method, device and equipment and a computer storage medium.
Background
With the development of technology, pupil center positioning plays an increasingly important role in various fields. For example, in the field of eye tracking, the direction of the line of sight and the position of the gaze point can be estimated by capturing the position of the pupil center. In the field of iris recognition, locating the pupil center makes it convenient to extract the iris region, so that features such as textures in the extracted iris region can be recognized.
In order to realize pupil center positioning, existing pupil center positioning methods generally need to calculate the position of the pupil center point by using a mathematical fitting equation and/or a large number of mathematical operations, so the calculation amount is huge and the positioning speed is slow.
Disclosure of Invention
The embodiment of the application provides a pupil center positioning method, device, equipment and computer storage medium, which can solve the technical problems of huge calculation amount and slow positioning speed in the pupil center positioning process.
In a first aspect, an embodiment of the present application provides a pupil center positioning method, where the method includes:
acquiring a target eye image;
determining a contour of a pupil in the target eye image;
determining a circumscribed graph of the contour, and acquiring coordinates of the tangent points of the contour and the circumscribed graph in the target eye image;
and determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent points.
In one embodiment, the determining of the contour of the pupil in the target eye image specifically includes:
performing image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image including the pupil;
screening pixel points in the binarized image to obtain target pixel points on the edge of the pupil;
and determining the contour according to the target pixel points.
In one embodiment, before performing image binarization processing on the target eye image according to the preset threshold value to obtain a binarized image including a pupil, the method further includes:
acquiring the occurrence times of each gray value in a preset gray value range in the target eye image;
constructing a gray level histogram of the corresponding relation between each gray level value and the occurrence times;
and taking a gray value corresponding to a minimum value between the first maximum value and the second maximum value of the occurrence times in the gray histogram as the preset threshold value.
In one embodiment, the screening of the pixel points in the binarized image to obtain the target pixel points on the edge of the pupil specifically includes:
carrying out a plane convolution operation on each pixel point in the binarized image with the transverse convolution factor and the longitudinal convolution factor of the Sobel convolution factor to obtain the gradient amplitude of each pixel point in the binarized image;
performing non-maximum suppression processing on the gradient amplitudes;
extracting first pixel points meeting a preset condition in the binarized image as the target pixel points, wherein the preset condition includes:
the gradient amplitude of the first pixel point is larger than a preset first threshold value;
or, in the case that the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold value and larger than a preset second threshold value, a pixel point with a gradient amplitude larger than the preset first threshold value exists in the eight-neighborhood of the first pixel point.
In one embodiment, before the determining the contour of the pupil in the target eye image, the method further comprises:
preprocessing the target eye image, wherein the preprocessing includes: Gaussian filtering processing, an open operation and a closed operation;
determining the outline of the pupil in the target eye image specifically comprises the following steps:
the contour of the pupil in the preprocessed target eye image is determined.
In one embodiment, in the case where the circumscribed graph is a circumscribed rectangle, the acquiring of the coordinates of the tangent points of the contour and the circumscribed rectangle in the target eye image specifically includes:
acquiring first coordinates of each target pixel point in the target eye image;
and taking, among the first coordinates, the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate and the first coordinate with the largest ordinate as the coordinates of the tangent points respectively.
In one embodiment, after said determining the location of the center point of the pupil in the target eye image, the method further comprises:
and marking the position of the center point in the target eye image by using a preset mark.
In a second aspect, embodiments of the present application provide a pupil center positioning device, including:
the acquisition unit is used for acquiring the target eye image;
a first determining unit configured to determine an outline of a pupil in the target eye image;
the second determining unit is used for determining a circumscribed graph of the contour and acquiring coordinates of the tangent points of the contour and the circumscribed graph in the target eye image;
and the third determining unit is used for determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent point.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the pupil center positioning method described above.
In a fourth aspect, embodiments of the present application provide a computer storage medium having a computer program stored thereon, which, when executed by a processor, implements the steps of the pupil center positioning method described above.
The pupil center positioning method, device, equipment and computer storage medium provided by the embodiments of the application first acquire a target eye image; then determine the contour of the pupil in the target eye image and the circumscribed graph of that contour; and finally determine the position of the center point of the pupil in the target eye image according to the coordinates of the tangent points of the contour and the circumscribed graph in the target eye image. Because the position of the center point of the pupil is determined from the coordinates of the tangent points of the pupil and its circumscribed graph, no mathematical fitting equation is needed in the positioning process, and the coordinate calculation is simple and does not involve a large number of mathematical operations; the calculation amount is therefore small, the time for calculating and positioning the center point is short, and rapid pupil center positioning can be realized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a data model of an arbitrary ellipse and its circumscribed pattern;
fig. 2 is a flow chart of a pupil center positioning method according to an embodiment of the present application;
fig. 3 is a flowchart of step S102 of the pupil center positioning method in the embodiment of the present application;
fig. 4a schematically shows an image of a target eye according to an embodiment of the present application, and fig. 4b schematically shows a gray level histogram according to an embodiment of the present application;
fig. 5 is a schematic diagram of the contour of the pupil extracted in step S102 in the embodiment of the present application;
fig. 6a is an original target eye image, fig. 6b is a target eye image after gaussian filtering, fig. 6c is a target eye image after open operation, and fig. 6d is a target eye image after close operation;
fig. 7 schematically illustrates a circumscribed pattern of a pupil in an embodiment of the present application;
fig. 8 schematically illustrates the result of pupil center positioning in an embodiment of the present application;
FIG. 9 shows some of the target eye images on which pupil center positioning was performed using the pupil center positioning method of an embodiment of the present application;
fig. 10 is a schematic structural diagram of a pupil center positioning device according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
Pupil center positioning is widely used in various fields. For example, in the field of eye tracking, the direction of the line of sight and the position of the gaze point can be estimated by capturing the position of the pupil center. In the field of iris recognition, locating the pupil center makes it convenient to extract the iris region, so that features such as textures in the extracted iris region can be recognized. In the field of psychology, by detecting measurement indexes such as the pupil state and the eye movement track, lie detection can be performed on a subject and the subject's psychological activities can be understood. With the rapid development of technologies such as eyeball tracking and pupil identification, pupil center positioning, as the basis of these technologies, has gradually become a research hotspot.
In order to realize pupil center positioning, two methods have been proposed: one is a pupil center positioning method based on the Hough transform, and the other is a pupil center positioning method based on least-squares ellipse fitting. When the Hough-transform-based method is used, all possible circle centers and radii need to be calculated for all edge pixel points, which involves a large number of mathematical operations and consumes much time and space; the calculation amount is therefore huge and the positioning speed slow. When the pupil is not a perfect circle, the positioning accuracy is also low. When the least-squares ellipse fitting method is used, a mathematical fitting equation is needed to calculate the position of the pupil center point. It can be seen that the prior art generally needs to calculate the position of the pupil center point by using a mathematical fitting equation and/or a large number of mathematical operations, which is not only computationally intensive but also slow.
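For reference, both prior-art baselines are available as standard library routines. A minimal OpenCV sketch follows (the file name, threshold values and Hough/fit parameters below are illustrative assumptions, not taken from any of the cited methods):

```python
import cv2

# Hypothetical grayscale eye image; the path is illustrative only.
gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

# Prior art 1: Hough-transform circle detection. Edge pixels vote for candidate
# centers and radii, which is the large mathematical workload noted above.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=80)

# Prior art 2: least-squares ellipse fitting on an extracted contour;
# fitEllipse solves a fitting equation and needs at least 5 contour points.
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
largest = max(contours, key=cv2.contourArea)
if len(largest) >= 5:
    (cx, cy), axes, angle = cv2.fitEllipse(largest)  # (cx, cy): fitted center
```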
In order to solve the problems in the prior art, the inventors, after extensive research, proposed the following technical idea: the scatter diagram of the extracted pupil contour approximates an ellipse or a circle, and based on the property that an ellipse and its circumscribed figure share the same center point, the position of the pupil center point can be obtained indirectly without a fitting equation or a large number of mathematical operations.
To facilitate understanding and verification of the above technical idea, the following description is made with reference to fig. 1.
FIG. 1 is a schematic diagram of a data model of an arbitrary ellipse and its circumscribed pattern. In FIG. 1, a denotes the semi-major axis of the ellipse, b denotes the semi-minor axis, and the four points P1, P2, P3 and P4 denote the tangent points of the ellipse with the four sides of its circumscribed rectangle. Taking a circumscribed rectangle as the circumscribed figure, as shown in FIG. 1, the center point of the ellipse is at the origin O, the tangent points P1(x1, y1) and P3(-x1, -y1) are symmetric about the center point O, and the tangent points P2(x2, y2) and P4(-x2, -y2) are symmetric about the center point O. By this symmetry, the circumscribed rectangle ABCD of the ellipse is also centrally symmetric about the origin O, and its diagonals AC and BD intersect at the origin O; that is, the center point of the circumscribed rectangle ABCD is also the origin O. It follows that an ellipse and its circumscribed figure have the same center point.
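The property is also easy to check numerically. The following sketch (plain NumPy, purely illustrative) samples points on an axis-aligned ellipse and confirms that the center of its circumscribed rectangle, computed from the extreme coordinates, coincides with the ellipse center:

```python
import numpy as np

# An ellipse centered at (cx, cy) with semi-axes a and b (arbitrary values).
cx, cy, a, b = 3.0, -2.0, 5.0, 2.0
t = np.linspace(0.0, 2.0 * np.pi, 10000)
x = cx + a * np.cos(t)
y = cy + b * np.sin(t)

# Center of the circumscribed rectangle, taken from the four tangent points
# (smallest/largest abscissa and smallest/largest ordinate).
rect_center_x = (x.min() + x.max()) / 2.0
rect_center_y = (y.min() + y.max()) / 2.0
assert abs(rect_center_x - cx) < 1e-6 and abs(rect_center_y - cy) < 1e-6
```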
Based on the technical conception, the embodiment of the application provides a pupil center positioning method, a pupil center positioning device, pupil center positioning equipment and a computer storage medium.
The technical conception of the embodiment of the application is as follows: first, a target eye image is acquired; then, the contour of the pupil in the target eye image is determined, together with the circumscribed figure of the contour; finally, the position of the center point of the pupil in the target eye image is determined according to the coordinates of the tangent points of the contour and the circumscribed graph in the target eye image. Because the position of the center point of the pupil is determined from the tangent-point coordinates of the pupil and its circumscribed graph, no mathematical fitting equation is needed in the positioning process, and the coordinate calculation is simple and does not involve a large number of mathematical operations; the calculation amount is therefore small, the positioning time is short, and rapid pupil center positioning can be realized.
The pupil center positioning method provided in the embodiment of the present application is first described below.
Fig. 2 shows a flowchart of a pupil center positioning method according to an embodiment of the present application. As shown in fig. 2, the method may include the steps of:
s101, acquiring a target eye image.
S102, determining the outline of the pupil in the target eye image.
S103, determining an circumscribed graph of the outline, and acquiring coordinates of tangential points of the outline and the circumscribed graph in the target eye image.
S104, determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent points.
A specific implementation of each of the above steps is described below.
First, S101: a target eye image is acquired. Specifically, the target eye image may be captured by a video camera, a still camera, or any other device having a photographing function. Of course, one or more eye images may also be retrieved from stored existing eye images as target eye images; the present application is not limited in this respect.
In order to save time for pupil center positioning and achieve rapid positioning, as an example, the target eye image in the embodiment of the application is acquired in a near-eye manner. Unlike desktop acquisition, which captures an entire facial image of a person, near-eye acquisition aims the camera at the eye region to obtain a target eye image containing the eye region. Compared with a desktop-acquired eye image, the manner adopted in the embodiment of the application saves the time otherwise consumed in extracting the eye region from a facial image, thereby shortening pupil center positioning and enabling rapid positioning; the acquired target eye image also has clearer details, which facilitates effective analysis of eye features such as the pupil and the iris.
The above is a specific implementation of S101, and a specific implementation of S102 is described below.
S102, determining the outline of the pupil in the target eye image.
As an example, S102 may directly process the target eye image acquired in S101 to obtain the contour of the pupil.
Fig. 3 is a flowchart of step S102 of the pupil center positioning method in the embodiment of the present application. As shown in fig. 3, S102 may specifically include the following steps:
s201, performing image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image comprising pupils;
s202, screening pixel points in the binarized image to obtain target pixel points on the edge of the pupil;
s203, determining the outline of the pupil according to the target pixel point.
Steps S201 to S203 are described in order below.
Fig. 4a schematically shows a target eye image according to an embodiment of the present application. As shown in fig. 4a, the human eye structure in the target eye image consists, from inside to outside, of the pupil, the iris and the sclera, and the gray values of the sclera, the iris and the pupil decrease in that order. According to this gray distribution characteristic of the target eye image, in S201 the pupil region, which has the relatively lowest gray values, can be segmented by setting a reasonable threshold value, yielding a binarized image including the pupil.
Specifically, in S201, performing image binarization processing on the target eye image according to the preset threshold specifically includes: setting the gray value of pixels in the target eye image whose current gray value is larger than the preset threshold to 0 (or 255), and setting the gray value of pixels whose current gray value is smaller than or equal to the preset threshold to 255 (or 0), thereby obtaining the binarized image containing the pupil. Through this image binarization processing, the target eye image is converted into a binarized image of only black and white; for example, the pupil is black, and the region other than the pupil in the target eye image is white.
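A minimal sketch of this binarization step (the OpenCV call is one common realization; the threshold value is a hypothetical placeholder, since the embodiment derives it from the gray histogram as described below):

```python
import cv2

# 'eye_gray' is assumed to be the grayscale target eye image.
eye_gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # illustrative path

preset_threshold = 60  # placeholder; see the histogram-based choice below

# THRESH_BINARY_INV: pixels with gray value > threshold become 0 and the
# rest become 255, isolating the dark pupil region (S201 allows either
# polarity).
_, binarized = cv2.threshold(eye_gray, preset_threshold, 255,
                             cv2.THRESH_BINARY_INV)
```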
In S201, the setting of the preset threshold is critical: a reasonable threshold facilitates segmentation of the pupil area, while a threshold that is too low or too high degrades the segmentation. A threshold that is too low may yield an incomplete pupil area, and a threshold that is too high may segment a pupil image that contains interference areas.
In view of this, in order to make pupil areas in the converted binarized image more accurate and reasonable, as an implementation manner, the embodiment of the present application determines the magnitude of the preset threshold by:
and the first step, obtaining the occurrence times of each gray level value in the preset gray level value range in the target eye image. The preset gray value range may be, for example, 0 to 255, but may be other reasonable ranges, which is not limited thereto. In the first step, the number of pixels corresponding to each gray level value in the target eye image is specifically determined, so that the occurrence number of each gray level value in the target eye image is determined. For example, there are 1000 pixels in the target eye image, 30 pixels with 1-level gray values, and 40 pixels with 2-level gray values, then the number of occurrences of 1-level gray values in the target eye image is 30, and the number of occurrences of 2-level gray values in the target eye image is 40.
In the second step, a gray histogram of the correspondence between each gray value and its number of occurrences in the target eye image is constructed.
Fig. 4b schematically shows a gray level histogram of an embodiment of the present application. In fig. 4b, the abscissa is the gray value from 0 to 255, and the ordinate is the number of occurrences of each gray value in the target eye image. After the number of occurrences of each gray value is determined, the gray histogram of the correspondence between each gray value and its number of occurrences is constructed. As shown in fig. 4b, the gray histogram over the pupil and iris areas has a "two peaks, one valley" shape. This is because the gray values of the pupil area, the iris area and the sclera area are each concentrated in their own range; for example, the gray values of the pupil area may be concentrated in the range 30 to 50 and those of the iris area in the range 130 to 170. A "first peak" therefore appears in the range where the pupil gray values are concentrated; as the gray value increases, pupil gray values occur less and less often until a critical value between the pupil gray values and the iris gray values is reached; beyond the critical value, iris gray values occur more and more often, and a "second peak" appears in the range where the iris gray values are concentrated.
In the third step, the gray value corresponding to the minimum value between the first maximum value and the second maximum value of the occurrence counts in the gray histogram is taken as the preset threshold value.
Specifically, the critical value mentioned above lies between the gray values of the pupil area and those of the iris area, and is the gray value corresponding to the "valley" located between the "first peak" and the "second peak" in the gray histogram. In effect, the "first peak" is the first maximum of the occurrence counts, the "second peak" is the second maximum, and the "valley" is the minimum between the two maxima. In the embodiment of the present application, this critical value is used as the preset threshold.
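One possible realization of this "two peaks, one valley" threshold selection is sketched below (the histogram smoothing and the peak-picking logic are illustrative assumptions; the embodiment only specifies taking the minimum between the first two maxima of the occurrence counts):

```python
import numpy as np

def valley_threshold(eye_gray):
    """Gray value of the valley between the first two histogram peaks."""
    # Number of occurrences of each gray value in the preset range 0..255.
    hist = np.bincount(eye_gray.ravel(), minlength=256).astype(float)
    # Mild smoothing so single-bin noise is not mistaken for a peak (assumption).
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    # Local maxima of the occurrence counts, in order of gray value.
    peaks = [g for g in range(1, 255) if hist[g - 1] < hist[g] >= hist[g + 1]]
    first_peak, second_peak = peaks[0], peaks[1]
    # Preset threshold: gray value of the minimum between the two peaks.
    return first_peak + int(np.argmin(hist[first_peak:second_peak + 1]))
```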
With continued reference to fig. 3, after obtaining a binarized image including a pupil through a preset threshold in S201, S202 is executed to screen pixel points in the binarized image, so as to obtain a target pixel point on the edge of the pupil.
Specifically, the edges of an object in an image appear where the gray values change most strongly, so edge extraction or contour extraction can generally be regarded as preserving the regions of the image in which the gray values vary sharply. Contour extraction is performed on the binarized image containing the pupil to obtain the target pixel points located on the edge of the pupil. In S202, the target pixel points on the edge of the pupil are finally obtained through, in order, Sobel edge detection, non-maximum suppression, and double-threshold detection with edge connection.
S202 thus specifically comprises the following steps: a Sobel edge detection step, a non-maximum suppression processing step, and a double-threshold detection and edge connection step.
Sobel edge detection step: a plane convolution operation is carried out on each pixel point in the binarized image with the transverse convolution factor and the longitudinal convolution factor of the Sobel convolution factor, to obtain the gradient amplitude of each pixel point in the binarized image.
Specifically, a Sobel convolution factor is used to calculate the gradient amplitude G and the direction theta of each pixel point in the binarized image. In the embodiment of the present application, the Sobel convolution factor includes two 3×3 matrices, a transverse convolution factor Gx and a longitudinal convolution factor Gy, whose expressions (the standard Sobel factors) are as follows:

Gx = [[-1, 0, +1],
      [-2, 0, +2],
      [-1, 0, +1]]

Gy = [[-1, -2, -1],
      [ 0,  0,  0],
      [+1, +2, +1]]
wherein Gx is used to detect horizontal edges and Gy is used to detect vertical edges. And carrying out plane convolution on the transverse convolution factor Gx and the longitudinal convolution factor Gy and each pixel point in the binarized image respectively, so as to calculate the gradient amplitude G and the gradient direction theta of each pixel point in the binarized image.
The expressions for calculating the gradient amplitude G and the gradient direction theta of each pixel point in the binarized image are as follows:

G = sqrt((Gx * I)^2 + (Gy * I)^2)

theta = arctan((Gy * I) / (Gx * I))

wherein I represents a pixel point (and its neighborhood) in the binarized image, and * denotes the plane convolution.
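A sketch of this gradient computation with the standard Sobel factors (note that cv2.filter2D performs correlation rather than convolution, which only flips the sign of the responses and leaves the amplitude G unchanged):

```python
import cv2
import numpy as np

# Transverse (Gx) and longitudinal (Gy) Sobel convolution factors.
gx_factor = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
gy_factor = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

# 'binarized' is assumed to be the binarized pupil image from S201.
img = binarized.astype(np.float64)
gx = cv2.filter2D(img, -1, gx_factor)   # response of the transverse factor
gy = cv2.filter2D(img, -1, gy_factor)   # response of the longitudinal factor

amplitude = np.sqrt(gx ** 2 + gy ** 2)  # gradient amplitude G
theta = np.arctan2(gy, gx)              # gradient direction theta
```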
The inventors found that after the gradient is calculated for each pixel point in the binarized image containing the pupil, extracting the pupil edge directly from the gradient amplitudes may yield a blurred (thick) edge. To avoid this, as an example, the non-maximum suppression step performs "edge thinning" on the pupil edge: local maxima of the gradient amplitude are found, and the other gradient values in the binarized image are suppressed to 0, rejecting part of the non-edge pixel points. For example, the pixel points may be divided into groups by image area, each group containing several pixel points (for example, 10); a preset number of pixel points with the largest gradient amplitudes in each group (for example, 3) are kept, and the gradient amplitudes of the remaining pixel points in each group are replaced with 0.
After the non-maximum suppression processing step, the double-threshold detection and edge connection step is performed: first pixel points meeting a preset condition in the binarized image are extracted as target pixel points, wherein the preset condition includes:
the gradient amplitude of the first pixel point is larger than a preset first threshold value;
or, in the case that the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold value and larger than a preset second threshold value, a pixel point with a gradient amplitude larger than the preset first threshold value exists in the eight-neighborhood of the first pixel point.
In this embodiment of the present application, a first pixel point refers to any one or more pixel points in the binarized image that satisfy the preset condition.
Specifically, real and potential edges are determined by setting a high threshold and a low threshold. After non-maximum suppression, the pixels left in the binarized image represent the actual edges of the pupil more accurately. For each pixel remaining after non-maximum suppression, let its gradient amplitude be G0, and let the preset first threshold (high threshold) and the preset second threshold (low threshold) be G1 and G2 respectively. When G0 > G1, the pixel is considered a strong-edge pixel. When G0 < G2, the pixel is not considered an edge point and is discarded. When G2 < G0 < G1, the pixel is considered a weak-edge pixel. A weak-edge pixel is retained as a real edge if its eight-neighborhood contains a strong-edge pixel; otherwise it is suppressed, i.e., eliminated. In this way, the first pixel points whose gradient amplitudes meet the preset condition, namely the target pixel points on the edge of the pupil, are obtained.
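In practice, the whole chain of S202 (gradient computation, non-maximum suppression, double-threshold detection and edge connection) is what the classical Canny detector bundles into a single call; a one-call sketch with hypothetical values for the low threshold G2 and the high threshold G1:

```python
import cv2

# 'binarized' is the image from S201; the threshold values are assumptions.
low_threshold, high_threshold = 50, 150   # G2 and G1
edges = cv2.Canny(binarized, low_threshold, high_threshold)
# Non-zero pixels of 'edges' are the target pixel points on the pupil edge.
```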
After obtaining the target pixel point on the edge of the pupil, S203 is executed, and the contour of the pupil is determined according to the target pixel point. For example, the contour of the pupil can be obtained by connecting the target pixel points by a set program.
Fig. 5 is a schematic diagram of extracting the contour of the pupil in step S102 in the embodiment of the present application. As shown in fig. 5, the contour of the pupil composed of a plurality of target pixel points can be extracted from the binarized image including the pupil region by S102.
The above is a description of an example in which S102 may directly process the target eye image acquired in S101 to obtain the contour of the pupil.
As another implementation manner of the present application, in order to avoid that noise and invalid information in the target eye image affect S102 and subsequent steps, an image preprocessing step may be further included before S102 is performed.
Specifically, in the process of capturing the target eye image by a device such as a camera having a photographing function, noise of different degrees and interference of invalid information may be introduced. Noise can affect the quality of the target eye image, and invalid information can cause difficulties in subsequent analysis and processing of the target eye image. Therefore, in order to avoid that noise and invalid information in the target eye image affect S102 and subsequent steps, the acquired target eye image may also be preprocessed before S102 is performed. Wherein, the preprocessing may include: gaussian filtering, open operation and closed operation.
Gaussian filtering, also called Gaussian smoothing, performs a weighted average over the pixels of an image according to the weight distribution of a Gaussian function, smoothing the pixel values and producing a blurring effect, so that the influence of interference information on subsequent operations such as the image processing of S102 is reduced. In the embodiment of the application, a two-dimensional zero-mean discrete Gaussian function with excellent smoothing properties is selected as the smoothing filter of the image; its expression (the standard two-dimensional Gaussian) is as follows:

G(x, y) = (1 / (2 * pi * sigma^2)) * exp(-(x^2 + y^2) / (2 * sigma^2))
wherein sigma is the standard deviation, also called the Gaussian kernel radius; the larger the sigma value, the more obvious the smoothing effect. x and y are the point coordinates (abscissa and ordinate). In the embodiment of the application, the Gaussian kernel radius sigma is 1.4 and the Gaussian template size is 7×7. Gaussian filtering is performed by sliding-window convolution with the Gaussian template: the weighted average gray value of the pixels in the window replaces the gray value of the pixel at the window center, each pixel in the image is scanned in turn, and the Gaussian-smoothed image is finally obtained. Fig. 6a is the original target eye image; fig. 6b is the Gaussian-filtered target eye image. Comparing fig. 6a and fig. 6b, interference information such as eyelashes and iris textures becomes blurred after Gaussian filtering, so its influence on the subsequent steps can be reduced.
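A sketch of this smoothing step with the values given in the embodiment (7×7 template, sigma = 1.4); cv2.GaussianBlur performs the sliding-window weighted averaging described above:

```python
import cv2

# 'eye_gray' is the grayscale target eye image; the template size and sigma
# are the values stated in this embodiment.
smoothed = cv2.GaussianBlur(eye_gray, ksize=(7, 7), sigmaX=1.4)
```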
After the Gaussian filtering, interference information in the image is well suppressed, but fine "stains", holes and the like may still exist. To reduce the influence of such stains and holes, as an example, the embodiment of the application further applies morphological dilation and erosion to the Gaussian-filtered image. Different combined orders of erosion and dilation form the open and closed operations of image morphology: eroding the image first and then dilating it is called the open operation, and dilating first and then eroding is called the closed operation.
Specifically, let f(x, y) be the input image and b(x, y) the structuring element used in the open and closed operations; as an example, the embodiment of the present application adopts a square structuring element of size 7×7. Performing the open and closed operations on the input image f with the structuring element b is expressed as follows, where ⊙ denotes erosion and ⊕ denotes dilation:

f ∘ b = (f ⊙ b) ⊕ b (6)

f · b = (f ⊕ b) ⊙ b (7)

wherein expression (6) is the open operation and expression (7) is the closed operation.
The embodiment of the application performs the combination of the open operation and the closed operation on the Gaussian-filtered target eye image. First, the open operation is applied to the Gaussian-filtered image to filter out tiny objects, break narrow connections and eliminate burrs, so that the boundary of the pupil area in the image becomes smoother. Fig. 6c is the target eye image after the open operation; as shown in fig. 6c, the burrs are essentially filtered out and the boundary of the pupil area is smoother. Then, on the basis of the open operation, the closed operation is applied to fill the small holes in the pupil area and connect adjacent objects across narrow breaks. Fig. 6d is the target eye image after the closed operation. As is evident from comparing fig. 6a and fig. 6d, after this series of image preprocessing the interference information in the target eye image has been filtered out to a large extent, which provides a good basis for the subsequent steps.
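A sketch of this open-then-close combination with the 7×7 square structuring element of the embodiment (OpenCV's morphologyEx is one common realization):

```python
import cv2
import numpy as np

kernel = np.ones((7, 7), np.uint8)  # square structuring element b, size 7x7

# Open operation (erode, then dilate): removes tiny objects and burrs.
opened = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, kernel)
# Closed operation (dilate, then erode): fills small holes in the pupil area.
preprocessed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```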
After preprocessing the target eye image, S102 is performed to determine the contour of the pupil in the preprocessed target eye image, and the specific process may be referred to the description of S102 above, which is not repeated herein.
The above is a specific implementation of S102, and a specific implementation of S103 is described below.
With continued reference to fig. 2, in S103, a circumscribed pattern of the contour is determined, and the coordinates of the tangent points of the contour and the circumscribed pattern in the target eye image are acquired.
Specifically, after the target pixel points on the edge of the pupil and the contour of the pupil are obtained in S102, the first coordinates of each target pixel point in the target eye image are obtained in S103, and the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate and the first coordinate with the largest ordinate among the first coordinates are taken as the coordinates of the tangent points.
In the embodiment of the application, a first coordinate is the coordinate of a target pixel point in the target eye image. Note that the binarization of the target eye image changes the gray values of the pixel points but leaves their coordinates unchanged; in other words, the coordinates of a pixel point in the binarized image are the same as its coordinates in the target eye image.
Therefore, once the target pixel points on the edge of the pupil are obtained, the coordinates of each target pixel point in the binarized image, and hence the first coordinates of each target pixel point in the target eye image, can be obtained.
Fig. 7 schematically shows a circumscribed pattern of the pupil according to an embodiment of the present application. As shown in fig. 7, taking a circumscribed rectangle as an example, the contour of the pupil and the circumscribed rectangle have four tangent points, P1', P2', P3' and P4'. The coordinates of the four tangent points are the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate and the first coordinate with the largest ordinate, i.e., the coordinates of the leftmost, rightmost, lowest and highest target pixel points on the contour. The circumscribed rectangle of the contour is thus determined by the straight lines that pass through the four tangent points and are parallel to the x-axis and y-axis respectively.
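A sketch of S103 for the circumscribed-rectangle case (variable names are illustrative; 'edges' is assumed to be the edge map produced in S202):

```python
import numpy as np

# First coordinates (x, y) of the target pixel points on the pupil edge;
# np.argwhere returns (row, col) = (y, x), so the columns are swapped.
edge_points = np.argwhere(edges > 0)[:, ::-1]

p1 = edge_points[np.argmin(edge_points[:, 0])]  # smallest abscissa (leftmost)
p2 = edge_points[np.argmax(edge_points[:, 0])]  # largest abscissa (rightmost)
p3 = edge_points[np.argmin(edge_points[:, 1])]  # smallest ordinate
p4 = edge_points[np.argmax(edge_points[:, 1])]  # largest ordinate
tangent_points = np.array([p1, p2, p3, p4], dtype=float)
```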
The above is a specific implementation of S103, and a specific implementation of S104 is described below.
S104, determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent points.
Specifically, after the coordinates of the tangent points of the pupil contour and the circumscribed pattern are obtained, the coordinates, i.e. the position, of the center point of the pupil in the target eye image can be obtained, for example, by calculating the average of the coordinates of the tangent points.
To visually display the position of the center point of the pupil, as an example, the method may further include: marking the position of the center point of the pupil in the target eye image with a preset identifier. The preset identifier may be any symbol, graphic or the like; this is not limited in this application.
Fig. 8 schematically shows the result of pupil center positioning in the embodiment of the present application. As shown in fig. 8, after obtaining the coordinates or positions of the center point of the pupil in the target eye image, the position of the center point of the pupil in the target eye image may be marked with a "+" symbol.
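A sketch of S104 and of the marking step, continuing the variables above (the marker color and size are arbitrary choices; for a pupil contour close to an ellipse, the mean of the four tangent points coincides with the center of the circumscribed rectangle):

```python
import cv2

# Center point of the pupil: average of the four tangent-point coordinates.
center_x, center_y = tangent_points.mean(axis=0)

# Mark the center with a '+' in a color copy of the target eye image.
marked = cv2.cvtColor(eye_gray, cv2.COLOR_GRAY2BGR)
cv2.drawMarker(marked, (int(round(center_x)), int(round(center_y))),
               color=(0, 0, 255), markerType=cv2.MARKER_CROSS,
               markerSize=12, thickness=1)
```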
To verify the feasibility and effect of the pupil center positioning method provided by the embodiment of the application, the inventors retrieved 756 target eye images from an eye image database and ran a traversal test; fig. 9 shows some of the target eye images on which pupil center positioning was performed with the method of the embodiment of the application. As shown in fig. 9, the method positions the center point of the pupil well: the positioned center point coincides with the actual center point with almost no deviation, indicating that the pupil center positioning method of the embodiment of the application can accurately position the center point of the pupil.
Under the same conditions, the inventors performed pupil center positioning on the 756 target eye images using the least-squares ellipse fitting method and the pupil center positioning method of the embodiment of the application respectively; the statistical results are shown in Table 1.
Table 1 shows the results of pupil center positioning on the 756 target eye images using the least-squares ellipse fitting method and the pupil center positioning method of the embodiments of the present application.
As shown in Table 1, the recognition rate of the pupil center positioning method of the embodiment of the application is 98.3%, and that of the least-squares ellipse fitting method is 98.8%; the recognition rates of the two methods are similar. However, the method of the embodiment of the application takes less time on average to position the pupil center, indicating that it shortens the positioning time and realizes rapid positioning.
Based on the pupil center positioning method provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the pupil center positioning device. Please refer to the following examples.
Referring first to fig. 10, the pupil center positioning device 100 provided in the embodiment of the present application may include the following units:
An acquisition unit 1001 for acquiring a target eye image;
a first determining unit 1002, configured to determine an outline of a pupil in the target eye image;
a second determining unit 1003 configured to determine a circumscribed pattern of the contour and acquire the coordinates of the tangent points of the contour and the circumscribed pattern in the target eye image;
a third determining unit 1004 is configured to determine a position of a center point of the pupil in the target eye image according to coordinates of the tangent point.
The pupil center positioning device provided by the embodiment of the application first acquires a target eye image; then determines the contour of the pupil in the target eye image and the circumscribed figure of the contour; and finally determines the position of the center point of the pupil in the target eye image according to the coordinates of the tangent points of the contour and the circumscribed graph in the target eye image. Because the position of the center point of the pupil is determined from the tangent-point coordinates of the pupil and its circumscribed graph, no mathematical fitting equation is needed, the coordinate calculation is simple and does not involve a large number of mathematical operations, the calculation amount is small, and rapid pupil center positioning can be realized.
As an implementation manner of the present application, in order to save time for pupil center positioning and achieve rapid positioning, the acquisition unit 1001 may acquire the target eye image in a near-eye manner. Compared with a desktop-acquired eye image, the manner adopted in the embodiment of the application saves the time otherwise consumed in extracting the eye region from a facial image, thereby shortening pupil center positioning and enabling rapid positioning; the acquired target eye image also has clearer details, which facilitates effective analysis of eye features such as the pupil and the iris.
As an implementation manner of the present application, the first determining unit 1002 is specifically configured to perform image binarization processing on a target eye image according to a preset threshold value, so as to obtain a binarized image including a pupil; screening pixel points in the binarized image to obtain target pixel points on the edge of the pupil; and determining the contour of the pupil according to the target pixel points.
As another implementation manner of the present application, in order to make the pupil area in the converted binary image more accurate and reasonable, the pupil center positioning device 100 may further include: the preset threshold setting unit is used for obtaining the occurrence times of each gray value in the preset gray value range in the target eye image; constructing a gray level histogram of the corresponding relation between each gray level value and the occurrence times; and taking a gray value corresponding to a minimum value between the first maximum value and the second maximum value of the occurrence times in the gray histogram as a preset threshold value.
As an implementation manner of the present application, in order to accurately extract the contour of the pupil, the first determining unit 1002 is specifically configured to: carry out a plane convolution operation on each pixel point in the binarized image with the transverse convolution factor and the longitudinal convolution factor of the Sobel convolution factor to obtain the gradient amplitude of each pixel point in the binarized image; perform non-maximum suppression processing on the gradient amplitudes; and extract first pixel points meeting a preset condition in the binarized image as target pixel points, wherein the preset condition includes: the gradient amplitude of the first pixel point is larger than a preset first threshold value; or, in the case that the gradient amplitude of the first pixel point is smaller than or equal to the preset first threshold value and larger than a preset second threshold value, a pixel point with a gradient amplitude larger than the preset first threshold value exists in the eight-neighborhood of the first pixel point.
As another implementation manner of the present application, in order to avoid that noise and invalid information in the target eye image affect the subsequent steps, the pupil center positioning device 100 may further include: the preprocessing unit is used for preprocessing the target eye image, and the preprocessing comprises the following steps: gaussian filtering, open operation and closed operation.
As an implementation manner of the present application, the second determining unit 1003 is specifically configured to: acquiring first coordinates of each target pixel point in a target eye image; and taking the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate and the first coordinate with the largest ordinate in the first coordinates as the coordinates of each tangential point.
As another implementation of the present application, in order to visually display the position of the center point of the pupil, the pupil center positioning device 100 may further include: and the marking unit is used for marking the position of the central point of the pupil in the target eye image by using a preset mark.
The modules/units in the apparatus shown in fig. 10 have functions of implementing the steps in fig. 2, and achieve corresponding technical effects, which are not described herein for brevity.
Based on the pupil center positioning method provided by the embodiment, correspondingly, the application also provides a specific implementation mode of the electronic equipment. Please refer to the following examples.
Fig. 11 shows a schematic hardware structure of an electronic device according to an embodiment of the present application. As shown in fig. 11, the electronic device may include a processor 1101 and a memory 1102 storing computer program instructions.
In particular, the processor 1101 described above may include a central processing unit (Central Processing Unit, CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 1102 may include mass storage for data or instructions. By way of example and not limitation, memory 1102 may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, or a universal serial bus (USB) drive, or a combination of two or more of these. In one example, memory 1102 may include removable or non-removable (or fixed) media, or memory 1102 is a non-volatile solid-state memory. Memory 1102 may be internal or external to the electronic device.
In one example, memory 1102 may be Read Only Memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
Memory 1102 may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to a method according to an aspect of the present application.
The processor 1101 reads and executes the computer program instructions stored in the memory 1102 to implement the methods/steps S101 to S104 in the embodiment shown in fig. 2, and achieve the corresponding technical effects achieved by executing the methods/steps in the embodiment shown in fig. 2, which are not described herein for brevity.
In one example, the electronic device may also include a communication interface 1103 and a bus 1110. As shown in fig. 11, the processor 1101, the memory 1102, and the communication interface 1103 are connected to each other through a bus 1110 and perform communication with each other.
The communication interface 1103 is mainly used for implementing communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 1110 includes hardware, software, or both, coupling the components of the electronic device to each other. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, another suitable bus, or a combination of two or more of these. Bus 1110 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate particular buses, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the pupil center positioning method in the above embodiments, embodiments of the present application may provide a computer storage medium. The computer storage medium has computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the pupil center positioning methods of the above embodiments.
In summary, the pupil center positioning method, device, equipment, and computer storage medium provided in the embodiments of the present application first acquire a target eye image; then determine the contour of the pupil in the target eye image and the circumscribed figure of that contour; and finally determine the position of the pupil's center point in the target eye image from the coordinates of the tangent points between the contour and the circumscribed figure. Because the position of the pupil's center point is determined from the tangent point coordinates of the pupil contour and its circumscribed figure, no mathematical fitting equation is needed during positioning, and the coordinate calculation is simple and involves no heavy mathematical operations. The amount of computation is therefore small, the time to compute and locate the center point is short, and fast pupil center positioning can be achieved.
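By way of illustration and not limitation, the simplicity of this center computation can be sketched in a few lines of Python with NumPy. This is a minimal, non-limiting sketch, not the claimed implementation itself; the function name and the `edge_points` input are hypothetical, standing in for the pupil-edge pixel coordinates produced by the edge-detection step.

```python
import numpy as np

def pupil_center_from_tangent_points(edge_points):
    """Minimal sketch: estimate the pupil center as the mean of the four
    tangent points between the pupil contour and its circumscribed
    (axis-aligned) rectangle. `edge_points` is an (N, 2) array of (x, y)
    pixel coordinates on the pupil edge; all names here are hypothetical."""
    xs, ys = edge_points[:, 0], edge_points[:, 1]
    # The tangent points are the contour pixels with the extreme
    # abscissae and ordinates (left, right, top, bottom).
    tangent_points = np.array([
        edge_points[np.argmin(xs)],   # smallest abscissa
        edge_points[np.argmax(xs)],   # largest abscissa
        edge_points[np.argmin(ys)],   # smallest ordinate
        edge_points[np.argmax(ys)],   # largest ordinate
    ])
    # The center is the arithmetic mean of the tangent coordinates, so no
    # ellipse fitting or iterative optimization is involved.
    return tangent_points.mean(axis=0)

For a roughly circular pupil, the four extreme points lie near the ends of two perpendicular diameters, so their mean closely approximates the center.
```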
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, and so on. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the steps described above; that is, the steps may be performed in the order mentioned in the embodiments, in a different order, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (8)

1. A pupil center positioning method, comprising:
acquiring a target eye image;
determining a contour of a pupil in the target eye image;
determining a circumscribed figure of the contour, and acquiring coordinates of the tangent points between the contour and the circumscribed figure in the target eye image;
determining the position of the central point of the pupil in the target eye image according to the coordinates of the tangent points;
wherein the determining the contour of the pupil in the target eye image specifically comprises:
performing image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image comprising the pupil, wherein the preset threshold value is greater than the gray value of the pupil region in the target eye image and smaller than the gray value of the iris region in the target eye image;
screening pixel points in the binarized image to obtain target pixel points on the edge of the pupil;
determining the contour according to the target pixel points;
wherein the screening of the pixel points in the binarized image to obtain the target pixel points on the edge of the pupil specifically comprises:
sequentially carrying out Sobel edge detection, non-maximum suppression, double-threshold detection and edge connection on the pixel points in the binarized image to obtain target pixel points positioned on the edge of the pupil;
wherein, in the case where the circumscribed figure is a circumscribed rectangle, the acquiring the coordinates of the tangent points between the contour and the circumscribed rectangle in the target eye image specifically comprises:
acquiring first coordinates of each target pixel point in the target eye image;
taking the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate, and the first coordinate with the largest ordinate among the first coordinates as the coordinates of the tangent points, respectively;
wherein the determining the position of the center point of the pupil in the target eye image according to the coordinates of the tangent points comprises:
and calculating the average of the coordinates of the tangent points to obtain the coordinates of the center point of the pupil in the target eye image.
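By way of illustration and not limitation, the binarization step recited above might look as follows in Python with OpenCV (an assumed toolchain, not part of the claims); the function name and the inverted-threshold choice are hypothetical.

```python
import cv2

def binarize_pupil(gray_eye_image, preset_threshold):
    """Illustrative sketch of the claimed binarization: the preset threshold
    is assumed to lie between the pupil gray level and the iris gray level,
    so thresholding separates the dark pupil from the rest of the eye."""
    # THRESH_BINARY_INV maps pixels at or below the threshold (the dark
    # pupil) to 255 and brighter pixels (iris, sclera) to 0, a convenient
    # polarity for the subsequent edge extraction.
    _, binary = cv2.threshold(gray_eye_image, preset_threshold, 255,
                              cv2.THRESH_BINARY_INV)
    return binary
```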
2. The method according to claim 1, wherein before the performing image binarization processing on the target eye image according to the preset threshold value to obtain the binarized image comprising the pupil, the method further comprises:
acquiring the number of occurrences of each gray value within a preset gray value range in the target eye image;
constructing a gray-level histogram of the correspondence between each gray value and its number of occurrences;
and taking, as the preset threshold value, the gray value corresponding to the minimum between the first maximum and the second maximum of the occurrence counts in the gray-level histogram.
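As a non-limiting sketch of this threshold selection, the following Python routine builds the gray-level histogram and takes the valley between its first two local maxima; the light smoothing and the neighbor-based peak test are assumptions made for illustration, and the code presumes a roughly bimodal histogram (a dark pupil peak followed by a brighter iris peak).

```python
import numpy as np

def threshold_from_histogram(gray_eye_image):
    """Illustrative sketch: choose the preset threshold as the gray value at
    the minimum (valley) between the first and second maxima of the
    gray-level histogram. Assumes a roughly bimodal 8-bit histogram."""
    hist = np.bincount(gray_eye_image.ravel(), minlength=256).astype(float)
    # Light smoothing so isolated single-bin spikes are not taken as peaks
    # (an assumption for illustration, not part of the claim).
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    peaks = [g for g in range(1, 255)
             if hist[g] > hist[g - 1] and hist[g] >= hist[g + 1]]
    first_peak, second_peak = peaks[0], peaks[1]
    # The valley between the pupil peak and the iris peak.
    return first_peak + int(np.argmin(hist[first_peak:second_peak + 1]))
```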
3. The method of claim 1, wherein the screening of the pixel points in the binarized image to obtain the target pixel points on the edge of the pupil specifically comprises:
performing a planar convolution operation on each pixel point in the binarized image using the transverse Sobel convolution kernel and the longitudinal Sobel convolution kernel to obtain the gradient magnitude of each pixel point in the binarized image;
performing non-maximum suppression on the gradient magnitudes;
extracting first pixel points in the binarized image that meet a preset condition as the target pixel points, wherein the preset condition comprises:
the gradient magnitude of the first pixel point is greater than a preset first threshold value;
and, in the case where the gradient magnitude of the first pixel point is smaller than or equal to the preset first threshold value and greater than a preset second threshold value, a pixel point with a gradient magnitude greater than the preset first threshold value exists in the eight-neighborhood of the first pixel point.
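The sequence recited in claims 1 and 3 (Sobel gradients, non-maximum suppression, double-threshold detection with eight-neighborhood edge connection) corresponds to a classical Canny-style edge detector. Purely as an illustrative stand-in, and not as the claimed implementation, OpenCV's Canny can sketch the step; the two threshold values below are hypothetical.

```python
import cv2
import numpy as np

def pupil_edge_pixels(binary_image, low_threshold=50, high_threshold=150):
    """Illustrative stand-in for the claimed edge step: Canny internally
    performs Sobel filtering, non-maximum suppression, and double-threshold
    hysteresis (edge connection). Threshold values are hypothetical."""
    edges = cv2.Canny(binary_image, low_threshold, high_threshold)
    # Collect the (x, y) coordinates of the detected edge pixels, i.e. the
    # "first coordinates" used later for the tangent points.
    ys, xs = np.nonzero(edges)
    return np.column_stack([xs, ys])
```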
4. The method of claim 1, wherein prior to the determining the contour of the pupil in the target eye image, the method further comprises:
preprocessing the target eye image, wherein the preprocessing comprises: Gaussian filtering, an opening operation, and a closing operation;
wherein the determining the contour of the pupil in the target eye image specifically comprises:
determining the contour of the pupil in the preprocessed target eye image.
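A minimal sketch of this preprocessing, assuming OpenCV and hypothetical kernel sizes (the claim does not fix them), might be:

```python
import cv2

def preprocess_eye_image(gray_eye_image, ksize=5):
    """Illustrative sketch of the claimed preprocessing: Gaussian filtering
    to suppress noise, then morphological opening and closing to remove
    small bright artifacts (e.g., glints) and fill small holes. The kernel
    shape and sizes are hypothetical choices."""
    blurred = cv2.GaussianBlur(gray_eye_image, (ksize, ksize), 0)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    opened = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return closed
```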
5. The method of claim 1, wherein after the determining the location of the center point of the pupil in the target eye image, the method further comprises:
and marking the position of the center point in the target eye image by using a preset mark.
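For completeness, marking the located center with a preset mark could be sketched as below; the cross marker and its color and size are assumed choices, since the claim leaves the form of the mark unspecified.

```python
import cv2

def mark_pupil_center(gray_eye_image, center_xy):
    """Illustrative sketch: draw a preset mark (here an assumed red cross)
    at the located pupil center and return the annotated image."""
    x, y = int(round(center_xy[0])), int(round(center_xy[1]))
    marked = cv2.cvtColor(gray_eye_image, cv2.COLOR_GRAY2BGR)
    cv2.drawMarker(marked, (x, y), color=(0, 0, 255),
                   markerType=cv2.MARKER_CROSS, markerSize=12, thickness=2)
    return marked
```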
6. A pupil center positioning device, comprising:
an acquisition unit configured to acquire a target eye image;
a first determining unit configured to determine a contour of a pupil in the target eye image;
a second determining unit configured to determine a circumscribed figure of the contour and acquire coordinates of the tangent points between the contour and the circumscribed figure in the target eye image;
a third determining unit configured to determine a position of a center point of the pupil in the target eye image according to the coordinates of the tangent points;
wherein the first determining unit is specifically configured to: perform image binarization processing on the target eye image according to a preset threshold value to obtain a binarized image comprising the pupil, wherein the preset threshold value is greater than the gray value of the pupil region in the target eye image and smaller than the gray value of the iris region in the target eye image; screen the pixel points in the binarized image to obtain target pixel points on the edge of the pupil; and determine the contour according to the target pixel points;
wherein the first determining unit is further specifically configured to: sequentially perform Sobel edge detection, non-maximum suppression, double-threshold detection, and edge connection on the pixel points in the binarized image to obtain the target pixel points located on the edge of the pupil;
wherein the second determining unit is specifically configured to: in the case where the circumscribed figure is a circumscribed rectangle, acquire first coordinates of each target pixel point in the target eye image; and take the first coordinate with the smallest abscissa, the first coordinate with the largest abscissa, the first coordinate with the smallest ordinate, and the first coordinate with the largest ordinate among the first coordinates as the coordinates of the tangent points, respectively;
and wherein the third determining unit is specifically configured to calculate the average of the coordinates of the tangent points to obtain the coordinates of the center point of the pupil in the target eye image.
7. An electronic device, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the pupil center positioning method according to any one of claims 1 to 5.
8. A computer storage medium, characterized in that the computer storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the pupil center positioning method according to any one of claims 1 to 5.
CN202010993486.9A 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium Active CN112258569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010993486.9A CN112258569B (en) 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN112258569A CN112258569A (en) 2021-01-22
CN112258569B true CN112258569B (en) 2024-04-09

Family

ID=74232461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010993486.9A Active CN112258569B (en) 2020-09-21 2020-09-21 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112258569B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112989939B (en) * 2021-02-08 2023-04-07 佛山青藤信息科技有限公司 Strabismus detection system based on vision
CN114093018B (en) * 2021-11-23 2023-07-07 河南省儿童医院郑州儿童医院 Vision screening equipment and system based on pupil positioning
CN115170992B (en) * 2022-09-07 2022-12-06 山东水发达丰再生资源有限公司 Image identification method and system for scattered blanking of scrap steel yard
CN115294202B (en) * 2022-10-08 2023-01-31 南昌虚拟现实研究院股份有限公司 Pupil position marking method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572735B (en) * 2018-04-24 2021-01-26 京东方科技集团股份有限公司 Pupil center positioning device and method and virtual reality equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002000567A (en) * 2000-06-23 2002-01-08 Kansai Tlo Kk Method of measuring pupil center position and method of detecting view point position
CN101211413A (en) * 2006-12-28 2008-07-02 台正 Quick pupil center positioning method based on vision frequency image processing
CN103136512A (en) * 2013-02-04 2013-06-05 重庆市科学技术研究院 Pupil positioning method and system
CN104809458A (en) * 2014-12-29 2015-07-29 华为技术有限公司 Pupil center positioning method and pupil center positioning device
CN106326880A (en) * 2016-09-08 2017-01-11 电子科技大学 Pupil center point positioning method
CN109766818A (en) * 2019-01-04 2019-05-17 京东方科技集团股份有限公司 Pupil center's localization method and system, computer equipment and readable storage medium storing program for executing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Low-Cost Pupil Center Localization Algorithm Based on Maximized Integral Voting of Circular Hollow Kernels; IBRAHIM FURKAN INCE et al.; The Computer Journal; pp. 1001-1015 *
Research on fast pupil center localization methods; WANG Changyuan et al.; Computer Engineering and Applications; pp. 196-198, 201 *
A fast pupil center localization algorithm for near-eye applications; ZHAO Haoran et al.; Telecommunication Engineering; pp. 1102-1107 *

Also Published As

Publication number Publication date
CN112258569A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
CN112258569B (en) Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium
CN107808378B (en) Method for detecting potential defects of complex-structure casting based on vertical longitudinal and transverse line profile features
CN109242853B (en) PCB defect intelligent detection method based on image processing
Guan et al. Accurate segmentation of partially overlapping cervical cells based on dynamic sparse contour searching and GVF snake model
CN112837290B (en) Crack image automatic identification method based on seed filling algorithm
Thalji et al. Iris Recognition using robust algorithm for eyelid, eyelash and shadow avoiding
Lin et al. Detection and segmentation of cervical cell cytoplast and nucleus
CN107240086B (en) A kind of fabric defects detection method based on integral nomography
CN105447489B (en) A kind of character of picture OCR identifying system and background adhesion noise cancellation method
CN117094975A (en) Method and device for detecting surface defects of steel and electronic equipment
CN109509168B (en) A kind of details automatic analysis method for picture quality objective evaluating dead leaf figure
Chandra et al. A survey on advanced segmentation techniques in image processing applications
CN117078688B (en) Surface defect identification method for strong-magnetic neodymium-iron-boron magnet
Lin et al. Image segmentation based on edge detection and region growing for thinprep-cervical smear
EP3475915A1 (en) Visual cardiomyocyte analysis
EP3293672A1 (en) Particle boundary identification
CN104102911A (en) Image processing for AOI (automated optical inspection)-based bullet appearance defect detection system
Wang et al. Color edge detection using the normalization anisotropic Gaussian kernel and multichannel fusion
Chen et al. Robust iris segmentation algorithm based on self-adaptive Chan–Vese level set model
CN114332138A (en) Contour characteristic-based cell nucleus segmentation method, device, equipment and storage medium
CN110458042B (en) Method for detecting number of probes in fluorescent CTC
CN114119569A (en) Imaging logging image crack segmentation and identification method and system based on machine learning
Song et al. A new separation algorithm for overlapping blood cells using shape analysis
CN117392066B (en) Defect detection method, device, equipment and storage medium
Reddy et al. Optic Disk Segmentation through Edge Density Filter in Retinal Images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230802

Address after: Room 702, Block C, Swan Tower, No. 111 Linghu Avenue, Xinwu District, Wuxi City, Jiangsu Province, 214028

Applicant after: Wuxi Tanggu Semiconductor Co.,Ltd.

Address before: 215128 unit 4-a404, creative industry park, 328 Xinghu street, Suzhou Industrial Park, Suzhou City, Jiangsu Province

Applicant before: Suzhou Tanggu Photoelectric Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant