CN108256391A - Pupil region localization method based on projection integrals and edge detection - Google Patents

Pupil region localization method based on projection integrals and edge detection

Info

Publication number
CN108256391A
CN108256391A (application CN201611237359.6A)
Authority
CN
China
Prior art keywords
pupil region
human eye
integral
edge detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611237359.6A
Other languages
Chinese (zh)
Inventor
钟鸿飞
覃争鸣
杨旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Original Assignee
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou filed Critical Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority to CN201611237359.6A
Publication of CN108256391A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 - Summing image-intensity values; Histogram projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present invention proposes a pupil region localization method based on projection integrals and edge detection, comprising: S1, image preprocessing, in which the face image captured by the camera is preprocessed; S2, projection integral calculation, in which the gray-level variation of the face image is computed with the integral projection method; S3, eye region localization, in which the eye region is located from the eye-spacing proportions and the integral results; S4, pupil region localization, in which the pupil region is located by edge detection. The proposed scheme combines facial structure features with gray-level projection integrals, computing the gray-level variation of the eye region in both the horizontal and vertical directions to locate the eye region; an edge detection operator is then used to locate the pupil region precisely. No LED light source shines directly into the user's eyes, and the computation is fast and easy to implement.

Description

Pupil region localization method based on projection integrals and edge detection
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a pupil region localization method based on projection integrals and edge detection.
Background art
To help disabled and elderly people stay in contact and communicate with the outside world, improve their ability to live independently, and lighten the burden on families and society, many scientists around the world have begun to explore and develop novel human-computer interaction methods. Interaction technology here covers both the interaction between a person and an actuator (such as a robot) and the interaction between the actuator and its environment. The significance of the former is that a person can carry out planning and decision-making that the actuator cannot manage under unknown or uncertain conditions; the significance of the latter is that a robot can complete tasks in hostile or remote environments that people cannot reach.
Traditional human-computer interaction devices mainly include keyboards, mice, handwriting pads, touch screens, and game controllers, all of which rely on the user's hand movements. For many disabled people, however, especially those with severe limb impairment (total paralysis, muscular dystrophy, etc.), these traditional devices usually cannot meet their normal interaction needs.
Human-computer interaction based on eye-movement information is realized by capturing user images with a camera and applying pattern recognition; because it is direct, natural, and bidirectional, it can replace the interactive functions of keyboard input and mouse movement. Common eye-movement interaction methods extract corneal reflection information with LED infrared light sources to obtain discriminative features, but LED infrared sources are easily affected by ambient light, and long-term LED illumination of the human eye easily causes damage.
The key to human-computer interaction based on eye-movement information is the accuracy of pupil localization and pupil action recognition. How to accurately locate the pupil region with a monocular camera under natural light is therefore of great significance for realizing efficient human-computer interaction based on eye-movement information.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art, and in particular to solve the problems that, in existing eye-movement-based human-computer interaction techniques using LED infrared light, the LED infrared sources are easily affected by ambient light and long-term LED illumination of the human eye easily causes damage.
To solve the above technical problems, the present invention proposes a pupil region localization method based on projection integrals and edge detection, whose main steps include:
S1, image preprocessing: the face image captured by the camera is preprocessed;
S2, projection integral calculation: the gray-level variation of the face image is computed with the integral projection method;
S3, coarse eye region localization: the eye region is located from the eye-spacing proportions and the integral results;
S4, pupil region localization: the pupil region is located by edge detection.
Further, the image preprocessing operations of step S1 include: S11 grayscale conversion, S12 histogram equalization, S13 binarization, and S14 image negative;
Further, the projection integral calculation of step S2 includes horizontal and vertical integral projection operations;
Further, in the projection integral calculation of step S2, the image is divided into left and right halves, and the projection integral is computed for each half separately;
Further, in the eye region localization of step S3, the eye region is obtained from the proportions based on the distance between the person's two eyes;
Further, the pupil region localization of step S4 includes: S41 Gaussian smoothing, S42 gradient magnitude and direction computation, S43 non-maximum suppression of the gradient magnitude, and S44 edge detection and linking.
Compared with the prior art, the present invention has the following advantageous effects:
The present scheme uses gray-level projection integrals, combining facial structure features with the gray projection integral to compute the gray-level variation of the eye region in both the horizontal and vertical directions and thereby locate the eye region; an edge detection operator is then used to locate the pupil region precisely. No LED light source shines directly into the user's eyes, and the computation is fast and easy to implement.
Description of the drawings
Fig. 1 is a flow chart of an embodiment of the pupil region localization method based on projection integrals and edge detection of the present invention.
Fig. 2 is the image binaryzation result schematic diagram of the embodiment of the present invention.
Fig. 3 is the image negative film result schematic diagram of the embodiment of the present invention.
Fig. 4 is the integral projection result of calculation curve graph of the embodiment of the present invention.
Fig. 5 is a schematic diagram of the eye proportion relationships of the embodiment of the present invention.
Fig. 6 is the pupil region locating effect figure of the embodiment of the present invention.
Specific embodiments
The present invention is described below in further detail and completeness with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.
Referring to Fig. 1, the main steps of the pupil region localization method based on projection integrals and edge detection of the embodiment of the present invention include:
S1, image preprocessing; the detailed process includes: S11 grayscale conversion, S12 histogram equalization, S13 binarization, and S14 image negative.
S11 grayscale conversion: the eye image captured by the camera is a color image, which carries a large amount of information and is slow to process. Given the high real-time requirements of human-computer interaction, converting the color image to grayscale is necessary. Grayscale conversion makes the R, G, and B component values of each color pixel equal; the gray value in the grayscale image equals the average of the RGB values in the original color image, i.e.
Gray = (R + G + B) / 3 (1)
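As an illustration, formula (1) can be sketched in Python with NumPy (the function name `to_gray` is our own, not from the patent):

```python
import numpy as np

def to_gray(rgb):
    """Grayscale by channel averaging, formula (1): Gray = (R + G + B) / 3."""
    rgb = rgb.astype(np.float64)  # avoid uint8 overflow when summing channels
    return (rgb[..., 0] + rgb[..., 1] + rgb[..., 2]) / 3.0
```

For a pixel (R, G, B) = (90, 120, 150) this yields a gray value of 120.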
S12 histogram equalization: histogram equalization pulls apart the gray levels of the image, or makes the intensity distribution uniform, so as to increase contrast and make image detail clear, achieving image enhancement. The specific method is:
First list all gray levels S_k (k = 0, 1, ..., L-1) of the original image; then count the number of pixels n_k at each gray level. The histogram of the original image is computed with formula (2), and then its cumulative histogram with formula (3):
p(S_k) = n_k / n, k = 0, 1, ..., L-1 (2)
t_k = Σ_{j=0}^{k} n_j / n (3)
where n is the total number of image pixels. Rounding the gray value t_k determines the S_k → t_k mapping relations; the pixel count n_k at each gray level of the new histogram is then tallied, and the new histogram is computed with formula (4):
p(t_k) = n_k / n (4)
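A minimal sketch of the equalization procedure in Python/NumPy (the name `equalize` is ours; `np.bincount` tallies the per-level counts n_k and `np.cumsum` forms the cumulative histogram):

```python
import numpy as np

def equalize(gray, levels=256):
    """Histogram equalization: map each gray level through the cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=levels)       # n_k for each level k
    cdf = np.cumsum(hist) / gray.size                        # cumulative histogram t_k
    mapping = np.round(cdf * (levels - 1)).astype(np.uint8)  # rounded S_k -> t_k map
    return mapping[gray]
```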
S13 image binarization: the image is binarized with the maximum between-class variance method (Otsu's method). The process is:
Suppose the image has L gray levels, the number of pixels with gray value i is n_i, and the image has N pixels in total. The gray histogram is normalized by letting
p_i = n_i / N (5)
A threshold t is set, and the pixels are divided by gray value into two classes c_0 and c_1. Class c_0 has probability ω_0 and mean μ_0:
ω_0 = Σ_{i=0}^{t} p_i, μ_0 = Σ_{i=0}^{t} i·p_i / ω_0 (6)
Class c_1 has probability ω_1 and mean μ_1:
ω_1 = 1 − ω_0, μ_1 = Σ_{i=t+1}^{L−1} i·p_i / ω_1 (7)
where the total mean is
μ = ω_0·μ_0 + ω_1·μ_1 (8)
It follows that the between-class variance σ²(t) of c_0 and c_1 is:
σ²(t) = ω_0(μ − μ_0)² + ω_1(μ_1 − μ)² (9)
Then t is varied from 0 to L − 1; the t at which σ²(t) is maximized is the optimal threshold, which yields the best binary image. The binarization result is shown in Fig. 2.
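The between-class variance search can be sketched as follows (a plain Python/NumPy rendering under our own names, not the patent's code):

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return the t that maximizes the between-class variance sigma^2(t)."""
    hist = np.bincount(gray.ravel(), minlength=levels).astype(np.float64)
    p = hist / hist.sum()                 # normalized histogram p_i
    mu = np.dot(np.arange(levels), p)     # total mean, mu = w0*mu0 + w1*mu1
    best_t, best_var = 0, -1.0
    w0 = mu0_sum = 0.0
    for t in range(levels):
        w0 += p[t]                        # class c0 probability
        mu0_sum += t * p[t]
        w1 = 1.0 - w0                     # class c1 probability
        if w0 == 0.0 or w1 == 0.0:
            continue                      # one class empty: variance undefined
        mu0 = mu0_sum / w0                # class c0 mean
        mu1 = (mu - mu0_sum) / w1         # class c1 mean
        var = w0 * (mu - mu0) ** 2 + w1 * (mu1 - mu) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(gray):
    """Binarize with the optimal threshold; foreground becomes 255."""
    return (gray > otsu_threshold(gray)).astype(np.uint8) * 255
```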
S14 image negative: the image negative maps the black parts of the binary image to white and the white parts to black, highlighting the eye region, which was originally black. The negative result is shown in Fig. 3.
S2, projection integral calculation. A face image may be rotated in-plane by some angle, so the person's two eyes are not necessarily on the same horizontal line; if a global horizontal projection were applied to the whole face image, the resulting vertical coordinates of the eyes would not be accurate enough. In this embodiment, the face image is therefore first split down the middle into left and right halves of equal size, containing the person's left and right eye respectively, and the horizontal integral projection is computed for each half separately.
Let I(x, y) be the pixel gray value of image I at location point (x, y). The vertical integral projection IPF_v(x) over the interval [y_1, y_2] and the horizontal integral projection IPF_h(y) over the interval [x_1, x_2] are respectively:
IPF_v(x) = ∫_{y_1}^{y_2} I(x, y) dy (10)
IPF_h(y) = ∫_{x_1}^{x_2} I(x, y) dx (11)
In this embodiment, horizontal integral projections are computed for the left and right halves; the results are shown in Fig. 4, where the horizontal axis represents each location point and the vertical axis the sum of the pixel values at that location. The location of the largest peak on the horizontal axis of Fig. 4(a) is the vertical coordinate y_R of the right eye; the location of the largest peak on the horizontal axis of Fig. 4(b) is the vertical coordinate y_L of the left eye; and the two locations of the largest crests in the two wave segments of Fig. 4(c) are the horizontal coordinates x_L and x_R of the left and right eyes, with x_R < x_L.
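In the discrete case the integral projections above become row and column sums; a sketch (function names are ours, assuming a bright-eye image after the negative operation of S14):

```python
import numpy as np

def integral_projections(gray):
    """Discrete integral projections: IPF_h sums each row, IPF_v sums each column."""
    ipf_h = gray.sum(axis=1)  # horizontal integral projection, one value per row
    ipf_v = gray.sum(axis=0)  # vertical integral projection, one value per column
    return ipf_h, ipf_v

def eye_rows(gray):
    """Split into left/right halves and take each half's peak row as the eye row."""
    h, w = gray.shape
    left, right = gray[:, : w // 2], gray[:, w // 2 :]
    return int(np.argmax(left.sum(axis=1))), int(np.argmax(right.sum(axis=1)))
```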
S3, eye region localization. The eye center coordinates computed with the gray-level integral projection method are the intersection points of the lines through the horizontal maximum and the vertical maximum of the gray integral: point (x_L, y_L) corresponds to the left eye center and point (x_R, y_R) to the right eye center. Referring to Fig. 5, which shows the eye proportion relationships, a rectangular eye window is marked out according to the figure; this rectangular eye window is the localized eye region.
S4, pupil region localization using edge detection; the detailed process includes: S41 Gaussian smoothing; S42 gradient magnitude and direction computation; S43 non-maximum suppression of the gradient magnitude; S44 edge detection and linking.
S41 Gaussian smoothing. Given the coarsely located eye region f(x, y), the eye region image is smoothed with a Gaussian function H(x, y):
H(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)) (12)
G(x, y) = f(x, y) * H(x, y) (13)
where G(x, y) is the smoothed eye region image and σ is the Gaussian kernel coefficient.
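A direct (unoptimized) rendering of the smoothing step in Python/NumPy; the sampled, normalized kernel and the brute-force zero-padded convolution are our own illustrative choices:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sampled 2-D Gaussian H(x, y), normalized so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()

def smooth(f, size=5, sigma=1.0):
    """G(x, y) = f(x, y) * H(x, y), with zero padding at the borders."""
    h = gaussian_kernel(size, sigma)
    pad = size // 2
    fp = np.pad(f.astype(np.float64), pad)
    out = np.empty(f.shape, dtype=np.float64)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = np.sum(fp[i:i + size, j:j + size] * h)
    return out
```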
S42 gradient magnitude and direction. The gradient magnitude and gradient direction of each pixel in the eye region image are computed with the first-order difference convolution masks H_x and H_y:
H_x = (1/2) · [−1 1; −1 1], H_y = (1/2) · [1 1; −1 −1] (14)
The first-order partial derivatives of pixel (x, y) in the x direction, φ_x(x, y), and in the y direction, φ_y(x, y), are then:
φ_x(x, y) = G(x, y) * H_x, φ_y(x, y) = G(x, y) * H_y (15)
The gradient magnitude φ(x, y) is:
φ(x, y) = √(φ_x(x, y)² + φ_y(x, y)²) (16)
and the gradient direction θ(x, y) of pixel (x, y) is:
θ(x, y) = arctan(φ_y(x, y) / φ_x(x, y)) (17)
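A sketch of the gradient computation with 2x2 first-order difference masks (the specific mask values below are the common choice for this step of the Canny operator and are our assumption, since the patent's printed masks are not fully legible):

```python
import numpy as np

HX = np.array([[-1.0, 1.0], [-1.0, 1.0]]) / 2.0  # x-direction first difference
HY = np.array([[1.0, 1.0], [-1.0, -1.0]]) / 2.0  # y-direction first difference

def gradient(g):
    """Per-pixel gradient magnitude and direction from 2x2 difference windows."""
    h, w = g.shape
    gx = np.empty((h - 1, w - 1))
    gy = np.empty((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            win = g[i:i + 2, j:j + 2]
            gx[i, j] = np.sum(win * HX)  # partial derivative in x
            gy[i, j] = np.sum(win * HY)  # partial derivative in y
    mag = np.hypot(gx, gy)               # sqrt(gx^2 + gy^2)
    theta = np.arctan2(gy, gx)           # gradient direction
    return mag, theta
```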
S43 non-maximum suppression of the gradient magnitude. Local maxima of the pixel gradient magnitude are located, and the gray values of non-local-maximum points are set to 0, refining the edges and eliminating most non-edge points.
S44 edge detection and linking. Two thresholds T_1 and T_2 (T_1 < T_2) are used: T_1 serves to find every line segment, and T_2 to extend in both directions of those segments, looking for breaks in the edges and connecting them, which yields the two thresholded edge images N_1(x, y) and N_2(x, y). Since N_2(x, y) is obtained with the high threshold, it contains few false edges, but it has gaps (it is not closed). The double-threshold algorithm links the edges in N_2(x, y) into contours; when it reaches the endpoint of a contour, it searches the 8-neighborhood positions in N_1(x, y) for edges that can be connected to the contour. The algorithm keeps collecting edges from N_1(x, y) in this way until N_2(x, y) is connected. The centroid of the connected region obtained after edge detection is taken as the pupil center, and the boundary of the connected region as the pupil contour. Fig. 6 shows the pupil region extracted by this embodiment.
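The double-threshold linking and centroid steps can be sketched as follows (a simplified flood-fill rendering of the idea, with our own names; a full implementation would run it on the magnitudes left after non-maximum suppression):

```python
import numpy as np
from collections import deque

def hysteresis(mag, t1, t2):
    """Keep pixels >= t2 (strong edges) plus any pixels >= t1 reachable from
    them through 8-connected neighbours, linking broken edge segments."""
    strong, weak = mag >= t2, mag >= t1
    edges = np.zeros(mag.shape, dtype=bool)
    q = deque((int(i), int(j)) for i, j in zip(*np.nonzero(strong)))
    for i, j in q:
        edges[i, j] = True
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < mag.shape[0] and 0 <= nj < mag.shape[1]
                        and weak[ni, nj] and not edges[ni, nj]):
                    edges[ni, nj] = True
                    q.append((ni, nj))
    return edges

def centroid(edges):
    """Centre of mass of the edge pixels, taken as the pupil centre."""
    ys, xs = np.nonzero(edges)
    return float(xs.mean()), float(ys.mean())
```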

Claims (2)

1. A pupil region localization method based on projection integrals and edge detection, characterized by comprising:
S1, image preprocessing: preprocessing the face image captured by the camera;
S2, projection integral calculation: computing the gray-level variation of the face image with the integral projection method;
S3, eye region localization: locating the eye region from the eye-spacing proportions and the integral results;
S4, pupil region localization: locating the pupil region by edge detection.
2. The pupil region localization method based on projection integrals and edge detection according to claim 1, characterized in that the pupil region localization of step S4 comprises: S41 Gaussian smoothing, S42 gradient magnitude and direction computation, S43 non-maximum suppression of the gradient magnitude, and S44 edge detection and linking.
CN201611237359.6A 2016-12-29 2016-12-29 Pupil region localization method based on projection integrals and edge detection Pending CN108256391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611237359.6A CN108256391A (en) 2016-12-29 2016-12-29 Pupil region localization method based on projection integrals and edge detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611237359.6A CN108256391A (en) 2016-12-29 2016-12-29 Pupil region localization method based on projection integrals and edge detection

Publications (1)

Publication Number Publication Date
CN108256391A true CN108256391A (en) 2018-07-06

Family

ID=62719580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611237359.6A Pending CN108256391A (en) 2016-12-29 2016-12-29 A kind of pupil region localization method based on projecting integral and edge detection

Country Status (1)

Country Link
CN (1) CN108256391A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110623629A (en) * 2019-07-31 2019-12-31 毕宏生 Visual attention detection method and system based on eyeball motion
CN110942043A (en) * 2019-12-02 2020-03-31 深圳市迅雷网络技术有限公司 Pupil image processing method and related device
CN110942043B (en) * 2019-12-02 2023-11-14 深圳市迅雷网络技术有限公司 Pupil image processing method and related device
CN111160291A (en) * 2019-12-31 2020-05-15 上海易维视科技有限公司 Human eye detection method based on depth information and CNN
CN111160291B (en) * 2019-12-31 2023-10-31 上海易维视科技有限公司 Human eye detection method based on depth information and CNN
CN113221798A (en) * 2021-05-24 2021-08-06 南京伯索网络科技有限公司 Classroom student aggressiveness evaluation system based on network

Similar Documents

Publication Publication Date Title
CN108256391A (en) Pupil region localization method based on projection integrals and edge detection
CN106980852B (en) Based on Corner Detection and the medicine identifying system matched and its recognition methods
CN108256392A (en) Pupil region localization method based on projecting integral and area grayscale extreme value
CN108256379A (en) A kind of eyes posture identification method based on Pupil diameter
CN110619628A (en) Human face image quality evaluation method
Marrugo et al. Retinal image analysis: preprocessing and feature extraction
CN1885314A (en) Pre-processing method for iris image
Bagchi et al. Robust 3D face recognition in presence of pose and partial occlusions or missing parts
Mohsin et al. Pupil detection algorithm based on feature extraction for eye gaze
Everingham et al. Head-mounted mobility aid for low vision using scene classification techniques
CN112287765A (en) Face living body detection method, device and equipment and readable storage medium
Parikh et al. Effective approach for iris localization in nonideal imaging conditions
CN108268125A (en) A kind of motion gesture detection and tracking based on computer vision
Zaim Automatic segmentation of iris images for the purpose of identification
Khilari Iris tracking and blink detection for human-computer interaction using a low resolution webcam
CN108256387A (en) A kind of eye areas localization method based on Gray Projection integration
CN113011333B (en) System and method for obtaining optimal venipuncture point and direction based on near-infrared image
CN206363347U (en) Based on Corner Detection and the medicine identifying system that matches
CN107392170A (en) A kind of palmmprint main line extracting method for meeting nature growth rhythm
CN109902539A (en) A kind of new pupil region localization method
Hassan et al. Enhance iris segmentation method for person recognition based on image processing techniques
CN108255288A (en) Gesture detecting method based on acceleration compensation and complexion model
De Luca et al. Deploying an Instance Segmentation Algorithm to Implement Social Distancing for Prosthetic Vision
Wang et al. Face detection based on adaboost and face contour in e-learning

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180706