CN111428634B - Human eye line-of-sight tracking and positioning method adopting six-point method for blocking fuzzy weighting


Info

Publication number: CN111428634B (granted publication of application CN202010208956.6A; earlier publication CN111428634A)
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: vector, fuzzy, pupil, cornea
Legal status: Active (granted)
Inventors: 李科华, 姚永杰, 王庆敏, 戴圣龙, 刘秋红
Assignee: Chinese Peoples Liberation Army Naval Characteristic Medical Center (applicant and current assignee)
Priority date: 2020-03-23

Classifications

    • G06V40/19 — Recognition of eye characteristics (e.g. of the iris): sensors therefor
    • G06V10/267 — Image preprocessing: segmentation of patterns in the image field by performing operations on regions
    • G06V40/193 — Recognition of eye characteristics: preprocessing; feature extraction
    • Y02T10/40 — Engine management systems [Y02T: climate change mitigation technologies related to transportation]


Abstract

The human eye sight tracking and positioning method adopting six-point-method block fuzzy weighting is characterized by comprising the following steps: step S10, dividing the screen area; step S20, image preprocessing and pupil-cornea vector extraction; step S30, six-point pupil-cornea vector computation for a single region; step S40, preliminary calibration of the sight tracking position and solution of the first region's calibration factor vectors; step S50, solving the calibration factor vectors of all regions; step S60, obtaining the pupil-cornea error vector; step S70, establishing the fuzzy system and solving the fuzzy association degree; step S80, fuzzy weighting calibration; step S90, real-time positioning solution and sight tracking. The method mitigates, at least to some extent, the heavy computational load and the poor global adaptability of results that stem from the limitations and defects of the related art.

Description

Human eye line-of-sight tracking and positioning method adopting six-point method for blocking fuzzy weighting
Technical Field
The invention relates to the field of sight tracking, in particular to a sight tracking and positioning method adopting fuzzy weighting.
Background
The eye is an important information-acquisition channel for humans: about 80%-90% of our information comes from vision. The eyes also reflect physiological and psychological changes of the human body, and the study of eye movements is considered the most effective means in visual information processing research.
Sight tracking (gaze tracking) is a technique that uses various detection means, such as mechanical, electronic, and optical, to obtain the current "gaze direction" of a subject.
By tracking the line of sight, the glance selection and fixation process of a person observing an external scene and related information can be obtained, so that human visual perception and information processing can be studied and, finally, human cognitive patterns derived. In addition, a thorough understanding of eye-movement sensitivity, latency, reaction speed and other characteristics for targets of different positions, sizes and colors under different task conditions can reflect indexes such as fatigue and task load.
In current eye sight tracking methods, gaze-direction calibration mainly combines a nine-point method with least squares. The computational load of this approach is large, and with least squares the matrix easily becomes nearly singular, so the solution can be unsatisfactory.
Disclosure of Invention
The invention aims to provide a six-point-method block fuzzy weighting human eye sight tracking and positioning method, which overcomes, at least to some extent, the heavy computational load and the poor global adaptability of results caused by the limitations and defects of the related art.
In order to solve the above problems, the invention provides a human eye sight tracking and positioning method adopting six-point-method block fuzzy weighting, which comprises the following steps:
step S10, dividing the screen area;
step S20, image preprocessing and pupil-cornea vector extraction;
step S30, six-point pupil-cornea vector computation for a single region;
step S40, preliminary calibration of the sight tracking position and solution of the first region's calibration factor vectors;
step S50, solving the calibration factor vectors of all regions;
step S60, obtaining the pupil-cornea error vector;
step S70, establishing the fuzzy system and solving the fuzzy association degree;
step S80, fuzzy weighting calibration;
step S90, real-time positioning solution and sight tracking.
In particular, step S10 is implemented as follows: divide the screen area into N parts, each denoted S_i, where i = 1, 2, …, N; in each region S_i select 6 points, denoted A_ij, where j = 1, 2, 3, 4, 5, 6, with coordinates (a_ijx, a_ijy), where a_ijx is the horizontal screen coordinate and a_ijy is the vertical screen coordinate.
In particular, step S20 is implemented as follows: first, the subject's eyes fixate point A_ij of screen region S_i, and an eye-tracking camera system captures a high-definition image of the eye region, denoted B_1; the eye image B_1 is then filtered to remove noise, giving the filtered image B_2; finally, the pupil-cornea vector is extracted by bright-spot detection and pupil detection, denoted (x_ei, y_ei).
In particular, step S30 is implemented as follows: the subject's eyes fixate the six points of screen region S_i one by one; step S20 is repeated to obtain the pupil-cornea vectors at the six points.
In particular, step S40 is implemented as follows: from the screen coordinates of the six points and the corresponding pupil-cornea vector coordinates, the following equations are solved to obtain the preliminary calibration; the specific solving process is:

firstly, construct the nonlinear homogeneous pupil-cornea matrix C from the six pupil-cornea vectors; C is the 6×6 matrix whose j-th row is built from (x_ej, y_ej) [matrix equation image in source]: its first column is all 1, and every remaining entry has the form

x_ej^p · y_ej^q

with p > 0, q > 0 and p + q = 1; when p and q take larger (more extreme) values, the overall solution becomes uneven and the data correlation is poor;

secondly, construct the screen coordinate vectors D_x and D_y from the screen coordinates of the six points, composed as follows:

D_x = [a_11x a_12x a_13x a_14x a_15x a_16x]^T
D_y = [a_11y a_12y a_13y a_14y a_15y a_16y]^T

finally, solve for the inverse C^(-1) of the nonlinear homogeneous pupil-cornea matrix C and compute the calibration factor vectors E_ix, E_iy as:

E_ix = C^(-1) D_x,  E_iy = C^(-1) D_y
in particular, the specific implementation method of the step S50 is as follows: sequentially selecting screen regions S 2 、S 3 Up to S N Repeating the steps S10 to S40 to obtain a calibration factor vector E ix 、E iy (i=1,2,…,N)。
In particular, step S60 is implemented as follows: for a given region S_i, take the pupil-cornea vectors (x_e1, y_e1), (x_e2, y_e2), (x_e3, y_e3), (x_e4, y_e4), (x_e5, y_e5) and (x_e6, y_e6) and compute their average, denoted (x̄_i, ȳ_i), as:

x̄_i = (1/6) · (x_e1 + x_e2 + x_e3 + x_e4 + x_e5 + x_e6)
ȳ_i = (1/6) · (y_e1 + y_e2 + y_e3 + y_e4 + y_e5 + y_e6)

The same algorithm over regions S_2 to S_N gives the averages (x̄_2, ȳ_2), …, (x̄_N, ȳ_N) of the pupil-cornea vectors obtained at their six reference points. Differencing the real-time pupil-cornea vector (x_e, y_e) with each average (x̄_i, ȳ_i), where i = 1, 2, 3, …, N, yields the pupil-cornea error vector (e_xi, e_yi), whose radius is

r_i = sqrt(e_xi² + e_yi²), where e_xi = x_e − x̄_i and e_yi = y_e − ȳ_i.
In particular, step S70 is implemented as follows: first define the fuzzy concept of the input variable r_i, divided into the following five fuzzy sets:

r_i = {H, B, M, S, O}

wherein H represents that the input variable r_i is very large, B that r_i is larger, M that r_i is medium, S that r_i is smaller, and O that r_i is near 0;

determine the characteristic quantity of region S_i, denoted s_ai; the membership of r_i is designed over five bands of r_i relative to s_ai [the five threshold equations appear as images in the source]:

when r_i falls in the largest band, r_i is considered very large;
when r_i falls in the second band, r_i is considered larger;
when r_i falls in the middle band, r_i is considered medium;
when r_i falls in the fourth band, r_i is considered smaller;
when r_i falls in the smallest band, r_i is considered near 0;

second, define the output association degree d_i(r_i), likewise divided into five fuzzy sets:

d_i(r_i) = {H', B', M', S', O'}, i = 1, 2, 3, …, N

wherein H' represents that the output association degree d_i(r_i) is very large, B' that d_i(r_i) is larger, M' that d_i(r_i) is medium, S' that d_i(r_i) is smaller, and O' that d_i(r_i) is near 0;

the output membership is designed as follows:

when 0.8 < |d_i(r_i)| < 1, the fuzzy system output d_i(r_i) is considered very large;
when 0.6 < |d_i(r_i)| < 0.8, the fuzzy system output d_i(r_i) is considered larger;
when 0.4 < |d_i(r_i)| < 0.6, the fuzzy system output d_i(r_i) is considered medium;
when 0.2 < |d_i(r_i)| < 0.4, the fuzzy system output d_i(r_i) is considered smaller;
when 0 < |d_i(r_i)| < 0.2, the fuzzy system output d_i(r_i) is considered almost 0;

finally, the fuzzy rules are defined as follows:

when the input variable r_i is very large, the output association degree d_i(r_i) is considered almost 0;
when the input variable r_i is larger, the output association degree d_i(r_i) is considered smaller;
when the input variable r_i is medium, the output association degree d_i(r_i) is considered medium;
when the input variable r_i is smaller, the output association degree d_i(r_i) is considered larger;
when the input variable r_i is almost 0, the output association degree d_i(r_i) is considered very large.
In particular, step S80 is implemented as follows: from the association degrees d_i(r_i) obtained from the above fuzzy system, solve the weight coefficient c_i of each region; the solving process is:

c_i = d_i(r_i) / Σ_{j=1..N} d_j(r_j)

The overall calibration vectors are:

E_x = Σ_{i=1..N} c_i · E_ix,  E_y = Σ_{i=1..N} c_i · E_iy

Finally, from the pupil-cornea vector (x_e, y_e), the on-screen gaze point coordinates (a_x, a_y) are computed as:

a_x = Σ_{i=1..6} E_x(i) · φ_i(x_e, y_e)
a_y = Σ_{i=1..6} E_y(i) · φ_i(x_e, y_e)

where φ_1 = 1, φ_2, …, φ_6 are the nonlinear homogeneous terms x_e^p · y_e^q forming the columns of the C matrix, and E_x(i) and E_y(i) are the i-th (i = 1, 2, 3, 4, 5, 6) components of the calibration vectors E_x and E_y.
In particular, step S90 is implemented as follows: a high-definition image of the eye region is captured in real time by the eye-tracking camera system; filtering and the processing of step S20 yield the real-time pupil-cornea vector (x_es, y_es), which is substituted into the above formulas in real time to obtain the real-time sight tracking coordinates (a_xs, a_ys) of the screen-region point, thereby realizing the whole sight tracking process; (a_xs, a_ys) are computed as:

a_xs = Σ_{i=1..6} E_x(i) · φ_i(x_es, y_es)
a_ys = Σ_{i=1..6} E_y(i) · φ_i(x_es, y_es)
the invention provides a method for calibrating the sight line by adopting a six-point method block fuzzy weighting human eye sight line tracking and positioning method, which not only ensures that the single-region resolving is simple, but also ensures that the whole resolving result has better robustness and full-region adaptability; the problems of large calculation amount and poor global adaptability of the result caused by the limitations and defects of the prior related art are solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is evident that the drawings in the following description are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of a human eye vision tracking and positioning method adopting six-point method block fuzzy weighting;
FIG. 2 is a screen area segmentation diagram of a method provided by an embodiment of the present invention;
FIG. 3 is a six-point distribution plot of region 1 of the method provided by an embodiment of the present invention;
FIG. 4 is a partial image of a human eye of a method provided by an embodiment of the present invention;
FIG. 5 is a filtered image of a partial image of the human eye of a method according to an embodiment of the present invention;
FIG. 6 illustrates pupil-cornea vector coordinate extraction in the method provided by an embodiment of the present invention;
FIG. 7 is the fuzzy membership diagram of the pupil-cornea error-vector magnitude (input variable) in the method provided by an embodiment of the present invention;
FIG. 8 is the fuzzy membership diagram of the output association degree in the method provided by an embodiment of the present invention;
FIG. 9 shows the gaze-point solution and positioning display of the method provided by an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known aspects have not been shown or described in detail to avoid obscuring aspects of the invention.
The invention provides a method that determines the sight tracking calibration vector information through a six-point method combined with a nonlinear homogeneous function, and synthesizes that calibration information over multiple screen regions by block segmentation and fuzzy weighting, so as to compute the coordinates of the sight gaze point on the screen. The approach is more direct and easier to compute than the traditional nine-point method, and the multi-region synthesis gives the final result better global adaptability, so the method has high engineering application value.
The six-point-method block fuzzy weighting calibrated human eye sight tracking method of the invention is further explained and described below with reference to the accompanying drawings. Referring to FIG. 1, the method comprises the following steps:
Step S10: first, segment the screen S into regions, specifically implemented by the following method: divide the screen area into N parts, each denoted S_i (i = 1, 2, …, N); in each region S_i select 6 points, denoted A_ij (j = 1, 2, 3, 4, 5, 6), with coordinates (a_ijx, a_ijy), where a_ijx is the horizontal screen coordinate and a_ijy is the vertical screen coordinate.
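For illustration, the following sketch divides a width × height screen into an nx × ny grid of regions S_i and picks six reference points A_ij per region. The patent does not fix the point layout, so the arrangement used here (four inset corners plus the top-center and bottom-center points) is an assumption.

```python
import numpy as np

def divide_screen(width, height, nx=3, ny=2):
    """Split a width x height screen into nx*ny regions S_i and pick six
    reference points A_ij per region (assumed layout: the four corners
    inset by 10% plus the top-center and bottom-center points)."""
    regions = []
    rw, rh = width / nx, height / ny
    for row in range(ny):
        for col in range(nx):
            x0, y0 = col * rw, row * rh                # region origin
            dx, dy = 0.1 * rw, 0.1 * rh                # 10% inset
            xs = (x0 + dx, x0 + rw / 2, x0 + rw - dx)  # left, mid, right
            ys = (y0 + dy, y0 + rh - dy)               # top, bottom
            points = [(x, y) for y in ys for x in xs]  # six points A_ij
            regions.append(np.array(points))
    return regions

regions = divide_screen(1920, 1080)  # N = 6 regions, matching FIG. 2
```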
Step S20: image preprocessing and pupil-cornea vector extraction, carried out as follows: first, the subject's eyes fixate point A_11 of screen region S_1, and an eye-tracking camera system captures a high-definition image of the eye region, denoted B_1. The eye image B_1 is then filtered to remove noise, giving the filtered image B_2. Finally, the pupil-cornea vector is extracted by bright-spot detection and pupil detection, denoted (x_e1, y_e1);
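The patent leaves the filter and the bright-spot/pupil detectors to existing literature (see the note at the end of the description), so the following is only a minimal sketch: a Gaussian filter stands in for the preprocessing, and fixed-threshold centroids stand in for glint and pupil detection; the threshold values 240 and 50 are illustrative assumptions.

```python
import cv2
import numpy as np

def pupil_cornea_vector(eye_img_gray):
    """Sketch of step S20: filter the eye image B1, then estimate the
    pupil-cornea vector as (pupil centroid - corneal glint centroid)."""
    b2 = cv2.GaussianBlur(eye_img_gray, (5, 5), 0)       # filtered image B2

    # Bright-spot (corneal glint) detection: centroid of brightest pixels.
    _, glint = cv2.threshold(b2, 240, 255, cv2.THRESH_BINARY)
    gy, gx = np.nonzero(glint)

    # Pupil detection: centroid of darkest pixels.
    _, pupil = cv2.threshold(b2, 50, 255, cv2.THRESH_BINARY_INV)
    py, px = np.nonzero(pupil)

    if gx.size == 0 or px.size == 0:
        return None                                      # detection failed
    return px.mean() - gx.mean(), py.mean() - gy.mean()  # (x_e, y_e)
```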
Step S30: six-point pupil-cornea vector computation for a single region, carried out as follows: the subject's eyes fixate points A_12, A_13, A_14, A_15 and A_16 of screen region S_1 in turn; step S20 is repeated to obtain the pupil-cornea vectors (x_e2, y_e2), (x_e3, y_e3), (x_e4, y_e4), (x_e5, y_e5) and (x_e6, y_e6).
Step S40: preliminary calibration of the sight tracking position, implemented by the following method:
from the screen coordinates (a_1jx, a_1jy) (j = 1, 2, 3, 4, 5, 6) of the six points and the pupil-cornea vector coordinates (x_ei, y_ei) (i = 1, 2, 3, 4, 5, 6), a simple system of equations is solved as follows to obtain the preliminary calibration.

Namely, first construct the nonlinear homogeneous pupil-cornea matrix C from the six pupil-cornea vectors; C is the 6×6 matrix whose i-th row is built from (x_ei, y_ei) [matrix equation image in source]: its first column is all 1, and the other five columns all have the form

x_ei^p · y_ei^q

with p > 0, q > 0 and p + q = 1. When p and q take larger (more extreme) values, the overall solution becomes uneven and the data correlation is poor.

Secondly, construct the screen coordinate vectors D_x and D_y from the screen coordinates of the six points, composed as follows:

D_x = [a_11x a_12x a_13x a_14x a_15x a_16x]^T
D_y = [a_11y a_12y a_13y a_14y a_15y a_16y]^T

Finally, solve for the inverse C^(-1) of the nonlinear homogeneous pupil-cornea matrix C and compute the calibration factor vectors E_ix, E_iy as:

E_ix = C^(-1) D_x,  E_iy = C^(-1) D_y
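A minimal numerical sketch of this solve follows. The exact exponent pairs (p, q) of the C matrix appear only as an equation image in the source, so the five evenly spaced pairs below are an assumption, as is the abs() guard that keeps the fractional powers real for negative vector components. Solving one 6×6 system per region, rather than a least-squares fit over nine or more points, is the kind of direct computation the description credits for the reduced load.

```python
import numpy as np

# Assumed exponent pairs (p, q) with p > 0, q > 0, p + q = 1; the patent's
# exact pairs are shown only as an equation image.
PQ = [(5/6, 1/6), (4/6, 2/6), (3/6, 3/6), (2/6, 4/6), (1/6, 5/6)]

def basis(x_e, y_e):
    """One row of the nonlinear homogeneous matrix C: a leading 1 followed
    by five terms x^p * y^q with p + q = 1 (abs() is an assumption)."""
    return np.array([1.0] + [abs(x_e) ** p * abs(y_e) ** q for p, q in PQ])

def calibrate_region(pupil_vecs, screen_pts):
    """Step S40: solve C @ E_x = D_x and C @ E_y = D_y for one region from
    its six pupil-cornea vectors and six screen points."""
    C = np.stack([basis(x, y) for x, y in pupil_vecs])  # 6 x 6 matrix C
    D = np.asarray(screen_pts, float)                   # 6 x 2 screen coords
    E_x = np.linalg.solve(C, D[:, 0])                   # calibration E_ix
    E_y = np.linalg.solve(C, D[:, 1])                   # calibration E_iy
    return E_x, E_y
```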
step S50, the calibration factor vector of all the areas is solved, and the method is specifically implemented by the following steps: sequentially selecting screen regions S 2 、S 3 Up to S N Repeating the steps S10 to S40 to obtain a calibration factor vector E ix 、E iy (i=1,2,…,N)。
Step S60: obtain the pupil-cornea error vector, specifically implemented as follows: for the first region S_1, take the pupil-cornea vectors (x_e1, y_e1), (x_e2, y_e2), (x_e3, y_e3), (x_e4, y_e4), (x_e5, y_e5) and (x_e6, y_e6) and compute their average, denoted (x̄_1, ȳ_1):

x̄_1 = (1/6) · (x_e1 + x_e2 + x_e3 + x_e4 + x_e5 + x_e6)
ȳ_1 = (1/6) · (y_e1 + y_e2 + y_e3 + y_e4 + y_e5 + y_e6)

The same algorithm over regions S_2 to S_N gives the averages (x̄_2, ȳ_2), …, (x̄_N, ȳ_N) of the pupil-cornea vectors obtained at their six reference points. Differencing the real-time pupil-cornea vector (x_e, y_e) with each average yields the pupil-cornea error vector (e_xi, e_yi), and the pupil-cornea error-vector radius is defined as

r_i = sqrt(e_xi² + e_yi²), where e_xi = x_e − x̄_i and e_yi = y_e − ȳ_i.
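Continuing the sketch, the per-region averages and error radii of step S60 are direct to compute:

```python
import numpy as np

def region_mean(pupil_vecs):
    """Average of the six pupil-cornea vectors of one region S_i."""
    return np.asarray(pupil_vecs, float).mean(axis=0)

def error_radii(x_e, y_e, region_means):
    """Error vectors (e_xi, e_yi) = (x_e - xbar_i, y_e - ybar_i) against
    every region average, and their radii r_i."""
    e = np.array([x_e, y_e]) - np.asarray(region_means, float)  # N x 2
    return np.hypot(e[:, 0], e[:, 1])                           # r_1..r_N
```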
Step S70: establish the fuzzy system and solve the fuzzy association degree of region S_i, specifically implemented by the following method: first define the fuzzy concept of the input variable r_i, divided into five fuzzy sets:

r_i = {H, B, M, S, O}

i.e. H represents that the input variable r_i is very large, B that r_i is larger, M that r_i is medium, S that r_i is smaller, and O that r_i is near 0.

Determine the characteristic quantity of region S_i, denoted s_ai; the membership of r_i is designed over five bands of r_i relative to s_ai [the five threshold equations appear as images in the source]:

when r_i falls in the largest band, r_i is considered very large;
when r_i falls in the second band, r_i is considered larger;
when r_i falls in the middle band, r_i is considered medium;
when r_i falls in the fourth band, r_i is considered smaller;
when r_i falls in the smallest band, r_i is considered near 0;

second, define the output association degree d_i(r_i), likewise divided into five fuzzy sets:

d_i(r_i) = {H', B', M', S', O'}, i = 1, 2, 3, …, N

i.e. H' represents that the output association degree d_i(r_i) is very large, B' that d_i(r_i) is larger, M' that d_i(r_i) is medium, S' that d_i(r_i) is smaller, and O' that d_i(r_i) is near 0;

the output membership is designed as follows:

when 0.8 < |d_i(r_i)| < 1, the fuzzy system output d_i(r_i) is considered very large;
when 0.6 < |d_i(r_i)| < 0.8, the fuzzy system output d_i(r_i) is considered larger;
when 0.4 < |d_i(r_i)| < 0.6, the fuzzy system output d_i(r_i) is considered medium;
when 0.2 < |d_i(r_i)| < 0.4, the fuzzy system output d_i(r_i) is considered smaller;
when 0 < |d_i(r_i)| < 0.2, the fuzzy system output d_i(r_i) is considered almost 0;

finally, the fuzzy rules are defined as follows:

when the input variable r_i is very large, the output association degree d_i(r_i) is considered almost 0;
when the input variable r_i is larger, the output association degree d_i(r_i) is considered smaller;
when the input variable r_i is medium, the output association degree d_i(r_i) is considered medium;
when the input variable r_i is smaller, the output association degree d_i(r_i) is considered larger;
when the input variable r_i is almost 0, the output association degree d_i(r_i) is considered very large.
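A compact sketch of this fuzzy system is given below. The output bands (0.2/0.4/0.6/0.8) are stated in the text, but the input thresholds exist only as equation images (FIGS. 7-8); scaling r_i by s_ai into mirrored bands, and collapsing each output fuzzy set to its band midpoint, are therefore assumptions made for illustration.

```python
def fuzzy_association(r_i, s_ai):
    """Step S70 sketch: map the error radius r_i, scaled by the region
    quantity s_ai, to an output association degree d_i(r_i).  Implements
    the rule table: the larger the input, the smaller the output."""
    ratio = min(r_i / s_ai, 1.0)
    if ratio > 0.8:                   # input very large -> output near 0
        return 0.1
    if ratio > 0.6:                   # input larger     -> output smaller
        return 0.3
    if ratio > 0.4:                   # input medium     -> output medium
        return 0.5
    if ratio > 0.2:                   # input smaller    -> output larger
        return 0.7
    return 0.9                        # input near 0     -> output very large
```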
Step S80: fuzzy weighting calibration, specifically implemented by the following method: from the association degrees d_i(r_i) obtained from the above fuzzy system, solve the weight coefficient c_i of each region; the solving process is:

c_i = d_i(r_i) / Σ_{j=1..N} d_j(r_j)

The overall calibration vectors are:

E_x = Σ_{i=1..N} c_i · E_ix
E_y = Σ_{i=1..N} c_i · E_iy

Finally, from the pupil-cornea vector (x_e, y_e), the on-screen gaze point coordinates (a_x, a_y) are computed as:

a_x = Σ_{i=1..6} E_x(i) · φ_i(x_e, y_e)
a_y = Σ_{i=1..6} E_y(i) · φ_i(x_e, y_e)

where φ_1 = 1, φ_2, …, φ_6 are the nonlinear homogeneous terms x_e^p · y_e^q forming the columns of the C matrix, and E_x(i) and E_y(i) are the i-th (i = 1, 2, 3, 4, 5, 6) components of the calibration vectors E_x and E_y.
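Putting steps S60 to S80 together with the helpers sketched earlier (basis, error_radii, fuzzy_association): the normalization c_i = d_i / Σ_j d_j assumed below matches the usual fuzzy-weighting scheme, the patent's own weight formula being an equation image. Note that when all association weight concentrates in one region (c_i → 1), the result reduces to that region's preliminary calibration, which is the intended behavior of the blocking scheme.

```python
import numpy as np

def gaze_point(x_e, y_e, region_means, calibs, s_a):
    """Fuzzy-weighted calibration and on-screen gaze point (a_x, a_y).
    calibs is a list of per-region (E_ix, E_iy) pairs from calibrate_region;
    s_a holds the per-region quantities s_ai."""
    r = error_radii(x_e, y_e, region_means)                # radii r_i
    d = np.array([fuzzy_association(ri, si) for ri, si in zip(r, s_a)])
    c = d / d.sum()                                        # weights c_i
    E_x = sum(ci * Ex for ci, (Ex, _) in zip(c, calibs))   # overall E_x
    E_y = sum(ci * Ey for ci, (_, Ey) in zip(c, calibs))   # overall E_y
    phi = basis(x_e, y_e)                                  # six basis terms
    return float(E_x @ phi), float(E_y @ phi)              # (a_x, a_y)
```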
Step S90: real-time positioning solution and tracking, carried out as follows: a high-definition image of the eye region is captured in real time by the eye-tracking camera system; filtering and the processing of step S20 yield the real-time pupil-cornea vector (x_es, y_es), which is substituted into the above formulas in real time to obtain the real-time sight tracking coordinates (a_xs, a_ys) of the screen-region point, thereby realizing the whole sight tracking process; (a_xs, a_ys) are computed as:

a_xs = Σ_{i=1..6} E_x(i) · φ_i(x_es, y_es)
a_ys = Σ_{i=1..6} E_y(i) · φ_i(x_es, y_es)

Finally, with the eye-tracking camera equipment and the above steps and formulas, real-time tracking solution of the coordinates of the gaze point in the screen area is realized.
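To close the sketch, a real-time loop for step S90, reusing pupil_cornea_vector and gaze_point from above; camera capture and eye-region handling are simplified (a real system would first locate and crop the eye in each frame).

```python
import cv2

def track(camera_index, region_means, calibs, s_a):
    """Step S90 sketch: per-frame real-time gaze coordinates (a_xs, a_ys)."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            vec = pupil_cornea_vector(gray)          # (x_es, y_es)
            if vec is None:
                continue                             # no glint/pupil found
            a_xs, a_ys = gaze_point(*vec, region_means, calibs, s_a)
            print(f"gaze at ({a_xs:.0f}, {a_ys:.0f})")
    finally:
        cap.release()
```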
The invention is further illustrated and described below by means of examples.
Examples
A six-point-method block fuzzy weighting calibrated eye sight tracking method comprises the following steps:
Step S10: first segment the screen S into regions; as shown in FIG. 2, the screen area is divided into 6 blocks, denoted S_1, S_2, S_3, S_4, S_5, S_6;
Step S20: first, the subject's eyes fixate point A_11 of screen region S_1, as shown in FIG. 3; an eye-tracking camera system photographs the eye and crops a local high-definition image, denoted B_1, as shown in FIG. 4;
the eye image B_1 is then filtered to remove noise, giving the filtered image B_2, as shown in FIG. 5;
finally, the pupil-cornea vector is extracted by bright-spot detection and pupil detection, denoted (x_e1, y_e1), marked '+' in FIG. 6;
Step S30: the subject's eyes fixate points A_12, A_13, A_14, A_15 and A_16 of screen region S_1 in turn; step S20 is repeated to obtain the pupil-cornea vectors (x_e2, y_e2), (x_e3, y_e3), (x_e4, y_e4), (x_e5, y_e5) and (x_e6, y_e6).
Step S40: preliminary calibration of the sight tracking position, as described above.
Step S50: select screen regions S_2, S_3, …, S_6 in turn and repeat steps S10 to S40, obtaining the calibration factor vectors E_ix, E_iy (i = 1, 2, …, 6).
Step S60: for the real-time pupil-cornea vector (x_e, y_e), compute the pupil-cornea error vectors and the error-vector radii r_i = sqrt(e_xi² + e_yi²).
Step S70: establish the fuzzy system and solve the fuzzy association degree of region S_i:
first define the input variable r_i membership, as shown in FIG. 7, and the output association degree d_i(r_i) membership, as shown in FIG. 8;
Step S80: fuzzy weighting calibration;
Step S90: the eye-tracking camera system captures high-definition eye images in real time; filtering and the processing of step S20 yield the pupil-cornea vector (x_es, y_es), which is substituted into the above formulas in real time to obtain the real-time sight tracking coordinates (a_xs, a_ys) of the screen-region point, shown as the star in FIG. 9, which substantially coincides with the gaze point of the subject participating in the experiment. The positioning calibration design is thus completed and sight tracking realized.
Regarding the image filtering preprocessing and pupil-cornea vector extraction of step S20 above, many patents and publications already address these two techniques; the invention neither focuses on nor is limited by them, so they are not described further here.
The method mainly comprises: segmenting the screen area; filtering and preprocessing the local eye image and extracting the pupil-cornea vector; extracting the pupil-cornea vectors of the six reference points of a single region in turn; constructing the nonlinear homogeneous pupil-cornea matrix from the six-point coordinates and performing preliminary calibration of the sight tracking position to obtain the single region's calibration factor vectors; performing preliminary calibration over all regions to obtain each region's calibration factor vectors; computing, from the real-time pupil-cornea vector, the pupil-cornea error vectors relative to all regions; establishing the fuzzy system and, following the principle that the larger the error vector, the smaller the association degree, obtaining the real-time association factors of the pupil-cornea vector relative to all regions; and finally performing fuzzy weighting calibration with those association factors, realizing real-time positioning solution and sight tracking. The six-point solving method simplifies the traditional nine-or-more-point data processing; the construction of the nonlinear homogeneous pupil-cornea matrix avoids singular data in the solving process and increases solution accuracy; and the multi-region segmentation with fuzzy weighting gives the weighting algorithm good global adaptability over the whole screen area with good accuracy. The method therefore has high practical and economic value in the field of human eye sight data processing and tracking.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (2)

1. The human eye sight tracking and positioning method adopting six-point-method block fuzzy weighting is characterized by comprising the following steps:
step S10, dividing the screen area, implemented as follows: divide the screen area into N parts, each denoted S_i, where i = 1, 2, …, N; in each region S_i select 6 points, denoted A_ij, where j = 1, 2, 3, 4, 5, 6, with coordinates (a_ijx, a_ijy), where a_ijx is the horizontal screen coordinate and a_ijy is the vertical screen coordinate;
step S20, image preprocessing and pupil-cornea vector extraction, implemented as follows: first, the subject's eyes fixate point A_ij of screen region S_i, and an eye-tracking camera system captures a high-definition image of the eye region, denoted B_1; the eye image B_1 is then filtered to remove noise, giving the filtered image B_2; finally, the pupil-cornea vector is extracted by bright-spot detection and pupil detection, denoted (x_ei, y_ei);
step S30, six-point pupil-cornea vector computation for a single region, implemented as follows: the subject's eyes fixate the six points of screen region S_i one by one; step S20 is repeated to obtain the pupil-cornea vectors of the six points;
step S40, preliminary calibration of the sight tracking position and solution of the first region's calibration factor vectors, implemented as follows: from the screen coordinates of the six points and the corresponding pupil-cornea vector coordinates, the following equations are solved to obtain the preliminary calibration; the specific solving process is:
first, construct the nonlinear homogeneous pupil-cornea matrix C from the six pupil-cornea vectors; C is the 6×6 matrix whose j-th row is built from (x_ej, y_ej) [matrix equation image in source]: its first column is all 1, and every remaining entry has the form

x_ej^p · y_ej^q

with p > 0, q > 0, p + q = 1;
second, construct the screen coordinate vectors D_x and D_y from the screen coordinates of the six points, composed as follows:

D_x = [a_11x a_12x a_13x a_14x a_15x a_16x]^T
D_y = [a_11y a_12y a_13y a_14y a_15y a_16y]^T

finally, solve for the inverse C^(-1) of the nonlinear homogeneous pupil-cornea matrix C and compute the calibration factor vectors E_ix, E_iy as:

E_ix = C^(-1) D_x,  E_iy = C^(-1) D_y;

step S50, solving the calibration factor vectors of all regions, implemented as follows: select screen regions S_2, S_3, …, S_N in turn and repeat steps S10 to S40, obtaining the calibration factor vectors E_ix, E_iy (i = 1, 2, …, N);
step S60, obtaining the pupil-cornea error vector, implemented as follows: for a given region S_i, take the pupil-cornea vectors (x_e1, y_e1), (x_e2, y_e2), (x_e3, y_e3), (x_e4, y_e4), (x_e5, y_e5) and (x_e6, y_e6) and compute their average, denoted (x̄_i, ȳ_i):

x̄_i = (1/6) · (x_e1 + x_e2 + x_e3 + x_e4 + x_e5 + x_e6)
ȳ_i = (1/6) · (y_e1 + y_e2 + y_e3 + y_e4 + y_e5 + y_e6)

the same algorithm over regions S_2 to S_N gives the averages (x̄_2, ȳ_2), …, (x̄_N, ȳ_N) of the pupil-cornea vectors obtained at their six reference points; differencing the real-time pupil-cornea vector (x_e, y_e) with each average (x̄_i, ȳ_i), where i = 1, 2, 3, …, N, yields the pupil-cornea error vector (e_xi, e_yi), whose radius is

r_i = sqrt(e_xi² + e_yi²), where e_xi = x_e − x̄_i and e_yi = y_e − ȳ_i;

step S70, establishing the fuzzy system and solving the fuzzy association degree, implemented as follows: first define the fuzzy concept of the input variable r_i, divided into the following five fuzzy sets:

r_i = {H, B, M, S, O}

wherein H represents that the input variable r_i is very large, B that r_i is larger, M that r_i is medium, S that r_i is smaller, and O that r_i is near 0;
determine the characteristic quantity of region S_i, denoted s_ai; the membership of r_i is designed over five bands of r_i relative to s_ai [the five threshold equations appear as images in the source]:
when r_i falls in the largest band, r_i is considered very large;
when r_i falls in the second band, r_i is considered larger;
when r_i falls in the middle band, r_i is considered medium;
when r_i falls in the fourth band, r_i is considered smaller;
when r_i falls in the smallest band, r_i is considered near 0;
second, define the output association degree d_i(r_i), likewise divided into five fuzzy sets:

d_i(r_i) = {H', B', M', S', O'}, i = 1, 2, 3, …, N

wherein H' represents that the output association degree d_i(r_i) is very large, B' that d_i(r_i) is larger, M' that d_i(r_i) is medium, S' that d_i(r_i) is smaller, and O' that d_i(r_i) is near 0;
the output membership is designed as follows:
when 0.8 < |d_i(r_i)| < 1, the fuzzy system output d_i(r_i) is considered very large;
when 0.6 < |d_i(r_i)| < 0.8, the fuzzy system output d_i(r_i) is considered larger;
when 0.4 < |d_i(r_i)| < 0.6, the fuzzy system output d_i(r_i) is considered medium;
when 0.2 < |d_i(r_i)| < 0.4, the fuzzy system output d_i(r_i) is considered smaller;
when 0 < |d_i(r_i)| < 0.2, the fuzzy system output d_i(r_i) is considered almost 0;
finally, the fuzzy rules are defined as follows:
when the input variable r_i is very large, the output association degree d_i(r_i) is considered almost 0;
when the input variable r_i is larger, the output association degree d_i(r_i) is considered smaller;
when the input variable r_i is medium, the output association degree d_i(r_i) is considered medium;
when the input variable r_i is smaller, the output association degree d_i(r_i) is considered larger;
when the input variable r_i is almost 0, the output association degree d_i(r_i) is considered very large;
step S80, fuzzy weighting calibration, implemented as follows: from the association degrees d_i(r_i) obtained from the above fuzzy system, solve the weight coefficient c_i of each region; the solving process is:

c_i = d_i(r_i) / Σ_{j=1..N} d_j(r_j)

the overall calibration vectors are:

E_x = Σ_{i=1..N} c_i · E_ix
E_y = Σ_{i=1..N} c_i · E_iy

finally, from the pupil-cornea vector (x_e, y_e), the on-screen gaze point coordinates (a_x, a_y) are computed as:

a_x = Σ_{i=1..6} E_x(i) · φ_i(x_e, y_e)
a_y = Σ_{i=1..6} E_y(i) · φ_i(x_e, y_e)

where φ_1 = 1, φ_2, …, φ_6 are the nonlinear homogeneous terms x_e^p · y_e^q forming the columns of the C matrix, and E_x(i) and E_y(i) are the i-th (i = 1, 2, 3, 4, 5, 6) components of the calibration vectors E_x and E_y;
step S90, real-time positioning solution and sight tracking.
2. The human eye sight tracking and positioning method adopting six-point-method block fuzzy weighting of claim 1, wherein step S90 is implemented as follows: a high-definition image of the eye region is captured in real time by the eye-tracking camera system; filtering and the processing of step S20 yield the real-time pupil-cornea vector (x_es, y_es), which is substituted into the above formulas in real time to obtain the real-time sight tracking coordinates (a_xs, a_ys) of the screen-region point, thereby realizing the whole sight tracking process; (a_xs, a_ys) are computed as:

a_xs = Σ_{i=1..6} E_x(i) · φ_i(x_es, y_es)
a_ys = Σ_{i=1..6} E_y(i) · φ_i(x_es, y_es)
CN202010208956.6A 2020-03-23 2020-03-23 Human eye line-of-sight tracking and positioning method adopting six-point method for blocking fuzzy weighting Active CN111428634B (en)

Priority Applications (1)

Application Number: CN202010208956.6A — Priority Date: 2020-03-23 — Filing Date: 2020-03-23 — Title: Human eye line-of-sight tracking and positioning method adopting six-point method for blocking fuzzy weighting

Publications (2)

CN111428634A — published 2020-07-17
CN111428634B — granted 2023-06-27

Family ID: 71549555
Family Applications (1): CN202010208956.6A — filed 2020-03-23 — status Active
Country: CN (China)

Families Citing this family (1)

CN114812468B * 2022-06-27 2022-09-06 沈阳建筑大学 H-shaped six-point method-based precise rotation shafting rotation error in-situ separation method

Patent Citations (3)

* Cited by examiner, † Cited by third party

CN101788848A * 2009-09-29 2010-07-28 北京科技大学 Eye characteristic parameter detecting method for sight line tracking system
CN103927014A * 2014-04-21 2014-07-16 广州杰赛科技股份有限公司 Character input method and device
CN108427503A * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 Human eye tracking method and human eye tracking device

Family Cites Families (6)

JP6014931B2 * 2012-09-06 2016-10-26 公立大学法人広島市立大学 Gaze measurement method
CN106056092B * 2016-06-08 2019-08-20 华南理工大学 Gaze estimation method for head-mounted equipment based on iris and pupil
CN107862246B * 2017-10-12 2021-08-06 电子科技大学 Eye gazing direction detection method based on multi-view learning
JP6881755B2 * 2017-11-29 2021-06-02 国立研究開発法人産業技術総合研究所 Line-of-sight detection calibration methods, systems, and computer programs
EP3811182A4 * 2018-06-22 2021-07-28 Magic Leap, Inc. Method and system for performing eye tracking using an off-axis camera
CN108985210A * 2018-07-06 2018-12-11 常州大学 Gaze tracking method and system based on human eye geometrical characteristics

Also Published As

CN111428634A — 2020-07-17


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant