CN106778641B - Sight estimation method and device - Google Patents


Publication number: CN106778641B (granted; published earlier as application CN106778641A)
Application number: CN201611207525.8A
Authority: CN (China)
Legal status: Active
Inventor: 王云飞 (Wang Yunfei)
Current and original assignee: Beijing 7Invensun Technology Co Ltd
Other languages: Chinese (zh)
Other versions: CN106778641A (en)
Prior art keywords: point, extracted, light, human eye, light source
Application filed by Beijing 7Invensun Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/19: Sensors therefor
    • G06V40/193: Preprocessing; feature extraction
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [two-dimensional] image generation
    • G06T11/20: Drawing from basic elements, e.g. lines or circles
    • G06T11/203: Drawing of straight lines or curves


Abstract

The invention provides a sight line (gaze) estimation method and device. The method includes: acquiring a human eye image and, if at least three light spots are extracted from the human eye image, fitting a light spot distribution curve according to the position of each extracted light spot, where the human eye image corresponds to a plurality of light sources and each light spot is the image of one light source; determining a feature point of the light spot distribution curve as a first feature point, where the first feature point includes a midpoint, a circle center, a focus, or a center point; and determining the gaze point information corresponding to the human eye image according to the position of the first feature point and preset information. With the gaze estimation method and device of the embodiments of the invention, the gaze point information can be estimated quickly.

Description

Sight estimation method and device
Technical Field
The invention relates to the technical field of human-computer interaction, and in particular to a sight line estimation method and device.
Background
Line-of-sight (gaze) estimation refers to techniques for acquiring a subject's current gaze direction through various detection means, such as mechanical, electronic, and optical ones. With the rapid development of computer vision, artificial intelligence, and digitization technology, gaze estimation has become a hot research field with wide application in human-computer interaction; for example, it can be applied to driver assistance in vehicles, virtual reality, the diagnosis of cognitive disorders, and the like.
In the related art, the pupil-corneal reflection (P-CR) method is generally adopted to determine the gaze point information of an eyeball. P-CR methods are divided into regression-based and 3D-based variants; in both, a plurality of light sources are required to form a plurality of light spots on the user's eyeball, and an image of the user's eye is acquired.
When the regression-based P-CR method is used to acquire the gaze point information of the eyeball, the light spots corresponding to all light sources must be detected in the user's eye image. However, when the user's eyeball moves with a large amplitude, it deviates far from the center; some light sources then no longer form light spots in the eye, so the gaze point information cannot be determined.
When the 3D-based P-CR method is used, the correspondence between each light spot in the eye image and its light source must be known. However, when there are many light sources and only some of them are imaged in the eye as light spots, this correspondence is difficult to estimate, and again the gaze point information cannot be determined.
Disclosure of Invention
In view of the above, the present invention is directed to a method and an apparatus for estimating a line of sight, so as to solve at least one of the above technical problems.
In a first aspect, an embodiment of the present invention provides a gaze estimation method, including: acquiring a human eye image and, if at least three light spots are extracted from the human eye image, fitting a light spot distribution curve according to the position of each extracted light spot, where the human eye image corresponds to a plurality of light sources and each light spot is the image of one light source; determining a feature point of the light spot distribution curve as a first feature point, where the first feature point includes a midpoint, a circle center, a focus, or a center point; and determining the gaze point information corresponding to the human eye image according to the position of the first feature point and preset information.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where determining the gaze point information corresponding to the human eye image according to the position of the first feature point and the preset information includes: determining the gaze point information according to the position of the first feature point and the position of the pupil center point in the human eye image; or determining a feature point of the figure formed by all the light sources as a second feature point, determining the light source corresponding to each extracted light spot according to the positions of the first and second feature points, the position of each extracted light spot, and the position of each light source, and determining the gaze point information according to the position of each extracted light spot and the position of its corresponding light source, where the second feature point includes a midpoint, a circle center, a focus, or a center point.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where determining the light source corresponding to each extracted light spot according to the positions of the first and second feature points, the position of each extracted light spot, and the position of each light source includes: calculating a position conversion relation between the first feature point and the second feature point according to their positions; calculating the standard light source position corresponding to each extracted light spot according to the position conversion relation and the position of each extracted light spot; and determining the light source whose position matches the standard light source position as the light source corresponding to each extracted light spot.
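For illustration only, this matching step can be sketched as follows, assuming the position conversion relation is modeled as a translation between the two feature points (the optional `scale` factor is a hypothetical extra parameter; the patent does not fix the exact form of the relation):

```python
import numpy as np

def match_spots_to_sources(spots, sources, first_fp, second_fp, scale=1.0):
    """Match each extracted light spot to a light source via a position
    conversion relation between the spot-curve feature point (first_fp)
    and the source-pattern feature point (second_fp).

    Minimal sketch: the relation is a translation (plus a hypothetical
    scale), and each spot's predicted "standard light source position"
    is matched to the nearest actual source."""
    spots = np.asarray(spots, dtype=float)
    sources = np.asarray(sources, dtype=float)
    # translation that carries first_fp onto second_fp
    offset = np.asarray(second_fp, float) - scale * np.asarray(first_fp, float)
    matches = {}
    for i, spot in enumerate(spots):
        standard = scale * spot + offset          # standard light source position
        # the source whose position best matches the standard position
        j = int(np.argmin(np.linalg.norm(sources - standard, axis=1)))
        matches[i] = j
    return matches
```

A real system would also validate the match distances against a tolerance before accepting the assignment.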
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the determining, according to the position of the first feature point, the position of the second feature point, the position of each extracted light spot, and the position of each light source, the light source corresponding to each extracted light spot includes: calculating a first relative position between the first characteristic point and each extracted light spot according to the position of the first characteristic point and the position of each extracted light spot; calculating a second relative position between the second feature point and each light source according to the position of the second feature point and the position of each light source; and determining a second relative position matched with the first relative position, and determining a light source corresponding to the determined second relative position as the light source corresponding to each extracted light spot.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where all the light sources are arranged in a circular, elliptical, approximately circular, or approximately elliptical shape, and the light spot distribution curve is a circle or an ellipse.
With reference to the foregoing implementation manners of the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the method further includes: if two light spots are extracted from the human eye image, determining the light source corresponding to each extracted light spot according to the position of each extracted light spot, the position of each light source, and the position of a human eye feature in the human eye image; and then either determining the gaze point information corresponding to the human eye image according to the position of each extracted light spot and the position of its corresponding light source, or determining a feature point of the figure formed by all the light sources as a second feature point, determining a mapping point of the second feature point in the human eye image according to the position of the light source corresponding to each extracted light spot, the position of the second feature point, and the position of each extracted light spot, and determining the gaze point information according to the position of the mapping point and the position of the pupil center point in the human eye image, where the second feature point includes a midpoint, a circle center, a focus, or a center point.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where determining the light source corresponding to each extracted light spot according to the position of each extracted light spot, the position of each light source, and the position of the human eye feature in the human eye image includes: calculating a third relative position between the extracted light spots according to the position of each extracted light spot, and a fourth relative position between the light sources according to the position of each light source; calculating a fifth relative position between each extracted light spot and the human eye feature, and a sixth relative position between each light source and the human eye feature; and determining the light source whose fourth relative position matches the third relative position and whose sixth relative position matches the fifth relative position as the light source corresponding to each extracted light spot.
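The two-spot matching described in this implementation manner can be sketched as a search over ordered source pairs. The `eye_feat_src` argument (the human eye feature expressed in source-plane coordinates) is an assumption for illustration, since the patent leaves the source-side reference position implicit:

```python
import itertools
import numpy as np

def match_two_spots(spots, sources, eye_feat_img, eye_feat_src):
    """Pick the ordered pair of sources whose mutual offset (fourth
    relative position) best matches the two spots' mutual offset (third
    relative position), with the spot-to-eye-feature offset (fifth) also
    compared against the source-to-eye-feature offset (sixth).
    Returns the (i, j) source indices for the two spots."""
    s0, s1 = (np.asarray(p, float) for p in spots)
    third = s1 - s0                        # relative position between the two spots
    fifth = s0 - np.asarray(eye_feat_img, float)
    best, best_err = None, float("inf")
    for i, j in itertools.permutations(range(len(sources)), 2):
        a = np.asarray(sources[i], float)
        b = np.asarray(sources[j], float)
        fourth = b - a                     # relative position between candidate sources
        sixth = a - np.asarray(eye_feat_src, float)
        err = np.linalg.norm(fourth - third) + np.linalg.norm(sixth - fifth)
        if err < best_err:
            best, best_err = (i, j), err
    return best
```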
With reference to the foregoing implementation manners of the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the method further includes: if only one light spot is extracted from the human eye image, lighting the light sources one by one to determine the light source corresponding to the light spot; determining a feature point of the figure formed by all the light sources as a second feature point, and determining a mapping point of the second feature point in the human eye image according to the position of the light source corresponding to the extracted light spot, the position of the second feature point, and the position of the extracted light spot; and determining the gaze point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, where the second feature point includes a midpoint, a circle center, a focus, or a center point.
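As an illustrative sketch of the single-spot mapping step, assuming a pure translation between source-plane and image coordinates (the patent leaves the exact mapping unspecified, and a real system would also account for scale):

```python
def map_second_feature_point(spot, source, second_fp):
    """Map the source-pattern feature point (second feature point) into
    the eye image from the single matched spot/source pair, by applying
    the spot-minus-source offset to the feature point."""
    dx = spot[0] - source[0]
    dy = spot[1] - source[1]
    return (second_fp[0] + dx, second_fp[1] + dy)
```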
In a second aspect, an embodiment of the present invention provides a gaze estimation apparatus, including: a curve determining module, configured to acquire a human eye image and, if at least three light spots are extracted from the human eye image, fit a light spot distribution curve according to the position of each extracted light spot, where the human eye image corresponds to a plurality of light sources and each light spot is the image of one light source; a feature point determining module, configured to determine a feature point of the light spot distribution curve as a first feature point, where the first feature point includes a midpoint, a circle center, a focus, or a center point; and an information determining module, configured to determine the gaze point information corresponding to the human eye image according to the position of the first feature point and preset information.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the information determining module includes: a first determining submodule, configured to determine the gaze point information corresponding to the human eye image according to the position of the first feature point and the position of the pupil center point in the human eye image; or a second determining submodule, configured to determine a feature point of the figure formed by all the light sources as a second feature point, determine the light source corresponding to each extracted light spot according to the positions of the first and second feature points, the position of each extracted light spot, and the position of each light source, and determine the gaze point information according to the position of each extracted light spot and the position of its corresponding light source, where the second feature point includes a midpoint, a circle center, a focus, or a center point.
In the embodiments of the present invention, if at least three light spots are extracted from the human eye image, a light spot distribution curve is fitted according to the position of each extracted light spot, where the human eye image corresponds to a plurality of light sources and each light spot is the image of one light source; a feature point of the curve (a midpoint, circle center, focus, or center point) is determined as the first feature point; and the gaze point information corresponding to the human eye image is determined according to the position of the first feature point and preset information. With the gaze estimation method and device of the embodiments, all light sources need not be imaged in the user's eyeball as light spots, and the correspondence between light spots and light sources need not be estimated.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a top view of an eyeball tracking module according to an embodiment of the present invention;
Fig. 2 is a side view of an eyeball tracking module according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of a gaze estimation method according to an embodiment of the present invention;
Fig. 4 is a schematic block diagram of a gaze estimation apparatus according to an embodiment of the present invention.
Reference numerals:
10 - eyeball, 20 - incident light path, 21 - reflection light path, 22 - transmission light path, 30 - eyepiece barrel, 31 - light source, 40 - eyepiece, 50 - circuit board, 60 - image acquisition device, 70 - infrared filter, 80 - infrared cut-off filter, 90 - display screen.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Considering that acquiring the gaze point information of an eyeball in the related art requires either that all light sources form light spots in the eyeball or that the correspondence between the light spots in the eye image and the light sources is known, so that the gaze point information is difficult to determine in some cases, embodiments of the present invention provide a gaze estimation method and device, described in detail through the embodiments below.
To clearly describe the gaze estimation method and apparatus in the embodiments of the present invention, the eyeball tracking module in the embodiments of the present invention is first described.
The eyeball tracking module in this embodiment is arranged in a virtual reality glasses device; eyeball tracking is realized through the design of the internal structure of the glasses, so that when a user wears them, the user can operate and control the display terminal through eyeball tracking, realizing functions such as human-computer interaction and viewpoint rendering.
Fig. 1 is a top view of an eyeball tracking module according to an embodiment of the present invention. As shown in Fig. 1, the eyeball tracking module includes an eyepiece barrel 30 on which at least two light sources 31 are disposed; the light sources 31 are preferably LED (light-emitting diode) infrared light sources. The module further includes a circuit board 50 and an image capturing device 60, and the image capturing device 60 includes an imaging objective lens and an image sensor.
Fig. 2 is a side view of an eyeball-tracking module according to an embodiment of the present invention, as shown in fig. 2, an eyepiece 40 is disposed in an eyepiece barrel 30, the eyepiece 40 is located in front of an eyeball 10, a light source 31 emits infrared light to the eyeball 10, the eyeball 10 reflects the infrared light, an incident light path 20 from the eyeball 10 propagates to an infrared cut-off filter 80 through the eyepiece 40, and is reflected by the infrared cut-off filter 80 to form a reflected light path 21, the reflected light path 21 enters an image capturing device 60 through an infrared filter 70, and the image capturing device 60 is fixed on a circuit board 50. The incident light path 20 from the eyeball 10 also passes through the infrared cut filter 80 to form a transmission light path 22, and the transmission light path 22 is transmitted to a display screen 90, and the display screen 90 is a VR (Virtual Reality) display screen.
Referring to Figs. 1 and 2, in the present embodiment the eyepiece 40 functions as a magnifying glass, through which the entire image of the eye region can be obtained. The light source 31 is fixedly disposed in an area to the front and side of the eyeball 10 to emit infrared light toward the eyeball 10, and the eyeball 10 reflects the infrared light to form a light spot on the eyeball 10. The image capturing device 60 is fixed, at an angle, at the edge of the eyeball tracking module or outside it, and the infrared cut-off filter 80 reflects the eyeball image to the image sensor of the image capturing device 60 so that the image sensor captures the eyeball image.
Referring to fig. 2, the imaging optical path of the eyeball tracking module is:
(1) The light source 31 emits infrared light toward the eyeball 10, and the eyeball 10 reflects it, forming a light spot on the eyeball 10. Specifically, light emitted by the light source 31 impinges on the eyeball 10, is reflected by the different layer surfaces of the eyeball 10, and is received by the image sensor of the image capturing device 60, so that a light spot is formed in the image. Because human eyes perceive wavelengths of 380 nm to 780 nm, a near-infrared light source, to which the human eye has low sensitivity and which causes little harm, is generally chosen for the light source 31 so as not to interfere with the user's normal viewing of the display terminal;
(2) The light spot formed on the eyeball 10 is reflected by the infrared cut-off filter 80 and then imaged on the image acquisition device 60. To eliminate the influence of light of other wavelengths on the image, the infrared filter 70 must be added in front of the image acquisition device 60, so that only the light emitted by the light source 31 is allowed to enter the image acquisition device 60.
To further reduce the volume occupied by the entire device, the infrared cut-off filter 80 may be directly integrated on the screen surface of the display screen 90, and/or the infrared cut-off filter 80 may be an infrared-reflective coating or infrared-reflective film of the display screen 90.
The infrared cut-off filter 80 is fixedly arranged in the area between the eyepiece 40 and the display screen 90, and the outer edge of the infrared cut-off filter 80 is positioned outside the visible angle of the module, so that light spots formed at any position of the eyeball 10 can be imaged to the image acquisition device 60 through the infrared cut-off filter 80.
To prevent the user's view of the display screen 90 from being affected, in this embodiment the angle of the infrared cut-off filter 80 is adjusted so that the near-infrared light of the eyeball image is reflected to the side of the eyepiece barrel and received by the image acquisition device 60, while the visible light emitted by the display screen 90 reaches the eyeball 10 through the infrared cut-off filter 80. The size of the infrared cut-off filter 80 is chosen to contain the maximum field-of-view marginal rays of the eyepiece 40.
In practical applications, the eyeball tracking module shown in Figs. 1 and 2 can be integrated into a head-mounted device, such as virtual reality glasses. When a user uses the head-mounted device, an alignment is first performed once after the device is started, as follows: alignment marks are displayed on the display screen 90 together with the current position of the pupil in the camera's field of view, and the user moves the head-mounted device until the position of the pupil (or the pupil center) in the camera's field of view lies within the appropriate area; the camera is integrated in the image acquisition device 60. After alignment is completed, gaze estimation is carried out based on the eyeball tracking module in the head-mounted device.
As shown in fig. 3, the gaze estimation method in the embodiment of the present invention specifically includes the following steps:
step S302, a human eye image is obtained, if at least three light spots are extracted from the human eye image, a light spot distribution curve is fitted according to the position of each extracted light spot, wherein the human eye image corresponds to a plurality of light sources, and each light spot is an image of one light source.
The eye tracking module shown in fig. 1 and 2 is used to acquire an image of a human eye, and specifically, when a user wears virtual reality glasses having the eye tracking module shown in fig. 1 and 2, the light source 31 in the eye tracking module emits infrared light to the eyeball 10, the eyeball 10 reflects the infrared light to form a light spot on the eyeball 10, and the light spot formed on the eyeball 10 is reflected by the infrared cut-off filter 80 and then imaged on the image acquisition device 60, so that the image of the human eye is acquired by the image acquisition device 60.
The human eye image collected by the eyeball tracking module of Figs. 1 and 2 corresponds to a plurality of light sources, namely the plurality of light sources 31, and each light spot in the human eye image is the image of one light source, specifically the projection, in the image, of the reflection of one light source on the corneal surface of the human eye.
In this embodiment, every time the image acquisition device 60 acquires a frame of eye image, it performs one pass of image processing (including preprocessing, information statistics, parameter updating, pupil positioning, pupil edge positioning, pupil center positioning, light spot screening, and the like) to extract the light spots in the eye image. If at least three light spots are extracted, a light spot distribution curve is fitted according to the position of each extracted light spot. For example, when three light spots are extracted, a circle is fitted through their positions, and that circle is the light spot distribution curve; when five light spots are extracted, an ellipse is fitted through their positions, and that ellipse is the light spot distribution curve. Of course, in other embodiments the number of light spots is not limited to three or five, as long as it is at least three, and the shape of the fitted curve is not limited to a circle or an ellipse but depends on the specific fitting result.
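As an illustration of the curve-fitting step, a minimal least-squares circle fit to three or more extracted spot positions might look as follows; the patent does not prescribe a particular fitting algorithm, so this is only a sketch:

```python
import numpy as np

def fit_spot_circle(points):
    """Least-squares circle fit to three or more spot positions.

    Solves the linear system for x^2 + y^2 + D*x + E*y + F = 0 and
    recovers the center (-D/2, -E/2) and radius."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = float(np.sqrt(cx * cx + cy * cy - F))
    return (cx, cy), r
```

With exactly three spots the fit is exact (three points determine a circle); with more spots it is a least-squares compromise. An analogous conic fit would be used when an ellipse is expected.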
It should be noted that, because each light spot is the image of a light source, the shape of the light spot distribution curve corresponds to the arrangement shape of all the light sources; the curve is theoretically an affine transformation of that arrangement shape. Mathematically, an affine transformation maps a straight line to a straight line, a circle to an ellipse, an ellipse to an ellipse, and a trapezoid, rectangle, or square to a quadrangle. Therefore, in this embodiment, when all the light sources are arranged in a circular or approximately circular shape, the light spot distribution curve may be a circle or an ellipse; when they are arranged in an elliptical or approximately elliptical shape, the curve is an ellipse; when they are arranged in a trapezoid, rectangle, square, or similar shape, the curve corresponds to a quadrangle; and when they are arranged along a line segment, the curve corresponds to a line segment. Other arrangements and corresponding curve shapes are of course possible, following the same principle of affine transformation, and are not repeated here.
In a specific embodiment, all the light sources corresponding to the human eye image, that is, all the light sources 31 in the eyeball tracking module shown in Figs. 1 and 2, are arranged in a circular, elliptical, approximately circular, or approximately elliptical shape, and the light spot distribution curve is a circle or an ellipse.
As shown in Figs. 1 and 2, when the structural accuracy of the eyeball tracking module is high, all the light sources 31 are preferably arranged in a circular or elliptical shape; considering practical application scenarios, they may also be arranged in an approximately circular or approximately elliptical shape. Here, approximately circular means that the distances from all the light sources to a common center lie within a first preset distance threshold range, such as 5 mm to 6 mm, and approximately elliptical means that the difference between the distances from all the light sources to the same foci is within a second preset distance threshold range, such as 1 mm to 3 mm. In this embodiment, the light spot distribution curve obtained by fitting the position of each extracted light spot is a circle or an ellipse.
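The approximate-circularity criterion above can be sketched as a simple threshold check; the function below is illustrative and assumes the preset threshold range is given as explicit bounds:

```python
import math

def approximately_circular(sources, center, lo, hi):
    """Check the 'approximately circular' arrangement criterion: every
    light source's distance to the common center must fall inside the
    preset threshold range [lo, hi] (e.g. 5 mm to 6 mm)."""
    return all(lo <= math.hypot(x - center[0], y - center[1]) <= hi
               for x, y in sources)
```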
Step S304, determining the characteristic point of the light spot distribution curve as a first characteristic point, wherein the first characteristic point includes a midpoint, a circle center, a focus, and a central point.
After the light spot distribution curve is obtained by fitting, the characteristic point of the light spot distribution curve is determined as the first characteristic point. The shape of the light spot distribution curve corresponds to the arrangement shape of all the light sources. When the light spot distribution curve is a circle, the characteristic point of the curve, and hence the first characteristic point, is the circle center; when the light spot distribution curve is an ellipse, the characteristic point and the first characteristic point are a focus, which may be either of the two foci; when the light spot distribution curve is a line segment, the characteristic point and the first characteristic point are the midpoint of the segment; and when the light spot distribution curve is a polygon, the characteristic point and the first characteristic point are the central point of the polygon.
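The selection of the first characteristic point by curve type described above might be sketched as follows; the dispatch function and its parameter conventions are hypothetical:

```python
def curve_feature_point(curve_type, params):
    """Return the characteristic (first) feature point of a fitted curve.

    Assumed parameter conventions (illustrative only):
      "circle"  -> params = (center, radius); feature point is the center
      "ellipse" -> params = (focus1, focus2); either focus may serve, the
                   first is returned by convention
      "segment" -> params = (endpoint1, endpoint2); midpoint
      "polygon" -> params = list of vertices; vertex centroid
    """
    if curve_type == "circle":
        return params[0]
    if curve_type == "ellipse":
        return params[0]
    if curve_type == "segment":
        (x1, y1), (x2, y2) = params
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    if curve_type == "polygon":
        xs = [p[0] for p in params]
        ys = [p[1] for p in params]
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    raise ValueError("unknown curve type: %s" % curve_type)
```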
Step S306, determining the fixation point information corresponding to the human eye image according to the position of the first characteristic point and the preset information.
After the first characteristic point is determined, the fixation point information corresponding to the human eye image is determined according to the position of the first characteristic point and preset information.
One specific implementation is as follows: the preset information is the position of the pupil center point in the human eye image, and the fixation point information corresponding to the human eye image is determined according to the position of the first characteristic point and the position of the pupil center point. Specifically, the position of the pupil center point in the human eye image is determined, and the fixation point information is obtained by a regression-based P-CR (pupil-corneal reflection) algorithm from the position of the first characteristic point and the position of the pupil center point. The fixation point information may be presented in various ways, including but not limited to coordinates, deflection angles, and rotation matrices. In this implementation, the correspondence between light spots and light sources does not need to be obtained, so the calculation process is simple and the calculation efficiency is high.
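The regression-based P-CR step could look roughly like the sketch below, in which a second-order polynomial maps the pupil-minus-feature-point vector to screen coordinates after a calibration phase; the function names, the polynomial order, and the calibration procedure are illustrative assumptions rather than the patented algorithm:

```python
import numpy as np

def pcr_vector(pupil, feature_point):
    """Pupil-minus-glint-feature (P-CR) vector."""
    return np.subtract(pupil, feature_point)

def fit_pcr_mapping(vectors, gaze_points):
    """Fit a 2nd-order polynomial regression from P-CR vectors
    (collected during calibration) to screen gaze coordinates."""
    v = np.asarray(vectors, dtype=float)
    g = np.asarray(gaze_points, dtype=float)
    vx, vy = v[:, 0], v[:, 1]
    # design matrix: [1, vx, vy, vx*vy, vx^2, vy^2]
    X = np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx ** 2, vy ** 2])
    coef, *_ = np.linalg.lstsq(X, g, rcond=None)
    return coef

def estimate_gaze(coef, vector):
    """Map a single P-CR vector to an estimated gaze point."""
    vx, vy = vector
    X = np.array([1.0, vx, vy, vx * vy, vx ** 2, vy ** 2])
    return X @ coef
```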
Another specific implementation is as follows: the characteristic point of the figure formed by all the light sources is determined as a second characteristic point; the light source corresponding to each extracted light spot is determined according to the position of the first characteristic point, the position of the second characteristic point, the position of each extracted light spot, and the position of each light source; and the fixation point information corresponding to the human eye image is determined according to the position of each extracted light spot and the position of its corresponding light source, wherein the second characteristic point includes a midpoint, a circle center, a focus, and a central point. In this implementation, the preset information is the position of the second characteristic point, the position of each extracted light spot, and the position of each light source.
In this embodiment, the figure formed by all the light sources may be a circle, an ellipse, an approximate circle, an approximate ellipse, a square, a rectangle, a line segment, a trapezoid, or the like, and accordingly the second characteristic point may be a circle center, a midpoint, a focus, a central point, or the like. Because each light spot is the image of a light source in the human eye, the shape of the light spot distribution curve corresponds to the figure formed by all the light sources, the first characteristic point corresponds to the second characteristic point, and each light source corresponds to a light spot. On this basis, the specific process of determining the light source corresponding to each extracted light spot according to the position of the first characteristic point, the position of the second characteristic point, the position of each extracted light spot, and the position of each light source is as follows:
(1) calculating the position conversion relation between the first characteristic point and the second characteristic point according to their positions, where the position conversion relation may be expressed as coordinates, a deflection angle, or a rotation matrix;

(2) calculating the standard light source position corresponding to each extracted light spot according to the position conversion relation and the position of each extracted light spot; for example, the position conversion relation may be added to the position of the light spot to obtain the corresponding standard light source position, which represents the expected position of the light source corresponding to that light spot;

(3) determining the light source whose position matches the standard light source position as the light source corresponding to each extracted light spot. Owing to calculation precision errors and other influences, the actual light source position may deviate from the standard light source position; a light source whose deviation is within the allowable error range is regarded as matching the standard light source position and is determined as the light source corresponding to the extracted light spot.
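Steps (1) to (3) above can be sketched as follows, simplifying the position conversion relation to a pure translation between the two characteristic points (the text also allows deflection angles and rotation matrices); the function name and tolerance value are assumptions:

```python
import numpy as np

def match_spots_to_sources(first_fp, second_fp, spots, sources, tol=1.0):
    """Match each extracted spot to a light source.

    Computes the translation from the first to the second characteristic
    point, adds it to each spot position to obtain a standard light source
    position, and picks the nearest source within the allowable error
    range `tol`. Returns a dict {spot index: source index}.
    """
    offset = np.subtract(second_fp, first_fp)  # position conversion relation
    matches = {}
    for i, spot in enumerate(spots):
        standard = np.add(spot, offset)        # standard light source position
        dists = [np.linalg.norm(np.subtract(src, standard)) for src in sources]
        j = int(np.argmin(dists))
        if dists[j] <= tol:                    # within the allowable error range
            matches[i] = j
    return matches
```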
Since the shape of the light spot distribution curve corresponds to the figure formed by all the light sources, the first characteristic point corresponds to the second characteristic point, and each light source corresponds to a light spot, the specific process of determining the light source corresponding to each extracted light spot according to the position of the first characteristic point, the position of the second characteristic point, the position of each extracted light spot, and the position of each light source may alternatively be:
(1) calculating a first relative position between the first characteristic point and each extracted light spot according to their positions; specifically, the position of the first characteristic point may be subtracted from the position of each extracted light spot, yielding one first relative position per extracted light spot, which may be expressed as coordinates, a deflection angle, or a rotation matrix;

(2) calculating a second relative position between the second characteristic point and each light source according to their positions; specifically, the position of the second characteristic point may be subtracted from the position of each light source, yielding one second relative position per light source, which may be expressed as coordinates, a deflection angle, or a rotation matrix;

(3) determining the second relative position that matches each first relative position, and determining the light source corresponding to that second relative position as the light source corresponding to the extracted light spot. In consideration of calculation accuracy and other influences, each first relative position is subtracted from each second relative position to obtain a difference; when the difference is within the allowable range, the first relative position is regarded as matching the second relative position. For a matched pair, the light spot corresponding to the first relative position and the light source corresponding to the second relative position correspond to each other.
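Steps (1) to (3) of this alternative can be sketched as below; for illustration it assumes the spot pattern and the source pattern share the same scale (in practice the corneal reflection scales the pattern, so the relative positions would first have to be normalized):

```python
def match_by_relative_position(first_fp, spots, second_fp, sources, tol=1.0):
    """Match spots to sources by comparing spot-to-first-feature-point
    offsets (first relative positions) with source-to-second-feature-point
    offsets (second relative positions). Returns {spot index: source index}.
    """
    matches = {}
    for i, spot in enumerate(spots):
        # first relative position
        r1 = (spot[0] - first_fp[0], spot[1] - first_fp[1])
        best, best_d = None, float("inf")
        for j, src in enumerate(sources):
            # second relative position
            r2 = (src[0] - second_fp[0], src[1] - second_fp[1])
            d = ((r1[0] - r2[0]) ** 2 + (r1[1] - r2[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best_d <= tol:  # difference within the allowable range
            matches[i] = best
    return matches
```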
After the light source corresponding to each extracted light spot is determined by either of the two methods above, the fixation point information corresponding to the human eye image is determined by a 3D-based P-CR algorithm according to the position of each extracted light spot and the position of its corresponding light source; the fixation point information may be presented in various ways, including but not limited to coordinates, deflection angles, and rotation matrices. In this implementation, the correspondence between light spots and light sources needs to be known, and the fixation point information can still be determined even when the position of the pupil center point in the human eye image cannot be determined.
In the embodiment of the present invention, if at least three light spots are extracted from the human eye image, a light spot distribution curve is fitted according to the position of each extracted light spot, where the human eye image corresponds to a plurality of light sources and each light spot is an image of one light source; a characteristic point of the light spot distribution curve is determined as a first characteristic point, which includes a midpoint, a circle center, a focus, and a central point; and the fixation point information corresponding to the human eye image is determined according to the position of the first characteristic point and preset information. With the method in the embodiment of the present invention, not all light sources need to be imaged in the user's eyeball to form light spots, and the correspondence between light spots and light sources does not need to be estimated.
Considering the case in which two light spots are extracted from the human eye image, the embodiment of the present invention further provides the following steps:
(a1) if two light spots are extracted from the human eye image, determining a light source corresponding to each extracted light spot according to the position of each extracted light spot, the position of each light source and the position of human eye characteristics in the human eye image;
the human eye features in the human eye image include, but are not limited to, the pupil, the iris, and the like. In this step, the position of the human eye feature in the human eye image is taken as a reference, and the light source corresponding to each extracted light spot is determined, and the specific process is as follows:
(a11) calculating a third relative position between the extracted light spots according to the position of each extracted light spot, and calculating a fourth relative position between the light sources according to the position of each light source. A third relative position is calculated for each extracted light spot with respect to every other extracted light spot; assuming N light spots are extracted, each extracted light spot has N-1 third relative positions. Similarly, a fourth relative position is calculated for each light source with respect to every other light source; assuming M light sources in total, each light source has M-1 fourth relative positions;

(a12) calculating a fifth relative position between each extracted light spot and the human eye feature according to their positions, and calculating a sixth relative position between each light source and the human eye feature according to their positions. Each extracted light spot has one fifth relative position with respect to the human eye feature (e.g., the pupil or the iris), and each light source has one sixth relative position with respect to the human eye feature. It should be noted that the same human eye feature is used when calculating the fifth and the sixth relative positions;

(a13) determining the light source whose fourth relative position matches the third relative position and whose sixth relative position matches the fifth relative position as the light source corresponding to each extracted light spot. When determining the light source corresponding to an extracted light spot, the relative position between the light source and the human eye feature is also considered: a light source is determined to correspond to a light spot when the relative position between the light sources matches the relative position between the light spots, and the relative position between the light source and the human eye feature matches the relative position between the light spot and the human eye feature. In this embodiment, two positions are regarded as matching when their difference is within the allowable error range.
In steps (a11) to (a13), the third, fourth, fifth, and sixth relative positions may be expressed as coordinates, a deflection angle, a rotation matrix, or the like.
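A brute-force sketch of steps (a11) to (a13) for the two-spot case follows; it assumes all positions are expressed in one common coordinate frame, which is a simplifying assumption for illustration:

```python
from itertools import permutations

def match_two_spots(spots, sources, eye_feature, tol=1.0):
    """Identify which two sources produced two extracted spots by jointly
    matching the spot-to-spot offset (third vs. fourth relative position)
    and the spot-to-eye-feature offsets (fifth vs. sixth relative
    position). Returns {spot index: source index} or None if unmatched.
    """
    def diff(a, b):
        return (a[0] - b[0], a[1] - b[1])

    def close(a, b):
        return abs(a[0] - b[0]) <= tol and abs(a[1] - b[1]) <= tol

    s0, s1 = spots
    third = diff(s1, s0)                         # third relative position
    fifth = [diff(s, eye_feature) for s in spots]  # fifth relative positions
    for i, j in permutations(range(len(sources)), 2):
        fourth = diff(sources[j], sources[i])    # fourth relative position
        sixth = [diff(sources[i], eye_feature),  # sixth relative positions
                 diff(sources[j], eye_feature)]
        if close(third, fourth) and close(fifth[0], sixth[0]) and close(fifth[1], sixth[1]):
            return {0: i, 1: j}
    return None
```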
(a2) determining the fixation point information corresponding to the human eye image according to the position of each extracted light spot and the position of its corresponding light source; or determining the characteristic point of the figure formed by all the light sources as a second characteristic point, determining a mapping point of the second characteristic point in the human eye image according to the position of the light source corresponding to each extracted light spot, the position of the second characteristic point, and the position of each extracted light spot, and determining the fixation point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, wherein the second characteristic point includes a midpoint, a circle center, a focus, and a central point.
In this step, after the light source corresponding to each extracted light spot is obtained, the fixation point information corresponding to the human eye image can be determined by a 3D-based P-CR algorithm according to the position of each extracted light spot and the position of its corresponding light source; the fixation point information may be presented in various ways, including but not limited to coordinates, deflection angles, and rotation matrices.

Alternatively, in this step, after the light source corresponding to each extracted light spot is obtained, considering that each light spot is the image of a light source in the human eye, each light source corresponds to a light spot, and the characteristic point of the figure formed by all the light sources corresponds to the characteristic point of the figure formed by all the light spots, the fixation point information can be determined as follows:
(a21) determining the characteristic point of the figure formed by all the light sources as a second characteristic point. The figure formed by all the light sources may be a circle, an ellipse, an approximate circle, an approximate ellipse, a square, a rectangle, a line segment, a trapezoid, or the like, and accordingly the second characteristic point may be a circle center, a midpoint, a focus, a central point, or the like.

(a22) determining a mapping point of the second characteristic point in the human eye image according to the position of the light source corresponding to each extracted light spot, the position of the second characteristic point, and the position of each extracted light spot. The mapping point may be determined as follows: the position conversion relation between the second characteristic point and each light source corresponding to an extracted light spot is calculated, and the position of the mapping point is obtained from the position conversion relation and the position of each extracted light spot in the human eye image, thereby determining the mapping point of the second characteristic point in the human eye image.

(a23) determining the fixation point information corresponding to the human eye image by a regression-based P-CR algorithm according to the position of the mapping point and the position of the pupil center point in the human eye image; the fixation point information may be presented in various ways, including but not limited to coordinates, deflection angles, and rotation matrices.
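Step (a22), the projection of the second characteristic point into the eye image, might be sketched with a translation-only position conversion relation averaged over the matched source-spot pairs; this is an illustrative simplification (a full implementation could estimate an affine transform instead):

```python
def map_feature_point(second_fp, matched_sources, matched_spots):
    """Project the second characteristic point of the light-source figure
    into the eye image, using the average source-to-spot translation
    computed from the matched pairs as the position conversion relation.
    """
    n = len(matched_sources)
    dx = sum(sp[0] - so[0] for so, sp in zip(matched_sources, matched_spots)) / n
    dy = sum(sp[1] - so[1] for so, sp in zip(matched_sources, matched_spots)) / n
    return (second_fp[0] + dx, second_fp[1] + dy)
```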
Through steps (a1) and (a2), the fixation point information corresponding to the human eye image can be determined when only two light spots are extracted from the human eye image, so that the fixation point information can be determined with a small number of light spots and fast estimation of the fixation point can be realized.
Considering the case in which a single light spot is extracted from the human eye image, the embodiment of the present invention further provides the following steps:
(b1) if one light spot is extracted from the human eye image, controlling the light sources to be lit one by one and determining the light source corresponding to the light spot: if the light spot appears when a certain light source is lit, that light source is determined to correspond to the light spot;
(b2) determining the characteristic point of the figure formed by all the light sources as a second characteristic point, and determining a mapping point of the second characteristic point in the human eye image according to the position of the light source corresponding to the extracted light spot, the position of the second characteristic point, and the position of the extracted light spot;

Specifically, the figure formed by all the light sources may be a circle, an ellipse, an approximate circle, an approximate ellipse, a square, a rectangle, a line segment, a trapezoid, or the like, and accordingly the second characteristic point may be a circle center, a midpoint, a focus, a central point, or the like. The mapping point of the second characteristic point in the human eye image is determined according to the position of the light source corresponding to the extracted light spot, the position of the second characteristic point, and the position of the extracted light spot.
(b3) determining the fixation point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, wherein the second characteristic point includes a midpoint, a circle center, a focus, and a central point.

The fixation point information corresponding to the human eye image is determined by a regression-based P-CR algorithm according to the position of the mapping point and the position of the pupil center point in the human eye image; the fixation point information may be presented in various ways, including but not limited to coordinates, deflection angles, and rotation matrices.
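The one-by-one illumination of step (b1) could be sketched as follows; `capture_frame`, `extract_spot`, and the source objects are hypothetical stand-ins for the camera and the spot detector:

```python
def identify_source_by_sequencing(light_sources, capture_frame, extract_spot):
    """Turn each source on in turn; the source whose frame yields a spot
    is the one producing the single extracted glint.

    `light_sources` are objects with on()/off(); `capture_frame` grabs an
    image and `extract_spot` returns a spot position or None.
    Returns (source index, spot position), or (None, None) if no source
    produces a spot.
    """
    for idx, source in enumerate(light_sources):
        source.on()
        frame = capture_frame()
        spot = extract_spot(frame)
        source.off()
        if spot is not None:
            return idx, spot
    return None, None
```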
Through steps (b1) to (b3), the fixation point information corresponding to the human eye image can be determined when only one light spot is extracted from the human eye image, so that the fixation point information can be determined with the minimum number of light spots and fast estimation of the fixation point can be realized.
In summary, with the method in the embodiment of the present invention, the fixation point information corresponding to the human eye image can be determined when the image contains three or more light spots, and also when it contains only two light spots or one light spot. The fixation point information can therefore be determined even when only some of the light sources are imaged, and fast estimation of the human eye fixation point can be realized.
Corresponding to the above sight line estimation method, an embodiment of the present invention further provides a sight line estimation device. Fig. 4 is a schematic diagram of the module composition of the sight line estimation device provided in the embodiment of the present invention; as shown in fig. 4, the device includes:
the curve determining module 41 is configured to obtain a human eye image, and if at least three light spots are extracted from the human eye image, fit a light spot distribution curve according to the position of each extracted light spot, where the human eye image corresponds to multiple light sources, and each light spot is an image of one light source;
a characteristic point determining module 42, configured to determine a characteristic point of the light spot distribution curve as a first characteristic point, where the first characteristic point includes a midpoint, a circle center, a focus, and a center point;
and an information determining module 43, configured to determine, according to the position of the first feature point and preset information, gaze point information corresponding to the human eye image.
The information determining module 43 includes: a first determining submodule, configured to determine the fixation point information corresponding to the human eye image according to the position of the first characteristic point and the position of the pupil center point in the human eye image; or a second determining submodule, configured to determine the characteristic point of the figure formed by all the light sources as a second characteristic point, determine the light source corresponding to each extracted light spot according to the position of the first characteristic point, the position of the second characteristic point, the position of each extracted light spot, and the position of each light source, and determine the fixation point information corresponding to the human eye image according to the position of each extracted light spot and the position of its corresponding light source, wherein the second characteristic point includes a midpoint, a circle center, a focus, and a central point.
In one embodiment, the second determining submodule determines the light source corresponding to each extracted light spot by: calculating a position conversion relation between the first characteristic point and the second characteristic point according to the position of the first characteristic point and the position of the second characteristic point; calculating the position of the standard light source corresponding to each extracted light spot according to the position conversion relation and the position of each extracted light spot; and determining the light source with the position matched with the standard light source position as the light source corresponding to each extracted light spot.
In one embodiment, the second determining submodule determines the light source corresponding to each extracted light spot by: calculating a first relative position between the first characteristic point and each extracted light spot according to the position of the first characteristic point and the position of each extracted light spot; calculating a second relative position between the second feature point and each light source according to the position of the second feature point and the position of each light source; and determining a second relative position matched with the first relative position, and determining the light source corresponding to the determined second relative position as the light source corresponding to each extracted light spot.
In this embodiment, all the light sources are arranged in a circular, elliptical, approximately circular, or approximately elliptical shape, and the light spot distribution curve is a circle or an ellipse.
In the embodiment of the present invention, if at least three light spots are extracted from the human eye image, a light spot distribution curve is fitted according to the position of each extracted light spot, where the human eye image corresponds to a plurality of light sources and each light spot is an image of one light source; a characteristic point of the light spot distribution curve is determined as a first characteristic point, which includes a midpoint, a circle center, a focus, and a central point; and the fixation point information corresponding to the human eye image is determined according to the position of the first characteristic point and preset information. With the device in the embodiment of the present invention, not all light sources need to be imaged in the user's eyeball to form light spots, and the correspondence between light spots and light sources does not need to be estimated.
Further, the apparatus in this embodiment further includes:
the first light source determining module is used for determining the light source corresponding to each extracted light spot according to the position of each extracted light spot, the position of each light source and the position of the human eye feature in the human eye image if two light spots are extracted from the human eye image;
a first fixation point information determining module, configured to determine the fixation point information corresponding to the human eye image according to the position of each extracted light spot and the position of its corresponding light source; or to determine the characteristic point of the figure formed by all the light sources as a second characteristic point, determine a mapping point of the second characteristic point in the human eye image according to the position of the light source corresponding to each extracted light spot, the position of the second characteristic point, and the position of each extracted light spot, and determine the fixation point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, wherein the second characteristic point includes a midpoint, a circle center, a focus, and a central point.
The first fixation point information determining module determines the light source corresponding to each extracted light spot by the following method: calculating a third relative position between the extracted light spots according to the position of each extracted light spot, and calculating a fourth relative position between the light sources according to the position of each light source; calculating a fifth relative position between each extracted light spot and the human eye feature according to the position of each extracted light spot and the position of the human eye feature, and calculating a sixth relative position between each light source and the human eye feature according to the position of each light source and the position of the human eye feature; and determining the light source with the fourth relative position matched with the third relative position and the sixth relative position matched with the fifth relative position as the light source corresponding to each extracted light spot.
Through the first light source determining module and the first fixation point information determining module, the fixation point information corresponding to the human eye image can be determined when two light spots are extracted from the human eye image, so that the fixation point information of human eyes can be determined when the number of the light spots is small, and the fixation point can be quickly estimated.
Further, the apparatus in this embodiment further includes:
the second light source determining module is used for controlling each light source to be lightened one by one if a light spot is extracted from the human eye image, and determining the light source corresponding to the light spot;
the mapping point determining module is used for determining the characteristic point of the graph formed by all the light sources as a second characteristic point, and determining the mapping point of the second characteristic point in the human eye image according to the position of the light source corresponding to the extracted light spot, the position of the second characteristic point and the position of the extracted light spot;
and the second fixation point information determining module is used for determining the fixation point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, wherein the second characteristic point comprises a middle point, a circle center, a focus and a center point.
Through the second light source determining module, the mapping point determining module and the second fixation point information determining module, the fixation point information corresponding to the human eye image can be determined when one light spot is extracted from the human eye image, so that the fixation point information of the human eye is determined when the number of the light spots is minimum, and the fixation point is quickly estimated.
The sight line estimation device provided by the embodiment of the present invention may be specific hardware on the equipment, or software or firmware installed on the equipment, or the like. The device provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, where the device embodiments do not mention a detail, reference may be made to the corresponding content in the method embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not described here again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that like reference numbers and letters refer to like items in the figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some technical features; such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A gaze estimation method, comprising:
acquiring a human eye image, and if at least three light spots are extracted from the human eye image, fitting a light spot distribution curve according to the position of each extracted light spot, wherein the human eye image corresponds to a plurality of light sources, and each light spot is an image of one light source;
determining a feature point of the light spot distribution curve as a first feature point, wherein the first feature point includes a midpoint, a circle center, a focal point, and a center point;
determining gaze point information corresponding to the human eye image according to the position of the first feature point and preset information;
wherein the determining of the gaze point information corresponding to the human eye image according to the position of the first feature point and preset information includes:
determining a feature point of a graph formed by all the light sources as a second feature point, determining the light source corresponding to each extracted light spot according to the position of the first feature point, the position of the second feature point, the position of each extracted light spot, and the position of each light source, and determining the gaze point information corresponding to the human eye image according to the position of each extracted light spot and the position of the light source corresponding to that light spot, wherein the second feature point includes a midpoint, a circle center, a focal point, and a center point;
wherein the preset information includes the position of the second feature point, the position of each extracted light spot, and the position of each light source.
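As an illustrative sketch only (not taken from the patent text), the step of "fitting a light spot distribution curve" from at least three extracted glints can be pictured as a least-squares circle fit, with the fitted circle center serving as the first feature point. The function name and the algebraic (Kåsa) fitting method are assumptions:

```python
import numpy as np

def fit_spot_circle(spots):
    """Least-squares circle fit to extracted glint centers (>= 3 points).
    Returns (cx, cy, r): center and radius of the spot distribution circle."""
    pts = np.asarray(spots, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Algebraic (Kåsa) fit of x^2 + y^2 + D*x + E*y + F = 0
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0          # circle center = first feature point
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r
```

An ellipse fit (five or more glints) would follow the same pattern, matching claim 5's circular or elliptical source arrangements.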
2. The method according to claim 1, wherein the determining of the gaze point information corresponding to the human eye image according to the position of the first feature point and preset information further comprises:
determining the gaze point information corresponding to the human eye image according to the position of the first feature point and the position of the pupil center point in the human eye image.
3. The method according to claim 1, wherein the determining of the light source corresponding to each extracted light spot according to the position of the first feature point, the position of the second feature point, the position of each extracted light spot, and the position of each light source comprises:
calculating a position conversion relation between the first feature point and the second feature point according to the position of the first feature point and the position of the second feature point;
calculating a standard light source position corresponding to each extracted light spot according to the position conversion relation and the position of each extracted light spot;
and determining the light source whose position matches the standard light source position as the light source corresponding to each extracted light spot.
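Claim 3's "position conversion relation" can be sketched under a deliberate simplification: assume the spot pattern and the source pattern differ only by the translation between the two feature points. The names below are hypothetical, and a real implementation would likely use a richer transform (scale, homography):

```python
import numpy as np

def match_spots_to_sources(spots, sources, first_fp, second_fp):
    """Map each spot into source space via the feature-point offset
    (a pure-translation 'conversion relation' is assumed), then assign
    each mapped spot to the index of the nearest light source."""
    offset = np.asarray(second_fp, float) - np.asarray(first_fp, float)
    sources = np.asarray(sources, float)
    mapping = {}
    for i, spot in enumerate(np.asarray(spots, float)):
        standard_pos = spot + offset                      # predicted "standard" source position
        dists = np.linalg.norm(sources - standard_pos, axis=1)
        mapping[i] = int(np.argmin(dists))                # nearest source wins
    return mapping
```

Nearest-neighbor assignment stands in for the claim's "position matched" test; a tolerance threshold could reject spots with no plausible source.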
4. The method according to claim 1, wherein the determining of the light source corresponding to each extracted light spot according to the position of the first feature point, the position of the second feature point, the position of each extracted light spot, and the position of each light source comprises:
calculating a first relative position between the first feature point and each extracted light spot according to the position of the first feature point and the position of each extracted light spot;
calculating a second relative position between the second feature point and each light source according to the position of the second feature point and the position of each light source;
and determining a second relative position matching the first relative position, and determining the light source corresponding to the determined second relative position as the light source corresponding to each extracted light spot.
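One hedged way to realize claim 4's relative-position matching is to compare directions rather than raw offsets: the angle of each spot about the first feature point is matched against the angle of each source about the second feature point, which tolerates the scale difference between image space and the physical rig. The angle criterion and function name are illustrative assumptions:

```python
import numpy as np

def match_by_relative_angle(spots, sources, first_fp, second_fp):
    """Match each spot to a source by comparing the direction of the spot
    relative to the first feature point with the direction of each source
    relative to the second feature point."""
    def angles(points, center):
        d = np.asarray(points, float) - np.asarray(center, float)
        return np.arctan2(d[:, 1], d[:, 0])

    spot_ang = angles(spots, first_fp)
    src_ang = angles(sources, second_fp)
    mapping = {}
    for i, a in enumerate(spot_ang):
        diff = np.angle(np.exp(1j * (src_ang - a)))   # wrapped angular difference in (-pi, pi]
        mapping[i] = int(np.argmin(np.abs(diff)))
    return mapping
```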
5. The method of claim 1, wherein all of the light sources are arranged in a circle, an ellipse, an approximate circle, or an approximate ellipse, and the light spot distribution curve is a circle or an ellipse.
6. The method according to any one of claims 1 to 5, further comprising:
if two light spots are extracted from the human eye image, determining the light source corresponding to each extracted light spot according to the position of each extracted light spot, the position of each light source, and the position of a human eye feature in the human eye image;
and determining the gaze point information corresponding to the human eye image according to the position of each extracted light spot and the position of the light source corresponding to that light spot; or determining a feature point of a graph formed by all the light sources as a second feature point, determining a mapping point of the second feature point in the human eye image according to the position of the light source corresponding to each extracted light spot, the position of the second feature point, and the position of each extracted light spot, and determining the gaze point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, wherein the second feature point includes a midpoint, a circle center, a focal point, and a center point.
7. The method according to claim 6, wherein the determining the light source corresponding to each extracted light spot according to the position of each extracted light spot, the position of each light source, and the position of a human eye feature in the human eye image comprises:
calculating a third relative position between the extracted light spots according to the extracted position of each light spot, and calculating a fourth relative position between the light sources according to the position of each light source;
calculating a fifth relative position between each extracted light spot and the human eye feature according to the position of each extracted light spot and the position of the human eye feature, and calculating a sixth relative position between each light source and the human eye feature according to the position of each light source and the position of the human eye feature;
and determining the light source whose fourth relative position matches the third relative position and whose sixth relative position matches the fifth relative position as the light source corresponding to each extracted light spot.
8. The method of any of claims 1 to 5, further comprising:
if one light spot is extracted from the human eye image, controlling the light sources to be lit one by one, and determining the light source corresponding to the light spot;
determining a feature point of a graph formed by all the light sources as a second feature point, and determining a mapping point of the second feature point in the human eye image according to the position of the light source corresponding to the extracted light spot, the position of the second feature point, and the position of the extracted light spot;
and determining the gaze point information corresponding to the human eye image according to the position of the mapping point and the position of the pupil center point in the human eye image, wherein the second feature point includes a midpoint, a circle center, a focal point, and a center point.
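For claim 8's single-spot case, a minimal sketch under a pure-translation assumption between source space and image space (all names are hypothetical; the patent does not specify this mapping):

```python
import numpy as np

def map_second_feature_point(spot_pos, source_pos, second_fp):
    """Project the source-pattern feature point (second feature point) into
    the eye image from a single spot/source pair, assuming the two spaces
    differ only by a translation."""
    offset = np.asarray(spot_pos, float) - np.asarray(source_pos, float)
    return np.asarray(second_fp, float) + offset

def gaze_offset(mapping_point, pupil_center):
    """Vector from the mapped feature point to the pupil center; in a full
    system this offset would feed a calibrated gaze mapping."""
    return np.asarray(pupil_center, float) - np.asarray(mapping_point, float)
```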
9. A gaze estimation device, comprising:
a curve determination module, configured to acquire a human eye image and, if at least three light spots are extracted from the human eye image, fit a light spot distribution curve according to the position of each extracted light spot, wherein the human eye image corresponds to a plurality of light sources, and each light spot is an image of one light source;
a feature point determination module, configured to determine a feature point of the light spot distribution curve as a first feature point, wherein the first feature point includes a midpoint, a circle center, a focal point, and a center point;
an information determination module, configured to determine gaze point information corresponding to the human eye image according to the position of the first feature point and preset information;
wherein the information determination module, when determining the gaze point information corresponding to the human eye image according to the position of the first feature point and preset information, is configured to:
determine a feature point of a graph formed by all the light sources as a second feature point, determine the light source corresponding to each extracted light spot according to the position of the first feature point, the position of the second feature point, the position of each extracted light spot, and the position of each light source, and determine the gaze point information corresponding to the human eye image according to the position of each extracted light spot and the position of the light source corresponding to that light spot, wherein the second feature point includes a midpoint, a circle center, a focal point, and a center point;
wherein the preset information includes the position of the second feature point, the position of each extracted light spot, and the position of each light source.
10. The apparatus according to claim 9, wherein the information determination module further comprises:
a first determining submodule, configured to determine the gaze point information corresponding to the human eye image according to the position of the first feature point and the position of the pupil center point in the human eye image.
CN201611207525.8A 2016-12-23 2016-12-23 Sight estimation method and device Active CN106778641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611207525.8A CN106778641B (en) 2016-12-23 2016-12-23 Sight estimation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611207525.8A CN106778641B (en) 2016-12-23 2016-12-23 Sight estimation method and device

Publications (2)

Publication Number Publication Date
CN106778641A CN106778641A (en) 2017-05-31
CN106778641B true CN106778641B (en) 2020-07-03

Family

ID=58919295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611207525.8A Active CN106778641B (en) 2016-12-23 2016-12-23 Sight estimation method and device

Country Status (1)

Country Link
CN (1) CN106778641B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767421B (en) * 2017-09-01 2020-03-27 北京七鑫易维信息技术有限公司 Light spot light source matching method and device in sight tracking equipment
CN108038884B (en) 2017-11-01 2020-12-11 北京七鑫易维信息技术有限公司 Calibration method, calibration device, storage medium and processor
CN108510542B (en) * 2018-02-12 2020-09-11 北京七鑫易维信息技术有限公司 Method and device for matching light source and light spot
CN108898572B (en) * 2018-04-19 2020-11-13 北京七鑫易维信息技术有限公司 Light spot extraction method
CN110032277B (en) * 2019-03-13 2022-08-23 北京七鑫易维信息技术有限公司 Eyeball tracking device and intelligent terminal
CN110123267B (en) * 2019-03-22 2022-02-08 重庆康华瑞明科技股份有限公司 Additional floodlight projection device based on ophthalmic slit lamp and image analysis system
CN112580413A (en) * 2019-09-30 2021-03-30 Oppo广东移动通信有限公司 Human eye region positioning method and related device
CN114428547A (en) * 2020-10-29 2022-05-03 北京七鑫易维信息技术有限公司 Sight tracking method, device, equipment and storage medium
CN116301301A (en) * 2021-12-20 2023-06-23 华为技术有限公司 Eye movement tracking device and eye movement tracking method

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103761519A (en) * 2013-12-20 2014-04-30 哈尔滨工业大学深圳研究生院 Non-contact sight-line tracking method based on self-adaptive calibration
WO2014186347A1 (en) * 2013-05-13 2014-11-20 River Point, Llc Medical headlamp assembly
CN105094300A (en) * 2014-05-16 2015-11-25 北京七鑫易维信息技术有限公司 Standardized eye image based eye gaze tracking system and method
CN205594581U (en) * 2016-04-06 2016-09-21 北京七鑫易维信息技术有限公司 Module is tracked to eyeball of video glasses
CN106168853A (en) * 2016-06-23 2016-11-30 中国科学技术大学 A kind of free space wear-type gaze tracking system

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102930252B (en) * 2012-10-26 2016-05-11 广东百泰科技有限公司 A kind of sight tracing based on the compensation of neutral net head movement
CN104732191B (en) * 2013-12-23 2018-08-17 北京七鑫易维信息技术有限公司 The devices and methods therefor of virtual display Eye-controlling focus is realized using Cross ration invariability
CN103838378B (en) * 2014-03-13 2017-05-31 广东石油化工学院 A kind of wear-type eyes control system based on pupil identification positioning

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
WO2014186347A1 (en) * 2013-05-13 2014-11-20 River Point, Llc Medical headlamp assembly
CN103761519A (en) * 2013-12-20 2014-04-30 哈尔滨工业大学深圳研究生院 Non-contact sight-line tracking method based on self-adaptive calibration
CN105094300A (en) * 2014-05-16 2015-11-25 北京七鑫易维信息技术有限公司 Standardized eye image based eye gaze tracking system and method
CN205594581U (en) * 2016-04-06 2016-09-21 北京七鑫易维信息技术有限公司 Module is tracked to eyeball of video glasses
CN106168853A (en) * 2016-06-23 2016-11-30 中国科学技术大学 A kind of free space wear-type gaze tracking system

Non-Patent Citations (3)

Title
A line-of-sight estimation method based on stereo vision; Wang Changyuan et al.; Multimedia Tools and Applications; Feb. 17, 2016; full text *
Research on gaze tracking technology in human-computer interaction; Yang Rui; China Master's Theses Full-text Database, Information Science and Technology; Apr. 15, 2016; Vol. 2016, No. 04; full text *
Research on gaze point estimation algorithms in gaze tracking systems; Jin Chun et al.; Science Technology and Engineering; May 2016; Vol. 16, No. 14; full text *

Also Published As

Publication number Publication date
CN106778641A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106778641B (en) Sight estimation method and device
US10878236B2 (en) Eye tracking using time multiplexing
US9398848B2 (en) Eye gaze tracking
US8913789B1 (en) Input methods and systems for eye positioning using plural glints
CN107004275B (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least a part of a physical object
CN113808160B (en) Sight direction tracking method and device
EP3453316A1 (en) Eye tracking using eyeball center position
US9961335B2 (en) Pickup of objects in three-dimensional display
KR20180096434A (en) Method for displaying virtual image, storage medium and electronic device therefor
US10902623B1 (en) Three-dimensional imaging with spatial and temporal coding for depth camera assembly
US11243607B2 (en) Method and system for glint/reflection identification
Bohme et al. Remote eye tracking: State of the art and directions for future development
CN109979016B (en) Method for displaying light field image by AR (augmented reality) equipment, AR equipment and storage medium
KR20130107981A (en) Device and method for tracking sight line
CN108697321B (en) Device for gaze tracking within a defined operating range
CN110213491B (en) Focusing method, device and storage medium
CN108398788B (en) Eye tracking device and virtual reality imaging device
EP4356222A1 (en) Variable intensity distributions for gaze detection assembly
CN108537103B (en) Living body face detection method and device based on pupil axis measurement
US11458040B2 (en) Corneal topography mapping with dense illumination
US10485420B2 (en) Eye gaze tracking
KR100960269B1 (en) Apparatus of estimating user's gaze and the method thereof
EP3801196B1 (en) Method and system for glint/reflection identification
US10928894B2 (en) Eye tracking
US20240153136A1 (en) Eye tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant