WO2018076202A1 - Head-mounted display device that can perform eye tracking, and eye tracking method - Google Patents

Head-mounted display device that can perform eye tracking, and eye tracking method Download PDF

Info

Publication number
WO2018076202A1
WO2018076202A1 (PCT/CN2016/103375)
Authority
WO
WIPO (PCT)
Prior art keywords
eye
head
human eye
image information
eyeball
Prior art date
Application number
PCT/CN2016/103375
Other languages
French (fr)
Chinese (zh)
Inventor
李荣茂
臧珊珊
刘燕君
陈昳丽
朱艳春
陈鸣闽
谢耀钦
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院
Priority to PCT/CN2016/103375
Publication of WO2018076202A1


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • The present invention relates to the field of computer technologies, and in particular to a head-mounted visual device; more particularly, to a head-mounted visual device capable of human eye tracking and a human eye tracking method.
  • A head-mounted display (HMD, also known as a head-mounted visual device) reflects a two-dimensional image directly into the viewer's eyes: a set of optical systems (primarily precision optical lenses) magnifies the image on an ultra-micro display and projects it onto the retina, so that a large-screen image is presented to the viewer. Intuitively, it is like looking at an object through a magnifying glass, which presents an enlarged virtual image of the object.
  • The image can be obtained directly from a light-emitting diode (LED), an active-matrix liquid crystal display (AMLCD), an organic light-emitting diode (OLED), or liquid crystal on silicon (LCOS), or indirectly through optical-fiber transmission or the like.
  • The display system is imaged at infinity by a collimating lens and then reflected into the human eye by a reflecting surface. Because of their portability and entertainment value, head-mounted visual devices are quietly changing modern life.
  • However, existing head-mounted visual devices cannot actively interact with the user: the wearer must actively operate the device, while the device cannot actively sense the user's attention or mood. Eye-tracking technology has therefore been proposed as a way to actively perceive the user's attention and mood.
  • However, there is currently no good solution for using eye-tracking technology inside a head-mounted visual device to track eye information in real time and obtain the eye's gaze point in space. In the design of head-mounted visual devices, weight is a non-negligible factor: although mature eye trackers already exist as products, embedding an eye tracker directly in a head-mounted visual device would undoubtedly increase the weight of the virtual reality helmet and degrade the user experience.
  • The technical problem to be solved by the present invention is to provide a head-mounted visual device capable of human eye tracking and a human eye tracking method, solving the problem that existing head-mounted visual devices cannot track the viewing orientation of the human eye.
  • A specific embodiment of the present invention provides a head-mounted visual device capable of human eye tracking, comprising: a virtual reality helmet for housing the head-mounted visual device; a light source disposed in the virtual reality helmet for illuminating the eyeball of the human eye; and a miniature camera disposed on the virtual reality helmet for collecting eyeball image information of the human eye, so that a server can determine the orientation information of the pupil of the human eye from the eyeball image information.
  • A specific embodiment of the present invention further provides a human eye tracking method for a head-mounted visual device, comprising: illuminating the eyeball of the human eye with an LED light source; collecting eyeball image information of the human eye with a miniature camera; and determining the orientation information of the pupil of the human eye from the eyeball image information using a spatial mapping relationship.
  • The head-mounted visual device capable of human eye tracking and the human eye tracking method have at least the following beneficial effects: a miniature camera and an LED light source are embedded in the head-mounted visual device, several reference points are set in the virtual scene, and a three-dimensional matrix is used to construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball; the miniature camera then captures eyeball image information, and the acquired eyeball image information is analyzed according to the spatial mapping relationship, so that the pupil focus area can be obtained in real time and the user's viewing orientation determined, without increasing the weight of the head-mounted visual device and without leaking environmental information around the user, thereby improving the user experience.
  • FIG. 1A is a schematic structural diagram of the main body of a head-mounted visual device capable of human eye tracking according to an embodiment of the present invention;
  • FIG. 1B is a schematic rear-view structural diagram of a head-mounted visual device capable of human eye tracking according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of Embodiment 1 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention;
  • FIG. 3 is a flowchart of Embodiment 2 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention;
  • FIG. 4 is a three-dimensional coordinate diagram of the spatial positional relationship among the miniature camera, the reference points, and the human eyeball according to an embodiment of the present invention;
  • FIG. 5 is a coordinate conversion relationship diagram according to an embodiment of the present invention.
  • Orientation terms used herein, for example up, down, left, right, front, or back, refer only to the orientation in the drawings; they are used for illustration and are not intended to limit the invention.
  • FIG. 1A is a schematic structural diagram of the main body of a head-mounted visual device capable of human eye tracking according to an embodiment of the present invention, and FIG. 1B is a schematic rear-view structural diagram of the same device. As shown in FIG. 1A and FIG. 1B, a light source and a miniature camera are disposed on each side of the virtual reality helmet lenses: one light source and one miniature camera correspond to one of the user's eyes, and the other light source and miniature camera correspond to the other eye. The light source illuminates the eyeball of the human eye, and the miniature camera collects eyeball image information of the human eye, so that the server can determine the orientation information of the pupil of the human eye from the eyeball image information.
  • In the specific embodiment shown in the figures, the head-mounted visual device comprises a virtual reality helmet 10, a light source 20, and a miniature camera 30. The virtual reality helmet 10 houses the head-mounted visual device; the light source 20, disposed in the virtual reality helmet 10, illuminates the eyeball of the human eye; and the miniature camera 30, also disposed in the virtual reality helmet 10, collects eyeball image information of the human eye so that the server can determine the orientation information of the pupil from that information. The miniature camera 30 may be a miniature video camera, a miniature still camera, or the like; the light source 20 may be a miniature LED light source and is switched on and off momentarily while the miniature camera 30 collects the eyeball image information. The miniature camera 30 is connected to the server through its HDMI data line.
  • The orientation information of the pupil of the human eye specifically means: take the straight line along which the eye looks horizontally straight ahead as a reference line, then connect the viewing target point with the pupil; the angle and positional relationship between this connecting line and the reference line constitute the orientation information of the pupil.
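  • As an illustration of this definition, the following minimal sketch (Python/NumPy; the straight-ahead reference direction is assumed to be the +Z axis, and all names are hypothetical, not from the patent) computes the angle between the pupil-to-target line and the reference line:

    import numpy as np

    def gaze_angle_deg(pupil, target, reference_dir=(0.0, 0.0, 1.0)):
        # Gaze line: from the pupil to the viewing target point.
        g = np.asarray(target, float) - np.asarray(pupil, float)
        r = np.asarray(reference_dir, float)
        cos_ang = (g @ r) / (np.linalg.norm(g) * np.linalg.norm(r))
        # Angle between the gaze line and the straight-ahead reference line.
        return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))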
  • Further, the server calculates the orientation information of the pupil of the human eye from the spatial positional relationship among the miniature camera 30, the reference points, and the eyeball. There are at least four reference points.
  • The light source 20 specifically comprises a first LED light source 201 and a second LED light source 202. The first LED light source 201 is disposed at the left lens edge of the virtual reality helmet 10 and illuminates the left eyeball; the second LED light source 202 is disposed at the right lens edge of the virtual reality helmet 10 and illuminates the right eyeball.
  • The miniature camera 30 specifically comprises a first miniature camera 301 and a second miniature camera 302. The first miniature camera 301 is disposed at the left lens edge of the virtual reality helmet 10 and captures eyeball image information of the left eye; the second miniature camera 302 is disposed at the right lens edge of the virtual reality helmet 10 and captures eyeball image information of the right eye.
  • In a specific embodiment of the present invention, the server obtains the left-eye optical axis vector of the left eye's gaze orientation from the eyeball image information of the left eye, obtains the right-eye optical axis vector of the right eye's gaze orientation from the eyeball image information of the right eye, and then determines the orientation information of the pupil of the human eye from the intersection of the left-eye and right-eye optical axis vectors.
  • A miniature camera and a light source are disposed in the virtual reality helmet, several reference points are set in the virtual scene, and a three-dimensional matrix is used to construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball; the miniature camera then collects eyeball image information, and the acquired eyeball image information is analyzed according to the spatial mapping relationship, so that the pupil focus area can be obtained in real time and the user's viewing orientation determined, without increasing the weight of the head-mounted visual device and without revealing environmental information around the user.
  • The power supply is integrated into a USB interface (not shown) to power electronic components such as the light source 20 and the miniature camera 30 in the virtual reality helmet. The head-mounted visual device is connected to the server through an HDMI data line, over which the server switches the light source 20 and directs the miniature camera 30 to collect eyeball image information; processing of the eyeball image information collected by the miniature camera 30 is performed by the server. In other embodiments, a processor may instead be provided in the virtual reality helmet 10 to perform the processing and control work of the server.
  • FIG. 2 is a flowchart of Embodiment 1 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention. At the instant the LED light source is switched on, the miniature camera collects eyeball image information of the human eye, and the orientation information of the pupil is determined by analyzing the collected eyeball image information.
  • Step 101: Illuminate the eyeball of the human eye with the LED light source. The LED light source works like a camera flash: it is switched off immediately after being switched on, so it does not affect the user's normal visual experience.
  • Step 102: Collect eyeball image information of the human eye with the miniature camera. The miniature camera collects the eyeball image information at the instant the LED light source is switched on; the miniature camera may be a miniature video camera, a miniature still camera, or the like.
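  • A minimal sketch of this flash-synchronized capture is given below (Python with OpenCV; led_on/led_off are hypothetical placeholders for whatever driver actually controls the LED, and the camera index depends on the hardware):

    import cv2

    def led_on():
        pass  # placeholder: switch the LED on via the actual driver (GPIO, USB, ...)

    def led_off():
        pass  # placeholder: switch the LED off again

    def grab_eye_frame(camera_index=0):
        cap = cv2.VideoCapture(camera_index)
        try:
            led_on()                # Step 101: illuminate the eyeball momentarily
            ok, frame = cap.read()  # Step 102: capture one frame while the eye is lit
            if not ok:
                raise RuntimeError("camera read failed")
            return frame
        finally:
            led_off()               # switch off immediately, like a camera flash
            cap.release()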
  • Step 103: Determine the orientation information of the pupil of the human eye from the eyeball image information using the spatial mapping relationship. In a specific embodiment of the present invention, step 103 comprises: collecting eyeball image information of the left eye and of the right eye; obtaining the left-eye optical axis vector of the left eye's gaze orientation from the left eye's eyeball image information, and the right-eye optical axis vector of the right eye's gaze orientation from the right eye's eyeball image information; and determining the orientation information of the pupil of the human eye from the left-eye and right-eye optical axis vectors.
  • The miniature camera (a sensor such as a miniature still camera may also be used) collects eyeball image information of the human eye, and the acquired eyeball image information is analyzed according to the spatial mapping relationship, so that the pupil focus area can be obtained in real time and the user's viewing orientation determined, without increasing the weight of the head-mounted visual device and without revealing environmental information around the user, thereby improving the user experience.
  • FIG. 3 is a flowchart of Embodiment 2 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention. Before eye tracking is performed on the user, a three-dimensional matrix must be used to construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball. Before step 101, the method therefore further comprises:
  • Step 100: Construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball using a three-dimensional matrix.
  • Different functional forms and three-dimensional matrix forms are used to fit the one-to-one mapping between the coordinate system of the eyeball and the coordinate system of the reference points, as well as the positional relationship between the miniature camera and the eyeball, finally constructing the spatial mapping relationship among the miniature camera, the reference points, and the eyeball. Using this spatial mapping relationship together with the collected eyeball image information, the user's visual gaze point in the virtual space can be calculated in real time.
  • FIG. 4 is a three-dimensional coordinate diagram of the spatial positional relationship among the miniature camera, the reference points, and the human eyeball according to an embodiment of the present invention.
  • The present invention provides a pupil focus area tracking scheme applied to a virtual reality helmet. A miniature camera (for example, a miniature video camera) is mounted on each side of the lenses of the head-mounted visual device (for example, a virtual reality helmet), and an LED light source is mounted at the edge of each miniature camera lens. Exploiting the working characteristics of the virtual reality helmet, four reference points are set in the virtual scene. When the eyeball fixates on a reference point, the LED light source is switched on, and the miniature camera captures and records real-time image information of the eyeball and pupil. Then, combining the spatial positional relationships among the coordinate systems of the miniature camera, the reference points, and the eyeball, different functional forms and matrix forms are used to fit the one-to-one mapping between the eye reference frame and the reference frame of the reference points, yielding the pupil position and its orientation information, from which the position coordinates of any visual gaze point in the space can be calculated.
  • In FIG. 4: E1 and E2 are the origins of the rectangular spatial coordinate systems of the left and right eyeballs; S1 and S2 are the origins of the rectangular spatial coordinate systems of the miniature cameras; O is the origin of the rectangular spatial coordinate system of the target fixation point; X1 and X2 are reference points set in virtual reality, located on the perpendicular bisector of the line segment joining the two eyeballs; X3 is the target fixation point in the virtual reality scene; H1, H2, and Ct are the vertical distances between the cameras and the human eyes; L is the distance between the two eyeballs; Cs is the distance between the two miniature cameras; the distance between reference points X1 and X2 equals the distance between reference point X1 and S0, both being ΔX; and the angle ∠E1X1E2 is 2θ.
  • Based on the conversion relationships and spatial positional relationships among the different coordinate systems shown in FIG. 4 (the eyeball coordinate system E, the camera coordinate system S, and the reference-point coordinate system O), the spatial position and orientation information of the pupil are calculated, giving the vector coordinates of the pupil fixating on a given point.
  • The spatial position of the pupil can be expressed as the position vector t = [x y z]^T. Motion of the pupil in space involves position information in three dimensions (the X, Y, and Z axes), so there would normally be three unknown parameters; but because the pupil moves on the fixed surface of the eyeball, the two-dimensional space of the pupil's motion on that surface contains only two unknown parameters, μ0 and θ0, and the remaining parameter is directly determined by μ0 and θ0.
  • The gaze orientation of the pupil is its rotation angle in the three dimensions of the space it occupies, denoted R. Integrating the spatial position and orientation data of the pupil gives the vector coordinate information [R, t] when the pupil fixates on a given point, where R is a 3×3 rotation matrix representing the gaze orientation of the pupil, and t is a 3×1 vector representing the spatial position information of the pupil. Because the rotation is likewise constrained to the fixed surface of the eyeball, only two rotation angles are unknown parameters (writing α and β for the two angles): rotation about the X axis, given by (1): x' = x, y' = y·cos α − z·sin α, z' = y·sin α + z·cos α; and rotation about the Z axis, given by (2): x' = x·cos β − y·sin β, y' = x·sin β + y·cos β, z' = z. These two rotation angles determine the value of R, which can be obtained from (1) and (2).
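  • A short sketch of how R and the pose [R, t] can be assembled from the two angles follows (Python/NumPy; the composition order R = Rx(α)·Rz(β) is an assumption, since the text states only that the two angles determine R):

    import numpy as np

    def rot_x(alpha):
        # Rotation about the X axis, equation (1).
        c, s = np.cos(alpha), np.sin(alpha)
        return np.array([[1.0, 0.0, 0.0],
                         [0.0,   c,  -s],
                         [0.0,   s,   c]])

    def rot_z(beta):
        # Rotation about the Z axis, equation (2).
        c, s = np.cos(beta), np.sin(beta)
        return np.array([[  c,  -s, 0.0],
                         [  s,   c, 0.0],
                         [0.0, 0.0, 1.0]])

    def pupil_pose(alpha, beta, t):
        # Pose [R | t]: gaze orientation R and pupil position t (3-vector).
        R = rot_x(alpha) @ rot_z(beta)  # assumed composition order
        return np.hstack([R, np.asarray(t, float).reshape(3, 1)])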
  • The coordinate system of the reference points X1 and X2 is denoted the plane coordinate system O; the coordinate system of the eyeball is denoted the eye three-dimensional coordinate system E; the coordinate system of the camera is denoted S; and the coordinate system of the two-dimensional eye-movement images captured by the camera is denoted B. From the relationships among the camera, the reference points, and the eyeball coordinate systems in the virtual reality eye-tracking system, the coordinate conversion relationship diagram shown in FIG. 5 is obtained.
  • T_O←E = T_O←S · T_S←B · T_B←E
  • In this equation, T_O←E represents the conversion from the eye coordinate system E to the coordinate system O of the reference points, and can be obtained by calibration against the reference points. Likewise, T_O←S, the pose of the camera coordinate system S relative to the reference-point coordinate system O, and T_S←B, the pose of the coordinate system B of the camera's two-dimensional images relative to the camera coordinate system S, can both be obtained by calibration.
  • T_B←E: the two unknown parameters (x, y) in T_B←E, i.e. the transformation between the current eye coordinate system E and the coordinate system B of the two-dimensional images, are calculated from the reference points. The eyeball has two unknown quantities relative to the eye socket; constrained by the shapes of the eye socket and eyeball, the eye can move only about the X and Y axes, so calibration against the reference points yields the two unknowns in T_B←E and hence its conversion relationship.
  • Through calibration against the reference points, together with the coordinate system conversion relationships, the unknown parameters of the coordinate systems can be calculated.
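  • The calibration chain above composes naturally as homogeneous 4×4 transforms; the sketch below (Python/NumPy) is one way to express it, with T_OS and T_SB obtained from one-off calibration and T_BE solved from the reference points (the matrix representation is an assumption; the patent specifies only the chain itself):

    import numpy as np

    def homogeneous(R, t):
        # Pack a rotation R (3x3) and translation t (3-vector) into a 4x4 transform.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = np.asarray(t, float)
        return T

    def eye_to_reference(T_OS, T_SB, T_BE):
        # T_O<-E = T_O<-S . T_S<-B . T_B<-E : eye coordinates -> reference-point coordinates.
        return T_OS @ T_SB @ T_BE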
  • In the mapping between a point M = [X Y Z]^T in three-dimensional space and its image coordinates m = [x y]^T (see the mapping relationship in the detailed description below), R is a 3×3 rotation matrix, t is a 3×1 vector, and C is the internal (intrinsic) matrix. The four external parameters of the pupil determine the position and orientation of the pupil relative to the scene: two rotation angles uniquely determine R, and the other two parameters constitute t. Of the four internal parameters in C, the starting point (x0, y0) represents the pixel coordinates at the intersection of the optical axis and the reference point, and fx and fy represent the focal lengths in the horizontal and vertical directions, respectively.
  • By the above method, the two-dimensional eyeball images captured by the cameras can be converted into optical axis vector coordinates of the eyes' gaze orientations; the intersection of the optical axis vectors obtained for the two eyes is the target gaze region. Three main cases arise:
  • First case: the optical axes intersect. The optical axis vectors obtained for the two eyes intersect directly, giving the target fixation point.
  • Second case: the light columns intersect. Based on each user's eyeball characteristics, a light column of radius r (obtainable from the user's eye characteristics) is formed, centered on the optical axis vector Fo; the intersection of the left-eye and right-eye light columns is the target gaze region.
  • Third case: the light cones intersect. The actual geometric range of the line of sight is a cone of light with the retina at its apex and the line of sight as its central axis; the cone subtends an angle, so the field of view is an area on the viewing focal plane. The intersection of the two such areas is the focus region, and the geometric center of the focus region is the focus point.
  • In practice, the first two methods already yield sufficient approximation accuracy.
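  • Since two measured optical axes rarely intersect exactly, the first two cases can be approximated by the closest point between the two gaze rays, as in the hedged sketch below (Python/NumPy; the midpoint rule and the tolerance test against the light-column radius r are assumptions added for robustness, not specified by the patent):

    import numpy as np

    def fixation_point(p_left, d_left, p_right, d_right, r=None):
        # p_*: eyeball origins (E1, E2); d_*: optical axis direction vectors.
        d_l = np.asarray(d_left, float)
        d_l = d_l / np.linalg.norm(d_l)
        d_r = np.asarray(d_right, float)
        d_r = d_r / np.linalg.norm(d_r)
        w = np.asarray(p_left, float) - np.asarray(p_right, float)
        b = d_l @ d_r
        denom = 1.0 - b * b
        if np.isclose(denom, 0.0):
            return None  # parallel axes: no meaningful intersection
        s = (b * (d_r @ w) - (d_l @ w)) / denom
        t = ((d_r @ w) - b * (d_l @ w)) / denom
        q_l = np.asarray(p_left, float) + s * d_l    # closest point on left axis
        q_r = np.asarray(p_right, float) + t * d_r   # closest point on right axis
        if r is not None and np.linalg.norm(q_l - q_r) > 2.0 * r:
            return None  # the two light columns of radius r do not intersect
        return (q_l + q_r) / 2.0  # midpoint as the target fixation point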
  • By setting reference points in the virtual scene to collect eyeball image data while the pupil fixates on different target points, and by using the spatial positional relationships of the system, the conversions between the different coordinate systems, and the image data, the user's visual gaze point in the virtual space can be calculated in real time.
  • The solution of the invention mainly comprises the following: placement of the cameras and LED light sources at the edges of the virtual reality helmet; setting of reference points in the virtual reality scene; photographing of pupil movement images; segmentation of the eye white and pupil from the image information to obtain the positional relationship between the pupil and the eyeball; and calculation of the real-time position and focus direction of the pupil from the acquired data.
  • A miniature camera is placed at the edge of each lens of the virtual reality helmet to capture changes in the user's eyes, and an LED light source is arranged beside each miniature camera to provide illumination that helps the camera collect data. The positional arrangement of the miniature cameras is shown in FIG. 4.
  • Setting reference points: before the user uses the virtual reality helmet, four target points from near to far are set as reference points in the default virtual scene. The reference points serve to obtain data while the eyes fixate on them: when the user's pupils focus on a reference point, the camera captures the image information of the user's eyeballs at that moment.
  • Photographing eye-movement images: when the user's eyes fixate on each reference point, the LED light is switched on and the camera takes a group of images recording the pupil motion information, yielding the image data.
  • Analyzing the image information to obtain the positional relationship between the pupil and the eyeball: the different sets of image information captured by the cameras are transmitted to the server, and the eye white and the pupil are segmented through image analysis. Different functional forms and matrix forms are then used to fit the one-to-one mapping between the eye reference frame and the reference frame of the reference points, yielding the pupil position and its orientation information, from which the user's visual gaze point in the virtual space is calculated in real time.
  • The application environment of the invention is eye tracking inside a virtual reality immersive helmet, using a geometric approximation of the near-field line of sight. Nothing beyond the eye region is tracked, so the environment is controllable and protects the user's personal information; interaction is convenient and easy to use and does not reveal the user's surroundings. Because a geometric line-of-sight model is used, no optical-path reconstruction parameter model of the user's lens, pupil, cornea, vitreous body, and so on needs to be computed, so the computational load is small and the implementation is simple.
  • Embodiments of the present invention may be implemented in various kinds of hardware, in software code, or in combinations of both. For example, an embodiment of the present invention may also be program code executing the above method on a digital signal processor (DSP). The invention may also relate to various functions performed by a computer processor, digital signal processor, microprocessor, or field-programmable gate array (FPGA). The processors described above may be configured to perform specific tasks in accordance with the present invention by executing machine-readable software code or firmware code that defines the particular methods disclosed herein. The software code or firmware code may be developed in different programming languages and different formats or forms, and may also be compiled for different target platforms; however, different code styles, types, and languages of the software code, and other forms of configuration code for performing tasks in accordance with the present invention, do not depart from the spirit and scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Human Computer Interaction (AREA)
  • Position Input By Displaying (AREA)

Abstract

A head-mounted display device that can perform eye tracking, and an eye tracking method, wherein the head-mounted display comprises: a virtual reality helmet (10), used for accommodating the head-mounted display device; a light source (20), provided inside the virtual reality helmet (10) for illuminating the eyeballs of human eyes; and a miniature camera (30), provided inside the virtual reality helmet (10) to collect eyeball image information about the human eyes, so that a server can determine orientation information about pupils of the human eyes according to the eyeball image information. The viewing orientation of a user can be determined in real time without increasing the weight of the head-mounted display device.

Description

Head-mounted visual device capable of human eye tracking and human eye tracking method

Technical field

The present invention relates to the field of computer technologies, and in particular to a head-mounted visual device; specifically, to a head-mounted visual device capable of human eye tracking and a human eye tracking method.

Background art

In recent years, head-mounted visual devices have emerged in large numbers, for example Lenovo glasses, Google glasses, and virtual reality (VR) gaming glasses, and virtual reality (VR), augmented reality (AR), and mixed reality (MR) technologies are gradually entering our daily lives. A head-mounted display (HMD, also known as a head-mounted visual device) reflects a two-dimensional image directly into the viewer's eyes: a set of optical systems (primarily precision optical lenses) magnifies the image on an ultra-micro display and projects it onto the retina, so that a large-screen image is presented to the viewer; intuitively, it is like looking at an object through a magnifying glass, which presents an enlarged virtual image of the object. The image can be obtained directly from a light-emitting diode (LED), an active-matrix liquid crystal display (AMLCD), an organic light-emitting diode (OLED), or liquid crystal on silicon (LCOS), or indirectly through optical-fiber transmission or the like. The display system is imaged at infinity by a collimating lens and then reflected into the human eye by a reflecting surface. Because of their portability and entertainment value, head-mounted visual devices are quietly changing modern life.

However, existing head-mounted visual devices cannot actively interact with the user: the wearer must actively operate the device, while the device cannot actively sense the user's attention or mood. Eye-tracking technology has therefore been proposed as a way to actively perceive the user's attention and mood. However, there is currently no good solution for using eye-tracking technology inside a head-mounted visual device to track eye information in real time and obtain the eye's gaze point in space. In the design of head-mounted visual devices, weight is a non-negligible factor: although mature eye trackers already exist as products, embedding an eye tracker directly in a head-mounted visual device would undoubtedly increase the weight of the virtual reality helmet and degrade the user experience.

Therefore, how to give a head-mounted visual device an eye-tracking function without increasing its weight is a problem that those skilled in the art have long needed to solve.
Summary of the invention

In view of this, the technical problem to be solved by the present invention is to provide a head-mounted visual device capable of human eye tracking and a human eye tracking method, solving the problem that existing head-mounted visual devices cannot track the viewing orientation of the human eye.

To solve the above technical problem, a specific embodiment of the present invention provides a head-mounted visual device capable of human eye tracking, comprising: a virtual reality helmet for housing the head-mounted visual device; a light source disposed in the virtual reality helmet for illuminating the eyeball of the human eye; and a miniature camera disposed on the virtual reality helmet for collecting eyeball image information of the human eye, so that a server can determine the orientation information of the pupil of the human eye from the eyeball image information.

A specific embodiment of the present invention further provides a human eye tracking method for a head-mounted visual device, comprising: illuminating the eyeball of the human eye with an LED light source; collecting eyeball image information of the human eye with a miniature camera; and determining the orientation information of the pupil of the human eye from the eyeball image information using a spatial mapping relationship.

According to the above specific embodiments of the present invention, the head-mounted visual device capable of human eye tracking and the human eye tracking method have at least the following beneficial effects: a miniature camera and an LED light source are embedded in the head-mounted visual device, several reference points are set in the virtual scene, and a three-dimensional matrix is used to construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball; the miniature camera then captures eyeball image information, and the acquired eyeball image information is analyzed according to the spatial mapping relationship, so that the pupil focus area can be obtained in real time and the user's viewing orientation determined, without increasing the weight of the head-mounted visual device and without leaking environmental information around the user, thereby improving the user experience.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the scope claimed by the present invention.
Brief description of the drawings

The accompanying drawings below form part of the specification of the present invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the principles of the invention.

FIG. 1A is a schematic structural diagram of the main body of a head-mounted visual device capable of human eye tracking according to an embodiment of the present invention;

FIG. 1B is a schematic rear-view structural diagram of a head-mounted visual device capable of human eye tracking according to an embodiment of the present invention;

FIG. 2 is a flowchart of Embodiment 1 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention;

FIG. 3 is a flowchart of Embodiment 2 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention;

FIG. 4 is a three-dimensional coordinate diagram of the spatial positional relationship among the miniature camera, the reference points, and the human eyeball according to an embodiment of the present invention;

FIG. 5 is a coordinate conversion relationship diagram according to an embodiment of the present invention.
Detailed description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the spirit of the disclosure is clearly described below with reference to the drawings and detailed description; after understanding the embodiments of the present disclosure, those skilled in the art may change and modify the techniques taught herein without departing from the spirit and scope of the disclosure.

The illustrative embodiments of the invention and their description are intended to explain the invention, not to limit it. In addition, the same or similar reference numerals are used in the drawings and embodiments to denote the same or similar parts.

Terms such as "first" and "second" used herein do not denote any particular order or sequence, nor are they intended to limit the invention; they merely distinguish elements or operations described with the same technical term.

Orientation terms used herein, for example up, down, left, right, front, or back, refer only to the orientation in the drawings; they are used for illustration and are not intended to limit the invention.

Terms such as "comprise", "include", "have", and "contain" used herein are open-ended terms, meaning including but not limited to.

"And/or" as used herein includes any and all combinations of the listed items.
FIG. 1A is a schematic structural diagram of the main body of a head-mounted visual device capable of human eye tracking according to an embodiment of the present invention, and FIG. 1B is a schematic rear-view structural diagram of the same device. As shown in FIG. 1A and FIG. 1B, a light source and a miniature camera are disposed on each side of the virtual reality helmet lenses: one light source and one miniature camera correspond to one of the user's eyes, and the other light source and miniature camera correspond to the other eye. The light source illuminates the eyeball of the human eye, and the miniature camera collects eyeball image information of the human eye, so that the server can determine the orientation information of the pupil of the human eye from the eyeball image information.

In the specific embodiment shown in the figures, the head-mounted visual device comprises a virtual reality helmet 10, a light source 20, and a miniature camera 30. The virtual reality helmet 10 houses the head-mounted visual device; the light source 20, disposed in the virtual reality helmet 10, illuminates the eyeball of the human eye; and the miniature camera 30, also disposed in the virtual reality helmet 10, collects eyeball image information of the human eye so that the server can determine the orientation information of the pupil from that information. The miniature camera 30 may be a miniature video camera, a miniature still camera, or the like; the light source 20 may be a miniature LED light source and is switched on and off momentarily while the miniature camera 30 collects the eyeball image information. The miniature camera 30 is connected to the server through its HDMI data line. The orientation information of the pupil of the human eye specifically means: take the straight line along which the eye looks horizontally straight ahead as a reference line, then connect the viewing target point with the pupil; the angle and positional relationship between this connecting line and the reference line constitute the orientation information of the pupil.

Further, the server calculates the orientation information of the pupil of the human eye from the spatial positional relationship among the miniature camera 30, the reference points, and the eyeball. There are at least four reference points.

In addition, as shown in FIG. 1B, the light source 20 specifically comprises a first LED light source 201 and a second LED light source 202. The first LED light source 201 is disposed at the left lens edge of the virtual reality helmet 10 and illuminates the left eyeball; the second LED light source 202 is disposed at the right lens edge of the virtual reality helmet 10 and illuminates the right eyeball.

The miniature camera 30 specifically comprises a first miniature camera 301 and a second miniature camera 302. The first miniature camera 301 is disposed at the left lens edge of the virtual reality helmet 10 and captures eyeball image information of the left eye; the second miniature camera 302 is disposed at the right lens edge of the virtual reality helmet 10 and captures eyeball image information of the right eye.

In a specific embodiment of the present invention, the server obtains the left-eye optical axis vector of the left eye's gaze orientation from the eyeball image information of the left eye, obtains the right-eye optical axis vector of the right eye's gaze orientation from the eyeball image information of the right eye, and then determines the orientation information of the pupil of the human eye from the intersection of the left-eye and right-eye optical axis vectors.

Referring to FIG. 1A and FIG. 1B, a miniature camera and a light source are disposed in the virtual reality helmet, several reference points are set in the virtual scene, and a three-dimensional matrix is used to construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball; the miniature camera then collects eyeball image information, and the acquired eyeball image information is analyzed according to the spatial mapping relationship, so that the pupil focus area can be obtained in real time and the user's viewing orientation determined, without increasing the weight of the head-mounted visual device and without revealing environmental information around the user.

In a specific embodiment of the present invention, the power supply in the virtual reality helmet 10 is integrated into a USB interface (not shown) to power electronic components such as the light source 20 and the miniature camera 30 in the virtual reality helmet. The head-mounted visual device is connected to the server through an HDMI data line, over which the server switches the light source 20 and directs the miniature camera 30 to collect eyeball image information; processing of the eyeball image information collected by the miniature camera 30 is performed by the server. In other embodiments of the present invention, a processor may instead be provided in the virtual reality helmet 10 to perform the processing and control work of the server.
FIG. 2 is a flowchart of Embodiment 1 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention. As shown in FIG. 2, at the instant the LED light source is switched on, the miniature camera collects eyeball image information of the human eye, and the orientation information of the pupil is determined by analyzing the collected eyeball image information.

The specific embodiment shown in the figure comprises:

Step 101: Illuminate the eyeball of the human eye with the LED light source. The LED light source works like a camera flash: it is switched off immediately after being switched on, so it does not affect the user's normal visual experience.

Step 102: Collect eyeball image information of the human eye with the miniature camera. The miniature camera collects the eyeball image information at the instant the LED light source is switched on; the miniature camera may be a miniature video camera, a miniature still camera, or the like.

Step 103: Determine the orientation information of the pupil of the human eye from the eyeball image information using the spatial mapping relationship. In a specific embodiment of the present invention, step 103 comprises: collecting eyeball image information of the left eye and of the right eye; obtaining the left-eye optical axis vector of the left eye's gaze orientation from the left eye's eyeball image information, and the right-eye optical axis vector of the right eye's gaze orientation from the right eye's eyeball image information; and determining the orientation information of the pupil of the human eye from the left-eye and right-eye optical axis vectors.

Referring to FIG. 2, the miniature camera (a sensor such as a miniature still camera may also be used) collects eyeball image information of the human eye, and the acquired eyeball image information is analyzed according to the spatial mapping relationship, so that the pupil focus area can be obtained in real time and the user's viewing orientation determined, without increasing the weight of the head-mounted visual device and without revealing environmental information around the user, thereby improving the user experience.
FIG. 3 is a flowchart of Embodiment 2 of a human eye tracking method for a head-mounted visual device according to an embodiment of the present invention. As shown in FIG. 3, before eye tracking is performed on the user, a three-dimensional matrix must be used to construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball.

In the specific embodiment shown in the figure, before step 101 the method further comprises:

Step 100: Construct the spatial mapping relationship among the miniature camera, the reference points, and the eyeball using a three-dimensional matrix.

Referring to FIG. 3, different functional forms and three-dimensional matrix forms are used to fit the one-to-one mapping between the coordinate system of the eyeball and the coordinate system of the reference points, as well as the positional relationship between the miniature camera and the eyeball, finally constructing the spatial mapping relationship among the miniature camera, the reference points, and the eyeball. Using this spatial mapping relationship together with the collected eyeball image information, the user's visual gaze point in the virtual space can be calculated in real time.

FIG. 4 is a three-dimensional coordinate diagram of the spatial positional relationship among the miniature camera, the reference points, and the human eyeball according to an embodiment of the present invention. As shown in FIG. 4, the present invention provides a pupil focus area tracking scheme applied to a virtual reality helmet: a miniature camera (for example, a miniature video camera) is mounted on each side of the lenses of the head-mounted visual device (for example, a virtual reality helmet), and an LED light source is mounted at the edge of each miniature camera lens. Exploiting the working characteristics of the virtual reality helmet, four reference points are set in the virtual scene. When the eyeball fixates on a reference point, the LED light source is switched on, and the miniature camera captures and records real-time image information of the eyeball and pupil. Then, combining the spatial positional relationships among the coordinate systems of the miniature camera, the reference points, and the eyeball, different functional forms and matrix forms are used to fit the one-to-one mapping between the eye reference frame and the reference frame of the reference points, yielding the pupil position and its orientation information, from which the position coordinates of any visual gaze point in the space can be calculated. The spatial positional relationships of the system are shown in FIG. 4, in which: E1 and E2 are the origins of the rectangular spatial coordinate systems of the left and right eyeballs; S1 and S2 are the origins of the rectangular spatial coordinate systems of the miniature cameras; O is the origin of the rectangular spatial coordinate system of the target fixation point; X1 and X2 are reference points set in virtual reality, located on the perpendicular bisector of the line segment joining the two eyeballs; X3 is the target fixation point in the virtual reality scene; H1, H2, and Ct are the vertical distances between the cameras and the human eyes; L is the distance between the two eyeballs; Cs is the distance between the two miniature cameras; the distance between reference points X1 and X2 equals the distance between reference point X1 and S0, both being ΔX; and the angle ∠E1X1E2 is 2θ.
Based on the conversion relationships and spatial positional relationships among the different coordinate systems shown in FIG. 4 (the eyeball coordinate system E, the camera coordinate system S, and the reference-point coordinate system O), the spatial position and orientation information of the pupil are calculated, giving the vector coordinates of the pupil fixating on a given point. The spatial position of the pupil can be expressed as the position vector t = [x y z]^T. Motion of the pupil in space involves position information in three dimensions (the X, Y, and Z axes), so there would normally be three unknown parameters; but because the pupil moves on the fixed surface of the eyeball, the two-dimensional space of the pupil's motion on that surface contains only two unknown parameters, μ0 and θ0, and the remaining parameter is directly determined by μ0 and θ0. In addition, the gaze orientation of the pupil is its rotation angle in the three dimensions of the space it occupies, denoted R. Integrating the spatial position and orientation data of the pupil gives the vector coordinate information [R, t] when the pupil fixates on a given point, where R is a 3×3 rotation matrix representing the gaze orientation of the pupil, and t is a 3×1 vector representing the spatial position information of the pupil. Because the rotation angle R is likewise constrained to the fixed surface of the eyeball, only two rotation angles are unknown parameters: one about the X axis and one about the Z axis; these two rotation angles determine the value of R.
Rotation about the X axis by angle α (1):

x' = x
y' = y·cos α − z·sin α
z' = y·sin α + z·cos α

Rotation about the Z axis by angle β (2):

x' = x·cos β − y·sin β
y' = x·sin β + y·cos β
z' = z

(writing α and β for the two rotation angles). The value of R is determined from (1) and (2).

Through calibration against the reference points, the unknown parameters in the system are obtained, and the orientation and position coordinate information [R, t] of each pupil fixating on any point is then calculated in real time.
1.坐标系的转换:1. Conversion of the coordinate system:
参考点X1、X2所在坐标系记为平面坐标系O,眼球所在坐标系记为眼三维坐标系E,摄像机所在的坐标系记为S,摄像机拍摄眼球运动的二维图像所在的坐标系记为B,根据虚拟现实眼动追踪***中摄像机、参考点以及眼球所在坐标系的关系,可得如图5所示的坐标转换关系图。The coordinate system of the reference points X 1 and X 2 is recorded as the plane coordinate system O, the coordinate system of the eyeball is recorded as the three-dimensional coordinate system E of the eye, the coordinate system of the camera is recorded as S, and the coordinate system of the two-dimensional image of the eye movement of the camera is located. Recorded as B, according to the relationship between the camera, the reference point and the coordinate system of the eyeball in the virtual reality eye tracking system, the coordinate conversion relationship diagram shown in Fig. 5 can be obtained.
In the equation T_O←E = T_O←S · T_S←B · T_B←E, T_O←E denotes the transformation from the eye coordinate system E to the reference-point coordinate system O and is obtained through reference-point calibration. The remaining factors, T_O←S (the camera coordinate system S relative to the reference-point coordinate system O) and T_S←B (the coordinate system B of the two-dimensional images captured by the camera relative to the camera coordinate system S), can both be obtained through calibration.
T_B←E: the two unknown parameters (x, y) in T_B←E, i.e. the transformation between the current eye coordinate system E and the coordinate system B of the two-dimensional images, are computed from the reference points. The eyeball has two unknown quantities relative to the eye socket; constrained by the shapes of the eye socket and the eyeball, the eyeball can move only in the X and Y axes, so the two unknowns in T_B←E can be solved through reference-point calibration, giving the T_B←E transformation.
Through reference-point calibration, combined with the coordinate transformation relationships, the unknown parameters of the coordinate systems can be computed.
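As an illustrative sketch (Python with NumPy; the function names are ours, not the application's), the chain T_O←E = T_O←S · T_S←B · T_B←E can be realized as a product of 4×4 homogeneous transformation matrices, each assembled from a rotation R and a translation t:

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def eye_to_reference(T_O_S: np.ndarray, T_S_B: np.ndarray,
                     T_B_E: np.ndarray) -> np.ndarray:
    """Chain the calibrated transforms: T_O<-E = T_O<-S . T_S<-B . T_B<-E."""
    return T_O_S @ T_S_B @ T_B_E

def to_reference_frame(T_O_E: np.ndarray, p_eye: np.ndarray) -> np.ndarray:
    """Map a point expressed in the eye system E into the reference system O."""
    p_h = np.append(p_eye, 1.0)   # homogeneous coordinates
    return (T_O_E @ p_h)[:3]
```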
2. Mapping relationship based on a three-dimensional matrix:
First, the mapping relationship between a point M = [X Y Z]^T in three-dimensional space and the image coordinates m = [x y]^T of that point in two-dimensional space is determined as follows:
s · [x y 1]^T = C · [R | t] · [X Y Z 1]^T, where s is a projective scale factor.
Here R is a 3×3 rotation matrix, t is a 3×1 vector, and C is the internal matrix. The four external parameters of the pupil determine the position and orientation of the pupil relative to the scene: two rotation angles, which uniquely determine R, and two further parameters, which constitute t. C contains four internal parameters: the starting point (x0, y0), the pixel coordinates at the intersection of the optical axis and the reference point, and fx and fy, the focal lengths in the horizontal and vertical directions respectively.
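A minimal sketch of this mapping (Python with NumPy; the function name and argument layout are ours) builds C from the four internal parameters described above and projects a three-dimensional point into pixel coordinates:

```python
import numpy as np

def project_point(M: np.ndarray, R: np.ndarray, t: np.ndarray,
                  fx: float, fy: float, x0: float, y0: float) -> np.ndarray:
    """Map a 3-D point M = [X, Y, Z] to image coordinates m = [x, y] via
    s * [x, y, 1]^T = C [R | t] [X, Y, Z, 1]^T."""
    C = np.array([[fx, 0.0, x0],
                  [0.0, fy, y0],
                  [0.0, 0.0, 1.0]])          # internal matrix C
    p = C @ (R @ np.asarray(M) + np.asarray(t).ravel())
    return p[:2] / p[2]                       # divide out the scale factor s
```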
3. Using the above method, the two-dimensional eyeball images captured by the cameras are converted into the optical-axis vector coordinates of the eyes' gaze directions; the intersection of the optical-axis vectors obtained for the two eyes is the target gaze region. Three main cases arise:
First case: the optical axes intersect. The optical-axis vectors obtained for the two eyes intersect directly, yielding the target fixation point.
Second case: the light columns intersect. Based on each user's eyeball characteristics, a light column of radius r (derived from the user's eye characteristics) is formed around the optical-axis vector Fo; the region where the left-eye and right-eye light columns intersect is the target gaze region.
Third case: the light cones intersect. The actual geometric extent of the line of sight is a cone whose apex is at the retina and whose central axis is the line of sight, opening at a certain angle; the field of view is thus a region on the focal plane being viewed. The intersection of the two regions is the focal region, and the geometric center of the focal region is the fixation point. For near-field light sources, the first two methods already provide sufficient approximation accuracy.
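As an illustration of the first two cases (Python with NumPy; the closest-point construction and all names are ours, since in practice two measured optical axes rarely intersect exactly), the fixation point can be estimated as the midpoint of the common perpendicular between the two optical-axis rays, with the light-column radius r used as an acceptance test:

```python
import numpy as np

def gaze_point(o_l, d_l, o_r, d_r, r=None):
    """Estimate the target fixation point from the left/right optical axes.

    o_l, o_r: 3-D origins of the left/right optical-axis rays (eye positions).
    d_l, d_r: direction vectors of the rays.
    r: optional light-column radius; if the axes pass farther apart than 2*r,
       no fixation is reported (case 2 of the text).
    """
    d_l = np.asarray(d_l) / np.linalg.norm(d_l)
    d_r = np.asarray(d_r) / np.linalg.norm(d_r)
    w = np.asarray(o_l) - np.asarray(o_r)
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b                     # ~0 when the axes are parallel
    if abs(denom) < 1e-9:
        return None
    s = (b * e - c * d) / denom               # parameter along the left ray
    u = (a * e - b * d) / denom               # parameter along the right ray
    p_l, p_r = o_l + s * d_l, o_r + u * d_r   # closest points on each axis
    if r is not None and np.linalg.norm(p_l - p_r) > 2 * r:
        return None                           # light columns do not intersect
    return (p_l + p_r) / 2                    # midpoint = estimated fixation point
```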
By mounting cameras and LED light sources in the virtual reality helmet and placing reference points in the virtual scene, eyeball image data are collected while the pupils focus on the different target points. From the spatial position relationships of the system, the transformations between the different coordinate systems, and the image data, the real-time position and focus orientation of the user's pupils are computed, from which the user's visual gaze point in the virtual space is calculated in real time.
The solution of the present invention mainly comprises the following: placement of cameras and LED light sources at the edge of the virtual reality helmet; placement of reference points in the virtual reality scene; photographing and recording pupil-movement images; segmenting the sclera and the pupil from the image information to obtain the positional relationship between the pupil and the eyeball; and computing the real-time position and focus orientation of the pupil from the acquired data.
Hardware: a miniature camera is mounted at the edge of each lens of the virtual reality helmet to capture changes in the user's eyeballs. An LED light source is additionally mounted on each miniature camera to emit light and assist the cameras in data acquisition; the positions of the miniature cameras are shown in Fig. 4.
Setting the reference points: before the user uses the virtual reality helmet, four target points are placed from near to far in the default virtual scene as reference points. The reference points serve to acquire data while the eyes focus on them: when the user's pupils focus on a reference point, the cameras capture the user's eyeball image information at that moment, and a set of data is obtained by parsing the image information; different reference points thus yield different data sets.
The cameras photograph and record eye-movement images: while the user's eyes fixate on each reference point, the LED lights are switched on and the cameras capture a set of images recording the pupil-movement information, producing the image data.
Parsing the image information to obtain the spatial relationship between pupil and eyeball: the different sets of images captured by the cameras are transmitted to the server, where the sclera and the pupil are segmented through image analysis.
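The application does not specify the segmentation algorithm. As one plausible sketch (Python with OpenCV; the threshold value and function name are illustrative assumptions), the dark pupil can be separated from the brighter sclera by intensity thresholding and contour extraction:

```python
import cv2
import numpy as np

def segment_pupil(eye_img_gray: np.ndarray):
    """Rough pupil segmentation: the pupil is the darkest blob, the sclera the
    brightest region. The threshold here is illustrative, not from the patent."""
    blur = cv2.GaussianBlur(eye_img_gray, (7, 7), 0)
    # Dark pupil: keep pixels well below typical sclera intensity.
    _, pupil_mask = cv2.threshold(blur, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), radius = cv2.minEnclosingCircle(largest)
    return (cx, cy), radius   # pupil centre and size in image coordinates
```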
Based on the spatial position relationships among the parts of the system and the relationships between the different coordinate systems, and with the aid of the reference-point setup, the one-to-one mapping between the eye-image reference frame and the reference frame of the reference points is fitted in different functional and matrix forms. This yields the pupil position and orientation information, from which the user's visual gaze point in the virtual space is computed in real time.
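The application leaves the functional form of this fit open. As one concrete, hedged instance (Python with NumPy; all names are ours), an affine least-squares mapping from pupil-center image coordinates to reference-plane coordinates can be estimated from the four reference points:

```python
import numpy as np

def fit_gaze_map(pupil_xy: np.ndarray, ref_xy: np.ndarray) -> np.ndarray:
    """Fit an affine map pupil (x, y) -> reference-plane (X, Y) by least squares.

    pupil_xy: N x 2 pupil centres measured while fixating the N reference points.
    ref_xy:   N x 2 known reference-point coordinates in the plane O.
    With the four reference points of this scheme (N = 4), the six affine
    coefficients are solved in the least-squares sense.
    """
    N = pupil_xy.shape[0]
    A = np.hstack([pupil_xy, np.ones((N, 1))])   # [x, y, 1] design matrix
    coeffs, *_ = np.linalg.lstsq(A, ref_xy, rcond=None)
    return coeffs                                # 3 x 2 affine coefficients

def apply_gaze_map(coeffs: np.ndarray, pupil_xy: np.ndarray) -> np.ndarray:
    """Map new pupil centres through the fitted affine coefficients."""
    N = pupil_xy.shape[0]
    return np.hstack([pupil_xy, np.ones((N, 1))]) @ coeffs
```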
The present invention further has at least the following beneficial effects or features:
The application environment of the invention is eye tracking inside an immersive virtual reality helmet, using a geometric near-field approximation of the eye's line of sight. The tracking environment contains nothing beyond the eye region, so the interaction is controllable, protects the user's personal information (the user's surroundings are not revealed), and is convenient to use. Because a geometric line-of-sight approximation model is adopted, no reconstruction of the visual optical path through the user's lens, pupil, cornea, vitreous body, and so on needs to be computed; the amount of data computation is small and the implementation is simple.
The embodiments of the invention described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention may be program code that executes the above method in a digital signal processor (DSP). The invention may also relate to a variety of functions performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). The above processors may be configured in accordance with the present invention to perform particular tasks by executing machine-readable software code or firmware code that defines the particular methods disclosed herein. The software code or firmware code may be developed in different programming languages and in different formats or forms, and may be compiled for different target platforms. However, different code styles, types, and languages of software code, as well as other types of configuration code for performing tasks in accordance with the present invention, do not depart from the spirit and scope of the present invention.
The foregoing is merely an exemplary specific embodiment of the present invention; any equivalent changes and modifications made by those skilled in the art without departing from the concept and principles of the present invention shall fall within the scope of the present invention.

Claims (10)

  1. A head-mounted visual device capable of human eye tracking, characterized in that the head-mounted visual device comprises:
    a virtual reality helmet (10) for housing the head-mounted visual device;
    a light source (20), disposed in the virtual reality helmet (10), for illuminating the eyeball of a human eye; and
    a miniature camera (30), disposed in the virtual reality helmet (10), for collecting eyeball image information of the human eye, so that a server determines orientation information of the pupil of the human eye according to the eyeball image information.
  2. The head-mounted visual device capable of human eye tracking according to claim 1, characterized in that the server calculates the orientation information of the pupil of the human eye according to the spatial position relationships among the miniature camera (30), reference points, and the eyeball of the human eye.
  3. The head-mounted visual device capable of human eye tracking according to claim 2, characterized in that the number of the reference points is four.
  4. The head-mounted visual device capable of human eye tracking according to claim 1, characterized in that the light source (20) specifically comprises:
    a first LED light source (201), disposed at the edge of the left lens of the virtual reality helmet (10), for illuminating the left eyeball; and
    a second LED light source (202), disposed at the edge of the right lens of the virtual reality helmet (10), for illuminating the right eyeball.
  5. The head-mounted visual device capable of human eye tracking according to claim 1, characterized in that the miniature camera (30) specifically comprises:
    a first miniature camera (301), disposed at the edge of the left lens of the virtual reality helmet (10), for capturing eyeball image information of the left eye; and
    a second miniature camera (302), disposed at the edge of the right lens of the virtual reality helmet (10), for capturing eyeball image information of the right eye.
  6. The head-mounted visual device capable of human eye tracking according to claim 5, characterized in that the server obtains a left-eye optical-axis vector of the left eye's gaze direction according to the eyeball image information of the left eye, obtains a right-eye optical-axis vector of the right eye's gaze direction according to the eyeball image information of the right eye, and then determines the orientation information of the pupils of the human eyes according to the intersection of the left-eye optical-axis vector and the right-eye optical-axis vector.
  7. The head-mounted visual device capable of human eye tracking according to claim 1, characterized in that the light source (20) is switched on and off momentarily while the miniature camera (30) collects the eyeball image information of the human eye.
  8. A human eye tracking method for a head-mounted visual device, characterized in that the method comprises:
    illuminating the eyeball of a human eye with an LED light source;
    collecting eyeball image information of the human eye with a miniature camera; and
    determining orientation information of the pupil of the human eye according to the eyeball image information by using a spatial mapping relationship.
  9. The human eye tracking method for a head-mounted visual device according to claim 8, characterized in that, before the step of illuminating the eyeball of the human eye with the LED light source, the method further comprises:
    constructing a spatial mapping relationship among the miniature camera, reference points, and the eyeball of the human eye by means of a three-dimensional matrix.
  10. The human eye tracking method for a head-mounted visual device according to claim 8, characterized in that the step of determining the orientation information of the pupil of the human eye according to the eyeball image information specifically comprises:
    collecting eyeball image information of the left eye and eyeball image information of the right eye;
    obtaining a left-eye optical-axis vector of the left eye's gaze direction according to the eyeball image information of the left eye, and obtaining a right-eye optical-axis vector of the right eye's gaze direction according to the eyeball image information of the right eye; and
    determining the orientation information of the pupils of the human eyes according to the left-eye optical-axis vector and the right-eye optical-axis vector.
PCT/CN2016/103375 2016-10-26 2016-10-26 Head-mounted display device that can perform eye tracking, and eye tracking method WO2018076202A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/103375 WO2018076202A1 (en) 2016-10-26 2016-10-26 Head-mounted display device that can perform eye tracking, and eye tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/103375 WO2018076202A1 (en) 2016-10-26 2016-10-26 Head-mounted display device that can perform eye tracking, and eye tracking method

Publications (1)

Publication Number Publication Date
WO2018076202A1 true WO2018076202A1 (en) 2018-05-03

Family

ID=62023004

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/103375 WO2018076202A1 (en) 2016-10-26 2016-10-26 Head-mounted display device that can perform eye tracking, and eye tracking method

Country Status (1)

Country Link
WO (1) WO2018076202A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040150728A1 (en) * 1997-12-03 2004-08-05 Shigeru Ogino Image pick-up apparatus for stereoscope
CN104603673A (en) * 2012-09-03 2015-05-06 Smi创新传感技术有限公司 Head mounted system and method to compute and render stream of digital images using head mounted system
CN104685541A (en) * 2012-09-17 2015-06-03 感官运动仪器创新传感器有限公司 Method and an apparatus for determining a gaze point on a three-dimensional object
CN105393160A (en) * 2013-06-28 2016-03-09 微软技术许可有限责任公司 Camera auto-focus based on eye gaze
CN103439794A (en) * 2013-09-11 2013-12-11 百度在线网络技术(北京)有限公司 Calibration method for head-mounted device and head-mounted device
US20150160725A1 (en) * 2013-12-10 2015-06-11 Electronics And Telecommunications Research Institute Method of acquiring gaze information irrespective of whether user wears vision aid and moves

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111240464A (en) * 2018-11-28 2020-06-05 简韶逸 Eyeball tracking correction method and device
CN111665932A (en) * 2019-03-05 2020-09-15 宏达国际电子股份有限公司 Head-mounted display device and eyeball tracking device thereof
CN111665932B (en) * 2019-03-05 2023-03-24 宏达国际电子股份有限公司 Head-mounted display device and eyeball tracking device thereof
CN110308794A (en) * 2019-07-04 2019-10-08 郑州大学 There are two types of the virtual implementing helmet of display pattern and the control methods of display pattern for tool
CN110347260A (en) * 2019-07-11 2019-10-18 歌尔科技有限公司 A kind of augmented reality device and its control method, computer readable storage medium
CN112540084A (en) * 2019-09-20 2021-03-23 联策科技股份有限公司 Appearance inspection system and inspection method
CN110633014A (en) * 2019-10-23 2019-12-31 哈尔滨理工大学 Head-mounted eye movement tracking device
CN110633014B (en) * 2019-10-23 2024-04-05 常州工学院 Head-wearing eye movement tracking device
CN113362676A (en) * 2020-03-04 2021-09-07 上海承尊器进多媒体科技有限公司 Virtual reality driving system and method based on virtual reality
CN111524175A (en) * 2020-04-16 2020-08-11 东莞市东全智能科技有限公司 Depth reconstruction and eye movement tracking method and system for asymmetric multiple cameras
CN112633128A (en) * 2020-12-18 2021-04-09 上海影创信息科技有限公司 Method and system for pushing information of interested object in afterglow area
CN112926521A (en) * 2021-03-30 2021-06-08 青岛小鸟看看科技有限公司 Eyeball tracking method and system based on light source on-off
CN112926521B (en) * 2021-03-30 2023-01-24 青岛小鸟看看科技有限公司 Eyeball tracking method and system based on light source on-off
US11863875B2 (en) 2021-03-30 2024-01-02 Qingdao Pico Technology Co., Ltd Eyeball tracking method and system based on on-off of light sources
CN113138664A (en) * 2021-03-30 2021-07-20 青岛小鸟看看科技有限公司 Eyeball tracking system and method based on light field perception
CN113242384A (en) * 2021-05-08 2021-08-10 聚好看科技股份有限公司 Panoramic video display method and display equipment
CN114209990A (en) * 2021-12-24 2022-03-22 艾视雅健康科技(苏州)有限公司 Method and device for analyzing effective work of medical device entering eye in real time
CN114296233A (en) * 2022-01-05 2022-04-08 京东方科技集团股份有限公司 Display module, manufacturing method thereof and head-mounted display device

Similar Documents

Publication Publication Date Title
WO2018076202A1 (en) Head-mounted display device that can perform eye tracking, and eye tracking method
US11290706B2 (en) Display systems and methods for determining registration between a display and a user's eyes
US10917634B2 (en) Display systems and methods for determining registration between a display and a user's eyes
US9728010B2 (en) Virtual representations of real-world objects
CN107991775B (en) Head-mounted visual equipment capable of tracking human eyes and human eye tracking method
US9779512B2 (en) Automatic generation of virtual materials from real-world materials
US9727132B2 (en) Multi-visor: managing applications in augmented reality environments
CA2820950C (en) Optimized focal area for augmented reality displays
JP5908491B2 (en) Improved autofocus for augmented reality display
US20160131902A1 (en) System for automatic eye tracking calibration of head mounted display device
CN112805659A (en) Selecting depth planes for a multi-depth plane display system by user classification
US20140152558A1 (en) Direct hologram manipulation using imu
CN108139806A (en) Relative to the eyes of wearable device tracking wearer
JP2016507805A (en) Direct interaction system for mixed reality environments
US11422620B2 (en) Display systems and methods for determining vertical alignment between left and right displays and a user's eyes
CN112753037A (en) Sensor fusion eye tracking
CN114581514A (en) Method for determining fixation point of eyes and electronic equipment
WO2023195995A1 (en) Systems and methods for performing a motor skills neurological test using augmented or virtual reality
JP2015013011A (en) Visual field restriction image data creation program and visual field restriction apparatus using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16919734

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16919734

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 02.07.2019)
