WO2020042542A1 - Method and device for acquiring eye movement control calibration data (眼动控制校准数据获取方法和装置) - Google Patents

Method and device for acquiring eye movement control calibration data (眼动控制校准数据获取方法和装置)

Info

Publication number
WO2020042542A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
eyeball
calibration data
data
Prior art date
Application number
PCT/CN2019/073766
Other languages
English (en)
French (fr)
Inventor
蒋壮
Original Assignee
深圳市沃特沃德股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市沃特沃德股份有限公司 filed Critical 深圳市沃特沃德股份有限公司
Publication of WO2020042542A1 publication Critical patent/WO2020042542A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification

Definitions

  • the present application relates to the field of human-computer interaction technology, and in particular, to a method and device for acquiring calibration data for eye movement control.
  • the eye movement control method is a non-contact human-computer interaction method.
  • the position of the eye's fixation point is calculated by tracking the position of the eyeball.
  • Eye movement control is a great help to users who cannot use both hands.
  • With the development of smart terminals, gaming computers with eye-tracking capabilities allow players to become more immersed in the game scene.
  • Eye-tracking technology requires special equipment, such as an eye tracker, and while using such equipment the user must control the device with the eye movements defined in its manual.
  • The trend in human-computer interaction is toward human-centered, friendlier, and more convenient methods, so eye tracking is also moving toward controlling the device according to the user's own eye movement habits.
  • Each user can first calibrate the device according to their specific eye movement habits, so that subsequent eye movement control can be operated according to the user's eye movement habits.
  • In the prior-art calibration step, an image of the user staring at a preset positioning point is usually processed, and the pupil center position corresponding to that positioning point is calculated to collect calibration data.
  • However, with calibration data obtained in this way, the accuracy of gaze judgment in subsequent eye-tracking operations is low, and the user experience is poor.
  • the purpose of this application is to provide a method and device for acquiring eye movement control calibration data, which aims to solve the problem that in the prior art, accurate eye movement control calibration data cannot be obtained according to a user's eye movement habits.
  • This application proposes a method for obtaining calibration data for eye movement control, including:
  • the present application also proposes an eye movement control calibration data acquisition device, including:
  • An image acquisition module configured to sequentially obtain a user image where a human eye fixes on a plurality of positioning points, wherein a plurality of the positioning points are preset in a designated viewing area;
  • An image analysis module configured to sequentially search a human eye image and an eyeball image from the user image, and obtain human eye position data and eyeball position data;
  • a data calculation module is configured to calculate calibration data according to the position data of the human eye and the position data of the eyeball, and sequentially record the calibration data and position information of a plurality of corresponding anchor points.
  • the present application also proposes a computer device including a processor, a memory, and a computer program stored on the memory and executable on the processor.
  • when the computer program is executed, the processor implements the foregoing method for acquiring eye movement control calibration data.
  • At least one positioning point is preset in a designated viewing area, and when a human eye looks at one positioning point, an image is acquired through a common camera, and a human eye image and an eyeball image are searched from the image.
  • according to the human eye position data and the eyeball position data, the calibration data is calculated, and the calibration data and the position information of that positioning point are stored in the memory, until data has been collected for all the positioning points.
  • the calibration data can be used in subsequent eye tracking control to determine whether the distance between the user and the specified viewing area is within a preset range, and track the position of the user's line of sight to improve the accuracy of the line of sight judgment.
  • the method and device for acquiring eye movement control calibration data of the present application do not need to use special equipment, and can collect data according to the eye movement habits of the user, and the user experience is good.
  • FIG. 1 is a schematic flowchart of an eye movement control calibration data acquisition method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of the anchor points in a designated viewing area according to an embodiment of the present application (wherein FIG. 2a is a schematic diagram of each anchor point, FIG. 2b is a schematic diagram of the division of the left region and the right region, and FIG. 2c is a schematic diagram of the division of the upper region and the lower region);
  • FIG. 3 is a schematic block diagram of a structure of an eye movement control calibration data acquisition device according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of a structure of an image analysis module in FIG. 3;
  • FIG. 5 is a schematic block diagram of a structure of a data calculation module in FIG. 3;
  • FIG. 6 is a schematic block diagram of a structure of a first data acquisition unit in FIG. 5;
  • FIG. 7 is a schematic block diagram of a structure of a second data obtaining unit in FIG. 5;
  • FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • an embodiment of the present application provides a method for acquiring eye movement control calibration data, including:
  • the designated viewing area in step S1 includes a terminal device interface for human-computer interaction with the user, such as a smartphone display, a tablet display, a smart TV display, a personal computer display, or a laptop display, which is not limited in this application.
  • the user image can be obtained through a camera.
  • the camera may be a front camera built into the terminal device or an external camera, such as the front camera of a mobile phone, which is not limited in this application.
  • Referring to FIG. 2a, a schematic diagram of the anchor points of the designated viewing area, there are 9 anchor points: upper-left, upper-middle, upper-right, middle-left, middle-middle, middle-right, lower-left, lower-middle, and lower-right.
  • Referring to FIG. 2b, the part of the designated viewing area enclosed by the upper-left, middle-left, lower-left, lower-middle, middle-middle, and upper-middle points is the left region,
  • and the part enclosed by the upper-right, middle-right, lower-right, lower-middle, middle-middle, and upper-middle points is the right region.
  • Referring to FIG. 2c, the part enclosed by the upper-left, middle-left, middle-middle, middle-right, upper-right, and upper-middle points is the upper region,
  • and the part enclosed by the lower-left, middle-left, middle-middle, middle-right, lower-right, and lower-middle points is the lower region.
  • Taking eye movement control of a mobile phone display as an example, the user, at a distance from the display chosen according to his or her own habits, gazes at one anchor point of the display, and an image of the human eye gazing at that anchor point is captured by the phone's front camera.
  • For example, a fixation duration may be set in advance, and reminder information may be sent for each anchor point to remind the user to keep gazing at that anchor point; it is then judged whether the time elapsed since the reminder information was sent exceeds the preset fixation duration, and if so, an instruction to capture a user image is generated and the camera captures the image.
  • Alternatively, after the information reminding the user to keep gazing at each anchor point has been sent, the camera may continuously capture images in real time, and a pre-trained classifier distinguishes the state of the human eye; if the human eye is judged to be in the gaze state, any frame captured in the gaze state is taken as the user image.
  • The human eye image and the eyeball image are then searched for in the acquired user image to obtain human eye position data and eyeball position data; a series of calibration data is calculated from these data, and the correspondence between the calibration data and the anchor points is recorded in turn.
  • the calibration data can be used in subsequent eye tracking control to determine whether the distance between the user and the specified viewing area is within a preset range, and track the position of the user's line of sight to improve the accuracy of the line of sight judgment.
  • Specifically, the user in this embodiment first looks at the upper-left positioning point, and the camera collects an image of the human eye gazing at the upper-left positioning point; the human eye image and the eyeball image are found in that image, human eye position data and eyeball position data are obtained, the calibration data is calculated, and the correspondence between the calibration data and the upper-left anchor point is recorded. The user then looks at the upper-middle anchor point, and the remaining steps are the same as for the upper-left anchor point, until the calibration data and anchor-point correspondences for all nine positioning points (upper-left, upper-middle, upper-right, middle-left, middle-middle, middle-right, lower-left, lower-middle, and lower-right) have been collected.
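  • To make this flow concrete, the following is a minimal sketch of the nine-point collection loop. The three callables passed in (capture_image, find_boxes, compute_calibration) are hypothetical stand-ins for the camera capture, the search step S2, and the calculation step S3; this is an illustration under those assumptions, not the claimed implementation.

    from typing import Callable, Dict, Optional, Tuple

    # The nine preset anchor points of the designated viewing area (FIG. 2a).
    ANCHOR_POINTS = [
        "upper-left", "upper-middle", "upper-right",
        "middle-left", "middle-middle", "middle-right",
        "lower-left", "lower-middle", "lower-right",
    ]

    def collect_calibration_data(
        capture_image: Callable[[], object],                 # one camera frame while the user gazes
        find_boxes: Callable[[object], Optional[object]],    # step S2: face -> eyes -> eyeballs, None on failure
        compute_calibration: Callable[[object], Tuple[float, float, float]],  # step S3: (d, m, n)
    ) -> Dict[str, Tuple[float, float, float]]:
        calibration = {}
        for point in ANCHOR_POINTS:
            print(f"Please keep looking at the {point} anchor point...")
            boxes = None
            while boxes is None:                 # recapture until face, eye and eyeball searches all succeed
                boxes = find_boxes(capture_image())
            calibration[point] = compute_calibration(boxes)  # record the data with its anchor point
        return calibration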
  • In this embodiment, step S2 of sequentially searching the user image for a human eye image and an eyeball image and obtaining human eye position data and eyeball position data includes:
  • In step S21, a face image is first searched for in the image. If no face image is found, the process returns to step S1 and the relative position of the user and the designated viewing area is adjusted, until a face image can be found in the image acquired by the camera.
  • There are many ways to search for a face image, for example: performing face detection on the input image using face rules (such as the distribution of the eyes, nose, and mouth); performing face detection by looking for features that are invariant across faces (such as skin color, edges, and texture); or describing the facial features with a standard face template.
  • When detecting a face with a template, the correlation value between the input image and the standard face template is first calculated, and this correlation value is then compared with a preset threshold to determine whether a face exists in the input image. Alternatively, the face region can be treated as a class of patterns: a large amount of face data is used as training samples to learn the underlying rules and construct a classifier, which detects faces by discriminating the pattern attributes of all candidate regions in the image.
  • the found face image is marked with a rectangular frame.
  • Step S22 searches for the human eye image within the rectangular frame of the face image, which narrows the search range and improves the efficiency and accuracy of the eye search. If no human eye image is found, the process returns to step S1 to reacquire the image, until a human eye image can be found in step S22.
  • Human eye search methods include template-based methods, statistics-based methods, and knowledge-based methods. Among them, the method based on template matching includes a gray projection template and a geometric feature template.
  • The gray projection method projects the gray-level face image in the horizontal and vertical directions, computes the gray-level sums and/or gray-level function values in the two directions, finds specific change points, and then combines the positions of the change points in the different directions according to prior knowledge to obtain the position of the human eye; the geometric feature template uses the individual features and distribution features of the eyes as the basis for eye detection.
  • Statistics-based methods generally train and learn a large number of target samples and non-target samples to obtain a set of model parameters, and then build a classifier or filter to detect the target based on the model.
  • The knowledge-based method determines the application environment of the image, collects the knowledge that can be used for eye detection under the specific conditions (such as contour information, color information, and position information), and distills it into rules that guide eye detection.
  • This embodiment uses a rectangular frame to frame the left-eye image and the right-eye image, respectively, to obtain the following human eye position data, including:
  • r1: the distance from the top-left vertex of the rectangular frame of the left-eye image to the leftmost edge of the face image;
  • t1: the distance from the top-left vertex of the rectangular frame of the left-eye image to the topmost edge of the face image;
  • w1: the width of the rectangular frame of the left-eye image;
  • h1: the height of the rectangular frame of the left-eye image;
  • r2: the distance from the top-left vertex of the rectangular frame of the right-eye image to the leftmost edge of the face image;
  • t2: the distance from the top-left vertex of the rectangular frame of the right-eye image to the topmost edge of the face image;
  • w2: the width of the rectangular frame of the right-eye image;
  • h2: the height of the rectangular frame of the right-eye image.
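  • As one concrete illustration of a classifier-based search, the sketch below uses OpenCV's bundled Haar cascades to locate the face rectangle and the two eye rectangles and to read off the position data (r1, t1, w1, h1) and (r2, t2, w2, h2) relative to the face rectangle. Haar cascades are only one of the search methods listed above and are assumed here purely for illustration.

    import cv2

    # Stock Haar cascades shipped with opencv-python (a statistics-based classifier approach).
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    def find_eye_boxes(image_bgr):
        """Return the face box and the left/right eye boxes (r, t, w, h), or None if the search fails."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            return None                                  # no face found: go back to step S1 and recapture
        fx, fy, fw, fh = faces[0]                        # rectangular frame of the face image
        face_roi = gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(face_roi, scaleFactor=1.1, minNeighbors=5)
        if len(eyes) < 2:
            return None                                  # eyes not found: recapture
        # Keep the two largest detections and order them left-to-right in the face image.
        eyes = sorted(sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2], key=lambda e: e[0])
        (r1, t1, w1, h1), (r2, t2, w2, h2) = eyes        # offsets are relative to the face rectangle
        return (fx, fy, fw, fh), (r1, t1, w1, h1), (r2, t2, w2, h2)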
  • Step S23 finds the left eyeball image from the left eye image and the right eyeball image from the right eye image. If no eyeball image is found, the process returns to step S1 to acquire the image again until the eyeball image can be found in step S23.
  • Eyeball search methods include neural network method, extreme point position discrimination method of edge point integral projection curve, template matching method, multi-resolution mosaic map method, geometric and symmetry detection method, and Hough transform-based method. This embodiment uses a rectangular frame to frame the left eyeball image and the right eyeball image, respectively, to obtain the following eyeball position data, including:
  • r3: the distance from the top-left vertex of the rectangular frame of the left eyeball image to the leftmost edge of the face image;
  • t3: the distance from the top-left vertex of the rectangular frame of the left eyeball image to the topmost edge of the face image;
  • w3: the width of the rectangular frame of the left eyeball image;
  • h3: the height of the rectangular frame of the left eyeball image;
  • r4: the distance from the top-left vertex of the rectangular frame of the right eyeball image to the leftmost edge of the face image;
  • t4: the distance from the top-left vertex of the rectangular frame of the right eyeball image to the topmost edge of the face image;
  • w4: the width of the rectangular frame of the right eyeball image;
  • h4: the height of the rectangular frame of the right eyeball image.
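  • Any of the eyeball search methods listed above could be used here. Purely as an assumed illustration (not the method claimed), the sketch below thresholds each eye region and takes the largest dark blob as the eyeball, returning its bounding box in face-image coordinates, i.e. (r3, t3, w3, h3) for the left eye and (r4, t4, w4, h4) for the right eye.

    import cv2

    def find_eyeball_box(face_gray, eye_box):
        """Return (r, t, w, h) of the eyeball inside one eye box, in face-image coordinates, or None."""
        r, t, w, h = eye_box
        eye_roi = face_gray[t:t + h, r:r + w]
        # The eyeball (pupil/iris) is assumed to be the darkest region of the eye image.
        _, mask = cv2.threshold(eye_roi, 60, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None                                  # eyeball not found: recapture the image
        largest = max(contours, key=cv2.contourArea)
        ex, ey, ew, eh = cv2.boundingRect(largest)       # box relative to the eye rectangle
        return (r + ex, t + ey, ew, eh)                  # convert to face-image coordinates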
  • This embodiment gives the specific parameters for obtaining the eyeball position data from the face image. Based on the inventive concept of this application, the eyeball position data could also be obtained from the human eye image; this application does not elaborate on that case.
  • In this embodiment, the calibration data includes distance calibration data, horizontal calibration data, and vertical calibration data.
  • Step S3, calculating the calibration data according to the human eye position data and the eyeball position data and sequentially recording the calibration data and the corresponding position information of the plurality of positioning points, includes:
  • Steps S31 to S32 are used to calculate the calibration data when the human eye looks at an anchor point, and the calibration data and the corresponding anchor point information are stored in the memory.
  • calculation and data storage are performed on the nine positioning points of upper left, upper middle, upper right, middle left, middle middle, right middle, lower left, middle lower, and lower right.
  • the distance calibration data is used to locate the distance of the human eye from the specified viewing area, and the horizontal calibration data and vertical calibration data are used to indicate the position of the eyeball when the human eye looks at the specified positioning point.
  • the step of calculating distance calibration data when a human eye fixes on one of the positioning points according to the human eye position data includes:
  • In step S321, the coordinates (x1, y1) of the left eye center can be calculated by formula (1): Pot(x1, y1) = Pot(r1 + w1/2, t1 + h1/2), and the coordinates (x2, y2) of the right eye center by formula (2): Pot(x2, y2) = Pot(r2 + w2/2, t2 + h2/2).
  • In step S322, the distance d between the left eye center and the right eye center can be calculated by formula (3): d = √((x1 − x2)² + (y1 − y2)²), where d is the distance calibration data.
  • the value of d can be used to locate the distance of the human eye from the specified viewing area.
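  • Formulas (1) to (3) translate directly into code. The short sketch below computes the two eye centers from the eye rectangles found above and returns the distance calibration datum d; the sample rectangle values are made up purely for illustration.

    import math

    def distance_calibration(left_eye, right_eye):
        """Compute d from the left/right eye rectangles (r, t, w, h), per formulas (1)-(3)."""
        r1, t1, w1, h1 = left_eye
        r2, t2, w2, h2 = right_eye
        x1, y1 = r1 + w1 / 2, t1 + h1 / 2          # formula (1): left eye center
        x2, y2 = r2 + w2 / 2, t2 + h2 / 2          # formula (2): right eye center
        return math.hypot(x1 - x2, y1 - y2)        # formula (3): distance calibration data d

    # Example with made-up rectangles (pixel coordinates in the face image):
    d = distance_calibration((40, 60, 50, 30), (130, 62, 52, 30))
    print(round(d, 1))                             # prints 91.0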
  • the step of calculating, according to the human eye position data and the eyeball position data, the eyeball position horizontal calibration data and the eyeball position vertical calibration data when a human eye fixates on one of the positioning points includes:
  • nine positioning points are set in a specified viewing area, and the human eye sequentially looks at the nine positioning points, and sequentially records the correspondence between the calibration data and the positioning points when the human eye looks at one positioning point.
  • the camera acquires an image, looks for a face image from the image, then looks for a human eye image, and finally looks for an eyeball image from the human eye image.
  • This search approach is fast and highly accurate; the distance calibration data d, the horizontal calibration data m, and the vertical calibration data n are calculated from the human eye position data and the eyeball position data, and d, m, n, and the position information of the positioning point are stored in the memory.
  • the distance calibration data of 9 positioning points can be used to calibrate the distance between the human eye and the specified viewing area, thereby limiting the distance between the user and the specified viewing area within the specified range.
  • the horizontal calibration data and vertical calibration data can estimate the position of the specified viewing area to which the user's line of sight is looking, and the accuracy of the line of sight tracking is high.
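  • Per formulas (4) to (11) in the description, the eyeball centers are computed in the same way as the eye centers; d1 and d3 measure the left eyeball center from the left and top edges of the left-eye box, d2 and d4 measure the right eyeball center from the right and bottom edges of the right-eye box, and m = d1/d2, n = d3/d4. A minimal sketch, with made-up rectangle values, is shown below.

    def gaze_calibration(left_eye, right_eye, left_eyeball, right_eyeball):
        """Compute (m, n) from eye and eyeball rectangles (r, t, w, h), per formulas (4)-(11)."""
        r1, t1, w1, h1 = left_eye
        r2, t2, w2, h2 = right_eye
        r3, t3, w3, h3 = left_eyeball
        r4, t4, w4, h4 = right_eyeball
        x3, y3 = r3 + w3 / 2, t3 + h3 / 2      # formula (4): left eyeball center
        x4, y4 = r4 + w4 / 2, t4 + h4 / 2      # formula (5): right eyeball center
        d1 = x3 - r1                           # (6): left eyeball center to left edge of left-eye box
        d3 = y3 - t1                           # (7): left eyeball center to top edge of left-eye box
        d2 = r2 + w2 - x4                      # (8): right eyeball center to right edge of right-eye box
        d4 = t2 + h2 - y4                      # (9): right eyeball center to bottom edge of right-eye box
        return d1 / d2, d3 / d4                # (10)-(11): horizontal m and vertical n

    # Made-up example: eyeballs roughly centered in their eye boxes give m and n near 1.
    m, n = gaze_calibration((40, 60, 50, 30), (130, 62, 52, 30),
                            (58, 70, 14, 12), (148, 72, 14, 12))
    print(round(m, 2), round(n, 2))            # prints 0.93 1.14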
  • the method for acquiring the eye movement control calibration data in this embodiment does not require special equipment, and can collect data according to the eye movement habits of the user, and the user experience is good.
  • Referring to FIG. 3, an embodiment of the present application further provides a device for acquiring eye movement control calibration data, including: an image acquisition module 10 configured to sequentially acquire user images in which the human eye gazes at a plurality of positioning points, wherein the plurality of positioning points are preset in a designated viewing area;
  • An image analysis module 20 configured to sequentially search a human eye image and an eyeball image from the user image, and obtain human eye position data and eyeball position data;
  • a data calculation module 30 is configured to calculate calibration data according to the position data of the human eye and the position data of the eyeball, and sequentially record the calibration data and position information of a plurality of corresponding anchor points.
  • the designated viewing area in the image acquisition module 10 includes a terminal device interface for human-computer interaction with the user, such as a smartphone display, a tablet display, a smart TV display, a personal computer display, or a laptop display.
  • User images can be obtained through cameras.
  • the camera may be a front camera built into the terminal device or an external camera, such as the front camera of a mobile phone.
  • Referring to FIG. 2, a schematic diagram of the anchor points of the designated viewing area, there are 9 anchor points: upper-left, upper-middle, upper-right, middle-left, middle-middle, middle-right, lower-left, lower-middle, and lower-right.
  • The part of the designated viewing area enclosed by the upper-left, middle-left, lower-left, lower-middle, middle-middle, and upper-middle points is the left region,
  • the part enclosed by the upper-right, middle-right, lower-right, lower-middle, middle-middle, and upper-middle points is the right region,
  • the part enclosed by the upper-left, middle-left, middle-middle, middle-right, upper-right, and upper-middle points is the upper region,
  • and the part enclosed by the lower-left, middle-left, middle-middle, middle-right, lower-right, and lower-middle points is the lower region.
  • Taking eye movement control of a mobile phone display as an example, the user, at a distance from the display chosen according to his or her own habits, gazes at one anchor point of the display, and an image of the human eye gazing at that anchor point is captured by the phone's front camera.
  • For example, the fixation duration may be set in advance; the first reminder unit sends reminder information for each anchor point to remind the user to keep gazing at that anchor point, and the first judgment unit judges whether the time elapsed since the reminder information was sent exceeds the preset fixation duration.
  • If it does, the first image acquisition unit generates an instruction to capture a user image, and the camera receives the capture instruction and captures the image.
  • Alternatively, the second reminder unit may send the information reminding the user to keep gazing at each anchor point, after which the real-time image acquisition unit continuously captures images with the camera in real time, and the second judgment unit uses a pre-trained classifier to distinguish the state of the human eye; if the human eye is judged to be in the gaze state, the second image acquisition unit takes any frame of the images captured in the gaze state as the user image.
  • the calibration data can be used in subsequent eye tracking control to determine whether the distance between the user and the specified viewing area is within a preset range, and track the position of the user's line of sight to improve the accuracy of the line of sight judgment.
  • Specifically, the user in this embodiment first looks at the upper-left positioning point, and the camera collects an image of the human eye gazing at the upper-left positioning point; the human eye image and the eyeball image are found in that image, human eye position data and eyeball position data are obtained, the calibration data is calculated, and the correspondence between the calibration data and the upper-left anchor point is recorded. The user then looks at the upper-middle anchor point, and the remaining steps are the same as for the upper-left anchor point, until the calibration data and anchor-point correspondences for all nine positioning points (upper-left, upper-middle, upper-right, middle-left, middle-middle, middle-right, lower-left, lower-middle, and lower-right) have been collected.
  • the image analysis module 20 includes:
  • a face finding unit 201 configured to find a face image from the user image
  • the human eye searching unit 202 is configured to search for a human eye image from the human face image and obtain human eye position data from the human face image, where the human eye image includes a left eye image and a right eye image;
  • the eyeball search unit 203 is configured to search an eyeball image from the human eye image, and obtain eyeball position data from the human face image.
  • The face search unit 201 first searches for a face image in the image. If no face image is found, the process returns to step S1 and the relative position of the user and the designated viewing area is adjusted, until a face image can be found in the image acquired by the camera.
  • There are many ways to search for a face image, for example: performing face detection on the input image using face rules (such as the distribution of the eyes, nose, and mouth); performing face detection by looking for features that are invariant across faces (such as skin color, edges, and texture); or describing the facial features with a standard face template.
  • When detecting a face with a template, the correlation value between the input image and the standard face template is first calculated, and this correlation value is then compared with a preset threshold to determine whether a face exists in the input image. Alternatively, the face region can be treated as a class of patterns: a large amount of face data is used as training samples to learn the underlying rules and construct a classifier, which detects faces by discriminating the pattern attributes of all candidate regions in the image.
  • the found face image is marked with a rectangular frame.
  • The human eye search unit 202 searches for a human eye image within the rectangular frame of the face image, which narrows the search range and improves the efficiency and accuracy of the eye search. If no human eye image is found, the process returns to step S1 to reacquire the image, until a human eye image can be found in step S22.
  • Human eye search methods include template-based methods, statistics-based methods, and knowledge-based methods. Among them, the method based on template matching includes a gray projection template and a geometric feature template.
  • The gray projection method projects the gray-level face image in the horizontal and vertical directions, computes the gray-level sums and/or gray-level function values in the two directions, finds specific change points, and then combines the positions of the change points in the different directions according to prior knowledge to obtain the position of the human eye; the geometric feature template uses the individual features and distribution features of the eyes as the basis for eye detection.
  • Statistics-based methods generally train and learn a large number of target samples and non-target samples to obtain a set of model parameters, and then build a classifier or filter to detect the target based on the model.
  • The knowledge-based method determines the application environment of the image, collects the knowledge that can be used for eye detection under the specific conditions (such as contour information, color information, and position information), and distills it into rules that guide eye detection.
  • This embodiment uses a rectangular frame to frame the left-eye image and the right-eye image, respectively, to obtain the following human eye position data, including:
  • r1: the distance from the top-left vertex of the rectangular frame of the left-eye image to the leftmost edge of the face image;
  • t1: the distance from the top-left vertex of the rectangular frame of the left-eye image to the topmost edge of the face image;
  • w1: the width of the rectangular frame of the left-eye image;
  • h1: the height of the rectangular frame of the left-eye image;
  • r2: the distance from the top-left vertex of the rectangular frame of the right-eye image to the leftmost edge of the face image;
  • t2: the distance from the top-left vertex of the rectangular frame of the right-eye image to the topmost edge of the face image;
  • w2: the width of the rectangular frame of the right-eye image;
  • h2: the height of the rectangular frame of the right-eye image.
  • the eyeball search unit 203 finds the left eyeball image from the left eye image, and the right eyeball image from the right eye image. If no eyeball image is found, the process returns to step S1 to reacquire the image until the eyeball image can be found in step S23.
  • Eyeball search methods include neural network method, extreme point position discrimination method of edge point integral projection curve, template matching method, multi-resolution mosaic map method, geometric and symmetry detection method, and Hough transform-based method. This embodiment uses a rectangular frame to frame the left eyeball image and the right eyeball image, respectively, to obtain the following eyeball position data, including:
  • r3: the distance from the top-left vertex of the rectangular frame of the left eyeball image to the leftmost edge of the face image;
  • t3: the distance from the top-left vertex of the rectangular frame of the left eyeball image to the topmost edge of the face image;
  • w3: the width of the rectangular frame of the left eyeball image;
  • h3: the height of the rectangular frame of the left eyeball image;
  • r4: the distance from the top-left vertex of the rectangular frame of the right eyeball image to the leftmost edge of the face image;
  • t4: the distance from the top-left vertex of the rectangular frame of the right eyeball image to the topmost edge of the face image;
  • w4: the width of the rectangular frame of the right eyeball image;
  • h4: the height of the rectangular frame of the right eyeball image.
  • This embodiment gives the specific parameters for obtaining the eyeball position data from the face image. Based on the inventive concept of this application, the eyeball position data could also be obtained from the human eye image; this application does not elaborate on that case.
  • the calibration data includes distance calibration data, horizontal calibration data, and vertical calibration data
  • the data calculation module 30 includes:
  • a first data obtaining unit 301 configured to calculate distance calibration data when a human eye looks at one of the positioning points according to the human eye position data
  • a second data obtaining unit 302 configured to calculate, according to the human eye position data and the eyeball position data, the eyeball position horizontal calibration data and the eyeball position vertical calibration data when a human eye fixates on one of the positioning points;
  • a data storage unit 303 is configured to store the distance calibration data, the horizontal calibration data, the vertical calibration data, and the corresponding position information of the anchor point in a memory.
  • the first data acquisition unit 301, the second data acquisition unit 302, and the data storage unit 303 are used to calculate the calibration data when a human eye gazes at an anchor point, and to store the calibration data and the corresponding anchor point information in the memory.
  • calculation and data storage are performed on the nine positioning points of upper left, upper middle, upper right, middle left, middle middle, right middle, lower left, middle lower, and lower right.
  • the distance calibration data is used to locate the distance of the human eye from the specified viewing area, and the horizontal calibration data and vertical calibration data are used to indicate the position of the eyeball when the human eye looks at the specified positioning point.
  • the first data obtaining unit 301 includes:
  • a first calculation subunit 3011 configured to calculate the coordinates of the left eye center position according to the left eye position data included in the human eye position data, and to calculate the coordinates of the right eye center position according to the right eye position data included in the human eye position data;
  • a second calculation subunit 3012 configured to calculate the distance between the center of the left eye and the center of the right eye according to the coordinates of the center position of the left eye and the coordinates of the center position of the right eye to obtain the distance calibration data;
  • the second calculation subunit 3012 can calculate the distance d between the left eye center and the right eye center by formula (14): d = √((x1 − x2)² + (y1 − y2)²), where d is the distance calibration data.
  • the value of d can be used to locate the distance of the human eye from the specified viewing area.
  • the second data obtaining unit 302 includes:
  • a third calculation subunit 3021 configured to calculate coordinates of the left eyeball center position according to the left eyeball position data included in the eyeball position data; and calculate the right eyeball center position coordinates according to the right eyeball position data included in the eyeball position data;
  • a fourth calculation subunit 3022 configured to calculate, according to the left eyeball center coordinates and the left eye position data, a first horizontal distance between the left eyeball center and the leftmost edge of the left-eye image and a first vertical distance between the left eyeball center and the topmost edge of the left-eye image; and to calculate, according to the right eyeball center coordinates and the right eye position data, a second horizontal distance between the right eyeball center and the rightmost edge of the right-eye image and a second vertical distance between the right eyeball center and the bottommost edge of the right-eye image;
  • a fifth calculation subunit 3023 configured to calculate the ratio of the first horizontal distance to the second horizontal distance to obtain the horizontal calibration data, and to calculate the ratio of the first vertical distance to the second vertical distance to obtain the vertical calibration data.
  • the fifth calculation subunit 3023 can calculate the horizontal calibration data m by formula (21): m = d1/d2, and the vertical calibration data n by formula (22): n = d3/d4.
  • nine positioning points are set in a designated viewing area, and the human eye sequentially looks at the nine positioning points, and sequentially records the correspondence between the calibration data and the positioning points when the human eye looks at one positioning point.
  • the camera acquires an image, looks for a face image from the image, then looks for a human eye image, and finally looks for an eyeball image from the human eye image.
  • This search approach is fast and highly accurate; the distance calibration data d, the horizontal calibration data m, and the vertical calibration data n are calculated from the human eye position data and the eyeball position data, and d, m, n, and the position information of the positioning point are stored in the memory.
  • the distance calibration data of 9 positioning points can be used to calibrate the distance between the human eye and the specified viewing area, thereby limiting the distance between the user and the specified viewing area within the specified range.
  • the horizontal calibration data and vertical calibration data can estimate the position of the specified viewing area to which the user's line of sight is looking, and the accuracy of the line of sight tracking is high.
  • the apparatus for acquiring eye movement control calibration data in this embodiment does not need to use special equipment, and can collect data according to a user's eye movement habits, and the user experience is good.
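  • How the stored calibration is consumed at run time is not detailed here; purely as a hypothetical illustration, a nearest-neighbor lookup over the nine recorded (m, n) pairs could map a newly measured eyeball position to the anchor point, and hence the screen region, being looked at, while d gates whether the user's distance from the viewing area is acceptable. The function below is an assumed sketch, not part of the disclosure.

    def estimate_gaze_point(calibration, d_now, m_now, n_now, d_tolerance=0.2):
        """calibration maps anchor-point name -> (d, m, n) recorded during calibration.

        Returns the name of the nearest anchor point, or None if the user's current
        eye distance deviates too far from the calibrated distance.
        """
        d_ref = sum(v[0] for v in calibration.values()) / len(calibration)
        if abs(d_now - d_ref) > d_tolerance * d_ref:
            return None                          # user is too near or too far from the viewing area
        # Nearest neighbour in (m, n) space picks the anchor point being looked at.
        return min(calibration, key=lambda p: (calibration[p][1] - m_now) ** 2
                                              + (calibration[p][2] - n_now) ** 2)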
  • This application also proposes a computer device 03, which includes a processor 04, a memory 01, and a computer program 02 stored on the memory 01 and executable on the processor 04; when the processor 04 executes the computer program 02, the above-mentioned method for acquiring eye movement control calibration data is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present application discloses a method and device for acquiring eye movement control calibration data, including: sequentially acquiring user images in which the human eye gazes at a plurality of positioning points; sequentially searching the user images for a human eye image and an eyeball image to obtain human eye position data and eyeball position data; and calculating calibration data and sequentially recording the calibration data and the position information of the corresponding plurality of positioning points. The present application does not require special equipment and can collect data according to the user's eye movement habits.

Description

眼动控制校准数据获取方法和装置 技术领域
本申请涉及人机交互技术领域,具体涉及一种眼动控制校准数据获取方法和装置。
背景技术
眼动控制方法是一种非接触的人机互动方式,通过追踪眼球位置来计算眼睛的注视点的位置。眼动控制对于无法双手操作的用户起到重大帮助。随着智能终端的发展,具有眼球追踪功能的游戏电脑使玩家在游戏场景中更为身临其境。眼球追踪技术需要用到专用设备,如眼动仪。在这些专用设备使用过程中,用户需要根据说明书限定的眼动方式才可控制设备。人机交互方式的趋势是以人为中心、更为友好和便捷,因此眼动追踪也朝着根据用户眼动习惯来控制设备的方向发展。每个用户可以根据自己特定的眼动习惯先对设备进行校准,使得后续的眼动控制可以根据用户的眼动习惯来操作。现有技术的校准步骤中,通常根据用户盯住预设定位点的图像来进行图像处理,计算预设定位点对应的瞳孔中心位置来收集校准数据。但是根据此种方法得到的校准数据,在后续的眼动追踪操作中,视线判断的准确度低,用户体验不高。
技术问题
本申请的目的在于提供一种眼动控制校准数据获取方法和装置,旨在解决现有技术中不能根据用户眼动习惯来获取准确的眼动控制校准数据的问题。
技术解决方案
本申请提出一种眼动控制校准数据获取方法,包括:
依次获取人眼注视多个定位点的用户图像;其中,多个所述定位点预设于指定观看区域内;
依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;
根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。
本申请还提出了一种眼动控制校准数据获取装置,包括:
图像获取模块,用于依次获取人眼注视多个定位点的用户图像;其中,多个所述定位点预设于指定观看区域内;
图像分析模块,用于依次从所述用户图像中查找人眼图像和眼球图像,获 取人眼位置数据和眼球位置数据;
数据计算模块,用于根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。
本申请还提出一种计算机设备,其包括处理器、存储器及存储于所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述的眼动控制校准数据获取方法。
有益效果
本申请的眼动控制校准数据获取方法和装置,在指定观看区域预设至少一个定位点,在人眼注视一个定位点时,通过普通摄像头获取图像,从图像中查找人眼图像和眼球图像,根据人眼位置数据和眼球位置数据,计算校准数据,将校准数据和该定位点的位置信息保存在存储器中,直至所有定位点均采集完数据。校准数据可用于后续眼动追踪控制中,判断用户与指定观看区域的距离是否在预设范围内,并进行用户视线位置追踪,提高视线判断的准确度。本申请的眼动控制校准数据获取方法和装置无需采用专用设备,且可以根据用户的眼动习惯来采集数据,用户体验好。
附图说明
图1是本申请一实施例的眼动控制校准数据获取方法的流程示意图;
图2是本申请一实施例的指定观看区域的定位点的示意图(其中,图2a为各定位点的示意图,图2b为左边区域和右边区域的划分示意图,图2c为上边区域和下边区域的划分示意图);
图3是本申请一实施例的眼动控制校准数据获取装置的结构示意框图;
图4是图3中图像分析模块的结构示意框图;
图5是图3中数据计算模块的结构示意框图;
图6是图5中第一数据获取单元的结构示意框图;
图7是图5中第二数据获取单元的结构示意框图;
图8是本申请一实施例的计算机设备的结构示意图。
本发明的最佳实施方式
参照图1,本申请实施例提供了一种眼动控制校准数据获取方法,包括:
S1、依次获取人眼注视多个定位点的用户图像;其中,多个所述定位点预设于指定观看区域内;
S2、依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;
S3、根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。
本实施例中,步骤S1中的指定观看区域包括与用户进行人机交互的终端设备界面,例如可以是智能手机显示屏、平板显示屏、智能电视显示屏、个人电脑显示屏、笔记本电脑显示屏等,本申请对此不作限定。用户图像可以通过摄像头获取,摄像头包括终端设备自带的前置摄像头、外接摄像头,如手机前置摄像头等,本申请对此不作限定。参照图2a,为指定观看区域的定位点的示意图,包括左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点,参照图2b,其中左上、左中、左下、中下、中中和中上包围的指定观看区域为左边区域,右上、右中、右下、中下、中中和中上包围的指定观看区域为右边区域,参照图2c,左上、左中、中中、右中、右上和中上包围的指定观看区域为上边区域,左下、左中、中中、右中、右下和中下包围的指定观看区域为下边区域。
以眼动控制手机显示屏为例,用户根据自己的习惯在距离手机显示屏合适的距离处,眼睛注视手机显示屏的一个定位点,通过手机前置摄像头采集人眼注视该定位点的图像。比如,可以预先设置注视时间,分别发送提醒用户持续注视每个定位点的提醒信息,以提醒用户持续注视该定位点;判断当前时刻距发送提醒信息的时刻之间的时长是否大于预设的注视时长,若当前时刻距发送所述提醒信息的时刻之间的时长大于预设的注视时长时,则生成拍摄用户图像的指令,摄像头获得拍摄用户图像的指令,采集图像;也可以在分别发送提醒用户持续注视每个定位点的信息后,用摄像头持续实时采集图像,通过预先训练好的分类器区分人眼的状态,如果判断人眼处于注视状态,则获取注视状态中上述图像的任一帧图像作为用户图像。进一步从获取的图像中查找人眼图像和眼球图像,获取到人眼位置数据和眼球位置数据;根据所述人眼位置数据和所述眼球位置数据计算一系列校准数据,依次记录所述校准数据与所述定位点的对应关系。校准数据可用于后续眼动追踪控制中,判断用户与指定观看区域的距离是否在预设范围内,并进行用户视线位置追踪,提高视线判断的准确度。
具体地,本实施例的用户首先看向左上定位点,摄像头采集人眼注视左 上定位点的图像,从该图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据,计算校准数据,记录该校准数据与左上定位点的对应关系。用户再开始看向中上定位点,其余步骤同左上定位点。直至左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点的校准数据和定位点的对应关系均采集完毕。
本实施例中,所述依次从所述用户图像中查找人眼图像和眼球图像,获取人眼图像位置数据和眼球图像位置数据的步骤S2,包括:
S21、从所述用户图像中查找人脸图像;
S22、从所述人脸图像中查找人眼图像,以及从所述人脸图像中获取人眼位置数据,所述人眼图像包括左眼图像和右眼图像;
S23、从所述人眼图像中查找眼球图像,以及从所述人脸图像中获取眼球位置数据。
本实施例中,步骤S21先从图像中查找人脸图像,如果在图像中没有查找到人脸图像,则返回步骤S1,调整用户和指定观看区域的相对位置,直至摄像头获取的图像中能查找到人脸图像。人脸图像的查找方法较多,比如:利用人脸规则(如眼睛、鼻子、嘴巴等的分布)对输入图像进行人脸检测;通过寻找人脸面部不变的特征(如肤色、边缘、纹理)来对输入图像进行人脸检测;将人脸的面部特征用一个标准的人脸模板来描述,进行人脸检测时,先计算输入图像与标准人脸模板之间的相关值,然后再将求得的相关值与事先设定的阂值进行比较,以判别输入图像中是否存在人脸;将人脸区域看作一类模式,使用大量的人脸数据作样本训练,来学习潜在的规则并构造分类器,通过判别图像中所有可能区域模式属性来实现人脸的检测。本实施例将查找到的人脸图像用矩形框标出。
步骤S22从人脸图像的矩形框中查找人眼图像,有利于缩小查找范围,提高人眼查找的查找效率和准确度,如果没有查找到人眼图像,则返回步骤S1,重新获取图像,直至步骤S22中能查找到人眼图像。人眼查找的方法包括基于模板匹配的方法、基于统计的方法和基于知识的方法。其中基于模板匹配的方 法包括灰度投影模板和几何特征模板:灰度投影法是指对人脸灰度图像进行水平和垂直方向的投影,分别统计出两个方向上的灰度值和/或灰度函数值,找出特定变化点,然后根据先验知识将不同方向上的变化点位置相结合,即得到人眼的位置;几何特征模板是利用眼睛的个体特征以及分布特征作为依据来实施人眼检测。基于统计的方法一般是通过对大量目标样本和非目标样本进行训练学习得到一组模型参数,然后基于模型构建分类器或者滤波器来检测目标。基于知识的方法是确定图像的应用环境,总结特定条件下可用于人眼检测的知识(如轮廓信息、色彩信息、位置信息)等,把它们归纳成指导人眼检测的规则。本实施例用矩形框分别框出左眼图像和右眼图像,获得下述人眼位置数据,包括:
r 1:左眼图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 1:左眼图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 1:左眼图像的矩形框的宽度;h 1:左眼图像的矩形框的高度;
r 2:右眼图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 2:右眼图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 2:右眼图像的矩形框的宽度;h 2:右眼图像的矩形框的高度。
步骤S23从左眼图像中查找到左眼球图像,从右眼图像中查找右眼球图像,如果没有查找到眼球图像,则返回步骤S1,重新获取图像,直至步骤S23中能查找到眼球图像。眼球查找的方法包括神经网络法、边缘点积分投影曲线的极值位置判别法、模板匹配法、多分辨率的马赛克图法、几何及对称性检测法、基于霍夫变换法等。本实施例用矩形框分别框出左眼球图像和右眼球图像,获得下述眼球位置数据,包括:
r 3:左眼球图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 3:左眼球图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 3:左眼球图像的矩形框的宽度;h 3:左眼球图像的矩形框的高度;
r 4:右眼球图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 4:右眼球图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 4:右眼球图像的矩形框的宽度;h 4:右眼球图像的矩形框的高度。
本实施例中给出了从人脸图像中获取眼球位置数据的具体参数。基于本申请的发明理念,也可以从人眼图像中获取眼球位置数据,本申请不对从人眼图像中获取眼球位置数据进行赘述。
本实施例中,校准数据包括距离校准数据、横向校准数据和纵向校准数据,所述根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息的步骤S3,包括:
S31、根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据;以及根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据;
S32、将所述距离校准数据、横向校准数据、纵向校准数据和对应的所述定位点位置信息保存在存储器中。
通过步骤S31~S32计算人眼注视一个定位点时的校准数据,并将校准数据和对应的定位点信息保存在存储器中。本实施例中对左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点进行一一计算和数据储存。距离校准数据用来定位人眼离指定观看区域的距离,横向校准数据和纵向校准数据用来指示人眼看向指定定位点时的眼球位置。
本实施例中,所述根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据的步骤,包括:
S321、根据所述人眼位置数据包括的左眼位置数据,计算左眼中心位置坐标;以及根据所述人眼位置数据包括的右眼位置数据,计算右眼中心位置坐标;
S322、根据所述左眼中心位置坐标以及所述右眼中心位置坐标,计算左眼中心与右眼中心的距离,获得所述距离校准数据。
本实施例中,步骤S321可以通过公式(1)计算左眼中心位置坐标(x 1,y 1),
Pot(x 1,y 1)=Pot(r 1+w 1/2,t 1+h 1/2)   (1)
通过公式(2)计算右眼中心位置坐标(x 2,y 2),
Pot(x 2,y 2)=Pot(r 2+w 2/2,t 2+h 2/2)    (2)
步骤S322可以通过公式(3)计算左眼中心与右眼中心的距离d,d即为距离校准数据。
d=√((x 1–x 2)²+(y 1–y 2)²)   (3)
通过d的值可以定位人眼距离指定观看区域的距离。
本实施例中,所述根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据的步骤,包括:
S331、根据所述眼球位置数据包括的左眼球位置数据,计算左眼球中心位置坐标;以及根据所述眼球位置数据包括的右眼球位置数据,计算右眼球中心位置坐标;
S332、根据所述左眼球中心位置坐标和所述左眼位置数据,计算左眼球中心与所述左眼图像的最左边之间的第一横向距离,和左眼球中心与所述左眼图像的最上边之间的第一纵向距离;以及根据所述右眼球中心位置坐标和所述右眼位置数据,计算右眼球中心与所述右眼图像的最右边之间的第二横向距离,和右眼球中心与右眼图像的最下边之间的第二纵向距离;
S333、计算所述第一横向距离与所述第二横向距离的比值,获得所述横向校准数据;以及计算所述第一纵向距离与所述第二纵向距离的比值,获得所述纵向校准数据。
本实施例中,步骤S331中可以通过公式(4)计算左眼球中心位置坐标(x 3,y 3),Pot(x 3,y 3)=Pot(r 3+w 3/2,t 3+h 3/2)  (4)
通过公式(5)计算右眼球中心位置坐标(x 4,y 4),
Pot(x 4,y 4)=Pot(r 4+w 4/2,t 4+h 4/2)   (5)
步骤S332可以通过公式(6)计算左眼球中心与左眼图像的最左边之间的第一横向距离d 1:d 1=x 3–r 1    (6)
通过公式(7)计算左眼球中心与左眼图像的最上边之间的第一纵向距离d 3:d 3=y 3–t 1    (7)
通过公式(8)计算右眼球中心与右眼图像的最右边之间的第二横向距 离d 2:d 2=r 2+w 2–x 4     (8)
通过公式(9)计算右眼球中心与右眼图像的最下边之间的第二纵向距离d 4:d 4=t 2+h 2–y 4     (9)
步骤S333可以通过公式(10)计算横向校准数据m:m=d 1/d 2   (10)
通过公式(11)计算纵向校准数据n:n=d 3/d 4    (11)
本实施例的眼动校准控制方法,在指定观看区域设置9个定位点,人眼依次注视这9个定位点,依次记录人眼注视一个定位点时的校准数据和该定位点的对应关系。在人眼注视一个定位点时,通过摄像头获取图像,从图像中查找人脸图像,再从人脸图像中查找人眼图像,最后从人眼图像中查找眼球图像,该方法查找效率快且准确度高;根据人眼位置数据和眼球位置数据,计算出距离校准数据d、横向校准数据m和纵向校准数据n,将d、m、n和该定位点的位置信息保存在存储器中。所有定位点均采集完数据后,通过9个定位点的距离校准数据可以校准人眼距离指定观看区域的距离,从而将用户与指定观看区域的距离限定在指定范围内;通过9个定位点的横向校准数据和纵向校准数据可以推算用户视线所看向的指定观看区域的位置,视线追踪的准确度高。本实施例的眼动控制校准数据获取方法无需采用专用设备,且可以根据用户的眼动习惯来采集数据,用户体验好。
参照图3,本申请实施例还提供了一种眼动控制校准数据获取装置,包括:图像获取模块10,用于依次获取人眼注视多个定位点的用户图像;其中,多个所述定位点预设于指定观看区域内;
图像分析模块20,用于依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;
数据计算模块30,用于根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。
本实施例中,图像获取模块10中的指定观看区域包括与用户进行人机交互的终端设备界面,例如可以是智能手机显示屏、平板显示屏、智能电视显示屏、个人电脑显示屏、笔记本电脑显示屏等。用户图像可以通过摄像头获取,摄像 头包括终端设备自带的前置摄像头、外接摄像头,如手机前置摄像头等。参照图2,为指定观看区域的定位点的示意图,包括左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点,其中左上、左中、左下、中下、中中和中上包围的指定观看区域为左边区域,右上、右中、右下、中下、中中和中上包围的指定观看区域为右边区域,左上、左中、中中、右中、右上和中上包围的指定观看区域为上边区域,左下、左中、中中、右中、右下和中下包围的指定观看区域为下边区域。
以眼动控制手机显示屏为例,用户根据自己的习惯在距离手机显示屏合适的距离处,眼睛注视手机显示屏的一个定位点,通过手机前置摄像头采集人眼注视该定位点的图像。比如,可以预先设置注视时间,通过第一提醒单元分别发送提醒用户持续注视每个定位点的提醒信息,以提醒用户持续注视该定位点;通过第一判断单元判断当前时刻距发送提醒信息的时刻之间的时长是否大于预设的注视时长,若当前时刻距发送所述提醒信息的时刻之间的时长大于预设的注视时长时,通过第一图像获取单元生成拍摄用户图像的指令,则摄像头获得拍摄指令,采集图像;也可以在通过第二提醒单元分别发送提醒用户持续注视每个定位点的信息后,通过实时图像获取单元用摄像头持续实时采集图像,通过第二判断单元,根据训练好的分类器区分人眼的状态,如果判断人眼处于注视状态,则通过第二图像获取单元获取注视状态中上述图像的任一帧图像作为用户图像。进一步从获取的图像中查找人眼图像和眼球图像,获取到人眼位置数据和眼球位置数据;根据所述人眼位置数据和所述眼球位置数据计算一系列校准数据,依次记录所述校准数据与所述定位点的对应关系。校准数据可用于后续眼动追踪控制中,判断用户与指定观看区域的距离是否在预设范围内,并进行用户视线位置追踪,提高视线判断的准确度。
具体地,本实施例的用户首先看向左上定位点,摄像头采集人眼注视左上定位点的图像,从该图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据,计算校准数据,记录该校准数据与左上定位点的对应关系。用户再开始看向中上定位点,其余步骤同左上定位点。直至左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点的校准数据和定位点的对应关系均采集完毕。
参照图4,本实施例中,所述图像分析模块20包括:
人脸查找单元201,用于从所述用户图像中查找人脸图像;
人眼查找单元202,用于从所述人脸图像中查找人眼图像,以及从所述人脸图像中获取人眼位置数据,所述人眼图像包括左眼图像和右眼图像;
眼球查找单元203,用于从所述人眼图像中查找眼球图像,以及从所述人脸图像中获取眼球位置数据。
本实施例中,人脸查找单元201先从图像中查找人脸图像,如果在图像中没有查找到人脸图像,则返回步骤S1,调整用户和指定观看区域的相对位置,直至摄像头获取的图像中能查找到人脸图像。人脸图像的查找方法较多,比如:利用人脸规则(如眼睛、鼻子、嘴巴等的分布)对输入图像进行人脸检测;通过寻找人脸面部不变的特征(如肤色、边缘、纹理)来对输入图像进行人脸检测;将人脸的面部特征用一个标准的人脸模板来描述,进行人脸检测时,先计算输入图像与标准人脸模板之间的相关值,然后再将求得的相关值与事先设定的阂值进行比较,以判别输入图像中是否存在人脸;将人脸区域看作一类模式,使用大量的人脸数据作样本训练,来学习潜在的规则并构造分类器,通过判别图像中所有可能区域模式属性来实现人脸的检测。本实施例将查找到的人脸图像用矩形框标出。
人眼查找单元202从人脸图像的矩形框中查找人眼图像,有利于缩小查找范围,提高人眼查找的查找效率和准确度,如果没有查找到人眼图像,则返回步骤S1,重新获取图像,直至步骤S22中能查找到人眼图像。人眼查找的方法包括基于模板匹配的方法、基于统计的方法和基于知识的方法。其中基于模板匹配的方法包括灰度投影模板和几何特征模板:灰度投影法是指对人脸灰度图像进行水平和垂直方向的投影,分别统计出两个方向上的灰度值和/或灰度函数值,找出特定变化点,然后根据先验知识将不同方向上的变化点位置相结合,即得到人眼的位置;几何特征模板是利用眼睛的个体特征以及分布特征作为依据来实施人眼检测。基于统计的方法一般是通过对大量目标样本和非目标样本进行训练学习得到一组模型参数,然后基于模型构建分类器或者滤波器来检测目标。基于知识的方法是确定图像的应用环境,总结特定条件下可用于人眼检测的知识(如轮廓信息、色彩信息、位置信息)等,把它们归纳成指导人眼检测的规则。本实施例用矩形框分别框出左眼图像和右眼图像,获得下述人眼位 置数据,包括:
r 1:左眼图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 1:左眼图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 1:左眼图像的矩形框的宽度;h 1:左眼图像的矩形框的高度;
r 2:右眼图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 2:右眼图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 2:右眼图像的矩形框的宽度;h 2:右眼图像的矩形框的高度。
眼球查找单元203从左眼图像中查找到左眼球图像,从右眼图像中查找右眼球图像,如果没有查找到眼球图像,则返回步骤S1,重新获取图像,直至步骤S23中能查找到眼球图像。眼球查找的方法包括神经网络法、边缘点积分投影曲线的极值位置判别法、模板匹配法、多分辨率的马赛克图法、几何及对称性检测法、基于霍夫变换法等。本实施例用矩形框分别框出左眼球图像和右眼球图像,获得下述眼球位置数据,包括:
r 3:左眼球图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 3:左眼球图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 3:左眼球图像的矩形框的宽度;h 3:左眼球图像的矩形框的高度;
r 4:右眼球图像的矩形框的左上顶点距离人脸图像的最左边的距离;
t 4:右眼球图像的矩形框的左上顶点距离人脸图像的最上边的距离;
w 4:右眼球图像的矩形框的宽度;h 4:右眼球图像的矩形框的高度。
本实施例中给出了从人脸图像中获取眼球位置数据的具体参数。基于本申请的发明理念,也可以从人眼图像中获取眼球位置数据,本申请不对从人眼图像中获取眼球位置数据进行赘述。
参照图5,本实施例中,所述校准数据包括距离校准数据、横向校准数据和纵向校准数据,所述数据计算模块30包括:
第一数据获取单元301,用于根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据;
第二数据获取单元302,用于根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据;
数据存储单元303,用于将所述距离校准数据、横向校准数据、纵向校准 数据和对应的所述定位点位置信息保存在存储器中。
本实施例中,通过第一数据获取单元301、第二数据获取单元302和数据存储单元303计算人眼注视一个定位点时的校准数据,并将校准数据和对应的定位点信息保存在存储器中。本实施例中对左上、中上、右上、左中、中中、右中、左下、中下和右下的9个定位点进行一一计算和数据储存。距离校准数据用来定位人眼离指定观看区域的距离,横向校准数据和纵向校准数据用来指示人眼看向指定定位点时的眼球位置。
参照图6,本实施例中,所述第一数据获取单元301包括:
第一计算子单元3011,用于根据所述人眼位置数据包括的左眼位置数据,计算左眼中心位置坐标;以及根据所述人眼位置数据包括的右眼位置数据,计算右眼中心位置坐标;
第二计算子单元3012,用于根据所述左眼中心位置坐标以及所述右眼中心位置坐标,计算左眼中心与右眼中心的距离,获得所述距离校准数据;
本实施例中,第一计算子单元3011可以通过公式(12)计算左眼中心位置坐标(x 1,y 1),Pot(x 1,y 1)=Pot(r 1+w 1/2,t 1+h 1/2)    (12)
通过公式(13)计算右眼中心位置坐标(x 2,y 2),
Pot(x 2,y 2)=Pot(r 2+w 2/2,t 2+h 2/2)    (13)
第二计算子单元3012可以通过公式(14)计算左眼中心与右眼中心的距离d,d即为距离校准数据。
d=√((x 1–x 2)²+(y 1–y 2)²)   (14)
通过d的值可以定位人眼距离指定观看区域的距离。
参照图7,本实施例中,所述第二数据获取单元302包括:
第三计算子单元3021,用于根据所述眼球位置数据包括的左眼球位置数据,计算左眼球中心位置坐标;以及根据所述眼球位置数据包括的右眼球位置数据,计算右眼球中心位置坐标;
第四计算子单元3022,用于根据所述左眼球中心位置坐标和所述左眼位置数据,计算左眼球中心与所述左眼图像的最左边之间的第一横向距离,和 左眼球中心与所述左眼图像的最上边之间的第一纵向距离;以及根据所述右眼球中心位置坐标和所述右眼位置数据,计算右眼球中心与所述右眼图像的最右边之间的第二横向距离,和右眼球中心与右眼图像的最下边之间的第二纵向距离;
第五计算子单元3023,用于计算所述第一横向距离与所述第二横向距离的比值,获得所述横向校准数据;以及计算所述第一纵向距离与所述第二纵向距离的比值,获得所述纵向校准数据。
本实施例中,第三计算子单元3021中可以通过公式(15)计算左眼球中心位置坐标(x 3,y 3),Pot(x 3,y 3)=Pot(r 3+w 3/2,t 3+h 3/2)   (15)
通过公式(16)计算右眼球中心位置坐标(x 4,y 4),
Pot(x 4,y 4)=Pot(r 4+w 4/2,t 4+h 4/2)     (16)
第四计算子单元3022可以通过公式(17)计算左眼球中心与左眼图像的最左边之间的第一横向距离d 1:d 1=x 3–r 1   (17)
通过公式(18)计算左眼球中心与左眼图像的最上边之间的第一纵向距离d 3:d 3=y 3–t 1    (18)
通过公式(19)计算右眼球中心与右眼图像的最右边之间的第二横向距离d 2:d 2=r 2+w 2–x 4      (19)
通过公式(20)计算右眼球中心与右眼图像的最下边之间的第二纵向距离d 4:d 4=t 2+h 2–y 4     (20)
第五计算子单元3023可以通过公式(21)计算横向校准数据m:
m=d 1/d 2     (21)
通过公式(22)计算纵向校准数据n:n=d 3/d 4    (22)
本实施例的眼动校准控制装置,在指定观看区域设置9个定位点,人眼依次注视这9个定位点,依次记录人眼注视一个定位点时的校准数据和该定位点的对应关系。在人眼注视一个定位点时,通过摄像头获取图像,从图像中查找 人脸图像,再从人脸图像中查找人眼图像,最后从人眼图像中查找眼球图像,该方法查找效率快且准确度高;根据人眼位置数据和眼球位置数据,计算出距离校准数据d、横向校准数据m和纵向校准数据n,将d、m、n和该定位点的位置信息保存在存储器中。所有定位点均采集完数据后,通过9个定位点的距离校准数据可以校准人眼距离指定观看区域的距离,从而将用户与指定观看区域的距离限定在指定范围内;通过9个定位点的横向校准数据和纵向校准数据可以推算用户视线所看向的指定观看区域的位置,视线追踪的准确度高。本实施例的眼动控制校准数据获取装置无需采用专用设备,且可以根据用户的眼动习惯来采集数据,用户体验好。
本申请还提出一种计算机设备03,其包括处理器04、存储器01及存储于所述存储器01上并可在所述处理器04上运行的计算机程序02,所述处理器04执行所述计算机程序02时实现上述的眼动控制校准数据获取方法。

Claims (17)

  1. 一种眼动控制校准数据获取方法,其特征在于,包括:
    依次获取人眼注视多个定位点的用户图像;其中,多个所述定位点预设于指定观看区域内;
    依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;
    根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。
  2. 如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述依次从所述用户图像中查找人眼图像和眼球图像,获取人眼图像位置数据和眼球图像位置数据的步骤,包括:
    从所述用户图像中查找人脸图像;
    从所述人脸图像中查找人眼图像,以及从所述人脸图像中获取人眼位置数据,所述人眼图像包括左眼图像和右眼图像;
    从所述人眼图像中查找眼球图像,以及从所述人脸图像中获取眼球位置数据。
  3. 如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述校准数据包括距离校准数据、横向校准数据和纵向校准数据,所述根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息的步骤,包括:
    根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据;以及根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据;
    将所述距离校准数据、横向校准数据、纵向校准数据和对应的所述定位点位置信息保存在存储器中。
  4. 如权利要求3所述的眼动控制校准数据获取方法,其特征在于,所述根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据的步骤,包括:
    根据所述人眼位置数据包括的左眼位置数据,计算左眼中心位置坐标;以及根据所述人眼位置数据包括的右眼位置数据,计算右眼中心位置坐标;
    根据所述左眼中心位置坐标以及所述右眼中心位置坐标,计算左眼中心与右眼中心的距离,获得所述距离校准数据。
  5. 如权利要求3所述的眼动控制校准数据获取方法,其特征在于,所述根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据的步骤,包括:
    根据所述眼球位置数据包括的左眼球位置数据,计算左眼球中心位置坐标;以及根据所述眼球位置数据包括的右眼球位置数据,计算右眼球中心位置坐标;
    根据所述左眼球中心位置坐标和所述左眼位置数据,计算左眼球中心与所述左眼图像的最左边之间的第一横向距离,和左眼球中心与所述左眼图像的最上边之间的第一纵向距离;以及根据所述右眼球中心位置坐标和所述右眼位置数据,计算右眼球中心与所述右眼图像的最右边之间的第二横向距离,和右眼球中心与右眼图像的最下边之间的第二纵向距离;
    计算所述第一横向距离与所述第二横向距离的比值,获得所述横向校准数据;以及计算所述第一纵向距离与所述第二纵向距离的比值,获得所述纵向校准数据。
  6. 如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述依次获取人眼注视多个定位点的用户图像的步骤,包括:
    分别发送提醒用户持续注视每个所述定位点的提醒信息;
    分别判断当前时刻距发送所述提醒信息的时刻之间的时长是否大于预设的注视时长;
    若是,则分别生成拍摄所述用户图像的指令,以获取所述用户图像。
  7. 如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述依次获取人眼注视多个定位点的用户图像的步骤,包括:
    分别发送提醒用户持续注视每个所述定位点的提醒信息;
    分别获取摄像头实时采集的图像;
    分别通过预训练的分类器判断所述图像内所包含的人眼的状态;
    若所述人眼处于注视状态,则分别从实时采集的所述图像中获取所述用户图像。
  8. 如权利要求1所述的眼动控制校准数据获取方法,其特征在于,所述指定观看区域内包括左上、中上、右上、左中、中中、右中、左下、中下和右下的9个所述定位点。
  9. 一种眼动控制校准数据获取装置,其特征在于,包括:
    图像获取模块,用于依次获取人眼注视多个定位点的用户图像;其中,多 个所述定位点预设于指定观看区域内;
    图像分析模块,用于依次从所述用户图像中查找人眼图像和眼球图像,获取人眼位置数据和眼球位置数据;
    数据计算模块,用于根据所述人眼位置数据和所述眼球位置数据计算校准数据,依次记录所述校准数据和对应的多个所述定位点位置信息。
  10. 如权利要求9所述的眼动控制校准数据获取装置,其特征在于,所述图像分析模块包括:
    人脸查找单元,用于从所述用户图像中查找人脸图像;
    人眼查找单元,用于从所述人脸图像中查找人眼图像,以及从所述人脸图像中获取人眼位置数据,所述人眼图像包括左眼图像和右眼图像;
    眼球查找单元,用于从所述人眼图像中查找眼球图像,以及从所述人脸图像中获取眼球位置数据。
  11. 如权利要求9所述的眼动控制校准数据获取装置,其特征在于,所述校准数据包括距离校准数据、横向校准数据和纵向校准数据,所述数据计算模块包括:
    第一数据获取单元,用于根据所述人眼位置数据计算人眼注视一个所述定位点时的距离校准数据;
    第二数据获取单元,用于根据所述人眼位置数据和所述眼球位置数据计算人眼注视一个所述定位点时的眼球位置横向校准数据与眼球位置纵向校准数据;
    数据存储单元,用于将所述距离校准数据、横向校准数据、纵向校准数据和对应的所述定位点位置信息保存在存储器中。
  12. 如权利要求11所述的眼动控制校准数据获取装置,其特征在于,所述第一数据获取单元包括:
    第一计算子单元,用于根据所述人眼位置数据包括的左眼位置数据,计算左眼中心位置坐标;以及根据所述人眼位置数据包括的右眼位置数据,计算右眼中心位置坐标;
    第二计算子单元,用于根据所述左眼中心位置坐标以及所述右眼中心位置坐标,计算左眼中心与右眼中心的距离,获得所述距离校准数据。
  13. 如权利要求11所述的眼动控制校准数据获取装置,其特征在于,所述第二数据获取单元包括:
    第三计算子单元,用于根据所述眼球位置数据包括的左眼球位置数据,计 算左眼球中心位置坐标;以及根据所述眼球位置数据包括的右眼球位置数据,计算右眼球中心位置坐标;
    第四计算子单元,用于根据所述左眼球中心位置坐标和所述左眼位置数据,计算左眼球中心与所述左眼图像的最左边之间的第一横向距离,和左眼球中心与所述左眼图像的最上边之间的第一纵向距离;以及根据所述右眼球中心位置坐标和所述右眼位置数据,计算右眼球中心与所述右眼图像的最右边之间的第二横向距离,和右眼球中心与右眼图像的最下边之间的第二纵向距离;
    第五计算子单元,用于计算所述第一横向距离与所述第二横向距离的比值,获得所述横向校准数据;以及计算所述第一纵向距离与所述第二纵向距离的比值,获得所述纵向校准数据。
  14. 如权利要求9所述的眼动控制校准数据获取装置,其特征在于,所述图像获取模块包括:
    第一提醒单元,用于分别发送提醒用户持续注视每个所述定位点的提醒信息;
    第一判断单元,用于分别判断当前时刻距发送所述提醒信息的时刻之间的时长是否大于预设的注视时长;
    第一图像获取单元,用于若当前时刻距发送所述提醒信息的时刻之间的时长大于预设的注视时长,则分别生成拍摄所述用户图像的指令,以获取所述用户图像。
  15. 如权利要求9所述的眼动控制校准数据获取装置,其特征在于,所述图像获取模块包括:
    第二提醒单元,用于分别发送提醒用户持续注视每个所述定位点的信息;
    实时图像获取单元,用于分别获取摄像头实时采集的图像;
    第二判断单元,用于分别通过预训练的分类器判断所述图像内所包含的人眼的状态;
    第二图像获取单元,用于若所述人眼处于注视状态,则分别从实时采集的所述图像中获取所述用户图像。
  16. 如权利要求9所述的眼动控制校准数据获取装置,其特征在于,所述指定观看区域内包括左上、中上、右上、左中、中中、右中、左下、中下和右下的9个所述定位点。
  17. 一种计算机设备,其特征在于,其包括处理器、存储器及存储于所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机 程序时实现如权利要求1~8任一项所述的眼动控制校准数据获取方法。
PCT/CN2019/073766 2018-08-31 2019-01-29 眼动控制校准数据获取方法和装置 WO2020042542A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811014201.1 2018-08-31
CN201811014201.1A CN109343700B (zh) 2018-08-31 2018-08-31 眼动控制校准数据获取方法和装置

Publications (1)

Publication Number Publication Date
WO2020042542A1 true WO2020042542A1 (zh) 2020-03-05

Family

ID=65292236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073766 WO2020042542A1 (zh) 2018-08-31 2019-01-29 眼动控制校准数据获取方法和装置

Country Status (2)

Country Link
CN (1) CN109343700B (zh)
WO (1) WO2020042542A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444789A (zh) * 2020-03-12 2020-07-24 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其***
CN113255476A (zh) * 2021-05-08 2021-08-13 西北大学 一种基于眼动追踪的目标跟踪方法、***及存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109976528B (zh) * 2019-03-22 2023-01-24 北京七鑫易维信息技术有限公司 一种基于头动调整注视区域的方法以及终端设备
CN110275608B (zh) * 2019-05-07 2020-08-04 清华大学 人眼视线追踪方法
CN110399930B (zh) * 2019-07-29 2021-09-03 北京七鑫易维信息技术有限公司 一种数据处理方法及***
CN110780742B (zh) * 2019-10-31 2021-11-02 Oppo广东移动通信有限公司 眼球追踪处理方法及相关装置
CN111290580B (zh) * 2020-02-13 2022-05-31 Oppo广东移动通信有限公司 基于视线追踪的校准方法及相关装置
CN113918007B (zh) * 2021-04-27 2022-07-05 广州市保伦电子有限公司 一种基于眼球追踪的视频交互操作方法
CN116824683B (zh) * 2023-02-20 2023-12-12 广州视景医疗软件有限公司 一种基于移动设备的眼动数据采集方法及***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060110008A1 (en) * 2003-11-14 2006-05-25 Roel Vertegaal Method and apparatus for calibration-free eye tracking
CN101807110A (zh) * 2009-02-17 2010-08-18 由田新技股份有限公司 瞳孔定位方法及***
CN102802502A (zh) * 2010-03-22 2012-11-28 皇家飞利浦电子股份有限公司 用于跟踪观察者的注视点的***和方法
CN105094337A (zh) * 2015-08-19 2015-11-25 华南理工大学 一种基于虹膜和瞳孔的三维视线估计方法
CN109375765A (zh) * 2018-08-31 2019-02-22 深圳市沃特沃德股份有限公司 眼球追踪交互方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102830793B (zh) * 2011-06-16 2017-04-05 北京三星通信技术研究有限公司 视线跟踪方法和设备
CN102662476B (zh) * 2012-04-20 2015-01-21 天津大学 一种视线估计方法
CN107436675A (zh) * 2016-05-25 2017-12-05 深圳纬目信息技术有限公司 一种视觉交互方法、***和设备
US9996744B2 (en) * 2016-06-29 2018-06-12 International Business Machines Corporation System, method, and recording medium for tracking gaze using only a monocular camera from a moving screen
CN107633240B (zh) * 2017-10-19 2021-08-03 京东方科技集团股份有限公司 视线追踪方法和装置、智能眼镜
CN108427503B (zh) * 2018-03-26 2021-03-16 京东方科技集团股份有限公司 人眼追踪方法及人眼追踪装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060110008A1 (en) * 2003-11-14 2006-05-25 Roel Vertegaal Method and apparatus for calibration-free eye tracking
CN101807110A (zh) * 2009-02-17 2010-08-18 由田新技股份有限公司 瞳孔定位方法及***
CN102802502A (zh) * 2010-03-22 2012-11-28 皇家飞利浦电子股份有限公司 用于跟踪观察者的注视点的***和方法
CN105094337A (zh) * 2015-08-19 2015-11-25 华南理工大学 一种基于虹膜和瞳孔的三维视线估计方法
CN109375765A (zh) * 2018-08-31 2019-02-22 深圳市沃特沃德股份有限公司 眼球追踪交互方法和装置

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444789A (zh) * 2020-03-12 2020-07-24 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其***
CN111444789B (zh) * 2020-03-12 2023-06-20 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其***
CN113255476A (zh) * 2021-05-08 2021-08-13 西北大学 一种基于眼动追踪的目标跟踪方法、***及存储介质
CN113255476B (zh) * 2021-05-08 2023-05-19 西北大学 一种基于眼动追踪的目标跟踪方法、***及存储介质

Also Published As

Publication number Publication date
CN109343700A (zh) 2019-02-15
CN109343700B (zh) 2020-10-27

Similar Documents

Publication Publication Date Title
WO2020042542A1 (zh) 眼动控制校准数据获取方法和装置
WO2020042541A1 (zh) 眼球追踪交互方法和装置
US9791927B2 (en) Systems and methods of eye tracking calibration
Xu et al. Turkergaze: Crowdsourcing saliency with webcam based eye tracking
CN105184246B (zh) 活体检测方法和活体检测***
Li et al. Learning to predict gaze in egocentric video
US9075453B2 (en) Human eye controlled computer mouse interface
US9750420B1 (en) Facial feature selection for heart rate detection
CN108230383A (zh) 手部三维数据确定方法、装置及电子设备
KR101288447B1 (ko) 시선 추적 장치와 이를 이용하는 디스플레이 장치 및 그 방법
CN105912126B (zh) 一种手势运动映射到界面的增益自适应调整方法
Emery et al. OpenNEEDS: A dataset of gaze, head, hand, and scene signals during exploration in open-ended VR environments
WO2021135639A1 (zh) 活体检测方法及装置
CN111696140A (zh) 基于单目的三维手势追踪方法
WO2023071882A1 (zh) 人眼注视检测方法、控制方法及相关设备
KR102657095B1 (ko) 탈모 상태 정보 제공 방법 및 장치
US10036902B2 (en) Method of determining at least one behavioural parameter
Kim et al. Gaze estimation using a webcam for region of interest detection
JP2016111612A (ja) コンテンツ表示装置
CN115393963A (zh) 运动动作纠正方法、***、存储介质、计算机设备及终端
Yang et al. vGaze: Implicit saliency-aware calibration for continuous gaze tracking on mobile devices
CN112527103B (zh) 显示设备的遥控方法、装置、设备及计算机可读存储介质
CN110858095A (zh) 可由头部操控的电子装置与其操作方法
CN110275608B (zh) 人眼视线追踪方法
Yang et al. Continuous gaze tracking with implicit saliency-aware calibration on mobile devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19854978

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19854978

Country of ref document: EP

Kind code of ref document: A1