CN110263657A - Eye tracking method, apparatus, system, device, and storage medium - Google Patents

Eye tracking method, apparatus, system, device, and storage medium

Info

Publication number
CN110263657A
CN110263657A (application CN201910438457.3A); granted as CN110263657B
Authority
CN
China
Prior art keywords
camera
user
azimuth information
pupil
eyes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910438457.3A
Other languages
Chinese (zh)
Other versions
CN110263657B (en)
Inventor
卢增祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yixin Technology Development Co Ltd
Original Assignee
Yixin Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yixin Technology Development Co Ltd
Priority to CN201910438457.3A
Priority to PCT/CN2019/106701 (published as WO2020237921A1)
Publication of CN110263657A
Application granted
Publication of CN110263657B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention disclose an eye tracking method, apparatus, system, device, and storage medium. The method comprises: calling a first camera to capture user images in a preset viewing area, and determining, from the user images, the three-dimensional head pose information corresponding to each user in the preset viewing area; according to the three-dimensional head pose information, determining a first preset number of target second cameras from among multiple second cameras; calling each target second camera to capture a face image of the user, and determining the user's two-dimensional pupil position information from each face image; and determining the user's three-dimensional pupil position information from at least two pieces of two-dimensional pupil position information. The technical solution of these embodiments improves computation speed and accuracy at the same time, and is applicable to multi-user eye tracking scenarios.

Description

Eye tracking method, apparatus, system, device, and storage medium
Technical field
Embodiments of the present invention relate to image processing technology, and in particular to an eye tracking method, apparatus, system, device, and storage medium.
Background technique
Eye tracking technology is used mainly in fields such as human-computer interaction, glasses-free 3D display, and virtual reality: by tracking eyeball movement it obtains a person's viewing viewpoint position. Current glasses-free 3D displays use eye tracking to determine the current viewing position and modify the displayed image so as to reduce left-eye/right-eye crosstalk in the 3D image.
Existing eye tracking is mainly implemented with PCCR (pupil centre corneal reflection) combined with image-processing-based recognition, as in the Tobii eye tracker. The Tobii eye tracker uses a near-infrared light source to produce reflection images on the cornea and pupils of the user's eyes, then captures images of the eyes and the reflections with two image sensors. An image-processing algorithm and a three-dimensional eyeball model are then used to precisely calculate the spatial position of the eyes and the gaze direction.
However, existing eye tracking methods have no user-recognition capability and are only applicable to short-distance single-user scenarios, such as using a computer, VR glasses, or an eye examination. Moreover, existing methods usually first identify the face region in the user image and then calculate the user's pupil positions from that face region; because the pixel area occupied by the face region in the user image may be small, calculating the pupil positions directly in the user image cannot improve accuracy and speed at the same time.
As can be seen, there is an urgent need for an eye tracking method that improves both computation speed and computation accuracy.
Summary of the invention
Embodiments of the invention provide an eye tracking method, apparatus, system, device, and storage medium that improve computation speed and accuracy at the same time and are applicable to multi-user eye tracking scenarios.
In a first aspect, an embodiment of the invention provides an eye tracking method, comprising:
calling a first camera to capture user images in a preset viewing area, and determining, from the user images, the three-dimensional head pose information corresponding to each user in the preset viewing area;
according to the three-dimensional head pose information, determining a first preset number of target second cameras from among multiple second cameras;
calling each target second camera to capture a face image of the user, and determining the user's two-dimensional pupil position information from each face image;
determining the user's three-dimensional pupil position information from at least two pieces of two-dimensional pupil position information.
In a second aspect, an embodiment of the invention further provides an eye tracking apparatus, comprising:
a three-dimensional head pose determining module, configured to call a first camera to capture user images in a preset viewing area and determine, from the user images, the three-dimensional head pose information corresponding to each user in the preset viewing area;
a target second camera determining module, configured to determine, according to the three-dimensional head pose information, a first preset number of target second cameras from among multiple second cameras;
a two-dimensional pupil position determining module, configured to call each target second camera to capture a face image of the user and determine the user's two-dimensional pupil position information from each face image;
a three-dimensional pupil position determining module, configured to determine the user's three-dimensional pupil position information from at least two pieces of two-dimensional pupil position information.
In a third aspect, an embodiment of the invention further provides an eye tracking system comprising a first camera, multiple second cameras, and an eye tracking apparatus, where the eye tracking apparatus implements the eye tracking method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the invention further provides a device, comprising:
one or more processors;
a memory for storing one or more programs;
an input unit for capturing images;
an output unit for displaying screen information;
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the eye tracking method provided by any embodiment of the invention.
In a fifth aspect, an embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the eye tracking method provided by any embodiment of the invention.
In embodiments of the invention, the three-dimensional head pose information of every user in the preset viewing area is determined from the user images captured by the first camera, realising head tracking. Based on a user's three-dimensional head pose, a first preset number of target second cameras are determined from the multiple second cameras, so that the pupil regions occupy a large pixel area in the face images each target second camera captures; the user's three-dimensional pupil position information can therefore be determined quickly and accurately from multiple face images, achieving high-speed eye tracking while improving computation accuracy. Moreover, because multiple second cameras capture the face images of the users in the preset viewing area, different second cameras can be called to capture different users' face images simultaneously, making the solution applicable to multi-user eye tracking scenarios.
Detailed description of the invention
Fig. 1 is a flowchart of an eye tracking method provided in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of second-camera matching in Embodiment 1;
Fig. 3 is an example of the sensitivity of a light ray to the depth-direction distance of the eye position in Embodiment 1;
Fig. 4 is an example of displaying the second three-dimensional head pose information in Embodiment 1;
Fig. 5 is a schematic diagram of receiving face image data in Embodiment 1;
Fig. 6 is a flowchart of an eye tracking method provided in Embodiment 2 of the present invention;
Fig. 7 is an example of a spiral search graph in Embodiment 2;
Fig. 8 is a schematic diagram of the second cameras and their column layout at a second placement position in Embodiment 2;
Fig. 9 is an example of the three-layer data table corresponding to a second camera in Embodiment 2;
Fig. 10 is a structural schematic diagram of an eye tracking apparatus provided in Embodiment 3 of the present invention;
Fig. 11 is a structural schematic diagram of an eye tracking system provided in Embodiment 4 of the present invention;
Fig. 12 is a layout example of the first cameras in Embodiment 4 when the preset viewing area is the region inside a circle;
Fig. 13 is a layout example of the first cameras in Embodiment 4 when the preset viewing area is the region outside a circle;
Fig. 14 is a layout example of the first cameras in Embodiment 4 when the preset viewing area is a single-sided viewing area along a straight line;
Fig. 15 is an example of the intersection region of two adjacent second cameras in Embodiment 4;
Fig. 16 is a layout example of the second cameras in Embodiment 4 when the preset viewing area is the region inside a circle;
Fig. 17 is a layout example of the second cameras in Embodiment 4 when the preset viewing area is the region outside a circle;
Fig. 18 is a layout example of the second cameras in Embodiment 4 when the preset viewing area is a single-sided viewing area along a straight line;
Fig. 19 is a structural schematic diagram of another eye tracking system provided in Embodiment 4;
Fig. 20 is a structural schematic diagram of a device provided in Embodiment 5 of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of an eye tracking method provided in Embodiment 1 of the present invention. This embodiment is applicable to tracking and locating a user's pupils while the user watches a 3D display. The method can be executed by an eye tracking apparatus, which can be implemented in software and/or hardware and integrated into a device with a 3D display function, such as a glasses-free 3D advertising machine or a glasses-free 3D display. As shown in Fig. 1, the method specifically includes the following steps:
S110: call the first camera to capture user images in the preset viewing area, and determine from the user images the three-dimensional head pose information corresponding to each user in the preset viewing area.
Here, the first camera is a camera used to track users' heads. In this embodiment it can be a 3D camera, or several 2D cameras. Because the first camera is only responsible for tracking heads, its requirements on facial detail and tracking rate are not high, so a first camera with a wide viewing angle can be chosen. The preset viewing area is the region where users are present when watching the 3D display, and can be determined in advance from the shape and position of the 3D display. For example, if the 3D display is a circle that faces its own centre, the preset viewing area can be the region inside that circle. One or more users may be watching in the preset viewing area at the same time. The number and mounting positions (i.e. the layout) of the first cameras in this embodiment can be configured in advance from the shape and position of the 3D display, so that the combined detection area of all first cameras covers the preset viewing area and the first cameras can track the head of every user in it. In this embodiment the heads of at least ten users can be tracked simultaneously; the exact number depends on the shooting performance of the first camera. The three-dimensional head pose information can include the user's three-dimensional head position and head orientation, where the head orientation reflects the state of the head, e.g. tilted back, lowered, or turned to one side.
Specifically, each first camera is called to capture user images in the preset viewing area, the users' heads are tracked and located using, for example, visual positioning principles, and the user images are processed with techniques such as face matching to identify the different heads, so that the three-dimensional head pose information of every user in the preset viewing area can be calculated. In this embodiment the first camera is preferably a high-definition colour camera, so that the RGB information in the captured user images can be used to identify each user's hair colour, skin colour, and so on, allowing the three-dimensional head pose information to be determined more accurately.
Illustratively, the three-dimensional head pose information corresponding to each user in the preset viewing area can be stored in the following data structure:
where hid is the identifier number of a tracked user head, used to distinguish different heads; (x, y, z) are the user's three-dimensional head position coordinates, with the unit settable to millimetres (mm); and (angle_x, angle_y, angle_z) is the orientation vector corresponding to the user's head direction. In this embodiment, the precision of the head's rotation in the horizontal direction (the rotation angle about the y-axis) is greater than the precision in the other two directions (the rotation angles about the x- and z-axes).
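The data structure itself is not reproduced in this text, so the following is a hypothetical reconstruction from the field descriptions above; the dataclass form, field types, and the example values are assumptions, not the patent's literal definition.

```python
from dataclasses import dataclass

# Hypothetical reconstruction of the head-pose record; field names follow
# the description above (hid, mm position, orientation vector).
@dataclass
class HeadPose:
    hid: int        # identifier of a tracked head, distinguishes users
    x: float        # 3D head position in millimetres (mm)
    y: float
    z: float
    angle_x: float  # components of the head-orientation vector
    angle_y: float
    angle_z: float

# Example: a user about 2.4 m from the display, head turned about the y-axis.
pose = HeadPose(hid=1, x=120.0, y=-35.5, z=2400.0,
                angle_x=0.0, angle_y=0.97, angle_z=-0.24)
```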
S120: according to the three-dimensional head pose information, determine a first preset number of target second cameras from among the multiple second cameras.
Here, the second cameras are cameras used to track users' pupils; in this embodiment they can be 2D cameras. The number and mounting positions (i.e. the layout) of the second cameras can be configured in advance from the shape and position of the 3D display, so that their combined detection area covers the preset viewing area and the eyes of every user in the area can be tracked, achieving multi-user eye tracking. A target second camera is a second camera, screened out from the multiple second cameras, that best captures a given user's face image. The first preset number can be configured according to business requirements and the scenario; in this embodiment it is at least two.
Specifically, for each user, the user's target second cameras can be determined from the multiple second cameras based on the user's three-dimensional head pose information, so that the pupil regions occupy a large pixel area in each face image a target second camera captures, guaranteeing resolution and improving eye tracking accuracy. Illustratively, the second cameras in this embodiment can be black-and-white cameras, with an infrared illumination source installed at each second camera's mounting position for image capture. The resolution of the second cameras can be lower than that of the first camera, further increasing image processing speed.
Illustratively, S120 may include: according to the three-dimensional head pose information and the configuration information of each second camera, determining a second preset number of candidate second cameras for the user and the matching degree corresponding to each candidate; and, according to each matching degree and each candidate second camera's current call count, screening the first preset number of target second cameras out of the candidates.
Here, the configuration information of a second camera can include, but is not limited to, its installation position, resolution, depth-of-field parameter, and viewing angle. Illustratively, the configuration information of each second camera can be stored in the following data structure:
where cid is the identifier number of the second camera, used to distinguish the cameras; (x, y, z) are the camera's installation position coordinates, with the unit settable to millimetres (mm); (angle_x, angle_y, angle_z) is the direction vector of the camera's shooting centre; width and height give the camera's resolution; fov_h and fov_v are its horizontal and vertical viewing angles, in degrees; dof is its depth-of-field parameter; and type is its layout type, e.g. circular-inward, circular-outward, or planar. Circular-inward means the second cameras distributed on a circle shoot toward the circle's centre; circular-outward means they shoot away from the centre; planar means second cameras distributed along a straight line shoot toward the viewing area on one side of the line. This embodiment can choose a screening and optimisation strategy based on the layout type. Note that the installation coordinates (x, y, z) stored for each second camera are coordinates in the world coordinate system, so that data can be matched.
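As with the head-pose record, the camera-configuration structure is only described, not shown; the sketch below is a hypothetical reconstruction in which the dataclass form, the field types, and the string values used for `type` are assumptions.

```python
from dataclasses import dataclass

# Hypothetical per-camera configuration record; field names follow the text.
@dataclass
class SecondCameraConfig:
    cid: int        # camera identifier, distinguishes second cameras
    x: float        # installation position in mm, world coordinate system
    y: float
    z: float
    angle_x: float  # direction vector of the shooting centre
    angle_y: float
    angle_z: float
    width: int      # horizontal resolution, pixels
    height: int     # vertical resolution, pixels
    fov_h: float    # horizontal viewing angle, degrees
    fov_v: float    # vertical viewing angle, degrees
    dof: float      # depth-of-field parameter
    type: str       # layout: "circular_inward", "circular_outward", "planar"

cam = SecondCameraConfig(cid=7, x=0.0, y=1500.0, z=3000.0,
                         angle_x=0.0, angle_y=0.0, angle_z=-1.0,
                         width=640, height=480, fov_h=60.0, fov_v=45.0,
                         dof=800.0, type="circular_inward")
```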
Specifically, this embodiment matches the three-dimensional head pose information against the configuration information of each second camera to determine a second preset number of candidate second cameras that can capture the user's face image, and determines each candidate's matching degree from the angle between the user's head direction and the centre line of the candidate's shooting view: the larger that angle, the smaller the matching degree. This embodiment can also use the positional relationships among the users in the preset viewing area to detect whether the shot would be occluded, and adjust a candidate's matching degree accordingly. For example, if a user B would occlude user A when some candidate second camera shoots user A, that candidate's matching degree can be reduced, or set to the minimum, so that this candidate is not used to capture images of user A. When multiple users are tracked at the same time, each second camera may hold multiple shooting tasks, each shooting task corresponding to one call of the camera, so that different users can be photographed. A candidate second camera's current call count is the number of shooting tasks it has pending at the current moment. In this embodiment the first preset number is less than or equal to the second preset number; the size of the second preset number can be determined from the business scenario and actual operating conditions. When the first preset number is less than the second preset number, the optimal second cameras, i.e. the target second cameras, can be further screened from the second preset number of candidates based on each candidate's matching degree and current call count. When the first preset number equals the second preset number, every determined candidate second camera can be used directly as a target second camera.
Illustratively, Fig. 2 gives a schematic diagram of second-camera matching. As shown in Fig. 2, the layout of the second cameras is circular-inward: placement positions are distributed uniformly on a circle, several second cameras can be installed at each placement position, and each camera shoots toward the circle's centre. In Fig. 2, the dotted lines indicate the users' head directions; c1 is the shooting view of second camera C1 at placement position 1; c2 is the shooting view of C2 at placement position 2; c3 is the shooting view of C3, also at placement position 2; and c4 is the shooting view of C4 at placement position 3. The head direction of user A falls between placement positions 1 and 2, so for user A one optimal second camera is matched at each of these two positions, with the optimal camera at a position determined from the shooting views of the second cameras there. As shown in Fig. 2, the two target second cameras matched to user A are C1 and C2; similarly, the two target second cameras matched to user B are C3 and C4.
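The matching rule above (the larger the angle between the user's head direction and the centre line of a candidate camera's shooting view, the smaller the matching degree) can be sketched as follows. The patent does not give a scoring formula; scoring by the cosine of the angle between the head direction and the reversed camera centre line, so that a camera facing the user head-on scores 1.0, is an assumption.

```python
import math

def matching_degree(head_dir, cam_center_dir):
    """Score a candidate camera for a user: the larger the angle between the
    head direction and the camera's shooting centre line, the lower the score."""
    dot = sum(h * c for h, c in zip(head_dir, cam_center_dir))
    nh = math.sqrt(sum(h * h for h in head_dir))
    nc = math.sqrt(sum(c * c for c in cam_center_dir))
    # The camera shoots *toward* the user, so the ideal camera centre line is
    # anti-parallel to the head direction; negate the cosine accordingly.
    return -dot / (nh * nc)

# Head facing +y, camera shooting along -y: the camera sees the face head-on.
best = matching_degree((0.0, 1.0, 0.0), (0.0, -1.0, 0.0))
# Camera viewing from 90 degrees to the side: a poorer match.
side = matching_degree((0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
```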
S130: call each target second camera to capture the user's face image, and determine the user's two-dimensional pupil position information from each face image.
Here, the two-dimensional pupil information can include the user's two-dimensional pupil positions and gaze direction, where the two-dimensional pupil positions include the left-eye and right-eye pupil positions. Illustratively, a user's two-dimensional pupil information can be stored in the following data structure:
where (left_x, left_y) and (right_x, right_y) are the two-dimensional position coordinates of the left and right pupils, with the x-axis running left to right and the y-axis top to bottom; (angle_x, angle_y) is the angle of the gaze direction, where the x direction can take the four values "none", "left", "centre", and "right", and correspondingly the y direction can take "none", "up", "centre", and "down", with "none" meaning the direction is unknown; time can be the time point at which the pupil positions were computed, with the unit settable to ms, so that pupil positions at different moments can be distinguished; and mode is the detected eye mode, one of the four modes both-eyes, left-eye, right-eye, and single-eye, where single-eye means that only one pupil was detected and it cannot be identified as the left or the right eye. The coordinates for single-eye mode can be stored in either the left-pupil fields (left_x, left_y) or the right-pupil fields (right_x, right_y). The precision of the two-dimensional pupil coordinates computed in this embodiment is ±1 pixel.
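The pupil record is likewise only described, so the following is a hypothetical reconstruction; the dataclass form, the `Optional` types, and the string values are assumptions chosen to mirror the field descriptions above.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical 2D pupil record; field names follow the description above.
@dataclass
class Pupils2D:
    left_x: Optional[float]   # left pupil, pixels (x grows left to right)
    left_y: Optional[float]   # (y grows top to bottom)
    right_x: Optional[float]  # right pupil, pixels
    right_y: Optional[float]
    angle_x: str  # gaze: "none", "left", "centre", "right"
    angle_y: str  # gaze: "none", "up", "centre", "down"
    time: int     # time point the pupil positions were computed, ms
    mode: str     # detected eye mode: "both", "left", "right", "single"

obs = Pupils2D(left_x=312.0, left_y=207.0, right_x=371.0, right_y=205.0,
               angle_x="centre", angle_y="centre", time=16540, mode="both")
```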
Specifically, this embodiment captures the user's face image with the specifically screened target second cameras, so that even when a low-resolution target second camera captures the face image, the pupil regions are guaranteed to occupy a large pixel area in it, and computation accuracy can be improved while computation speed is increased. After the first preset number of target second cameras are determined, each target second camera is called to obtain the user's corresponding face images, and each captured face image can be processed with an image processing algorithm so that the two-dimensional pupil information in each face image is computed quickly and accurately.
S140: determine the user's three-dimensional pupil position information from at least two pieces of two-dimensional pupil position information.
Here, the three-dimensional pupil information can include the user's three-dimensional pupil positions and gaze direction. In light-field display, because of ray angles, pixels emitting light in different directions differ in how sensitive their rays are to the depth-direction distance of the eye position. Fig. 3 gives an example of a light ray's sensitivity to the depth-direction distance of the eye position. The light-field display in Fig. 3 is a circle, and the user watches inside the circular area. In Fig. 3, the precision along the y-axis, at the eye pupil position, of the ray emitted by "pixel 1" is d1, and that of the ray emitted by "pixel 2" is d2; since d2 < d1, the ray from "pixel 2" has the higher precision along the y-axis at the pupil. If the precision of the computed eye pupil position along the y-axis is worse than d2, the ray emitted by "pixel 2" cannot be aimed at the user's eye pupil position and missing pixels appear, so the precision in the depth distance must be improved.
Specifically, this embodiment can three-dimensionally reconstruct the user's pupils from at least two pieces of two-dimensional pupil information and calculate the user's three-dimensional pupil information, improving the precision in the depth distance and avoiding the missing-pixel phenomenon described above. This embodiment can feed the user's three-dimensional pupil information into the 3D display driver, so that the 3D display can determine the corresponding display data from it and the user sees the corresponding three-dimensional picture.
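The three-dimensional reconstruction step can be sketched as classical two-view triangulation: each 2D pupil observation, together with its camera's calibrated pose, defines a ray, and the 3D pupil is recovered near where the two rays cross. The patent only states that at least two 2D observations are combined; midpoint-of-closest-approach triangulation is a standard choice assumed here, and the conversion from pixel coordinates to rays is omitted.

```python
import math

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest approach of rays o1 + t1*d1 and o2 + t2*d2."""
    n1 = math.sqrt(_dot(d1, d1))
    n2 = math.sqrt(_dot(d2, d2))
    d1 = [x / n1 for x in d1]  # normalise ray directions
    d2 = [x / n2 for x in d2]
    w = [a - b for a, b in zip(o1, o2)]
    a, b, c = _dot(d1, d1), _dot(d1, d2), _dot(d2, d2)
    d, e = _dot(d1, w), _dot(d2, w)
    denom = a * c - b * b  # near zero only for (near-)parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p = [o + t1 * x for o, x in zip(o1, d1)]
    q = [o + t2 * x for o, x in zip(o2, d2)]
    return [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

# Two cameras at (0,0,0) and (2,0,0) both sighting the same pupil at (1,1,0).
pupil = triangulate((0, 0, 0), (1, 1, 0), (2, 0, 0), (-1, 1, 0))
```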
In the technical solution of this embodiment, the three-dimensional head pose information of every user in the preset viewing area is determined from the user images captured by the first camera, realising head tracking. Based on a user's three-dimensional head pose, a first preset number of target second cameras are determined from the multiple second cameras, so that the pupil regions occupy a large pixel area in the face images each target second camera captures; the user's three-dimensional pupil information can therefore be determined quickly and accurately from the multiple face images, achieving high-speed eye tracking while improving computation accuracy. Moreover, because multiple second cameras capture the face images of the users in the preset viewing area, different second cameras can be called to capture different users' face images at the same time, enabling multi-user eye tracking.
On the basis of the above technical solution, "screening out the first preset number of target second cameras from the candidate second cameras according to each matching degree and the current call count of each candidate second camera" may include: according to the current call count of each candidate second camera, screening out the candidate second cameras whose current call count is less than or equal to a preset call count as cameras to be selected; sorting the cameras to be selected in descending order of their matching degrees; and determining the first preset number of cameras after sorting as the target second cameras.
The preset call count may refer to the maximum number of pending shooting tasks a second camera may hold, and can be configured in advance according to business requirements and scenarios; for example, it may be set to 5. Specifically, this embodiment may first screen out, among the candidate second cameras, those whose current call count is less than or equal to the preset call count as cameras to be selected, then sort the matching degrees of the cameras to be selected from large to small, and determine the first preset number of cameras after sorting as the target second cameras.
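The screening rule just described can be sketched directly (the record field names are illustrative, not from the patent):

```python
def select_target_cameras(candidates, preset_call_count, first_preset_number):
    """Keep the candidate second cameras whose pending call count does
    not exceed the preset call count, sort them by matching degree in
    descending order, and return the ids of the first preset number."""
    eligible = [c for c in candidates if c["calls"] <= preset_call_count]
    eligible.sort(key=lambda c: c["match"], reverse=True)
    return [c["id"] for c in eligible[:first_preset_number]]
```

A camera with a high matching degree but too many queued shooting tasks is skipped, which spreads the load across cameras when several users are tracked at once.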
On the basis of the above technical solution, the device performing eye tracking in this embodiment may call the first camera periodically, so as to capture user images periodically, determine the user's three-dimensional head azimuth information from each captured image, determine the target second cameras from that head azimuth information, and determine the user's two-dimensional binocular pupil azimuth information from the face images captured by the target second cameras. That is, the corresponding three-dimensional head azimuth information can be determined from the periodically captured user images. If, over a preset number of past captures, at least two two-dimensional binocular pupil azimuth information items could not be determined from the user images, this indicates that the user's head is continuously occluded or that the user is continuously in motion; in that case, when determining the target second cameras for the current capture, all of a second preset number of candidate second cameras may be determined as target second cameras, so as to increase the probability of finding the eyes and improve tracking efficiency.
Optionally, before S140, the method may further include: if only one two-dimensional binocular pupil azimuth information item is determined from the face images, screening out at least one further target second camera from the candidate second cameras remaining after the target second cameras were selected, calling the newly selected target second camera(s) to capture face images of the user, and determining the corresponding two-dimensional binocular pupil azimuth information from the newly captured face images; or, if no two-dimensional binocular pupil azimuth information can be determined from the face images, screening out at least two further target second cameras from the remaining candidate second cameras, calling each newly selected target second camera to capture face images of the user, and determining the corresponding two-dimensional binocular pupil azimuth information from each newly captured face image.
Specifically, after the first preset number of target second cameras are called to capture the first preset number of face images of the user, if the user suddenly turns the head or is at a boundary position of a second camera's field of view, the captured face image may not contain the user's binocular pupil information, so that no corresponding two-dimensional binocular pupil azimuth information can be determined from that face image. If only one two-dimensional binocular pupil azimuth information item is determined from the face images, at least one further target second camera can be screened out from the remaining candidate second cameras based on the same screening rule, the user's face image recaptured with it, and the corresponding two-dimensional binocular pupil azimuth information determined again, so as to obtain at least two two-dimensional binocular pupil azimuth information items. Here the screening rule may refer to the rule of screening based on the current call count and matching degree of the candidate cameras. Similarly, if no two-dimensional binocular pupil azimuth information can be determined from the face images, i.e., none of them contains binocular pupil information, at least two further target second cameras can be screened out from the remaining second cameras, the user's face images recaptured with them, and the corresponding two-dimensional binocular pupil azimuth information determined again, so as to obtain at least two such items.
On the basis of the above technical solution, after S140 and before S110, the method further includes: estimating the user's second three-dimensional head azimuth information at the next moment according to the user's three-dimensional binocular pupil azimuth information at the current moment and at historical moments; and estimating the user's three-dimensional binocular pupil azimuth information at the next moment according to the second three-dimensional head azimuth information.
The first camera in this embodiment captures user images in the preset viewing area frame by frame at a preset frame rate, so that the user's three-dimensional head azimuth information can be determined periodically from the periodically captured user images. Compared with the delay incurred by capturing a user image and determining the three-dimensional head azimuth information from it, the delay incurred by computing the three-dimensional binocular pupil azimuth information from the head azimuth information is shorter. That is, after the user's three-dimensional binocular pupil azimuth information at the current moment has been determined from the current frame captured by the first camera, the three-dimensional head azimuth information at the next moment, determined from the next frame, is not yet available: head position tracking is slower than eye position tracking. In this embodiment, after the three-dimensional binocular pupil azimuth information at the current moment has been determined from the current frame, i.e., after steps S110-S140 have been executed, and before the next frame yields the three-dimensional head azimuth information at the next moment, the user's three-dimensional head azimuth information at the next moment can be estimated, which solves the problem of slow head position tracking and further increases the tracking speed. The second three-dimensional head azimuth information in this embodiment may refer to head azimuth information estimated from the user's existing three-dimensional binocular pupil azimuth information. The three-dimensional binocular pupil azimuth information at a historical moment may refer to information accurately determined from user images at that historical moment.
Specifically, after S140, if the three-dimensional head azimuth information for the next moment determined from user images has not yet been obtained, the user's second three-dimensional head azimuth information at the next moment can be estimated fairly accurately from the three-dimensional binocular pupil azimuth information computed at the current moment and at historical moments, and the user's three-dimensional binocular pupil azimuth information at the next moment can then be estimated from that second three-dimensional head azimuth information, solving the problem of slow head position tracking. Once the accurate three-dimensional head azimuth information determined from user images becomes available, steps S120-S140 are executed with it, which reduces the delay time and further increases the tracking speed.
Illustratively, the second three-dimensional head azimuth information includes a three-dimensional head position and a rotation angle; correspondingly, the user's second three-dimensional head azimuth information at the next moment can be estimated according to the following formula:
Here, (Xp1, Yp1, Zp1) and α1 are the user's three-dimensional eye pupil position and gaze-direction angle at the current moment P1; (Xp2, Yp2, Zp2) and α2 are the user's three-dimensional eye pupil position and gaze-direction angle at the historical moment P2; (Xp3, Yp3, Zp3) and α3 are those at the historical moment P3; and (X, Y, Z) and α are the estimated three-dimensional head position and rotation angle of the user at the next moment. The three-dimensional eye pupil positions at the three moments P1, P2 and P3 may all be the user's right-eye pupil positions or all be the left-eye pupil positions. Fig. 4 gives an example of the second three-dimensional head azimuth information. In Fig. 4, point A and angle a are respectively the user's three-dimensional eye pupil position and gaze-direction angle at the next moment; point A1 and a1 are those at the current moment P1; point A2 and a2 are those at the historical moment P2; point A3 and a3 are those at the historical moment P3. According to the accurate three-dimensional eye pupil positions and gaze-direction angles at the current moment P1 and the two preceding moments P2 and P3, this embodiment can proportionally extrapolate the user's three-dimensional head position and rotation angle at the next moment, so that steps S120-S140 can continue to be executed with the estimated second three-dimensional head azimuth information, improving the tracking speed.
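The formula referred to above appears only as an image in the patent and is not reproduced here. One plausible reading of "extrapolating from P3, P2, P1" over equally spaced frames is a constant-acceleration (quadratic) extrapolation, next = 3·P1 − 3·P2 + P3, applied per component to the position (X, Y, Z) and the angle α; this is an assumption, not the patent's exact formula:

```python
def extrapolate_next(p1, p2, p3):
    """Quadratic (constant-acceleration) extrapolation from three
    equally spaced samples p3 (oldest), p2, p1 (current) to the next
    moment.  Applied per component, e.g. to (X, Y, Z) or to (alpha,)."""
    return tuple(3 * a - 3 * b + c for a, b, c in zip(p1, p2, p3))
```

For uniform motion this reduces to linear extrapolation; for uniformly accelerated motion it reproduces the exact next sample.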
On the basis of the above technical solution, after S130, the method further includes: determining, according to the three-dimensional head azimuth information, the position and size of the target face region in the face image captured by each target second camera. The target face region may refer to the image region in the face image enclosed by the face contour. Based on the user's three-dimensional head azimuth information, this embodiment can delimit the position and size of the target face region in the face image, thereby reducing the computation region; by performing image processing only on this target face region, the user's two-dimensional binocular pupil azimuth information can be computed more quickly, further increasing the computational speed of eye positioning.
On the basis of the above technical solution, "determining the user's two-dimensional binocular pupil azimuth information from each face image" in S130 may include: determining, according to the position and size of the target face region, the time over which data are received scan line by scan line; receiving, according to that time, the target face region data sent by the target second camera; and determining the user's two-dimensional binocular pupil azimuth information from the received target face image data.
The target second camera may use a CSI (CMOS Sensor Interface, camera serial interface) to send the captured face image data to the eye tracking device over a wired connection. The target second camera may store image data in a contiguous memory module and transmit the face image data by directly sending a list of line pointers into the face image, i.e., scan line by scan line, to increase the data transfer speed. In this embodiment, the time over which data are received scan line by scan line can be computed from the position and size of the target face region, so that little useless data is received beyond the face region, the delay caused by the target second camera capturing the face image is reduced, and the tracking speed is further increased. Illustratively, Fig. 5 gives a schematic diagram of receiving face image data; the face image resolution in Fig. 5 is 640×480. When starting to transmit a frame of the face image, the target second camera first sends a frame-start synchronization signal (its sending instant denoted Ts), and at the end of transmission sends a frame-end synchronization signal (its sending instant denoted Te); the line transmission period of the target second camera can be determined from these two signals as (Te-Ts)/480. From the three-dimensional head azimuth information, the eye tracking device can determine that the last line of the target face region lies at row 300, so the data reception time corresponding to the target face region is 300×(Te-Ts)/480. Timing starts after the frame-start synchronization signal is received; once the elapsed reception time reaches 300×(Te-Ts)/480, the target face image data containing the face region has been received, and the remaining face image data no longer needs to be received. This increases the data transfer speed, and the user's two-dimensional binocular pupil azimuth information can be determined more quickly from the received target face image data, reducing the delay time and further increasing the tracking and computation speed.
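The cut-off computation from the Fig. 5 example is a one-liner; the sketch below only restates the arithmetic in the text (time units are arbitrary):

```python
def face_region_receive_time(ts, te, last_row, total_rows=480):
    """Elapsed time after the frame-start sync at which every scan
    line up to last_row (the bottom row of the target face region)
    has arrived, assuming rows arrive uniformly over te - ts."""
    return last_row * (te - ts) / total_rows
```

With the Fig. 5 numbers (480-row frame, face region ending at row 300), reception can stop after 300/480 = 62.5% of the frame period, skipping the bottom 180 rows.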
Embodiment 2
Fig. 6 is a flow chart of an eye tracking method provided by Embodiment 2 of the present invention. On the basis of the above embodiment, this embodiment optimizes "determining the user's three-dimensional binocular pupil azimuth information according to at least two two-dimensional binocular pupil azimuth information items". Explanations of terms that are the same as or correspond to those of the above embodiment are not repeated here.
Referring to Fig. 6, the eye tracking method provided by this embodiment specifically includes the following steps:
S210, calling the first camera to capture user images in the preset viewing area, and determining the three-dimensional head azimuth information corresponding to each user in the preset viewing area according to the user images.
S220, determining a first preset number of target second cameras from the multiple second cameras according to the three-dimensional head azimuth information.
S230, calling each target second camera to capture face images of the user.
S240, if at least two two-dimensional binocular pupil azimuth information items corresponding to the user at the current moment cannot be determined from the face images, determining a spiral search rule according to the three-dimensional binocular pupil azimuth information at historical moments.
Specifically, when the user suddenly turns the head or is at a boundary position of a second camera's field of view, the user's binocular pupil information cannot be detected by calling the target second cameras, i.e., at least two two-dimensional binocular pupil azimuth information items cannot be determined from the face images. This indicates that the three-dimensional head azimuth information determined from the user images is inaccurate, so the user's three-dimensional head azimuth information at the current moment needs to be predicted in order to adjust the information determined from the user images. In general, when a user watches a screen, the angular speed of head rotation is greater than the speed of head translation, so besides building a head motion model, one can also search outward along a spiral to improve the prediction speed. This embodiment can determine the radius of the spiral at each node and the motion trajectory from the three-dimensional binocular pupil azimuth information determined at historical moments, thereby determining the spiral search rule. Fig. 7 gives an example of a spiral search graph. Nodes "0", "1", "2", "3", "4" and "5" in Fig. 7 represent different head positions, and the line attached to each node indicates the head orientation corresponding to that node. The spiral search rule in this embodiment may take node "0" as the initial position and search outward in the order of nodes "1" to "5".
S250, taking the three-dimensional head azimuth information determined from the user images as the user's three-dimensional head azimuth information at the current moment.
Specifically, when the spiral search rule is used to predict the user's three-dimensional head azimuth information, the three-dimensional head azimuth information determined from the user images is inaccurate, i.e., the target second cameras called according to that information cannot detect the user's binocular pupil information. This embodiment therefore first takes that three-dimensional head azimuth information as the user's three-dimensional head azimuth information at the current moment, so that it can then be adjusted.
S260, adjusting the three-dimensional head azimuth information at the current moment according to the spiral search rule, and taking the adjusted three-dimensional head azimuth information at the current moment as the first three-dimensional head azimuth information.
Specifically, the three-dimensional head azimuth information at the current moment can be adjusted according to the spiral search graph corresponding to the spiral search rule. Illustratively, in Fig. 7, node "0" represents the three-dimensional head azimuth information determined from the user images, i.e., the information at the current moment. When searching outward in the order of nodes "1" to "5", the three-dimensional head position and head orientation corresponding to node "1", the node following node "0", can be determined as the adjusted three-dimensional head azimuth information at the current moment, i.e., the first three-dimensional head azimuth information, so that the head azimuth information is reasonably adjusted and predicted.
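The patent does not give the node geometry of Fig. 7. One simple way to generate such an outward search order is an Archimedean spiral of candidate head positions around the last reliable position; the step size and point counts below are assumptions for illustration:

```python
import math

def spiral_search_nodes(center, step, points_per_turn, turns):
    """Candidate head positions spiralling outward from `center`:
    radius grows linearly with angle (Archimedean spiral), so node 1
    is closest to the last known position and later nodes probe
    progressively farther out, like nodes 1-5 around node 0 in Fig. 7."""
    cx, cy = center
    nodes = []
    for i in range(1, turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = step * i / points_per_turn
        nodes.append((cx + r * math.cos(theta), cy + r * math.sin(theta)))
    return nodes
```

Each candidate would be tried as a hypothetical head pose (steps S260-S280) until the called cameras detect both pupils or the preset duration expires.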
S270, determining at least two first two-dimensional binocular pupil azimuth information items of the user at the current moment according to the first three-dimensional head azimuth information.
Specifically, after the first three-dimensional head azimuth information is predicted, a first preset number of target second cameras can be determined from the multiple second cameras according to it, each target second camera can be called to capture face images of the user, and at least two first two-dimensional binocular pupil azimuth information items of the user at the current moment can be determined from those face images.
S280, determining the user's three-dimensional binocular pupil azimuth information according to the at least two first two-dimensional binocular pupil azimuth information items.
Specifically, this embodiment can perform a three-dimensional reconstruction of the user's two pupils from the at least two first two-dimensional binocular pupil azimuth information items to compute the user's three-dimensional binocular pupil azimuth information, thereby improving computational precision and speed.
In the technical solution of this embodiment, when the user's binocular pupil information cannot be detected with the three-dimensional head azimuth information determined from the user images, the spiral search rule can be used to adjust and predict the head azimuth information, so that more accurate first three-dimensional head azimuth information is obtained. Based on it, at least two first two-dimensional binocular pupil azimuth information items can be obtained and the user's three-dimensional binocular pupil azimuth information determined. This solves the problem that eye tracking fails when, for example, the user suddenly turns the head, further improving eye tracking speed and precision.
On the basis of the above technical solution, if at least two first two-dimensional binocular pupil azimuth information items at the current moment cannot be determined from the first three-dimensional head azimuth information, and the current detection duration is less than a preset duration, the first three-dimensional head azimuth information is taken as the three-dimensional head azimuth information at the current moment, and the method proceeds to step S260.
The preset duration can be determined from the frame rate of the first camera; for example, the time interval between two adjacent frames captured by the first camera may be used as the preset duration, guaranteeing that the prediction and adjustment operations take place before the three-dimensional head azimuth information determined from user images becomes available. Specifically, after a first preset number of target second cameras have been determined from the multiple second cameras according to the first three-dimensional head azimuth information and called to capture face images of the user, if at least two first two-dimensional binocular pupil azimuth information items at the current moment cannot be determined from those face images, and the current detection duration is less than the preset duration, this indicates that the estimated first three-dimensional head azimuth information is wrong but the detection time is still short. The first three-dimensional head azimuth information can then be taken as the three-dimensional head azimuth information at the current moment, and the method returns to steps S260-S280 to adjust it again and update the first three-dimensional head azimuth information. For example, with node "1" in Fig. 7 as the three-dimensional head azimuth information at the current moment, when searching outward in the order of nodes "1" to "5", the three-dimensional head position and head orientation of node "2", the node following node "1", can be determined as the adjusted information at the current moment, i.e., the updated first three-dimensional head azimuth information, so that the head azimuth information is again reasonably adjusted and predicted.
On the basis of the above technical solution, if at least two first two-dimensional binocular pupil azimuth information items at the current moment cannot be determined from the first three-dimensional head azimuth information, and the current detection duration is equal to the preset duration, the three-dimensional binocular pupil information determined at the previous moment is taken as the user's three-dimensional binocular pupil azimuth information at the current moment.
Specifically, if at least two first two-dimensional binocular pupil azimuth information items at the current moment cannot be determined from the first three-dimensional head azimuth information, and the current detection duration is equal to the preset duration, this indicates that the three-dimensional head azimuth information determined from user images is about to become available. The prediction can then be stopped, to avoid predicting indefinitely, and the three-dimensional binocular pupil information determined at the previous moment is used directly as the three-dimensional binocular pupil azimuth information at the current moment. Illustratively, when the current detection duration equals the preset duration, if only one first two-dimensional binocular pupil azimuth information item at the current moment has been determined, the depth distance can be computed from the two-dimensional binocular pupil azimuth information determined at the previous moment, and the three-dimensional binocular pupil azimuth information at the current moment can be computed fairly accurately from that depth distance and the single first two-dimensional binocular pupil azimuth information item at the current moment.
On the basis of the above technical solution, before calling the first camera to capture user images in the preset viewing area, the method further includes: determining the number of first placement positions for the first cameras according to the viewing angle range corresponding to the preset viewing area and a first preset orientation error of the first camera; and determining the number of first cameras at each first placement position according to the first field of view of the first camera.
The first preset orientation error may refer to the number of degrees the user's head rotates before a new first camera is matched, and can be preset according to business requirements and scenarios. Illustratively, if the first cameras are laid out in a circle, the first preset orientation error may be set to 60 degrees, i.e., for every 60 degrees of head rotation there is a new first camera capturing a frontal image of the user. The viewing angle range may refer to the viewing angle corresponding to the preset viewing area; for example, if the preset viewing area is a circular area, the corresponding viewing angle is 360 degrees. The first field of view of the first camera may refer to the angular range the first camera can shoot. The preset detection distance corresponding to the preset viewing area may be the maximum distance from a user in the preset viewing area to a first camera. The first depth of field of each first camera is greater than this preset detection distance, so that each first camera can capture sharp user images throughout the preset viewing area.
Specifically, in this embodiment, the number of first placement positions can be determined as the viewing angle range corresponding to the preset viewing area divided by the first preset orientation error of the first camera. The number of first cameras at each first placement position is then determined from the angular range required at that position and the first field of view of the first camera, so that the entire preset viewing area is covered. Illustratively, if the angular range required at a first placement position is 150 degrees and the first field of view of the first camera is 150 degrees, it can be determined that only one first camera needs to be installed at each first placement position.
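The two counts above reduce to simple division; this sketch restates the arithmetic of the worked example (the `ceil` rounding is an assumption for ranges that do not divide evenly):

```python
import math

def first_camera_layout(view_range_deg, orient_err_deg,
                        required_fov_deg, camera_fov_deg):
    """Placement positions = viewing angle range / first preset
    orientation error; cameras per position = angular range required
    at the position / one camera's field of view, rounded up."""
    positions = math.ceil(view_range_deg / orient_err_deg)
    per_position = math.ceil(required_fov_deg / camera_fov_deg)
    return positions, per_position
```

With the numbers in the text (360-degree circular area, 60-degree orientation error, a 150-degree requirement met by a 150-degree lens) this gives 6 positions with one first camera each.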
On the basis of the above technical solution, before calling the first camera to capture user images in the preset viewing area, the method further includes: determining the number of second placement positions for the second cameras according to the viewing angle range corresponding to the preset viewing area and a second preset orientation error of the second camera; determining the number of depth-of-field layers at each second placement position according to the second depth of field of the second camera and the preset detection distance corresponding to the preset viewing area; and determining the number of second cameras in each depth-of-field layer according to the second field of view of the second camera.
The second preset orientation error may refer to the number of degrees the user's head rotates before an optimal second camera is matched, and can be preset according to business requirements and scenarios. Illustratively, if the second cameras are laid out in a circle, the second preset orientation error may be set to 30 degrees, so that for every 30 degrees of head rotation there is an optimal second camera capturing a frontal image of the user, and the user's left and right eyes can be distinguished. The viewing angle range may refer to the viewing angle corresponding to the preset viewing area; for example, if the preset viewing area is a circular area, the corresponding viewing angle is 360 degrees. The preset detection distance corresponding to the preset viewing area may be the maximum distance from a user in the preset viewing area to a second camera. The second depth of field of the second camera may be less than the preset detection distance, so that a layout of at least two depth-of-field layers can be used, guaranteeing image resolution and improving contrast. The second field of view of the second camera may refer to the angular range the second camera can shoot, and can be obtained from the lens parameters of the second camera. In this embodiment, the first cameras are used to track the user's head, with lower requirements on facial detail and tracking speed, while the second cameras are used to track the eye positions, with higher requirements on tracking speed and positioning precision; hence the first preset orientation error can be set greater than or equal to the second preset orientation error, and the first field of view greater than the second field of view.
Specifically, in this embodiment the quantity of second placement positions can be determined by dividing the viewing angle range corresponding to the preset viewing area by the second preset orientation error of the second cameras. A suitable number of depth-of-field layers is then selected according to the second depth of field of the second cameras and the preset detection distance corresponding to the preset viewing area, so that the entire preset viewing area can be covered and sharp imaging is guaranteed at different shooting distances, which improves image resolution and contrast. Illustratively, this embodiment can use a three-layer depth-of-field layout (near, middle and far), where each layer places its second cameras side by side to enlarge the tracking coverage. The number of second cameras for each depth-of-field layer is determined according to the angular range required by that layer and the second visual angle of the second cameras, so that the entire preset viewing area is covered. Illustratively, if the required horizontal angular range of each layer is 150 degrees and the second visual angle of a second camera is 30 degrees, 6 second cameras can be determined for each depth-of-field layer to guarantee a 150-degree horizontal coverage, as shown in Fig. 8. In order to enlarge the vertical coverage angle, at least two second-camera groups can be stacked vertically at each second placement position, each group comprising second cameras of at least two different depth-of-field layers, so that the user can still be tracked when looking down or up. For example, with 3 depth-of-field layers at a second placement position, the 3 layers (forming one second-camera group) correspond to 18 second cameras; if two second-camera groups are stacked at each placement position, 36 second cameras need to be installed at each second placement position.
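The counting logic above can be sketched as follows. This is a minimal sketch using the example figures from this embodiment (a 360-degree viewing angle range, a 30-degree second preset orientation error, three depth-of-field layers, six cameras per layer and two stacked camera groups); the function name and structure are illustrative only:

```python
import math

def second_camera_counts(viewing_angle=360, orientation_error=30,
                         layers=3, cameras_per_layer=6, stacked_groups=2):
    """Sketch of the second-camera counting logic described above."""
    # Number of second placement positions: viewing angle range divided
    # by the second preset orientation error.
    positions = math.ceil(viewing_angle / orientation_error)
    # Cameras per placement position: layers x per-layer count x stacked groups.
    per_position = layers * cameras_per_layer * stacked_groups
    return positions, per_position, positions * per_position

positions, per_position, total = second_camera_counts()
print(positions, per_position, total)  # 12 36 432
```

With the embodiment's example values this reproduces the totals quoted later in the text: 12 second placement positions, 36 second cameras per position, and 432 second cameras overall.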
After installing the corresponding number of second cameras at each second placement position, this embodiment can store the azimuth configuration information of each second camera in a three-layer data table, which speeds up matching and searching among the second cameras and therefore speeds up determining the target second cameras. Illustratively, the three-layer data table may include a second-placement-position pointer table, multiple structure-pointer tables and multiple data-structure tables. Fig. 9 gives an example of such a three-layer data table for the second cameras. As shown in Fig. 9, the second-placement-position pointer table arranges the second placement positions in order along the layout direction (e.g. clockwise or counterclockwise). Each second placement position corresponds to one structure-pointer table, which stores the identification code cid of each second camera at that placement position. The data-structure tables store the azimuth configuration information struct camera_position of each second camera.
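A minimal Python sketch of such a three-layer lookup is given below; the field names `cid` and `camera_position` follow the text, while the position labels, camera identifiers and configuration values are invented for illustration:

```python
# Layer 1: placement-position pointer table, ordered along the layout
# direction (e.g. clockwise).
placement_positions = ["P1", "P2"]

# Layer 2: one structure-pointer table per placement position, storing
# the identification code cid of each second camera at that position.
structure_pointers = {"P1": ["cam01", "cam02"], "P2": ["cam03"]}

# Layer 3: one data-structure table per camera, storing its azimuth
# configuration (struct camera_position in the text).
camera_position = {
    "cam01": {"x": 0.0, "y": 1.0, "z": 2.0, "yaw_deg": 90.0},
    "cam02": {"x": 0.0, "y": 1.0, "z": 2.5, "yaw_deg": 90.0},
    "cam03": {"x": 3.0, "y": 1.0, "z": 2.0, "yaw_deg": 210.0},
}

def cameras_at(position):
    """Resolve a placement position to its cameras' azimuth configurations."""
    return {cid: camera_position[cid] for cid in structure_pointers[position]}

print(sorted(cameras_at("P1")))  # ['cam01', 'cam02']
```

The point of the indirection is that a match against a head orientation only has to walk the small position table first, then touch the per-camera records of a single position.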
The following is an embodiment of the human eye tracking apparatus provided by an embodiment of the present invention. This apparatus and the human eye tracking methods of the embodiments described above belong to the same inventive concept; for details not described in this apparatus embodiment, reference can be made to the embodiments of the human eye tracking method above.
Embodiment three
Figure 10 is a structural schematic diagram of a human eye tracking apparatus provided by Embodiment three of the present invention. This embodiment is applicable to tracking and locating the pupils of both eyes of a user watching a 3D display screen. The apparatus includes: a three-dimensional head azimuth information determining module 310, a target second camera determining module 320, a two-dimensional both-eye pupil azimuth information determining module 330 and a three-dimensional both-eye pupil azimuth information determining module 340.
The three-dimensional head azimuth information determining module 310 is configured to call a first camera to acquire user images in the preset viewing area, and to determine, according to the user images, the three-dimensional head azimuth information of each user in the preset viewing area. The target second camera determining module 320 is configured to determine a first preset quantity of target second cameras from multiple second cameras according to the three-dimensional head azimuth information. The two-dimensional both-eye pupil azimuth information determining module 330 is configured to call each target second camera to acquire face images of the user, and to determine the user's two-dimensional both-eye pupil azimuth information according to the face images. The three-dimensional both-eye pupil azimuth information determining module 340 is configured to determine the user's three-dimensional both-eye pupil azimuth information according to at least two pieces of two-dimensional both-eye pupil azimuth information.
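The patent does not spell out how at least two two-dimensional observations are combined into one three-dimensional pupil position. One standard option, shown here purely as an illustrative assumption, is ray triangulation: each target second camera defines a viewing ray toward the pupil, and the midpoint of the shortest segment between two such rays is taken as the three-dimensional position:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    o1/o2 are camera centers, d1/d2 are direction vectors toward the
    pupil. This is one standard way to fuse two 2D observations into a
    3D position; the patent itself does not specify the method.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def unit(u):
        n = dot(u, u) ** 0.5
        return tuple(a / n for a in u)

    d1, d2 = unit(d1), unit(d2)
    w = tuple(a - b for a, b in zip(o1, o2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(o + t1 * di for o, di in zip(o1, d1))
    p2 = tuple(o + t2 * di for o, di in zip(o2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))

# Two cameras at (-1,0,0) and (1,0,0), both seeing the point (0,0,2):
pupil = triangulate_midpoint((-1, 0, 0), (1, 0, 2), (1, 0, 0), (-1, 0, 2))
print(pupil)  # approximately (0.0, 0.0, 2.0)
```

With more than two target second cameras, the same closest-point idea generalizes to a least-squares intersection over all rays.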
Optionally, the target second camera determining module 320 includes:
a candidate second camera determining unit, configured to determine, according to the three-dimensional head azimuth information and the azimuth configuration information of each second camera, a second preset quantity of candidate second cameras corresponding to the user and the matching degree corresponding to each candidate second camera; and
a target second camera determining unit, configured to screen out a first preset quantity of target second cameras from the candidate second cameras according to each matching degree and the current call count of each candidate second camera, wherein the first preset quantity is smaller than the second preset quantity.
Optionally, the target second camera determining unit is specifically configured to: screen out, according to the current call count of each candidate second camera, the candidate second cameras whose current call count is smaller than or equal to a preset call count, as second cameras to be selected; sort the second cameras to be selected in descending order of their matching degrees; and determine the first preset quantity of them, after sorting, as the target second cameras.
Optionally, the three-dimensional both-eye pupil azimuth information determining module 340 is specifically configured to: if at least two pieces of two-dimensional both-eye pupil azimuth information corresponding to the user at the current moment cannot be determined from the face images, determine a spiral search rule according to the three-dimensional both-eye pupil azimuth information of historical moments; take the three-dimensional head azimuth information determined from the user images as the user's three-dimensional head azimuth information at the current moment; adjust the three-dimensional head azimuth information of the current moment according to the spiral search rule, and take the adjusted value as first three-dimensional head azimuth information; determine at least two pieces of first two-dimensional both-eye pupil azimuth information of the user at the current moment according to the first three-dimensional head azimuth information; and determine the user's three-dimensional both-eye pupil azimuth information according to the at least two pieces of first two-dimensional both-eye pupil azimuth information.
Optionally, the apparatus further includes:
a second three-dimensional head azimuth information determining module, configured to, after the user's three-dimensional both-eye pupil azimuth information is determined according to at least two pieces of two-dimensional both-eye pupil azimuth information, and before the three-dimensional head azimuth information of each user in the preset viewing area is determined according to the user images, estimate the user's second three-dimensional head azimuth information at the next moment according to the three-dimensional both-eye pupil azimuth information of the current moment and of historical moments; and
a three-dimensional both-eye pupil azimuth information estimating module, configured to estimate the user's three-dimensional both-eye pupil azimuth information at the next moment according to the second three-dimensional head azimuth information.
Optionally, the second three-dimensional head azimuth information includes a three-dimensional head position and a rotation angle; correspondingly, the user's second three-dimensional head azimuth information at the next moment is estimated according to the following formula:
wherein (Xp1, Yp1, Zp1) and α1 are the user's three-dimensional eye pupil position and gaze direction angle at the current moment P1; (Xp2, Yp2, Zp2) and α2 are the user's three-dimensional eye pupil position and gaze direction angle at the historical moment P2; (Xp3, Yp3, Zp3) and α3 are the user's three-dimensional eye pupil position and gaze direction angle at the historical moment P3; and (X, Y, Z) and α are the estimated three-dimensional head position and rotation angle of the user at the next moment.
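The formula itself appears only as a figure in the patent, so the sketch below illustrates just one plausible reading, stated purely as an assumption: with three equally spaced samples P3, P2, P1, a second-order (quadratic) forward extrapolation predicts the next value, applied component-wise to the position and to the angle α:

```python
def extrapolate(p1, p2, p3):
    """Quadratic forward extrapolation from three equally spaced samples.

    p1 is the newest sample (moment P1) and p3 the oldest (moment P3).
    The patent's actual formula is not reproduced in the text; this rule
    (next = 3*p1 - 3*p2 + p3) is an illustrative assumption only.
    """
    return tuple(3 * a - 3 * b + c for a, b, c in zip(p1, p2, p3))

# A head moving linearly (0,0,0) -> (1,0,0) -> (2,0,0) is predicted
# to continue to (3,0,0):
print(extrapolate((2, 0, 0), (1, 0, 0), (0, 0, 0)))  # (3, 0, 0)
```

Any such predictor lets the system pre-select target second cameras for the next frame instead of waiting for the next head detection, which is the stated purpose of the estimating module.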
Optionally, the apparatus further includes: a target face region determining module, configured to, after the first preset quantity of target second cameras is determined from the multiple second cameras, determine the position and size of the target face region in the face image acquired by each target second camera according to the three-dimensional head azimuth information.
Optionally, the two-dimensional both-eye pupil azimuth information determining module 330 is specifically configured to: determine, according to the position and size of the target face region, the time at which its data will arrive in scan-line order; receive, according to that time, the target face region data sent by the target second cameras; and determine the user's two-dimensional both-eye pupil azimuth information according to the received target face image data.
Optionally, the first cameras are color cameras or 3D cameras, the second cameras are black-and-white cameras, and illuminating infrared light sources are arranged at the installation positions of the second cameras.
Optionally, the apparatus further includes: a first placement position quantity determining module, configured to, before the first camera is called to acquire the user images in the preset viewing area, determine the quantity of first placement positions for the first cameras according to the viewing angle range corresponding to the preset viewing area and the first preset orientation error of the first cameras; and
a first camera quantity determining module, configured to determine the quantity of first cameras at each first placement position according to the first visual angle of the first cameras, wherein the first depth of field of each first camera is greater than the preset detection distance corresponding to the preset viewing area.
Optionally, the apparatus further includes: a second placement position quantity determining module, configured to, before the first camera is called to acquire the user images in the preset viewing area, determine the quantity of second placement positions for the second cameras according to the viewing angle range corresponding to the preset viewing area and the second preset orientation error of the second cameras;
a depth-of-field layer quantity determining module, configured to determine the quantity of depth-of-field layers at each second placement position according to the second depth of field of the second cameras and the preset detection distance corresponding to the preset viewing area, wherein the second depth of field is smaller than the preset detection distance; and
a second camera quantity determining module, configured to determine the quantity of second cameras for each depth-of-field layer according to the second visual angle of the second cameras.
The human eye tracking apparatus provided by this embodiment of the present invention can execute the human eye tracking method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to executing the human eye tracking method.
It is worth noting that, in the embodiment of the human eye tracking apparatus above, the units and modules are divided only according to functional logic, and the division is not limited thereto as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other, and are not intended to restrict the protection scope of the present invention.
Embodiment four
Figure 11 is a structural schematic diagram of a human eye tracking system provided by Embodiment four of the present invention. Referring to Figure 11, the system includes: a first camera 410, multiple second cameras 420 and a human eye tracking apparatus 430, wherein the human eye tracking apparatus 430 can be used to implement the human eye tracking method provided by any embodiment of the present invention.
The first camera 410 is used to acquire user images in the preset viewing area so as to track the user's head. The first camera 410 in this embodiment can be a 3D camera, or multiple 2D cameras. Since the first camera 410 is responsible for tracking the user's head, its requirements on facial detail and tracking speed are not high, so a first camera 410 with a large visual angle can be selected. The second cameras 420 are used to acquire face images of the user so as to realize high-speed eye tracking. The second cameras 420 in this embodiment can be 2D cameras. Illustratively, the first camera 410 can be a high-definition color camera, and the second cameras 420 can be black-and-white cameras. The resolution of the first camera in this embodiment can be greater than that of the second cameras. By using low-resolution second cameras, this embodiment can improve calculation speed on the premise of guaranteeing calculation accuracy.
In this embodiment the layout requirements of the first camera 410 can be set as, but are not limited to: (1) the depth of field of the first camera 410 is greater than the preset detection distance corresponding to the preset viewing area, so that only one depth of field needs to be used within the preset detection distance, i.e. the depth of field of each first camera is the same; (2) the area occupied by the head region in a user image acquired by the first camera 410 is greater than the accuracy requirement, e.g. 60 × 60 pixels, so that the captured head region is sufficiently clear; (3) a sufficient quantity of first cameras is set according to the first preset orientation error, so that the entire preset viewing area can be covered, wherein the first preset orientation error can be set to 60 degrees.
In this embodiment the first cameras can be laid out based on the layout requirements of the first camera 410 and the preset viewing area. Specifically, at least one first camera is arranged at each first placement position in the preset viewing area, and the total detection area corresponding to all first cameras is the preset viewing area, that is, the total coverage of the first cameras covers the preset viewing area. Illustratively, when the preset viewing area is the interior of a circle, the first placement positions can be distributed on the corresponding circle, at least one first camera is arranged at each first placement position, and the shooting direction of each first camera faces the center of the circle. The distances between adjacent first placement positions can be equal, i.e. the first placement positions are evenly distributed on the closed shape corresponding to the preset viewing area, or they can differ within a preset allowable error range. For example, Figure 12 gives a layout example of the first cameras when the preset viewing area is the interior of a circle. The dotted lines in Figure 12 indicate the optical axes of the first cameras, and the solid lines on both sides of each dotted line indicate the angular field of view of that first camera. As shown in Figure 12, 6 first placement positions are evenly distributed on the circle and one first camera is arranged at each first placement position; that is, the entire circular interior area can be covered by arranging 6 first cameras, so that user images can be acquired at every position in the circular interior area. This layout of first cameras can be called the circular inward type.
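The circular inward layout can be sketched as follows. This is a minimal sketch: the circle radius is an arbitrary illustrative value, and only the placement geometry from the text (even spacing, shooting directions toward the center) is modeled:

```python
import math

def inward_layout(n=6, radius=5.0):
    """Place n first cameras evenly on a circle, each aimed at the center.

    Returns (position, unit shooting direction) pairs in the plane.
    The embodiment's example uses 6 first placement positions; the
    radius here is purely illustrative.
    """
    cameras = []
    for k in range(n):
        theta = 2 * math.pi * k / n
        pos = (radius * math.cos(theta), radius * math.sin(theta))
        # Shooting direction points from the camera toward the center.
        direction = (-math.cos(theta), -math.sin(theta))
        cameras.append((pos, direction))
    return cameras

cams = inward_layout()
print(cams[0])  # camera 0 sits at (5.0, 0.0) and aims toward the center
```

The outward type of the next paragraph is the same placement with the shooting directions negated, and the planar type replaces the circle with evenly spaced points on a line.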
Illustratively, when the preset viewing area is the exterior of a circle, the first placement positions can be distributed on the corresponding circle, at least one first camera is arranged at each first placement position, and the shooting direction of each first camera faces away from the center of the circle. The distances between adjacent first placement positions can be equal, i.e. the first placement positions are evenly distributed on the closed shape corresponding to the preset viewing area, or they can differ within a preset allowable error range. For example, Figure 13 gives a layout example of the first cameras when the preset viewing area is the exterior of a circle. The dotted lines in Figure 13 indicate the optical axes of the first cameras, and the solid lines on both sides of each dotted line indicate the angular field of view of that first camera. As shown in Figure 13, 6 first placement positions are evenly distributed on the circle and one first camera is arranged at each first placement position; that is, the entire circular exterior area can be covered by arranging 6 first cameras, so that user images in the circular exterior area can be acquired. This layout of first cameras can be called the circular outward type.
Illustratively, when the preset viewing area is a one-sided straight-line viewing area, such as the viewing area of a cinema, the first placement positions can be distributed on the straight line of the viewing area, at least one first camera is arranged at each first placement position, and each first camera shoots toward the viewing area. The distances between adjacent first placement positions can be equal, i.e. the first placement positions are evenly distributed on the straight line, or they can differ within a preset allowable error range. For example, Figure 14 gives a layout example of the first cameras when the preset viewing area is a one-sided straight-line viewing area. The dotted lines in Figure 14 indicate the optical axes of the first cameras, and the solid lines on both sides of each dotted line indicate the angular field of view of that first camera. As shown in Figure 14, 3 first placement positions are evenly distributed on the straight line, one first camera is arranged at each first placement position, and the three first cameras can shoot toward the viewing area in fan-shaped directions so as to cover the entire one-sided straight-line viewing area. This layout of first cameras can be called the planar type.
In this embodiment the layout requirements of the second cameras 420 can be set as, but are not limited to: (1) at least two depth-of-field layers are used at each second placement position, to improve image resolution and contrast; (2) the frame rate of a second camera is greater than 60 frames per second; (3) the area occupied by the face region in a face image acquired by a second camera is greater than the accuracy requirement, e.g. 100 × 100 pixels, to improve the calculation accuracy of the two-dimensional both-eye pupil azimuth information; (4) the shortest distance d within the depth-of-field intersection region of two adjacent second cameras at each second placement position (the shaded region in Figure 15) is greater than the distance between the two pupils, to guarantee that face images containing both the left and right pupils can be acquired simultaneously, where d can be set to 6.5 cm; (5) a sufficient quantity of second placement positions is set according to the second preset orientation error, so that the entire preset viewing area can be covered, wherein the second preset orientation error can be set to 30 degrees; (6) when the second cameras are black-and-white cameras, an illuminating infrared light source is arranged at the center of each second placement position for fill lighting.
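Requirement (4) can be sketched under a simplifying assumption that two adjacent second cameras have parallel optical axes separated by a baseline: each then images a strip of width 2·z·tan(θ) at depth z, where θ is half the second visual angle, so the shared strip is roughly 2·z·tan(θ) minus the baseline, and this width should stay above the inter-pupil distance throughout the shared depth of field. The geometry and numbers below are illustrative assumptions, not the patent's derivation:

```python
import math

def overlap_width(z, baseline, half_angle_deg):
    """Width of the shared field of two parallel-axis cameras at depth z.

    Simplified pinhole model: each camera covers a strip of width
    2*z*tan(half_angle); the strips overlap by that width minus the
    baseline between the cameras.
    """
    return 2 * z * math.tan(math.radians(half_angle_deg)) - baseline

def wide_enough(z_near, baseline, half_angle_deg, pupil_dist=0.065):
    # The overlap narrows toward the cameras, so checking the near edge
    # of the shared depth of field suffices.
    return overlap_width(z_near, baseline, half_angle_deg) >= pupil_dist

# 15-degree half angle (30-degree second visual angle), 10 cm baseline,
# shared depth of field starting at 1 m:
print(wide_enough(1.0, 0.10, 15))  # True
```

Under these assumed numbers the overlap at 1 m is about 0.44 m, comfortably wider than the 6.5 cm inter-pupil distance, whereas at 0.2 m it would shrink below that threshold.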
Illustratively, at least one second-camera group can be arranged at each second placement position. Each second-camera group includes at least two layers of second cameras; within one layer the multiple second cameras share the same depth of field, and the depths of field differ between layers. As shown in Figure 8, when the detection range is 1 to 5 meters from the second cameras, low-resolution (640 × 480) high-speed (greater than 90 frames per second) black-and-white cameras can be used as the second cameras. When the second visual angle of a second camera is 30 degrees, each second-camera group may include second cameras of three different focal lengths, to guarantee sharp imaging of users at different distances, and placing 6 cameras side by side gives a maximum horizontal coverage of 150 degrees.
In this embodiment the second cameras can be laid out based on the layout requirements of the second cameras 420 and the preset viewing area. Specifically, at least one second camera is arranged at each second placement position in the preset viewing area, and the total detection area corresponding to all second cameras is the preset viewing area, that is, the total coverage of the second cameras covers the preset viewing area. Illustratively, when the preset viewing area is the interior of a circle, the second placement positions can be distributed on the corresponding circle, at least one second camera is arranged at each second placement position, and the shooting direction of each second camera faces the center of the circle. The distances between adjacent second placement positions can be equal, i.e. the second placement positions are evenly distributed on the closed shape corresponding to the preset viewing area, or they can differ within a preset allowable error range. For example, Figure 16 gives a layout example of the second cameras when the preset viewing area is the interior of a circle. As shown in Figure 16, 12 second placement positions are evenly distributed on the circle. If each second-camera group is laid out in the manner of Fig. 8, 18 second cameras can be arranged horizontally at each second placement position, i.e. each second-camera group includes 18 second cameras, to guarantee a 150-degree horizontal shooting angle at each second placement position; the total number of second cameras needed is then 12 × 18 = 216. To enlarge the vertical coverage angle, this embodiment can also stack at least two second-camera groups at each second placement position, so that the user can still be tracked when looking down or up; in that case at least 12 × 18 × 2 = 432 second cameras should be arranged in total, so that for every 30 degrees of head rotation there is one optimal target second camera shooting a frontal image of the user, which reduces occlusion and improves tracking accuracy. This layout of second cameras can be called the circular inward type.
Illustratively, when the preset viewing area is the exterior of a circle, the second placement positions can be distributed on the corresponding closed shape, at least one second camera is arranged at each second placement position, and the shooting direction of each second camera faces away from the center of the circle. The distances between adjacent second placement positions can be equal, i.e. the second placement positions are evenly distributed on the closed shape corresponding to the preset viewing area, or they can differ within a preset allowable error range. For example, Figure 17 gives a layout example of the second cameras when the preset viewing area is the exterior of a circle. As shown in Figure 17, 12 second placement positions 1-12 are evenly distributed on the circle, and 18 second cameras are arranged horizontally at each second placement position, to guarantee a 150-degree horizontal shooting angle at each second placement position.
Illustratively, when the preset viewing area is a one-sided straight-line viewing area, such as the viewing area of a cinema, the second placement positions can be distributed on the straight line of the viewing area, at least one second camera is arranged at each second placement position, and each second camera shoots toward the viewing area. The distances between adjacent second placement positions can be equal, i.e. the second placement positions are evenly distributed on the straight line, or they can differ within a preset allowable error range. For example, Figure 18 gives a layout example of the second cameras when the preset viewing area is a one-sided straight-line viewing area. As shown in Figure 18, 3 second placement positions are evenly distributed on the straight line. If each second-camera group is laid out in the manner of Fig. 8, 18 second cameras are arranged at each second placement position, to guarantee a 150-degree horizontal shooting angle at each second placement position, and the shooting directions of the three second placement positions can form fan shapes so as to cover the entire one-sided straight-line viewing area horizontally. This layout of second cameras can be called the planar type.
The working process of the human eye tracking system in this embodiment is as follows: the human eye tracking apparatus 430 calls the first camera 410, so that the first camera 410 acquires user images in the preset viewing area and transmits them to the human eye tracking apparatus 430; the human eye tracking apparatus 430 determines the three-dimensional head azimuth information of each user in the preset viewing area according to the user images, determines a first preset quantity of target second cameras from the multiple second cameras according to the three-dimensional head azimuth information, and calls each target second camera, so that each target second camera acquires face images of the user and transmits them to the human eye tracking apparatus 430; the human eye tracking apparatus 430 then determines the user's two-dimensional both-eye pupil azimuth information according to the face images, and determines the user's three-dimensional both-eye pupil azimuth information according to at least two pieces of two-dimensional both-eye pupil azimuth information, thereby realizing high-speed tracking of the user's pupils while improving calculation accuracy.
In the human eye tracking system provided by this embodiment, head recognition and high-speed eye tracking are realized by the first camera and the second cameras respectively, and the second cameras are scheduled and managed by the human eye tracking apparatus 430, so that high-speed tracking of the user's pupils can be realized while calculation accuracy is improved.
On the basis of the above technical solutions, the human eye tracking apparatus 430 can be integrated on one server to realize the human eye tracking method provided by any embodiment of the present invention; alternatively, a first client, multiple second clients and a central server can be used to realize the human eye tracking method provided by any embodiment of the present invention. Figure 19 gives a structural schematic diagram of another human eye tracking system provided by this embodiment. As shown in Figure 19, the system includes: a first camera 410, multiple second cameras 420, a first client 440, multiple second clients 450 and a central server 460.
The first camera 410 is connected with the first client 440, and is used to acquire user images in the preset viewing area and send them to the first client 440. The first client 440 is connected with the central server 460, and is used to determine the three-dimensional head azimuth information of each user in the preset viewing area according to the user images sent by the first camera 410, and to send the three-dimensional head azimuth information to the central server 460. Each second camera 420 is connected with a corresponding second client 450, and is used to acquire face images of the user and send them to the corresponding second client 450. Each second client 450 is connected with the central server 460, and is used to determine the user's two-dimensional both-eye pupil azimuth information according to the face images and send it to the central server 460. The central server 460 is used to determine a first preset quantity of target second cameras from the multiple second cameras according to the three-dimensional head azimuth information sent by the first client 440, call the second clients connected with the target second cameras, obtain the two-dimensional both-eye pupil azimuth information sent by each called second client, and determine the user's three-dimensional both-eye pupil azimuth information according to at least two pieces of two-dimensional both-eye pupil azimuth information.
It should be noted that the first client can be but be not limited to the high PC of performance (Personal Computer, it is a People's computer).Second client can be but be not limited to embedded computer, to improve loudness speed.In this present embodiment The quantity of one camera 410 can be one, or it is multiple, to improve head-tracking range, and improve head-tracking essence Degree.When there are multiple first cameras 410, each first camera 410 is connected with the first client 440, so that the One client 440 carries out image procossing to the user images that each first camera 410 acquires, and more accurately determines default see See the three-dimensional head azimuth information of each user in region.
Specifically, the workflow of the human eye tracking system shown in Figure 19 is as follows: the first client 440 invokes the first camera 410, so that the first camera 410 captures user images in the preset viewing area and sends them to the first client 440; the first client 440 determines, according to these user images, the three-dimensional head azimuth information corresponding to each user in the preset viewing area, and sends it to the central server 460; the central server 460 determines, according to the three-dimensional head azimuth information sent by the first client 440, a first preset number of target second cameras from the multiple second cameras 420, and invokes the second clients connected to each target second camera; the second client 450 connected to each target second camera invokes the corresponding target second camera, so that each target second camera captures a face image of the user and sends it to the corresponding second client 450; the second client 450 determines the two-dimensional binocular pupil azimuth information of the user according to each received face image and sends it to the central server 460; the central server 460 determines the three-dimensional binocular pupil azimuth information of the user according to at least two pieces of two-dimensional binocular pupil azimuth information. The central server 460 may also be connected to the 3D display driver, so that the user's three-dimensional binocular pupil azimuth information is fed into the 3D display driver, allowing the 3D display to determine the corresponding display data according to the three-dimensional binocular pupil azimuth information, so that the user can watch the corresponding three-dimensional picture. In this embodiment, the first client, the second clients and the central server are each responsible for one of the three stages of the eye-tracking process: the first client processes the user images, the second clients process the face images, and the central server matches and schedules the second cameras and computes the user's three-dimensional binocular pupil azimuth information. The system therefore runs faster and processes data more efficiently, which further improves the eye-tracking speed.
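As a rough illustration of the division of labor described above (not the patent's actual implementation — all class and method names here are hypothetical stand-ins), the three-stage pipeline could be sketched as:

```python
# Hypothetical sketch of the three-stage pipeline described above.
# first_client, central_server and second_clients stand in for the patent's
# first client 440, central server 460 and second clients 450.

def run_tracking_cycle(first_client, central_server, second_clients):
    """One eye-tracking cycle: head localization -> camera scheduling -> pupil fusion."""
    # Stage 1: the first client turns wide-area user images into 3D head positions.
    user_images = first_client.capture_user_images()
    head_info = first_client.estimate_head_positions(user_images)

    # Stage 2: the central server picks the best-matching second cameras per head.
    targets = central_server.select_target_cameras(head_info)

    # Stage 3: the second clients attached to those cameras extract 2D pupil positions.
    pupil_2d = [second_clients[t].detect_pupils(t) for t in targets]

    # The central server fuses >= 2 two-dimensional observations into 3D pupil positions.
    return central_server.triangulate_pupils(pupil_2d)
```

Each stage can run on its own machine, which is the source of the speed-up the embodiment claims.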
Embodiment five
Figure 20 is a structural schematic diagram of a device provided by Embodiment 5 of the present invention. Referring to Figure 20, the device includes:
One or more processors 510;
Memory 520, for storing one or more programs;
Input device 530, for capturing images;
Output device 540, for displaying picture information.
When the one or more programs are executed by the one or more processors 510, the one or more processors 510 implement the human eye tracking method proposed in any of the above embodiments.
In Figure 20, one processor 510 is taken as an example. The processor 510, memory 520, input device 530 and output device 540 in the device may be connected by a bus or in other ways; in Figure 20, connection by a bus is taken as the example.
As a computer-readable storage medium, the memory 520 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the human eye tracking method in the embodiments of the present invention (for example, the three-dimensional head azimuth information determining module 310, the target second-camera determining module 320, the two-dimensional binocular pupil azimuth information determining module 330 and the three-dimensional binocular pupil azimuth information determining module 340 in the human eye tracking apparatus). By running the software programs, instructions and modules stored in the memory 520, the processor 510 executes the various functional applications and data processing of the device, thereby implementing the above human eye tracking method.
The memory 520 mainly includes a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the device, etc. In addition, the memory 520 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 520 may further include memory located remotely relative to the processor 510, and these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The input device 530 may include capture devices such as cameras, for capturing user images and face images and feeding the captured user images and face images to the processor 510 for data processing.
The output device 540 may include display devices such as a display screen, for displaying picture information.
The device proposed in this embodiment and the human eye tracking method proposed in the above embodiments belong to the same inventive concept. For technical details not described in detail in this embodiment, refer to the above embodiments; this embodiment provides the same beneficial effects as executing the human eye tracking method.
Embodiment six
This embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the human eye tracking method provided in any embodiment of the present invention.
The computer storage medium of the embodiments of the present invention may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by or in combination with an instruction execution system, apparatus or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including but not limited to electromagnetic signals, optical signals or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device.
The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Those skilled in the art will appreciate that each module or step of the above invention may be implemented with general-purpose computing devices; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; alternatively, they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. Thus the present invention is not limited to any specific combination of hardware and software.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here; various obvious changes, readjustments and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in considerable detail through the above embodiments, it is not limited to the above embodiments and may include more other equivalent embodiments without departing from the inventive concept; the scope of the invention is determined by the scope of the appended claims.

Claims (21)

1. A human eye tracking method, characterized by comprising:
invoking a first camera to capture user images in a preset viewing area, and determining, according to the user images, the three-dimensional head azimuth information corresponding to each user in the preset viewing area;
determining, according to the three-dimensional head azimuth information, a first preset number of target second cameras from multiple second cameras;
invoking each target second camera to capture face images of the user, and determining the two-dimensional binocular pupil azimuth information of the user according to each face image;
determining the three-dimensional binocular pupil azimuth information of the user according to at least two pieces of the two-dimensional binocular pupil azimuth information.
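The final step of claim 1, recovering a 3D pupil position from at least two 2D observations in calibrated cameras, is commonly done by linear (DLT) triangulation. The patent does not spell out its fusion method, so the following NumPy sketch is a generic illustration, not the claimed algorithm:

```python
import numpy as np

def triangulate(projections, points_2d):
    """Linear (DLT) triangulation of one 3D point from >= 2 calibrated views.

    projections: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) observations, one per camera.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With two or more second cameras observing the same pupil, this returns its position in the common world frame.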
2. The method according to claim 1, characterized in that determining a first preset number of target second cameras from multiple second cameras according to the three-dimensional head azimuth information comprises:
determining, according to the three-dimensional head azimuth information and the azimuth configuration information of each second camera, a second preset number of candidate second cameras corresponding to the user and a matching degree corresponding to each candidate second camera;
screening out a first preset number of target second cameras from the candidate second cameras according to each matching degree and the current invocation count corresponding to each candidate second camera;
wherein the first preset number is less than or equal to the second preset number.
3. The method according to claim 2, characterized in that screening out a first preset number of target second cameras from the candidate second cameras according to each matching degree and the current invocation count corresponding to each candidate second camera comprises:
screening out, according to the current invocation count corresponding to each candidate second camera, the candidate second cameras whose current invocation count is less than or equal to a preset invocation count, as second cameras to be selected;
sorting the second cameras to be selected in descending order of their corresponding matching degrees, and determining the first preset number of second cameras to be selected after sorting as the target second cameras.
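The screening rule of claims 2-3 — drop candidates that are already invoked too often, then keep the best matches — can be sketched in plain Python. The tuple layout and threshold values here are illustrative, not taken from the patent:

```python
def select_target_cameras(candidates, first_preset_number, max_invocations):
    """Pick target second cameras per claims 2-3 (illustrative sketch).

    candidates: list of (camera_id, matching_degree, current_invocation_count).
    """
    # Step 1: discard candidates whose current invocation count exceeds the preset
    # limit, so no second camera is oversubscribed across users.
    eligible = [c for c in candidates if c[2] <= max_invocations]
    # Step 2: sort the remainder by matching degree, descending.
    eligible.sort(key=lambda c: c[1], reverse=True)
    # Step 3: the first preset number of cameras after sorting become the targets.
    return [c[0] for c in eligible[:first_preset_number]]
```

The invocation-count filter is what lets the central server share a limited pool of second cameras among several users.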
4. The method according to claim 2, characterized in that determining the three-dimensional binocular pupil azimuth information of the user according to at least two pieces of the two-dimensional binocular pupil azimuth information comprises:
if at least two pieces of two-dimensional binocular pupil azimuth information corresponding to the user at the current time cannot be determined according to the face images, determining a spiral search rule according to the three-dimensional binocular pupil azimuth information of historical times;
taking the three-dimensional head azimuth information determined according to the user images as the three-dimensional head azimuth information of the user at the current time;
adjusting the three-dimensional head azimuth information of the current time according to the spiral search rule, and taking the adjusted three-dimensional head azimuth information of the current time as first three-dimensional head azimuth information;
determining at least two pieces of first two-dimensional binocular pupil azimuth information of the user at the current time according to the first three-dimensional head azimuth information;
determining the three-dimensional binocular pupil azimuth information of the user according to the at least two pieces of first two-dimensional binocular pupil azimuth information.
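Claim 4 does not define the spiral search rule in detail. A common reading is to probe candidate head positions on an expanding spiral around the last known position until the pupils are found again; the following 2D offset generator is a hypothetical sketch of that idea:

```python
import math

def spiral_offsets(step, turns, points_per_turn=8):
    """Yield (dx, dy) probe offsets on an expanding (Archimedean) spiral.

    step: radial growth per full turn; turns: number of turns to search.
    Offsets start near the origin and spiral outward, so positions close to
    the last known head estimate are probed first.
    """
    total = turns * points_per_turn
    for i in range(1, total + 1):
        angle = 2 * math.pi * i / points_per_turn
        radius = step * i / points_per_turn  # radius grows linearly with angle
        yield (radius * math.cos(angle), radius * math.sin(angle))
```

Each offset would be added to the current three-dimensional head estimate before re-running pupil detection, stopping at the first offset where both pupils are detected.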
5. The method according to claim 1, characterized in that after determining the three-dimensional binocular pupil azimuth information of the user according to at least two pieces of the two-dimensional binocular pupil azimuth information, and before determining the three-dimensional head azimuth information corresponding to each user in the preset viewing area according to the user images, the method further comprises:
estimating second three-dimensional head azimuth information of the user at the next time according to the three-dimensional binocular pupil azimuth information of the user at the current time and the three-dimensional binocular pupil azimuth information of historical times;
estimating the three-dimensional binocular pupil azimuth information of the user at the next time according to the second three-dimensional head azimuth information.
6. The method according to claim 5, characterized in that the second three-dimensional head azimuth information includes a three-dimensional head position and a rotation angle;
correspondingly, the second three-dimensional head azimuth information of the user at the next time is estimated according to the following formula:
wherein (Xp1, Yp1, Zp1) and α1 are the three-dimensional pupil position and gaze-direction angle of the user at the current time P1; (Xp2, Yp2, Zp2) and α2 are the three-dimensional pupil position and gaze-direction angle of the user at the historical time P2; (Xp3, Yp3, Zp3) and α3 are the three-dimensional pupil position and gaze-direction angle of the user at the historical time P3; and (X, Y, Z) and α are the estimated three-dimensional head position and rotation angle of the user at the next time.
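The formula image itself is not reproduced in this text. One plausible reading, given three equally spaced samples P3, P2, P1 (oldest to newest), is quadratic extrapolation to the next instant; this is an assumption for illustration, not the patent's stated formula:

```python
def extrapolate_next(p1, p2, p3):
    """Quadratic extrapolation of the next sample from three equally spaced ones.

    p1: current sample; p2, p3: progressively older samples.
    Works component-wise on position tuples or on a scalar angle.
    For equally spaced samples, the quadratic through them gives
        next = 3*p1 - 3*p2 + p3.
    (Assumed model; the patent's actual formula is not shown here.)
    """
    if isinstance(p1, (int, float)):
        return 3 * p1 - 3 * p2 + p3
    return tuple(3 * a - 3 * b + c for a, b, c in zip(p1, p2, p3))
```

Applied to (Xp1, Yp1, Zp1)/(Xp2, ...)/(Xp3, ...) and to α1/α2/α3, this yields a prediction (X, Y, Z) and α for the next time, which is what the claim uses to pre-position the second cameras.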
7. The method according to claim 1, characterized in that after determining a first preset number of target second cameras from the multiple second cameras, the method further comprises:
determining, according to the three-dimensional head azimuth information, the position and size of the target face region in the face image captured by each target second camera.
8. The method according to claim 7, characterized in that determining the two-dimensional binocular pupil azimuth information of the user according to each face image comprises:
determining, according to the position and size of the target face region, the time at which data are received by scan lines; receiving, according to that time, the target face region data sent by the target second camera; and determining the two-dimensional binocular pupil azimuth information of the user based on the received target face region data.
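Claims 7-8 reduce bandwidth by transferring only the scan lines covering the predicted face region. The timing calculation below is a hedged sketch for a sensor read out line by line; the coordinate and timing conventions are assumed, not taken from the patent:

```python
def scanline_window(face_top, face_height, line_period_us, readout_start_us=0):
    """Return (first_line, last_line, t_start_us, t_end_us) for a sensor read
    out line by line, so the receiver knows when the face-region lines arrive.

    face_top, face_height: predicted face region, in sensor rows.
    line_period_us: time to read out one scan line, in microseconds.
    """
    first_line = face_top
    last_line = face_top + face_height - 1
    # Line k finishes transmitting (k + 1) line periods after readout starts.
    t_start = readout_start_us + first_line * line_period_us
    t_end = readout_start_us + (last_line + 1) * line_period_us
    return first_line, last_line, t_start, t_end
```

The second client would listen only during [t_start, t_end], then run pupil detection on just those rows.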
9. The method according to any one of claims 1-8, characterized in that the first camera is a color camera or a 3D camera, the second camera is a black-and-white camera, and an illuminating infrared light source is arranged at the installation position of the second camera.
10. The method according to claim 1, characterized in that before invoking the first camera to capture user images in the preset viewing area, the method further comprises:
determining the number of first placement positions corresponding to the first cameras according to the viewing angle range corresponding to the preset viewing area and the first preset pointing error corresponding to the first cameras;
determining the number of first cameras corresponding to each first placement position according to the first field of view of the first cameras, wherein the first depth of field of each first camera is greater than the preset detection distance corresponding to the preset viewing area.
11. The method according to claim 1, characterized in that before invoking the first camera to capture user images in the preset viewing area, the method further comprises:
determining the number of second placement positions corresponding to the second cameras according to the viewing angle range corresponding to the preset viewing area and the second preset pointing error corresponding to the second cameras;
determining the number of depth-of-field layers corresponding to each second placement position according to the second depth of field of the second cameras and the preset detection distance corresponding to the preset viewing area, wherein the second depth of field is less than the preset detection distance;
determining the number of second cameras corresponding to each depth-of-field layer according to the second field of view of the second cameras.
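Claims 10-11 size the camera installation from the viewing-angle range, per-camera field of view, pointing error, and depth of field, but the exact sizing rules are not given. A simple illustrative calculation (the coverage and layering rules here are assumptions) might look like:

```python
import math

def plan_cameras(viewing_angle_deg, pointing_error_deg, camera_fov_deg,
                 detect_distance, camera_dof):
    """Illustrative sizing per claims 10-11 (rules assumed, not from the patent).

    Returns (placement_positions, depth_of_field_layers).
    """
    # One placement position per usable angular sector: the nominal field of
    # view shrunk by the preset pointing error on both sides.
    usable_sector = camera_fov_deg - 2 * pointing_error_deg
    positions = math.ceil(viewing_angle_deg / usable_sector)
    # Stack depth-of-field layers until the preset detection distance is
    # covered (claim 11 requires the depth of field < detection distance).
    dof_layers = math.ceil(detect_distance / camera_dof)
    return positions, dof_layers
```

For example, a 120° viewing area with 50° cameras, a 5° pointing error, a 6 m detection distance and a 2.5 m depth of field would need 3 placement positions with 3 depth-of-field layers each.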
12. A human eye tracking apparatus, characterized by comprising:
a three-dimensional head azimuth information determining module, configured to invoke a first camera to capture user images in a preset viewing area and to determine, according to the user images, the three-dimensional head azimuth information corresponding to each user in the preset viewing area;
a target second-camera determining module, configured to determine, according to the three-dimensional head azimuth information, a first preset number of target second cameras from multiple second cameras;
a two-dimensional binocular pupil azimuth information determining module, configured to invoke each target second camera to capture face images of the user and to determine the two-dimensional binocular pupil azimuth information of the user according to each face image;
a three-dimensional binocular pupil azimuth information determining module, configured to determine the three-dimensional binocular pupil azimuth information of the user according to at least two pieces of the two-dimensional binocular pupil azimuth information.
13. A human eye tracking system, characterized in that the system comprises: a first camera, multiple second cameras, and a human eye tracking apparatus, wherein the human eye tracking apparatus is configured to implement the human eye tracking method according to any one of claims 1-11.
14. The system according to claim 13, characterized in that the first camera is a color camera or a 3D camera, and the second camera is a black-and-white camera.
15. The system according to claim 13, characterized in that at least one first camera is arranged at each first placement position in the preset viewing area, and the total detection area corresponding to all the first cameras is the preset viewing area;
at least one second camera is arranged at each second placement position in the preset viewing area, and the total detection area corresponding to all the second cameras is the preset viewing area.
16. The system according to claim 15, characterized in that an illuminating infrared light source is arranged at the center of each second placement position.
17. The system according to any one of claims 13-16, characterized in that the depth of field of each first camera is greater than the preset detection distance corresponding to the preset viewing area.
18. The system according to claim 15, characterized in that at least one second camera group is arranged at each second placement position; the second camera group includes at least two layers of second cameras, each layer of second cameras being multiple second cameras with the same depth of field, and second cameras in different layers having different depths of field.
19. The system according to claim 15, characterized in that at each second placement position, the shortest distance in the depth-of-field intersection region of two adjacent second cameras is greater than the distance between the two pupils.
20. A device, characterized in that the device comprises:
one or more processors;
a memory, for storing one or more programs;
an input device, for capturing images;
an output device, for displaying picture information;
when the one or more programs are executed by the one or more processors, the one or more processors implement the human eye tracking method according to any one of claims 1-11.
21. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the program implements the human eye tracking method according to any one of claims 1-11.
CN201910438457.3A 2019-05-24 2019-05-24 Human eye tracking method, device, system, equipment and storage medium Active CN110263657B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910438457.3A CN110263657B (en) 2019-05-24 2019-05-24 Human eye tracking method, device, system, equipment and storage medium
PCT/CN2019/106701 WO2020237921A1 (en) 2019-05-24 2019-09-19 Eye tracking method, apparatus and system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910438457.3A CN110263657B (en) 2019-05-24 2019-05-24 Human eye tracking method, device, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110263657A true CN110263657A (en) 2019-09-20
CN110263657B CN110263657B (en) 2023-04-18

Family

ID=67915324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910438457.3A Active CN110263657B (en) 2019-05-24 2019-05-24 Human eye tracking method, device, system, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110263657B (en)
WO (1) WO2020237921A1 (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068342A (en) * 2007-06-05 2007-11-07 西安理工大学 Video frequency motion target close-up trace monitoring method based on double-camera head linkage structure
US20090196460A1 (en) * 2008-01-17 2009-08-06 Thomas Jakobs Eye tracking system and method
CN103324284A (en) * 2013-05-24 2013-09-25 重庆大学 Mouse control method based on face and eye detection
US20150169054A1 (en) * 2011-11-02 2015-06-18 Google Inc. Imaging Method
CN105930821A (en) * 2016-05-10 2016-09-07 上海青研信息技术有限公司 Method for identifying and tracking human eye and apparatus for applying same to naked eye 3D display
WO2016142489A1 (en) * 2015-03-11 2016-09-15 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Eye tracking using a depth sensor
CN107609516A (en) * 2017-09-13 2018-01-19 重庆爱威视科技有限公司 Adaptive eye moves method for tracing
US20180052515A1 (en) * 2015-03-13 2018-02-22 Sensomotoric Instruments Gesellschaft Für Innovati Ve Sensorik Mbh Method for Operating an Eye Tracking Device for Multi-User Eye Tracking and Eye Tracking Device
CN109598253A (en) * 2018-12-14 2019-04-09 北京工业大学 Mankind's eye movement measuring method based on visible light source and camera
CN109688403A (en) * 2019-01-25 2019-04-26 广州杏雨信息科技有限公司 One kind being applied to perform the operation indoor naked eye 3D human eye method for tracing and its equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HOPF K, et al.: "Multi-user eye tracking suitable for 3D display applications", 3DTV-CON *
ZHANG Taining et al.: "Human eye gaze estimation based on dark-pupil images", Acta Physica Sinica *
SHEN Xiaoquan: "Research on collaborative eye-tracking technology and its interactive applications", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929638A (en) * 2019-12-05 2021-06-08 北京芯海视界三维科技有限公司 Eye positioning method and device, multi-view naked eye 3D display method and equipment
CN112929638B (en) * 2019-12-05 2023-12-15 北京芯海视界三维科技有限公司 Eye positioning method and device and multi-view naked eye 3D display method and device
CN113132643A (en) * 2019-12-30 2021-07-16 Oppo广东移动通信有限公司 Image processing method and related product
CN113128243A (en) * 2019-12-31 2021-07-16 苏州协尔智能光电有限公司 Optical recognition system, optical recognition method and electronic equipment
CN111158162A (en) * 2020-01-06 2020-05-15 亿信科技发展有限公司 Super multi-viewpoint three-dimensional display device and system
CN113448428A (en) * 2020-03-24 2021-09-28 中移(成都)信息通信科技有限公司 Method, device and equipment for predicting sight focus and computer storage medium
CN111586352A (en) * 2020-04-26 2020-08-25 上海鹰觉科技有限公司 Multi-photoelectric optimal adaptation joint scheduling system and method
WO2022022036A1 (en) * 2020-07-31 2022-02-03 北京市商汤科技开发有限公司 Display method, apparatus and device, storage medium, and computer program
CN111881861A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Display method, device, equipment and storage medium
CN111935473A (en) * 2020-08-17 2020-11-13 广东申义实业投资有限公司 Rapid eye three-dimensional image collector and image collecting method thereof
CN112417977B (en) * 2020-10-26 2023-01-17 青岛聚好联科技有限公司 Target object searching method and terminal
CN112417977A (en) * 2020-10-26 2021-02-26 青岛聚好联科技有限公司 Target object searching method and terminal
CN112711982A (en) * 2020-12-04 2021-04-27 科大讯飞股份有限公司 Visual detection method, equipment, system and storage device
CN112583980A (en) * 2020-12-23 2021-03-30 重庆蓝岸通讯技术有限公司 Intelligent terminal display angle adjusting method and system based on visual identification and intelligent terminal
CN114697602A (en) * 2020-12-31 2022-07-01 华为技术有限公司 Conference device and conference system
CN112804504A (en) * 2020-12-31 2021-05-14 成都极米科技股份有限公司 Image quality adjusting method, image quality adjusting device, projector and computer readable storage medium
CN112804504B (en) * 2020-12-31 2022-10-04 成都极米科技股份有限公司 Image quality adjusting method, image quality adjusting device, projector and computer readable storage medium
CN114697602B (en) * 2020-12-31 2023-12-29 华为技术有限公司 Conference device and conference system
CN112799407A (en) * 2021-01-13 2021-05-14 信阳师范学院 Pedestrian navigation-oriented gaze direction estimation method
WO2022205770A1 (en) * 2021-03-30 2022-10-06 青岛小鸟看看科技有限公司 Eyeball tracking system and method based on light field perception
CN113138664A (en) * 2021-03-30 2021-07-20 青岛小鸟看看科技有限公司 Eyeball tracking system and method based on light field perception
CN113476037A (en) * 2021-06-29 2021-10-08 京东方科技集团股份有限公司 Sleep monitoring method based on child sleep system and terminal processor
CN114449250A (en) * 2022-01-30 2022-05-06 纵深视觉科技(南京)有限责任公司 Method and device for determining viewing position of user relative to naked eye 3D display equipment

Also Published As

Publication number Publication date
WO2020237921A1 (en) 2020-12-03
CN110263657B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110263657A (en) A kind of human eye method for tracing, device, system, equipment and storage medium
EP3574407B1 (en) No miss cache structure for real-time image transformations with data compression
US10674142B2 (en) Optimized object scanning using sensor fusion
CN108292489B (en) Information processing apparatus and image generating method
EP3195595B1 (en) Technologies for adjusting a perspective of a captured image for display
US20210042520A1 (en) Deep learning for three dimensional (3d) gaze prediction
US10672368B2 (en) No miss cache structure for real-time image transformations with multiple LSR processing engines
CN112424790A (en) System and method for hybrid eye tracker
US11941167B2 (en) Head-mounted VR all-in-one machine
TWI701941B (en) Method, apparatus and electronic device for image processing and storage medium thereof
KR20180057672A (en) Eye wearable wearable devices
JP7337091B2 (en) Reduced output behavior of time-of-flight cameras
EP3574408A1 (en) No miss cache structure for real-time image transformations
US10948994B2 (en) Gesture control method for wearable system and wearable system
KR20160094190A (en) Apparatus and method for tracking an eye-gaze
CN108881893A (en) Naked eye 3D display method, apparatus, equipment and medium based on tracing of human eye
KR20220044897A (en) Wearable device, smart guide method and device, guide system, storage medium
CN108259886A (en) Deduction system, presumption method and program for estimating
CN112651270A (en) Gaze information determination method and apparatus, terminal device and display object
CN112099615A (en) Gaze information determination method and device, eyeball tracking equipment and storage medium
CN112114659A (en) Method and system for determining a fine point of regard for a user
US11822851B2 (en) Information display system, information display method, and processing device
Bailer et al. A simple real-time eye tracking and calibration approach for autostereoscopic 3d displays
CN118101920A (en) Head-mounted display device, image processing method, device and medium
CN116665292A (en) Gaze information determination method, device, eye movement equipment, object to be observed and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40008454

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant