US11009946B2 - Pupil center positioning apparatus and method, and virtual reality device - Google Patents

Pupil center positioning apparatus and method, and virtual reality device Download PDF

Info

Publication number
US11009946B2
Authority
US
United States
Prior art keywords
matrix
coefficient matrix
operation circuit
parameter
center positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/646,929
Other languages
English (en)
Other versions
US20200278744A1 (en)
Inventor
Gaoming SUN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Assigned to BOE TECHNOLOGY GROUP CO., LTD. reassignment BOE TECHNOLOGY GROUP CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SUN, Gaoming
Publication of US20200278744A1 publication Critical patent/US20200278744A1/en
Application granted granted Critical
Publication of US11009946B2 publication Critical patent/US11009946B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/12Simultaneous equations, e.g. systems of linear equations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06K9/00604
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/955Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the disclosure relates to the field of eye control technology, and in particular, to a pupil center positioning apparatus and method, and a virtual reality device.
  • VR (virtual reality)
  • AR (augmented reality)
  • the eye tracking technology is an intelligent human-machine interaction technology for controlling a machine with eye movement; it can complete operations merely by "looking", which not only frees both hands but is also a fast and natural control mode.
  • the eye tracking technology is introduced into the VR/AR field, which may not only meet the needs of high-definition rendering, but also greatly improve the interaction experience of a VR/AR device. When a user interacts with a VR/AR user interface via the eyes, it is possible to directly control a menu and trigger an operation with the eyes, freeing people from unnatural head operations.
  • the pupil center positioning apparatus may include: a matrix operation circuit configured to obtain a set of linear normal equations according to received N boundary point coordinates of a pupil area, N being a positive integer greater than 5; a parameter operation circuit configured to obtain parameters of an elliptic equation employing Cramer's Rule according to the set of linear normal equations; and a coordinate operation circuit configured to obtain the center coordinate of the pupil area according to the parameters of the elliptic equation.
  • the transposed matrix processor may include: a first cache configured to obtain matrix elements of the first coefficient matrix G1 from the coefficient matrix processor; a multiplexer configured to obtain all the matrix elements from the first cache and interchange their rows and columns; a second cache configured to obtain all the matrix elements after the interchange of rows and columns from the multiplexer to form the transposed matrix G1^T; a counter coupled to the first cache, the multiplexer and a marker, respectively, and configured to increment a count in address mapping; and the marker coupled to the counter and the second cache, respectively, and configured to control individual matrix elements to be outputted to the second cache according to the counter.
  • the parameter operation circuit may include: a state controller coupled to a coefficient matrix memory, a multi-way selector and a parameter calculator, respectively; the coefficient matrix memory configured to obtain the second coefficient matrix G2 and the second constant term matrix H2 from the matrix operation circuit and store all the matrix elements of the second coefficient matrix G2 and the second constant term matrix H2; the multi-way selector configured to successively obtain parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2 from the coefficient matrix memory according to Cramer's Rule under the control of the state controller; and the parameter calculator configured to calculate, under the control of the state controller, the ratio of the determinant of each of the parameter matrices to the determinant of the second coefficient matrix G2 and obtain each parameter of the elliptic equation.
  • N = 10.
  • a virtual reality device which may include a pupil center positioning apparatus as described above.
  • the pupil center positioning method may include: obtaining a set of linear normal equations according to received N boundary point coordinates of a pupil area, N being a positive integer greater than 5; obtaining parameters of an elliptic equation employing Cramer's Rule according to the set of linear normal equations; and obtaining the center coordinate of the pupil area according to the parameters of the elliptic equation.
  • the obtaining parameters of an elliptic equation employing Cramer's Rule according to the set of linear normal equations may include: storing the matrix elements of the second coefficient matrix G2 and the second constant term matrix H2; successively obtaining parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2 according to Cramer's Rule; and calculating the ratio of the determinant of each of the parameter matrices to the determinant of the second coefficient matrix G2 to obtain the parameters of the elliptic equation.
  • N = 10.
  • FIG. 1 is a structural diagram of a pupil center positioning apparatus according to an embodiment of the disclosure
  • FIG. 2 is a structural diagram of a matrix operation circuit according to an embodiment of the disclosure
  • FIG. 3 is a schematic diagram of N boundary point coordinates of a pupil area according to an embodiment of the disclosure
  • FIG. 4 is a structural diagram of a transposed matrix processor according to an embodiment of the disclosure.
  • FIG. 5 is a structural diagram of a parameter operation circuit according to an embodiment of the disclosure.
  • FIG. 6 is a flow chart of a pupil center positioning method according to an embodiment of the disclosure.
  • FIG. 7 is a virtual reality device according to an embodiment of the disclosure.
  • the eye tracking technology involves capturing an image of an eye in real time, based on an image processing method, through a camera apparatus installed in front of the eye; the image contains a high-brightness spot, i.e., a Purkinje image, formed on the cornea of the eye.
  • the eye, which is approximately a ball, rotates while the spot, serving as a reference point, does not move.
  • once the position coordinate of the pupil center is obtained by data processing, it is possible, by making use of the relationship between the position coordinate of the pupil center and the position coordinate of the spot, to calculate the position where the current line of sight of the eye falls on a display screen in front of the eye, and then operate the display screen, implementing a function such as human-machine interaction or gaze point rendering. Thus, accurate positioning of the position coordinate of the pupil center is the basis of the eye tracking technology.
  • in the related art, pupil center positioning employs a hardware-implemented projection coordinate method.
  • the projection coordinate method utilizes the gray-level information of the eye, and detects the abscissa and the ordinate of the pupil through the horizontal and vertical projections, respectively, thereby obtaining the position coordinate of the pupil center.
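  • for contrast, a minimal software sketch of this related-art projection coordinate method (a sketch only, assuming a binarized image in which pupil pixels are nonzero, and taking the projection peaks as the center, which is one simple variant):

```python
import numpy as np

def pupil_center_by_projection(binary_image):
    """Related-art baseline: project the binarized pupil image onto the
    horizontal and vertical axes and take the peak of each projection
    as the pupil center coordinate."""
    mask = (np.asarray(binary_image) > 0).astype(np.int64)
    col_sums = mask.sum(axis=0)   # vertical projection -> abscissa
    row_sums = mask.sum(axis=1)   # horizontal projection -> ordinate
    return int(np.argmax(col_sums)), int(np.argmax(row_sums))
```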
  • Embodiments of the disclosure provide a pupil center positioning apparatus and method based on hardware implementation, and a virtual reality device containing the pupil center positioning apparatus.
  • FIG. 1 is a structural diagram of a pupil center positioning apparatus according to an embodiment of the disclosure.
  • the main structure of a pupil center positioning apparatus based on hardware implementation comprises a matrix operation circuit 1 , a parameter operation circuit 2 , and a coordinate operation circuit 3 that are successively connected.
  • the matrix operation circuit 1 is configured to obtain a set of linear normal equations according to received N boundary point coordinates of a pupil area, N being a positive integer greater than 5.
  • the parameter operation circuit 2 is configured to obtain parameters of an elliptic equation employing the Cramer's Rule according to the set of linear normal equations.
  • the coordinate operation circuit 3 is configured to obtain the center coordinate of the pupil area according to the parameters of the elliptic equation.
  • the embodiment of the disclosure provides a pupil center positioning apparatus based on hardware implementation.
  • the pupil center positioning apparatus carries out pupil center positioning based on an ellipse fitting algorithm; as compared to the projection coordinate method, it not only increases the calculation accuracy and improves the anti-interference ability, but also facilitates hardware implementation and integration into a virtual reality device, with higher processing speed, shorter processing time and fewer logic resources.
  • Cramer's Rule is employed to obtain the parameters of the elliptic equation, which, as compared to the LU Decomposition (Lower-Upper Decomposition) adopted in the related art, may effectively avoid the error accumulation occurring in division calculation in the LU Decomposition, and further improve the calculation accuracy and the anti-interference ability.
  • the N boundary point coordinates of the pupil area obtained by the matrix operation circuit 1 are outputted by a pupil area circuit (not shown in FIG. 1 ) at the front end.
  • a camera apparatus, for example, a camera based on CCD or CMOS imaging elements, captures an image of the eye in real time; the acquired eye image is transmitted to the pupil area circuit for preprocessing and boundary extraction processing, the pupil area is extracted, and multiple boundary point coordinates of the pupil area are sent to the matrix operation circuit 1 .
  • the preprocessing comprises gray-level conversion processing, filtering processing, and binarization processing, etc.
  • Gray-level conversion processing performs gray-level conversion on a color image captured by the camera apparatus, converting the RGB image data to, e.g., 8-bit gray-level image data.
  • Filtering processing is to perform filtering processing on the gray-level image data to remove noise from the image.
  • Binarization processing is to process an 8-bit, 256-grayscale image into an image with only two grayscales, 0 or 255, that is, convert a gray-level image to a black-and-white image.
  • the preprocessing may further comprise processing such as boundary erosion and expansion, etc.
  • Boundary erosion and expansion performs an open operation on the binarized data, eliminating small objects and smoothing the boundary of larger objects to make the boundary clear, thereby obtaining binarized data with a clear boundary.
  • Boundary extraction processing is to perform boundary extraction on the binarized data with a clear boundary and obtain boundary point coordinates of the pupil area in the image.
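  • a minimal software sketch of this preprocessing and boundary extraction chain, using OpenCV (the patent describes a hardware pipeline; the OpenCV calls and the threshold value 50 here are illustrative assumptions):

```python
import cv2

def extract_pupil_boundary(bgr_image):
    """Gray-level conversion, Gaussian filtering, binarization,
    open operation (erosion + expansion), and boundary extraction."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)          # RGB -> 8-bit gray
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                 # remove noise
    # Under IR illumination the pupil is dark: keep only low-gray pixels.
    _, binary = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # open operation
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pupil = max(contours, key=cv2.contourArea)                  # largest dark blob
    return pupil.reshape(-1, 2)                                 # (x, y) boundary points
```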
  • choosing the N boundary points from all the boundary points may employ the average selection method.
  • the average selection method may comprise: first counting the number S of all the boundary points, then selecting one boundary point every S/N boundary points starting from a boundary point, and finally sending the coordinates of the N boundary points to the matrix operation circuit 1 .
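  • a sketch of the average selection method (assuming the boundary points arrive as an ordered list; the index arithmetic is one plausible reading of selecting one point every S/N points):

```python
def select_boundary_points(boundary_points, n=10):
    """Average selection: count all S boundary points, then pick one
    every S/N points so that the n samples are evenly spread."""
    s = len(boundary_points)                    # number S of boundary points
    if s < n:
        raise ValueError("fewer than n boundary points")
    step = s / n                                # one point every S/N points
    return [boundary_points[int(i * step)] for i in range(n)]
```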
  • the camera apparatus may also comprise an infrared acquisition apparatus.
  • under the irradiation of infrared light, since the pupil and the iris have different absorptivities and reflectivities for infrared light, the reflection from the pupil is very low and most of the infrared light is absorbed, whereas the iris reflects the infrared light almost completely; the pupil part thus appears darker, the iris part appears brighter, the difference between the two is obvious, and the pupil may be easily detected.
  • the gray-level conversion processing, filtering processing, binarization processing, boundary erosion and expansion, and boundary extraction processing may be conducted employing algorithms well known in the art; for example, the filtering processing may employ Gaussian filtering, and the boundary extraction processing may employ the four-direction comparison method or the eight-neighborhood method, which will not be repeated here.
  • FIG. 2 is a structural diagram of a matrix operation circuit according to an embodiment of the disclosure.
  • the matrix operation circuit may be the matrix operation circuit 1 as shown in FIG. 1 .
  • the matrix operation circuit based on hardware implementation may comprise an array cache 11 , a coefficient matrix processor 12 , a transposed matrix processor 13 and a matrix multiplier 14 .
  • the array cache 11 may be coupled to the pupil area circuit at the front end and configured to receive the N boundary point coordinates (x_i, y_i, wherein 1 ≤ i ≤ N) outputted by the pupil area circuit and cache them in the form of an array. En, as shown in FIG. 2, is for example an enable signal.
  • the transposed matrix processor 13 may be coupled to the coefficient matrix processor 12 and configured to obtain a transposed matrix G1^T of the first coefficient matrix G1.
  • An embodiment of the disclosure performs pupil center positioning based on the ellipse fitting algorithm.
  • ellipse fitting is performed on the N boundary sample points to find the pupil center.
  • 5 sample points may uniquely determine an ellipse.
  • if ellipse fitting is performed on all the sample points, including those with larger errors, the fitting error will be larger and cannot meet the accuracy requirement.
  • therefore, N boundary sample points are generally employed to form a set of linear over-determined equations, and ellipse fitting is then performed according to the least square method to obtain an elliptic equation.
  • the least square method is a mathematical optimization technique that seeks the best function match for data by minimizing the sum of the squares of the errors. It can simply and conveniently obtain unknown data such that the sum of the squares of the errors between the obtained data and the actual data is minimized.
  • FIG. 3 is a schematic diagram of N boundary point coordinates of a pupil area according to an embodiment of the disclosure.
  • FIG. 3 schematically shows 10 boundary points.
  • (x_1, y_1), (x_2, y_2), . . . , (x_9, y_9), (x_10, y_10) successively represent the abscissa and the ordinate of the first boundary point, the second boundary point, . . . , the ninth boundary point, and the tenth boundary point.
  • x_c and y_c represent the abscissa and the ordinate of the pupil center, respectively.
  • the array cache 11 receives the N boundary point coordinates outputted by the pupil area circuit at the front end and caches them in the form of an array, wherein x_i and y_i are the abscissa and the ordinate of the i-th boundary point of the pupil area, respectively (in this example, 1 ≤ i ≤ N).
  • the coefficient matrix processor 12 first successively reads out the abscissa x_i and the ordinate y_i of the i-th boundary point, obtains the values of x_i*y_i, x_i^2 and y_i^2 (as shown in FIG. 2), and forms the first coefficient matrix G1 of N×5 and the first constant term matrix H1 of N×1 of a set of over-determined equations.
  • Over-determined equations refer to those in which the number of independent equations is greater than the number of independent unknown parameters.
  • for example, writing the elliptic equation in the common form x^2 + A·xy + B·y^2 + C·x + D·y + E = 0, the over-determined system is G1·X = H1 with X = (A, B, C, D, E)^T, where the first coefficient matrix G1 and the first constant term matrix H1 are

$$G1 = \begin{bmatrix} x_1 y_1 & y_1^2 & x_1 & y_1 & 1 \\ x_2 y_2 & y_2^2 & x_2 & y_2 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ x_N y_N & y_N^2 & x_N & y_N & 1 \end{bmatrix}, \qquad H1 = \begin{bmatrix} -x_1^2 \\ -x_2^2 \\ \vdots \\ -x_N^2 \end{bmatrix}$$
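  • a software sketch of the steps up to the linear normal equations, under the same assumed parameterization (numpy stands in here for the coefficient matrix processor, the transposed matrix processor and the matrix multiplier):

```python
import numpy as np

def normal_equations(points):
    """Build G1 (N x 5) and H1 (N x 1) for x^2 + A*xy + B*y^2 + C*x + D*y + E = 0,
    then form the linear normal equations G2 * X = H2 with
    G2 = G1^T G1 (5 x 5) and H2 = G1^T H1 (5 x 1)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    G1 = np.column_stack([x * y, y ** 2, x, y, np.ones_like(x)])
    H1 = -(x ** 2)
    return G1.T @ G1, G1.T @ H1                 # G2, H2
```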
  • the transposed matrix processor 13 processes the first coefficient matrix G1 and obtains the transposed matrix G1^T of the first coefficient matrix G1.
  • FIG. 4 is a structural diagram of a transposed matrix processor according to an embodiment of the disclosure.
  • the transposed matrix processor may be the transposed matrix processor 13 as shown in FIG. 2 .
  • the transposed matrix processor 13 may comprise a first cache 131 , a multiplexer 132 , a second cache 133 , a counter 134 and a marker 135 .
  • the first cache 131 is coupled to the coefficient matrix processor 12 and configured to obtain matrix elements of the first coefficient matrix G1 of N×5 from the coefficient matrix processor 12 and output them to the multiplexer 132 .
  • the multiplexer 132 is coupled to the first cache 131 and configured to obtain all the matrix elements from the first cache 131 , interchange rows and columns for them and then output all the matrix elements to the second cache 133 .
  • the second cache 133 is coupled to the multiplexer 132 and configured to obtain all the matrix elements after the interchange of rows and columns from the multiplexer 132 to form the transposed matrix G1^T of 5×N. Between the first cache 131 and the multiplexer 132 are N×5 channels for data input, and between the multiplexer 132 and the second cache 133 are 5×N channels for data output.
  • the counter 134 is coupled to the first cache 131 , the multiplexer 132 and the marker 135 , respectively, and configured to increment a count in address mapping.
  • the marker 135 is coupled to the counter 134 and the second cache 133 , respectively, and configured to control individual matrix elements to be outputted to the second cache 133 according to the counter 134 .
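  • a minimal software analogue of this counter-driven row/column interchange (the two caches become flat lists; the index arithmetic is an assumed concrete form of the address mapping):

```python
def transpose_via_address_mapping(first_cache, n_rows, n_cols):
    """Emulate the transposed matrix processor: a counter walks the first
    cache in write order while the address mapping places each element at
    its row/column-interchanged position in the second cache."""
    second_cache = [0.0] * (n_rows * n_cols)
    for count in range(n_rows * n_cols):                 # counter increments
        row, col = divmod(count, n_cols)                 # position in G1 (N x 5)
        second_cache[col * n_rows + row] = first_cache[count]  # position in G1^T
    return second_cache
```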
  • the matrix multiplier 14 obtains the first coefficient matrix G1 and the first constant term matrix H1 from the coefficient matrix processor 12 , obtains the transposed matrix G1^T from the transposed matrix processor 13 , multiplies the transposed matrix G1^T with the first coefficient matrix G1 to obtain a second coefficient matrix G2 of 5×5, and multiplies the transposed matrix G1^T with the first constant term matrix H1 to obtain a second constant term matrix H2 of 5×1.
  • the transposed matrix processor 13 may employ a structure of a random access memory (RAM), a multiplier and an adder, and be arranged with two address mapping RAMs; in operation, the two RAMs successively supply the individual elements on which multiplication and addition operations are to be performed, for the multiplier and the adder to carry out those operations.
  • after the matrix multiplier 14 obtains the second coefficient matrix G2 of 5×5 and the second constant term matrix H2 of 5×1, it sends all the matrix elements of the two matrices to the parameter operation circuit 2 .
  • FIG. 5 is a structural diagram of a parameter operation circuit according to an embodiment of the disclosure.
  • the parameter operation circuit may be the parameter operation circuit 2 as shown in FIG. 1 .
  • the parameter operation circuit based on hardware implementation may comprise a coefficient matrix memory 21 , a multi-way selector 22 , a parameter calculator 23 and a state controller 24 .
  • the coefficient matrix memory 21 may be coupled to the matrix multiplier 14 in the matrix operation circuit 1 and configured to obtain the second coefficient matrix G2 and the second constant term matrix H2 from the matrix multiplier 14 and store all the matrix elements of the second coefficient matrix G2 and the second constant term matrix H2.
  • the multi-way selector 22 may be coupled to the coefficient matrix memory 21 and the state controller 24 and configured to successively obtain parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2 from the coefficient matrix memory 21 according to Cramer's Rule under the control of the state controller 24 .
  • the parameter calculator 23 may be coupled to the multi-way selector 22 and the state controller 24 and configured to calculate, under the control of the state controller 24 , the ratio of the determinant of each of the parameter matrices to the determinant of the second coefficient matrix G2, obtain each parameter of the elliptic equation, and send the parameters to the coordinate operation circuit 3 .
  • the state controller 24 may be coupled to the coefficient matrix memory 21 , the multi-way selector 22 and the parameter calculator 23 , and configured to uniformly control the coefficient matrix memory 21 , the multi-way selector 22 and the parameter calculator 23 .
  • Cramer's Rule is a theorem on solving a set of linear equations in linear algebra, suitable for a set of linear equations in which the number of variables equals the number of equations.
  • according to Cramer's Rule, when the second coefficient matrix G2 is invertible, i.e., the corresponding determinant |G2| is not equal to 0, the set of equations has a unique solution X_i = |G2_i| / |G2|, wherein G2_i (i = 1, 2, 3, 4, 5) is a parameter matrix obtained by replacing the elements a_1i, a_2i, a_3i, a_4i, a_5i of the i-th column in the second coefficient matrix G2 with b_1, b_2, b_3, b_4, b_5 successively.
  • the multi-way selector 22 obtains each parameter matrix G2_i under the control of the state controller 24 , and the parameter calculator 23 calculates each parameter as the corresponding ratio of determinants, for example

$$A = \frac{\begin{vmatrix} b_1 & a_{12} & a_{13} & a_{14} & a_{15} \\ b_2 & a_{22} & a_{23} & a_{24} & a_{25} \\ b_3 & a_{32} & a_{33} & a_{34} & a_{35} \\ b_4 & a_{42} & a_{43} & a_{44} & a_{45} \\ b_5 & a_{52} & a_{53} & a_{54} & a_{55} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \\ a_{41} & a_{42} & a_{43} & a_{44} & a_{45} \\ a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \end{vmatrix}}, \qquad B = \frac{\begin{vmatrix} a_{11} & b_1 & a_{13} & a_{14} & a_{15} \\ a_{21} & b_2 & a_{23} & a_{24} & a_{25} \\ a_{31} & b_3 & a_{33} & a_{34} & a_{35} \\ a_{41} & b_4 & a_{43} & a_{44} & a_{45} \\ a_{51} & b_5 & a_{53} & a_{54} & a_{55} \end{vmatrix}}{\begin{vmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \\ a_{41} & a_{42} & a_{43} & a_{44} & a_{45} \\ a_{51} & a_{52} & a_{53} & a_{54} & a_{55} \end{vmatrix}}$$

and the parameters C, D and E are obtained analogously by replacing the third, fourth and fifth columns of G2, respectively.
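  • a software sketch of this Cramer's Rule step (numpy's determinant stands in for the hardware parameter calculator):

```python
import numpy as np

def solve_by_cramer(G2, H2):
    """Solve G2 * X = H2 for the five parameters (A, B, C, D, E):
    each parameter is |G2_i| / |G2|, where G2_i is G2 with its
    i-th column replaced by b_1..b_5 (the entries of H2)."""
    det_G2 = np.linalg.det(G2)
    if abs(det_G2) < 1e-12:                  # G2 must be invertible
        raise ValueError("second coefficient matrix is singular")
    params = []
    for i in range(5):
        G2_i = np.array(G2, dtype=float)     # copy, then replace i-th column
        G2_i[:, i] = H2
        params.append(np.linalg.det(G2_i) / det_G2)
    return params                            # [A, B, C, D, E]
```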
  • after obtaining the parameters of the elliptic equation, the parameter calculator 23 sends these parameters to the coordinate operation circuit 3 ; utilizing the five parameters A, B, C, D, E, the coordinate operation circuit 3 may calculate the ultimate pupil center coordinate (x_c, y_c) through the ellipse center calculation formulae.
  • x_c and y_c are the abscissa and the ordinate of the ellipse center, respectively, and A, B, C, D, E are the parameters of the elliptic equation.
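  • as a worked form of these formulae under the parameterization assumed above (x^2 + A·xy + B·y^2 + C·x + D·y + E = 0), setting the partial derivatives of the conic to zero gives

$$x_c = \frac{AD - 2BC}{4B - A^2}, \qquad y_c = \frac{AC - 2D}{4B - A^2}$$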
  • the LU decomposition method is commonly used for parameter calculation; however, it has the drawback that error accumulation occurs when division calculation is performed, so the calculation accuracy is low.
  • the embodiment of the disclosure adopts Cramer's Rule for parameter calculation, which effectively avoids the error accumulation and improves the calculation accuracy; it also has the advantages of high processing speed, short processing time and low use of logic resources, providing a good basis for subsequent processing to further improve the processing speed and reduce the processing time.
  • the matrix operation circuit, the parameter operation circuit and the coordinate operation circuit of an embodiment of the disclosure may all be implemented employing a hardware circuit constituted by a RAM read/write controller, a comparator, a counter, a multiplier, an adder and other devices, or may be implemented using an FPGA-based approach.
  • the circuit structure and the code may be directly applied in IC customization, and facilitate integration in a virtual reality device, especially a head-mounted virtual reality device.
  • FIG. 6 is a flow chart of a pupil center positioning method according to an embodiment of the disclosure. As shown in FIG. 6, the pupil center positioning method may comprise: step S1, obtaining a set of linear normal equations according to received N boundary point coordinates of a pupil area, N being a positive integer greater than 5; step S2, obtaining parameters of an elliptic equation employing Cramer's Rule according to the set of linear normal equations; and step S3, obtaining the center coordinate of the pupil area according to the parameters of the elliptic equation.
  • the pupil center positioning method based on hardware implementation provided by the embodiment of the disclosure carries out pupil center positioning based on an ellipse fitting algorithm; as compared to the projection coordinate method, it not only increases the calculation accuracy and improves the anti-interference ability, but also facilitates hardware implementation and integration into a virtual reality device, with higher processing speed, shorter processing time and fewer logic resources.
  • Cramer's Rule is employed to obtain the parameters of the elliptic equation; as compared to the LU Decomposition, the embodiment of the disclosure may effectively avoid the error accumulation occurring in division calculation in the LU Decomposition, and further improve the calculation accuracy and the anti-interference ability.
  • the step S1 may comprise: caching the received N boundary point coordinates in the form of an array; forming a first coefficient matrix G1 and a first constant term matrix H1 of a set of over-determined equations; obtaining a transposed matrix G1^T of the first coefficient matrix G1; and obtaining therefrom a second coefficient matrix G2 and a second constant term matrix H2 of the set of linear normal equations.
  • the step S2 may comprise: storing the matrix elements of the second coefficient matrix G2 and the second constant term matrix H2; successively obtaining parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2 according to Cramer's Rule; and calculating the ratio of the determinant of each of the parameter matrices to the determinant of the second coefficient matrix G2 to obtain the parameters of the elliptic equation.
  • the step S3 may comprise: calculating the pupil center coordinate (x_c, y_c) from the parameters A, B, C, D, E of the elliptic equation through the ellipse center calculation formulae.
  • for the processing flows of obtaining the set of over-determined equations, the transposed matrix, the set of linear normal equations, the parameter matrices, the parameters and the pupil center coordinate, reference may be made to the description of the pupil center positioning apparatus above (an end-to-end sketch follows below), and they will not be repeated here.
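  • putting the pieces together, a compact end-to-end sketch of steps S1 to S3 (reusing the illustrative helpers normal_equations and solve_by_cramer from above, and the center formulae under the assumed parameterization):

```python
def pupil_center(boundary_points):
    """Steps S1-S3: normal equations -> Cramer's Rule -> ellipse center."""
    G2, H2 = normal_equations(boundary_points)   # step S1
    A, B, C, D, E = solve_by_cramer(G2, H2)      # step S2
    denom = 4 * B - A ** 2                       # step S3: center formulae
    return (A * D - 2 * B * C) / denom, (A * C - 2 * D) / denom  # (x_c, y_c)
```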
  • an embodiment of the disclosure further provides a virtual reality device comprising a pupil center positioning apparatus as described above.
  • FIG. 7 is a structural diagram of a virtual reality device according to an embodiment of the disclosure.
  • as shown in FIG. 7, a main structure of the virtual reality device may comprise a wearing body 7 provided with a display device 71 and a camera device 72 .
  • the display device 71 may comprise one or more display screens 711 and a display driver 712 .
  • a pupil center positioning apparatus 7121 is integrated into the display driver 712 .
  • the pupil center positioning apparatus 7121 may be one as shown in FIG. 1 .
  • the camera device 72 acquires an eye image of a user and sends the eye image to the pupil center positioning apparatus 7121 ; after the pupil center positioning apparatus 7121 obtains the pupil center coordinate, it calculates, in combination with the coordinate of the spot, the position on the display screen where the current line of sight of the user's eye falls, and then operates the display screen, implementing a function such as human-machine interaction or gaze point rendering.
  • the orientation or position relationship indicated by the terms “middle”, “on”, “below”, “front”, “rear”, “vertical”, “horizontal”, “top”, “bottom”, “inside” and “outside”, etc. is an orientation or position relationship based on what is shown in the drawings, it is only for the convenience of describing the disclosure and simplifying the description, but does not indicate or imply that the apparatus or element referred to must have a specific orientation, and be constructed and operated in a specific orientation, and therefore cannot be understood as limiting the disclosure.
  • the terms “install”, “join”, “connect” should be understood in a broad sense, for example, it may be a fixed connection, or also may be a removable connection, or connected integrally; it may be a mechanical connection, or also may be an electrical connection; it may be connected directly, or also may be connected indirectly by an intermediate medium, or may be internal communication between two elements.
  • the embodiments of the disclosure may be provided as a method, a system or a computer program product. Therefore, the disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Operations Research (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
US16/646,929 2018-04-24 2019-04-10 Pupil center positioning apparatus and method, and virtual reality device Active US11009946B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810375282.1A CN108572735B (zh) 2018-04-24 2018-04-24 Pupil center positioning apparatus and method, and virtual reality device
CN201810375282.1 2018-04-24
PCT/CN2019/082026 WO2019205937A1 (zh) 2018-04-24 2019-04-10 Pupil center positioning apparatus and method, and virtual reality device

Publications (2)

Publication Number Publication Date
US20200278744A1 US20200278744A1 (en) 2020-09-03
US11009946B2 true US11009946B2 (en) 2021-05-18

Family

ID=63574195

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/646,929 Active US11009946B2 (en) 2018-04-24 2019-04-10 Pupil center positioning apparatus and method, and virtual reality device

Country Status (3)

Country Link
US (1) US11009946B2 (zh)
CN (1) CN108572735B (zh)
WO (1) WO2019205937A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572735B (zh) * 2018-04-24 2021-01-26 BOE Technology Group Co., Ltd. Pupil center positioning apparatus and method, and virtual reality device
CN112258569B (zh) * 2020-09-21 2024-04-09 Wuxi Tanggu Semiconductor Co., Ltd. Pupil center positioning method, apparatus, device and computer storage medium
CN114281184A (zh) * 2020-09-28 2022-04-05 BOE Technology Group Co., Ltd. Gaze point computing apparatus, driving method thereof, and electronic device
CN115451882B (zh) * 2022-11-10 2023-03-24 General Technology Group Shenyang Machine Tool Co., Ltd. Method for calculating the normal vector of a skin bottom-hole axis based on floating-pin spatial pose

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141995A1 (en) 2007-11-02 2009-06-04 Siemens Corporate Research, Inc. System and Method for Fixed Point Continuation for Total Variation Based Compressed Sensing Imaging
CN101266645A (zh) 2008-01-24 2008-09-17 Zhongshan Institute, University of Electronic Science and Technology of China Iris positioning method based on multi-resolution analysis
CN103067662A (zh) 2013-01-21 2013-04-24 Tianjin Normal University Adaptive gaze tracking system
CN107844736A (zh) 2016-09-19 2018-03-27 Beijing Eyecool Technology Co., Ltd. Iris positioning method and device
CN106774863A (zh) 2016-12-03 2017-05-31 Xi'an Zhongke Chuangxing Technology Incubator Co., Ltd. Method for implementing gaze tracking based on pupil features
CN107833251A (zh) 2017-11-13 2018-03-23 BOE Technology Group Co., Ltd. Pupil positioning apparatus and method, and display driver of a virtual reality device
US20190147216A1 (en) * 2017-11-13 2019-05-16 Boe Technology Group Co., Ltd. Pupil positioning device and method and display driver of virtual reality device
CN108427926A (zh) 2018-03-16 2018-08-21 Xidian University Pupil positioning method in a gaze tracking system
CN108572735A (zh) 2018-04-24 2018-09-25 BOE Technology Group Co., Ltd. Pupil center positioning apparatus and method, and virtual reality device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
International Search Report received for PCT Patent Application No. PCT/CN2019/082026, dated Jul. 17, 2019, 5 pages (2 pages of English Translation and 3 pages of Original Document).
Office Action received for Chinese Patent Application No. 201810375282.1, dated Dec. 4, 2019, 12 pages (6 pages of English Translation and 6 pages of Office Action).
Office Action received for Chinese Patent Application No. 201810375282.1, dated Jun. 17, 2020, 10 pages (6 pages of English Translation and 4 pages of Office Action).

Also Published As

Publication number Publication date
CN108572735B (zh) 2021-01-26
US20200278744A1 (en) 2020-09-03
WO2019205937A1 (zh) 2019-10-31
CN108572735A (zh) 2018-09-25

Similar Documents

Publication Publication Date Title
US11009946B2 (en) Pupil center positioning apparatus and method, and virtual reality device
CN107945282B (zh) 基于对抗网络的快速多视角三维合成和展示方法及装置
EP3739431B1 (en) Method for determining point of gaze, contrast adjustment method and device, virtual reality apparatus, and storage medium
Plopski et al. Corneal-imaging calibration for optical see-through head-mounted displays
US9995578B2 (en) Image depth perception device
US20190147216A1 (en) Pupil positioning device and method and display driver of virtual reality device
US11263803B2 (en) Virtual reality scene rendering method, apparatus and device
CN108898630A (zh) 一种三维重建方法、装置、设备和存储介质
WO2021067044A1 (en) Systems and methods for video communication using a virtual camera
WO2018140229A1 (en) No miss cache structure for real-time image transformations with data compression
CN111028330A (zh) 三维表情基的生成方法、装置、设备及存储介质
WO2019238114A1 (zh) 动态模型三维重建方法、装置、设备和存储介质
WO2023011339A1 (zh) 视线方向追踪方法和装置
JP7337091B2 (ja) 飛行時間カメラの低減された出力動作
CN110807364A (zh) 三维人脸与眼球运动的建模与捕获方法及***
WO2018191061A1 (en) No miss cache structure for real-time image transformations with multiple lsr processing engines
US20230116638A1 (en) Method for eye gaze tracking
CN109407828A (zh) 一种凝视点估计方法及***、存储介质及终端
Wu et al. Appearance-based gaze block estimation via CNN classification
CN112241934B (zh) 一种图像处理方法以及相关设备
CA3172140A1 (en) Full skeletal 3d pose recovery from monocular camera
US11288988B2 (en) Display control methods and apparatuses
US20220254106A1 (en) Method of gaze estimation with 3d face reconstructing
Bose et al. Pixel processor arrays for low latency gaze estimation
CN112767553A (zh) 一种自适应群体服装动画建模方法

Legal Events

Date Code Title Description
AS Assignment

Owner name: BOE TECHNOLOGY GROUP CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SUN, GAOMING;REEL/FRAME:052101/0925

Effective date: 20200217

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE