WO2019205937A1 - Pupil center positioning apparatus and method, and virtual reality device - Google Patents

Pupil center positioning apparatus and method, and virtual reality device

Info

Publication number
WO2019205937A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
coefficient matrix
parameter
coefficient
pupil
Prior art date
Application number
PCT/CN2019/082026
Other languages
English (en)
French (fr)
Inventor
孙高明
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to US 16/646,929 (US11009946B2)
Publication of WO2019205937A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/11 - Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F 17/12 - Simultaneous equations, e.g. systems of linear equations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G06V 10/955 - Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present disclosure relates to the field of eye control technologies, and in particular, to a pupil center positioning apparatus and method, and a virtual reality device.
  • a pupil center positioning device may include: a matrix operation circuit configured to obtain a system of linear normal equations from the N received boundary point coordinates of the pupil region, where N is a positive integer greater than 5; a parameter operation circuit configured to obtain the parameters of an elliptic equation from the linear normal equations using Cramer's Rule; and a coordinate operation circuit configured to obtain the center coordinates of the pupil region from the parameters of the elliptic equation.
  • the transposed matrix processor may include: a first buffer configured to acquire the matrix elements of the first coefficient matrix G1 from the coefficient matrix processor; a multiplexer configured to acquire all matrix elements from the first buffer and interchange their rows and columns; a second buffer configured to acquire all row-and-column-interchanged matrix elements from the multiplexer to form the transposed matrix G1^T; a counter, connected to the first buffer, the multiplexer, and the flag, configured to count up in the address mapping; and a flag, connected to the counter and the second buffer, configured to control the output of each matrix element to the second buffer according to the counter.
  • the parameter operation circuit may include: a state controller coupled to the coefficient matrix memory, the multiplexer, and the parameter operator, respectively; a coefficient matrix memory configured to acquire the second coefficient matrix G2 and the second constant term matrix H2 from the matrix operation circuit and store all matrix elements of the second coefficient matrix G2 and the second constant term matrix H2; a multiplexer configured to sequentially obtain, under the control of the state controller and in accordance with Cramer's Rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2 from the coefficient matrix memory; and a parameter operator configured to calculate, under the control of the state controller, the product of each parameter matrix and the second coefficient matrix G2 to obtain each parameter of the elliptic equation.
  • N may be equal to 10.
  • a virtual reality device can include a pupil center positioning device as previously described.
  • a pupil center positioning method may include: obtaining a system of linear normal equations from the N received boundary point coordinates of the pupil region, where N is a positive integer greater than 5; obtaining the parameters of an elliptic equation from the linear normal equations using Cramer's Rule; and obtaining the center coordinates of the pupil region from the parameters of the elliptic equation.
  • obtaining the parameters of the elliptic equation using Cramer's Rule may include: storing the matrix elements of the second coefficient matrix G2 and the second constant term matrix H2; sequentially obtaining, in accordance with Cramer's Rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2; and calculating the product of each parameter matrix and the second coefficient matrix G2 to obtain the parameters of the elliptic equation.
  • N may be equal to 10.
  • FIG. 1 is a schematic structural view of a pupil center positioning device according to an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of a matrix operation circuit according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of N boundary point coordinates of a pupil region according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of a transposed matrix processor according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of a parameter operation circuit according to an embodiment of the present disclosure.
  • FIG. 6 is a flow chart of a pupil center positioning method in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a virtual reality device in accordance with an embodiment of the present disclosure.
  • the eyeball tracking technique involves capturing an image of the eye in real time, based on an image processing method, through an imaging device mounted in front of the eye; the image contains a light spot formed on the cornea of the eye.
  • as the line of sight moves, the approximately spherical eyeball rotates while the spot, serving as a reference point, does not move.
  • from the relative positions of the pupil center and the spot, the position at which the eye's current line of sight falls on the display screen in front of the eye can be calculated, and the display screen can then be operated to realize human-computer interaction or gaze point rendering.
  • it can be seen that precise positioning of the pupil center coordinates is the basis of eye tracking technology.
  • the pupil center positioning uses a hardware-implemented projection coordinate method.
  • the projection coordinate method uses the gray scale information of the eyeball to detect the abscissa and the ordinate of the pupil by horizontal and vertical projection, respectively, thereby obtaining the position coordinates of the pupil center.
  • the inventor's research found that the projection coordinate method has low calculation accuracy, is strongly interfered with by factors such as eyelashes and eyelids, and has poor anti-interference ability, which degrades the user experience.
  • Embodiments of the present disclosure provide a hardware-implemented pupil center positioning apparatus and method, and a virtual reality device including the pupil center positioning apparatus.
  • the main structure of the hardware-based pupil center positioning device includes a matrix operation circuit 1, a parameter operation circuit 2, and a coordinate operation circuit 3, which are connected in sequence.
  • the matrix operation circuit 1 is configured to obtain a linear normal equation system based on the N boundary point coordinates of the received pupil region, N being a positive integer greater than 5.
  • the parameter operation circuit 2 is configured to obtain parameters of the elliptic equation using Cramer's Rule according to the linear normal equations.
  • the coordinate operation circuit 3 is configured to obtain the center coordinates of the pupil region in accordance with the parameters of the elliptic equation.
  • Embodiments of the present disclosure provide a hardware-based pupil center positioning device.
  • the pupil center positioning device performs pupil center positioning based on an ellipse fitting algorithm. Compared with the projection coordinate method, it not only improves calculation accuracy but also improves anti-interference ability; it is also easy to implement in hardware and to integrate into a virtual reality device, with fast processing speed, short processing time, and low usage of logic resources.
  • the parameters of the elliptic equation are obtained using Cramer's Rule, which, compared with the LU decomposition (lower-upper decomposition) method used in the related art, effectively avoids the accumulation of errors that occurs when LU decomposition performs division calculations, further improving calculation accuracy and anti-interference ability.
  • the N boundary point coordinates of the pupil region acquired by the matrix operation circuit 1 are output by the front-end pupil region circuit (not shown in FIG. 1).
  • the VR/AR device is provided with a camera device, such as a camera based on a CCD or CMOS imaging component, that captures an image of the eye in real time; the acquired eye image is transmitted to the pupil region circuit for preprocessing and boundary extraction, the pupil region is extracted, and a plurality of boundary point coordinates of the pupil region are sent to the matrix operation circuit 1.
  • the preprocessing includes grayscale conversion, filtering, binarization, and the like.
  • the grayscale conversion converts the color (RGB) image acquired by the imaging device into (for example, 8-bit) grayscale image data.
  • the filtering process filters the grayscale image data to remove noise from the image.
  • the binarization process converts the 8-bit, 256-level grayscale image into an image with only the two gray levels 0 and 255, that is, it converts the grayscale image into a black-and-white image.
  • the preprocessing may also include operations such as boundary erosion and dilation. Erosion and dilation perform a morphological opening on the binarized data, eliminating small objects and smoothing the boundaries of large objects, so as to obtain binarized data with clear boundaries.
  • the boundary extraction process extracts the boundary from the binarized data with clear boundaries and obtains the boundary point coordinates of the pupil region in the image.
  • N boundary points are selected from all of the boundary points; an averaging method can be employed.
  • the averaging method may include first counting the number S of all boundary points, then, starting from one boundary point, selecting one boundary point every S/N boundary points, and finally transmitting the coordinates of the N selected boundary points to the matrix operation circuit 1.
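The averaging selection above can be sketched in a few lines of Python. The function name and list layout are illustrative, not from the patent; the point is only the "count S, then take one point every S // N" stepping:

```python
def sample_boundary_points(points, n):
    """Pick n roughly evenly spaced points from the full boundary list.

    Averaging method: count the S boundary points, then take one point
    every S // n points, starting from the first boundary point.
    """
    s = len(points)
    step = s // n
    return [points[i * step] for i in range(n)]

# Example: 100 boundary points, keep N = 10 of them.
boundary = [(i, i % 7) for i in range(100)]
selected = sample_boundary_points(boundary, 10)
```

Because `step` is an integer division, the last selected index `(n - 1) * (s // n)` is always within range even when S is not a multiple of N.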
  • the camera device may also include an infrared capture device.
  • under infrared light, because the pupil and the iris have different absorption and reflectivity for infrared rays, the pupil reflects very little infrared light and absorbs most of it, while the iris reflects infrared rays almost completely; the pupil therefore appears dark and the iris bright, the contrast between the two is obvious, and the pupil can be easily detected.
  • the grayscale conversion, filtering, binarization, boundary erosion and dilation, and boundary extraction can be performed by algorithms well known in the art; for example, the filtering may use Gaussian filtering, and the boundary extraction may use the four-neighborhood method or the eight-neighborhood method, which are not repeated here.
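A minimal sketch of the grayscale conversion and binarization steps, in pure Python on nested lists. The luma weights (ITU-R BT.601) and the threshold of 128 are common conventions assumed here for illustration; the patent does not specify them:

```python
def to_gray(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to 8-bit grayscale
    using the common ITU-R BT.601 luma weights (an assumed convention)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def binarize(gray_image, threshold=128):
    """Map each gray level to 0 or 255, yielding a black-and-white image."""
    return [[255 if v >= threshold else 0 for v in row] for row in gray_image]

rgb = [[(0, 0, 0), (255, 255, 255), (30, 30, 30)]]
bw = binarize(to_gray(rgb))
```

A production pipeline would typically use an image library for these steps plus the Gaussian filtering and morphological opening mentioned above; this sketch only shows the pixel-level arithmetic.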
  • the matrix operation circuit can be, for example, the matrix operation circuit 1 shown in FIG. 1.
  • the matrix operation circuit based on hardware implementation may include an array buffer 11, a coefficient matrix processor 12, a transposed matrix processor 13, and a matrix multiplier 14.
  • the array buffer 11 may be connected to the pupil region circuit of the front end and configured to receive the N boundary point coordinates (x_i, y_i, where i is greater than or equal to 1 and less than or equal to N) output by the pupil region circuit, and buffer them in the form of an array.
  • the En shown in Fig. 2 is, for example, an enable signal.
  • the coefficient matrix processor 12 may be configured to read out the coordinates of each boundary point in turn from the array buffer 11 and obtain a linear overdetermined system of equations G1×X = H1, where G1 is an N×5 first coefficient matrix, X is a 5×1 variable matrix, and H1 is an N×1 first constant term matrix.
  • the transposed matrix processor 13 may be coupled to the coefficient matrix processor 12 and configured to acquire the transposed matrix G1^T of the first coefficient matrix G1.
  • the matrix multiplier 14 may be coupled to the coefficient matrix processor 12 and the transposed matrix processor 13, and configured to multiply the transposed matrix G1^T by the first coefficient matrix G1 and by the first constant term matrix H1, respectively, to obtain the linear normal equations G2×X = H2, where G2 is a 5×5 second coefficient matrix and H2 is a 5×1 second constant term matrix, and to send the second coefficient matrix G2 and the second constant term matrix H2 to the parameter operation circuit 2.
  • embodiments of the present disclosure perform pupil center positioning based on an ellipse fitting algorithm: using N boundary sample points of the pupil region, an ellipse is fitted to the sample points to obtain the pupil center. In the ellipse fitting algorithm, 5 sample points can uniquely determine an ellipse, but among the boundary sample points extracted by the pupil region circuit there are inevitably some sample points with large errors; if the ellipse were fitted from sample points that include these large-error points, the fitting error would be large and the accuracy requirement could not be met.
  • therefore, N boundary sample points with N greater than 5 are generally employed to form a linear overdetermined system of equations, and ellipse fitting is performed according to the least squares method to obtain the elliptic equation.
  • the least squares method is a mathematical optimization technique that finds the best function match for the data by minimizing the sum of the squares of the errors.
  • the least squares method can be used to easily obtain unknown data and minimize the sum of the squares of the errors between the obtained data and the actual data.
  • FIG. 3 is a schematic diagram of N boundary point coordinates of a pupil region according to an embodiment of the present disclosure.
  • Figure 3 shows schematically 10 boundary points.
  • (x1, y1), (x2, y2), ..., (x9, y9), (x10, y10) sequentially represent the abscissa and ordinate of the 1st boundary point, the abscissa and ordinate of the 2nd boundary point, ..., the abscissa and ordinate of the 9th boundary point, and the abscissa and ordinate of the 10th boundary point.
  • x c and y c represent the abscissa and ordinate of the pupil center, respectively.
  • the array buffer 11 receives the N boundary point coordinates output by the pupil area circuit of the front end, and is buffered in the form of an array, wherein x i and y i are the abscissa and the ordinate of the i-th boundary point of the pupil area, respectively. (In this example, i is greater than or equal to 1 and less than or equal to N).
  • an elliptical equation associated with the pupil region needs to be established.
  • by substituting the boundary point coordinates of the pupil region, x and y become known quantities, and each of the coefficients A, B, C, D, E can then be determined.
  • after the N boundary point coordinates (x1, y1), (x2, y2), ..., (xN-1, yN-1), (xN, yN) are buffered in the array buffer 11, for the i-th boundary point the coefficient matrix processor 12 first reads out the abscissa xi and the ordinate yi from the array buffer 11, and then obtains the values xi·yi, xi^2, yi^2 of the i-th boundary point by multiplication (as shown in Fig. 2).
  • an overdetermined system of equations is one in which the number of independent equations is greater than the number of independent unknown parameters.
  • G1, the N×5 first coefficient matrix, contains the coefficients of each variable of the above overdetermined system of equations, and H1, the N×1 first constant term matrix, is a column vector formed by the constant terms on the right-hand side of the overdetermined system.
  • the first coefficient matrix G1 is:
  • the first constant term matrix H1 is:
  • the transposed matrix processor 13 processes the first coefficient matrix G1 to obtain its transposed matrix G1^T.
  • the transposed matrix processor can be, for example, the transposed matrix processor 13 shown in FIG. 4.
  • the transposed matrix processor 13 may include a first buffer 131, a multiplexer 132, a second buffer 133, a counter 134, and a flag 135.
  • the first buffer 131 is coupled to the coefficient matrix processor 12 and configured to acquire a matrix element of the N ⁇ 5 first coefficient matrix G1 from the coefficient matrix processor 12 and output it to the multiplexer 132.
  • the multiplexer 132 is coupled to the first buffer 131, configured to acquire all matrix elements from the first buffer 131, and to output all matrix elements to the second buffer 133 after performing row and column interchange.
  • the second buffer 133 is coupled to the multiplexer 132 and configured to acquire all matrix elements after row and column interchange from the multiplexer 132 to form a 5×N transposed matrix G1^T.
  • the first buffer 131 and the multiplexer 132 are N ⁇ 5 channels for data input, and the multiplexer 132 and the second buffer 133 are 5 ⁇ N channels for data output.
  • Counter 134 is coupled to first buffer 131, multiplexer 132, and flag 135, respectively, and is configured to increment in the address map.
  • the flag 135 is coupled to the counter 134 and the second buffer 133, respectively, and is configured to control the output of each matrix element to the second buffer 133 in accordance with the counter 134.
  • the matrix multiplier 14 obtains the first coefficient matrix G1 and the first constant term matrix H1 from the coefficient matrix processor 12 and the transposed matrix G1^T from the transposed matrix processor 13; it multiplies the transposed matrix G1^T by the first coefficient matrix G1 to obtain the 5×5 second coefficient matrix G2, and multiplies the transposed matrix G1^T by the first constant term matrix H1 to obtain the 5×1 second constant term matrix H2.
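The construction performed by the coefficient matrix processor and matrix multiplier can be sketched as follows. Note that the conic normalization assumed here (fixing the x^2 coefficient to 1, so a row of G1 is [x·y, y^2, x, y, 1] and the constant term is -x^2) is an illustration; the figures that define G1 and H1 in the patent are not reproduced in this text:

```python
def build_normal_equations(points):
    """From N boundary points, form the N x 5 overdetermined system
    G1·X = H1 for x^2 + A·x·y + B·y^2 + C·x + D·y + E = 0 (assumed
    normalization), then the 5 x 5 normal equations G2·X = H2 with
    G2 = G1^T·G1 and H2 = G1^T·H1."""
    G1 = [[x * y, y * y, x, y, 1.0] for (x, y) in points]
    H1 = [-x * x for (x, _) in points]
    n = len(points)
    G2 = [[sum(G1[k][i] * G1[k][j] for k in range(n)) for j in range(5)]
          for i in range(5)]
    H2 = [sum(G1[k][i] * H1[k] for k in range(n)) for i in range(5)]
    return G2, H2

# Six sample points (N > 5) on the unit circle.
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0), (0.6, 0.8), (0.8, -0.6)]
G2, H2 = build_normal_equations(pts)
```

Multiplying by the transpose collapses the N-row overdetermined system into the square, symmetric 5×5 system that the parameter operation circuit then solves.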
  • the transposed matrix processor 13 may adopt a structure of random access memory (RAM), a multiplier, and an adder, with two address-mapped RAMs that sequentially supply each element to be multiplied and accumulated; each element is then multiply-accumulated by the multiplier and the adder.
  • G2, the 5×5 second coefficient matrix, contains the coefficients of each variable of the linear normal equations, and H2, the 5×1 second constant term matrix, is a column vector composed of the constant terms on the right-hand side of the equal sign of the linear normal equations.
  • FIG. 5 is a schematic structural diagram of a parameter operation circuit according to an embodiment of the present disclosure.
  • the parameter operation circuit can be, for example, the parameter operation circuit 2 shown in FIG. 1.
  • the hardware-implemented parameter operation circuit may include a coefficient matrix memory 21, a multiplexer 22, a parameter operator 23, and a state controller 24.
  • the coefficient matrix memory 21 may be connected to the matrix multiplier 14 in the matrix operation circuit 1 and configured to acquire the second coefficient matrix G2 and the second constant term matrix H2 from the matrix multiplier 14, and to store all matrix elements of the second coefficient matrix G2 and the second constant term matrix H2.
  • the multiplexer 22 may be coupled to the coefficient matrix memory 21 and the state controller 24, and configured to sequentially acquire from the coefficient matrix memory 21, under the control of the state controller 24 and in accordance with Cramer's Rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant term matrix H2.
  • the parameter operator 23 may be coupled to the multiplexer 22 and the state controller 24, and configured to calculate, under the control of the state controller 24, the product of each parameter matrix and the second coefficient matrix G2 to obtain each parameter of the elliptic equation, and to send the parameters to the coordinate operation circuit 3.
  • the state controller 24 can be connected to the coefficient matrix memory 21, the multiplexer 22, and the parameter operator 23, and is configured to collectively control the coefficient matrix memory 21, the multiplexer 22, and the parameter operator 23.
  • Cramer's Rule is a theorem in linear algebra for solving systems of linear equations; it applies to systems in which the number of variables equals the number of equations. According to Cramer's Rule, when the second coefficient matrix G2 is invertible, that is, when the corresponding determinant |G2| is not equal to 0, the system has a unique solution Xi = |G2i| / |G2|, where G2i (i = 1, 2, 3, 4, 5) is the parameter matrix obtained by replacing the elements a1i, a2i, a3i, a4i, a5i of the i-th column of the second coefficient matrix G2 with b1, b2, b3, b4, b5 in turn.
  • the multiplexer 22 obtains each parameter matrix G2i under the control of the state controller 24, and the parameter operator 23 performs the determinant calculation; the five variable values of the linear normal equations, that is, the five parameter values A, B, C, D, E of the elliptic equation, can be obtained directly from the definition of the determinant. Since this calculation is well known in the art, it is not described here.
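For illustration, Cramer's Rule for the system G2·X = H2 can be sketched as below, with the determinant computed by cofactor expansion (adequate at the 5×5 size used here). The function names are illustrative:

```python
def det(m):
    """Determinant by cofactor expansion along the first row
    (fine for the 5 x 5 matrices used here)."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1.0) ** j * m[0][j] * det(minor)
    return total

def cramer_solve(G2, H2):
    """Solve G2·X = H2 by Cramer's Rule: Xi = det(G2i) / det(G2),
    where G2i is G2 with column i replaced by H2."""
    d = det(G2)
    solution = []
    for i in range(len(G2)):
        G2_i = [row[:i] + [H2[r]] + row[i + 1:] for r, row in enumerate(G2)]
        solution.append(det(G2_i) / d)
    return solution

# Small sanity check: 2 x 2 system  2x + y = 3,  x + 3y = 5.
sol = cramer_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

Unlike LU decomposition, each unknown comes from one ratio of determinants, so there is no chain of divisions whose rounding errors accumulate across elimination steps, which matches the accuracy argument made in this disclosure.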
  • the coordinate operation circuit 3 uses the five parameters A, B, C, D, and E to calculate the center coordinates of the pupil region through the ellipse center calculation formula.
  • the ellipse center calculation formula is:
  • x c and y c are the abscissa and ordinate of the ellipse center, respectively, and A, B, C, D, and E are parameters of the elliptic equation.
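The center formula itself is not reproduced in this text (it appeared as an image). Under the conic normalization assumed earlier, x^2 + A·x·y + B·y^2 + C·x + D·y + E = 0, it follows from setting both partial derivatives to zero, giving xc = (A·D - 2·B·C) / (4·B - A^2) and yc = (A·C - 2·D) / (4·B - A^2):

```python
def ellipse_center(A, B, C, D, E):
    """Center of the conic x^2 + A*x*y + B*y^2 + C*x + D*y + E = 0,
    obtained by solving dF/dx = 2x + A*y + C = 0 and
    dF/dy = A*x + 2*B*y + D = 0 (E does not affect the center)."""
    denom = 4.0 * B - A * A
    xc = (A * D - 2.0 * B * C) / denom
    yc = (A * C - 2.0 * D) / denom
    return xc, yc

# Circle (x-1)^2 + (y-2)^2 = 1  ->  x^2 + y^2 - 2x - 4y + 4 = 0.
center = ellipse_center(0.0, 1.0, -2.0, -4.0, 4.0)
```

A different normalization of the conic would change the constants in the formula, which is why the exact form depends on how G1 and H1 were defined.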
  • in the related art, the LU decomposition method is generally used for the parameter calculation, but it has the defect that errors accumulate during its division calculations, so the calculation accuracy is low.
  • the embodiments of the present disclosure use Cramer's Rule for the parameter calculation, which effectively avoids error accumulation and improves calculation accuracy, and which has the advantages of fast processing speed, short processing time, and low usage of logic resources, providing a good basis for increasing processing speed and reducing processing time.
  • the matrix operation circuit, the parameter operation circuit, and the coordinate operation circuit of the embodiments of the present disclosure may all be implemented by a hardware circuit composed of a RAM read/write controller, a comparator, a counter, a multiplier, and an adder, or by combination with an FPGA.
  • the circuit structure and code can be directly applied to IC customization, which is easy to integrate in virtual reality devices, especially in head-mounted virtual reality devices.
  • for example, on a Xilinx FPGA, the multiplication, division, accumulation, and data storage operations involved in the design process are all written independently, without calling the FPGA's off-the-shelf IP cores such as multipliers, dividers, accumulators, and memories; that is, no Xilinx IP modules need to be called during the design process.
  • the foregoing describes the implementation of the matrix operation circuit, the parameter operation circuit, and the coordinate operation circuit in the embodiments of the present disclosure.
  • alternatively, a general logic operation processing circuit such as a central processing unit (CPU) or a single-chip microcomputer may run an algorithm implementing the pupil center positioning method provided by the embodiments of the present disclosure to realize the above circuits; or the algorithm of the pupil center positioning method provided by the embodiments of the present disclosure may be solidified into an application-specific integrated circuit (ASIC) to realize the above circuits.
  • an embodiment of the present disclosure further provides a method for positioning a pupil center based on hardware.
  • FIG. 6 is a flow chart of a pupil center positioning method in accordance with an embodiment of the present disclosure. As shown in FIG. 6, the pupil center positioning method may include:
  • the hardware-implemented pupil center positioning method provided by the embodiments of the present disclosure performs pupil center positioning based on an ellipse fitting algorithm.
  • compared with the projection coordinate method, the embodiments of the present disclosure not only improve calculation accuracy and anti-interference ability, but are also easy to implement in hardware and to integrate into virtual reality devices, with fast processing speed, short processing time, and low usage of logic resources.
  • the parameters of the elliptic equation are obtained by the Cramer's Rule method, which effectively avoids the error accumulation that occurs when LU decomposition performs division calculations, further improving calculation accuracy and anti-interference ability.
  • step S1 may include:
  • S11: receiving the N boundary point coordinates output by the front end and buffering them in the form of an array.
  • Step S2 may include:
  • Step S3 may include:
  • for the processing flow of obtaining the linear overdetermined equations, the transposed matrix, the linear normal equations, the parameter matrices, the parameters, and the pupil center coordinates, refer to the description of the pupil center positioning device above, which is not repeated here.
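The whole flow from boundary points to center coordinates can be exercised end to end on synthetic data. This sketch reuses the conic normalization assumed earlier and, for brevity, solves the 5×5 normal equations with Gaussian elimination rather than the Cramer's Rule hardware described above; it is an illustration, not the patent's implementation:

```python
import math

def fit_pupil_center(points):
    """Boundary points -> overdetermined system -> normal equations ->
    elliptic parameters -> center, under the assumed normalization
    x^2 + A*x*y + B*y^2 + C*x + D*y + E = 0."""
    # S1: overdetermined system G1·X = H1, then normal equations G2·X = H2.
    G1 = [[x * y, y * y, x, y, 1.0] for (x, y) in points]
    H1 = [-x * x for (x, _) in points]
    n = len(points)
    G2 = [[sum(G1[k][i] * G1[k][j] for k in range(n)) for j in range(5)]
          for i in range(5)]
    H2 = [sum(G1[k][i] * H1[k] for k in range(n)) for i in range(5)]

    # S2: solve the 5 x 5 system (Gauss-Jordan with partial pivoting here;
    # the patent performs this step with Cramer's Rule in hardware).
    M = [G2[i][:] + [H2[i]] for i in range(5)]
    for c in range(5):
        p = max(range(c, 5), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(5):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    A, B, C, D, E = (M[i][5] / M[i][i] for i in range(5))

    # S3: ellipse center from the fitted parameters.
    denom = 4.0 * B - A * A
    return (A * D - 2.0 * B * C) / denom, (A * C - 2.0 * D) / denom

# Synthetic boundary: 10 points on a circle centered at (3, 4), radius 2.
pts = [(3.0 + 2.0 * math.cos(2.0 * math.pi * k / 10),
        4.0 + 2.0 * math.sin(2.0 * math.pi * k / 10)) for k in range(10)]
xc, yc = fit_pupil_center(pts)
```

With exact points on the conic, the fit recovers the center; on real boundary points the least squares step averages out the per-point extraction noise, which is the advantage over the projection coordinate method argued above.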
  • FIG. 7 is a schematic structural diagram of a virtual reality device according to an embodiment of the present disclosure.
  • the main structure of the virtual reality device may include the wearing body 7.
  • a display device 71 and an imaging device 72 are provided in the wearing body 7.
  • Display device 71 may include one or more display screens 711 and display drivers 712.
  • the pupil center positioning device 7121 is integrated in the display driver 712.
  • the pupil center positioning device 7121 can be a pupil center positioning device as shown in FIG.
  • the imaging device 72 collects an image of the user's eyes and sends the eye image to the pupil center positioning device 7121. After the pupil center positioning device 7121 obtains the pupil center coordinates, the position at which the user's line of sight falls on the display screen is calculated in combination with the spot coordinates, and the display screen is then operated to realize functions such as human-computer interaction or gaze point rendering.
  • in the description of the embodiments of the present disclosure, it should be noted that the terms "installation", "connected", and "connection" are to be understood broadly; for example, a connection may be a fixed connection, a removable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium, and it may be internal communication between two elements.
  • the specific meanings of the above terms in the present disclosure can be understood in the specific circumstances by those skilled in the art.
  • embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.


Abstract

Embodiments of the present disclosure provide a pupil center positioning apparatus and method, and a virtual reality device. The pupil center positioning apparatus may include: a matrix operation circuit configured to obtain a system of linear normal equations from received coordinates of N boundary points of a pupil region, N being a positive integer greater than 5; a parameter operation circuit configured to obtain parameters of an ellipse equation from the system of linear normal equations using Cramer's rule; and a coordinate operation circuit configured to obtain center coordinates of the pupil region from the parameters of the ellipse equation.

Description

Pupil center positioning apparatus and method, and virtual reality device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 201810375282.1, filed on April 24, 2018, the entire contents of which are incorporated herein by reference.
Technical field
The present disclosure relates to the field of eye-control technology, and in particular to a pupil center positioning apparatus and method, and a virtual reality device.
Background
In recent years, virtual reality (VR) and augmented reality (AR) technologies have gradually been applied in fields such as display, gaming and medicine. As the technology develops, expectations for VR/AR keep rising, and interaction that relies on head rotation to change the viewing angle can no longer deliver a satisfying product experience; eye-tracking technology has therefore become an important way to improve the experience of VR/AR devices. Eye tracking is an intelligent human-computer interaction technology that controls a machine through eye movement: every operation can be completed simply by "looking", which not only frees the hands but is also the fastest and most natural way of control. Introducing eye tracking into VR/AR not only meets the demand for high-definition rendering but also greatly improves the interactive experience of VR/AR devices. When the user interacts with the VR/AR user interface through the eyes, menus can be controlled and operations triggered directly by gaze, freeing the user from unnatural head movements.
Summary
According to one aspect of the present disclosure, a pupil center positioning apparatus is provided. The pupil center positioning apparatus may include: a matrix operation circuit configured to obtain a system of linear normal equations from received coordinates of N boundary points of a pupil region, N being a positive integer greater than 5; a parameter operation circuit configured to obtain parameters of an ellipse equation from the system of linear normal equations using Cramer's rule; and a coordinate operation circuit configured to obtain center coordinates of the pupil region from the parameters of the ellipse equation.
In one embodiment, the matrix operation circuit may include: an array buffer configured to receive the N boundary-point coordinates and buffer them as an array; a coefficient matrix processor configured to read out the coordinates of each boundary point from the array buffer in turn to obtain a linear overdetermined system G1×X=H1, where G1 is an N×5 first coefficient matrix, X a 5×1 variable matrix, and H1 an N×1 first constant-term matrix; a transpose matrix processor configured to obtain the transpose G1ᵀ of the first coefficient matrix G1; and a matrix multiplier configured to multiply the transpose G1ᵀ with the first coefficient matrix G1 and the first constant-term matrix H1, respectively, to obtain the system of linear normal equations G2×X=H2, where G2 is a 5×5 second coefficient matrix and H2 a 5×1 second constant-term matrix.
In one embodiment, the transpose matrix processor may include: a first buffer configured to obtain the matrix elements of the first coefficient matrix G1 from the coefficient matrix processor; a multiplexer configured to obtain all matrix elements from the first buffer and swap rows and columns; a second buffer configured to obtain all row-column-swapped matrix elements from the multiplexer to form the transpose G1ᵀ; a counter connected to the first buffer, the multiplexer and a flag unit, and configured to count up through the address mapping; and the flag unit, connected to the counter and the second buffer and configured to control, according to the counter, the output of each matrix element to the second buffer.
In one embodiment, the parameter operation circuit may include: a state controller connected to a coefficient matrix memory, a multiplexer and a parameter operator; the coefficient matrix memory, configured to obtain the second coefficient matrix G2 and the second constant-term matrix H2 from the matrix operation circuit and store all of their matrix elements; the multiplexer, configured to obtain from the coefficient matrix memory in turn, under control of the state controller and according to Cramer's rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant-term matrix H2; and the parameter operator, configured to compute, under control of the state controller, the quotient of the determinant of each parameter matrix and the determinant of the second coefficient matrix G2, obtaining each parameter of the ellipse equation.
In one embodiment, N=10.
According to another aspect of the present disclosure, a virtual reality device is provided, which may include the pupil center positioning apparatus described above.
According to yet another aspect of the present disclosure, a pupil center positioning method is provided. The pupil center positioning method may include: obtaining a system of linear normal equations from received coordinates of N boundary points of a pupil region, N being a positive integer greater than 5; obtaining parameters of an ellipse equation from the system of linear normal equations using Cramer's rule; and obtaining center coordinates of the pupil region from the parameters of the ellipse equation.
In one embodiment, obtaining the system of linear normal equations from the received coordinates of the N boundary points of the pupil region may include: receiving the N boundary-point coordinates and buffering them as an array; reading out the coordinates of each boundary point in turn to obtain a linear overdetermined system G1×X=H1, where G1 is an N×5 first coefficient matrix, X a 5×1 variable matrix, and H1 an N×1 first constant-term matrix; obtaining the transpose G1ᵀ of the first coefficient matrix; and multiplying the transpose G1ᵀ with the first coefficient matrix G1 and the first constant-term matrix H1, respectively, to obtain the system of linear normal equations G2×X=H2, where G2 is a 5×5 second coefficient matrix and H2 a 5×1 second constant-term matrix.
In one embodiment, obtaining the parameters of the ellipse equation from the system of linear normal equations using Cramer's rule may include: storing the matrix elements of the second coefficient matrix G2 and the second constant-term matrix H2; obtaining in turn, according to Cramer's rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant-term matrix H2; and computing the quotient of the determinant of each parameter matrix and the determinant of the second coefficient matrix G2 to obtain the parameters of the ellipse equation.
In one embodiment, N=10.
Brief description of the drawings
The drawings are intended to provide a further understanding of the technical solutions of the present disclosure and form a part of the specification; together with the embodiments of this application they serve to explain, not limit, the technical solutions of the present disclosure. The shapes and sizes of the components in the drawings do not reflect true scale and are intended only to illustrate the present disclosure.
FIG. 1 is a schematic structural diagram of a pupil center positioning apparatus according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a matrix operation circuit according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the coordinates of N boundary points of a pupil region according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a transpose matrix processor according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a parameter operation circuit according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of a pupil center positioning method according to an embodiment of the present disclosure; and
FIG. 7 shows a virtual reality device according to an embodiment of the present disclosure.
Detailed description
Specific implementations of the present disclosure are described in further detail below with reference to the drawings and embodiments. The following embodiments illustrate the present disclosure but do not limit its scope. It should be noted that, in the absence of conflict, the embodiments of this application and the features therein may be combined with one another arbitrarily.
Eye-tracking technology involves capturing images of the eye in real time, based on image-processing methods, with a camera mounted in front of the eye; the image contains the high-brightness glint (Purkinje image) formed on the cornea. When the eye gazes at different positions, the approximately spherical eyeball rotates while the glint, serving as a reference point, remains stationary. After the position coordinates of the pupil center are obtained by data processing, the relationship between the pupil-center coordinates and the glint coordinates can be used to calculate where the current line of sight falls on the display screen in front of the eye; the display can then be operated accordingly to realize functions such as human-computer interaction or gaze-point rendering. Accurate positioning of the pupil-center coordinates is therefore the foundation of eye-tracking technology.
In the eye-tracking-capable VR/AR devices known to the inventors, especially head-mounted VR/AR devices, pupil center positioning uses a hardware-implemented projection coordinate method. The projection coordinate method uses the gray-scale information of the eyeball and detects the abscissa and ordinate of the pupil by horizontal and vertical projections, respectively, thereby obtaining the pupil-center coordinates.
The inventors found through research that the projection coordinate method has low computational accuracy, is strongly disturbed by factors such as eyelashes and eyelids, and has poor anti-interference capability, which degrades the user experience.
Embodiments of the present disclosure provide a hardware-implemented pupil center positioning apparatus and method, and a virtual reality device including the pupil center positioning apparatus.
Of course, any product or method implementing the present disclosure does not necessarily need to achieve all of the advantages described above at the same time. Other features and advantages of the present disclosure will be set forth in the following description of embodiments, will in part become apparent from that description, or may be learned by practicing the present disclosure. The objectives and other advantages of embodiments of the present disclosure can be realized and obtained by the structures particularly pointed out in the description, the claims and the drawings.
FIG. 1 is a schematic structural diagram of a pupil center positioning apparatus according to an embodiment of the present disclosure. As shown in FIG. 1, the main structure of the hardware-implemented pupil center positioning apparatus includes a matrix operation circuit 1, a parameter operation circuit 2 and a coordinate operation circuit 3 connected in sequence. The matrix operation circuit 1 is configured to obtain a system of linear normal equations from the received coordinates of N boundary points of the pupil region, N being a positive integer greater than 5. The parameter operation circuit 2 is configured to obtain the parameters of an ellipse equation from the system of linear normal equations using Cramer's rule. The coordinate operation circuit 3 is configured to obtain the center coordinates of the pupil region from the parameters of the ellipse equation.
Embodiments of the present disclosure thus provide a hardware-implemented pupil center positioning apparatus that positions the pupil center by ellipse fitting. Compared with the projection coordinate method, this not only improves computational accuracy and anti-interference capability, but is also easy to implement in hardware and to integrate into a virtual reality device, with fast processing, short processing time and low logic-resource usage. In processing the multiple boundary points of the pupil region, Cramer's rule is used to obtain the parameters of the ellipse equation; compared with the LU decomposition (lower-upper decomposition) used in the related art, this effectively avoids the error accumulation caused by the division operations of LU decomposition, further improving computational accuracy and anti-interference capability.
In embodiments of the present disclosure, the coordinates of the N boundary points of the pupil region acquired by the matrix operation circuit 1 are output by a front-end pupil region circuit (not shown in FIG. 1). The VR/AR device is provided with an imaging device, such as a camera based on a CCD or CMOS sensor, which captures images of the eye in real time; the captured eye images are transmitted to the pupil region circuit for preprocessing and boundary extraction, which extracts the pupil region and sends the coordinates of multiple boundary points of the pupil region to the matrix operation circuit 1. Typically, the preprocessing includes gray-scale conversion, filtering and binarization. Gray-scale conversion converts the color image captured by the imaging device to gray scale, turning the RGB image data into (for example, 8-bit) gray-scale image data. Filtering removes noise from the gray-scale image data. Binarization turns the 8-bit, 256-gray-level image into an image with only the two gray levels 0 and 255, i.e. converts the gray-scale image into a black-and-white image. In practice, the preprocessing may further include boundary erosion and dilation, i.e. an opening operation on the binarized data that removes small objects and smooths the boundaries of larger ones, yielding binarized data with clear boundaries. Boundary extraction is then performed on these data to obtain the boundary-point coordinates of the pupil region in the image. Finally, N boundary points are selected from all boundary points of the pupil region, N being a positive integer greater than 5, e.g. N = 6, 7, 8, 9 or 10. In one embodiment, the N boundary points may be selected from all boundary points by an averaged selection method: first count the total number S of boundary points, then, starting from one boundary point, take one boundary point every S/N points, and finally send the coordinates of the N selected boundary points to the matrix operation circuit 1.
Optionally, the imaging device may also include an infrared acquisition device. Under infrared illumination, the pupil and the iris have different absorption and reflection rates for infrared light: the pupil reflects very little and absorbs most of the infrared light, while the iris reflects it almost completely. The pupil therefore appears dark and the iris bright, and the clear contrast between them makes the pupil easy to detect. Gray-scale conversion, filtering, binarization, boundary erosion and dilation, and boundary extraction can all use algorithms well known in the art; for example, Gaussian filtering may be used for the filtering, and the four-direction comparison method or the eight-neighborhood method may be used for boundary extraction, which will not be detailed here.
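The averaged selection just described, one point kept every S/N boundary points, can be sketched in software as follows; `select_boundary_points` and its argument names are illustrative, not from the original text:

```python
def select_boundary_points(points, n=10):
    """Averaged selection: from all S extracted boundary points,
    keep one point every S // n points, yielding n samples."""
    s = len(points)
    if s < n:
        raise ValueError("fewer boundary points than samples requested")
    step = s // n
    return [points[i * step] for i in range(n)]
```

For S = 100 and n = 10 this keeps every tenth boundary point, spreading the samples roughly evenly around the pupil contour.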
FIG. 2 is a schematic structural diagram of a matrix operation circuit according to an embodiment of the present disclosure, which may be, for example, the matrix operation circuit 1 shown in FIG. 1. As shown in FIG. 2, the hardware-implemented matrix operation circuit may include an array buffer 11, a coefficient matrix processor 12, a transpose matrix processor 13 and a matrix multiplier 14.
The array buffer 11 may be connected to the front-end pupil region circuit and configured to receive the N boundary-point coordinates (x_i and y_i, with 1 ≤ i ≤ N) output by the pupil region circuit and buffer them as an array. En in FIG. 2 is, for example, an enable signal.
The coefficient matrix processor 12 may be connected to the array buffer 11 and configured to read out the coordinates of each boundary point from the array buffer 11 in turn (the abscissa of the i-th boundary point being x_i and its ordinate y_i, 1 ≤ i ≤ N), obtaining the linear overdetermined system G1×X=H1, where G1 is the N×5 first coefficient matrix, X the 5×1 variable matrix, and H1 the N×1 first constant-term matrix.
The transpose matrix processor 13 may be connected to the coefficient matrix processor 12 and configured to obtain the transpose G1ᵀ of the first coefficient matrix G1.
The matrix multiplier 14 may be connected to the coefficient matrix processor 12 and the transpose matrix processor 13, and configured to multiply the transpose G1ᵀ with the first coefficient matrix G1 and the first constant-term matrix H1, respectively, obtaining the system of linear normal equations G2×X=H2, where G2 is the 5×5 second coefficient matrix and H2 the 5×1 second constant-term matrix, and to send G2 and H2 to the parameter operation circuit 2.
The working principle of the matrix operation circuit shown in FIG. 2 will be described in detail later with reference to FIG. 4.
Embodiments of the present disclosure position the pupil center by an ellipse-fitting algorithm: N boundary sample points of the pupil region are fitted with an ellipse to find the pupil center. In ellipse fitting, 5 sample points uniquely determine an ellipse; however, the boundary sample points extracted by the pupil region circuit inevitably include some points with large errors, so if only 5 sample points were used and these error-laden points were fitted, the fitting error would be too large to meet the accuracy requirement. Therefore, according to embodiments of the present disclosure, N boundary sample points with N greater than 5 are generally used to form a linear overdetermined system, and the ellipse equation is then obtained by least-squares ellipse fitting. The least-squares method is a mathematical optimization technique that finds the best functional match to the data by minimizing the sum of squared errors; it allows the unknowns to be found simply while minimizing the sum of squared errors between the computed and the actual data.
FIG. 3 is a schematic diagram of the coordinates of N boundary points of the pupil region according to an embodiment of the present disclosure, schematically showing 10 boundary points. (x1, y1), (x2, y2), …, (x9, y9), (x10, y10) denote in turn the abscissa and ordinate of the 1st boundary point, the 2nd boundary point, …, the 9th boundary point and the 10th boundary point. x_c and y_c denote the abscissa and ordinate of the pupil center, respectively.
The working principle of the matrix operation circuit shown in FIG. 2 will now be described in detail with reference to FIG. 4.
First, the array buffer 11 receives the N boundary-point coordinates output by the front-end pupil region circuit and buffers them as an array, where x_i and y_i are the abscissa and ordinate of the i-th boundary point of the pupil region (in this example, 1 ≤ i ≤ N).
According to embodiments of the present disclosure, to position the pupil center, an ellipse equation associated with the pupil region needs to be established. The ellipse equation obtained by ellipse fitting can be written as A + Bx + Cy + Dxy + Ex² = y², where x denotes the abscissa and y the ordinate of a point on the ellipse. According to the present disclosure, by acquiring the boundary-point coordinates of the pupil region, x and y become known quantities, from which the coefficients A, B, C, D, E can be determined.
Thus, after the array buffer 11 has buffered the N boundary-point coordinates (x1, y1), (x2, y2), …, (xN-1, yN-1), (xN, yN), for the i-th boundary point the coefficient matrix processor 12 first reads its abscissa x_i and ordinate y_i from the array buffer 11 in turn and obtains, by multiplication, the values x_iy_i, x_i² and y_i² (as shown in FIG. 2: x, y, xy, x², y², 1); substituting these values into the ellipse equation gives one equation, A + x_iB + y_iC + x_iy_iD + x_i²E = y_i². Performing this operation in turn for each of the N buffered boundary points yields the following linear overdetermined system in five unknowns:
A + x1B + y1C + x1y1D + x1²E = y1²
A + x2B + y2C + x2y2D + x2²E = y2²
……
A + xNB + yNC + xNyND + xN²E = yN²
An overdetermined system is one in which the number of independent equations exceeds the number of independent unknowns. The matrix form of the above system is G1×X=H1, where X is the 5×1 variable matrix, the column vector of the 5 variables A, B, C, D, E; G1 is the N×5 first coefficient matrix, holding the coefficients of the variables of the overdetermined system; and H1 is the N×1 first constant-term matrix, the column vector of the constant terms on the right-hand side.
The first coefficient matrix G1 is:
| 1  x1  y1  x1y1  x1² |
| 1  x2  y2  x2y2  x2² |
| …                    |
| 1  xN  yN  xNyN  xN² |
The first constant-term matrix H1 is:
H1 = [y1², y2², …, yN²]ᵀ
The linear overdetermined system in five unknowns is:
G1 × [A, B, C, D, E]ᵀ = H1
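As a cross-check of the construction above, a minimal sketch (function name illustrative) that builds the first coefficient matrix G1 and first constant-term matrix H1 from N boundary points for the model A + Bx + Cy + Dxy + Ex² = y²:

```python
def build_overdetermined_system(points):
    """For each boundary point (x, y), one row of G1 is
    [1, x, y, x*y, x^2] and the matching entry of H1 is y^2."""
    g1 = [[1.0, x, y, x * y, x * x] for x, y in points]
    h1 = [y * y for _, y in points]
    return g1, h1
```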
After the coefficient matrix processor 12 has obtained the N×5 first coefficient matrix G1 and the N×1 first constant-term matrix H1, the transpose matrix processor 13 processes G1 to obtain its transpose G1ᵀ.
FIG. 4 is a schematic structural diagram of a transpose matrix processor according to an embodiment of the present disclosure, which may be, for example, the transpose matrix processor 13 shown in FIG. 2. As shown in FIG. 4, the transpose matrix processor 13 may include a first buffer 131, a multiplexer 132, a second buffer 133, a counter 134 and a flag unit 135. The first buffer 131 is connected to the coefficient matrix processor 12 and configured to obtain the matrix elements of the N×5 first coefficient matrix G1 from the coefficient matrix processor 12 and output them to the multiplexer 132. The multiplexer 132 is connected to the first buffer 131 and configured to obtain all matrix elements from the first buffer 131, swap rows and columns, and output all elements to the second buffer 133. The second buffer 133 is connected to the multiplexer 132 and configured to obtain all row-column-swapped matrix elements from the multiplexer 132, forming the 5×N transpose G1ᵀ. Between the first buffer 131 and the multiplexer 132 there are N×5 data-input channels, and between the multiplexer 132 and the second buffer 133 there are 5×N data-output channels. The counter 134 is connected to the first buffer 131, the multiplexer 132 and the flag unit 135, and is configured to count up through the address mapping. The flag unit 135 is connected to the counter 134 and the second buffer 133, and is configured to control, according to the counter 134, the output of each matrix element to the second buffer 133.
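The counter-driven address mapping of the transpose processor can be emulated in software roughly as follows (a sketch only; the hardware streams elements through buffers rather than lists, and the function name is illustrative):

```python
def transpose_via_address_map(flat, n_rows, n_cols):
    """Read a row-major buffer of an n_rows x n_cols matrix and write
    each element to the address it occupies in the transposed matrix,
    mimicking a counter stepping through an address map."""
    out = [0.0] * (n_rows * n_cols)
    for addr in range(n_rows * n_cols):     # the counter increments
        r, c = divmod(addr, n_cols)         # position in G1
        out[c * n_rows + r] = flat[addr]    # position in G1^T
    return out
```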
Turning to FIG. 2, after the transpose matrix processor 13 has obtained the 5×N transpose G1ᵀ, the matrix multiplier 14 obtains the first coefficient matrix G1 and the first constant-term matrix H1 from the coefficient matrix processor 12 and the transpose G1ᵀ from the transpose matrix processor 13, multiplies the first coefficient matrix G1 by the transpose G1ᵀ to obtain the 5×5 second coefficient matrix G2, and multiplies the first constant-term matrix H1 by the transpose G1ᵀ to obtain the 5×1 second constant-term matrix H2. Optionally, the matrix multiplier 14 may adopt a structure of random-access memory (RAM), a multiplier and an adder: two address-mapped RAMs are provided which, in operation, supply in turn the elements to be multiply-accumulated, with the multiplier and the adder performing the multiply-accumulate operations.
The processing of the matrix multiplier 14 in effect multiplies both sides of the linear overdetermined system G1×X=H1 by the transpose G1ᵀ, giving the system of linear normal equations in five unknowns, in matrix form G2×X=H2, where X is the 5×1 variable matrix, the column vector of the 5 variables A, B, C, D, E; G2 is the 5×5 second coefficient matrix, holding the coefficients of the variables of the normal system; and H2 is the 5×1 second constant-term matrix, the column vector of the constant terms on its right-hand side.
After obtaining the 5×5 second coefficient matrix G2 and the 5×1 second constant-term matrix H2, the matrix multiplier 14 sends all their matrix elements to the parameter operation circuit 2.
The system of linear normal equations in five unknowns is:
G2 × [A, B, C, D, E]ᵀ = H2, where G2 = G1ᵀ×G1 with elements a_ij (i, j = 1, …, 5) and H2 = G1ᵀ×H1 with elements b_i (i = 1, …, 5).
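The effect of the matrix multiplier, forming G2 = G1ᵀ×G1 and H2 = G1ᵀ×H1, can be sketched in plain code (function name illustrative):

```python
def normal_equations(g1, h1):
    """Multiply both sides of G1 x X = H1 by G1^T, yielding the
    square normal system G2 x X = H2 of the least-squares fit."""
    n, m = len(g1), len(g1[0])
    g2 = [[sum(g1[k][i] * g1[k][j] for k in range(n)) for j in range(m)]
          for i in range(m)]
    h2 = [sum(g1[k][i] * h1[k] for k in range(n)) for i in range(m)]
    return g2, h2
```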
FIG. 5 is a schematic structural diagram of a parameter operation circuit according to an embodiment of the present disclosure, which may be, for example, the parameter operation circuit 2 shown in FIG. 1. As shown in FIG. 5, the hardware-implemented parameter operation circuit may include a coefficient matrix memory 21, a multiplexer 22, a parameter operator 23 and a state controller 24.
The coefficient matrix memory 21 may be connected to the matrix multiplier 14 in the matrix operation circuit 1, and configured to obtain the second coefficient matrix G2 and the second constant-term matrix H2 from the matrix multiplier 14 and store all of their matrix elements.
The multiplexer 22 may be connected to the coefficient matrix memory 21 and the state controller 24, and configured to obtain from the coefficient matrix memory 21 in turn, under the control of the state controller 24 and according to Cramer's rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant-term matrix H2.
The parameter operator 23 may be connected to the multiplexer 22 and the state controller 24, and configured to compute, under the control of the state controller 24, the quotient of the determinant of each parameter matrix and the determinant of the second coefficient matrix G2, obtaining each parameter of the ellipse equation, and to send the parameters to the coordinate operation circuit 3.
The state controller 24 may be connected to the coefficient matrix memory 21, the multiplexer 22 and the parameter operator 23, and configured to control them in a unified manner.
Cramer's rule is a theorem in linear algebra for solving systems of linear equations, applicable when the number of variables equals the number of equations. According to Cramer's rule, when the second coefficient matrix G2 is invertible, i.e. the corresponding determinant |G2| is non-zero, the system has the unique solution Xi = |G2i|/|G2|, where G2i (i = 1, 2, 3, 4, 5) is the parameter matrix obtained by replacing the i-th column a_1i, a_2i, a_3i, a_4i, a_5i of the second coefficient matrix G2 with b_1, b_2, b_3, b_4, b_5 in turn. Specifically, the multiplexer 22 obtains each parameter matrix G2i under the control of the state controller 24, and the parameter operator 23 computes |G2i|/|G2| for each under the control of the state controller 24, obtaining the 5 variable values of the system of linear normal equations. Here,
G2 =
| a11 a12 a13 a14 a15 |
| a21 a22 a23 a24 a25 |
| a31 a32 a33 a34 a35 |
| a41 a42 a43 a44 a45 |
| a51 a52 a53 a54 a55 |
H2 = [b1, b2, b3, b4, b5]ᵀ
and, for example, G21 (the first column of G2 replaced by H2) is
| b1 a12 a13 a14 a15 |
| b2 a22 a23 a24 a25 |
| b3 a32 a33 a34 a35 |
| b4 a42 a43 a44 a45 |
| b5 a52 a53 a54 a55 |
Alternatively, the 5 variable values of the system of linear normal equations, i.e. the 5 parameter values A, B, C, D, E of the ellipse equation, can be found directly from the definition of the determinant. Since this computation is well known in the art, it is not detailed here.
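A compact software model of this Cramer's-rule step, using a division-free cofactor expansion for the determinants (function names illustrative). Only the final Xi = |G2i|/|G2| involves a division, which matches the motivation of avoiding the repeated divisions of LU decomposition:

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first
    row; uses only multiplications and additions."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer_solve(g2, h2):
    """Solve G2 x X = H2 via Cramer's rule: X_i = |G2_i| / |G2|,
    where G2_i is G2 with column i replaced by H2."""
    d = det(g2)
    return [det([row[:i] + [h2[r]] + row[i + 1:]
                 for r, row in enumerate(g2)]) / d
            for i in range(len(g2))]
```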
After the parameter operator 23 has obtained the parameters of the ellipse equation, it sends them to the coordinate operation circuit 3, which uses the 5 parameters A, B, C, D, E in the ellipse-center formulas to compute the final pupil-center coordinates (x_c, y_c).
The ellipse-center formulas are:
x_c = -(C·D + 2B) / (D² + 4E)
y_c = (2E·C - B·D) / (D² + 4E)
where x_c and y_c are the abscissa and ordinate of the ellipse center, respectively, and A, B, C, D, E are the parameters of the ellipse equation.
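The center formulas follow from setting both partial derivatives of the implicit conic Ex² + Dxy - y² + Bx + Cy + A = 0 to zero, i.e. 2Ex + Dy + B = 0 and Dx - 2y + C = 0. A sketch with a worked check (function name illustrative):

```python
def ellipse_center(a, b, c, d, e):
    """Center of the ellipse A + B*x + C*y + D*x*y + E*x^2 = y^2,
    obtained by solving 2E*x + D*y + B = 0 and D*x - 2*y + C = 0."""
    denom = d * d + 4.0 * e
    xc = -(c * d + 2.0 * b) / denom
    yc = (2.0 * e * c - b * d) / denom
    return xc, yc
```

For the circle (x-1)² + (y-2)² = 1, i.e. y² = -x² + 2x + 4y - 4 (A = -4, B = 2, C = 4, D = 0, E = -1), this returns the center (1.0, 2.0).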
In the techniques known to the inventors, LU decomposition is commonly used for the parameter computation, but it suffers from error accumulation in its division operations and has low computational accuracy. In contrast, embodiments of the present disclosure use Cramer's rule for the parameter computation, effectively avoiding error accumulation and improving computational accuracy, while also offering fast processing, short processing time and low logic-resource usage, providing a good basis for increasing the speed and reducing the time of subsequent processing.
Optionally, the matrix operation circuit, parameter operation circuit and coordinate operation circuit of embodiments of the present disclosure can each be implemented as a hardware circuit built from components such as RAM read/write controllers, comparators, counters, multipliers and adders, or implemented in combination with an FPGA. The circuit structure and code can be applied directly to custom IC design, making them easy to integrate in virtual reality devices, particularly head-mounted ones. When implemented with an FPGA (for example a Xilinx FPGA), the multiplication, division, accumulation and data-storage operations involved in the design are all written from scratch, without invoking ready-made IP cores such as the FPGA's multipliers, dividers, accumulators or memories; that is, no Xilinx IP block is invoked in the design process.
The above describes implementations of the matrix operation circuit, parameter operation circuit and coordinate operation circuit in embodiments of the present disclosure. Those skilled in the art will appreciate that these circuits can also be realized by a general-purpose logic processing circuit, such as a central processing unit (CPU) or a microcontroller (MCU), executing the algorithm of the pupil center positioning method provided by embodiments of the present disclosure, or by solidifying that algorithm in an application-specific integrated circuit (ASIC).
Based on the foregoing inventive concept, embodiments of the present disclosure further provide a hardware-implemented pupil center positioning method. FIG. 6 is a flowchart of the pupil center positioning method according to an embodiment of the present disclosure. As shown in FIG. 6, the pupil center positioning method may include:
S1. obtaining a system of linear normal equations from the received coordinates of N boundary points of the pupil region, N being a positive integer greater than 5;
S2. obtaining the parameters of an ellipse equation from the system of linear normal equations using Cramer's rule;
S3. obtaining the center coordinates of the pupil region from the parameters of the ellipse equation.
The hardware-implemented pupil center positioning method provided by embodiments of the present disclosure positions the pupil center by ellipse fitting. Compared with the projection coordinate method, it not only improves computational accuracy and anti-interference capability, but is also easy to implement in hardware and to integrate into a virtual reality device, with fast processing, short processing time and low logic-resource usage. In processing the multiple boundary points of the pupil region, Cramer's rule is used to obtain the parameters of the ellipse equation; compared with LU decomposition, this effectively avoids the error accumulation caused by the division operations of LU decomposition, further improving computational accuracy and anti-interference capability.
In one embodiment, step S1 may include:
S11. receiving the N boundary-point coordinates output by the front end and buffering them as an array;
S12. reading out the abscissa and ordinate of each boundary point in turn to obtain the linear overdetermined system G1×X=H1, where G1 is the N×5 first coefficient matrix, X the 5×1 variable matrix, and H1 the N×1 first constant-term matrix;
S13. obtaining the transpose G1ᵀ of the first coefficient matrix;
S14. multiplying the transpose G1ᵀ with the first coefficient matrix G1 and the first constant-term matrix H1, respectively, to obtain the system of linear normal equations G2×X=H2, where G2 is the 5×5 second coefficient matrix and H2 the 5×1 second constant-term matrix.
Step S2 may include:
S21. storing the matrix elements of the second coefficient matrix G2 and the second constant-term matrix H2;
S22. obtaining in turn, according to Cramer's rule, the parameter matrices composed of the second coefficient matrix G2 and the second constant-term matrix H2;
S23. computing the quotient of the determinant of each parameter matrix and the determinant of the second coefficient matrix G2 to obtain the parameters of the ellipse equation.
Step S3 may include:
S31. obtaining the parameters of the ellipse equation;
S32. computing the pupil-center coordinates from the parameters of the ellipse equation using the ellipse-center formulas.
For the processing flows for obtaining the linear overdetermined system, the transpose matrix, the system of linear normal equations, the parameter matrices, the parameters and the pupil-center coordinates, reference may be made to the description of the pupil center positioning apparatus, which is not repeated here.
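Steps S1-S3 can be prototyped end to end with NumPy as a reference model (a sketch only, not the hardware implementation; `np.linalg.solve` stands in for the Cramer's-rule solver, and the function name is illustrative):

```python
import numpy as np

def fit_pupil_center(points):
    """S1: build G1, H1 and the normal system G2 x X = H2;
    S2: solve for the ellipse parameters A..E;
    S3: apply the ellipse-center formulas."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    g1 = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2])
    h1 = y ** 2
    g2, h2 = g1.T @ g1, g1.T @ h1            # normal equations
    a, b, c, d, e = np.linalg.solve(g2, h2)  # ellipse parameters
    denom = d * d + 4.0 * e
    return -(c * d + 2.0 * b) / denom, (2.0 * e * c - b * d) / denom
```

Fitting 10 points sampled on a circle of radius 2 centered at (3, 4) recovers the center to floating-point accuracy.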
Based on the foregoing technical concept, embodiments of the present disclosure further provide a virtual reality device including the aforementioned pupil center positioning apparatus. FIG. 7 is a schematic structural diagram of a virtual reality device according to an embodiment of the present disclosure. As shown in FIG. 7, the main structure of the virtual reality device may include a wearing body 7. A display device 71 and an imaging device 72 are provided in the wearing body 7. The display device 71 may include one or more display screens 711 and a display driver 712. In embodiments of the present disclosure, the pupil center positioning apparatus 7121 is integrated in the display driver 712 and may be the pupil center positioning apparatus shown in FIG. 1. The imaging device 72 captures images of the user's eyes and sends them to the pupil center positioning apparatus 7121; after the apparatus 7121 obtains the pupil-center coordinates, it calculates, in combination with the glint coordinates, where the user's current line of sight falls on the display screen, and the display screen is then operated accordingly to realize functions such as human-computer interaction or gaze-point rendering.
In the description of embodiments of the present disclosure, it should be understood that orientation or positional terms such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting the present disclosure.
In the description of embodiments of the present disclosure, it should also be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" are to be understood broadly: for example, a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; or an internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of these terms in the present disclosure according to the specific circumstances.
Those skilled in the art will appreciate that embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
Although the embodiments disclosed in the present disclosure are as described above, they are only implementations adopted to facilitate understanding of the present disclosure and are not intended to limit it. Any person skilled in the art to which the present disclosure pertains may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed herein; however, the scope of patent protection of the present disclosure must still be defined by the appended claims.

Claims (10)

  1. A pupil center positioning apparatus, comprising:
    a matrix operation circuit configured to obtain a system of linear normal equations from received coordinates of N boundary points of a pupil region, N being a positive integer greater than 5;
    a parameter operation circuit configured to obtain parameters of an ellipse equation from the system of linear normal equations using Cramer's rule; and
    a coordinate operation circuit configured to obtain center coordinates of the pupil region from the parameters of the ellipse equation.
  2. The pupil center positioning apparatus according to claim 1, wherein the matrix operation circuit comprises:
    an array buffer configured to receive the N boundary-point coordinates and buffer them as an array;
    a coefficient matrix processor configured to read out the coordinates of each boundary point from the array buffer in turn to obtain a linear overdetermined system G1×X=H1, where G1 is an N×5 first coefficient matrix, X a 5×1 variable matrix, and H1 an N×1 first constant-term matrix;
    a transpose matrix processor configured to obtain a transpose G1ᵀ of the first coefficient matrix G1; and
    a matrix multiplier configured to multiply the transpose G1ᵀ with the first coefficient matrix G1 and the first constant-term matrix H1, respectively, to obtain the system of linear normal equations G2×X=H2, where G2 is a 5×5 second coefficient matrix and H2 a 5×1 second constant-term matrix.
  3. The pupil center positioning apparatus according to claim 2, wherein the transpose matrix processor comprises:
    a first buffer configured to obtain matrix elements of the first coefficient matrix G1 from the coefficient matrix processor;
    a multiplexer configured to obtain all matrix elements from the first buffer and swap rows and columns;
    a second buffer configured to obtain all row-column-swapped matrix elements from the multiplexer to form the transpose G1ᵀ;
    a counter connected to the first buffer, the multiplexer and a flag unit, and configured to count up through the address mapping; and
    the flag unit, connected to the counter and the second buffer, and configured to control, according to the counter, the output of each matrix element to the second buffer.
  4. The pupil center positioning apparatus according to claim 2 or 3, wherein the parameter operation circuit comprises:
    a state controller connected to a coefficient matrix memory, a multiplexer and a parameter operator;
    the coefficient matrix memory, configured to obtain the second coefficient matrix G2 and the second constant-term matrix H2 from the matrix operation circuit and store all of their matrix elements;
    the multiplexer, configured to obtain from the coefficient matrix memory in turn, under control of the state controller and according to Cramer's rule, parameter matrices composed of the second coefficient matrix G2 and the second constant-term matrix H2; and
    the parameter operator, configured to compute, under control of the state controller, the quotient of the determinant of each parameter matrix and the determinant of the second coefficient matrix G2, obtaining each parameter of the ellipse equation.
  5. The pupil center positioning apparatus according to any one of claims 1 to 4, wherein N=10.
  6. A virtual reality device, comprising the pupil center positioning apparatus according to any one of claims 1 to 5.
  7. A pupil center positioning method, comprising:
    obtaining a system of linear normal equations from received coordinates of N boundary points of a pupil region, N being a positive integer greater than 5;
    obtaining parameters of an ellipse equation from the system of linear normal equations using Cramer's rule; and
    obtaining center coordinates of the pupil region from the parameters of the ellipse equation.
  8. The pupil center positioning method according to claim 7, wherein obtaining the system of linear normal equations from the received coordinates of the N boundary points of the pupil region comprises:
    receiving the N boundary-point coordinates and buffering them as an array;
    reading out the coordinates of each boundary point in turn to obtain a linear overdetermined system G1×X=H1, where G1 is an N×5 first coefficient matrix, X a 5×1 variable matrix, and H1 an N×1 first constant-term matrix;
    obtaining a transpose G1ᵀ of the first coefficient matrix; and
    multiplying the transpose G1ᵀ with the first coefficient matrix G1 and the first constant-term matrix H1, respectively, to obtain the system of linear normal equations G2×X=H2, where G2 is a 5×5 second coefficient matrix and H2 a 5×1 second constant-term matrix.
  9. The pupil center positioning method according to claim 8, wherein obtaining the parameters of the ellipse equation from the system of linear normal equations using Cramer's rule comprises:
    storing matrix elements of the second coefficient matrix G2 and the second constant-term matrix H2;
    obtaining in turn, according to Cramer's rule, parameter matrices composed of the second coefficient matrix G2 and the second constant-term matrix H2; and
    computing the quotient of the determinant of each parameter matrix and the determinant of the second coefficient matrix G2 to obtain the parameters of the ellipse equation.
  10. The pupil center positioning method according to any one of claims 7 to 9, wherein N=10.
PCT/CN2019/082026 2018-04-24 2019-04-10 瞳孔中心定位装置和方法、虚拟现实设备 WO2019205937A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/646,929 US11009946B2 (en) 2018-04-24 2019-04-10 Pupil center positioning apparatus and method, and virtual reality device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810375282.1 2018-04-24
CN201810375282.1A CN108572735B (zh) 2018-04-24 2018-04-24 瞳孔中心定位装置和方法、虚拟现实设备

Publications (1)

Publication Number Publication Date
WO2019205937A1 true WO2019205937A1 (zh) 2019-10-31

Family

ID=63574195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/082026 WO2019205937A1 (zh) 2018-04-24 2019-04-10 瞳孔中心定位装置和方法、虚拟现实设备

Country Status (3)

Country Link
US (1) US11009946B2 (zh)
CN (1) CN108572735B (zh)
WO (1) WO2019205937A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114281184A (zh) * 2020-09-28 2022-04-05 京东方科技集团股份有限公司 注视点计算装置及其驱动方法、电子设备
CN115451882A (zh) * 2022-11-10 2022-12-09 通用技术集团沈阳机床有限责任公司 基于浮动销的空间位姿计算蒙皮底孔轴线法向量的方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572735B (zh) * 2018-04-24 2021-01-26 京东方科技集团股份有限公司 瞳孔中心定位装置和方法、虚拟现实设备
CN112258569B (zh) * 2020-09-21 2024-04-09 无锡唐古半导体有限公司 瞳孔中心定位方法、装置、设备及计算机存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141995A1 (en) * 2007-11-02 2009-06-04 Siemens Corporate Research, Inc. System and Method for Fixed Point Continuation for Total Variation Based Compressed Sensing Imaging
CN106774863A (zh) * 2016-12-03 2017-05-31 西安中科创星科技孵化器有限公司 一种基于瞳孔特征实现视线追踪的方法
CN107833251A (zh) * 2017-11-13 2018-03-23 京东方科技集团股份有限公司 瞳孔定位装置和方法、虚拟现实设备的显示驱动器
CN108572735A (zh) * 2018-04-24 2018-09-25 京东方科技集团股份有限公司 瞳孔中心定位装置和方法、虚拟现实设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101266645B (zh) * 2008-01-24 2011-01-19 电子科技大学中山学院 一种基于多分辨率分析的虹膜定位方法
CN103067662A (zh) * 2013-01-21 2013-04-24 天津师范大学 一种自适应视线跟踪***
CN107844736B (zh) * 2016-09-19 2021-01-01 北京眼神科技有限公司 虹膜定位方法和装置
CN108427926A (zh) 2018-03-16 2018-08-21 西安电子科技大学 一种视线跟踪***中的瞳孔定位方法


Also Published As

Publication number Publication date
US20200278744A1 (en) 2020-09-03
US11009946B2 (en) 2021-05-18
CN108572735A (zh) 2018-09-25
CN108572735B (zh) 2021-01-26

Similar Documents

Publication Publication Date Title
WO2019205937A1 (zh) 瞳孔中心定位装置和方法、虚拟现实设备
US10241470B2 (en) No miss cache structure for real-time image transformations with data compression
US11467661B2 (en) Gaze-point determining method, contrast adjusting method, and contrast adjusting apparatus, virtual reality device and storage medium
US11632537B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
US10672368B2 (en) No miss cache structure for real-time image transformations with multiple LSR processing engines
CN110249317B (zh) 用于实时图像变换的无未命中高速缓存结构
WO2017107524A1 (zh) 虚拟现实头盔的成像畸变测试方法及装置
EP3755204A1 (en) Eye tracking method and system
WO2019238114A1 (zh) 动态模型三维重建方法、装置、设备和存储介质
WO2015149557A1 (en) Display control method and display control apparatus
CN107452031B (zh) 虚拟光线跟踪方法及光场动态重聚焦显示***
EP3391338A1 (en) Light field rendering of an image using variable computational complexity
CN108230384A (zh) 图像深度计算方法、装置、存储介质和电子设备
CN112241934B (zh) 一种图像处理方法以及相关设备
CN110750157A (zh) 基于3d眼球模型的眼控辅助输入装置及方法
US11288988B2 (en) Display control methods and apparatuses
CN111369435A (zh) 基于自适应稳定模型的彩色图像深度上采样方法及***
US10083675B2 (en) Display control method and display control apparatus
CN112102374B (zh) 图像处理方法、装置、电子设备及介质
WO2023028866A1 (zh) 图像处理方法、装置和车辆
CN117745531B (zh) 图像插值方法、设备及可读存储介质
WO2022267810A1 (en) System, method and storage medium for 2d on-screen user gaze estimation
CN117557722A (zh) 3d模型的重建方法、装置、增强实现设备及存储介质
CN109919985A (zh) 数据处理方法和装置、电子设备和计算机存储介质
CN111311661A (zh) 人脸图像处理方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19791589

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19791589

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19791589

Country of ref document: EP

Kind code of ref document: A1