CN117876488A - Pupil instrument based on image processing core algorithm

Pupil instrument based on image processing core algorithm

Info

Publication number
CN117876488A
Authority
CN
China
Prior art keywords
pupil
image
algorithm
module
gray
Prior art date
Legal status
Granted
Application number
CN202410050422.3A
Other languages
Chinese (zh)
Other versions
CN117876488B (en)
Inventor
李华
张茂
韩季诺
靳明
李斌
Current Assignee
Huzhou Luhupo Biotechnology Co., Ltd.
Original Assignee
Huzhou Luhupo Biotechnology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huzhou Luhupo Biotechnology Co., Ltd.
Priority to CN202410050422.3A
Publication of CN117876488A
Application granted
Publication of CN117876488B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/73: Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • A61B 3/11: Apparatus for testing the eyes; objective types for measuring interpupillary distance or diameter of pupils
    • A61B 3/112: Objective types for measuring the diameter of pupils
    • G06N 3/0442: Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N 3/048: Neural-network activation functions
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06V 10/141: Optical characteristics of the acquisition device; control of illumination
    • G06T 2207/20032: Median filtering
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30041: Biomedical image processing; eye, retina, ophthalmic


Abstract

The invention discloses a pupil instrument based on an image processing core algorithm, which comprises a flash lamp control module, an image acquisition module, an image processing module, a pupil data processing module, a pupil dynamic analysis module, a data storage management module, a system operation and maintenance module and a user interface management module.

Description

Pupil instrument based on image processing core algorithm
Technical Field
The invention relates to the technical fields of image processing, edge detection and neural networks, in particular to a pupil instrument based on an image processing core algorithm.
Background
Image processing and edge detection technology enable a pupil instrument based on an image processing core algorithm to locate the pupil accurately through efficient image feature extraction and to identify the edges of ocular features effectively. The aim is to solve the pupil positioning problem in pupillometer design: image processing can accurately identify the pupil position within the eye, enabling precise measurement of the pupil diameter parameters. Edge detection plays a key role here; by detecting edges in the image it can accurately locate the pupil boundary, improving the precision of pupil positioning and allowing a pupil instrument based on an image processing core algorithm to capture ocular features efficiently under different environmental conditions.
Neural network technology, by learning and adapting to the complex features of eye images, aims to solve the problem of analyzing pupil data across complex eye structures and differing individuals. By introducing a neural network, the pupil instrument can adapt more flexibly to the ocular features of different people and becomes more robust to variation in pupil position and diameter parameters. Because the network can learn and adapt to the complex features of a wide range of eye images, the performance and stability of the pupil instrument are enhanced across usage scenarios, making a pupil instrument based on an image processing core algorithm more intelligent and self-adaptive and better suited to the needs of different users.
Existing pupil meters based on image processing core algorithms suffer from unstable system performance under changing illumination; in strong-light and weak-light environments in particular, large fluctuations in image quality can affect accurate pupil positioning and measurement. Second, their adaptability is limited in the face of individual differences and the diversity of eye structures, and performance can degrade for abnormal eye structures and special physiological conditions. Furthermore, their performance in dynamic situations, for example user blinking, eye movements and changes in ambient illumination, needs further optimization.
Disclosure of Invention
The invention aims to provide a pupil instrument based on an image processing core algorithm that addresses the problems of the existing devices described in the background art: unstable system performance under changing illumination conditions, in particular large image-quality fluctuations in strong-light and weak-light environments that affect accurate pupil positioning and measurement, and limited adaptability to individual differences, the diversity of eye structures and dynamic situations during use.
In order to achieve the above purpose, the present invention provides the following technical solution: a pupil instrument based on an image processing core algorithm comprises a flash lamp control module, an image acquisition module, an image processing module, a pupil data processing module, a pupil dynamic analysis module, a data storage management module, a system operation and maintenance module and a user interface management module. The flash lamp control module coordinates, triggers and adjusts the flash operation required in the pupil measurement process, including turning the flash on and off, adjusting its brightness and holding its duration. The image acquisition module acquires high-resolution images of the eye, ensuring fine capture of the eye structure and providing high-quality input for subsequent image processing. The image processing module performs gray-processing and noise-removal preprocessing on the acquired images to optimize the accuracy of pupil positioning and measurement. The pupil data processing module comprises a pupil positioning analysis unit and a pupil distance measurement unit: the pupil positioning analysis unit extracts pupil data from the original image data and rapidly locates the pupil position through infrared light reflected from the cornea and retina, and the pupil distance measurement unit measures and calculates the pupil diameter. The pupil dynamic analysis module applies a pupil dynamic analysis algorithm based on an autonomic neural network to analyze the contraction and expansion of the pupil while the flash is turned on and off and to calculate the minimum-diameter and average-speed parameters. The data storage management module stores and manages the acquired image data. The system operation and maintenance module integrates the modules, coordinates system operation, and provides fault detection and reporting. The user interface management module generates eye data reports and displays them in charts.
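To make the module decomposition concrete, a hypothetical composition sketch in Python follows; every class, method and field name here (PupilInstrument, PupilMeasurement, measure, and the injected module objects) is an illustrative assumption, not an interface defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class PupilMeasurement:
    diameter_mm: float      # measured pupil diameter
    min_diameter_mm: float  # minimum diameter during the flash response
    avg_speed_mm_s: float   # average contraction/expansion speed

class PupilInstrument:
    def __init__(self, flash, camera, preprocessor, locator, ranger,
                 dynamics, storage, ui):
        self.flash, self.camera = flash, camera
        self.preprocessor, self.locator, self.ranger = preprocessor, locator, ranger
        self.dynamics, self.storage, self.ui = dynamics, storage, ui

    def measure(self) -> PupilMeasurement:
        self.flash.trigger()                                  # flash lamp control
        frames = self.camera.acquire()                        # image acquisition
        frames = [self.preprocessor.run(f) for f in frames]   # gray + denoise
        track = [self.locator.locate(f) for f in frames]      # pupil positioning
        diameter = self.ranger.diameter(track[-1])            # distance measurement
        stats = self.dynamics.analyze(track)                  # dynamic analysis
        result = PupilMeasurement(diameter, stats["min_d"], stats["avg_v"])
        self.storage.save(frames, result)                     # data storage
        self.ui.report(result)                                # report and charts
        return result
```

Keeping each module behind a small interface mirrors the patent's separation of acquisition, processing, analysis, storage and presentation concerns.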
Preferably, the flash control module is used for precisely controlling the flash of the pupil apparatus, including turning on and off, and adjusting brightness and holding time length, so as to adapt to different environmental conditions and user requirements, and ensure sufficient illumination without causing discomfort.
Preferably, the image acquisition module is used for acquiring infrared light images reflected from eyes in real time, has high resolution and high sensitivity, and is used for capturing detailed information of eye structures and ensuring clear and reliable image quality.
Preferably, the image processing module is used for performing preprocessing operation on the acquired eye image, including gray level processing and noise removal, so as to optimize the image analysis of the pupil data processing module and ensure the definition and accuracy of the pupil area.
Preferably, the pupil data processing module comprises a pupil positioning analysis unit, and the pupil positioning analysis unit proposes a pupil positioning algorithm based on gray information for rapidly and accurately extracting and positioning the rough position of the pupil, so as to ensure the accurate positioning of the pupil under different eye conditions and environmental conditions.
Specifically, the pupil positioning algorithm based on gray information proceeds as follows. First, let the preprocessed gray image input to the pupil positioning analysis unit be $I_{inp}$, of size $a \times b$. A smooth-scale simulation domain is set over $I_{inp}$ and an image surface $f(x,y)$ is computed with a Gaussian kernel as a weighted average of the pixel gray values:

$$f(x,y) = \frac{\sum_{i=1}^{a} \sum_{j=1}^{b} K(x-i,\, y-j)\, Z_{ij}}{\sum_{i=1}^{a} \sum_{j=1}^{b} K(x-i,\, y-j)}$$

where $f(x,y)$ is the estimated value of the image surface, $L$ is a smoothing parameter that determines the degree of smoothing, $a$ and $b$ are the width and height of the gray image, $i$ and $j$ index the pixels, and $Z_{ij}$ is the gray value of the image at position $(i,j)$; Gaussian-weighted averaging of the gray values $Z_{ij}$ yields the smoothed image surface estimate $f(x,y)$. The Gaussian kernel $K(x,y)$ is

$$K(x,y) = \exp\!\left(-\frac{x^{2}+y^{2}}{2L^{2}}\right)$$

where $\exp(\cdot)$ denotes the exponential operation; the exponential part of $K(x,y)$ determines the weight of each pixel in the weighted average, so pixels near the center of the window receive a higher weight that gradually decreases with distance from the center, and $L$ controls the degree of smoothing. Then the partial derivatives of the image surface $f(x,y)$ with respect to $x$ and $y$ are computed, giving the gray gradient vector

$$\nabla f(x,y) = \big(f_x(x,y),\, f_y(x,y)\big)$$

where $\nabla f(x,y)$ is the gray gradient vector of the image at position $(x,y)$, describing the direction and intensity of the gray-level change at that point, and the partial derivatives are

$$f_x(x,y) = \frac{\partial f(x,y)}{\partial x}, \qquad f_y(x,y) = \frac{\partial f(x,y)}{\partial y}$$
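As a concrete illustration of the smoothing and gradient step, a minimal NumPy sketch follows; the kernel truncation radius of 3L, the normalization of the kernel, and the central-difference gradients are implementation assumptions rather than details fixed by the patent.

```python
import numpy as np

def image_surface(Z: np.ndarray, L: float = 2.0) -> np.ndarray:
    """Gaussian-weighted surface estimate f(x, y) of a gray image Z of size a x b."""
    r = int(3 * L)  # truncation radius: 3L covers almost all kernel mass (assumption)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    K = np.exp(-(xs ** 2 + ys ** 2) / (2 * L ** 2))
    K /= K.sum()  # normalize so the weighted average preserves the gray scale
    a, b = Z.shape
    padded = np.pad(Z.astype(float), r, mode="edge")
    f = np.empty((a, b))
    for i in range(a):
        for j in range(b):
            f[i, j] = np.sum(K * padded[i:i + 2 * r + 1, j:j + 2 * r + 1])
    return f

def gray_gradient(f: np.ndarray):
    """Central-difference partials f_x, f_y (axis 1 is x/columns, axis 0 is y/rows)."""
    f_y, f_x = np.gradient(f)
    return f_x, f_y
```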
Second, the computed gray gradient vectors are thresholded to obtain a binary image $I_{bin}$ in which the pupil region is separated out; the gray gradient thresholding is

$$I_{bin}(i,j) = \begin{cases} 1, & T(x,y) \ge T_2 \\ 0, & T(x,y) \le T_1 \end{cases}$$

where $I_{bin}(i,j)$ is the preliminary edge image after the gray thresholding operation, $T(x,y)$ is a statistical function, and $T_1$ and $T_2$ are thresholds chosen from statistical information of the gradient amplitude so as to distinguish the pupil from other eye structures, satisfying $T_2 > T_1$; values falling between the two thresholds are resolved in the subsequent edge-extraction step. The statistical function is computed from the covariance matrix $E(x,y)$ of the gradient components over a local neighborhood of $(x,y)$:

$$E(x,y) = \begin{pmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{xy} & \sigma_{yy} \end{pmatrix}$$

whose elements are computed as

$$\sigma_{xx} = \overline{(f_x-\bar{f_x})^{2}}, \qquad \sigma_{yy} = \overline{(f_y-\bar{f_y})^{2}}, \qquad \sigma_{xy} = \overline{(f_x-\bar{f_x})(f_y-\bar{f_y})}$$

where the bar denotes averaging over the neighborhood, $\bar{f_x}$ is the mean of $f_x(x,y)$ and reflects the degree of change of the gradient in the $x$ direction, and $\bar{f_y}$ is the mean of $f_y(x,y)$ and reflects the degree of change of the gradient in the $y$ direction. The statistical function then aggregates the gradient variation described by the covariance matrix, e.g. through its trace:

$$T(x,y) = \sigma_{xx} + \sigma_{yy}$$
The statistical function $T(x,y)$ thus describes the variation of the gradient vector over the image surface, and the preliminary edge image $I_{bin}(i,j)$ is obtained by threshold judgment of the statistical function. The binary image $I_{bin}$ is then further denoised with a median filter to eliminate noise caused by thick eyelashes, eyelid occlusion and uneven illumination, improving the accuracy of subsequent edge detection. The median filter is constructed as

$$I_{edge}(i,j) = \mathrm{med}\{\, I_{bin}(u,v) : (u,v) \in N(i,j) \,\}$$

where $I_{edge}$ is the final edge image matrix after median filtering and $\mathrm{med}$ denotes the median operation: the pixel values in the neighborhood $N(i,j)$ are sorted by gray value, the middle value of the sorted sequence is selected as the median, and the median is assigned to the pixel at the corresponding position, forming the final edge image $I_{edge}$. Finally, every pixel in the image is traversed; if a pixel is marked as an edge point, i.e. $I_{edge}(i,j)=1$, the pupil edge is further detected and tracked: its neighborhood pixels $(j,k)$ are traversed and, if $I_{edge}(j,k)=1$, then $(j,k)$ is marked as part of the edge track. The final pupil edge information is output and stored in the matrix $I_{edge}$, completing the gray-information-based pupil positioning. Through binarization segmentation and median filtering, the algorithm effectively eliminates image noise caused by thick eyelashes, eyelid coverage and uneven illumination, ensuring the clarity of the extracted pupil region and further purifying the image; using the statistical function to extract large-gradient regions and a non-extremum suppression step to extract edge pixels gives the algorithm strong resistance to image noise, so that robust pupil measurement is achieved in complex environments and the pupil instrument based on the image processing core algorithm suits complex practical application scenarios. A sketch of these steps follows.
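The thresholding, median-filtering and edge-linking steps might be sketched as below; the 5-pixel covariance window, the use of the covariance trace for T(x, y), the percentile-based choice of T1 and T2, and the simple one-pass linking of weak responses are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def edge_statistic(f_x: np.ndarray, f_y: np.ndarray, win: int = 5) -> np.ndarray:
    """T(x, y) as the trace of the local covariance of the gradient components."""
    def local_var(g):
        mean = uniform_filter(g, win)
        return uniform_filter(g * g, win) - mean * mean
    return local_var(f_x) + local_var(f_y)

def pupil_edges(f_x: np.ndarray, f_y: np.ndarray) -> np.ndarray:
    T = edge_statistic(f_x, f_y)
    # Thresholds taken from the statistic's own distribution, with T2 > T1.
    T1, T2 = np.percentile(T, 90), np.percentile(T, 98)
    strong = T >= T2
    weak = (T > T1) & ~strong
    I_bin = strong.astype(np.uint8)
    a, b = T.shape
    # Keep a weak response only when it touches a strong one (one-pass linking).
    for i in range(1, a - 1):
        for j in range(1, b - 1):
            if weak[i, j] and strong[i - 1:i + 2, j - 1:j + 2].any():
                I_bin[i, j] = 1
    # Median filtering removes speckle from lashes, lids and uneven lighting.
    return median_filter(I_bin, size=3)
```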
Preferably, the pupil data processing module comprises a pupil distance measuring unit, which analyzes the located pupil data, calculates the pupil diameter parameters and determines the pupil distance, ensuring the accuracy and consistency of pupil measurement.
Preferably, the pupil dynamic analysis module provides a pupil dynamic analysis algorithm based on an autonomic neural network for analyzing the dynamic changes of the pupil, including contraction and expansion, while the flash is turned on and off, and for calculating the minimum-diameter and average-speed parameters, ensuring the system's comprehensive analysis and understanding of pupil behavior.
Specifically, the pupil dynamic analysis algorithm based on the autonomic neural network proceeds as follows. First, the pupil trajectory is extracted: a region growing algorithm tracks the pupil data in each individual eye image, identifying a continuous region starting from the darkest point, with gray information as the growth criterion:

$$|I(x,y) - I_{seed}| < H$$

where $I(x,y)$ is the gray value at the point $(x,y)$ of the pupil data, $I_{seed}$ is the gray value of the seed point, the starting point from which growth begins in the region growing algorithm, and $H$ is a threshold. By growing only through connected regions, the algorithm ensures that the identified region is continuous and contains only the pupil, yielding an accurate pupil trajectory and providing the basis for the subsequent dynamic analysis; a minimal sketch of this step is given below.
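A sketch of the region-growing step under stated assumptions: the darkest pixel is taken as the seed, growth is 4-connected, and the threshold H is arbitrary.

```python
import numpy as np
from collections import deque

def grow_pupil_region(I: np.ndarray, H: float = 12.0) -> np.ndarray:
    """Region growing from the darkest pixel; criterion |I(x, y) - I_seed| < H."""
    seed = np.unravel_index(np.argmin(I), I.shape)
    I_seed = float(I[seed])
    mask = np.zeros(I.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected growth
            nx, ny = x + dx, y + dy
            if (0 <= nx < I.shape[0] and 0 <= ny < I.shape[1]
                    and not mask[nx, ny]
                    and abs(float(I[nx, ny]) - I_seed) < H):
                mask[nx, ny] = True
                queue.append((nx, ny))
    return mask  # connected pupil region; center and size per frame follow from it
```

The returned mask is the connected pupil region, from which the pupil trajectory (center and size per frame) can be read off.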
Second, the pupil size is treated as a time series $Y(t)$ and features are extracted by time-series embedding, after which an autonomic neural network model is constructed to learn the nonlinear laws of pupil dynamics. New feature vectors are built from time-delayed versions of the data to form the embedding matrix

$$\mathbf{Y}(t) = \big(Y(t),\, Y(t+\tau),\, \ldots,\, Y(t+(m-1)\tau)\big)$$

where $\tau$ is the time delay and $m$ the embedding dimension. A recurrent neural network model is then constructed, introducing memory into the network through recurrence so as to capture long-term dependencies in the time series; the hidden state $h(t)$ of the recurrent network is updated as

$$h(t) = f\big(W_{ih}\, Y(t) + W_{hh}\, h(t-1)\big)$$

where $W_{ih}$ is the input-to-hidden weight matrix, $W_{hh}$ the hidden-to-hidden weight matrix, $f(\cdot)$ an activation function, $h(t-1)$ the hidden state at the previous moment, and $t$ the current moment. The recurrent structure keeps an implicit state inside the network, so the network can memorize earlier information and take the preceding context into account when processing new data. To overcome the vanishing- and exploding-gradient problems of recurrent networks on long sequences, a long short-term memory (LSTM) network model is further constructed; by introducing a memory cell and a gating mechanism it captures long-term dependencies in the time series better. Its update formulas comprise an input gate, a forget gate and an output gate, computed respectively as

$$I_t = \sigma\big(W_I \cdot [h(t-1), Y(t)] + b_I\big)$$
$$F_t = \sigma\big(W_F \cdot [h(t-1), Y(t)] + b_F\big)$$
$$O_t = \sigma\big(W_O \cdot [h(t-1), Y(t)] + b_O\big)$$
where $I_t$ is the output of the input gate with weight matrix $W_I$, $F_t$ the output of the forget gate with weight matrix $W_F$, $O_t$ the output of the output gate with weight matrix $W_O$, $b_I$, $b_F$ and $b_O$ the bias terms of the input, forget and output gates respectively, and $\sigma$ the Sigmoid activation function. A current candidate value between $-1$ and $1$ is generated by a linear transformation of the current input $Y(t)$ and the previous hidden state $h(t-1)$, scaled by the tanh activation function:

$$\tilde{C}_t = \tanh\big(W_C \cdot [h(t-1), Y(t)] + b_C\big)$$

where $W_C$ is the weight matrix associated with the candidate value and $b_C$ its bias term. The current candidate value $\tilde{C}_t$ represents pupil information to be added to the cell state and is a temporary variable containing potential new pupil information. The forget gate $F_t$ controls which part of the cell state is forgotten and the input gate $I_t$ controls which part of the current candidate value $\tilde{C}_t$ is added, updating the cell state as

$$C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t$$

where $C_{t-1}$ is the cell state at the previous moment; the cell state is the main long-term store through which the long short-term memory network keeps and transmits information. The output gate $O_t$ controls the flow of pupil information out of the cell state, whose content is scaled through the tanh activation function to generate the final hidden state:

$$h(t) = O_t \odot \tanh(C_t)$$
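A minimal single-step LSTM cell implementing exactly these update equations is sketched below in NumPy; the hidden size, the random initialization and the forget-gate bias of one are illustrative assumptions, and in practice the weights would be trained on recorded pupil-size sequences.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PupilLSTMCell:
    """One LSTM step over an embedded pupil-size vector Y(t) of dimension m."""
    def __init__(self, m: int, hidden: int, rng=np.random.default_rng(0)):
        d = hidden + m  # gates act on the concatenation [h(t-1), Y(t)]
        self.W_I, self.b_I = rng.normal(0, 0.1, (hidden, d)), np.zeros(hidden)
        self.W_F, self.b_F = rng.normal(0, 0.1, (hidden, d)), np.ones(hidden)
        self.W_O, self.b_O = rng.normal(0, 0.1, (hidden, d)), np.zeros(hidden)
        self.W_C, self.b_C = rng.normal(0, 0.1, (hidden, d)), np.zeros(hidden)

    def step(self, Y_t, h_prev, C_prev):
        z = np.concatenate([h_prev, Y_t])           # [h(t-1), Y(t)]
        I_t = sigmoid(self.W_I @ z + self.b_I)      # input gate
        F_t = sigmoid(self.W_F @ z + self.b_F)      # forget gate
        O_t = sigmoid(self.W_O @ z + self.b_O)      # output gate
        C_tilde = np.tanh(self.W_C @ z + self.b_C)  # current candidate value
        C_t = F_t * C_prev + I_t * C_tilde          # cell-state update
        h_t = O_t * np.tanh(C_t)                    # final hidden state
        return h_t, C_t
```

Running step over successive delay-embedded vectors Y(t) yields a hidden-state sequence from which the minimum-diameter and average-speed parameters can be regressed.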
Through its gating mechanism, the long short-term memory network finely controls the selection of the current candidate value, the updating of the cell state and the generation of the hidden state, so as to better capture and transfer pupil information across long sequences and thus capture complex pupil dynamic patterns more effectively. Finally, the two-dimensional trajectory of pupil size and center motion is extracted and analyzed with a recurrence plot and recurrence quantification analysis. The recurrence plot is expressed as a binary matrix whose element $R(i,j)$ describes whether a recurrence exists between states $i$ and $j$ in the embedded space, computed as
$$R(i,j) = \Theta\big(\varepsilon - \|\mathbf{Y}(t_1) - \mathbf{Y}(t_2)\|\big)$$

where $\Theta$ is the step function, $\varepsilon$ the threshold of the recurrence plot, and $\|\mathbf{Y}(t_1) - \mathbf{Y}(t_2)\|$ the Euclidean distance between the states at times $t_1$ and $t_2$; if the Euclidean distance is smaller than the threshold $\varepsilon$ then $R(i,j)=1$, indicating a recurrence, and otherwise $R(i,j)=0$, indicating none. The density of the recurrence plot, i.e. the recurrence rate, is then

$$RR = \frac{1}{N^{2}} \sum_{i,j=1}^{N} R(i,j)$$

where $N$ is the size of the recurrence plot, and the percentage of recurrence points forming diagonal lines in the recurrence plot, i.e. the determinism, is

$$DET = \frac{\sum_{l \ge L_{min}} l\, P(l)}{\sum_{l} l\, P(l)}$$

where $L_{min}$ is the minimum diagonal length in the recurrence plot and $P(l)$ the frequency distribution of diagonal lines of length $l$. By introducing the autonomic neural network and the recurrence-plot method, the algorithm adds strong adaptability and deep analysis capability to the pupil instrument based on the image processing core algorithm: the neural network lets the system capture the pupil's response patterns under complex stimuli more accurately, including efficient adaptation to illumination conditions and visual stimuli, while the recurrence-plot method extracts important characteristics of the pupil dynamics, including the recurrence rate and the determinism. The recurrence rate reflects how often states repeat in the pupil dynamics and the determinism reflects the predictability of the system; these characteristics describe the complexity and regularity of pupil dynamics more comprehensively, give the system a deep understanding of non-stationary pupil dynamics, allow the algorithm to capture subtle pupil changes in changing environments more completely, and improve adaptability to individual differences and the diversity of eye structures.
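The recurrence rate and determinism can be computed directly from a delay-embedded pupil-size series, as in the following sketch; the embedding parameters m and τ and the choice of ε as a fixed fraction of the maximum pairwise distance are illustrative assumptions.

```python
import numpy as np

def embed(y: np.ndarray, m: int = 3, tau: int = 2) -> np.ndarray:
    """Delay embedding: rows are Y(t) = (y(t), y(t+tau), ..., y(t+(m-1)tau))."""
    n = len(y) - (m - 1) * tau
    return np.stack([y[i * tau : i * tau + n] for i in range(m)], axis=1)

def recurrence_plot(Y: np.ndarray, eps_frac: float = 0.1) -> np.ndarray:
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    eps = eps_frac * D.max()           # threshold as a fraction of the max distance
    return (D < eps).astype(np.uint8)  # R(i, j) = Theta(eps - ||Y(t1) - Y(t2)||)

def recurrence_rate(R: np.ndarray) -> float:
    return R.sum() / R.size            # RR: density of the recurrence plot

def determinism(R: np.ndarray, l_min: int = 2) -> float:
    """DET: share of recurrence points on diagonal lines of length >= l_min."""
    N = R.shape[0]
    lengths = []
    for k in range(-(N - 1), N):       # scan every diagonal of R
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:  # trailing 0 flushes the run
            if v:
                run += 1
            elif run:
                lengths.append(run)
                run = 0
    lengths = np.array(lengths, dtype=float)
    total = lengths.sum()
    if total == 0:
        return 0.0
    return lengths[lengths >= l_min].sum() / total
```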
Preferably, the data storage management module is used for efficiently storing the collected eye data, including the original eye image data, the measurement parameters and the processed pupil data, so as to realize reliable data retrieval and management and ensure the integrity and traceability of the data.
Preferably, the system operation and maintenance module is used for integrating each module, coordinating the system operation, providing the system fault detection and automatic correction functions, simultaneously feeding back the user operation and the system state, and ensuring the stability and reliability of the system in long-time operation.
Preferably, the user interface management module is used for realizing interaction between the user and the pupil instrument system, providing an intuitive, user-friendly interface through which the user can flexibly set flash parameters, view pupil measurement results and manage the running state of the system.
Compared with the prior art, the invention has the beneficial effects that:
1. The pupil positioning analysis unit provides a pupil positioning algorithm based on gray information. By comprehensively analyzing the gray information of the rectangular region in which the pupil sits, the algorithm rapidly and accurately locates the pupil center and radius, improving the accuracy of eye-structure measurement. When facing the complex noise at the iris and sclera boundary, the algorithm adopts an edge-detection threshold analysis method based on statistical principles to extract the outer boundary pixels of the iris, and processes the pixels with a Gaussian kernel function to sharpen the eyeball boundary. In addition, through binarization segmentation and median filtering, the algorithm effectively eliminates image noise caused by thick eyelashes, eyelid coverage and uneven illumination, ensuring the clarity of the extracted pupil region and further purifying the image. Using the statistical function to extract large-gradient regions and a non-extremum suppression step to extract edge pixels gives the algorithm strong resistance to image noise, so that stable pupil measurement is achieved in complex environments, the pupil instrument based on the image processing core algorithm suits complex practical application scenarios, and an innovative solution is provided for the unstable system performance of such pupil meters under changing illumination conditions;
2. The pupil dynamic analysis module provides a pupil dynamic analysis algorithm based on an autonomic neural network. By introducing the autonomic neural network and the recurrence-plot method, the algorithm adds strong adaptability and deep analysis capability to the pupil instrument based on the image processing core algorithm. The neural network lets the system capture the pupil's response patterns under complex stimuli more accurately, including efficient adaptation to illumination conditions and visual stimuli, while the recurrence-plot method extracts important characteristics of the pupil dynamics, including the recurrence rate and the determinism: the recurrence rate reflects how often states repeat in the pupil dynamics, and the determinism reflects the predictability of the system. These characteristics describe the complexity and regularity of pupil dynamics more comprehensively, give the system a deep understanding of non-stationary pupil dynamics, allow the algorithm to capture subtle pupil changes in changing environments more completely, improve adaptability to individual differences and the diversity of eye structures, and realize more comprehensive data analysis. The adaptability of the pupil instrument based on the image processing core algorithm in actual use is thereby improved, especially its stability and applicability in the face of changing illumination conditions and individual differences, providing an improved solution over the existing technology for dynamic pupil adaptation.
Drawings
The invention will be further described with reference to the accompanying drawings; the embodiments in the drawings do not constitute any limitation of the invention, and one of ordinary skill in the art can obtain other drawings from the following drawings without any inventive effort.
FIG. 1 is a schematic diagram of the structure of the present invention;
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the invention provides a pupil meter based on an image processing core algorithm, which comprises a flash lamp control module, an image acquisition module, an image processing module, a pupil data processing module, a pupil dynamic analysis module, a data storage management module, a system operation and maintenance module and a user interface management module. The flash lamp control module coordinates, triggers and adjusts the flash operation required in the pupil measurement process, including turning the flash on and off, adjusting its brightness and holding its duration. The image acquisition module acquires high-resolution images of the eye, ensuring fine capture of the eye structure and providing high-quality input for subsequent image processing. The image processing module performs gray-processing and noise-removal preprocessing on the acquired images to optimize the accuracy of pupil positioning and measurement. The pupil data processing module comprises a pupil positioning analysis unit and a pupil distance measurement unit: the pupil positioning analysis unit extracts pupil data from the original image data and rapidly locates the pupil position through infrared light reflected from the cornea and retina, and the pupil distance measurement unit measures and calculates the pupil diameter. The pupil dynamic analysis module applies a pupil dynamic analysis algorithm based on an autonomic neural network to analyze the contraction and expansion of the pupil while the flash is turned on and off and to calculate the minimum-diameter and average-speed parameters. The data storage management module stores and manages the acquired image data. The system operation and maintenance module integrates the modules, coordinates system operation, and provides fault detection and reporting. The user interface management module generates eye data reports and displays them in charts.
Referring to fig. 1, further, the flash control module is configured to accurately control the flash of the pupillary device, including turning on and off, and adjusting the brightness and the duration of the hold, so as to adapt to different environmental conditions and user requirements, and ensure sufficient illumination without causing discomfort.
Referring to fig. 1, further, the image acquisition module is configured to acquire an infrared light image reflected from an eye in real time, and has high resolution and high sensitivity, so as to capture detailed information of an eye structure, and ensure clear and reliable image quality.
Referring to fig. 1, further, the image processing module is configured to perform preprocessing operations on the acquired eye image, including gray level processing and noise removal, so as to optimize image analysis of the pupil data processing module and ensure definition and accuracy of the pupil area.
Referring to fig. 1, further, the pupil data processing module includes a pupil positioning analysis unit, and the pupil positioning analysis unit proposes a pupil positioning algorithm based on gray information for rapidly and accurately extracting and positioning the rough position of the pupil, so as to ensure accurate positioning of the pupil under different eye conditions and environmental conditions.
Referring to fig. 1, further, the pupil positioning algorithm based on gray information proceeds as follows. First, let the preprocessed gray image input to the pupil positioning analysis unit be $I_{inp}$, of size $a \times b$. A smooth-scale simulation domain is set over $I_{inp}$ and an image surface $f(x,y)$ is computed with a Gaussian kernel as a weighted average of the pixel gray values:

$$f(x,y) = \frac{\sum_{i=1}^{a} \sum_{j=1}^{b} K(x-i,\, y-j)\, Z_{ij}}{\sum_{i=1}^{a} \sum_{j=1}^{b} K(x-i,\, y-j)}$$

where $f(x,y)$ is the estimated value of the image surface, $L$ is a smoothing parameter that determines the degree of smoothing, $a$ and $b$ are the width and height of the gray image, $i$ and $j$ index the pixels, and $Z_{ij}$ is the gray value at position $(i,j)$; Gaussian-weighted averaging of the gray values $Z_{ij}$ yields the smoothed image surface estimate $f(x,y)$. The Gaussian kernel $K(x,y)$ is

$$K(x,y) = \exp\!\left(-\frac{x^{2}+y^{2}}{2L^{2}}\right)$$

where $\exp(\cdot)$ denotes the exponential operation; the exponential part of $K(x,y)$ gives pixels near the center of the window a higher weight that gradually decreases with distance from the center, and $L$ controls the degree of smoothing. The partial derivatives of the image surface $f(x,y)$ with respect to $x$ and $y$ are then computed, giving the gray gradient vector

$$\nabla f(x,y) = \big(f_x(x,y),\, f_y(x,y)\big), \qquad f_x(x,y) = \frac{\partial f(x,y)}{\partial x}, \quad f_y(x,y) = \frac{\partial f(x,y)}{\partial y}$$

which describes the direction and intensity of the gray-level change at $(x,y)$. Second, the computed gray gradient vectors are thresholded to obtain a binary image $I_{bin}$ in which the pupil region is separated out:

$$I_{bin}(i,j) = \begin{cases} 1, & T(x,y) \ge T_2 \\ 0, & T(x,y) \le T_1 \end{cases}$$

where $I_{bin}(i,j)$ is the preliminary edge image after gray thresholding, $T(x,y)$ is a statistical function, and $T_1$ and $T_2$ are thresholds chosen from statistical information of the gradient amplitude so as to distinguish the pupil from other eye structures, satisfying $T_2 > T_1$; values falling between the two thresholds are resolved in the subsequent edge-extraction step. The statistical function is computed from the covariance matrix $E(x,y)$ of the gradient components over a local neighborhood of $(x,y)$:

$$E(x,y) = \begin{pmatrix} \sigma_{xx} & \sigma_{xy} \\ \sigma_{xy} & \sigma_{yy} \end{pmatrix}, \qquad \sigma_{xx} = \overline{(f_x-\bar{f_x})^{2}}, \quad \sigma_{yy} = \overline{(f_y-\bar{f_y})^{2}}, \quad \sigma_{xy} = \overline{(f_x-\bar{f_x})(f_y-\bar{f_y})}$$

where the bar denotes averaging over the neighborhood, $\bar{f_x}$ is the mean of $f_x(x,y)$ and reflects the degree of change of the gradient in the $x$ direction, and $\bar{f_y}$ is the mean of $f_y(x,y)$ and reflects the degree of change in the $y$ direction. The statistical function then aggregates the gradient variation described by the covariance matrix, e.g. through its trace:

$$T(x,y) = \sigma_{xx} + \sigma_{yy}$$

The statistical function $T(x,y)$ thus describes the variation of the gradient vector over the image surface, and the preliminary edge image $I_{bin}(i,j)$ is obtained by threshold judgment of the statistical function. The binary image $I_{bin}$ is further denoised with a median filter to eliminate noise caused by thick eyelashes, eyelid occlusion and uneven illumination, improving the accuracy of subsequent edge detection; the median filter is constructed as

$$I_{edge}(i,j) = \mathrm{med}\{\, I_{bin}(u,v) : (u,v) \in N(i,j) \,\}$$

where $I_{edge}$ is the final edge image matrix after median filtering and $\mathrm{med}$ denotes the median operation: the pixel values in the neighborhood $N(i,j)$ are sorted by gray value, the middle value is selected as the median, and it is assigned to the pixel at the corresponding position, forming the final edge image $I_{edge}$. Finally, every pixel in the image is traversed; if a pixel is marked as an edge point, i.e. $I_{edge}(i,j)=1$, the pupil edge is further detected and tracked: its neighborhood pixels $(j,k)$ are traversed and, if $I_{edge}(j,k)=1$, then $(j,k)$ is marked as part of the edge track. The final pupil edge information is output and stored in the matrix $I_{edge}$, completing the gray-information-based pupil positioning. Through binarization segmentation and median filtering, the algorithm effectively eliminates image noise caused by thick eyelashes, eyelid coverage and uneven illumination, ensuring the clarity of the extracted pupil region and further purifying the image; using the statistical function to extract large-gradient regions and a non-extremum suppression step to extract edge pixels gives the algorithm strong resistance to image noise, so that robust pupil measurement is achieved in complex environments and the pupil instrument based on the image processing core algorithm suits complex practical application scenarios.
Referring to fig. 1, further, the pupil data processing module includes a pupil distance measuring unit, which analyzes the located pupil data, calculates the pupil diameter parameters and determines the pupil distance, ensuring the accuracy and consistency of pupil measurement.
Referring to fig. 1, further, the pupil dynamic analysis module proposes a pupil dynamic analysis algorithm based on an autonomic neural network for analyzing the dynamic changes of the pupil, including contraction and expansion, while the flash is turned on and off, and for calculating the minimum-diameter and average-speed parameters, ensuring the system's comprehensive analysis and understanding of pupil behavior.
Referring to fig. 1, further, the pupil dynamic analysis algorithm based on the autonomic neural network proceeds as follows. First, the pupil trajectory is extracted: a region growing algorithm tracks the pupil data in each individual eye image, identifying a continuous region starting from the darkest point, with gray information as the growth criterion:

$$|I(x,y) - I_{seed}| < H$$

where $I(x,y)$ is the gray value at the point $(x,y)$ of the pupil data, $I_{seed}$ is the gray value of the seed point, the starting point from which growth begins, and $H$ is a threshold. By growing only through connected regions, the algorithm ensures that the identified region is continuous and contains only the pupil, yielding an accurate pupil trajectory and providing the basis for the subsequent dynamic analysis. Second, the pupil size is treated as a time series $Y(t)$ and features are extracted by time-series embedding, after which an autonomic neural network model is constructed to learn the nonlinear laws of pupil dynamics; new feature vectors are built from time-delayed versions of the data to form the embedding matrix

$$\mathbf{Y}(t) = \big(Y(t),\, Y(t+\tau),\, \ldots,\, Y(t+(m-1)\tau)\big)$$

where $\tau$ is the time delay and $m$ the embedding dimension. A recurrent neural network model is then constructed, introducing memory into the network through recurrence to capture long-term dependencies in the time series; its hidden state $h(t)$ is updated as

$$h(t) = f\big(W_{ih}\, Y(t) + W_{hh}\, h(t-1)\big)$$

where $W_{ih}$ is the input-to-hidden weight matrix, $W_{hh}$ the hidden-to-hidden weight matrix, $f(\cdot)$ an activation function, $h(t-1)$ the hidden state at the previous moment, and $t$ the current moment. The recurrent structure keeps an implicit state inside the network, so it can memorize earlier information and take the preceding context into account when processing new data. To overcome the vanishing- and exploding-gradient problems of recurrent networks on long sequences, a long short-term memory (LSTM) network model is further constructed, which captures long-term dependencies better by introducing a memory cell and a gating mechanism. Its update formulas comprise an input gate, a forget gate and an output gate, computed respectively as

$$I_t = \sigma\big(W_I \cdot [h(t-1), Y(t)] + b_I\big)$$
$$F_t = \sigma\big(W_F \cdot [h(t-1), Y(t)] + b_F\big)$$
$$O_t = \sigma\big(W_O \cdot [h(t-1), Y(t)] + b_O\big)$$

where $I_t$ is the output of the input gate with weight matrix $W_I$, $F_t$ the output of the forget gate with weight matrix $W_F$, $O_t$ the output of the output gate with weight matrix $W_O$, $b_I$, $b_F$ and $b_O$ the respective bias terms, and $\sigma$ the Sigmoid activation function. A current candidate value between $-1$ and $1$ is generated by a linear transformation of the current input $Y(t)$ and the previous hidden state $h(t-1)$, scaled by the tanh activation function:

$$\tilde{C}_t = \tanh\big(W_C \cdot [h(t-1), Y(t)] + b_C\big)$$

where $W_C$ is the weight matrix associated with the candidate value and $b_C$ its bias term; the candidate value $\tilde{C}_t$ represents pupil information to be added to the cell state and is a temporary variable containing potential new pupil information. The forget gate $F_t$ controls which part of the cell state is forgotten and the input gate $I_t$ controls which part of the candidate value $\tilde{C}_t$ is added, updating the cell state as

$$C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t$$

where $C_{t-1}$ is the cell state at the previous moment; the cell state is the main long-term store through which the LSTM keeps and transmits information. The output gate $O_t$ controls the flow of pupil information out of the cell state, whose content is scaled by the tanh activation function to produce the final hidden state

$$h(t) = O_t \odot \tanh(C_t)$$

Through this gating mechanism the LSTM finely controls the selection of the candidate value, the updating of the cell state and the generation of the hidden state, better capturing and transferring pupil information across long sequences and thus capturing complex pupil dynamic patterns more effectively. Finally, the two-dimensional trajectory of pupil size and center motion is extracted and analyzed with a recurrence plot and recurrence quantification analysis. The recurrence plot is a binary matrix whose element $R(i,j)$ describes whether a recurrence exists between states $i$ and $j$ in the embedded space:

$$R(i,j) = \Theta\big(\varepsilon - \|\mathbf{Y}(t_1) - \mathbf{Y}(t_2)\|\big)$$

where $\Theta$ is the step function, $\varepsilon$ the recurrence-plot threshold, and $\|\mathbf{Y}(t_1) - \mathbf{Y}(t_2)\|$ the Euclidean distance between the states at times $t_1$ and $t_2$; if the distance is smaller than $\varepsilon$ then $R(i,j)=1$, indicating a recurrence, otherwise $R(i,j)=0$. The density of the recurrence plot, i.e. the recurrence rate, is

$$RR = \frac{1}{N^{2}} \sum_{i,j=1}^{N} R(i,j)$$

where $N$ is the size of the recurrence plot, and the percentage of recurrence points forming diagonal lines, i.e. the determinism, is

$$DET = \frac{\sum_{l \ge L_{min}} l\, P(l)}{\sum_{l} l\, P(l)}$$

where $L_{min}$ is the minimum diagonal length and $P(l)$ the frequency distribution of diagonal lines of length $l$. By introducing the autonomic neural network and the recurrence-plot method, the algorithm adds strong adaptability and deep analysis capability to the pupil instrument based on the image processing core algorithm: the neural network lets the system capture the pupil's response patterns under complex stimuli more accurately, including efficient adaptation to illumination conditions and visual stimuli, while the recurrence-plot method extracts important characteristics of the pupil dynamics, including the recurrence rate and the determinism. The recurrence rate reflects how often states repeat in the pupil dynamics and the determinism reflects the predictability of the system; these characteristics describe the complexity and regularity of pupil dynamics more comprehensively, give the system a deep understanding of non-stationary pupil dynamics, allow the algorithm to capture subtle pupil changes in changing environments more completely, and improve adaptability to individual differences and the diversity of eye structures.
Referring to fig. 1, further, the data storage management module is configured to efficiently store collected eye data, including original eye image data, measurement parameters, and processed pupil data, so as to implement reliable data retrieval and management, and ensure data integrity and traceability.
Referring to fig. 1, further, the system operation and maintenance module is configured to integrate each module, coordinate system operation, provide functions of system fault detection and automatic correction, and simultaneously feed back user operation and system status, so as to ensure stability and reliability of the system in long-time operation.
Referring to fig. 1, further, the user interface management module is configured to realize interaction between the user and the pupil meter system, providing an intuitive, user-friendly interface through which the user can flexibly set flash parameters, view pupil measurement results and manage the running state of the system.
When the system is specifically used, the flash lamp control module first coordinates, triggers and adjusts the flash operation required in the pupil measurement process, including turning the flash on and off, adjusting its brightness and holding its duration. The image acquisition module then acquires high-resolution images of the eye, ensuring fine capture of the eye structure and providing high-quality input for subsequent image processing, after which the image processing module performs gray-processing and noise-removal preprocessing on the acquired images to optimize the accuracy of pupil positioning and measurement. The pupil positioning analysis unit in the pupil data processing module next extracts pupil data from the original image data with the gray-information-based pupil positioning algorithm and rapidly locates the pupil through infrared light reflected from the cornea and retina, and the pupil distance measurement unit measures and calculates the pupil diameter. The pupil dynamic analysis module then analyzes the contraction and expansion of the pupil while the flash is turned on and off using the autonomic-neural-network-based pupil dynamic analysis algorithm. Finally, the data storage management module stores and manages the acquired image data, the system operation and maintenance module integrates the modules, coordinates their operation and provides fault detection and reporting, and the user interface management module generates an eye data report and displays it in charts.
Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described, or equivalents may be substituted for elements thereof, and any modifications, equivalents, improvements and changes may be made without departing from the spirit and principles of the present invention.

Claims (10)

1. A pupil instrument based on an image processing core algorithm, comprising a flash lamp control module, an image acquisition module, an image processing module, a pupil data processing module, a pupil dynamic analysis module, a data storage management module, a system operation and maintenance module and a user interface management module, characterized in that: the flash lamp control module is used for coordinating, triggering and adjusting the flash operation required in the pupil measurement process, including turning on and off, brightness adjustment and duration holding; the image acquisition module is used for acquiring high-resolution images of the eye, ensuring careful capture of the eye structure and providing high-quality input for subsequent image processing; the image processing module is used for performing gray-processing and noise-removal preprocessing on the acquired images so as to optimize the accuracy of pupil positioning and measurement; the pupil data processing module comprises a pupil positioning analysis unit and a pupil distance measurement unit, the pupil positioning analysis unit being used for extracting pupil data from the original image data with a pupil positioning algorithm based on gray information and rapidly locating the pupil position through infrared light reflected from the cornea and retina, and the pupil distance measurement unit being used for measuring and calculating the pupil diameter; the pupil dynamic analysis module provides a pupil dynamic analysis algorithm based on an autonomic neural network for analyzing the contraction and expansion of the pupil while the flash lamp is turned on and off and calculating the minimum-diameter and average-speed parameters; the data storage management module is used for storing and managing the acquired images; the system operation and maintenance module is used for integrating the modules, coordinating system operation, and providing fault detection and reporting; and the user interface management module is used for generating eye data reports and displaying them in charts.
2. The pupil apparatus based on an image processing core algorithm as claimed in claim 1, wherein: the flash lamp control module is used for accurately controlling the flash lamp of the pupil instrument, and comprises the steps of starting and closing, and adjusting the brightness and the holding time length so as to adapt to different environmental conditions and user requirements and ensure sufficient illumination without causing discomfort.
3. The pupil apparatus based on an image processing core algorithm as claimed in claim 1, wherein: the image acquisition module is used for acquiring infrared light images reflected from eyes in real time, has high resolution and high sensitivity, and is used for capturing detailed information of eye structures and ensuring clear and reliable image quality.
4. The pupil apparatus based on an image processing core algorithm as claimed in claim 1, wherein: the image processing module is used for preprocessing the acquired eye images, including gray level processing and noise removal, so as to optimize the image analysis of the pupil data processing module and ensure the definition and accuracy of the pupil area.
5. The pupil apparatus based on an image processing core algorithm as claimed in claim 1, wherein: the pupil data processing module comprises a pupil positioning analysis unit, and the pupil positioning analysis unit provides a pupil positioning algorithm based on gray information for rapidly and accurately extracting and positioning the rough position of the pupil, so that the pupil is accurately positioned under different eye conditions and environment conditions.
6. The pupil instrument based on an image processing core algorithm as claimed in claim 5, wherein the pupil positioning algorithm based on gray information is specifically as follows: first, let the preprocessed gray image I_inp input to the pupil positioning analysis unit have size a×b; a smooth-scale simulation domain is set for the gray image I_inp and an image surface f(x, y) is calculated with a Gaussian kernel function, the calculation formula being expressed as:

f(x, y) = [ Σ_{i=1..a} Σ_{j=1..b} K((x-i)/L, (y-j)/L) · Z_ij ] / [ Σ_{i=1..a} Σ_{j=1..b} K((x-i)/L, (y-j)/L) ]

where f(x, y) represents the calculated value of the image surface, L represents a smoothing parameter that determines the degree of smoothing, a and b represent the width and height of the gray image and together give its size, i and j represent the pixel indices of the image, and Z_ij represents the gray value of the image at position (i, j); a Gaussian-weighted average over the gray values Z_ij of the pixels yields the smoothed image-surface estimate f(x, y), with the Gaussian kernel function K(x, y) expressed as:

K(x, y) = (1/(2π)) · exp(-(x^2 + y^2)/2)

where exp(·) denotes the exponential operation; the exponential part of the Gaussian kernel K(x, y) determines the weight of each pixel in the weighted average of the image, K(x, y) assigning a higher weight to pixels near the evaluation point and a gradually decreasing weight to pixels farther away, while L controls the degree of smoothing through the scaled kernel arguments; then the partial derivatives of the image surface f(x, y) with respect to x and y are calculated, giving the gray gradient vector:

∇f(x, y) = ( f_x(x, y), f_y(x, y) )^T

where ∇f(x, y) represents the gray gradient vector of the image at position (x, y), describing the direction and intensity of the gray-level change at the point (x, y), and f_x(x, y) and f_y(x, y) represent the partial derivatives of f(x, y) with respect to x and y, calculated as:

f_x(x, y) = ∂f(x, y)/∂x,  f_y(x, y) = ∂f(x, y)/∂y

Second, the calculated gray gradient vectors are thresholded to obtain a binary image I_bin in which the pupil region is converted into binary form; the gray-gradient thresholding formula is:

I_bin(i, j) = 1 if T_1 < T(i, j) < T_2, and I_bin(i, j) = 0 otherwise

where I_bin(i, j) is the preliminary edge image after the gray thresholding operation, T(x, y) is a statistical function, and T_1 and T_2 are thresholds satisfying T_2 > T_1, chosen on the basis of statistical information of the gradient amplitude so as to distinguish the pupil from other ocular structures; the statistical function is computed from the covariance matrix, the mathematical form of the covariance matrix E(x, y) being:

E(x, y) = [ C_xx  C_xy ; C_xy  C_yy ]

where C_xx and C_yy are the elements of the covariance matrix, calculated respectively as:

C_xx = mean( (f_x(x, y) - mean(f_x))^2 ),  C_yy = mean( (f_y(x, y) - mean(f_y))^2 )

where mean(f_x) represents the average value of f_x(x, y) and captures the degree of variation of the gradient in the x direction, and mean(f_y) represents the average value of f_y(x, y) and captures the degree of variation of the gradient in the y direction; the statistical function T(x, y) is then computed from this covariance matrix and describes the variation of the gradient vectors over the image surface, and threshold judgment of the statistical function yields the preliminary edge image I_bin(i, j); the binary image I_bin is further denoised with a median filter to eliminate noise caused by thick eyelashes, eyelid occlusion, and uneven illumination and to improve the accuracy of subsequent edge detection, the median filter being constructed as:

I_edge(i, j) = med{ I_bin(p, q) : (p, q) ∈ N(i, j) }

where I_edge represents the final edge-image matrix processed by the median filter and med denotes the median operation: the pixel values in the neighborhood N(i, j) are sorted by gray value, the middle value of the sorted sequence is selected as the median, and the median is assigned to the pixel at the corresponding position in the image, forming the final edge image I_edge; finally, every pixel in the image is traversed, and if a pixel is marked as an edge point, i.e. I_edge(i, j) = 1, the pupil edge is further accurately detected and tracked by traversing its neighboring pixels (p, q): if I_edge(p, q) = 1, then (p, q) is marked as part of the edge track, and the final pupil edge information is output and stored in the matrix I_edge, completing pupil positioning based on gray information; through binarization segmentation and median filtering, the algorithm effectively eliminates image noise caused by thick eyelashes, eyelid coverage, and uneven illumination, ensures the clarity of the extracted pupil region, and further purifies the image; large-gradient regions are extracted by the statistical function and edge pixels by a non-extremum suppression algorithm, giving the algorithm high resistance to image noise, so that robust pupil measurement is achieved in complex environments and the pupil instrument based on an image processing core algorithm is suited to complex practical application scenarios (an illustrative code sketch of this positioning pipeline follows the claim).
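For illustration only (not part of the claims): a minimal NumPy/SciPy sketch of the gray-information positioning pipeline of claim 6, i.e. Gaussian smoothing, gradient computation, thresholding of a covariance-based statistic, median filtering, and selection of the pupil component. The claim does not fully specify the statistical function T(x, y), so the gradient-covariance trace, the quantile thresholds, the window size, and the darkest-point component selection below are assumptions.

    import numpy as np
    from scipy import ndimage

    def locate_pupil_region(gray: np.ndarray, L: float = 2.0,
                            q1: float = 0.90, q2: float = 0.995) -> np.ndarray:
        """Smooth, take gradients, threshold a covariance statistic, median-filter,
        then keep the connected component nearest the darkest point."""
        img = gray.astype(float)

        # Gaussian-kernel image surface f(x, y); L controls the smoothing scale.
        f = ndimage.gaussian_filter(img, sigma=L)

        # Gray gradient vector (f_x, f_y).
        fy, fx = np.gradient(f)

        # Local covariance of the gradient field over a small window
        # (stand-in for the claim's covariance matrix E(x, y)).
        win = 5
        mx = ndimage.uniform_filter(fx, win)
        my = ndimage.uniform_filter(fy, win)
        cxx = ndimage.uniform_filter(fx * fx, win) - mx * mx
        cyy = ndimage.uniform_filter(fy * fy, win) - my * my

        # Stand-in statistical function T(x, y): trace of the covariance matrix.
        T = cxx + cyy

        # Double thresholding T_1 < T < T_2; quantiles of T are an assumption.
        t1, t2 = np.quantile(T, q1), np.quantile(T, q2)
        binary = ((T > t1) & (T < t2)).astype(np.uint8)

        # Median filtering suppresses eyelash / eyelid / illumination noise.
        clean = ndimage.median_filter(binary, size=3)

        # Edge tracking: keep the connected component closest to the darkest pixel.
        labels, n = ndimage.label(clean)
        if n == 0:
            return clean
        seed = np.unravel_index(np.argmin(f), f.shape)
        centroids = ndimage.center_of_mass(clean, labels, range(1, n + 1))
        dists = [np.hypot(c[0] - seed[0], c[1] - seed[1]) for c in centroids]
        keep = int(np.argmin(dists)) + 1
        return (labels == keep).astype(np.uint8)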
7. The pupil instrument based on an image processing core algorithm as claimed in claim 1, wherein: the pupil data processing module comprises a pupil distance measurement unit for analyzing the positioned pupil data, calculating the pupil diameter parameter, and determining the pupil distance, ensuring the accuracy and consistency of pupil measurement.
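For illustration only: a minimal sketch of a diameter computation for the claim-7 measurement unit, assuming a binary pupil mask such as the one produced by the sketch above; the equivalent-circular-diameter formula is a common choice and an assumption here, and converting pixels to millimetres would require a calibration factor from the optical setup.

    import numpy as np

    def pupil_diameter_px(pupil_mask: np.ndarray) -> float:
        """Equivalent circular diameter (in pixels) of the segmented pupil region."""
        area = float(np.count_nonzero(pupil_mask))
        return 2.0 * np.sqrt(area / np.pi)  # from A = pi * (d/2)^2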
8. The pupil instrument based on an image processing core algorithm as claimed in claim 1, wherein: the pupil dynamic analysis module provides a pupil dynamic analysis algorithm based on an autonomous neural network for analyzing the dynamic changes of the pupil, including contraction and dilation, as the flash is switched on and off, and for calculating the minimum-diameter and average-speed parameters, ensuring the system's overall analysis and understanding of pupil behavior.
9. The pupil instrument based on an image processing core algorithm as claimed in claim 8, wherein the pupil dynamic analysis algorithm based on the autonomous neural network is specifically as follows: first, the pupil trajectory is extracted: the pupil data in each individual eye image are tracked with a region growing algorithm, which identifies a continuous region starting from the darkest point, with gray information as the growth criterion; the growth criterion is calculated as:

|I(x, y) - I_seed| < H

where I(x, y) represents the gray value of the pupil data at point (x, y), I_seed represents the gray value of the seed point, the seed point being the starting point from which growth begins in the region growing algorithm, and H represents a threshold; by growing connected regions, the region growing algorithm ensures that the identified region is continuous and contains only the pupil, yielding an accurate pupil trajectory and providing the basis for the subsequent dynamic analysis; second, the pupil size is treated as a time series, features are extracted by time-delay embedding, and an autonomous neural network model is constructed to learn the nonlinear laws of pupil dynamics: given the time series y(t), new feature vectors are constructed from time-delayed copies of the data to form the embedding matrix whose rows are the vectors Y(t), calculated as:

Y(t) = [ y(t), y(t-τ), y(t-2τ), …, y(t-(m-1)τ) ]
where τ represents the time delay and m the embedding dimension; a recurrent neural network model is then constructed, introducing memory into the network through recurrence so as to capture long-term dependencies in the time series, the update formula for the hidden state h(t) of the recurrent network being:

h(t) = f(W_ih · Y(t) + W_hh · h(t-1))

where W_ih represents the weight matrix from the input layer to the hidden layer, W_hh represents the weight matrix from the hidden layer to the hidden layer, f(·) represents an activation function, h(t-1) represents the hidden state at the previous time step, and t represents the current time step; by introducing the recurrent structure, an implicit state is maintained in the neural network so that the network can remember previous information and take the prior context into account when processing new data; to overcome the vanishing-gradient and exploding-gradient problems of recurrent neural networks on long sequences, a long short-term memory (LSTM) network model is further constructed, which captures long-term dependencies in the time series better by introducing a memory cell and a gating mechanism; the update formulas of the LSTM model comprise an input gate, a forget gate, and an output gate, calculated respectively as:
I_t = σ(W_I · [h(t-1), Y(t)] + b_I)

F_t = σ(W_F · [h(t-1), Y(t)] + b_F)

O_t = σ(W_O · [h(t-1), Y(t)] + b_O)
where I_t represents the output of the input gate and W_I the weight matrix of the input gate, F_t represents the output of the forget gate and W_F the weight matrix of the forget gate, O_t represents the output of the output gate and W_O the weight matrix of the output gate, b_I, b_F, and b_O represent the bias terms of the input gate, the forget gate, and the output gate respectively, and σ represents the Sigmoid activation function; using the input Y(t) at the current time step and the hidden state h(t-1) at the previous time step, a linear transformation scaled by the tanh activation function generates the current candidate value C~_t, lying between -1 and 1, calculated as:

C~_t = tanh(W_C · [h(t-1), Y(t)] + b_C)

where W_C represents the weight matrix associated with the candidate value and b_C represents its bias term; the current candidate value represents the pupil information to be added to the cell state and is a temporary variable containing potential new pupil information; the forget gate F_t controls the forgotten part of the cell state, while the input gate I_t controls the part of the current candidate value C~_t that is added to the cell state, and the cell state is updated as:

C_t = F_t · C_{t-1} + I_t · C~_t
where C_{t-1} represents the cell state at the previous time step, the cell state being the main long-term store through which information is kept and transmitted in the long short-term memory network; the output gate O_t controls the flow of pupil information from the cell state, whose content is scaled by the tanh activation function to generate the final hidden state:
h(t) = O_t · tanh(C_t)
Through its gating mechanism, the long short-term memory network finely controls the selection of the current candidate value, the update of the cell state, and the generation of the hidden state, so as to better capture and transfer pupil information across long sequences and thus capture complex pupil dynamic patterns more effectively; finally, the two-dimensional trajectory of pupil size and center motion is extracted and analyzed by recurrence quantification analysis using a recurrence plot, the recurrence plot being expressed as a binary matrix whose element R(i, j) describes whether a recurrence exists between states i and j in the embedding space, calculated as:

R(i, j) = Θ(ε - ||Y(t_i) - Y(t_j)||)

where Θ represents a step function, ε represents the recurrence threshold, and ||Y(t_i) - Y(t_j)|| represents the Euclidean distance between the embedded states at times t_i and t_j; if the Euclidean distance is smaller than the threshold ε then R(i, j) = 1, and otherwise R(i, j) = 0, indicating that no recurrence exists; at this point the density of the recurrence plot, i.e. the recurrence rate, is calculated as:

RR = (1/N^2) · Σ_{i,j=1..N} R(i, j)
where N represents the size of the recurrence plot; the percentage of recurrence points forming diagonal lines in the recurrence plot, i.e. the determinism, is calculated as:

DET = Σ_{l ≥ L_min} l·P(l) / Σ_{l} l·P(l)

where L_min represents the minimum diagonal length in the recurrence plot and P(l) represents the frequency distribution of diagonal lines of length l; by introducing the autonomous neural network and the recurrence-plot method, the algorithm adds strong adaptability and deep analysis capability to the pupil instrument based on an image processing core algorithm: the neural network allows the system to capture the pupil's response patterns under complex stimuli more accurately, including efficient adaptation to illumination conditions and visual stimuli, while the recurrence-plot method extracts key features of the pupil dynamics, including the recurrence rate, which reflects how often states repeat within the pupil dynamics, and the determinism, which reflects the predictability of the system; these features describe the complexity and regularity of pupil dynamics more comprehensively, provide the system with a deep understanding of non-stationary pupil dynamics, allow the algorithm to capture subtle pupil changes in varying environments more fully, and improve adaptability to individual differences and the diversity of eye structures (illustrative code sketches of the region growing, the LSTM update, and the recurrence analysis follow the claim).
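For illustration only (not part of the claims): a minimal NumPy sketch of the region-growing step of claim 9, growing from the darkest pixel under the criterion |I(x, y) - I_seed| < H; the seed choice (global minimum) and 4-connectivity are assumptions consistent with, but not dictated by, the claim text.

    from collections import deque
    import numpy as np

    def grow_pupil_region(gray: np.ndarray, H: float = 15.0) -> np.ndarray:
        """Region growing from the darkest point with gray-value criterion |I - I_seed| < H."""
        seed = np.unravel_index(np.argmin(gray), gray.shape)  # darkest point as seed
        seed_val = float(gray[seed])
        mask = np.zeros(gray.shape, dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        while queue:
            i, j = queue.popleft()
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connected growth
                ni, nj = i + di, j + dj
                if (0 <= ni < gray.shape[0] and 0 <= nj < gray.shape[1]
                        and not mask[ni, nj]
                        and abs(float(gray[ni, nj]) - seed_val) < H):  # |I - I_seed| < H
                    mask[ni, nj] = True
                    queue.append((ni, nj))
        return mask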
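For illustration only: a plain-NumPy sketch of the time-delay embedding and of one LSTM update mirroring the gate equations of claim 9 (I_t, F_t, O_t, candidate C~_t, cell state C_t, hidden state h(t)); the weight shapes, random initialization, and demo sizes are illustrative assumptions, and a real system would train these weights on pupil time series.

    import numpy as np

    def embed(y: np.ndarray, m: int, tau: int) -> np.ndarray:
        """Time-delay embedding: row t is [y(t), y(t-tau), ..., y(t-(m-1)tau)]."""
        n = len(y) - (m - 1) * tau
        assert n > 0, "series too short for this (m, tau)"
        cols = [y[(m - 1) * tau + np.arange(n) - k * tau] for k in range(m)]
        return np.stack(cols, axis=1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(Yt, h_prev, C_prev, W, b):
        """One LSTM update: gates, candidate, cell-state update, hidden state."""
        z = np.concatenate([h_prev, Yt])        # [h(t-1), Y(t)]
        I = sigmoid(W["I"] @ z + b["I"])        # input gate I_t
        F = sigmoid(W["F"] @ z + b["F"])        # forget gate F_t
        O = sigmoid(W["O"] @ z + b["O"])        # output gate O_t
        C_tilde = np.tanh(W["C"] @ z + b["C"])  # candidate C~_t in (-1, 1)
        C = F * C_prev + I * C_tilde            # C_t = F_t*C_{t-1} + I_t*C~_t
        h = O * np.tanh(C)                      # h(t) = O_t * tanh(C_t)
        return h, C

    # Example wiring (hypothetical sizes): hidden dim 8, embedding dim m = 3.
    rng = np.random.default_rng(0)
    hid, m = 8, 3
    W = {k: rng.normal(scale=0.1, size=(hid, hid + m)) for k in "IFOC"}
    b = {k: np.zeros(hid) for k in "IFOC"}
    h, C = np.zeros(hid), np.zeros(hid)
    Y = embed(rng.normal(size=100), m=m, tau=2)
    for Yt in Y:                                # run the LSTM over the embedded series
        h, C = lstm_step(Yt, h, C, W, b)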
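For illustration only: a sketch of the recurrence plot R(i, j) = Θ(ε - ||Y(t_i) - Y(t_j)||) and of the two recurrence-quantification measures named in claim 9, the recurrence rate RR and the determinism DET; the unoptimized diagonal-line scan and the inclusion of the main diagonal are simplifying assumptions.

    import numpy as np

    def recurrence_plot(Y: np.ndarray, eps: float) -> np.ndarray:
        """R(i, j) = 1 where ||Y(i) - Y(j)|| < eps over embedded state vectors."""
        d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        return (d < eps).astype(np.uint8)

    def recurrence_rate(R: np.ndarray) -> float:
        """RR = (1/N^2) * sum of R(i, j)."""
        n = R.shape[0]
        return R.sum() / float(n * n)

    def determinism(R: np.ndarray, l_min: int = 2) -> float:
        """Fraction of recurrence points lying on diagonal lines of length >= l_min."""
        n = R.shape[0]
        total, on_lines = R.sum(), 0
        for k in range(-(n - 1), n):            # scan every diagonal of the plot
            diag = np.diagonal(R, offset=k)
            run = 0
            for v in list(diag) + [0]:          # trailing 0 closes the last run
                if v:
                    run += 1
                else:
                    if run >= l_min:
                        on_lines += run
                    run = 0
        return on_lines / total if total else 0.0

    # Example: R = recurrence_plot(Y, eps=0.5); rr = recurrence_rate(R); det = determinism(R)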
10. The pupil instrument based on an image processing core algorithm as claimed in claim 1, wherein: the data storage management module efficiently stores the collected eye data, including the original eye-image data, the measurement parameters, and the processed pupil data, enabling reliable data retrieval and management and ensuring data integrity and traceability; the system operation and maintenance module integrates the modules, coordinates system operation, provides system fault detection and automatic correction, and gives feedback on user operations and system state, ensuring the stability and reliability of the system in long-term operation; and the user interface management module implements the interaction between the user and the pupil instrument system, providing an intuitive and friendly interface through which the user can flexibly set the flash parameters, view the pupil measurement results, and manage the system's running state.
CN202410050422.3A 2024-01-12 2024-01-12 Pupil instrument based on image processing core algorithm Active CN117876488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410050422.3A CN117876488B (en) 2024-01-12 2024-01-12 Pupil instrument based on image processing core algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410050422.3A CN117876488B (en) 2024-01-12 2024-01-12 Pupil instrument based on image processing core algorithm

Publications (2)

Publication Number Publication Date
CN117876488A true CN117876488A (en) 2024-04-12
CN117876488B CN117876488B (en) 2024-07-02

Family

ID=90580724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410050422.3A Active CN117876488B (en) 2024-01-12 2024-01-12 Pupil instrument based on image processing core algorithm

Country Status (1)

Country Link
CN (1) CN117876488B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080253622A1 (en) * 2006-09-15 2008-10-16 Retica Systems, Inc. Multimodal ocular biometric system and methods
CN103136512A (en) * 2013-02-04 2013-06-05 重庆市科学技术研究院 Pupil positioning method and system
CN103445759A (en) * 2013-09-06 2013-12-18 重庆大学 Self-operated measuring unit for reaction of pupil aperture to light based on digital image processing
CN110345815A (en) * 2019-07-16 2019-10-18 吉林大学 A kind of creeper truck firearms method of sight based on Eye-controlling focus
CN114638879A (en) * 2022-03-21 2022-06-17 四川大学华西医院 Medical pupil size measuring system
CN114816055A (en) * 2022-04-14 2022-07-29 深圳市铱硙医疗科技有限公司 Eyeball motion track capturing and analyzing method, device and medium based on VR equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李蕊;张浪千;胡博;蒋政;吴广延;姚娟;刘运航;隋建峰: "Dynamic recognition and monitoring *** of pupillary response based on infrared video measurement technology", Journal of Third Military Medical University (第三军医大学学报), no. 07, 14 March 2013 (2013-03-14) *
王朝;杜志刚;王首硕;陈鑫;陈云: "Research on the visual characteristics of drivers in short tunnels on small-radius national and provincial trunk roads", Journal of Wuhan University of Technology (Transportation Science & Engineering) (武汉理工大学学报(交通科学与工程版)), no. 03, 15 July 2020 (2020-07-15) *

Also Published As

Publication number Publication date
CN117876488B (en) 2024-07-02

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant