CN111191639B - Vertigo type identification method and device based on eye shake, medium and electronic equipment

Vertigo type identification method and device based on eye shake, medium and electronic equipment

Info

Publication number
CN111191639B
CN111191639B
Authority
CN
China
Prior art keywords
eye
dizziness
eye shake
neural network
network model
Prior art date
Legal status
Active
Application number
CN202010170260.9A
Other languages
Chinese (zh)
Other versions
CN111191639A (en)
Inventor
李华伟
罗旭
屈寅弘
Current Assignee
Shanghai Zehnit Medical Technology Co ltd
Eye and ENT Hospital of Fudan University
Original Assignee
Shanghai Zehnit Medical Technology Co ltd
Eye and ENT Hospital of Fudan University
Priority date
Filing date
Publication date
Application filed by Shanghai Zehnit Medical Technology Co ltd, Eye and ENT Hospital of Fudan University
Priority to CN202010170260.9A
Publication of CN111191639A
Application granted
Publication of CN111191639B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • G06F2218/10Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present disclosure provides an eye shake-based dizziness type identification method, an identification device, a computer-readable storage medium and an electronic device. An eye shake video of a dizziness patient is acquired, and an eye shake signal, comprising a horizontal-direction eye shake signal and a vertical-direction eye shake signal, is extracted from the video; the dizziness type of the patient is then identified from the eye shake signals in each direction, specifically by inputting the eye shake signals into a neural network model, which automatically identifies the dizziness type. Because the neural network model identifies the dizziness type on the basis of the eye shake signals, identification is automatic, a large amount of work otherwise spent by physicians manually interpreting video data is saved, and fast and accurate data are provided for subsequent diagnosis and treatment.

Description

Vertigo type identification method and device based on eye shake, medium and electronic equipment
Technical Field
The invention relates to the field of dizziness identification, in particular to an eye shake-based dizziness type identification method, an eye shake-based dizziness type identification device, a computer-readable storage medium and electronic equipment.
Background
Balance disorders such as vertigo are common conditions that seriously affect human health and quality of life. They manifest as a spinning sensation, nausea, vomiting and autonomic symptoms such as cold sweats, and often cause psychological symptoms such as panic and anxiety; in severe cases patients cannot care for themselves, placing a great burden on families and society. Statistics show that the number of vertigo patients in China may reach 70 million per year, accounting for about 3.4-4.9% of the general population, and the risk of illness increases markedly with age. Patients with vertigo as the chief complaint account for about 80% of elderly outpatients, and secondary injuries caused by vertigo-related falls, such as fractures and brain trauma, pose an even greater threat to the health and quality of life of the elderly.
In maintaining body balance, the eyes provide visual information about posture, movement and the surrounding environment, joint proprioception provides information about the spatial position of the limbs, and the vestibular receptors of the inner ear provide vestibular information about head position and movement; this information is integrated and fed back through a complex neural network in the brain. About 60%-70% of clinical balance disorders are caused by dysfunction of the inner-ear vestibular apparatus. The peripheral vestibular system, formed by the otolith organs and the semicircular canals, senses gravity, linear acceleration and angular acceleration while the body is at rest or in motion; through the vestibulospinal reflex it regulates skeletal muscle movement to maintain balance at rest and during motion, and through the vestibulo-ocular reflex it regulates eye movement to keep visual images clear and stable during motion. Because of the vestibulo-ocular reflex, when vestibular function is abnormal the eyes often exhibit involuntary rhythmic movements with alternating fast and slow phases (nystagmus, referred to herein as eye shake). The abnormal vestibulo-ocular reflex thus becomes a mirror of vestibular function, and clinical observation and recording of spontaneous and induced eye shake allows a preliminary qualitative and localizing diagnosis of vertigo disease.
Clinical eye shake examination methods for vertigo disease include naked-eye examination, electronystagmography and videonystagmography. Naked-eye examination is easily affected by fixation suppression and cannot quantify the eye shake; electronystagmography detects the corneoretinal potential (CRP) difference and cannot record rotational eye shake; videonystagmography locates the pupil by collecting eye shake videos. The videonystagmography approach most widely used in clinical practice collects eye shake videos through video goggles, and physicians diagnose by viewing the recorded videos of the patient. An eye shake video contains a large amount of information and is long, so interpretation relies on manual review by physicians, which is time-consuming, poorly reproducible, and easily influenced by the physician's experience and subjective factors. Moreover, with patient numbers increasing year by year and medical resources in short supply, the workload of manual video interpretation is heavy and low diagnostic efficiency has become a prominent problem.
Disclosure of Invention
In view of the above, embodiments of the present application aim to provide an eye shake-based dizziness type identification method, an identification device, a computer-readable storage medium and an electronic device. An eye shake video of a dizziness patient is acquired and an eye shake signal, comprising a horizontal-direction eye shake signal and a vertical-direction eye shake signal, is extracted from the video; the dizziness type of the patient is then identified from the eye shake signals in each direction, specifically by inputting the eye shake signals into a neural network model, which automatically identifies the dizziness type. Because the neural network model identifies the dizziness type on the basis of the eye shake signals, identification is automatic, a large amount of work otherwise spent by physicians manually interpreting video data is saved, and fast and accurate data are provided for subsequent diagnosis and treatment.
According to an aspect of the present application, an embodiment of the present application provides a method for identifying a type of dizziness based on an eye shake, including: acquiring an eye shake video of a patient suffering from dizziness; extracting an eye shake signal according to the eye shake video; wherein the eye shake signals comprise a horizontal eye shake signal and a vertical eye shake signal; identifying the dizziness type of the dizziness patient according to the eye shake signal; the implementation mode for identifying the dizziness type of the dizziness patient comprises the following steps: inputting the eye vibration signals into a neural network model, and identifying the dizziness type of the dizziness patient through the neural network model.
In an embodiment, the extracting the eye shake signal according to the eye shake video includes: based on a set gray threshold, performing binarization processing on the image in the eye shake video; dividing the binarized image to obtain a pupil image; calculating to obtain a pupil center according to the pupil image; and extracting the eye shake signal according to the movement track of the pupil center in the eye shake video.
In an embodiment, the identifying, by the neural network model, the type of dizziness of the patient with dizziness comprises: extracting waveform characteristic data in the eye shake signals; respectively calculating the similarity between the waveform characteristic data and the waveform characteristic data corresponding to each dizziness type to obtain a plurality of similarities; and when one of the plurality of similarities is greater than a preset similarity threshold, determining that the dizziness patient is of the dizziness type corresponding to the similarity.
In an embodiment, the waveform characteristic data comprises any one or a combination of the following: eye shake latency, eye shake direction, eye shake duration, and eye shake intensity.
In an embodiment, the eye shake signal further comprises a rotation direction eye shake signal.
In an embodiment, the method for extracting the eye shake signal in the rotation direction includes: and converting the rotation motion track of the eyeball in the eye shake video into a plane diagram under polar coordinates, and obtaining the rotation direction eye shake signal according to the plane diagram.
In an embodiment, the neural network model comprises a WaveNet-based deep learning neural network model or a sliding window-based convolutional neural network model.
In an embodiment, before the identifying the dizziness type of the dizziness patient according to the eye shake signal, the identifying method further includes: the head position information and/or the body position information of the dizziness patient are/is known; the identifying the dizziness type of the dizziness patient according to the eye shake signal comprises: and identifying the dizziness type of the dizziness patient according to the eye shake signal, the head position information and/or the body position information.
In an embodiment, the head position information and/or body position information comprises any one or a combination of the following information: body posture, initial position of the head and/or body, three-dimensional angle, amount of angular change, angular change angular velocity, angular change angular acceleration.
In an embodiment, the learning of the head position information and/or the body position information of the vertigo patient includes: the head position information and/or the body position information are/is obtained through auxiliary transformation equipment, or the head position information and/or the body position information are/is obtained through a gyroscope.
In an embodiment, the auxiliary transforming device comprises any one of the following devices: a diagnosis bed, a swivel chair, a benign paroxysmal positional vertigo therapeutic instrument, a vertigo therapeutic instrument and a vertigo diagnosis instrument.
In one embodiment, the training method of the neural network model includes: and training the neural network model by taking the eye shake signals and the corresponding dizziness types as training samples.
In one embodiment, the training method of the neural network model includes: adjusting the parameter weight of the neural network model according to the benefit value of the reward function; determining the output quantity of a training sample according to the parameter weight of the neural network model; obtaining a benefit value of a reward function corresponding to the training sample according to the output quantity of the training sample; and stopping training the neural network model when the benefit value is greater than a preset benefit threshold.
According to another aspect of the present application, an embodiment of the present application provides an apparatus for identifying a type of vertigo based on an eye shake, including: the acquisition module is used for acquiring an eye shake video of the dizziness patient; the extraction module is used for extracting an eye shake signal according to the eye shake video; wherein the eye shake signals comprise a horizontal eye shake signal and a vertical eye shake signal; the identification module is used for identifying the dizziness type of the dizziness patient according to the eye shake signal; wherein the identification module is further configured to: inputting the eye vibration signals into a neural network model, and identifying the dizziness type of the dizziness patient through the neural network model.
According to another aspect of the present application, an embodiment of the present application provides a computer readable storage medium storing a computer program for executing any one of the above-described method for identifying a type of vertigo based on an eye shock.
According to another aspect of the present application, an embodiment of the present application provides an electronic device, including: a processor; a memory for storing the processor-executable instructions; the processor is used for executing the vertigo type recognition method based on the eye shake.
With the eye shake-based dizziness type identification method, identification device, computer-readable storage medium and electronic device of the present application, an eye shake video of a dizziness patient is acquired and an eye shake signal, comprising a horizontal-direction eye shake signal and a vertical-direction eye shake signal, is extracted from the video; the dizziness type of the patient is then identified from the eye shake signals in each direction, specifically by inputting the eye shake signals into a neural network model, which automatically identifies the dizziness type. Because the neural network model identifies the dizziness type on the basis of the eye shake signals, identification is automatic, a large amount of work otherwise spent by physicians manually interpreting video data is saved, and fast and accurate data are provided for subsequent diagnosis and treatment.
Drawings
Fig. 1 is a flowchart of a method for identifying a dizziness type based on an eye shake according to an embodiment of the present application.
Fig. 2 is a flowchart of an eye shake signal extraction method according to an embodiment of the present application.
Fig. 3 is a flowchart of a method for identifying a dizziness type according to an embodiment of the present application.
Fig. 4 is a flowchart of a method for identifying a dizziness type based on an eye shake according to another embodiment of the present application.
Fig. 5 is a flowchart of a method for identifying a dizziness type based on an eye shake according to another embodiment of the present application.
Fig. 6 is a flowchart of a method for identifying a dizziness type based on an eye shake according to another embodiment of the present application.
Fig. 7 is a flowchart of a training method of a neural network model according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a vertigo type recognition device based on eye shake according to an embodiment of the present application.
Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Furthermore, in the exemplary embodiments, since the same reference numerals denote the same components having the same structures or the same steps of the same methods, if an embodiment is exemplarily described, only structures or methods different from those of the described embodiment will be described in other exemplary embodiments.
Throughout the specification and claims, when an element is referred to as being "connected" to another element, the one element can be "directly connected" to the other element or be "electrically connected" to the other element through a third element. Furthermore, unless explicitly described to the contrary, the term "comprising" and its corresponding terms should be construed to include only the recited components and should not be construed to exclude any other components.
As mentioned above, vertigo diseases affect a patient's daily life and in severe cases may leave the patient unable to care for themselves; vertigo can also lead to other diseases or problems, for example a patient may fall during a vertigo episode and suffer trauma. At present, the clinical detection methods for vertigo disease mainly comprise naked-eye examination, electronystagmography and videonystagmography, of which videonystagmography is the most widely used: an eye shake video of the patient is usually collected through equipment such as video goggles, and a physician then judges from the eye shake video which type of dizziness the patient has. Physicians therefore need to spend a great deal of time and effort interpreting video data, and under the double pressure of rapidly growing patient numbers and a limited number of physicians, each physician's workload and working intensity keep increasing. Because interpretation skill differs between physicians, different physicians often reach different interpretation results, and interpretation accuracy drops during long, high-intensity work, which is disadvantageous to patients. In addition, when interpreting video data a physician is likely to overlook or miss movement in the rotational direction of the eyeball, leading to inaccurate interpretation results.
To solve the above problems, embodiments of the present application provide an eye shake-based dizziness type identification method, an identification device, a computer-readable storage medium and an electronic device. An eye shake video of a dizziness patient is acquired and an eye shake signal, comprising a horizontal-direction eye shake signal and a vertical-direction eye shake signal, is extracted from the video; the dizziness type of the patient is then identified from the eye shake signals in each direction, specifically by inputting the eye shake signals into a neural network model, which automatically identifies the dizziness type. Because the neural network model identifies the dizziness type on the basis of the eye shake signals, identification is automatic, a large amount of work otherwise spent by physicians manually interpreting video data is saved, and fast and accurate data are provided for subsequent diagnosis and treatment.
The following specifically describes, with reference to the accompanying drawings, a method for identifying a type of dizziness based on an eye shake, an identifying device, a computer readable storage medium, and a specific implementation manner of an electronic device provided in an embodiment of the present application:
fig. 1 is a flowchart of a method for identifying a dizziness type based on an eye shake according to an embodiment of the present application. As shown in fig. 1, the identification method includes the steps of:
Step 110: and acquiring an eye shake video of the dizziness patient.
According to the embodiment of the present application, a terminal device with a camera or video-recording function, such as video goggles, an eye shake recorder or a nystagmus scanner, may be used to acquire the eye shake video of the dizziness patient. It should be appreciated that the embodiments of the present application do not limit the manner in which the eye shake video of a dizziness patient is acquired.
Step 120: extracting an eye shake signal according to the eye shake video; wherein the eye shake signals include a horizontal eye shake signal and a vertical eye shake signal.
And after the eye shake video of the dizziness patient is acquired, extracting corresponding eye shake signals according to the eye shake video, wherein the eye shake signals comprise horizontal eye shake signals and vertical eye shake signals.
Step 130: identifying the dizziness type of the dizziness patient according to the eye shake signal; the implementation mode for identifying the dizziness type of the dizziness patient comprises the following steps: and inputting the eye vibration signals into a neural network model, and identifying the dizziness type of the dizziness patient through the neural network model.
After the eye shake signal is extracted, the dizziness type of the dizziness patient is identified from the eye shake signal. Specifically, the eye shake signal is input into a neural network model, which automatically identifies the dizziness type. The eye shake signal input into the neural network model may be two-dimensional eye shake signal data, from which the dizziness type is identified directly; alternatively, the horizontal-direction eye shake signal and the vertical-direction eye shake signal may be input separately, and the dizziness type identified by combining the eye shake signals of the two dimensions. The form of the eye shake signal input into the neural network model is not limited. In an embodiment, after the eye shake signal is extracted, an eye shake map may also be generated from the extracted signal; a neural network model based on image recognition can then identify the dizziness type from the eye shake map, and using image recognition technology can improve the accuracy of identifying the dizziness type. A sketch of both input forms follows.
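As an illustrative sketch only (not part of the original disclosure), the two input forms described above could be prepared as follows; the array shapes, the two-panel layout of the eye shake map and all function names are assumptions:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # render off-screen
import matplotlib.pyplot as plt

def as_two_channel_sequence(h_signal, v_signal):
    # Shape (2, T): channel 0 = horizontal-direction trace, channel 1 = vertical-direction trace.
    return np.stack([np.asarray(h_signal, float), np.asarray(v_signal, float)], axis=0)

def as_eye_shake_map(h_signal, v_signal, path="eye_shake_map.png"):
    # Render the two traces as a waveform image for an image-recognition network.
    t = np.arange(len(h_signal))
    fig, (ax_h, ax_v) = plt.subplots(2, 1, sharex=True)
    ax_h.plot(t, h_signal)
    ax_h.set_ylabel("horizontal")
    ax_v.plot(t, v_signal)
    ax_v.set_ylabel("vertical")
    ax_v.set_xlabel("frame")
    fig.savefig(path)
    plt.close(fig)
    return path
```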
According to the eye shake-based dizziness type identification method described above, an eye shake video of the dizziness patient is acquired and an eye shake signal, comprising a horizontal-direction eye shake signal and a vertical-direction eye shake signal, is extracted from the video; the dizziness type of the patient is then identified from the eye shake signals in each direction, specifically by inputting the eye shake signals into a neural network model, which automatically identifies the dizziness type. Because the neural network model identifies the dizziness type on the basis of the eye shake signals, identification is automatic, a large amount of work otherwise spent by physicians manually interpreting video data is saved, and fast and accurate data are provided for subsequent diagnosis and treatment.
Fig. 2 is a flowchart of an eye shake signal extraction method according to an embodiment of the present application. As shown in fig. 2, step 120 may specifically include the following steps:
step 121: and carrying out binarization processing on the image in the eye shake video based on the set gray threshold value.
According to the gray value distinction between the pupil and the pupil surrounding image in the obtained eye shake video, a gray threshold value is preset, and binarization processing is carried out on the image in the eye shake video so as to distinguish the pupil region image and the surrounding region image.
Step 122: and dividing the binarized image to obtain a pupil image.
The image in the eye shake video is segmented according to the binarization result, and the pupil region is segmented out to obtain a pupil image. For clarity, the surrounding-area image may be discarded, leaving only the pupil image.
Step 123: and calculating the pupil center according to the pupil image.
And calculating the position of the pupil center according to one or more sections of edges of the pupil image.
Step 124: and extracting an eye shake signal according to the movement track of the pupil center in the eye shake video.
The position of the pupil (that is, of the eyeball) is represented by the position of the pupil center, so that accurate eyeball position information is obtained. The eye shake signal is extracted from the movement track of the pupil center in the eye shake video, so it accurately reflects the movement track of the eyeball, which then serves as the basis for determining the dizziness type of the dizziness patient.
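As a minimal sketch of steps 121 to 124 (an illustration, not the patented implementation), the pupil center trace could be extracted with standard OpenCV operations; the gray threshold value, the choice of the largest dark connected region and the use of the region centroid are assumptions:

```python
import cv2
import numpy as np

def extract_eye_shake_signal(video_path, gray_threshold=40):
    cap = cv2.VideoCapture(video_path)
    xs, ys = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Step 121: binarize with the set gray threshold (the pupil is darker than iris/sclera).
        _, binary = cv2.threshold(gray, gray_threshold, 255, cv2.THRESH_BINARY_INV)
        # Step 122: segment the pupil image as the largest dark connected region.
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            xs.append(np.nan)
            ys.append(np.nan)
            continue
        pupil = max(contours, key=cv2.contourArea)
        # Step 123: pupil center from the region moments (centroid).
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            xs.append(np.nan)
            ys.append(np.nan)
            continue
        xs.append(m["m10"] / m["m00"])
        ys.append(m["m01"] / m["m00"])
    cap.release()
    # Step 124: the horizontal and vertical eye shake signals are the center traces over time.
    return np.asarray(xs), np.asarray(ys)
```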
Fig. 3 is a flowchart of a method for identifying a dizziness type according to an embodiment of the present application. As shown in fig. 3, the step 130 may specifically include the following steps:
step 131: waveform characteristic data in the eye shake signal is extracted.
In an embodiment, the waveform signature data may include any one or a combination of the following: data information such as eye shake latency, eye shake direction, eye shake duration, eye shake intensity and the like. In an embodiment, the neural network model may include a WaveNet-based deep learning neural network model or a sliding window-based convolutional neural network model. Waveform characteristic data in the eye-shake signals can be accurately extracted through a WaveNet-based deep learning neural network model or a sliding window-based convolution neural network model, and accurate data are provided for the follow-up identification of the dizziness type represented by the eye-shake signals.
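For illustration only, a minimal sliding-window 1D convolutional classifier of the kind named above might look as follows in PyTorch; the patent does not disclose an architecture, so the layer sizes, window handling and the number of dizziness classes are assumptions:

```python
import torch
import torch.nn as nn

class EyeShakeCNN(nn.Module):
    """Hypothetical sliding-window classifier over 2-channel eye shake windows."""

    def __init__(self, in_channels=2, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4),  # local waveform features
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                                # independent of window length
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (batch, 2, window_length)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)      # logits over dizziness types
```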
Step 132: and respectively calculating the similarity between the waveform characteristic data and the waveform characteristic data corresponding to each dizziness type so as to obtain a plurality of similarities.
Each dizziness type has characteristic waveform feature data, so the waveform feature data extracted from the eye shake signal can be compared with the waveform feature data corresponding to each dizziness type to accurately identify which dizziness type the acquired eye shake signal represents. In an embodiment of the present application, the similarity between the extracted waveform feature data and the waveform feature data corresponding to each dizziness type is calculated; for example, the extracted waveform feature data may be converted into a feature vector, the waveform feature data corresponding to each dizziness type converted into corresponding reference feature vectors, and the degree of similarity obtained by computing the similarity between these feature vectors.
Step 133: when one of the plurality of similarities is larger than a preset similarity threshold, determining that the dizziness patient is of the dizziness type corresponding to the similarity.
When the similarity between the extracted waveform feature data and the waveform feature data corresponding to one of the dizziness types is greater than a preset similarity threshold, the eye shake signal and eye shake video from which the waveform feature data were extracted are classified into the dizziness type corresponding to that similarity.
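A simple cosine-similarity version of steps 131 to 133 is sketched below for illustration; representing each dizziness type by a reference feature vector (eye shake latency, direction, duration, intensity, ...) and the threshold value are assumptions, not details disclosed in the patent:

```python
import numpy as np

def cosine_similarity(a, b):
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_by_similarity(feature_vector, references, threshold=0.9):
    """references: dict mapping dizziness type name -> reference feature vector."""
    similarities = {name: cosine_similarity(feature_vector, ref)
                    for name, ref in references.items()}
    best_type, best_sim = max(similarities.items(), key=lambda kv: kv[1])
    # Step 133: assign the type only when its similarity exceeds the preset threshold.
    return best_type if best_sim > threshold else None
```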
Fig. 4 is a flowchart of a method for identifying a dizziness type based on an eye shake according to another embodiment of the present application. As shown in fig. 4, the identification method may include the steps of:
step 410: and acquiring an eye shake video of the dizziness patient.
This step is similar to step 110 in the above embodiment and will not be described here again.
Step 420: extracting an eye shake signal according to the eye shake video; wherein the eye shake signals include a horizontal eye shake signal, a vertical eye shake signal, and a rotational eye shake signal.
And after the eye shake video of the dizziness patient is acquired, extracting corresponding eye shake signals according to the eye shake video, wherein the eye shake signals comprise horizontal eye shake signals, vertical eye shake signals and rotation eye shake signals.
Step 430: identifying the dizziness type of the dizziness patient according to the eye shake signal; the implementation mode for identifying the dizziness type of the dizziness patient comprises the following steps: and inputting the eye vibration signals into a neural network model, and identifying the dizziness type of the dizziness patient through the neural network model.
This step is similar to step 130 in the above embodiment and will not be described again here.
The eye shake video can reflect the movement of the eyeball in the horizontal and vertical directions and also reflect the movement of the eyeball in the rotation direction. Therefore, according to the eye shake video, eye shake signals of three dimensions such as horizontal, vertical and rotary motion can be extracted, and the dizziness type of a dizziness patient can be more accurately obtained by integrating the eye shake signals of the three dimensions. In one embodiment, the method for extracting the eye shake signal in the rotation direction includes: converting the rotation motion track of the eyeball in the eye shake video into a plane diagram under polar coordinates, and then obtaining the eye shake signal in the rotation direction according to the plane diagram. By converting the rotation motion track of the eyeballs in the eye shake video into a plane view under polar coordinates, the space rotation motion track which is unfavorable for observation and identification is converted into a plane motion track which is easy to observe and identify, so that the accuracy and the efficiency of subsequent identification are improved, and convenience is provided for subsequent manual inspection or review.
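The polar-coordinate conversion described above could, for example, be realized as follows; the assumption here is that an iris landmark is tracked relative to the pupil center, which is one possible way to obtain a rotational trace and not necessarily the patented method:

```python
import numpy as np

def rotation_signal_from_landmark(landmark_xy, center_xy):
    # landmark_xy: (T, 2) trace of a tracked iris landmark; center_xy: (T, 2) pupil centers.
    rel = np.asarray(landmark_xy, float) - np.asarray(center_xy, float)
    radius = np.hypot(rel[:, 0], rel[:, 1])
    angle = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))  # continuous torsion angle in radians
    # Plotting angle (and radius) against time gives the plane view under polar coordinates;
    # the angle trace serves as the rotation-direction eye shake signal.
    return angle, radius
```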
Fig. 5 is a flowchart of a method for identifying a dizziness type based on an eye shake according to another embodiment of the present application. As shown in fig. 5, before step 130, the identifying method may further include:
step 140: the head position information and/or the body position information of the dizziness patient are obtained. Step 130 specifically includes: and identifying the dizziness type of the dizziness patient according to the eye shake signals, the head position information and/or the body position information.
The eye shake signals of certain specific types of dizziness patients, such as patients with benign paroxysmal positional vertigo, are closely related to the patient's head position and body position; that is, the corresponding eye shake is only induced at specific head and body positions. Moreover, if the head or body position changes while the eyeball movement track is being acquired, the acquired movement track will be inaccurate. Therefore, the head position information and/or body position information of the dizziness patient can be acquired while acquiring the eye shake video, and the dizziness type of the dizziness patient can be identified comprehensively by combining the eye shake video with the head position information and/or body position information, thereby improving identification accuracy.
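For illustration, one simple way to combine the signals as described above is to feed the synchronized head/body position angles to the model as additional input channels; the channel layout and the three-angle format are assumptions:

```python
import numpy as np

def combine_inputs(h_signal, v_signal, head_angles):
    # head_angles: (T, 3) gyroscope or auxiliary-device angles resampled to the video frame rate.
    head_angles = np.asarray(head_angles, float)
    return np.stack([np.asarray(h_signal, float),
                     np.asarray(v_signal, float),
                     head_angles[:, 0],            # e.g. pitch
                     head_angles[:, 1],            # e.g. roll
                     head_angles[:, 2]], axis=0)   # e.g. yaw -> shape (5, T) model input
```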
In an embodiment, the head position information and/or the body position information comprises any one or a combination of the following information: body posture (e.g., recumbent, right side, etc.), initial position of the head and/or body, three-dimensional angle, amount of angular change, angular change angular velocity, angular change angular acceleration. It should be understood that, in the embodiment of the present application, different head position information and/or body position information may be selected according to requirements of an application scenario, so long as the selected head position information and/or body position information can assist in accurately identifying the dizziness type, the head position information and/or body position information are not limited.
In an embodiment, the specific way to obtain the head position information and/or the body position information of the vertigo patient may be: the head position information and/or the body position information are obtained through the auxiliary transformation equipment, or the head position information and/or the body position information are obtained through the gyroscope. In a further embodiment, the auxiliary transforming device comprises any one of the following devices: a diagnosis bed, a swivel chair, a benign paroxysmal positional vertigo therapeutic apparatus, a vertigo diagnosis and treatment apparatus, etc. It should be understood that, in the embodiment of the present application, different head position information and/or body position information acquisition modes and devices may be selected according to requirements of an application scenario, so long as the selected head position information and/or body position information acquisition modes and devices can assist in accurately identifying the dizziness type, and the head position information and/or body position information acquisition modes and devices are not limited.
Fig. 6 is a flowchart of a method for identifying a dizziness type based on an eye shake according to another embodiment of the present application. As shown in fig. 6, before step 130, the identifying method may further include:
step 150: and acquiring visual stimulus information of the patient when the eye shake video is acquired.
Step 130 specifically includes: and identifying the dizziness type of the dizziness patient according to the eye shake signals, the head position information and/or the body position information and the visual stimulus information.
According to the method and the device for the visual stimulation of the dizziness, when the eye shake video of the dizziness patient and the head position information and/or the body position information of the dizziness patient are obtained, the visual stimulation information of the patient when the eye shake video is obtained can be obtained, the eye shake signal, the head position information and/or the body position information and the visual stimulation information are combined, the dizziness type of the dizziness patient is comprehensively identified, and therefore the identification accuracy is improved.
In an embodiment, the training method of the neural network model may include: and training a neural network model by taking the eye shake signals and the corresponding dizziness types as training samples. Namely, the eye vibration signals with the determined dizziness type are used as training samples to be input into the neural network model to train the neural network model, and the recognition accuracy of the neural network model can be effectively improved through a large number of training samples.
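A minimal supervised training loop over such labeled samples is sketched below, reusing the hypothetical EyeShakeCNN above; the optimizer, loss function and epoch count are illustrative choices, not details from the patent:

```python
import torch
import torch.nn as nn

def train_supervised(model, loader, epochs=20, lr=1e-3, device="cpu"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for signals, labels in loader:            # signals: (batch, 2, T); labels: dizziness type ids
            optimizer.zero_grad()
            loss = criterion(model(signals.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
    return model
```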
Fig. 7 is a flowchart of a training method of a neural network model according to an embodiment of the present application. As shown in fig. 7, the training method of the neural network model includes:
step 510: and adjusting the parameter weight of the neural network model according to the benefit value of the reward function.
By setting the reward function, when the training sample is input, a benefit value of the reward function is correspondingly obtained, and the parameter weight of the neural network model is adjusted according to the benefit value.
Step 520: and determining the output quantity of the training sample according to the parameter weight of the neural network model.
And calculating an output result of the input training sample according to the determined parameter weight of the neural network model.
Step 530: and obtaining the benefit value of the reward function corresponding to the training sample according to the output quantity of the training sample.
After the output result of the training sample is calculated, the benefit value of the reward function corresponding to the output result is calculated.
Step 540: and stopping training the neural network model when the profit value is greater than a preset profit threshold.
When the profit value is larger than a preset profit threshold, the recognition accuracy of the neural network model is proved to reach the set requirement, namely training of the neural network model can be stopped; and when the benefit value is smaller than or equal to the preset benefit threshold value, the neural network model needs to be continuously trained.
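Steps 510 to 540 could be arranged schematically as below; because the patent does not specify the reward function, mean accuracy on the training samples is used here purely as a stand-in benefit value:

```python
import torch
import torch.nn as nn

def train_with_reward(model, loader, benefit_threshold=0.95, lr=1e-3, max_rounds=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(max_rounds):
        correct, total = 0, 0
        for signals, labels in loader:
            # Steps 510/520: adjust the parameter weights and compute the sample outputs.
            optimizer.zero_grad()
            logits = model(signals)
            criterion(logits, labels).backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.numel()
        benefit = correct / total                 # Step 530: benefit value of the reward function
        if benefit > benefit_threshold:           # Step 540: stop once above the preset threshold
            break
    return model
```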
Fig. 8 is a schematic structural diagram of a vertigo type recognition device based on eye shake according to an embodiment of the present application. As shown in fig. 8, the eye-shake based dizziness type recognition apparatus 60 includes: an acquisition module 61, configured to acquire an eye shake video of a patient with vertigo; the extracting module 62 is configured to extract an eye shake signal according to an eye shake video; wherein the eye shake signals comprise horizontal eye shake signals and vertical eye shake signals; the identifying module 63 is used for identifying the dizziness type of the dizziness patient according to the eye shake signal; wherein the identification module is further configured to: and inputting the eye vibration signals into a neural network model, and identifying the dizziness type of the dizziness patient through the neural network model.
With the eye shake-based dizziness type identification device described above, the acquisition module 61 acquires an eye shake video of the dizziness patient, the extraction module 62 extracts from the video an eye shake signal comprising a horizontal-direction eye shake signal and a vertical-direction eye shake signal, and the identification module 63 identifies the dizziness type of the patient from the eye shake signals in each direction, specifically by inputting the eye shake signals into a neural network model, which automatically identifies the dizziness type. Because the neural network model identifies the dizziness type on the basis of the eye shake signals, identification is automatic, a large amount of work otherwise spent by physicians manually interpreting video data is saved, and fast and accurate data are provided for subsequent diagnosis and treatment.
In an embodiment, the eye shake signal may further include a rotational direction eye shake signal, and the extraction module 62 may be further configured to: according to the eye shake video, extracting a rotation direction eye shake signal, wherein the specific method comprises the following steps: and converting the rotation motion track of the eyeball in the eye shake video into a plane diagram under polar coordinates, and obtaining a rotation direction eye shake signal according to the plane diagram.
In one embodiment, as shown in fig. 8, the extraction module 62 may further include the following subunits: a binarization unit 621 for performing binarization processing on the image in the eye shake video based on the set gray threshold value; a pupil dividing unit 622, configured to divide the binarized image to obtain a pupil image; a pupil center calculating unit 623 for calculating a pupil center from the pupil image; the track generating unit 624 is configured to generate an eye shake signal according to a motion track of the pupil center in the eye shake video.
In one embodiment, as shown in fig. 8, the identification module 63 may further include the following subunits: a feature extraction unit 631 for extracting waveform feature data in the eye shake signal; a similarity calculating unit 632, configured to calculate similarities between the waveform characteristic data and waveform characteristic data corresponding to each dizziness type, respectively, so as to obtain a plurality of similarities; a determining unit 633, configured to determine that the vertigo patient is the vertigo type corresponding to the similarity when one of the plurality of similarities is greater than a preset similarity threshold.
In an embodiment, as shown in fig. 8, the dizziness type recognition device 60 may further include: the learning module 64 is configured to learn head position information and/or body position information of the vertigo patient.
In an embodiment, the head position information and/or the body position information comprises any one or a combination of the following information: body posture (e.g., recumbent, right side, etc.), initial position of the head and/or body, three-dimensional angle, amount of angular change, angular change angular velocity, angular change angular acceleration. In an embodiment, the specific way to obtain the head position information and/or the body position information of the vertigo patient may be: the head position information and/or the body position information are obtained through the auxiliary transformation equipment, or the head position information and/or the body position information are obtained through the gyroscope. In a further embodiment, the auxiliary transforming device comprises any one of the following devices: a diagnosis bed, a swivel chair, a benign paroxysmal positional vertigo therapeutic apparatus, a vertigo diagnosis and treatment apparatus, etc.
In an embodiment, as shown in fig. 8, the dizziness type recognition device 60 may further include: the training module 65 is configured to train the neural network model by using the eye shake signal and the corresponding dizziness type as training samples.
In an embodiment, training module 65 may be further configured to: adjusting the parameter weight of the neural network model according to the benefit value of the reward function; determining the output quantity of the training sample according to the parameter weight of the neural network model; obtaining a benefit value of a reward function corresponding to the training sample according to the output quantity of the training sample; and stopping training the neural network model when the profit value is greater than a preset profit threshold.
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 9. The electronic device may be either or both of the first device and the second device, or a stand-alone device independent thereof, which may communicate with the first device and the second device to receive the acquired input signals therefrom.
Fig. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 11 to implement the method of eye-shock based vertigo type recognition and/or other desired functions of the various embodiments of the present application described above. Various contents such as an input signal, a signal component, a noise component, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
For example, when the electronic device is a first device or a second device, the input means 13 may be a data input device for acquiring an input signal of an image or video. When the electronic device is a stand-alone device, the input means 13 may be a communication network connector for receiving the acquired input signals from the first device and the second device.
In addition, the input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information to the outside, including the determined distance information, direction information, and the like. The output device 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 9 for simplicity, components such as buses, input/output interfaces, etc. being omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the method of eye-shake based vertigo type recognition described in the above-described "exemplary method" section of the present application.
The computer program product may write program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the eye-shake based dizziness-type recognition method according to the various embodiments of the present application described in the above-mentioned "exemplary method" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, the devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including", "comprising", "having" and the like are open-ended words meaning "including but not limited to" and are used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (16)

1. An eye shake-based vertigo type identification method, characterized by comprising the following steps:
acquiring an eye shake video of a patient suffering from dizziness;
Extracting the pupil motion trail of the dizziness patient according to the eye shake video to obtain an eye shake signal; wherein the eye shake signals comprise a horizontal eye shake signal and a vertical eye shake signal;
identifying the dizziness type of the dizziness patient according to the eye shake signal;
the implementation mode for identifying the dizziness type of the dizziness patient comprises the following steps:
inputting the eye vibration signals into a neural network model, and identifying the dizziness type of the dizziness patient through the neural network model;
the method for inputting the eye vibration signals into the neural network model specifically comprises the following steps:
inputting an eye shake map generated according to the eye shake signals into a neural network model; or alternatively
And respectively inputting the horizontal eye shake signals and the vertical eye shake signals into the neural network model.
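For illustration, the two input modes recited in claim 1 could be prepared roughly as in the following NumPy sketch. The map height, sampling assumptions, and function names are hypothetical and not the patented implementation.

```python
# Illustrative sketch only: two ways the horizontal/vertical eye shake
# signals might be packaged before being fed to a neural network model.
import numpy as np

def signals_to_channels(horizontal, vertical):
    """Stack the horizontal and vertical eye shake signals as a
    2-channel 1-D sequence of shape (2, T), one channel per axis."""
    return np.stack([np.asarray(horizontal, dtype=np.float32),
                     np.asarray(vertical, dtype=np.float32)], axis=0)

def signals_to_eye_shake_map(horizontal, vertical, height=64):
    """Rasterize each signal into a simple 2-D 'eye shake map':
    one binary trace image per axis, stacked as (2, height, T)."""
    maps = []
    for sig in (horizontal, vertical):
        sig = np.asarray(sig, dtype=np.float32)
        lo, hi = sig.min(), sig.max()
        # normalize the trace into [0, height-1] rows
        rows = np.zeros_like(sig) if hi == lo else (sig - lo) / (hi - lo) * (height - 1)
        img = np.zeros((height, sig.size), dtype=np.float32)
        img[rows.astype(int), np.arange(sig.size)] = 1.0
        maps.append(img)
    return np.stack(maps, axis=0)

# Example with a synthetic 2-second recording sampled at 100 Hz
t = np.linspace(0, 2, 200)
h = np.sin(2 * np.pi * 3 * t)          # stand-in horizontal trace
v = 0.3 * np.sin(2 * np.pi * 1 * t)    # stand-in vertical trace
print(signals_to_channels(h, v).shape)       # (2, 200)
print(signals_to_eye_shake_map(h, v).shape)  # (2, 64, 200)
```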
2. The method according to claim 1, wherein extracting the pupil motion trajectory of the vertigo patient from the eye shake video to obtain an eye shake signal comprises:
performing binarization processing on the images in the eye shake video based on a set gray threshold;
segmenting the binarized images to obtain a pupil image;
calculating a pupil center according to the pupil image; and
extracting the eye shake signal according to the motion trajectory of the pupil center in the eye shake video.
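A minimal sketch of the extraction steps in claim 2 (binarization by a set gray threshold, segmentation, pupil-center calculation, trajectory extraction) is shown below. It assumes grayscale frames and uses SciPy connected-component labeling for the segmentation step; the threshold value and helper names are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from scipy import ndimage

def pupil_center(frame_gray, gray_threshold=40):
    """Binarize a grayscale frame, keep the largest dark blob as the
    pupil, and return its center (row, col), or None if nothing is found."""
    binary = frame_gray < gray_threshold          # pupil is the darkest region
    labels, n = ndimage.label(binary)             # connected-component segmentation
    if n == 0:
        return None
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    pupil_label = 1 + int(np.argmax(sizes))       # largest component taken as the pupil
    return ndimage.center_of_mass(binary, labels, pupil_label)

def extract_eye_shake_signal(frames, gray_threshold=40):
    """Track the pupil center across frames and return the horizontal
    and vertical displacement signals (in pixels, relative to frame 0)."""
    centers = [pupil_center(f, gray_threshold) for f in frames]
    centers = [c for c in centers if c is not None]
    ys, xs = np.array(centers).T
    return xs - xs[0], ys - ys[0]                 # horizontal, vertical signal
```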
3. The identification method according to claim 1, wherein identifying the dizziness type of the dizziness patient through the neural network model comprises:
extracting waveform characteristic data from the eye shake signal;
respectively calculating the similarity between the waveform characteristic data and the waveform characteristic data corresponding to each dizziness type to obtain a plurality of similarities; and when one of the plurality of similarities is greater than a preset similarity threshold, determining that the dizziness patient has the dizziness type corresponding to that similarity.
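The similarity-based decision of claim 3 could, for example, be prototyped as follows. The feature-vector layout, the cosine similarity measure, the threshold, and the template values are all assumptions for illustration only.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_by_similarity(features, type_templates, threshold=0.9):
    """features: vector such as (latency, direction, duration, intensity).
    type_templates: dict mapping vertigo type -> reference feature vector.
    Returns the best-matching type if its similarity exceeds the threshold."""
    scores = {t: cosine_similarity(features, ref) for t, ref in type_templates.items()}
    best_type, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_type if best_score > threshold else None

templates = {
    "posterior-canal BPPV": [2.0, 1.0, 20.0, 8.0],    # made-up reference values
    "horizontal-canal BPPV": [1.0, -1.0, 45.0, 12.0],
}
print(identify_by_similarity([2.1, 1.0, 19.0, 7.5], templates))
```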
4. The identification method of claim 1, wherein the waveform characteristic data comprises any one or a combination of the following: eye shake latency, eye shake direction, eye shake duration, and eye shake intensity.
5. The method of claim 1, wherein the eye shake signal further comprises a rotational direction eye shake signal.
6. The method of claim 5, wherein extracting the rotational direction eye shake signal comprises:
converting the rotational motion trajectory of the eyeball in the eye shake video into a plane diagram in polar coordinates; and
obtaining the rotational direction eye shake signal according to the plane diagram.
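One plausible reading of claim 6 is to express the tracked trajectory in polar coordinates about the pupil center, so that torsional (rotational) eye shake appears as a one-dimensional angle signal. The landmark-tracking convention in this sketch is an assumption, not the patent's method.

```python
import numpy as np

def rotational_signal(marker_xy, center_xy):
    """marker_xy: (T, 2) array of an iris landmark tracked over time.
    Returns the unwrapped torsion angle (radians) relative to frame 0."""
    d = np.asarray(marker_xy, float) - np.asarray(center_xy, float)
    theta = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))  # polar angle per frame
    return theta - theta[0]

# A point rotating 30 degrees about center (0, 0) over 30 frames
angles = np.deg2rad(np.linspace(0, 30, 30))
pts = np.c_[np.cos(angles), np.sin(angles)]
print(np.rad2deg(rotational_signal(pts, (0.0, 0.0)))[-1])  # ~30.0
```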
7. The identification method of claim 1, wherein the neural network model comprises a WaveNet-based deep learning neural network model or a sliding window-based convolutional neural network model.
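As one illustration of the second option in claim 7 (a sliding window-based convolutional neural network), a PyTorch sketch might look like the following. The window length, layer sizes, and number of vertigo classes are assumptions rather than the patent's architecture.

```python
import torch
import torch.nn as nn

class EyeShakeCNN(nn.Module):
    """1-D CNN classifying a fixed-length window of the 2-channel
    (horizontal/vertical) eye shake signal into one of several vertigo types."""
    def __init__(self, num_types=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_types)

    def forward(self, x):                     # x: (batch, 2, window)
        return self.classifier(self.features(x).squeeze(-1))

def sliding_windows(signal, window=256, step=64):
    """signal: (2, T) tensor -> (num_windows, 2, window) batch of windows."""
    return signal.unfold(dimension=1, size=window, step=step).permute(1, 0, 2)

model = EyeShakeCNN()
sig = torch.randn(2, 1024)                    # stand-in 2-channel recording
logits = model(sliding_windows(sig))          # one prediction per window
print(logits.shape)                           # torch.Size([13, 4])
```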
8. The identification method of claim 1, further comprising, before the identifying of the dizziness type of the dizziness patient according to the eye shake signal:
learning head position information and/or body position information of the dizziness patient;
wherein the identifying of the dizziness type of the dizziness patient according to the eye shake signal comprises:
identifying the dizziness type of the dizziness patient according to the eye shake signal and the head position information and/or the body position information.
9. The method of claim 8, wherein the head position information and/or body position information comprises any one or a combination of the following: body posture, initial position of the head and/or body, three-dimensional angle, amount of angle change, angular velocity of the angle change, and angular acceleration of the angle change.
10. The method of claim 8, wherein the learning of the head position information and/or body position information of the dizziness patient comprises:
obtaining the head position information and/or the body position information through an auxiliary transformation device, or obtaining the head position information and/or the body position information through a gyroscope.
11. The identification method of claim 10, wherein the auxiliary transformation device comprises any one of the following: a diagnosis bed, a swivel chair, a benign paroxysmal positional vertigo therapeutic instrument, a vertigo therapeutic instrument and a vertigo diagnosis instrument.
12. The method of claim 1, wherein the training method of the neural network model comprises:
training the neural network model by taking eye shake signals and the corresponding dizziness types as training samples.
13. The method of claim 11, wherein the training method of the neural network model comprises:
adjusting the parameter weights of the neural network model according to the benefit value of a reward function;
determining the output of a training sample according to the parameter weights of the neural network model;
obtaining the benefit value of the reward function corresponding to the training sample according to the output of the training sample; and
stopping training of the neural network model when the benefit value is greater than a preset benefit threshold.
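The training scheme of claim 13 (adjust the weights, compute a benefit value from the reward function on the training samples, stop once it exceeds a preset threshold) can be sketched loosely as follows. Using batch accuracy as the benefit value and Adam as the optimizer are assumptions, not the patent's choices.

```python
import torch
import torch.nn as nn

def train_until_benefit(model, signals, labels, benefit_threshold=0.95,
                        max_epochs=200, lr=1e-3):
    """signals: (N, 2, window) tensor; labels: (N,) class indices.
    Trains until the benefit value exceeds the preset threshold."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    benefit = 0.0
    for epoch in range(max_epochs):
        opt.zero_grad()
        logits = model(signals)                  # output for the training samples
        loss = loss_fn(logits, labels)
        loss.backward()
        opt.step()                               # adjust the parameter weights
        # benefit value of the reward function: here, batch accuracy (assumption)
        benefit = (logits.argmax(dim=1) == labels).float().mean().item()
        if benefit > benefit_threshold:          # preset benefit threshold
            break
    return model, benefit
```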
14. A vertigo type identification device based on eye shake, characterized by comprising:
an acquisition module, used for acquiring an eye shake video of a dizziness patient;
an extraction module, used for extracting the pupil motion trajectory of the dizziness patient according to the eye shake video to obtain an eye shake signal; wherein the eye shake signal comprises a horizontal eye shake signal and a vertical eye shake signal; and
an identification module, used for identifying the dizziness type of the dizziness patient according to the eye shake signal;
wherein the identification module is further configured to:
input the eye shake signal into a neural network model, and identify the dizziness type of the dizziness patient through the neural network model;
wherein inputting the eye shake signal into the neural network model specifically comprises:
inputting an eye shake map generated according to the eye shake signal into the neural network model; or
respectively inputting the horizontal eye shake signal and the vertical eye shake signal into the neural network model.
15. A computer-readable storage medium storing a computer program for executing the vertigo type identification method based on eye shake according to any one of claims 1 to 13.
16. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the vertigo type identification method based on eye shake according to any one of claims 1 to 13.
CN202010170260.9A 2020-03-12 2020-03-12 Vertigo type identification method and device based on eye shake, medium and electronic equipment Active CN111191639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010170260.9A CN111191639B (en) 2020-03-12 2020-03-12 Vertigo type identification method and device based on eye shake, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010170260.9A CN111191639B (en) 2020-03-12 2020-03-12 Vertigo type identification method and device based on eye shake, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111191639A CN111191639A (en) 2020-05-22
CN111191639B true CN111191639B (en) 2024-03-08

Family

ID=70710902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010170260.9A Active CN111191639B (en) 2020-03-12 2020-03-12 Vertigo type identification method and device based on eye shake, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111191639B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652108B (en) * 2020-05-28 2020-12-29 中国人民解放军32802部队 Anti-interference signal identification method and device, computer equipment and storage medium
CN112116856B (en) * 2020-09-08 2022-06-28 温州市人民医院 BPPV diagnosis and treatment skill training system and method
CN114617529B (en) * 2022-05-12 2022-08-26 上海志听医疗科技有限公司 Eyeball dizziness data identification method and system for eye shade equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010018719A (en) * 1999-08-21 2001-03-15 김윤종 Integration system of diagnosis and treatment of vestibular dysfunction
JP2005304912A (en) * 2004-04-23 2005-11-04 Matsushita Electric Works Ltd Nystagmograph to be used for treating vertigo
CN1695548A (en) * 2005-03-23 2005-11-16 西北工业大学 Synchronous analyzer for detecting signal of blood stream from nystagmus image and ear ending
CN103919557A (en) * 2014-04-17 2014-07-16 大连理工大学 Nystagmus parameter characteristic obtaining method and device for diagnosing benign paroxysmal positional vertigo
KR20180105879A (en) * 2017-03-16 2018-10-01 한림대학교 산학협력단 Server and method for diagnosing dizziness using eye movement measurement, and storage medium storin the same
CN110020597A (en) * 2019-02-27 2019-07-16 中国医学科学院北京协和医院 It is a kind of for the auxiliary eye method for processing video frequency examined of dizziness/dizziness and system
CN110148110A (en) * 2019-04-01 2019-08-20 南京慧视医疗科技有限公司 A kind of spontaneous nystagmus intelligent diagnosis system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9298985B2 (en) * 2011-05-16 2016-03-29 Wesley W. O. Krueger Physiological biosensor system and method for controlling a vehicle or powered equipment
KR101647455B1 (en) * 2014-06-17 2016-08-24 서울대학교병원 Apparatus for diagnosing and treating vertigo

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010018719A (en) * 1999-08-21 2001-03-15 김윤종 Integration system of diagnosis and treatment of vestibular dysfunction
JP2005304912A (en) * 2004-04-23 2005-11-04 Matsushita Electric Works Ltd Nystagmograph to be used for treating vertigo
CN1695548A (en) * 2005-03-23 2005-11-16 西北工业大学 Synchronous analyzer for detecting signal of blood stream from nystagmus image and ear ending
CN103919557A (en) * 2014-04-17 2014-07-16 大连理工大学 Nystagmus parameter characteristic obtaining method and device for diagnosing benign paroxysmal positional vertigo
KR20180105879A (en) * 2017-03-16 2018-10-01 한림대학교 산학협력단 Server and method for diagnosing dizziness using eye movement measurement, and storage medium storin the same
CN110020597A (en) * 2019-02-27 2019-07-16 中国医学科学院北京协和医院 It is a kind of for the auxiliary eye method for processing video frequency examined of dizziness/dizziness and system
CN110148110A (en) * 2019-04-01 2019-08-20 南京慧视医疗科技有限公司 A kind of spontaneous nystagmus intelligent diagnosis system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Honglong. Exploring the application value of video nystagmography in benign paroxysmal positional vertigo. Primary Medical Forum (基层医学论坛), 2018, (14), full text. *
Hui Jing; Zi Dingjing; Fan Xiubo. Study on the clinical application value of video nystagmography in the diagnosis of patients with benign paroxysmal positional vertigo. Shaanxi Medical Journal (陕西医学杂志), 2019, (05), full text. *
Peng Fan; Xiang Yuanhua; ***. Application of a self-made video nystagmus recorder in benign paroxysmal positional vertigo. Chinese Journal of Ophthalmology and Otorhinolaryngology (中国眼耳鼻喉科杂志), 2013-01-25, (01), full text. *

Also Published As

Publication number Publication date
CN111191639A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
US20220011864A1 (en) Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system
CN111191639B (en) Vertigo type identification method and device based on eye shake, medium and electronic equipment
US20190110754A1 (en) Machine learning based system for identifying and monitoring neurological disorders
US20220044821A1 (en) Systems and methods for diagnosing a stroke condition
Mengoudi et al. Augmenting dementia cognitive assessment with instruction-less eye-tracking tests
US20210339024A1 (en) Therapeutic space assessment
US11642068B2 (en) Device and method to determine objectively visual memory of images
Liu et al. An elaborate algorithm for automatic processing of eye movement data and identifying fixations in eye-tracking experiments
Zhou et al. Development of the circumduction metric for identification of cervical motion impairment
KR20190112493A (en) Method for controling fundus camera and apparatus using the same
Sangeetha A survey on deep learning based eye gaze estimation methods
Rescio et al. Ambient and wearable system for workers’ stress evaluation
Mouelhi et al. Sparse classification of discriminant nystagmus features using combined video-oculography tests and pupil tracking for common vestibular disorder recognition
EP4325517A1 (en) Methods and devices in performing a vision testing procedure on a person
Li et al. Torsional nystagmus recognition based on deep learning for vertigo diagnosis
Florea et al. Computer vision for cognition: An eye focused perspective
Bethanney et al. An Intelligent Healthcare Monitoring System for Coma Patients
Tong et al. Assessment of spontaneous brain activity patterns in patients with iridocyclitis: a resting-state study
Calvo Córdoba et al. Automatic Video-Oculography System for Detection of Minimal Hepatic Encephalopathy Using Machine Learning Tools
US11983876B2 (en) Image based detection of characteristic eye movements
Saavedra-Peña Saccade latency determination using video recordings from consumer-grade devices
Faubert et al. Task and exposure time modulate laterality of spatial frequency for faces.
Hemmerling et al. Augmented Reality Platform for Neurological Evaluations
Friedrich et al. Convolutional neural networks for quantitative smartphone video nystagmography: ConVNG
Reinhardt et al. Smartphone-Based Videonystagmography Using Artificial Intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant