CN115192043A - Training method and training device for a classification model for predicting visual fatigue in advance

Training method and training device for a classification model for predicting visual fatigue in advance

Info

Publication number
CN115192043A
CN115192043A (application CN202210833552.5A)
Authority
CN
China
Prior art keywords
index set
classification model
index
visual fatigue
individual
Prior art date
Legal status
Granted
Application number
CN202210833552.5A
Other languages
Chinese (zh)
Other versions
CN115192043B (en)
Inventor
袁进
肖鹏
马可
吴祥
Current Assignee
Zhongshan Ophthalmic Center
Original Assignee
Zhongshan Ophthalmic Center
Priority date
Filing date
Publication date
Application filed by Zhongshan Ophthalmic Center filed Critical Zhongshan Ophthalmic Center
Priority to CN202210833552.5A priority Critical patent/CN115192043B/en
Publication of CN115192043A publication Critical patent/CN115192043A/en
Application granted granted Critical
Publication of CN115192043B publication Critical patent/CN115192043B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/372 Analysis of electroencephalograms
    • A61B 5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B 5/377 Electroencephalography [EEG] using evoked responses
    • A61B 5/378 Visual stimuli
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data involving training the classification device
    • A61B 5/7271 Specific aspects of physiological measurement analysis
    • A61B 5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychiatry (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Psychology (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The training method comprises: acquiring electroencephalogram signals of a plurality of subjects before they use a stereoscopic display device, together with a visual fatigue label for each subject, the electroencephalogram signals comprising a resting-state signal and a task-state signal; extracting a first index set for the resting-state signal and a second index set for the task-state signal, the second index set comprising event index sets corresponding to the various types of stimulation events in the task-state signal; determining the elements of a plurality of individuals of an optimizer that searches for an optimal individual, based on the hyper-parameters of the classification model, the first index set and the second index set; determining the optimal individual based on the fitness of each individual as determined by the classification model and the visual fatigue labels; and further determining the hyper-parameters and target indices of the classification model from the optimal individual. A training method is thus provided for a classification model that can predict the visual fatigue result before a stereoscopic display device is used.

Description

Training method and training device for a classification model for predicting visual fatigue in advance
Technical Field
The present disclosure generally relates to the field of machine learning, and in particular to a training method and a training device for a classification model for predicting visual fatigue in advance.
Background
Nowadays, stereoscopic display technologies (e.g., AR, VR and MR) have broad application prospects in fields such as medicine, education, industry and the military. As emerging technologies that have entered a stage of rapid development, they still face many unsolved problems. Taking VR products (also referred to as virtual reality devices) as an example, users may experience varying degrees of visual fatigue, such as eye soreness, dizziness and nausea, during or after use. Monitoring the visual fatigue of users of VR products is therefore particularly important.
At present, visual fatigue caused by VR products is mainly detected after the fact, either subjectively, based on scales or questionnaires, or objectively, based on quantitative data such as electroencephalogram (EEG) signals. Although subjective detection is widely used, it is not objective and depends heavily on the user; objective detection is the mainstream of current research. For example, patent document CN112568915B discloses a multitask-learning-based visual fatigue evaluation method and system for stereoscopic displays, which detects visual fatigue for stereoscopic display using deep learning on resting-state EEG signals.
However, because the scheme of the above patent document uses only resting-state EEG signals rather than task-state EEG signals, it lacks indices that reflect the dynamic information of the visual perception process. In addition, mainstream methods mainly detect visual fatigue after the user has viewed the VR product; how to predict visual fatigue before viewing is still an open question.
Disclosure of Invention
In view of the above, the present disclosure provides a training method and a training device for a classification model that reflects the dynamic information of the visual perception process and can predict visual fatigue before a stereoscopic display device is used.
To this end, a first aspect of the present disclosure provides a training method for a classification model for predicting visual fatigue in advance, the training method comprising: acquiring electroencephalogram signals of a plurality of subjects at a first time and a visual fatigue label for each subject, wherein the electroencephalogram signals comprise a resting-state signal and a task-state signal recorded under different types of stimulation events of a preset paradigm, the visual fatigue label is determined from the subject's visual fatigue result at the first time and at a second time, the first time is before the subject uses a stereoscopic display device, and the second time is after the subject uses the stereoscopic display device; extracting, for the resting-state signal in the electroencephalogram signal of each subject, a first index set comprising a first sub-index set related to the overall activity of the subject's brain, a second sub-index set related to the activity of the subject's brain in the corresponding frequency bands, and a third sub-index set related to the subject's brain functional connection strength; extracting, for the task-state signal in the electroencephalogram signal of each subject, a second index set comprising an event index set corresponding to each type of stimulation event in the task-state signal, the event index set being related to the subject's cognitive function; and determining the elements of a plurality of individuals in an optimizer that searches for an optimal individual based on the hyper-parameters of the classification model and the index set, determining the optimal individual based on the fitness of the individuals determined by the classification model and the visual fatigue labels, and further determining the hyper-parameters and the target indices of the classification model based on the optimal individual, so that the classification model can determine a target visual fatigue result for the target indices, wherein the index set comprises the first index set and the second index set.
In the present disclosure, a first index set and a second index set are obtained from the resting-state signal and the task-state signal of the electroencephalogram recorded before the stereoscopic display device is used: the first index set reflects the intrinsic, inherent activity pattern of the brain, and the second index set reflects the dynamic information of the visual perception process. Visual fatigue labels corresponding to the subjects' visual fatigue results before and after using the stereoscopic display device are obtained, and the optimizer and the classification model are trained with the index sets and the visual fatigue labels, so that index selection, hyper-parameter optimization and the classification function of the classification model are realized simultaneously; the classification model then determines a target visual fatigue result for the target indices. In this way, the dynamic information of the visual perception process is reflected and visual fatigue can be predicted before the stereoscopic display device is used.
Further, in the training method according to the first aspect of the present disclosure, optionally, the frequency bands include the frequency range of at least one of theta waves, alpha waves, beta waves and gamma waves. Thus, at least one index corresponding to each frequency band can be obtained.
Further, in the training method according to the first aspect of the present disclosure, optionally, the first sub-index set comprises a mean standard deviation; and/or the second sub-index set comprises an average amplitude and an average coefficient of variation corresponding to the waves of each frequency range in the frequency bands; and/or the third sub-index set comprises correlation coefficients between electrodes for the resting-state signal and correlation coefficients between electrodes for the waves of each frequency range in the frequency bands; and/or signal segments of a time window spanning the period before and after a stimulation event is triggered are acquired from the task-state signal and averaged according to the type of stimulation event to obtain a target signal segment for each type of stimulation event, wherein the event index set comprises the amplitude and latency of the target signal segment, and the different types of stimulation events comprise standard stimulation events and deviation stimulation events. Thus, indices reflecting the intrinsic, inherent activity pattern of the brain and indices reflecting the dynamic information of the visual perception process can be acquired.
In addition, in the training method according to the first aspect of the present disclosure, optionally, the mean standard deviation in the first sub-index set is the mean of the standard deviations of the resting-state signals of a plurality of channels over the occipital region of the subject's brain, the average amplitude in the second sub-index set is the mean of the amplitudes of the resting-state signals of those channels, and the average coefficient of variation in the second sub-index set is the mean of the coefficients of variation of the resting-state signals of those channels. Thus, indices related to the overall activity of the subject's brain and to the activity of the corresponding frequency bands can be obtained.
In addition, in the training method according to the first aspect of the present disclosure, optionally, before the first index set is extracted, at least one of down-sampling, filtering, interpolation-based bad-electrode processing, artifact removal and mean-based signal recalibration is applied to the resting-state signal; and/or before the second index set is extracted, at least one of baseline-based correction and threshold-based artifact removal is applied to the signal segments. Thus, a high-quality resting-state signal and high-quality signal segments can be obtained.
In addition, in the training method according to the first aspect of the present disclosure, optionally, the types of the indicators in the first indicator set include a time domain type, a frequency domain type, and a time-frequency domain type. In this case, the indexes in the first index set relate to representative indexes of a plurality of domains, and it is advantageous to improve the generalization ability of the classification model.
Further, in the training method according to the first aspect of the present disclosure, optionally, the optimizer is an equilibrium optimizer, and the optimal individual is determined as follows: initial values of the elements of each individual are determined based on a first variation range corresponding to the index set and a second variation range corresponding to the hyper-parameters of the classification model; the following steps are repeated until a stop-search condition is reached: an evolution set is acquired based on the fitness of the individuals to determine the evolution direction of the individuals, evolution parameters are acquired based on the evolution set, the individuals of the optimizer are updated based on the evolution parameters, and the individuals of the optimizer are constrained with the first variation range and the second variation range; after the search stops, the evolution set is acquired and the optimal individual is determined from it. In this case, the optimal individual can be determined by the optimization algorithm of the equilibrium optimizer, so that the hyper-parameters and target indices of the classification model can be determined based on the optimal individual.
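For illustration, the following Python sketch implements a simplified equilibrium-optimizer-style search loop of the kind described above; the pool size, the exponential move term and the omission of the generation-rate term found in the full equilibrium optimizer are simplifying assumptions, so this is not the patent's exact update rule.

```python
import numpy as np

def eo_search(fitness_fn, lower, upper, n_individuals=30, n_iter=100, pool_size=4, seed=0):
    """Simplified equilibrium-optimizer-style search (illustrative sketch).

    fitness_fn : callable mapping a candidate vector to a scalar (lower is better).
    lower, upper : per-element variation ranges that constrain every individual
                   (index-selection elements plus hyper-parameter elements).
    """
    rng = np.random.default_rng(seed)
    dim = len(lower)
    # Initialise individuals uniformly inside the allowed variation ranges.
    pop = rng.uniform(lower, upper, size=(n_individuals, dim))

    for it in range(n_iter):
        fit = np.array([fitness_fn(ind) for ind in pop])
        # "Evolution set": the best few individuals of the current population.
        pool = pop[np.argsort(fit)[:pool_size]].copy()

        t = (1 - it / n_iter) ** 2                   # time factor shrinking over iterations
        for i in range(n_individuals):
            ceq = pool[rng.integers(pool_size)]      # randomly chosen equilibrium candidate
            lam = rng.uniform(size=dim)
            r = rng.uniform(size=dim)
            f = np.sign(r - 0.5) * (np.exp(-lam * t) - 1)   # exponential move term
            pop[i] = ceq + (pop[i] - ceq) * f        # move toward the equilibrium candidate
        # Constrain every individual back into the variation ranges.
        pop = np.clip(pop, lower, upper)

    fit = np.array([fitness_fn(ind) for ind in pop])
    return pop[np.argmin(fit)]                       # best (optimal) individual
```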
In addition, in the training method according to the first aspect of the present disclosure, optionally, the fitness satisfies the formula:

fitness_num = α × ER_num + (1 − α) × (dim_num / S)

wherein fitness_num is the fitness of the num-th individual, S is the number of indices in the index set, dim_num is the dimension of the index subset selected by the num-th individual, ER_num is the error rate of the classification model trained on the index subset selected by the num-th individual, and α is a weight factor. The error rate is an average error rate, namely the mean of the classification error rates over the folds of a cross-validation. In this case, the fitness of an individual combines the average error rate and the number of selected indices, so that the optimizer can search for the optimal individual based on this fitness.
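As a concrete reading of this fitness definition, the sketch below evaluates one individual with scikit-learn cross-validation; the RBF-kernel SVM, the 0.5 threshold for treating an element as a selected index and the value of alpha are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(individual, X, y, alpha=0.99, cv=5):
    """Fitness of one individual: a weighted sum of the mean cross-validation
    error rate and the fraction of indices the individual selects."""
    individual = np.asarray(individual)
    n_features = X.shape[1]                      # S: total number of indices
    mask = individual[:n_features] > 0.5         # index-selection elements
    if not mask.any():                           # an empty index subset is useless
        return 1.0
    C, gamma = individual[n_features:]           # hyper-parameter elements (assumed RBF SVM)
    model = SVC(C=C, gamma=gamma, kernel="rbf")
    acc = cross_val_score(model, X[:, mask], y, cv=cv)
    error_rate = 1.0 - acc.mean()                # average error rate over the folds
    return alpha * error_rate + (1 - alpha) * mask.sum() / n_features
```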
Additionally, in the training method according to the first aspect of the present disclosure, optionally, the stereoscopic display device is configured to generate an output of at least one type of augmented reality, virtual reality, and mixed reality. Thereby, classification models for different stereoscopic display devices can be obtained.
In addition, in the training method according to the first aspect of the present disclosure, optionally, the classification model is one of a support vector machine model based on a support vector machine and a K-nearest neighbor classification model based on a K-nearest neighbor classification algorithm, the hyper-parameter terms of the support vector machine model are a penalty factor and parameters of a kernel function, and the kernel function is one of a polynomial kernel function and a gaussian kernel function. Thus, the hyper-parameters of various classification models can be acquired.
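The hyper-parameter elements of an individual can be mapped onto a concrete scikit-learn classifier, for example as follows; the encoding of the hyper-parameters as a small vector of continuous values is an assumption.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def build_classifier(kind, hyper):
    """Map the hyper-parameter elements of an individual to a concrete model.
    `hyper` holds continuous values taken from the optimizer's search ranges."""
    if kind == "svm_rbf":                  # Gaussian (RBF) kernel: penalty C and gamma
        C, gamma = hyper
        return SVC(C=C, kernel="rbf", gamma=gamma)
    if kind == "svm_poly":                 # polynomial kernel: penalty C and degree
        C, degree = hyper
        return SVC(C=C, kernel="poly", degree=int(round(degree)))
    if kind == "knn":                      # K-nearest neighbours: number of neighbours
        (k,) = hyper
        return KNeighborsClassifier(n_neighbors=max(1, int(round(k))))
    raise ValueError(f"unknown classifier kind: {kind}")
```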
A second aspect of the present disclosure provides a training device for a classification model for predicting visual fatigue in advance, the training device comprising an acquisition module, an extraction module and a search module. The acquisition module is configured to acquire electroencephalogram signals of a plurality of subjects at a first time and a visual fatigue label for each subject, wherein the electroencephalogram signals comprise a resting-state signal and a task-state signal recorded under different types of stimulation events of a preset paradigm, the visual fatigue label is determined from the subject's visual fatigue result at the first time and at a second time, the first time is before the subject uses a stereoscopic display device, and the second time is after the subject uses the stereoscopic display device. The extraction module is configured to extract a first index set for the resting-state signal in each subject's electroencephalogram signal and a second index set for the task-state signal, wherein the first index set comprises a first sub-index set related to the overall activity of the subject's brain, a second sub-index set related to the activity of the subject's brain in the corresponding frequency bands, and a third sub-index set related to the subject's brain functional connection strength, and the second index set comprises an event index set corresponding to each type of stimulation event in the task-state signal, the event index set being related to the subject's cognitive function. The search module is configured to determine the elements of a plurality of individuals in an optimizer that searches for an optimal individual based on the hyper-parameters of the classification model and the index set, determine the optimal individual based on the fitness of the individuals determined by the classification model, and further determine the hyper-parameters and target indices of the classification model based on the optimal individual, so that the classification model can determine a target visual fatigue result for the target indices, wherein the index set comprises the first index set and the second index set. In this case, the dynamic information of the visual perception process is reflected and visual fatigue can be predicted before the stereoscopic display device is used.
According to the present disclosure, it is possible to provide a training method and a training device for a classification model that reflects the dynamic information of the visual perception process and can predict the visual fatigue result before a stereoscopic display device is used.
Drawings
The disclosure will now be explained in further detail by way of example with reference to the accompanying drawings, in which:
fig. 1 is a schematic scenario illustrating a classification model for visual fatigue predictive prediction according to an example of the present disclosure.
Fig. 2 is a flow chart illustrating predictive visual fatigue prediction using a classification model according to an example of the present disclosure.
Fig. 3 is a flow chart illustrating a method of training a classification model for visual fatigue predictability prediction in accordance with an example of the present disclosure.
FIG. 4 is a flow chart illustrating acquiring an index set based on a brain electrical signal in accordance with an example of the present disclosure.
Fig. 5A is a flow diagram illustrating one example of obtaining a trained classification model based on an optimizer in accordance with examples of the present disclosure.
Fig. 5B is a flowchart illustrating a method of determining the optimal individual based on the equilibrium optimizer according to an example of the present disclosure.
Fig. 5C is a flowchart of the search step in step S310 in fig. 5B.
Fig. 6 is a block diagram illustrating a training apparatus for a classification model for predicting visual fatigue in advance according to an example of the present disclosure.
Fig. 7 is a block diagram illustrating another example of a training apparatus for a classification model for predicting visual fatigue in advance according to an example of the present disclosure.
Fig. 8 is a comparative schematic diagram illustrating prediction performance on a variety of virtual reality devices to which examples of the present disclosure relate.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same components are denoted by the same reference numerals, and redundant description thereof is omitted. The drawings are schematic, and the proportions and shapes of the components may differ from the actual ones. It is noted that the terms "comprises" and "comprising", and any variations thereof, are intended to be non-exclusive in this disclosure, such that a process, method, system, article or apparatus that comprises or has a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include or have other steps or elements not expressly listed or inherent to such process, method, article or apparatus. All methods described in this disclosure can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
The training method and the training device of the present disclosure for a classification model that predicts visual fatigue in advance can effectively select and classify indices of electroencephalogram signals and reflect the dynamic information of the visual perception process. In addition, a classification model trained with the training method of the present disclosure can predict the visual fatigue result before a stereoscopic display device is used. The training method to which the examples of the present disclosure relate may also be referred to as a model training method, and the training apparatus may also be referred to as a model training apparatus. The present disclosure is described in detail below with reference to the attached drawing figures. In addition, the application scenarios described in the examples of the present disclosure are intended to illustrate the technical solutions of the present disclosure more clearly and do not constitute a limitation on the technical solutions provided by the present disclosure.
The brain electrical signals to which the examples of the present disclosure relate are electrical signals generated by activity between neurons of the brain. Generally, electroencephalogram signals are random signals with complex background noise, weak amplitude and non-stationary. In addition, different mental or physiological states may cause the brain electrical signal to exhibit different patterns. Therefore, the physiological state can be effectively identified by extracting and selecting the characteristics of the electroencephalogram signals through a machine learning method and classifying, identifying or predicting the characteristics. For example, the state of eye fatigue of the human eye can be recognized.
The stereoscopic display device to which the disclosed examples relate may be any device capable of achieving a stereoscopic effect. In some examples, a stereoscopic display device may be used to generate an output of at least one of augmented reality (AR), virtual reality (VR) and mixed reality (MR). That is, the stereoscopic display device may include at least one of an augmented reality device, a virtual reality device, and a mixed reality device. In some examples, the virtual reality device may include at least one of a closed VR device, a transmissive VR device, and an open-eye VR device.
The stereoscopic display device according to the disclosed example can be used for visual training of human eyes. In this case, the visual impairment can be improved by the visual training. For example, a child with amblyopia may be visually trained with a virtual reality device to improve the amblyopia problem of the child.
Predicting visual fatigue in advance, as referred to in the examples of the present disclosure, means that the visual fatigue result of a user after (or during) use of a stereoscopic display device can be predicted before the user uses the device. In addition, asthenopia may also be referred to as visual fatigue.
The electroencephalogram signals to which the disclosed examples relate may be acquired by an electroencephalogram device. In some examples, the brain electrical device may include a brain-computer interface instrument that acquires an electroencephalogram and a brain electrical signal acquisition instrument that acquires a cortical electroencephalogram. Preferably, the brain electrical signals may be acquired by a brain-computer interface instrument. In some examples, a brain-computer interface instrument may include a plurality of electrodes that may be in contact with a surface of the brain to acquire brain electrical signals. In some examples, the electrodes may be referred to as channels. Therefore, the plurality of electrodes can form a plurality of channels to collect electroencephalogram signals.
Fig. 1 is a schematic diagram illustrating a scenario of a classification model for predicting visual fatigue in advance according to an example of the present disclosure. Fig. 2 is a flowchart illustrating prediction of visual fatigue in advance using a classification model according to an example of the present disclosure.
In some examples, referring to fig. 1 and fig. 2, a trained classification model 30 obtained with the training method of the present disclosure may be applied in the scenario shown in fig. 1. In this scenario, a trained classification model 30 is obtained for each type of stereoscopic display device. Before a user (for example, a patient about to undergo visual training) uses a stereoscopic display device of the corresponding type, the user's electroencephalogram signal 10 may be acquired with an electroencephalogram device, i.e. the electroencephalogram signal 10 before the user uses the stereoscopic display device is acquired (step S100); target indices 20 are obtained from the electroencephalogram signal 10 (step S120); and the data corresponding to the target indices 20 are input into the trained classification model 30 to obtain a target visual fatigue result 40 (step S140). Specifically, the classification model 30 receives the data corresponding to the target indices 20 and makes a prediction based on them to obtain the target visual fatigue result 40.
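At inference time the flow of fig. 2 reduces to selecting the target indices from the freshly extracted index vector and calling the trained model, roughly as sketched below; the label encoding (1 = high-risk visual fatigue) is an assumption.

```python
import numpy as np

def predict_visual_fatigue(model, target_idx, index_vector):
    """model: the trained classification model 30; target_idx: positions of the
    target indices 20 selected during training; index_vector: all indices
    extracted from the user's pre-use EEG (step S120)."""
    features = np.asarray(index_vector)[target_idx].reshape(1, -1)
    label = model.predict(features)[0]            # step S140
    return "high-risk visual fatigue" if label == 1 else "low-risk visual fatigue"
```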
The user may decide whether to use the corresponding stereoscopic display device for visual training according to the target visual fatigue result 40. In this case, the user can know, before visual training, whether visual fatigue will arise after the training. A visual-training scheme can therefore be established for each user before the training so as to improve the training effect.
Fig. 3 is a flowchart illustrating a method of training a classification model 30 for predicting visual fatigue in advance according to an example of the present disclosure.
In some examples, referring to fig. 3, the training method may include acquiring electroencephalographic signals 10 of a plurality of subjects and corresponding visual fatigue labels of the respective subjects (step S210), acquiring an index set based on the electroencephalographic signals 10 of the subjects (step S230), and acquiring a trained classification model 30 based on the index set and the visual fatigue labels (step S250).
In some examples, in step S210, brain electrical signals 10 of a plurality of subjects may be acquired. In some examples, the ages and heights of the plurality of subjects may be different. For example, brain electrical signals 10 may be collected for different time periods for multiple subjects of different ages and heights. In this case, the training data acquired based on the electroencephalogram signal 10 has diversity, and the generalization ability of the classification model 30 can be improved.
In this embodiment, the electroencephalogram signal 10 of a subject may be the subject's electroencephalogram signal at the first time. The first time may be before the subject uses the stereoscopic display device. That is, the electroencephalogram signal 10 may be recorded before the subject uses the stereoscopic display device. In addition, the visual fatigue labels may be in one-to-one correspondence with the respective subjects.
In some examples, the brain electrical signals 10 of the individual subjects may include a resting state signal and a task state signal. In this case, the classification model 30 can subsequently be trained by combining the information of both signals simultaneously.
In some examples, the resting state may be a state in which the brain is not performing a specific cognitive task. In some examples, the resting state may be a state in which the brain remains quiet, relaxed and awake. In some examples, the resting-state signal may be the electroencephalogram signal 10 acquired while the subject keeps the eyes closed and relaxed. In some examples, the resting-state signal may reflect the intrinsic, inherent activity pattern of the brain. In some examples, the resting-state signal may be the electroencephalogram signal 10 obtained by continuously recording the subject's brain for a first preset time (e.g., 5 minutes) with an electroencephalogram device (e.g., a brain-computer interface instrument).
In some examples, the resting state may be the most basic and essential of the various complex states that the brain is in. Thus, the resting state signal can reflect the basic information of various cognitive activities of the brain.
In some examples, the task state may be a state of the brain while performing a particular task. In some examples, the task state may be the state of the brain while performing specific tasks such as memory, recognition, and movement.
In some examples, the task state signal may be the electrical brain signal 10 of the subject at a stimulation event that employs a preset paradigm. In some examples, the task state signal may be a brain electrical signal 10 obtained by continuously acquiring the subject for a second predetermined time (e.g., 2 minutes) under a stimulation event in a predetermined pattern. In this case, the task state signal can reflect dynamic information of the visual perception process, and training the classification model 30 based on the training data acquired by the task state signal is beneficial to realizing predictive visual fatigue prediction.
In some examples, the preset paradigm may include an oddball paradigm. In some examples, the preset paradigm may also include a Go/Nogo paradigm, a Stroop paradigm, and a Flanker paradigm.
In addition, the stimulation events may be of different types. That is, the task state signal may be the electroencephalogram signal 10 collected under different types of stimulation events using preset paradigms. Therefore, task state signals under different types of stimulation events can be acquired. In some examples, the types of stimulation events may include standard stimulation events and deviation stimulation events (described later).
In some examples, the stimulation event may be triggered multiple times. In some examples, different types of stimulation events may be triggered multiple times. Thus, more training data for the task-state signal can be acquired. Specifically, taking the oddball paradigm as the preset paradigm, the subject is shown a picture in which only one digit, 2 or 3, is displayed at a time, where the digit 2 may represent a standard stimulation event and the digit 3 a deviation stimulation event; the digit 2 may appear with a probability of 80% and the digit 3 with a probability of 20%, and the two digits may appear in random order, switching back and forth.
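For illustration, one oddball block with these proportions could be generated as follows (the number of trials is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(42)

# One illustrative oddball block: digit 2 = standard stimulus (80 %),
# digit 3 = deviation stimulus (20 %), presented in random order.
n_trials = 200
stimuli = rng.choice([2, 3], size=n_trials, p=[0.8, 0.2])
print((stimuli == 3).mean())   # roughly 0.2 of the trials are deviation stimuli
```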
As described above, in step S210, the visual fatigue label corresponding to each subject may be acquired. The visual fatigue label can be the corresponding real visual fatigue result of each subject. The visual fatigue labels may be used as gold criteria for training the classification model 30. Thus, performance parameters (e.g., error rate) of the classification model 30 can subsequently be obtained based on the visual fatigue labels.
In some examples, the visual fatigue label may be determined from the visual fatigue results corresponding to the subject at the first time and the visual fatigue results corresponding to the second time. The second time may be after the subject uses the stereoscopic display device. That is, the visual fatigue label of a subject may be determined from the visual fatigue results of the subject before using the stereoscopic display device and the corresponding visual fatigue results after using the stereoscopic display device. In this case, the visual fatigue label can indicate a change in visual fatigue before and after the subject uses the stereoscopic display device, and thus can reduce interference of the visual fatigue state of the subject itself with the classification model 30.
In some examples, the visual fatigue label of the subject may be a difference result of a comparison of the visual fatigue result of the subject before using the stereoscopic display device and a corresponding visual fatigue result after using the stereoscopic display device. In some examples, the visual fatigue result may be a visual fatigue level. In this case, the visual fatigue label of the subject may be the corresponding visual fatigue level after the subject uses the stereoscopic display device minus the visual fatigue level before the stereoscopic display device is used. In some examples, the visual fatigue level may be represented by a score. For example, a higher score may indicate more visual fatigue.
In some examples, the visual fatigue label may include high risk visual fatigue and low risk visual fatigue. In addition, high risk asthenopia may indicate a deepening of the degree of asthenopia of the subject at the second time relative to the first time. In addition, low risk visual fatigue may indicate a reduced or unchanged degree of visual fatigue of the subject at a second time relative to the first time.
In some examples, the visual fatigue level may be obtained with a Likert scale (e.g., a five-point Likert scale). Specifically, with a higher score indicating more visual fatigue, the visual fatigue of the plurality of subjects before using the stereoscopic display device may be scored on a five-point Likert scale to obtain their visual fatigue levels before use (referred to simply as the first visual fatigue level); in the same way, the visual fatigue level of each subject after using the stereoscopic display device (the second visual fatigue level) can be obtained; the result of subtracting the first visual fatigue level from the corresponding second visual fatigue level can then be used as the visual fatigue label of each subject.
Additionally, for the aforementioned Likert scale, high-risk visual fatigue may indicate that the subject's second visual fatigue level is greater than the first visual fatigue level. In some examples, low-risk visual fatigue may indicate that the subject's second visual fatigue level is less than or equal to the first visual fatigue level.
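The resulting labelling rule can be written in a few lines; the 1-5 score range is the five-point Likert assumption used above:

```python
def visual_fatigue_label(score_before, score_after):
    """Likert scores (1-5, higher = more fatigued) taken before and after
    using the stereoscopic display device."""
    delta = score_after - score_before
    # Deeper fatigue after use -> high risk; unchanged or reduced -> low risk.
    return "high risk" if delta > 0 else "low risk"

print(visual_fatigue_label(2, 4))   # high risk
print(visual_fatigue_label(3, 3))   # low risk
```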
In some examples, with continued reference to fig. 3, in step S230, a set of indices may be acquired based on the brain electrical signal 10 of the subject. That is, the electroencephalogram signal 10 may be processed to obtain an index set. As described above, the brain electrical signal 10 may include a resting state signal and a task state signal. In some examples, the set of metrics may include a first set of metrics for the rest state signal and a second set of metrics for the task state signal.
Fig. 4 is a flowchart illustrating acquiring an index set based on a brain electrical signal 10 according to an example of the present disclosure.
In some examples, referring to fig. 4, step S230 may include receiving a brain electrical signal 10 (step S231). The brain electrical signal 10 may be obtained by step S210.
In some examples, the brain electrical signal 10 may be pre-processed. The pre-processing may include first pre-processing for the rest state signal and second pre-processing for the task state signal. If the preprocessing is performed, the index set may be acquired based on the electroencephalogram signal 10 subjected to the preprocessing.
In some examples, the first pre-processing may include at least one of a down-sampling frequency processing, a filtering processing, an interpolation-based bad electrode processing, an artifact removal processing, and an average-based recalibration signal processing. Thereby, a high-quality resting-state signal can be obtained. It should be noted that the various pretreatments may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
In some examples, in the down-sampling frequency process, the sampling point may be reduced by reducing the frequency of sampling. This can reduce the data amount. For example, the sampling frequency may be reduced to 100Hz (Hertz).
In some examples, in the filtering process, the unnecessary signal or the interference signal in the rest state signal may be removed based on band-pass filtering of a preset frequency range. For example, the resting state signal may be filtered using a low pass filter with a cut-off frequency of 45Hz and a high pass filter with a cut-off frequency of 0.5Hz, thereby preserving the resting state signal with a frequency band of 0.5Hz to 45 Hz.
In some examples, in the interpolation-based bad-electrode processing, spherical interpolation may be used to replace bad electrodes so as to obtain a smooth resting-state signal.
In some examples, in the artifact removal processing, artifacts caused by signals other than the resting-state signal (e.g., electrooculogram, electrocardiogram and electromyogram signals) may be removed using independent component analysis.
In some examples, in the average-based signal recalibration, the resting-state signal may be re-referenced to the average of all electrode signals.
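As an illustration of how such a resting-state pre-processing chain might look in practice, the following sketch uses the MNE-Python library; the file name, the number of ICA components and the excluded component index are assumptions, and the patent does not prescribe any particular toolbox.

```python
import mne

raw = mne.io.read_raw_fif("subject01_rest_raw.fif", preload=True)  # hypothetical file

raw.resample(100)                         # down-sample to 100 Hz
raw.filter(l_freq=0.5, h_freq=45.0)       # keep the 0.5-45 Hz band
raw.interpolate_bads()                    # spherical-spline interpolation of bad electrodes

ica = mne.preprocessing.ICA(n_components=15, random_state=0)
ica.fit(raw)
ica.exclude = [0]                         # components judged to be ocular/cardiac artifacts
ica.apply(raw)                            # remove the artifact components

raw.set_eeg_reference("average")          # recalibrate to the mean of all electrode signals
```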
As described above, the pre-processing may include the second pre-processing. In some examples, the second pre-processing may include at least one of baseline-based correction processing and threshold-based artifact removal processing. Thereby, a high quality task state signal can be obtained.
In some examples, the second pre-processing may be for task state signals. Preferably, the second preprocessing may be for a signal segment in the task state signal. The signal segment may be a signal of a time window between before and after the triggering of the stimulation event in the task state signal (described later). That is, in the second preprocessing, at least one of a correction processing based on a baseline and an artifact removal processing based on a threshold may be performed on the signal segment. Thereby, a high quality signal segment can be obtained.
In some examples, in the baseline-based correction process, the signals from the third preset time before the triggering of each stimulation event to the triggering time of the stimulation event are averaged to be used as a baseline, and the signal segment of the stimulation event is corrected based on the baseline. For example, the signal segment corresponding to each stimulation event can be corrected by averaging the signals from 200ms before the triggering of the stimulation event to the time of the triggering of the stimulation event as a baseline.
In some examples, in the threshold-based artifact removal processing, a fixed threshold may be used to remove high-amplitude artifacts. Specifically, a signal segment may be removed if its extreme value (i.e., its maximum or minimum) exceeds the fixed threshold. In some examples, the fixed threshold may be ±50 μV (microvolts).
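A sketch of the epoching step with MNE-Python is given below, assuming event codes 2 (standard) and 3 (deviation) and a stim channel in the recording; note that MNE's reject argument applies a peak-to-peak criterion, which only approximates the ±50 μV extreme-value rule described above.

```python
import mne

# `raw` is the pre-processed task-state recording; the event codes are assumptions.
events = mne.find_events(raw)
event_id = {"standard": 2, "deviation": 3}

epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.2, tmax=0.8,          # 200 ms before to 800 ms after each stimulus
    baseline=(-0.2, 0.0),         # subtract the mean of the pre-stimulus interval
    reject=dict(eeg=50e-6),       # peak-to-peak rejection near the 50 µV rule
    preload=True,
)
```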
In some examples, referring to fig. 4, step S230 may include extracting a first set of indices in the brain electrical signal 10 (step S232). The first set of metrics may include at least one metric. In some examples, the metrics of the rest state signal may be extracted by specialized mathematical software (e.g., MATLAB software) to obtain the first set of metrics.
In some examples, the first set of metrics includes a first set of sub-metrics, a second set of sub-metrics, and a third set of sub-metrics.
In some examples, the first sub-index set may relate to the overall activity of the subject's brain. In some examples, the first sub-index set may include a mean standard deviation. In some examples, the mean standard deviation may be the mean of the standard deviations of the resting-state signals of a plurality of channels over the occipital region of the subject's brain. For example, the mean standard deviation may be the mean of the standard deviations of the resting-state signals of the two occipital channels (i.e., the O1 and O2 channels). In some examples, the occipital region of the brain may also be referred to as the occipital area or the occipital lobe.
In some examples, the standard deviation described above may reflect the resting time domain activity state of the occipital region of the brain. In addition, the above-mentioned average standard deviation may be calculated by averaging the standard deviations of the resting-state signals of the O1 channel and the O2 channel. In this case, the larger the average standard deviation, the more active the brain as a whole, and the larger the fluctuation of the resting state signal.
In some examples, the standard deviation of either the O1 channel or the O2 channel may satisfy the formulas:

mean = (1/N) × Σ_{k=1}^{N} x(k)

SD = sqrt( (1/N) × Σ_{k=1}^{N} (x(k) − mean)² )

wherein mean represents the average value, SD represents the standard deviation, x(k) represents the electroencephalogram signal 10 at the k-th sampling point of the corresponding channel, and N is the number of sampling points of the corresponding channel.
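The mean standard deviation index can be computed directly from the two occipital channels, for example as in the following sketch (the inputs are assumed to be the pre-processed resting-state traces of O1 and O2):

```python
import numpy as np

def mean_standard_deviation(rest_o1, rest_o2):
    """Mean of the standard deviations of the O1 and O2 resting-state signals.
    Each input is a 1-D array of samples x(k), k = 1..N."""
    sd_o1 = np.std(rest_o1)          # SD of the O1 channel
    sd_o2 = np.std(rest_o2)          # SD of the O2 channel
    return (sd_o1 + sd_o2) / 2.0
```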
In some examples, the second set of sub-indicators may relate to the activity of the brain of the subject in the respective frequency bin. In some examples, the second set of sub-indicators may include average amplitudes and average coefficients of variation for waves of respective frequency ranges in respective frequency bins of the signal in the resting state. That is, each frequency bin may correspond to a respective average amplitude and average coefficient of variation. For example, assuming that the number of frequency bins is 4, each frequency bin having a corresponding average amplitude and average coefficient of variation, the second sub-indicator set may include 8 indicators.
In some examples, the average amplitude corresponding to each frequency bin may be a mean of the amplitudes of the resting state signals of the plurality of channels for the corresponding frequency bin of the occipital region of the brain of the subject. In some examples, the average coefficient of variation corresponding to each frequency bin may be a mean of the coefficients of variation of the resting state signals of the plurality of channels for the corresponding frequency bin of the occipital region of the brain of the subject. In some examples, the plurality of channels may be O1 channels and O2 channels of the occipital region of the brain.
In some examples, the respective frequency bins of the resting state signal may include frequency ranges of at least one of a theta wave, an alpha wave, a beta wave, and a gamma wave. Thus, at least one index (e.g., average amplitude and average coefficient of variation) corresponding to the frequency bin can be obtained. In some examples, the resting state signal for each channel of the occipital region of the brain may be decomposed in frequency bins to yield a theta wave, an alpha wave, a beta wave, and a gamma wave. Thus, the second sub-indicator set may include the average amplitude and average coefficient of variation of theta waves, the average amplitude and average coefficient of variation of alpha waves, the average amplitude and average coefficient of variation of beta waves, and the average amplitude and average coefficient of variation of gamma waves for the occipital channel.
In some examples, theta waves may be electroencephalogram signals generated when the subject is absent-minded or in a hypnotic state; their frequency range may be 4 Hz to 8 Hz. In some examples, alpha waves may be electroencephalogram signals generated when the subject is conscious or mentally active; their frequency range may be 8 Hz to 13 Hz. In some examples, beta waves may be electroencephalogram signals generated when the subject is highly concentrated or under high mental stress; their frequency range may be 13 Hz to 30 Hz. In some examples, gamma waves may be electroencephalogram signals generated when the subject is in a meditative state; their frequency range may be 30 Hz to 100 Hz. In this case, the activity of the brain in the corresponding frequency band can reflect the mental or physiological state of the subject (e.g., the subject's visual fatigue state), so the indices of the second sub-index set can reflect information about the visual fatigue state, and training the classification model 30 on the second sub-index set is conducive to predicting visual fatigue.
In the following, an example of obtaining the second sub-index set is described, taking as an example the plurality of channels of the occipital region of the brain as O1 channel and O2 channel.
In some examples, the average amplitude corresponding to each frequency bin may be an average of the amplitudes (which may also be referred to as amplitudes) of the respective frequency bins of the rest state signals of the O1 channel and the O2 channel.
In some examples, the amplitude corresponding to the respective channel for each frequency bin may satisfy the formula: AM = max (x), where max may represent a maximum function, AM may represent the amplitude of the corresponding channel of the corresponding frequency segment, and x may represent the brain electrical signal 10 of the corresponding channel of the corresponding frequency segment.
In some examples, the coefficient of variation for each frequency bin may reflect the time-frequency activity variability of the respective frequency bin of the resting state signal of the occipital region of the brain. In some examples, the average coefficient of variation corresponding to each frequency bin may be calculated by averaging the coefficients of variation of the rest state signals of the corresponding frequency bins of the O1 channel and the O2 channel. In this case, the larger the value of the average coefficient of variation of the frequency bin, the more active the activity level of the frequency bin is.
In some examples, the coefficient of variation of each channel for each frequency band may satisfy the formula:

CV = SD / mean

where CV represents the coefficient of variation of the corresponding channel, SD represents the standard deviation of the corresponding channel, and mean represents the average value of the corresponding channel.
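A possible way to obtain the band-wise average amplitude and coefficient of variation for one occipital channel is sketched below with SciPy band-pass filters; the filter order, the 45 Hz cap on the gamma band (imposed by the 0.5-45 Hz pre-filtering and 100 Hz sampling assumed earlier) and the use of the mean absolute value in the CV denominator (to avoid dividing by the near-zero mean of a band-passed signal) are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_indices(signal, fs=100.0):
    """Average amplitude and coefficient of variation per frequency band for one
    occipital channel; averaging the O1 and O2 results afterwards gives the
    corresponding entries of the second sub-index set."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = filtfilt(b, a, signal)                 # band-limited signal of this channel
        amplitude = np.max(x)                      # AM = max(x), as in the formula above
        cv = np.std(x) / np.mean(np.abs(x))        # CV = SD / mean (abs-mean: assumption)
        out[name] = {"amplitude": amplitude, "cv": cv}
    return out
```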
In some examples, the third set of sub-indicators may be related to brain functional connectivity strength of the subject. In some examples, functional connection of resting state signals between two electrodes of the whole brain and functional connection of various frequency bins between two electrodes may reflect brain functional connection strength. In some examples, the functional connection may be represented by a correlation coefficient between all electrode signals of the whole brain, two by two. In some examples, a greater value for a functional connection indicates a stronger functional connection.
In some examples, the third set of sub-metrics may include the first correlation coefficient and the second correlation coefficient. In some examples, the first correlation coefficient may be a correlation coefficient between electrodes for a signal in a resting state. That is, the first correlation coefficient may be a correlation coefficient for the full frequency band.
In some examples, the first correlation coefficient may be an average of correlation coefficients between the respective electrodes for the resting state signal. In some examples, the first correlation coefficient may be an average of the correlation coefficients of the resting state signal between two electrodes of the whole brain. Specifically, the correlation coefficients of all electrode signals in the full frequency band may be obtained pairwise and averaged to obtain the correlation coefficient of the resting state signal in the full frequency band.
In some examples, the second correlation coefficient may be a correlation coefficient between electrodes for the waves of each frequency range in the frequency bands described above. That is, there may be a second correlation coefficient for each frequency band. For example, theta, alpha, beta and gamma waves may correspond to four correlation coefficients.
In some examples, the second correlation coefficient may be the average of the correlation coefficients between the electrodes for the corresponding frequency band. For example, the second correlation coefficients may include the averages of the inter-electrode correlation coefficients for the frequency bands corresponding to theta, alpha, beta and gamma waves, respectively. In some examples, the second correlation coefficient may be the average of the pairwise correlation coefficients of the resting-state signals of the corresponding frequency band between the electrodes of the whole brain. Specifically, correlation coefficients may be computed for every pair of electrode signals in each frequency band, and their average taken as the correlation coefficient of that frequency band.
In some examples, the correlation coefficient may be at least one of the Pearson correlation coefficient, the Spearman correlation coefficient and the Kendall correlation coefficient. In the following, an example of obtaining the third sub-index set is described by taking the Pearson correlation coefficient of two electrode signals as an example: for the first correlation coefficient, the Pearson correlation coefficient is computed over the full band of the two electrode signals, and for the second correlation coefficient, it is computed over the corresponding frequency band of the two electrode signals.
In some examples, the Pearson correlation coefficient for the full band of two electrode signals may satisfy the formula:

FC = Σ_{k=1}^{N} (x1(k) − Mean1)(x2(k) − Mean2) / ( sqrt( Σ_{k=1}^{N} (x1(k) − Mean1)² ) × sqrt( Σ_{k=1}^{N} (x2(k) − Mean2)² ) )

where FC represents the Pearson correlation coefficient of the two electrode signals, x1(k) and x2(k) represent the two electrode signals (i.e., the electroencephalogram signals 10 of the two electrodes), k is the index of the sampling point, and Mean1 and Mean2 represent the average values of x1(k) and x2(k), respectively. The Pearson correlation coefficients for the corresponding frequency bands of the two electrode signals are obtained in the same way and are not repeated here. Thus, the third sub-index set may include the average Pearson correlation coefficient of the resting-state signal and the average Pearson correlation coefficients of the theta, alpha, beta and gamma waves.
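The functional-connection indices can then be obtained as the mean pairwise Pearson correlation over all electrodes, as in the following sketch (the input is assumed to be an electrodes-by-samples array for either the full-band resting-state signal or one frequency band):

```python
import numpy as np

def functional_connectivity(eeg):
    """eeg: array of shape (n_electrodes, n_samples) holding one frequency band
    (or the full-band resting-state signal) for every electrode.
    Returns the mean pairwise Pearson correlation coefficient (FC)."""
    fc = np.corrcoef(eeg)                       # electrode-by-electrode correlation matrix
    iu = np.triu_indices_from(fc, k=1)          # each electrode pair counted once
    return fc[iu].mean()
```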
In some examples, the types of indexes in the first index set may include a time domain type, a frequency domain type, and a time-frequency domain type. In this case, the indexes in the first index set cover representative indexes of multiple domains, which is advantageous for improving the generalization ability of the classification model.
In some examples, the time domain type of index may include the average standard deviation described above. In some examples, the frequency domain type of index may include the average amplitudes corresponding to the frequency bands described above. For example, the frequency domain indexes may include the average amplitudes of the theta wave, the alpha wave, the beta wave, and the gamma wave. In some examples, the time-frequency domain type of index may include the average coefficients of variation corresponding to the respective frequency bands described above. For example, the time-frequency domain indexes may include the average coefficients of variation of the theta wave, the alpha wave, the beta wave, and the gamma wave.
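The sketch below illustrates how these three types of first-index-set features might be computed for the occipital channels. It assumes the average amplitude and the coefficient of variation are taken from the envelope of the band-passed signal and uses assumed band edges (e.g., theta 4-8 Hz); the patent does not specify these details in this passage, so the definitions, band limits, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Assumed band edges in Hz; the original disclosure does not list them here.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def first_index_set(occipital, fs):
    """Illustrative time, frequency, and time-frequency features for the
    occipital channels of the resting state signal (n_channels, n_samples)."""
    occipital = np.asarray(occipital)
    features = {"mean_std": float(np.std(occipital, axis=1).mean())}   # time domain
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, occipital, axis=1)
        envelope = np.abs(hilbert(filtered, axis=1))        # instantaneous amplitude
        amp = envelope.mean(axis=1)                         # per-channel mean amplitude
        cov = envelope.std(axis=1) / amp                    # per-channel coefficient of variation
        features[f"{name}_mean_amplitude"] = float(amp.mean())    # frequency domain
        features[f"{name}_mean_cov"] = float(cov.mean())          # time-frequency domain
    return features
```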
As described above, with continued reference to fig. 4, step S230 may also include extracting a second index set from the electroencephalogram signal 10 (step S233). In some examples, the second index set may include an event index set. In some examples, the second index set may include event index sets corresponding to the respective types of stimulation events in the task state signal. In some examples, the event index set may be related to the cognitive function of the subject. In some examples, the event index set may include the event-related potential P300, which is an objective test index that reflects the cognitive function of a subject.
In some examples, signal segments may be acquired based on the various types of stimulation events. In some examples, the signal segment within a time window before and after the triggering of each type of stimulation event may be extracted using specialized mathematical software (e.g., MATLAB). For example, the task state signal may be intercepted from 200 ms (milliseconds) before the stimulation event to 800 ms after the stimulation event as one signal segment. In this case, a plurality of signal segments can be acquired based on the respective types of stimulation events.
In some examples, the plurality of signal segments may be averaged according to the type of stimulation event in each segment to obtain a target signal segment corresponding to each type of stimulation event. In some examples, the different types of stimulation events may include standard stimulation events and deviation stimulation events. Thus, the target signal segments may include a standard stimulation signal segment and a deviation stimulation signal segment.
In some examples, the event index set may include the amplitude and latency of each target signal segment. Thus, the event index set may include the amplitude and latency of the standard stimulation signal segment and the amplitude and latency of the deviation stimulation signal segment.
One example of acquiring the second index set is described below, taking the signal segment of the event-related potential P300 as an example. In this case, the amplitude and the latency of the target signal segment may be calculated by calculating the amplitude and the latency of the signal segment of P300, respectively.
In some examples, the amplitude of the standard stimulation signal segment may be the amplitude of P300 of the standard stimulation signal segment (i.e., the amplitude of the standard stimulation P300), and its latency may be the latency of P300 of the standard stimulation signal segment (i.e., the latency of the standard stimulation P300). Likewise, the amplitude of the deviation stimulation signal segment may be the amplitude of P300 of the deviation stimulation signal segment (i.e., the amplitude of the deviation stimulation P300), and its latency may be the latency of P300 of the deviation stimulation signal segment (i.e., the latency of the deviation stimulation P300).
Thus, the second index set may include the amplitude of the standard stimulation P300, the latency of the standard stimulation P300, the amplitude of the deviation stimulation P300, and the latency of the deviation stimulation P300.
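A minimal sketch of this epoching and P300 feature extraction is shown below, assuming a single-channel task state signal sampled at fs Hz and a list of (sample index, event type) markers. The baseline correction over the 200 ms pre-stimulus window, the 250-500 ms peak search window, and the event-type and function names are illustrative assumptions, not details taken from the original disclosure.

```python
import numpy as np

def p300_features(task_signal, events, fs, pre=0.2, post=0.8):
    """Epoch a single-channel task state signal around stimulation events
    and extract the P300 amplitude and latency for each event type."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = {"standard": [], "deviation": []}
    for onset, kind in events:                      # events: (sample index, event type)
        seg = task_signal[onset - n_pre: onset + n_post]
        if len(seg) == n_pre + n_post:
            epochs[kind].append(seg - seg[:n_pre].mean())   # baseline correction

    times = (np.arange(n_pre + n_post) - n_pre) / fs
    win = (times >= 0.25) & (times <= 0.5)          # assumed P300 search window
    features = {}
    for kind, segs in epochs.items():
        if not segs:
            continue
        target = np.mean(segs, axis=0)              # averaged target signal segment
        peak = np.argmax(target[win])               # largest positive deflection
        features[f"{kind}_amplitude"] = float(target[win][peak])
        features[f"{kind}_latency_ms"] = float(times[win][peak] * 1000)
    return features
```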
In some examples, referring to fig. 4, step S230 may further include obtaining an index set based on the first index set and the second index set (step S234). In some examples, the index set may include the first index set and the second index set. In this case, the first index set includes indexes capable of reflecting the intrinsic and inherent activity patterns of the brain, and the second index set includes indexes capable of reflecting the dynamic information of the visual perception process; the classification model 30 is trained based on training data acquired from such an index set, so that the classification model 30 can effectively predict visual fatigue.
As described above, with continued reference to fig. 3, the training method may further include step S250. In step S250, the trained classification model 30 may be obtained based on the index set obtained in step S230 and the visual fatigue label obtained in step S210.
The classification model 30 according to examples of the present disclosure may be any machine learning based model. For example, the classification model 30 may be a conventional machine learning model or a deep learning model. In some examples, the classification model 30 may be one of a support vector machine (SVM) model and a K-nearest neighbor (KNN) classification model based on the K-nearest neighbor classification algorithm.
In addition, the classification model 30 may have corresponding hyperparameters, and the trained classification model 30 may be obtained by determining the hyperparameters of the classification model 30. In some examples, the hyper-parameters may be set before the classification model 30 begins training, and by optimizing the hyper-parameters, an optimal set of hyper-parameters may be selected for the classification model 30 to improve classification performance. In some examples, training the classification model 30 with the hyper-parameters set may obtain a trained classification model 30. That is, the trained classification model 30 has corresponding hyper-parameters.
In some examples, the hyper-parameters may include hyper-parameter terms and hyper-parameter values corresponding to the hyper-parameter terms.
In some examples, the hyper-parameter terms of the support vector machine model may be a penalty factor and the parameters of a kernel function. In some examples, the kernel function of the support vector machine model may be one of a polynomial kernel function and a Gaussian kernel function (which may also be referred to as a radial basis function). For a polynomial kernel function, the parameters of the kernel function may include the polynomial order. For a Gaussian kernel function, the parameters of the kernel function may include the bandwidth of the Gaussian kernel function. Preferably, the hyper-parameter terms of the support vector machine model may include the penalty factor and the bandwidth of the Gaussian kernel function.
In some examples, the hyper-parametric terms of the K-nearest neighbor classification model may include K values (which may be positive integers), distance metrics (e.g., distance metrics may include, but are not limited to, euclidean distances, minkowski distances, manhattan distances, and chebyshev distances, etc.), and distance weights (e.g., distance weights may include, but are not limited to, 1/distance, and 1/distance squared, etc.). Preferably, only the K value and the distance metric can be optimized.
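For illustration only, the two candidate classifiers and their hyper-parameters could be instantiated with scikit-learn as shown below. Note that scikit-learn parameterizes the Gaussian (RBF) kernel by gamma rather than by the bandwidth directly, and the concrete values are placeholders to be replaced by the optimizer described later.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Support vector machine with a Gaussian (RBF) kernel: the penalty factor C and
# the kernel parameter gamma (related to the bandwidth) are the hyper-parameters
# to be optimized; the values below are placeholders.
svm_model = SVC(C=1.0, kernel="rbf", gamma=0.1)

# K-nearest neighbor classifier: the K value and the distance metric are tuned;
# the distance weight is fixed here for simplicity.
knn_model = KNeighborsClassifier(n_neighbors=5, metric="euclidean", weights="distance")
```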
In some examples, the classification model 30 may be trained based on the set of indicators and the visual fatigue labels to obtain a trained classification model 30.
FIG. 5A is a flow chart illustrating one example of obtaining a trained classification model 30 based on an optimizer in accordance with examples of the present disclosure.
In some examples, the trained classification model 30 may be obtained based on the optimizer, the set of metrics, and the visual fatigue labels. The optimizer may have a plurality of individuals, and the optimal individual may be obtained by searching the plurality of individuals. In some examples, the direction of evolution (which may also be referred to as the search direction) may be determined based on fitness, and the optimal individual may be obtained by searching. The fitness may be determined by the classification model 30. Specifically, the trained classification model 30 may be obtained by determining hyper-parameters of the classification model 30 and selecting a subset of indices from the set of indices as the target indices 20 of the classification model 30 using an optimizer. To this end, examples of the present disclosure also provide a method of obtaining a trained classification model 30 based on an optimizer.
In some examples, referring to fig. 5A, the method of obtaining the trained classification model 30 based on the optimizer may include determining elements of individuals of the optimizer based on hyper-parameters and index sets of the classification model 30 (step S251), determining optimal individuals of the optimizer based on fitness of the individuals determined by the classification model 30 and the visual fatigue labels (step S253), and determining hyper-parameters and target indices 20 of the classification model 30 based on the optimal individuals (step S255).
As described above, a hyper-parameter may have hyper-parameter terms. In some examples, in step S251, the individuals of the optimizer may be initialized based on the hyper-parameter terms and the index set. In some examples, the hyper-parameter terms and the indexes in the index set may be mapped to the elements of an individual to determine the elements of the individuals of the optimizer. In some examples, the hyper-parameter terms and the indexes in the index set may be in one-to-one correspondence with the elements of an individual. That is, the number of elements of an individual may be the sum of the number of hyper-parameter terms and the number of indexes in the index set.
In some examples, the preceding elements of an individual may correspond to the hyper-parameter terms, and the following elements may correspond to the indexes in the index set. For example, if the classification model 30 is a support vector machine model with two hyper-parameter terms (a penalty factor and the bandwidth of a Gaussian kernel function) and the number of indexes in the index set is s, the first two elements of an individual may correspond to the hyper-parameter terms, and the last s elements may correspond to the indexes in the index set. Examples of the present disclosure are not limited thereto; the hyper-parameter terms and the indexes may correspond to arbitrary elements of an individual by recording the correspondence between them and the elements. For example, the hyper-parameter terms may correspond to intermediate elements and the indexes to the other elements. As another example, the hyper-parameter terms may correspond to the last elements and the indexes to the preceding elements.
In some examples, when initializing an individual of the optimizer, elements corresponding to the hyper-parameter items and elements corresponding to the index set may be initialized with different upper and lower limits. In some examples, when initializing an individual of the optimizer, elements of the individual corresponding to the set of metrics are initialized based on a first range of variation, and elements of the individual corresponding to the hyper-parametric items may be initialized based on a second range of variation. Therefore, the elements corresponding to the hyper-parameter items and the elements corresponding to the index sets in the individual can be initialized through different variation ranges.
In some examples, referring to fig. 5A, the method of obtaining a trained classification model 30 based on an optimizer may further include determining an optimal individual of the optimizer based on the fitness of the individual determined by the classification model 30 and the visual fatigue label (step S253).
In some examples, in step S253, the fitness of the individual may be determined by the classification model 30 and the visual fatigue label, and the optimal individual for the optimizer is determined based on the fitness of the individual.
In some examples, in determining the fitness of an individual, the hyper-parameter values corresponding to the hyper-parameter terms and an index subset may be obtained based on the individual, the classification model 30 corresponding to the individual may be trained based on the data corresponding to the index subset to obtain a trained classification model 30 corresponding to the individual (hereinafter simply referred to as an individual classification model), and the fitness of the individual may be obtained based on the individual classification model and the visual fatigue label. Specifically, the hyper-parameters of the classification model 30 corresponding to the individual may be set using the hyper-parameter values, and the classification model 30 may be trained using the data set corresponding to the index subset to obtain the individual classification model.
In some examples, the hyper-parameter values corresponding to the hyper-parameter items may be individual element values corresponding to the hyper-parameter items.
As described above, the index subset may be determined on an individual basis. In some examples, the index subset may be selected based on the individual's element values corresponding to the index set after binarization. Specifically, in the binarization processing, the element values of the individual corresponding to the index set may be binarized based on a preset binarization threshold; a binarized element value of 1 indicates that the corresponding index in the index set is selected, and a binarized element value of 0 indicates that the corresponding index is discarded. In this case, indexes can be selected from the index set as the index subset on an individual basis. Therefore, an index subset corresponding to each individual can be obtained and used for training the individual classification model.
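A minimal sketch of this decoding step is given below: it reads the leading elements of an individual as hyper-parameter values and binarizes the remaining elements against a threshold of 0.5 to obtain the index subset. The element ordering, the threshold value, and the function name are illustrative assumptions.

```python
import numpy as np

def decode_individual(individual, n_hyper=2, threshold=0.5):
    """Split one optimizer individual into hyper-parameter values and an index subset.

    The first n_hyper elements are read directly as hyper-parameter values; the
    remaining elements are binarized against `threshold`, where 1 selects the
    corresponding index and 0 discards it.
    """
    individual = np.asarray(individual)
    hyper_values = individual[:n_hyper]
    mask = individual[n_hyper:] > threshold
    selected = np.flatnonzero(mask)        # positions of the selected indexes
    return hyper_values, selected

# Hypothetical usage with a feature matrix X of shape (subjects, indexes):
# hyper, cols = decode_individual(ind)
# X_subset = X[:, cols]
```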
In some examples, a data set corresponding to the subset of metrics may be selected from the data of the set of metrics based on the subset of metrics. In some examples, data corresponding to each of a subset of metrics may be filtered out of the data of the set of metrics, and the subset of metrics and a plurality of metric data corresponding to the subset of metrics may be treated as the set of data.
In some examples, after the individual classification models are obtained by training the classification models 30 corresponding to the individuals based on the data corresponding to the index subsets, the fitness of the individuals may be calculated based on the individual classification models. Specifically, each individual in the optimizer may correspond to an individual classification model, and the fitness of an individual may be calculated based on the individual classification model. Thus, subsequent optimizers can search to determine the optimal individual based on the fitness of the individual.
In some examples, the fitness may be calculated based on parameters of the classification performance of the individual classification models. In some examples, the parameters of classification performance may include at least one of sensitivity, specificity, error rate, and accuracy rate. In some examples, fitness of an individual may be calculated based on the error rate corresponding to the individual classification model. Specifically, the error rate corresponding to the individual classification model may be obtained based on the individual classification model and the visual fatigue label, and the fitness of the individual may be calculated based on the error rate. Thus, the fitness of the individual can be calculated based on the error rate of the individual classification model. In some examples, the visual fatigue results predicted by the individual classification models may be compared to the visual fatigue labels to obtain corresponding error rates for the individual classification models.
In some examples, the fitness of an individual is calculated based on the error rate corresponding to the individual classification model and may satisfy the following formula:
$$\mathrm{fitness}_{num}=\alpha\cdot\mathrm{ErrorRate}_{num}+(1-\alpha)\cdot\frac{\mathrm{Dim}_{num}}{S}$$
where fitness_num may be the fitness of the num-th individual in the optimizer, S may be the number of indexes in the index set, Dim_num may be the dimension of the index subset selected for the num-th individual, ErrorRate_num may represent the error rate corresponding to the classification model 30 trained on the index subset selected for the num-th individual, and α may be a weighting factor. In some examples, α may be 0.98. In some examples, the error rate may be an average error rate. The average error rate may be the average of the predicted error rates corresponding to the multiple verifications in cross-validation. In this case, the fitness of the individual can be obtained by integrating the average error rate and the number of selected indexes, so that the optimizer can perform a search to determine the optimal individual based on the fitness of the individual.
In some examples, the cross-validation may be K-fold cross-validation. In some examples, K may be 5 or 10.
In the following, taking K-fold cross validation and support vector machine model as an example, a process of obtaining an error rate of the support vector machine model is described.
As described above, the hyper-parameter terms of the support vector machine model may be the two terms of a penalty factor and the bandwidth of a Gaussian kernel function, and the number of indexes in the index set is denoted as s; the two hyper-parameter terms may correspond to the first two elements of an individual of the optimizer, and the index set may correspond to the last s elements of the individual. It should be noted that this description of obtaining the error rate does not limit the present disclosure, and the process is also applicable to other measures of classification performance.
In the present embodiment, first, the first two element values of each individual may be input into the support vector machine model (i.e., used as the hyper-parameters of the support vector machine model). Then, an index subset may be selected based on the binarized values of the last s elements of each individual (that is, an index subset is selected based on the binarized element values corresponding to the index set), and a data set corresponding to the index subset may be constructed from the data of the index set. Next, the data set may be divided into K folds using K-fold cross-validation, where in each round K-1 folds are used to train the support vector machine model and the remaining fold is used to test the trained model; repeating this K times yields K prediction error rates. Finally, the K prediction error rates may be averaged to obtain an average error rate, which is used as the error rate for calculating the fitness of the individual.
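Putting the pieces together, the fitness of one individual could be evaluated as sketched below, assuming a scikit-learn support vector machine with an RBF kernel and 5-fold cross-validation; the weighted combination follows the fitness formula above with α = 0.98. The guard against an empty index subset and the function name are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def individual_fitness(individual, X, y, alpha=0.98, k=5):
    """fitness = alpha * mean CV error rate + (1 - alpha) * Dim / S."""
    C, gamma = individual[:2]                       # first two elements: hyper-parameters
    mask = np.asarray(individual[2:]) > 0.5         # remaining elements: index selection
    if not mask.any():                              # guard: penalize empty index subsets
        return 1.0
    model = SVC(C=C, kernel="rbf", gamma=gamma)
    accuracy = cross_val_score(model, X[:, mask], y, cv=k, scoring="accuracy")
    error_rate = 1.0 - accuracy.mean()              # average error over the K folds
    return alpha * error_rate + (1 - alpha) * mask.sum() / mask.size
```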
As described above, in step S253, the optimal individual of the optimizer may be determined based on the fitness of the individual. In some examples, the optimizer may search to determine the optimal individual based on the fitness of the individual. In some examples, the optimal individual may be the individual with the least fitness.
In some examples, referring to fig. 5A, the method of obtaining a trained classification model 30 based on an optimizer may further include determining hyper-parameters and target metrics 20 of the classification model 30 based on the optimal individuals (step S255).
As described above, in some examples, the hyper-parameters of the classification model 30 may be set using the hyper-parameter values and the classification model 30 may be trained using the data sets corresponding to the index subsets to obtain individual classification models. That is, each individual has a corresponding trained classification model 30, a hyper-parameter value, and a subset of indices.
In some examples, the trained classification model 30 corresponding to the optimal individual (i.e., the individual classification model) may serve as the trained classification model 30 for predicting the target asthenopia result 40.
In some examples, the target asthenopia result 40 may be a prediction result obtained by inputting the user's data corresponding to the target index 20 into the classification model 30 before the user uses the stereoscopic display device. In some examples, the prediction result may correspond to a visual fatigue label of the user. In particular, the prediction result may include high-risk visual fatigue and low-risk visual fatigue. In this case, the results of comparing the target asthenopia results 40 with the visual fatigue labels can be integrated to obtain the predictive performance of the classification model 30.
In some examples, the hyper-parameter value corresponding to the optimal individual and the hyper-parameter item corresponding to the hyper-parameter value may be used as the hyper-parameter corresponding to the trained classification model 30, and the index subset corresponding to the optimal individual (i.e. the optimal index subset) may be used as the target index 20. Thereby, the hyper-parameters of the classification model 30 and the target indices 20 can be determined based on the optimal individual. In addition, the target index 20 may represent a visual center sensitivity index of the asthenopia prediction.
In some examples, the optimizer may be an equilibrium optimizer. To this end, examples of the present disclosure also provide a method of determining the optimal individual based on an equilibrium optimizer.
Fig. 5B is a flow chart illustrating an example of a method for determining the optimal individual based on an equilibrium optimizer according to examples of the present disclosure.
As shown in fig. 5B, the method for determining the optimal individual based on the equilibrium optimizer may include initializing the individuals of the equilibrium optimizer (step S300), repeatedly performing the search step until the stop-search condition is reached (step S310), and determining the optimal individual based on the evolutionary set after the search stops (step S330). In this case, the optimal individual can be determined based on the optimization algorithm of the equilibrium optimizer, so that the hyper-parameters and the target indices of the classification model can be determined based on the optimal individual.
In some examples, in step S300, initial values of the elements of the individual may be determined based on a first range of variation corresponding to the set of metrics and a second range of variation corresponding to the hyper-parameters of the classification model 30. In some examples, the initial values of the elements corresponding to the index set may be determined based on a first range of variation, and the initial values of the elements corresponding to the hyperparametric term may be determined based on a second range of variation. In some examples, the second range of variation may include a range of variation for each of the hyper-parameters.
In some examples, taking two hyper-parameters as an example, the initial value of each element of each individual of the equilibrium optimizer may satisfy the following formula:
$$C_{i,j}=\begin{cases}F_l+\gamma_j\,(F_u-F_l), & \text{for elements corresponding to the index set}\\[2pt]\alpha_l+\gamma_j\,(\alpha_u-\alpha_l), & \text{for the element corresponding to the first hyper-parameter}\\[2pt]\beta_l+\gamma_j\,(\beta_u-\beta_l), & \text{for the element corresponding to the second hyper-parameter}\end{cases}$$
where C_{i,j} may represent the initial value of the j-th element of the i-th individual in the equilibrium optimizer, γ_j may represent a random number, [F_l, F_u] may represent the first variation range, and [α_l, α_u] and [β_l, β_u] may respectively represent the variation range corresponding to the first hyper-parameter and the variation range corresponding to the second hyper-parameter. In some examples, γ_j may represent a random number between [0, 1].
In addition, the equilibrium optimizer may have a plurality of individuals. In some examples, the number of individuals may be 10. In some examples, the number of elements per individual may be 20.
Taking the support vector machine model as an example, let the hyper-parameter terms of the support vector machine model be the penalty factor and the bandwidth of the Gaussian kernel function; [F_l, F_u] may be [0, 1], [α_l, α_u] may represent the variation range of the penalty factor and may be [0.001, 1000], and [β_l, β_u] may represent the variation range of the bandwidth of the Gaussian kernel function and may be [0.001, 1000].
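An illustrative initialization routine is sketched below: each element is drawn uniformly from its own variation range, with the hyper-parameter elements placed before the index-set elements. The example call assumes 18 indexes so that each individual has 20 elements (2 hyper-parameters plus 18 indexes); this count and the function name are assumptions made for illustration.

```python
import numpy as np

def init_population(n_individuals, n_indices, hyper_ranges, f_range=(0.0, 1.0), rng=None):
    """Initialize equilibrium-optimizer individuals.

    Each individual holds len(hyper_ranges) hyper-parameter elements followed by
    n_indices index-set elements; every element is drawn uniformly from its own
    [lower, upper] variation range.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lows = np.array([lo for lo, _ in hyper_ranges] + [f_range[0]] * n_indices)
    highs = np.array([hi for _, hi in hyper_ranges] + [f_range[1]] * n_indices)
    gamma = rng.random((n_individuals, lows.size))   # random numbers in [0, 1]
    return lows + gamma * (highs - lows)

# Example matching the text: 10 individuals, penalty factor and bandwidth both in [0.001, 1000]
population = init_population(10, 18, [(0.001, 1000.0), (0.001, 1000.0)])
```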
In some examples, referring to fig. 5B, the method of determining the optimal individual based on the equilibrium optimizer may further include repeatedly performing the search step until the stop-search condition is reached (step S310). In some examples, the stop-search condition may be that the total number of iterations is reached (e.g., the total number of iterations may be 100) or that the fitness of an individual meets a preset requirement.
Fig. 5C is a flowchart of the search step in step S310 in fig. 5B.
In some examples, referring to fig. 5C, the search step in step S310 may include obtaining an evolutionary set based on the fitness of the individuals to determine the evolution direction of the individuals (step S311), obtaining evolution parameters based on the evolutionary set (step S313), updating the individuals of the equilibrium optimizer based on the evolution parameters (step S315), and constraining the individuals of the equilibrium optimizer with the first variation range and the second variation range (step S317).
In some examples, the search step in step S310 may include step S311. In step S311, an evolutionary set may be obtained based on the fitness of the individuals to determine the evolution direction of the individuals. In some examples, n candidate solutions (e.g., n may be 5) may be selected to form an evolutionary set that determines the evolution direction for searching individuals. Specifically, the fitness of all individuals in the equilibrium optimizer may be obtained; the n-1 individuals with the smallest fitness are then taken in order as the first n-1 candidate solutions, and the average of these n-1 candidate solutions is taken as the n-th candidate solution, thereby forming an evolutionary set of n candidate solutions. In some examples, each candidate solution in the evolutionary set may be inherited by a new individual with equal probability.
In some examples, the search step in step S310 may include step S313. In step S313, the evolution parameters may be obtained based on the evolutionary set. In some examples, the evolution parameters may include an update exponential term coefficient and a mass generation rate. In this case, the local search capability of the equilibrium optimizer can be improved.
In some examples, the update exponential term coefficient F and the mass generation rate G may be updated as follows:
$$t=\left(1-\frac{ite}{ite_{\max}}\right)^{a_2\frac{ite}{ite_{\max}}}$$
$$F=a_1\cdot\mathrm{sign}(r-0.5)\left(e^{-\gamma t}-1\right)$$
$$G=GCP\cdot\left(C_{eq}-\gamma C\right)\cdot F,\qquad GCP=\begin{cases}0.5\,\gamma_1, & \gamma_2\ge GP\\0, & \gamma_2<GP\end{cases}$$
where F may represent the update exponential term coefficient, G may represent the mass generation rate, a_1 and a_2 may represent constant terms, sign may represent the sign function, r and γ may represent random vectors, ite may represent the current iteration number, ite_max may represent the total number of iterations, γ_1 and γ_2 may represent random numbers, C may represent the current individual, C_eq may represent the individual inherited from the above evolutionary set, and GP may represent the generation probability.
In some examples, a_1 may be 2 and a_2 may be 1. In some examples, r may represent a random vector in [0, 1]. In some examples, γ may represent a random vector in [0, 1]. In some examples, γ_1 and γ_2 may be random numbers in [0, 1]. In some examples, GP may be 0.5.
In some examples, referring to fig. 5C, the search step in step S310 may further include step S315. In step S315, the individuals of the equilibrium optimizer may be updated based on the evolution parameters. In some examples, in step S315, the equilibrium optimizer may perform a search to generate a new round of all individuals based on the fitness of the individuals.
In some examples, a new round of individuals may satisfy the following formula:
$$C=C_{eq}+\left(C-C_{eq}\right)\cdot F+\frac{G}{\gamma V}\left(1-F\right)$$
where C_eq may represent the individual inherited from the evolutionary set described above, F may represent the update exponential term coefficient, G may represent the mass generation rate, γ may represent a random vector, and V may represent the control volume.
Referring to fig. 5C, in some examples, step S310 may further include step S317. In step S317, the individuals of the equilibrium optimizer may be constrained using the first variation range and the second variation range.
In some examples, the first variation range and the second variation range may be used to constrain, respectively, the element values corresponding to the index set and to the hyper-parameter terms of the individuals in the equilibrium optimizer. Specifically, the element values of the elements corresponding to the index set in an individual are constrained based on the first variation range, and the element values of the elements corresponding to the hyper-parameter terms are constrained based on the second variation range. Thereby, the element values of the individual can be constrained within specific variation ranges.
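The search step (steps S311 to S317) could be sketched as a single update routine, shown below under the standard equilibrium-optimizer formulation with the control volume V fixed to 1 and an evolutionary set built from the four fittest individuals plus their mean (matching the five candidate solutions mentioned above). The pool-selection detail, parameter defaults, and function names are illustrative assumptions rather than the exact procedure of the disclosure.

```python
import numpy as np

def eo_step(population, fitness_fn, lows, highs, ite, ite_max,
            a1=2.0, a2=1.0, gp=0.5, n_best=4, rng=None):
    """One simplified equilibrium-optimizer search step.

    Builds the evolutionary set from the n_best fittest individuals plus their
    mean, updates every individual with the F/G rules, and clips the result
    back into the [lows, highs] variation ranges.
    """
    rng = rng if rng is not None else np.random.default_rng()
    fit = np.array([fitness_fn(ind) for ind in population])
    best = population[np.argsort(fit)[:n_best]]
    pool = np.vstack([best, best.mean(axis=0)])          # n_best + 1 candidate solutions

    t = (1 - ite / ite_max) ** (a2 * ite / ite_max)
    new_pop = np.empty_like(population)
    for i, c in enumerate(population):
        c_eq = pool[rng.integers(len(pool))]             # inherit one candidate solution
        lam = rng.random(c.shape)                        # random vector (gamma in the text)
        r = rng.random(c.shape)
        f = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)
        gcp = 0.5 * rng.random() if rng.random() >= gp else 0.0
        g = gcp * (c_eq - lam * c) * f                   # mass generation term
        new_pop[i] = c_eq + (c - c_eq) * f + g / lam * (1 - f)   # control volume V = 1
    return np.clip(new_pop, lows, highs)                 # range constraint (step S317)
```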
Referring back to fig. 5B, the method of determining optimal individuals based on the equilibrium optimizer may further include determining optimal individuals based on the evolutionary set after the search is stopped (step S330). That is, the evolutionary set after the search is stopped may be acquired and the optimal individual may be determined based on the evolutionary set.
In some examples, the first candidate solution in the evolutionary set may be taken as the optimal individual. In some examples, the index subset corresponding to the optimal individual may be used as the target indices 20, and the hyper-parameters corresponding to the optimal individual may be used as the hyper-parameters of the trained classification model 30. Taking the support vector machine model as an example, the corresponding hyper-parameters of the optimal individual may be the penalty factor and the bandwidth of the Gaussian kernel function.
Hereinafter, a training device of the classification model 30 for predicting visual fatigue predictability according to the present disclosure will be described in detail with reference to the drawings. The training apparatus 100 may also be referred to as a model training apparatus. The training apparatus 100 according to the present disclosure may be used to implement the training method described above.
Fig. 6 is a block diagram of a training apparatus 100 showing a classification model 30 for visual fatigue predictability prediction according to an example of the present disclosure. Fig. 7 is a block diagram illustrating another example of the training apparatus 100 of the classification model 30 for predictive prediction of asthenopia according to an example of the present disclosure.
As shown in fig. 6, in some examples, training apparatus 100 may include an acquisition module 110, an extraction module 130, and a search module 150.
In some examples, the acquisition module 110 may be configured to acquire electroencephalographic signals 10 of a plurality of subjects and corresponding visual fatigue signatures for each subject. In this embodiment, the electroencephalogram signal 10 of the subject may be an electroencephalogram signal of the subject at a first time. In some examples, the brain electrical signals 10 of the individual subjects may include a resting state signal and a task state signal. The first time may be before the subject uses the stereoscopic display device. In some examples, the task state signal may be a brain electrical signal 10 acquired under different types of stimulation events in a preset paradigm. For details, refer to the related description in step S210.
In some examples, the visual fatigue label may be determined from the visual fatigue results corresponding to the subject at the first time and the visual fatigue results corresponding to the second time. The second time may be after the subject uses the stereoscopic display device. For details, refer to the related description in step S210.
In some examples, the extraction module 130 may be configured to obtain a set of metrics based on the brain electrical signal 10 of the subject. In some examples, the set of metrics may include a first set of metrics for the rest state signal and a second set of metrics for the task state signal. In some examples, the first set of metrics includes a first set of sub-metrics, a second set of sub-metrics, and a third set of sub-metrics. Additionally, the first set of sub-indicators may relate to the overall activity of the subject's brain. Additionally, the second sub-set of indicators may relate to the activity of the brain of the subject in the respective frequency bin. Additionally, the third sub-index set may be correlated to brain functional connectivity strength of the subject. In some examples, the second set of metrics may include a set of event metrics. In some examples, the second set of metrics may include a set of event metrics corresponding to respective types of stimulation events in the task state signal. In some examples, the set of event metrics may be related to cognitive function of the subject. For details, refer to the relevant description in step S230.
In some examples, search module 150 may be configured to obtain a trained classification model 30 based on the set of metrics and the visual fatigue labels. In some examples, the search module 150 may be configured to determine elements of a plurality of individuals in the optimizer based on the hyper-parameters and the index set of the classification model 30, and determine an optimal individual of the optimizer based on the fitness of the individual determined by the classification model 30, which in turn determines the hyper-parameters and the target index 20 of the classification model 30 based on the optimal individual to enable determination of the target asthenopia result 40 for the target index 20 using the classification model 30. In addition, the optimizer may be used to search for optimal individuals. For details, refer to the related description in step S250.
In some examples, referring to fig. 7, the training device 100 may also include a pre-processing module 120. The pre-processing module 120 may be configured to pre-process the brain electrical signal 10. In some examples, the pre-processing may include first pre-processing for the rest state signal and second pre-processing for the task state signal. In some examples, the first pre-processing may include at least one of a down-sampling frequency processing, a filtering processing, an interpolation-based bad electrode processing, an artifact removal processing, and an average-based recalibration signal processing. In some examples, the second pre-processing may include at least one of baseline-based correction processing and threshold-based artifact removal processing. For details, refer to the relevant description in step S231.
The training method and the training apparatus 100 according to the examples of the present disclosure may be applied to different types of stereoscopic display devices. For example, for different stereoscopic display devices (e.g., a transmissive VR device, a closed VR device, and a naked eye VR device corresponding to a virtual reality device), corresponding training data may be respectively collected and trained by using the training method, so that a trained classification model 30 corresponding to each stereoscopic display device can be obtained. Based on the trained classification model 30 corresponding to each type of stereoscopic display device, the result of eye fatigue after the user uses (e.g., views) the stereoscopic display device can be predicted before the user uses the stereoscopic display device. Thereby, it is possible to effectively predict visual fatigue in advance for different types of stereoscopic display devices.
Fig. 8 is a comparative schematic diagram illustrating predicted performance for a variety of virtual reality devices to which examples of the present disclosure relate. In order to verify the effectiveness of the scheme related to the present disclosure, verification is respectively performed on three different virtual reality devices.
In this verification, the visual center sensitivity indices (i.e., the target indices 20) selected for the three virtual reality devices are shown in Table 1:
[Table 1: visual center sensitivity indices selected for each virtual reality device (provided as an image in the original publication)]
In addition, the average sensitivity, specificity, and accuracy of the classification model 30 corresponding to each virtual reality device obtained in this verification are shown in Table 2:
Virtual reality device | Sensitivity (%) | Specificity (%) | Accuracy (%)
Transmissive VR device | 84.00 ± 8.94 | 81.33 ± 1.83 | 82.55 ± 4.74
Closed VR device | 79.00 ± 24.60 | 96.00 ± 8.94 | 88.18 ± 16.46
Naked-eye VR device | 96.67 ± 7.45 | 92.00 ± 10.95 | 94.18 ± 5.32
Table 2: Performance of the classification model 30 for each virtual reality device
As can be seen from Table 2, the scheme of the present disclosure achieves excellent visual fatigue prediction performance (accuracy > 80%) on different virtual reality devices. These results demonstrate the reliability and effectiveness of the disclosed scheme, which may in the future be applied to assist visual training based on stereoscopic display devices. Therefore, the trained classification model 30 obtained by the scheme of the present disclosure can provide reliable visual fatigue prediction performance when applied to different virtual reality devices.
Referring to fig. 8, the classification models 30 for the three different virtual reality devices are trained by the scheme of the present disclosure, and the AUC values (Area Under the Curve) of the prediction performance of the trained classification models 30 for visual fatigue prediction on the three devices are respectively: 0.806 ± 0.038 for the transmissive VR device, 0.816 ± 0.239 for the closed VR device, and 0.967 ± 0.0483 for the naked-eye VR device. Therefore, the trained classification model 30 obtained by the scheme of the present disclosure has good prediction performance: the classification model 30 can effectively predict the visual fatigue result of using a virtual reality device before the device is used, and a user can decide, according to the predicted visual fatigue result, whether to adopt the stereoscopic display device for visual training. In this way, a visual training scheme can be established for each user before visual training to improve the user's visual training effect.
According to the training method and the training apparatus 100 of the present disclosure, an index set is obtained based on the electroencephalogram signal 10, the index set including a first index set for the resting state signal and a second index set for the task state signal. The resting state signal is the electroencephalogram signal 10 of the subject in a resting and relaxed state and is the basis of various cognitive activities of the brain, so the first index set can reflect the intrinsic and inherent activity patterns of the brain; the task state signal is the electroencephalogram signal 10 of the brain when performing specific cognitive activities, so the second index set can reflect the dynamic information of the visual perception process. The elements of the individuals of the optimizer are determined based on the hyper-parameters of the classification model 30 and the index set, and the optimal individual of the optimizer is determined based on the fitness of the individuals determined by the classification model 30 and the visual fatigue label, so that the hyper-parameters and the target indices 20 of the classification model 30 are determined based on the optimal individual. In this case, the optimizer-based optimization algorithm and the machine-learned classification model 30 can realize the selection of the target indices 20, hyper-parameter optimization, and prediction by the classification model 30, so that the classification model 30 can be used to determine the target asthenopia result 40 for the target indices 20 before the stereoscopic display device is used. Thus, the present disclosure can provide a training method and a training apparatus 100 for a classification model 30 that reflects the dynamic information of the visual perception process and can predict visual fatigue before a stereoscopic display device is used.
While the present disclosure has been described in detail in connection with the drawings and examples, it should be understood that the above description is not intended to limit the disclosure in any way. Those skilled in the art can make modifications and variations to the present disclosure as needed without departing from the true spirit and scope of the disclosure, which fall within the scope of the disclosure.

Claims (11)

1. A training method for a classification model for predictive visual fatigue prediction, the training method comprising:
acquiring electroencephalogram signals of a plurality of subjects at a first time and visual fatigue labels corresponding to the subjects, wherein the electroencephalogram signals comprise resting state signals and task state signals under different types of stimulation events adopting preset normal forms, the visual fatigue labels are determined by visual fatigue results of the subjects at the first time and visual fatigue results corresponding to a second time, the first time is before the subjects use a stereoscopic display device, and the second time is after the subjects use the stereoscopic display device;
extracting a first index set aiming at the resting state signal in the electroencephalogram signal of each subject, wherein the first index set comprises a first sub-index set related to the overall activity of the brain of the subject, a second sub-index set related to the activity of the brain of the subject in corresponding frequency bands and a third sub-index set related to the brain function connection strength of the subject;
extracting a second index set aiming at the task state signal in the electroencephalogram signal of each subject, wherein the second index set comprises event index sets corresponding to various types of stimulation events in the task state signal, and the event index sets are related to the cognitive function of the subject; and
Determining elements of a plurality of individuals in an optimizer for searching for an optimal individual based on the hyper-parameters and the index set of the classification model, determining the optimal individual based on the fitness of the individual determined by the classification model and the visual fatigue label, and further determining the hyper-parameters and the target index of the classification model based on the optimal individual to achieve determination of a target visual fatigue result for the target index by using the classification model, wherein the index set comprises the first index set and the second index set.
2. Training method according to claim 1, characterized in that:
the frequency band includes a frequency range of at least one of a theta wave, an alpha wave, a beta wave, and a gamma wave.
3. Training method according to claim 2, characterized in that:
the first set of sub-indicators comprises a mean standard deviation; and/or
The second sub-index set comprises average amplitude values and average variation coefficients corresponding to waves of all frequency ranges in the frequency band; and/or
The third sub-index set comprises correlation coefficients between electrodes for the resting state signal and correlation coefficients between electrodes for waves of respective frequency ranges in the frequency band; and/or
Acquiring, in the task state signal, a signal segment within a time window before and after the triggering of each stimulation event, and averaging the signal segments according to the type of stimulation event in each segment to acquire a target signal segment corresponding to each type of stimulation event, wherein the event index set comprises the amplitude and the latency of the target signal segment, and the different types of stimulation events comprise standard stimulation events and deviation stimulation events.
4. A training method as claimed in claim 3, characterized in that:
the average standard deviation in the first sub-index set is a mean of standard deviations of resting state signals of a plurality of channels of the occipital region of the brain of the subject, the average amplitude in the second sub-index set is a mean of amplitudes of resting state signals of a plurality of channels of the occipital region of the brain of the subject, and the average coefficient of variation in the second sub-index set is a mean of coefficients of variation of resting state signals of a plurality of channels of the occipital region of the brain of the subject.
5. A training method as claimed in claim 3, characterized in that:
before extracting the first index set, performing at least one of sampling frequency reduction processing, filtering processing, interpolation-based bad electrode processing, artifact removal processing and mean-based recalibration signal processing on the resting-state signal; and/or
Before extracting the second index set, the signal segment is subjected to at least one of a baseline-based correction process and a threshold-based artifact removal process.
6. Training method according to claim 1, characterized in that:
the types of the indexes in the first index set comprise a time domain type, a frequency domain type and a time-frequency domain type.
7. Training method according to claim 1, characterized in that:
the optimizer is a balanced optimizer, and in determining the optimal individual:
determining an initial value of an element of an individual based on a first variation range corresponding to the index set and a second variation range corresponding to a hyper-parameter of the classification model;
repeatedly executing the following steps until the search stopping condition is reached:
acquiring an evolutionary set based on the fitness of the individual to determine an evolutionary direction of the individual,
obtaining evolution parameters based on the evolutionary set,
updating the individual of the optimizer based on the evolution parameters,
constraining an individual of the optimizer with the first range of variation and the second range of variation;
and acquiring the evolutionary set after the search is stopped and determining the optimal individual based on the evolutionary set.
8. Training method according to claim 1, characterized in that:
the fitness satisfies the formula:
$$\mathrm{fitness}_{num}=\alpha\cdot\mathrm{ErrorRate}_{num}+(1-\alpha)\cdot\frac{\mathrm{Dim}_{num}}{S}$$
wherein fitness_num is the fitness of the num-th individual, S is the number of indexes in the index set, Dim_num is the dimension of the index subset selected for the num-th individual, ErrorRate_num represents the error rate corresponding to the classification model trained on the index subset selected based on the num-th individual, and α is a weight factor, wherein the error rate is an average error rate, the average error rate being the average of the classification error rates corresponding to multiple verifications in cross-validation.
9. Training method according to claim 1, characterized in that:
the stereoscopic display device is used to generate at least one type of output of augmented reality, virtual reality, and mixed reality.
10. Training method according to claim 1, characterized in that:
the classification model is one of a support vector machine model based on a support vector machine and a K nearest neighbor classification model based on a K nearest neighbor classification algorithm, the hyper-parameter items of the support vector machine model are parameters of a penalty factor and a kernel function, and the kernel function is one of a polynomial kernel function and a Gaussian kernel function.
11. The training device for the classification model for predicting the visual fatigue predictability is characterized by comprising an acquisition module, an extraction module and a search module;
the acquisition module is configured to acquire electroencephalogram signals of a plurality of subjects at a first time and visual fatigue labels corresponding to the subjects, wherein the electroencephalogram signals comprise resting state signals and task state signals under different types of stimulation events adopting preset normal forms, the visual fatigue labels are determined by visual fatigue results of the subjects at the first time and visual fatigue results at a second time, the first time is before the subjects use a stereoscopic display device, and the second time is after the subjects use the stereoscopic display device;
the extraction module is configured to extract a first index set aiming at the resting state signal in the electroencephalogram signals of each subject, and extract a second index set aiming at the task state signal in the electroencephalogram signals of each subject, wherein the first index set comprises a first sub-index set related to the overall activity of the brain of the subject, a second sub-index set related to the activity of the brain of the subject in corresponding frequency bands, and a third sub-index set related to the brain function connection strength of the subject, the second index set comprises event index sets corresponding to various types of stimulation events in the task state signals, and the event index sets are related to the cognitive function of the subject; and
the search module is configured to determine elements of a plurality of individuals in an optimizer for searching for an optimal individual based on the hyper-parameters and the index set of the classification model, determine the optimal individual based on the fitness of the individual determined by the classification model and the visual fatigue label, and further determine the hyper-parameters and the target index of the classification model based on the optimal individual to achieve determination of a target visual fatigue result for the target index using the classification model, wherein the index set comprises the first index set and the second index set.
CN202210833552.5A 2022-07-15 2022-07-15 Training method and training device for classification model for predicting visual fatigue predictability Active CN115192043B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210833552.5A CN115192043B (en) 2022-07-15 2022-07-15 Training method and training device for classification model for predicting visual fatigue predictability

Publications (2)

Publication Number Publication Date
CN115192043A true CN115192043A (en) 2022-10-18
CN115192043B CN115192043B (en) 2023-03-31

Family

ID=83582793

Country Status (1)

Country Link
CN (1) CN115192043B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180220885A1 (en) * 2017-02-03 2018-08-09 Sangmyung University Industry-Academy Cooperation Foundation Method and system for noncontact vision-based 3d cognitive fatigue measuring by using task evoked pupillary response
CN109276227A (en) * 2018-08-22 2019-01-29 天津大学 Based on EEG technology to visual fatigue analysis method caused by three-dimensional Depth Motion
CN110215206A (en) * 2019-06-12 2019-09-10 中国科学院自动化研究所 Stereoscopic display visual fatigue evaluation method, system, device based on EEG signals
WO2020151144A1 (en) * 2019-01-24 2020-07-30 五邑大学 Generalized consistency-based fatigue classification method for constructing brain function network and relevant vector machine
CN113887397A (en) * 2021-09-29 2022-01-04 中山大学中山眼科中心 Classification method and classification system of electrophysiological signals based on ocean predator algorithm


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant