CN115137374A - Sleep stage oriented electroencephalogram interpretability analysis method and related equipment - Google Patents


Info

Publication number
CN115137374A
CN115137374A
Authority
CN
China
Prior art keywords
time
sample
sleep
frequency
discrimination
Prior art date
Legal status
Pending
Application number
CN202210723823.1A
Other languages
Chinese (zh)
Inventor
陈丹
殷丁泽
姬一峰
熊明福
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202210723823.1A priority Critical patent/CN115137374A/en
Publication of CN115137374A publication Critical patent/CN115137374A/en
Pending legal-status Critical Current

Classifications

    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B5/4806 Sleep evaluation
    • A61B5/4812 Detecting sleep stages or cycles
    • A61B5/7203 Signal processing for noise prevention, reduction or removal
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7267 Classification of physiological signals or data involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a sleep stage oriented electroencephalogram (EEG) interpretability analysis method and related equipment. The method comprises: constructing sleep stage discrimination rules based on concept features, thereby encoding sleep stage scoring knowledge; fusing the discrimination rules into a sleep staging model to perform discriminative analysis of the sleep state; and training the rule-fused sleep staging model to obtain a parameter-optimized model. When EEG data to be evaluated is input into the optimized model, the model predicts the corresponding sleep staging result while its concept layer module yields the concept-feature-based discrimination rules that support that result. The trained sleep staging model can therefore accompany each staging result with an interpretation that experts can understand, achieving the goal of interpretability analysis.

Description

Sleep stage oriented electroencephalogram interpretability analysis method and related equipment
Technical Field
The invention relates to the technical field of sleep staging, and in particular to a sleep stage oriented electroencephalogram interpretability analysis method and related equipment.
Background
The task of sleep staging is to determine the sleep state from physiological signals recorded during sleep (such as the electroencephalogram, EEG), and it is the first step in diagnosing sleep disorders such as insomnia and hypersomnia. In current clinical practice, neurologists typically inspect the signal traces visually and manually score the overnight recording according to the American Academy of Sleep Medicine (AASM) scoring manual, labeling each epoch with a sleep stage such as Wake, Rapid Eye Movement (REM), or one of the non-REM stages (N1, N2, N3). To reduce this manual burden, a large number of intelligent sleep staging methods based on deep learning have been proposed. Although deep learning performs excellently on the sleep staging task, the internal decision rules of a neural network are hard to understand: such models are designed to output a verdict rather than the evidence behind it. As a result, in the sleep staging task experts struggle to relate the predicted stage to the physiological signal, find it difficult to trust the model, and the clinical value of intelligent sleep staging remains limited. Interpretable deep learning techniques can supply the evidence behind a model's staging result and help turn the intelligent staging model from a black box into a white box. However, existing interpretable deep learning models are difficult to apply directly to EEG-based sleep staging, mainly because experts and models perceive the EEG signal in different ways, so such models struggle to produce a justification that an expert can understand.
Unlike images and text, EEG signals carry little directly readable semantic information; people cannot infer the underlying complex neural activity from subtle changes in the EEG waveform. In the medical field, experts typically summarize brain activity using statistical properties of the EEG (band energy and characteristic waveforms such as sleep spindles), but a model finds it difficult to learn these concepts autonomously. Because medical domain knowledge and the knowledge learned by a staging model cannot be translated into each other, experts cannot understand the basis of the sleep staging model's decisions.
Disclosure of Invention
The main purpose of the invention is to provide a sleep stage oriented electroencephalogram interpretability analysis method and related equipment, aiming to solve the technical problem that the knowledge learned by existing sleep staging models and medical domain knowledge cannot be mutually interpreted, so that medical experts cannot understand the basis of predictions produced by such models.
In a first aspect, the invention provides a sleep stage-oriented electroencephalogram interpretability analysis method, which comprises the following steps:
carrying out time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments;
inputting the time-frequency segments into a trained sleep staging model to obtain a discrimination rule set corresponding to the electroencephalogram data and a sleep staging result;
the step of inputting the time-frequency segments into the trained sleep staging model to obtain the discrimination rule set corresponding to the electroencephalogram data and the sleep staging result comprises the following steps:
inputting the time-frequency segments into a trained feature extraction module to obtain corresponding first features;
inputting the first features into a trained concept layer module to obtain discrimination information corresponding to the plurality of time-frequency segments, wherein each time-frequency segment yields a first preset number of pieces of discrimination information;
processing the discrimination information based on the first activation function to obtain a discrimination rule set corresponding to the electroencephalogram data, wherein each time-frequency segment corresponds to one discrimination rule;
merging the discrimination information corresponding to the time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group;
and processing the second information group based on the second activation function to obtain a sleep stage result corresponding to the electroencephalogram data.
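The inference pipeline above (feature extraction, concept layer, first activation over the discrimination information, merge, classification, second activation) can be sketched with toy numpy stand-ins. All weights, dimensions, and the sigmoid/softmax choices here are hypothetical placeholders for illustration, not the patent's actual trained modules:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SEGMENTS = 4      # time-frequency segments per record (hypothetical)
N_RULES = 6         # "first preset number" of discrimination rules (hypothetical)
N_STAGES = 5        # Wake, N1, N2, N3, REM
FEAT_DIM = 16

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-ins for the trained modules (random weights for illustration only).
W_concept = rng.normal(size=(FEAT_DIM, N_RULES))
W_cls = rng.normal(size=(N_SEGMENTS * N_RULES, N_STAGES))

# "First features" from the feature-extraction module, one vector per segment.
first_features = rng.normal(size=(N_SEGMENTS, FEAT_DIM))

# Concept layer: per-segment discrimination information, one score per rule.
discrimination_info = first_features @ W_concept            # (N_SEGMENTS, N_RULES)

# First activation (sigmoid): rule activations -> discrimination rule set.
rule_set = sigmoid(discrimination_info)                     # each entry in (0, 1)

# Merge into the "first information group" and classify.
first_info_group = discrimination_info.reshape(-1)          # (N_SEGMENTS * N_RULES,)
second_info_group = first_info_group @ W_cls                # (N_STAGES,)

# Second activation (softmax): sleep-stage probabilities.
stage_probs = softmax(second_info_group)
predicted_stage = int(np.argmax(stage_probs))
```

The point of the structure is that `rule_set` exposes, per segment, how strongly each encoded discrimination rule fired; this is the interpretation handed to the expert alongside the staging result in `stage_probs`.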
Optionally, before the step of inputting the time-frequency segments into the trained sleep stage model to obtain the decision rule set corresponding to the electroencephalogram data and the sleep stage result, the method further includes:
performing time-frequency transformation on the preprocessed sample electroencephalogram data to obtain a plurality of sample time-frequency segments;
selecting a second preset number of adjacent sample time-frequency segments, and inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function;
combining the first loss function and the second loss function to obtain a joint loss function;
adjusting parameters of the sleep staging model based on the joint loss function;
detecting whether the joint loss function has converged;
if the joint loss function has not converged, selecting a new set of the second preset number of adjacent sample time-frequency segments and returning to the step of inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function;
if the joint loss function has converged, taking the latest sleep staging model as the trained sleep staging model;
the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function comprises:
inputting the adjacent sample time-frequency segments into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function based on a discrimination rule real value and a discrimination rule predicted value corresponding to the sample electroencephalogram data;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on a second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
and obtaining a second loss function based on the sleep stage real value and the sleep stage predicted value corresponding to the sample electroencephalogram data.
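A minimal sketch of how the first loss (over discrimination-rule predictions), the second loss (over the sleep-stage prediction), and their combination into a joint loss might look. The binary/categorical cross-entropy forms and the weighting factor `lam` are assumptions, since the patent text does not fix the loss functions:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy, averaged: first loss over rule activations."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

def ce(one_hot, probs, eps=1e-12):
    """Categorical cross-entropy: second loss over stage probabilities."""
    return float(-np.sum(one_hot * np.log(np.clip(probs, eps, None))))

# Toy values: 2 segments x 3 rules, 5 sleep stages.
rule_true = np.array([[1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])   # rule ground truth
rule_pred = np.array([[0.9, 0.2, 0.8], [0.1, 0.3, 0.7]])   # sigmoid outputs
stage_true = np.array([0.0, 0.0, 1.0, 0.0, 0.0])           # one-hot: stage N2
stage_pred = np.array([0.05, 0.10, 0.70, 0.10, 0.05])      # softmax outputs

lam = 0.5  # weighting between the two terms (hypothetical)
joint_loss = bce(rule_true, rule_pred) + lam * ce(stage_true, stage_pred)
```

Training then backpropagates `joint_loss`, so the rule supervision and the stage supervision shape the same shared features.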
Optionally, before the step of inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain the first loss function and the second loss function, the method includes:
according to the quantifiable medical concepts in the AASM rules, performing feature extraction on the second preset number of adjacent sample time-frequency segments through signal processing techniques to obtain the corresponding concept features;
discretizing the concept features with an equal-frequency binning method based on the distribution frequency of the concept features to obtain the corresponding discrete features;
performing rule screening on the discrete features with an all-relevant feature selection method based on Shapley values to obtain a discrimination rule set relevant to sleep staging, wherein the discrimination rule set comprises a first preset number of discrimination rules;
and constructing a concept function based on a self-attention mechanism for each discrimination rule, and fusing the concept function based on the self-attention mechanism into a concept layer module of a sleep stage model, wherein the concept layer module comprises a self-attention model layer and a linear layer.
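Equal-frequency binning, one of the steps above, can be illustrated directly: each bin receives roughly the same number of samples, which suits skewed concept features (e.g. band power) better than equal-width bins. The feature name and bin count below are illustrative assumptions:

```python
import numpy as np

def equal_frequency_bins(values, n_bins):
    """Discretize a concept feature so each bin holds ~the same number of samples."""
    # Quantile edges split the empirical distribution into equal-mass pieces.
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    # Digitizing against the interior edges yields bin ids 0 .. n_bins-1.
    return np.digitize(values, edges[1:-1])

rng = np.random.default_rng(1)
alpha_power = rng.exponential(scale=2.0, size=1000)  # skewed "alpha power" feature
bins = equal_frequency_bins(alpha_power, 4)
counts = np.bincount(bins, minlength=4)              # near-equal occupancy per bin
```

Equal-width bins on the same exponential data would instead pile most samples into the lowest bin, making the downstream rule screening much less informative.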
Optionally, the step of inputting the first feature to the trained concept layer module to obtain the discrimination information corresponding to the plurality of time-frequency segments includes:
weighting the first features through the self-attention model layer to obtain second features corresponding to the plurality of time-frequency segments, wherein the second feature of each time-frequency segment comprises a time dimension and a frequency dimension;
and weighting the frequency dimension of the second features through the linear layer to obtain the discrimination information corresponding to the plurality of time-frequency segments.
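A toy sketch of this concept-layer computation: self-attention weights the time steps of one segment's (time, frequency) feature map to produce the second feature, then a linear layer weights the frequency dimension down to a discrimination score. The unprojected attention, the mean pooling, and all shapes are simplifying assumptions:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention (no learned projections,
    for illustration): re-weights the time steps of one time-frequency segment."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # (T, T) similarity of time steps
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # rows are softmax weights
    return w @ X                                     # attended (T, F) output

rng = np.random.default_rng(3)
T, F = 8, 5                     # time and frequency dimensions of a segment
segment = rng.normal(size=(T, F))

# Self-attention layer: the "second feature", still (time, frequency).
second_feature = self_attention(segment)

# Linear layer weighting the frequency dimension -> per-time discrimination values,
# pooled here into a single score for the rule.
w_freq = rng.normal(size=F)
discrimination = second_feature @ w_freq             # shape (T,)
score = float(discrimination.mean())
```

A learned implementation would add query/key/value projections, but the data flow (attention over time, then a linear collapse over frequency) is the same.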
Optionally, the step of performing time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments includes:
performing noise reduction processing on the acquired electroencephalogram data by adopting a down-sampling, band-pass filtering and artifact removing algorithm;
dividing the noise-reduced electroencephalogram data into a plurality of signal segments, each with a preset duration as one frame, according to the AASM (American Academy of Sleep Medicine) scoring standard;
and carrying out multi-window power spectrum analysis on the plurality of signal segments to obtain a plurality of time-frequency segments.
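The three preprocessing steps above can be sketched with scipy on synthetic data. The raw sampling rate, the filter order, and the plain spectrogram standing in for the patent's multi-window power spectrum are all assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, decimate, spectrogram

FS_RAW = 512        # hypothetical acquisition rate
FS = 256            # target rate after down-sampling
EPOCH_S = 30        # AASM scoring epoch length in seconds

rng = np.random.default_rng(2)
raw = rng.normal(size=FS_RAW * 90)  # 90 s of synthetic "EEG"

# 1) Down-sample to 256 Hz (decimate applies an anti-aliasing filter first).
x = decimate(raw, FS_RAW // FS)

# 2) Band-pass 0.5-45 Hz to suppress drift and non-EEG frequency content.
sos = butter(4, [0.5, 45.0], btype="bandpass", fs=FS, output="sos")
x = sosfiltfilt(sos, x)

# 3) Segment into non-overlapping 30-s frames (AASM epochs).
n_epochs = len(x) // (FS * EPOCH_S)
epochs = x[: n_epochs * FS * EPOCH_S].reshape(n_epochs, FS * EPOCH_S)

# 4) Per-epoch time-frequency segment (a Welch-style spectrogram here; the
#    patent uses a multi-window estimate, which further reduces variance).
freqs, times, tf = spectrogram(epochs[0], fs=FS, nperseg=FS * 2)
```

Artifact removal (EMG/EOG contamination) is omitted here because its algorithm is not specified at this point in the text.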
In a second aspect, the present invention also provides a sleep stage-oriented electroencephalogram interpretability analysis apparatus comprising:
the time-frequency transformation module is used for carrying out time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments;
the model output module is used for inputting the time-frequency segments into the trained sleep staging model to obtain a discrimination rule set corresponding to the electroencephalogram data and a sleep staging result;
the model output module is specifically configured to:
inputting the time-frequency segments into a trained feature extraction module to obtain corresponding first features;
inputting the first features into a trained concept layer module to obtain discrimination information corresponding to the plurality of time-frequency segments, wherein each time-frequency segment yields a first preset number of pieces of discrimination information;
processing the discrimination information based on the first activation function to obtain a discrimination rule set corresponding to the electroencephalogram data, wherein each time-frequency segment corresponds to one discrimination rule;
combining the discrimination information corresponding to the time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group;
and processing the second information group based on the second activation function to obtain a sleep stage result corresponding to the electroencephalogram data.
Optionally, the sleep stage oriented electroencephalogram interpretability analysis apparatus further includes a training module, configured to:
performing time-frequency transformation on the preprocessed sample electroencephalogram data to obtain a plurality of sample time-frequency segments;
selecting a second preset number of adjacent sample time-frequency segments, and inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function;
combining the first loss function and the second loss function to obtain a joint loss function;
adjusting parameters of the sleep staging model based on the joint loss function;
detecting whether the joint loss function has converged;
if the joint loss function has not converged, selecting a new set of the second preset number of adjacent sample time-frequency segments and returning to the step of inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function;
if the joint loss function has converged, taking the latest sleep staging model as the trained sleep staging model;
the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function comprises:
inputting the adjacent sample time-frequency segments into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function based on a discrimination rule real value and a discrimination rule predicted value corresponding to the sample electroencephalogram data;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on the second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
and obtaining a second loss function based on the sleep stage real value and the sleep stage predicted value corresponding to the sample electroencephalogram data.
Optionally, the sleep stage-oriented electroencephalogram interpretability analysis apparatus further includes a model construction module configured to:
according to the quantifiable medical concepts in the AASM rules, perform feature extraction on the second preset number of adjacent sample time-frequency segments through signal processing techniques to obtain the corresponding concept features;
discretize the concept features with an equal-frequency binning method based on the distribution frequency of the concept features to obtain the corresponding discrete features;
perform rule screening on the discrete features with an all-relevant feature selection method based on Shapley values to obtain a discrimination rule set relevant to sleep staging, wherein the discrimination rule set comprises a first preset number of discrimination rules;
and construct a concept function based on a self-attention mechanism for each discrimination rule, and fuse the concept function based on the self-attention mechanism into the concept layer module of the sleep staging model, wherein the concept layer module comprises a self-attention model layer and a linear layer.
Optionally, the model output module is further specifically configured to:
weighting the first characteristics through a self-attention model layer to obtain second characteristics corresponding to a plurality of time-frequency segments, wherein the second characteristics corresponding to each time-frequency segment comprise a time dimension and a frequency dimension;
and weighting the frequency dimension of the second characteristic through a linear layer to obtain the discrimination information corresponding to a plurality of time-frequency segments.
Optionally, the time-frequency transform module is further specifically configured to:
performing noise reduction processing on the acquired electroencephalogram data by adopting a down-sampling, band-pass filtering and artifact removing algorithm;
dividing the noise-reduced electroencephalogram data into a plurality of signal segments, each with a preset duration as one frame, according to the AASM (American Academy of Sleep Medicine) scoring standard;
and carrying out multi-window power spectrum analysis on the plurality of signal segments to obtain a plurality of time-frequency segments.
In a third aspect, the present invention further provides a sleep stage oriented electroencephalogram interpretability analysis apparatus, which includes a processor, a memory, and a sleep stage oriented electroencephalogram interpretability analysis program stored on the memory and executable by the processor, wherein when the sleep stage oriented electroencephalogram interpretability analysis program is executed by the processor, the steps of the sleep stage oriented electroencephalogram interpretability analysis method as described above are implemented.
In a fourth aspect, the present invention further provides a readable storage medium, on which a sleep stage oriented electroencephalogram interpretability analysis program is stored, wherein when the sleep stage oriented electroencephalogram interpretability analysis program is executed by a processor, the steps of the sleep stage oriented electroencephalogram interpretability analysis method as described above are implemented.
The invention provides a sleep stage oriented electroencephalogram (EEG) interpretability analysis method and related equipment. The method comprises: constructing sleep stage discrimination rules based on concept features, thereby encoding sleep stage scoring knowledge; fusing the discrimination rules into a sleep staging model to perform discriminative analysis of the sleep state; and training the rule-fused sleep staging model to obtain a parameter-optimized model. When EEG data to be evaluated is input into the optimized model, the model predicts the corresponding sleep staging result while its concept layer module yields the concept-feature-based discrimination rules that support that result. The trained sleep staging model can therefore accompany each staging result with an interpretation that experts can understand, achieving the goal of interpretability analysis.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of sleep stage oriented electroencephalogram interpretability analysis equipment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of a sleep stage oriented electroencephalogram interpretability analysis method according to the present invention;
FIG. 3 is a schematic flow chart of another embodiment of the sleep stage oriented electroencephalogram interpretability analysis method of the present invention;
FIG. 4 is a schematic flow chart of a sleep stage oriented EEG interpretability analysis method according to another embodiment of the present invention;
fig. 5 is a functional block diagram of an embodiment of the sleep stage oriented electroencephalogram interpretability analysis apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In a first aspect, embodiments of the present invention provide an electroencephalogram interpretability analysis apparatus for sleep stages.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a sleep stage oriented electroencephalogram interpretability analysis apparatus according to an embodiment of the present invention. In this embodiment of the present invention, the sleep stage oriented electroencephalogram interpretability analysis apparatus may include a processor 1001 (e.g., a Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used for implementing connection communication among the components; the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface); the memory 1005 may be a Random Access Memory (RAM) or a non-volatile memory, such as a magnetic disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 is not intended to limit the present invention, which may include more or fewer components than those shown, a combination of some components, or a different arrangement of components.
With continued reference to fig. 1, the memory 1005, which is a type of computer storage medium in fig. 1, may include an operating system, a network communication module, a user interface module, and a sleep stage oriented electroencephalogram interpretability analysis program. The processor 1001 may call the sleep stage oriented electroencephalogram interpretability analysis program stored in the memory 1005 and execute the sleep stage oriented electroencephalogram interpretability analysis method provided by the embodiment of the present invention.
In a second aspect, the embodiment of the invention provides an electroencephalogram interpretability analysis method for sleep stages.
Referring to fig. 2, fig. 2 is a schematic flowchart of an embodiment of the sleep stage oriented electroencephalogram interpretability analysis method of the present invention.
In an embodiment of the sleep stage oriented electroencephalogram interpretability analysis method, the sleep stage oriented electroencephalogram interpretability analysis method comprises the following steps:
s10, performing time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments;
in this embodiment, because the electroencephalogram data collected by the sleep monitoring device is an original signal of the whole night, the original signal is not processed, which may affect the evaluation of the sleep stage, for example: (1) The original signal is a whole segment of signal, the duration is long, and the AASM interpretation rule is not met; (2) The original signal includes a noise signal that interferes with the analysis of subsequent sleep sessions. Therefore, the electroencephalogram data need to be preprocessed, and because the main difference of the electroencephalogram signals in different sleep periods appears in frequency distribution, in order to make the electroencephalogram characteristics more prominent and make the judgment result of the sleep periods more accurate, the preprocessed electroencephalogram data need to be subjected to time-frequency transformation, the electroencephalogram signals are converted from time domains to time-frequency domains, and a plurality of time-frequency segments, namely time-frequency graphs corresponding to each electroencephalogram signal segment, are obtained.
Further, in an embodiment, referring to fig. 3, step S10 includes:
s101, performing noise reduction processing on acquired electroencephalogram data by adopting a down-sampling, band-pass filtering and artifact removing algorithm;
step S102, segmenting the electroencephalogram data subjected to noise reduction processing into a plurality of signal segments with preset duration as one frame by adopting an AASM interpretation standard;
and step S103, performing multi-window power spectrum analysis on the plurality of signal segments to obtain a plurality of time-frequency segments.
In this embodiment, specifically, the step of performing time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments comprises: the acquired electroencephalogram data is down-sampled to 256 Hz, and the electroencephalogram signal segment from lights-off to lights-on is selected according to the lights-on and lights-off events recorded during acquisition. Frequency content outside the electroencephalogram range is removed from that segment by 0.5 Hz-45 Hz band-pass filtering, and artifacts such as electromyogram and electrooculogram contamination are removed from the remaining signal. Performing noise reduction on the acquired electroencephalogram data with down-sampling, band-pass filtering, and artifact removal algorithms yields noise-reduced electroencephalogram data, improving its robustness to noise and reducing the interference of noise with subsequent sleep stage analysis.
According to the current AASM interpretation rules, a sleep stage is generally judged on a 30-second signal segment taken as one frame. Therefore, after the noise-reduced electroencephalogram data is obtained, the scheme of this embodiment divides it, following the AASM interpretation standard, into a plurality of non-overlapping signal segments of a preset duration, each treated as one frame. The signal segment of preset duration is taken as the minimum unit of sleep staging, and the sleep stage with the highest score within that segment is taken as the sleep state of the segment.
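A minimal sketch of the framing step described above, assuming the 30-second preset duration at 256 Hz (any trailing remainder shorter than one epoch is simply dropped, which is one possible convention, not something the patent specifies):

```python
import numpy as np

def segment_epochs(signal, fs=256, epoch_sec=30):
    """Split a denoised recording into non-overlapping 30-s frames,
    the minimum scoring unit under the AASM rules; a trailing
    remainder shorter than one epoch is discarded."""
    n = fs * epoch_sec                      # samples per frame (7680)
    n_epochs = len(signal) // n
    return signal[: n_epochs * n].reshape(n_epochs, n)
```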
And performing multi-window power spectrum analysis on the plurality of segmented signal segments, so that the electroencephalogram signals corresponding to the plurality of signal segments can be converted from time domains to time-frequency domains to obtain the power spectral density of the electroencephalogram signals. The multi-window power spectrum analysis is carried out on the signal segments to obtain a plurality of time-frequency segments, so that the electroencephalogram characteristics are more prominent.
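The multi-window power-spectrum step can be sketched as a generic multitaper estimate using DPSS windows; this is one standard way to realize "multi-window power spectrum analysis" and is not necessarily the exact variant used in the patent:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(epoch, fs=256, nw=3.0, n_tapers=5):
    """Multitaper PSD of one epoch: average the periodograms obtained
    with several orthogonal DPSS tapers, which lowers the variance of
    the spectral estimate compared with a single-window periodogram."""
    n = len(epoch)
    tapers = dpss(n, NW=nw, Kmax=n_tapers)             # (n_tapers, n)
    # Periodogram under each taper, then average across tapers.
    spectra = np.abs(np.fft.rfft(tapers * epoch, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```

Applying this per 30-second frame yields the time-frequency segments: one spectrum per frame, stacked along time.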
S20, inputting a plurality of time-frequency segments into the trained sleep staging model to obtain a discrimination rule set corresponding to the electroencephalogram data and a sleep staging result;
referring to fig. 4, the step S20 includes:
step S201, inputting a plurality of time-frequency segments to a trained feature extraction module to obtain corresponding first features;
step S202, inputting the first characteristics to a trained concept layer module to obtain discrimination information corresponding to a plurality of time-frequency segments, wherein each time-frequency segment comprises a first preset number of discrimination information;
step S203, processing the discrimination information based on the first activation function to obtain a discrimination rule set corresponding to the electroencephalogram data, wherein each time-frequency segment corresponds to one discrimination rule;
step S204, combining the discrimination information corresponding to a plurality of time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group;
and S205, processing the second information group based on the second activation function to obtain a sleep stage result corresponding to the electroencephalogram data.
In this embodiment, the plurality of time-frequency segments are input into the trained sleep staging model to obtain the discrimination rule set corresponding to the electroencephalogram data together with the sleep staging result. By combining the discrimination rules with the staging result, an expert in the medical field can understand the discrimination rules, which correspond to medical concepts, and thereby determine the basis of the prediction produced by the sleep staging model. This addresses the technical problem that the knowledge learned by existing sleep staging models and medical domain knowledge cannot be mutually interpreted, which prevents medical experts from understanding the basis of the models' predictions. The sleep staging model capable of outputting the discrimination rule set and the sleep staging result specifically comprises: a feature extraction module, a concept layer module, a first activation function, a classification module, and a second activation function. Specifically, the step of inputting the time-frequency segments into the trained sleep staging model to obtain the discrimination rule set corresponding to the electroencephalogram data and the sleep staging result comprises the following steps:
and inputting the time-frequency segments into the trained feature extraction module to obtain corresponding first features. The characteristic extraction module takes the encoder as a main body, a plurality of layers of encoders are connected in series, and the input dimension and the output dimension of each layer of encoders are kept consistent. After being processed by the multilayer encoder, the time sequence incidence relation in a plurality of time-frequency segments can be extracted, and the time-frequency segments can be converted into high-level internal features, namely first features, the dimension of the first features is T multiplied by F, wherein T is a time dimension and F is a frequency dimension.
And inputting the obtained first features into the trained concept layer module to obtain the discrimination information corresponding to the plurality of time-frequency segments, wherein each time-frequency segment comprises a first preset number of pieces of discrimination information. In the concept layer module, a concept function based on the self-attention mechanism (Self-Attention) is correspondingly set for each discrimination rule; the concept function comprises a self-attention model layer and a linear layer, realizing the decoding from the obtained first feature to the discrimination rule. The concept functions of the concept layer module are h(·) = {h_1, …, h_k, …, h_K}, where h_k is the concept function corresponding to the k-th discrimination rule and K is the number of discrimination rules. The corresponding outputs of the concept functions are spliced together to give the discrimination information of the plurality of time-frequency segments, each segment comprising a first preset number of pieces of discrimination information.
And processing the obtained discrimination information corresponding to each time-frequency segment based on a first activation function, so as to obtain a discrimination rule set corresponding to the electroencephalogram data as a discrimination basis for the later sleep stage, wherein each time-frequency segment corresponds to one discrimination rule, and the first activation function can adopt a sigmoid function.
And merging the discrimination information corresponding to the time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group. The classification module takes the encoder as a main body, the encoders on a plurality of layers are connected in series, and the input dimension and the output dimension of each layer of encoder are kept consistent. After the processing of the multi-layer encoder, the time sequence association relationship between each time-frequency segment in the first information group can be obtained, and the first information group is encoded to obtain a second information group.
And processing the second information group obtained by the encoding processing based on a second activation function, and distributing the probability that each time-frequency segment belongs to each sleep stage to obtain a sleep stage result corresponding to the electroencephalogram data.
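By the description above, the first activation function is a sigmoid and the second activation "distributes the probability that each time-frequency segment belongs to each sleep stage", which reads naturally as a softmax-style normalization; treating it as a softmax is an assumption. A minimal sketch of both:

```python
import numpy as np

def sigmoid(x):
    """First activation: squashes each piece of discrimination
    information to (0, 1), so each rule reads as firing or not."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    """Second activation (assumed): turns the encoded second
    information group into a probability distribution over the
    sleep stages for each time-frequency segment."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)
```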
Further, in an embodiment, step S202 further includes:
weighting the first characteristics through a self-attention model layer to obtain second characteristics corresponding to a plurality of time-frequency segments, wherein the second characteristics corresponding to each time-frequency segment comprise a time dimension and a frequency dimension;
and weighting the frequency dimension of the second characteristic through a linear layer to obtain the discrimination information corresponding to a plurality of time-frequency segments.
In this embodiment, specifically, the step of inputting the first feature to the trained concept layer module to obtain the discrimination information corresponding to the plurality of time-frequency segments further includes:
And performing self-attention learning on the first feature corresponding to each time-frequency segment through the self-attention model layer to obtain second features corresponding to the plurality of time-frequency segments, wherein the second feature of each time-frequency segment comprises a time dimension and a frequency dimension. The attention mechanism can be regarded as a weighting of the internal features, calculated as:

z_k = Σ_{t=1}^{T} α_t · x_t^(i)

where z_k is the result of the self-attention model corresponding to the k-th discrimination rule, each of the K independent concept functions having its corresponding self-attention model; x^(i) = {x_1^(i), …, x_t^(i), …, x_T^(i)} is the first feature; T is the time dimension; and α_t, the weight corresponding to the t-th frequency vector, determines the attention the model pays to that vector and is calculated as:

e_t = tanh(W_a · x_t^(i) + b_a)

α_t = exp(e_t) / Σ_{τ=1}^{T} exp(e_τ)

where W_a and b_a are trainable model parameters. The weighted second feature z*^(i) = {z_1, …, z_k, …, z_K} is obtained through the concept functions corresponding to the K self-attention models; each second feature z*^(i) has dimension K × F, where K is the number of discrimination rules.
The frequency dimension of each second feature z*^(i) is weighted by K separate linear layers to obtain the discrimination information corresponding to the plurality of time-frequency segments, x_c^(i), where x_c^(i) is the discrimination information of the i-th time-frequency graph and has dimension K. Each linear layer is calculated as:

x_c = W_c · z* + b_c

where W_c and b_c are trainable model parameters.
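The concept-layer forward pass can be sketched in NumPy as follows. The sizes T, F and K, the small random parameters, and the choice of one attention-score vector per rule are illustrative assumptions (the patent's text is ambiguous about how the K concept functions share the self-attention layer), so this is a sketch of the shape of the computation, not the patented implementation:

```python
import numpy as np

def concept_layer(x, W_a, b_a, W_c, b_c):
    """Concept layer for one segment.  x: first feature, shape (T, F).
    Attention pools the time dimension into K second features z* of
    shape (K, F); K per-rule linear layers then weight the frequency
    dimension, yielding K scalars of discrimination information."""
    e = np.tanh(x @ W_a.T + b_a)               # (T, K) attention scores
    alpha = np.exp(e) / np.exp(e).sum(axis=0)  # softmax over the time axis
    z_star = alpha.T @ x                       # (K, F) second feature z*
    return (W_c * z_star).sum(axis=1) + b_c    # (K,) discrimination info

rng = np.random.default_rng(0)
T, F, K = 29, 129, 24                          # illustrative sizes
x = rng.standard_normal((T, F))
info = concept_layer(x,
                     rng.standard_normal((K, F)) * 0.1, np.zeros(K),
                     rng.standard_normal((K, F)) * 0.1, np.zeros(K))
```

The sigmoid of `info` would then give the per-rule discrimination values described in step S203.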
Further, in an embodiment, before step S20, the method further includes:
performing time-frequency transformation on the preprocessed sample electroencephalogram data to obtain a plurality of sample time-frequency segments;
screening out a second preset number of adjacent sample time-frequency segments, and inputting the second preset number of adjacent sample time-frequency segments into the sleep stage model to obtain a first loss function and a second loss function;
combining the first loss function and the second loss function to obtain a joint loss function;
adjusting parameters of the sleep staging model based on the joint loss function;
detecting whether the joint loss function is converged;
if the joint loss function is not converged, taking the screened new adjacent sample time-frequency segments with the second preset number as the adjacent sample time-frequency segments with the second preset number, and returning to execute the step of inputting the adjacent sample time-frequency segments with the second preset number into the sleep stage model to obtain a first loss function and a second loss function;
if the combined loss function is converged, taking the latest sleep staging model as a trained sleep staging model;
the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function comprises:
inputting the time-frequency segments of the adjacent samples into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function based on a discrimination rule real value and a discrimination rule predicted value corresponding to the sample electroencephalogram data;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on the second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
and obtaining a second loss function based on the sleep stage real value and the sleep stage predicted value corresponding to the sample electroencephalogram data.
In this embodiment, before the decision rule set corresponding to the electroencephalogram data and the sleep stage result are obtained based on the trained sleep stage model, the constructed sleep stage model is trained to train trainable parameters of the model to a better level.
Sample electroencephalogram data is selected and preprocessed to obtain a plurality of sample time-frequency segments corresponding to it; these are used to train the sleep staging model. To ensure that the sample time-frequency data can train the sleep staging model the preset number of times, a second preset number of adjacent sample time-frequency segments are screened out each time and input into the sleep staging model to obtain a first loss function L_C and a second loss function L_Y.

Combining the first loss function L_C and the second loss function L_Y yields a joint loss function, and the parameters of the sleep staging model are adjusted based on it. This embodiment trains the model with batch gradients and the Adam optimizer, setting the learning rate to 10^-3. The loss function is optimized by iterative training with an adaptive learning-rate decay strategy: if the model's loss does not decrease within 5 training epochs, the learning rate is halved, with a minimum learning rate of 10^-6. An early-stopping method is applied to avoid overfitting: if the loss does not decrease within 20 epochs, training stops. The model parameters can be trained to a good level in this iterative manner.
Each time the parameters of the sleep staging model are adjusted based on the obtained joint loss function, it is detected whether the joint loss function of this epoch has converged relative to the joint loss function of the previous training epoch. If it has not converged, training continues: a new second preset number of adjacent sample time-frequency segments is screened out, and the step of inputting them into the sleep staging model to obtain the first and second loss functions is executed again. If the joint loss function has converged, training ends and the latest sleep staging model is taken as the trained sleep staging model.
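The training schedule described above (learning rate 10^-3, halved after 5 stagnant epochs with a floor of 10^-6, early stopping after 20 stagnant epochs) can be sketched as a generic loop; `step_fn` is a placeholder for one Adam-based pass over the batches and is not part of the patent:

```python
def train(model, step_fn, max_epochs=500, lr=1e-3, min_lr=1e-6,
          decay_patience=5, stop_patience=20):
    """Iterate step_fn(model, lr) -> epoch loss, halving the learning
    rate after `decay_patience` epochs without improvement (floored at
    min_lr) and stopping after `stop_patience` stagnant epochs."""
    best, since_decay, since_best = float("inf"), 0, 0
    for _ in range(max_epochs):
        loss = step_fn(model, lr)          # one pass over all batches
        if loss < best:
            best, since_decay, since_best = loss, 0, 0
        else:
            since_decay += 1
            since_best += 1
            if since_decay >= decay_patience:
                lr, since_decay = max(lr / 2.0, min_lr), 0
            if since_best >= stop_patience:
                break                      # early stopping
    return best
```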
Specifically, the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function includes:
inputting the time-frequency segments of the adjacent samples into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function, L_C_j(h_j(g(x^(i))); c_j^(i)), based on the discrimination rule true value (a label value obtained when the sample electroencephalogram data is processed) corresponding to the sample electroencephalogram data and the discrimination rule predicted value, wherein h_j(g(x^(i))) is the predicted value of the j-th discrimination rule and c_j^(i) is its true value;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on the second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
based on the sleep stage real value (the label value obtained when the sample electroencephalogram data is processed) and the sleep stage predicted value corresponding to the sample electroencephalogram data, a second loss function, L_Y(f(h(g(x^(i)))); y^(i)), is obtained, wherein f(h(g(x^(i)))) is the sleep staging result and y^(i) is the true sleep stage label.
Further, in an embodiment, before the step S02, the method further includes:
according to quantifiable medical concepts in the AASM rules, feature extraction is carried out on the adjacent sample time-frequency segments with the second preset number through a signal processing technology, and corresponding concept features are obtained;
discretizing the conceptual features by adopting an equal-frequency binning method based on the distribution frequency of the conceptual features to obtain corresponding discrete features;
carrying out rule screening on the discrete features by a fully-relevant feature selection method based on the Shapley value to obtain a discrimination rule set relevant to sleep stages, wherein the discrimination rule set comprises a first preset number of discrimination rules;
and constructing a concept function based on a self-attention mechanism for each discrimination rule, and fusing the concept function based on the self-attention mechanism into a concept layer module of a sleep stage model, wherein the concept layer module comprises a self-attention model layer and a linear layer.
In this embodiment, before the step of inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain the first loss function and the second loss function, a discrimination rule needs to be constructed and incorporated into the sleep staging model, so that the discrimination rule corresponding to the medical concept can be obtained based on the sleep staging result obtained by the trained sleep staging model. Specifically, the step of constructing a discrimination rule and incorporating the discrimination rule into the sleep staging model includes:
and according to quantifiable medical concepts in the AASM rules, performing feature extraction on the adjacent sample time-frequency segments with the second preset number through a signal processing technology to obtain corresponding concept features. The concept features selected by the scheme of the embodiment are 24, and are mainly classified into three categories, namely:
(1) Band energy. According to the power spectral density X = { X corresponding to the signal segment 1 ,...,x i ,...,x N Respectively calculating Delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and Beta (0.5-4 Hz)>12 Hz) frequency bands, wherein the corresponding relationship between each frequency band and the sleep period is: the Delta band is more prominent in stage N3, and the Theta band is a marker of stage N1, alpha and Beta bands for distinguishing between stage W and stage N1. A total of 4 energy signatures were obtained.
(2) Entropy. These features quantify the complexity of the electroencephalogram sequence. The scheme of this embodiment adopts approximate entropy, permutation entropy, wavelet entropy and multi-scale entropy as measures, with the scale of the multi-scale entropy set to 13. A total of 16 entropy features are obtained.
(3) Basic signal attributes. The scheme of this embodiment adopts kurtosis, mean, variance and zero crossings as basic attributes of the signal. Kurtosis can detect sharp peaks and troughs in the electroencephalogram signal, which helps identify special waveforms such as K-complexes. A total of 4 basic features are obtained.
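The band-energy features of category (1) can be sketched as follows, assuming a PSD on a uniform frequency grid. Capping the Beta band (described only as >12 Hz) at 45 Hz is an assumption tied to the 0.5-45 Hz band-pass filter described earlier:

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 12), "beta": (12, 45)}

def band_energies(freqs, psd):
    """Integrate one segment's PSD over the four classical EEG bands
    via a simple rectangle rule on the uniform frequency grid."""
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}
```

With the multitaper PSD of a frame as input, the four returned values are the 4 energy features of category (1).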
In the scheme of this embodiment, the conceptual features are discretized by an equal-frequency binning method based on their distribution, yielding 0-1 discrete features. A continuous conceptual feature is divided into 3 discrete features using three thresholds: 1) the 25% quantile of the sample feature; 2) the 50% quantile; and 3) the 75% quantile. Each discrete feature indicates whether the value exceeds the corresponding quantile, so a value greater than the 25% quantile but less than the 50% quantile is discretized to [1, 0, 0].
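A sketch of the binning as interpreted above, where each 0-1 feature indicates whether the value exceeds the corresponding sample quantile (this threshold-indicator reading is inferred from the [1, 0, 0] example):

```python
import numpy as np

def discretize(feature_values):
    """Equal-frequency binning of one continuous concept feature:
    compare each value against the 25%/50%/75% sample quantiles and
    emit one 0-1 indicator per threshold, e.g. a value between the
    25% and 50% quantiles becomes [1, 0, 0]."""
    q25, q50, q75 = np.percentile(feature_values, [25, 50, 75])
    v = np.asarray(feature_values)
    return np.stack([(v > q25), (v > q50), (v > q75)], axis=1).astype(int)
```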
In order to ensure the correlation between the discrimination rules and the sleep states, and to prevent the model from trading off interpretability (medical concept prediction) against accuracy (sleep stage prediction) during training, which would result in low concept accuracy or low prediction accuracy, the invention uses a fully-relevant feature selection method based on the Shapley value to screen the discrete features, obtaining a sleep-stage-related discrimination rule set C = {c_1, …, c_i, …, c_N} containing a first preset number of discrimination rules, where c_i is the discrimination rule of the i-th time slice and each c_i contains K pieces of discrimination information.
And constructing a conceptual function based on a self-attention mechanism for each judgment rule, and fusing the conceptual function based on the self-attention mechanism into a conceptual layer module of a sleep stage model, wherein the conceptual layer module comprises a self-attention model layer and a linear layer.
In this embodiment, an electroencephalogram interpretability analysis method for sleep staging is provided, comprising: constructing sleep-stage discrimination rules based on conceptual features, thereby encoding sleep-stage interpretation knowledge; fusing the discrimination rules into the sleep staging model to perform discriminative analysis of the sleep state; training the rule-fused sleep staging model to obtain a sleep staging model with optimized parameters; and inputting the electroencephalogram data to be evaluated into the optimized sleep staging model, which predicts the sleep staging result corresponding to the data while the concept layer module correspondingly yields the discrimination rules based on conceptual features. The trained sleep staging model of the invention can thus provide an expert-understandable explanation for the output sleep staging result, achieving the purpose of interpretability analysis.
In a third aspect, the embodiment of the invention further provides an electroencephalogram interpretability analysis device for sleep stages.
Referring to fig. 5, a functional module diagram of an embodiment of the sleep stage oriented electroencephalogram interpretability analysis apparatus.
In this embodiment, the sleep stage-oriented electroencephalogram interpretability analysis apparatus includes:
the time-frequency transformation module 10 is used for performing time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments;
the model output module 20 is configured to input the time-frequency segments into the trained sleep stage model to obtain a discrimination rule set and a sleep stage result corresponding to the electroencephalogram data;
the model output module 20 is specifically configured to:
inputting the time-frequency segments into a trained feature extraction module to obtain corresponding first features;
inputting the first characteristics into a trained concept layer module to obtain discrimination information corresponding to a plurality of time-frequency segments, wherein each time-frequency segment comprises a first preset number of discrimination information;
processing the discrimination information based on the first activation function to obtain a discrimination rule set corresponding to the electroencephalogram data, wherein each time-frequency segment corresponds to one discrimination rule;
combining the discrimination information corresponding to the time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group;
and processing the second information group based on the second activation function to obtain a sleep stage result corresponding to the electroencephalogram data.
Further, in an embodiment, the sleep stage oriented electroencephalogram interpretability analysis apparatus further includes a training module, configured to:
performing time-frequency transformation on the preprocessed sample electroencephalogram data to obtain a plurality of sample time-frequency segments;
screening out a second preset number of adjacent sample time-frequency segments, and inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function;
combining the first loss function and the second loss function to obtain a joint loss function;
adjusting parameters of the sleep staging model based on the joint loss function;
detecting whether the joint loss function is converged;
if the combined loss function is not converged, taking the screened new adjacent sample time-frequency segments with the second preset number as the adjacent sample time-frequency segments with the second preset number, and returning to execute the step of inputting the adjacent sample time-frequency segments with the second preset number into the sleep stage model to obtain a first loss function and a second loss function;
if the combined loss function is converged, taking the latest sleep staging model as a trained sleep staging model;
the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function comprises:
inputting the adjacent sample time-frequency segments into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function based on a discrimination rule real value and a discrimination rule predicted value corresponding to the sample electroencephalogram data;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on a second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
and obtaining a second loss function based on the sleep stage real value and the sleep stage predicted value corresponding to the sample electroencephalogram data.
Further, in an embodiment, the sleep stage oriented electroencephalogram interpretability analysis apparatus further includes a construction module configured to:
according to quantifiable medical concepts in the AASM rules, feature extraction is carried out on the adjacent sample time-frequency segments with the second preset number through a signal processing technology, and corresponding concept features are obtained;
discretizing the conceptual features by adopting an equal-frequency binning method based on the distribution frequency of the conceptual features to obtain corresponding discrete features;
carrying out rule screening on the discrete features by a fully-relevant feature selection method based on the Shapley value to obtain a discrimination rule set relevant to sleep stages, wherein the discrimination rule set comprises a first preset number of discrimination rules;
and constructing a concept function based on a self-attention mechanism for each discrimination rule, and fusing the concept function based on the self-attention mechanism into a concept layer module of a sleep stage model, wherein the concept layer module comprises a self-attention model layer and a linear layer.
Further, in an embodiment, the model output module 20 is further specifically configured to:
weighting the first characteristics through a self-attention model layer to obtain second characteristics corresponding to a plurality of time-frequency segments, wherein the second characteristics corresponding to each time-frequency segment comprise a time dimension and a frequency dimension;
and weighting the frequency dimension of the second characteristic through a linear layer to obtain the discrimination information corresponding to a plurality of time-frequency segments.
Further, in an embodiment, the time-frequency transform module 10 is further specifically configured to:
performing noise reduction processing on the acquired electroencephalogram data by adopting a down-sampling, band-pass filtering and artifact removing algorithm;
dividing the electroencephalogram data subjected to noise reduction processing into a plurality of signal segments with preset duration as one frame by adopting the AASM interpretation standard;
and performing multi-window power spectrum analysis on the plurality of signal segments to obtain a plurality of time-frequency segments.
The function realization of each module in the sleep stage oriented electroencephalogram interpretability analysis device corresponds to each step in the sleep stage oriented electroencephalogram interpretability analysis method embodiment, and the function and the realization process are not repeated herein.
In a fourth aspect, the embodiment of the present invention further provides a readable storage medium.
The readable storage medium of the invention is stored with a sleep stage-oriented electroencephalogram interpretability analysis program, wherein when the sleep stage-oriented electroencephalogram interpretability analysis program is executed by a processor, the steps of the sleep stage-oriented electroencephalogram interpretability analysis method are realized.
The method for implementing the sleep stage-oriented electroencephalogram interpretability analysis program when executed can refer to various embodiments of the sleep stage-oriented electroencephalogram interpretability analysis method, and details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a terminal device to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A sleep stage oriented electroencephalogram interpretability analysis method is characterized by comprising the following steps:
carrying out time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments;
inputting the time-frequency segments into a trained sleep staging model to obtain a discrimination rule set corresponding to the electroencephalogram data and a sleep staging result;
the step of inputting the time-frequency segments into the trained sleep staging model to obtain the discrimination rule set corresponding to the electroencephalogram data and the sleep staging result comprises the following steps:
inputting the time-frequency segments into a trained feature extraction module to obtain corresponding first features;
inputting the first characteristics into a trained concept layer module to obtain discrimination information corresponding to a plurality of time-frequency segments, wherein each time-frequency segment comprises a first preset number of discrimination information;
processing the discrimination information based on the first activation function to obtain a discrimination rule set corresponding to the electroencephalogram data, wherein each time-frequency segment corresponds to one discrimination rule;
merging the discrimination information corresponding to the time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group;
and processing the second information group based on the second activation function to obtain a sleep stage result corresponding to the electroencephalogram data.
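A minimal numerical sketch of the tail of the claim-1 pipeline, assuming a sigmoid as the first activation function (so each rule "fires" when its score exceeds a threshold) and a softmax as the second (stage probabilities). The threshold, weight shapes, and function names are hypothetical stand-ins, not fixed by the claim:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stage_epoch(discrim_info, w_cls, b_cls, threshold=0.5):
    """discrim_info: (n_segments, n_rules) concept-layer outputs.
    Returns fired rules per segment and a stage index (e.g. W/N1/N2/N3/REM)."""
    # First activation: a rule fires when its sigmoid score exceeds the threshold.
    rule_set = sigmoid(discrim_info) > threshold       # (n_segments, n_rules)
    # Merge per-segment discrimination information into the first information group.
    merged = discrim_info.reshape(-1)
    logits = merged @ w_cls + b_cls                    # second information group
    stage_probs = softmax(logits)                      # second activation
    return rule_set, int(np.argmax(stage_probs))
```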
2. The sleep stage-oriented electroencephalogram interpretability analysis method of claim 1, wherein before the step of inputting a plurality of time-frequency segments into the trained sleep stage model to obtain a discrimination rule set corresponding to electroencephalogram data and a sleep stage result, the method further comprises:
performing time-frequency transformation on the preprocessed sample electroencephalogram data to obtain a plurality of sample time-frequency segments;
screening out a second preset number of adjacent sample time-frequency segments, and inputting the second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function;
combining the first loss function and the second loss function to obtain a joint loss function;
adjusting parameters of the sleep staging model based on the joint loss function;
detecting whether the joint loss function is converged;
if the combined loss function is not converged, taking the screened new adjacent sample time-frequency segments with the second preset number as the adjacent sample time-frequency segments with the second preset number, and returning to execute the step of inputting the adjacent sample time-frequency segments with the second preset number into the sleep stage model to obtain a first loss function and a second loss function;
if the combined loss function is converged, taking the latest sleep staging model as a trained sleep staging model;
the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep staging model to obtain a first loss function and a second loss function comprises:
inputting the adjacent sample time-frequency segments into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function based on a discrimination rule real value and a discrimination rule predicted value corresponding to the sample electroencephalogram data;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on the second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
and obtaining a second loss function based on a sleep stage real value and the sleep stage predicted value corresponding to the sample electroencephalogram data.
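Assuming the first loss is a binary cross-entropy over rule predictions and the second a cross-entropy over the stage prediction (the claim does not fix the loss forms), the joint loss of claim 2 might be combined as a weighted sum; the weight `lam` is an illustrative hyperparameter:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """First loss: binary cross-entropy between true and predicted rules."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def cross_entropy(stage_true, stage_probs, eps=1e-7):
    """Second loss: negative log-likelihood of the true stage index."""
    return -np.log(np.clip(stage_probs[stage_true], eps, 1.0))

def joint_loss(rule_true, rule_pred, stage_true, stage_probs, lam=0.5):
    """Combine the two losses; training stops when this value converges."""
    return bce(rule_true, rule_pred) + lam * cross_entropy(stage_true, stage_probs)
```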
3. The sleep stage oriented electroencephalogram interpretability analysis method of claim 2, wherein the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep stage model to obtain the first loss function and the second loss function comprises:
according to quantifiable medical concepts in the AASM rules, feature extraction is carried out on the adjacent sample time-frequency segments with the second preset number through a signal processing technology, and corresponding concept features are obtained;
discretizing the conceptual features by adopting an equal-frequency binning method based on the distribution frequency of the conceptual features to obtain corresponding discrete features;
carrying out rule screening on the discrete features by an all-relevant feature selection method based on Shapley values to obtain a discrimination rule set relevant to sleep stages, wherein the discrimination rule set comprises a first preset number of discrimination rules;
and constructing a concept function based on a self-attention mechanism for each discrimination rule, and fusing the concept function based on the self-attention mechanism into a concept layer module of a sleep stage model, wherein the concept layer module comprises a self-attention model layer and a linear layer.
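The equal-frequency (quantile) binning step of claim 3 can be sketched with quantile cut points, so that each bin holds roughly the same number of samples; `n_bins` is an illustrative choice:

```python
import numpy as np

def equal_frequency_bins(values, n_bins=4):
    """Discretise one concept feature so each bin holds roughly the
    same number of samples (equal-frequency / quantile binning)."""
    # Interior quantiles become the cut points between bins.
    quantiles = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(values, quantiles)  # bin index 0..n_bins-1 per sample
```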
4. The sleep stage oriented electroencephalogram interpretability analysis method of claim 3, wherein the step of inputting the first feature into a trained concept layer module to obtain discrimination information corresponding to a plurality of time-frequency segments comprises:
weighting the first characteristics through a self-attention model layer to obtain second characteristics corresponding to a plurality of time-frequency segments, wherein the second characteristics corresponding to each time-frequency segment comprise a time dimension and a frequency dimension;
and weighting the frequency dimension of the second characteristic through a linear layer to obtain the discrimination information corresponding to a plurality of time-frequency segments.
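A toy sketch of the concept-layer computation in claim 4: self-attention re-weights the time dimension of the first feature, then a linear layer weights the frequency dimension down to one discrimination score per rule. All weight matrices here are hypothetical stand-ins for trained parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def concept_layer(feat, wq, wk, wv, w_freq):
    """feat: (T, F) first feature of one time-frequency segment.
    Returns one discrimination score per rule."""
    q, k, v = feat @ wq, feat @ wk, feat @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # (T, T) attention weights
    second_feature = attn @ v                        # time x frequency dims kept
    # Pool over time, then linearly weight the frequency dimension.
    return second_feature.mean(axis=0) @ w_freq
```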
5. The sleep stage oriented electroencephalogram interpretability analysis method of claim 1, wherein the step of performing time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments comprises the following steps:
performing noise reduction processing on the acquired electroencephalogram data by adopting a down-sampling, band-pass filtering and artifact removing algorithm;
dividing the electroencephalogram data subjected to noise reduction processing into a plurality of signal segments, with a preset duration as one frame, in accordance with the AASM (American Academy of Sleep Medicine) scoring standard;
and performing multi-window power spectrum analysis on the plurality of signal segments to obtain a plurality of time-frequency segments.
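Under assumed sampling rates, pass band, and a 30 s epoch length (the AASM manual scores sleep in 30 s epochs; the other values are left unspecified by the claim), the preprocessing chain of claim 5 could look like:

```python
import numpy as np
from scipy.signal import butter, decimate, sosfiltfilt

def preprocess(eeg, fs_in=500, fs_out=100, band=(0.3, 35.0), frame_sec=30):
    """Downsample, band-pass filter, and split into 30 s frames."""
    x = decimate(eeg, fs_in // fs_out)            # anti-aliased downsampling
    sos = butter(4, band, btype="bandpass", fs=fs_out, output="sos")
    x = sosfiltfilt(sos, x)                       # zero-phase band-pass filter
    n = fs_out * frame_sec
    # Drop the trailing partial frame and reshape into 30 s epochs.
    return x[: len(x) // n * n].reshape(-1, n)
```

Artifact removal (e.g. rejecting epochs with extreme amplitudes) would follow the filtering step; it is omitted here for brevity.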
6. An electroencephalogram interpretability analysis apparatus for a sleep stage, characterized by comprising:
the time-frequency transformation module is used for carrying out time-frequency transformation on the preprocessed electroencephalogram data to obtain a plurality of time-frequency segments;
the model output module is used for inputting the time-frequency segments into the trained sleep staging model to obtain a discrimination rule set corresponding to the electroencephalogram data and a sleep staging result;
the model output module is specifically configured to:
inputting the time-frequency segments into a trained feature extraction module to obtain corresponding first features;
inputting the first characteristics into a trained concept layer module to obtain discrimination information corresponding to a plurality of time-frequency segments, wherein each time-frequency segment comprises a first preset number of discrimination information;
processing the discrimination information based on the first activation function to obtain a discrimination rule set corresponding to the electroencephalogram data, wherein each time-frequency segment corresponds to one discrimination rule;
merging the discrimination information corresponding to the time-frequency segments into a first information group, and inputting the first information group into a trained classification module to obtain a second information group;
and processing the second information group based on the second activation function to obtain a sleep stage result corresponding to the electroencephalogram data.
7. The sleep stage oriented electroencephalogram interpretability analysis apparatus of claim 6, further comprising a training module for:
performing time-frequency transformation on the preprocessed sample electroencephalogram data to obtain a plurality of sample time-frequency segments;
screening out a second preset number of adjacent sample time-frequency segments, and inputting the second preset number of adjacent sample time-frequency segments into the sleep stage model to obtain a first loss function and a second loss function;
combining the first loss function and the second loss function to obtain a joint loss function;
adjusting parameters of the sleep staging model based on a joint loss function;
detecting whether the joint loss function is converged;
if the combined loss function is not converged, taking the screened new adjacent sample time-frequency segments with the second preset number as the adjacent sample time-frequency segments with the second preset number, and returning to execute the step of inputting the adjacent sample time-frequency segments with the second preset number into the sleep stage model to obtain a first loss function and a second loss function;
if the combined loss function is converged, taking the latest sleep staging model as a trained sleep staging model;
the step of inputting a second preset number of adjacent sample time-frequency segments into the sleep stage model to obtain a first loss function and a second loss function comprises:
inputting the time-frequency segments of the adjacent samples into a feature extraction module to obtain corresponding first sample features;
inputting the first sample characteristic into a concept layer module to obtain corresponding sample discrimination information;
processing the sample discrimination information based on the first activation function to obtain a discrimination rule predicted value corresponding to the sample electroencephalogram data;
obtaining a first loss function based on a discrimination rule real value and a discrimination rule predicted value corresponding to the sample electroencephalogram data;
merging the sample discrimination information into a first sample information group, and inputting the first sample information group into a classification module to obtain a second sample information group;
processing the second sample information group based on the second activation function to obtain a sleep stage predicted value corresponding to the sample electroencephalogram data;
and obtaining a second loss function based on a sleep stage real value and the sleep stage predicted value corresponding to the sample electroencephalogram data.
8. The sleep stage oriented electroencephalographic interpretability analyzing apparatus of claim 7, further comprising a model construction module for:
according to quantifiable medical concepts in the AASM rules, feature extraction is carried out on the adjacent sample time-frequency segments with the second preset number through a signal processing technology, and corresponding concept features are obtained;
discretizing the conceptual features by adopting an equal-frequency binning method based on the distribution frequency of the conceptual features to obtain corresponding discrete features;
carrying out rule screening on the discrete features by an all-relevant feature selection method based on Shapley values to obtain a discrimination rule set relevant to sleep stages, wherein the discrimination rule set comprises a first preset number of discrimination rules;
and constructing a concept function based on a self-attention mechanism for each discrimination rule, and fusing the concept function based on the self-attention mechanism into a concept layer module of a sleep stage model, wherein the concept layer module comprises a self-attention model layer and a linear layer.
9. A sleep stage oriented electroencephalogram interpretability analysis apparatus comprising a processor, a memory, and a sleep stage oriented electroencephalogram interpretability analysis program stored on the memory and executable by the processor, wherein the sleep stage oriented electroencephalogram interpretability analysis program when executed by the processor implements the steps of the sleep stage oriented electroencephalogram interpretability analysis method of any one of claims 1 to 5.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a sleep stage oriented electroencephalogram interpretability analysis program, wherein the sleep stage oriented electroencephalogram interpretability analysis program when executed by a processor implements the steps of the sleep stage oriented electroencephalogram interpretability analysis method according to any one of claims 1 to 5.
CN202210723823.1A 2022-06-23 2022-06-23 Sleep stage oriented electroencephalogram interpretability analysis method and related equipment Pending CN115137374A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210723823.1A CN115137374A (en) 2022-06-23 2022-06-23 Sleep stage oriented electroencephalogram interpretability analysis method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210723823.1A CN115137374A (en) 2022-06-23 2022-06-23 Sleep stage oriented electroencephalogram interpretability analysis method and related equipment

Publications (1)

Publication Number Publication Date
CN115137374A true CN115137374A (en) 2022-10-04

Family

ID=83408713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210723823.1A Pending CN115137374A (en) 2022-06-23 2022-06-23 Sleep stage oriented electroencephalogram interpretability analysis method and related equipment

Country Status (1)

Country Link
CN (1) CN115137374A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116525063A (en) * 2023-06-28 2023-08-01 安徽星辰智跃科技有限责任公司 Sleep periodicity detection and adjustment method, system and device based on time-frequency analysis
CN116525063B (en) * 2023-06-28 2024-03-22 安徽星辰智跃科技有限责任公司 Sleep periodicity detection and adjustment method, system and device based on time-frequency analysis
CN117077013A (en) * 2023-10-12 2023-11-17 之江实验室 Sleep spindle wave detection method, electronic equipment and medium
CN117077013B (en) * 2023-10-12 2024-03-26 之江实验室 Sleep spindle wave detection method, electronic equipment and medium

Similar Documents

Publication Publication Date Title
Siuly et al. Exploring Hermite transformation in brain signal analysis for the detection of epileptic seizure
Sharma et al. Seizures classification based on higher order statistics and deep neural network
Zandi et al. Automated real-time epileptic seizure detection in scalp EEG recordings using an algorithm based on wavelet packet transform
Acharya et al. Automated diagnosis of epileptic EEG using entropies
Yildiz et al. Application of adaptive neuro-fuzzy inference system for vigilance level estimation by using wavelet-entropy feature extraction
Tuncer et al. Classification of epileptic seizures from electroencephalogram (EEG) data using bidirectional short-term memory (Bi-LSTM) network architecture
CN115137374A (en) Sleep stage oriented electroencephalogram interpretability analysis method and related equipment
CN112244873A (en) Electroencephalogram time-space feature learning and emotion classification method based on hybrid neural network
CN112043252B (en) Emotion recognition system and method based on respiratory component in pulse signal
Atal et al. A hybrid feature extraction and machine learning approaches for epileptic seizure detection
Yazid et al. Simple detection of epilepsy from EEG signal using local binary pattern transition histogram
Asghar et al. Semi-skipping layered gated unit and efficient network: hybrid deep feature selection method for edge computing in EEG-based emotion classification
Sabor et al. Detection of the interictal epileptic discharges based on wavelet bispectrum interaction and recurrent neural network
Santoso et al. Epileptic EEG signal classification using convolutional neural network based on multi-segment of EEG signal
Tigga et al. Efficacy of novel attention-based gated recurrent units transformer for depression detection using electroencephalogram signals
Malviya et al. CIS feature selection based dynamic ensemble selection model for human stress detection from EEG signals
Wang et al. A particle swarm algorithm optimization‐based SVM–KNN algorithm for epileptic EEG recognition
Zhou et al. A novel real-time EEG based eye state recognition system
Gao et al. A self-interpretable deep learning model for seizure prediction using a multi-scale prototypical part network
Srinivasan et al. A novel approach to schizophrenia Detection: Optimized preprocessing and deep learning analysis of multichannel EEG data
Xu et al. Decode brain system: A dynamic adaptive convolutional quorum voting approach for variable-length EEG data
CN115293210A (en) Instruction prediction output control method based on brain waves
Gharbali et al. Transfer learning of spectrogram image for automatic sleep stage classification
Tian et al. EEG Epileptic Seizure Classification Using Hybrid Time-Frequency Attention Deep Network
Wang et al. Epileptic Seizures Prediction Based on Unsupervised Learning for Feature Extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination