CN117158912B - Sleep stage detection system based on graph attention mechanism and space-time graph convolution - Google Patents


Info

Publication number: CN117158912B
Application number: CN202311445079.4A (filed by Beijing Institute of Technology BIT)
Authority: CN (China)
Prior art keywords: module, stage, sleep, gatv2, graph
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other versions: CN117158912A
Other languages: Chinese (zh)
Inventors: 叶建宏, 胡祎东, 史文彬
Current and original assignee: Beijing Institute of Technology BIT

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a sleep stage detection system based on a graph attention mechanism and space-time graph convolution, belonging to the technical field of sleep health monitoring. Building on representation learning and a graph convolution mechanism, the invention introduces a non-spectral-domain graph convolution method into sleep staging, provides a T-GATv2 module for capturing the intermediate stage between adjacent sleep stages, and provides a correction function for representing the variation trend between the features of different stages, obtaining better results on the sleep staging task. The characterization learning module enables the model to adaptively extract features from data without prior knowledge, and a model structure matched with the data acquisition channels ensures the rationality and interpretability of the model, so that a better staging effect can be obtained in the sleep staging task.

Description

Sleep stage detection system based on graph attention mechanism and space-time graph convolution
Technical Field
The invention belongs to the technical field of sleep health monitoring, and particularly relates to a sleep stage detection system based on a graph attention mechanism and a space-time graph convolution.
Background
Sleep is a basic regulator of life, and multiple diseases (such as hypertension, arrhythmia, stroke, and ischemia) are closely related to sleep adequacy. Sleep stage classification is the key to assessing sleep quality. Two common sleep-scoring standards are the Rechtschaffen and Kales (R&K) criteria and the American Academy of Sleep Medicine (AASM) criteria. Polysomnography (PSG) records many signal types, including electrooculography (EOG), electroencephalography (EEG), electromyography (EMG), and electrocardiography (ECG), from which sleep professionals identify sleep stages. Considering the time cost of manual visual sleep scoring and the lack of consistency between different experts' classifications, many studies focus on efficient automatic sleep stage classifiers.
In early work, feature extraction for traditional machine learning or deep learning methods was mainly based on time-domain, frequency-domain, time-frequency-domain, or nonlinear features, and various feature selection strategies were proposed. However, the performance of this type of method depends mainly on data preprocessing and feature engineering, and requires a great deal of expert prior knowledge.
Supported by strong characterization capability, a class of methods that extract features by means of deep learning (i.e., representation learning) has been proposed and successfully applied in multiple fields, to overcome the insufficient generalization caused by heterogeneity across disciplines in traditional feature engineering. For sleep staging tasks, convolutional neural networks (CNNs) with various layer counts and parameters are widely used to extract multi-scale features from data, achieving excellent classification performance (about 87% accuracy on the MASS sleep EEG data set).
While considerable accuracy can be achieved by combining CNN-based representation learning with recurrent neural networks (RNNs) or other conventional deep learning layers (e.g., autoencoders), such models are limited to effective latent representation of Euclidean data (e.g., video, text, and images), that is, data with a grid structure suited to CNNs, and cannot characterize non-Euclidean data. Conversely, the graph data structure is better suited to capturing the hidden patterns and connections of non-Euclidean spaces (e.g., connections between brain regions); for graph neural networks (GNNs), each electroencephalogram signal channel corresponds to a node of the graph structure.
Currently, graph convolution methods are largely divided into non-spectral-domain and spectral-domain methods. Early spectral-domain convolution is based on computing the eigendecomposition of the graph Laplacian and performing convolution operations in the Fourier domain; although several simplified approximations have been proposed, these still carry a large computational overhead. Another limitation of the spectral-domain approach is that it relies heavily on the Laplacian eigenbasis determined by the graph structure, meaning that a model trained on one particular structure may not be directly usable for other graph structures. In contrast, the non-spectral-domain method defines convolutions directly on the graph through operations on a set of spatially adjacent nodes, and therefore has greater flexibility and intuitiveness and lower computational cost than the spectral-domain method. Recently, some studies have used graph convolution for sleep staging and achieved high accuracy (up to 88.9% on the MASS sleep EEG data set and up to 91% on the ISRUC-SLEEP data set), but all use graph convolution based on spectral-domain methods.
Disclosure of Invention
Therefore, the invention aims to provide a sleep stage detection system based on a graph attention mechanism and space-time graph convolution, which can improve the accuracy of sleep staging.
The sleep stage detection system based on a graph attention mechanism and space-time graph convolution is characterized by comprising a data preprocessing module, a characterization learning module, an ST-GATv2 module and a stage identification module;
the data preprocessing module is used for preprocessing data:
using a directed graph G = (V, E) to represent the connection between different brain electrode channels; wherein V = {v_1, v_2, ..., v_C} represents the set of electroencephalogram electrode channels, v_c represents the c-th electroencephalogram electrode channel in the set V, and C represents the total number of electroencephalogram electrode channels; S^c = {s_1, s_2, ..., s_N} represents the electroencephalogram data sequence output by the c-th electroencephalogram electrode channel, wherein N is the total number of sleep slices of the sequence output by each channel, s_n represents the n-th sleep slice, and L is the sequence length of s_n;
a sliding window with window length T and sliding step 1 is applied to the sequence S^c, so that the current stage s_n in the electroencephalogram data sequence is combined with the adjacent next stage s_{n+1} and the last stage s_{n-1} to obtain a new sleep stage s'_{n'}; the new signal sequence set after sliding-window processing is denoted S'^c = {s'_1, s'_2, ..., s'_{N'}}; wherein T = 2d+1, d > 0;
The characterization learning module adopts six 1D-CNN layers, with a Relu activation function layer as the main body, to extract features from the preprocessed data;
the ST-GATv2 module comprises an S-GATv2 sub-module and a T-GATv2 sub-module;
the input of the S-GATv2 sub-module is the feature set extracted by the characterization learning module, set as F = {f_1, f_2, ..., f_c, ...}, where f_c represents the feature of the c-th electroencephalogram electrode channel; the attention coefficient e_ij between the i-th and j-th electroencephalogram electrode channels is denoted e_ij = α^T LeakyRelu(W·[f_i || f_j]), wherein W and α are trainable parameters, || represents the splicing operation, and LeakyRelu represents an activation function;
the connection relation α_ij between the i-th and j-th electroencephalogram electrode channels is expressed as:

α_ij = softmax_j(e_ij) = exp(e_ij) / Σ_{k∈χ_i} exp(e_ik)    (1)

wherein χ_i represents the set of adjacent nodes of the i-th electroencephalogram electrode channel; the output feature of each node, with a sigmoid function, is expressed as:

f'_i = σ( Σ_{j∈χ_i} α_ij·W·f̂_j )    (2)

wherein σ(·) represents the activation function and f̂_j represents the feature sequence corresponding to feature f_j;
the final output feature f'_i is obtained by splicing or averaging, as shown in formulas (3) and (4):

f'_i = ∥_{h=1}^{H} σ( Σ_{j∈χ_i} α_ij^h·W^h·f̂_j^h )    (3)

f'_i = σ( (1/H) Σ_{h=1}^{H} Σ_{j∈χ_i} α_ij^h·W^h·f̂_j^h )    (4)

wherein H represents the number of independent attention mechanisms performed by formula (2) under different initialization parameters; α_ij^h represents the connection relation of the h-th independent attention output, and f̂_j^h represents the feature sequence of the h-th independent attention output;
the T-GATv2 sub-module is used for: calculating e^t_ij to represent the attention coefficient between the feature f_i^{t_n} of the i-th electroencephalogram electrode channel at the current stage t_n and the feature f_j^{t_{n-1}} on the j-th electroencephalogram electrode channel at the last stage t_{n-1}, as shown in formula (5):

e^t_ij = α^T LeakyRelu( W·[ f_i^{t_n} || f_j^{t_{n-1}} ] )    (5)

the feature sequence f̃_i of the intermediate stage between the two adjacent stages on the i-th electroencephalogram electrode channel is obtained according to:

f̃_i = σ( Σ_{j∈χ_i} β_ij·W·f̂_j^{t_{n-1}} )    (6)

wherein β_ij = softmax_j(e^t_ij) and f̂_j^{t_{n-1}} represents the feature sequence corresponding to feature f_j^{t_{n-1}};
the feature sequences f̃_i^h of the attention outputs at the H individual heads are averaged to give the output f̃_i:

f̃_i = (1/H) Σ_{h=1}^{H} f̃_i^h    (7)

after the intermediate-stage features f̃ are obtained, the S-GATv2 sub-module calculates the feature sequence of each electroencephalogram electrode channel in the intermediate stage using formulas (1)-(4);
the stage identification module is used for: fusing the extracted features by dimension change (reshaping) according to the feature sequences of the brain electrode channels at each stage output by the ST-GATv2 module, matching the output dimension with the sleep-type dimension by dimension compression and linear transformation, and taking the maximum value as the final sleep stage classification result.
Further, the system also comprises a correction function module; the output of the ST-GATv2 module is used as the input of the correction function module, and corrected data are input to the stage identification module for sleep stage identification;
the correction function module corrects the features of the intermediate stage according to formulas (8), (9) and (10):

M_ij = f_i^{t_n} - f_j^{t_{n-1}}    (8)

m_i = Σ_{j∈χ_i} α_ij·M_ij    (9)

func(i) = tanh( W_i·m_i )    (10)

wherein m_i is part of func(i), M_ij represents the difference between t_n and t_{n-1}, W_i is a trainable parameter, and tanh is an activation function for compressing the value range to [-1, 1]; the final corrected feature representation is shown in formula (11):

f̃'_i = f̃_i + (1/H) Σ_{h=1}^{H} func^h(i)    (11)

wherein func^h(i) represents the multi-head version of func(i).
Further, the system also comprises a selective kernel layer module, which takes the output of the ST-GATv2 module or the output of the correction function module as its input and fuses the output results of different kernel sizes in the multi-branch input with a soft attention mechanism.
Preferably, the data preprocessing module removes the electroencephalogram signals marked as unknown phases in the sleep start phase and the sleep end phase.
Preferably, the data preprocessing module resamples the channel record to eliminate the influence of sampling frequency difference.
Preferably, the characterization learning module uses a Maxpool layer to screen the extracted features after each group of two 1D-CNN layers is executed, i.e., downscaling; the Relu activation is defined as Relu(x) = max(0, x), where x is the input.
The invention has the following beneficial effects:
the invention provides a sleep stage detection system based on graph annotating force mechanism and space-time graph convolution, which introduces a non-spectral domain graph convolution method into sleep stage based on representation learning and graph convolution mechanism, and simultaneously provides a T-GATv2 module for capturing intermediate stages between adjacent sleep stages, and simultaneously provides a correction function for representing variation trend among different stage characteristics, and a better result (89.0%) is obtained in sleep stage task, wherein the representation learning module can enable a model to adaptively extract characteristics from data without priori knowledge (the priori knowledge often has limitation and cannot obtain abstract characteristics), and meanwhile, a model structure (graph structure) matched with a data acquisition channel is adopted to ensure rationality and interpretability of the model, so that a better stage effect can be obtained in sleep stage task.
Drawings
FIG. 1 is a system workflow diagram of the present invention;
FIG. 2 is a schematic diagram of a sliding window segmentation according to the present invention;
FIG. 3 shows the characterization learning module network structure and parameters of the present invention;
FIG. 4 is a schematic diagram of a multi-head S-GATv2 aggregation operation of the present invention;
FIG. 5 is an intermediate stage acquisition process of the present invention;
FIG. 6 is a schematic diagram of a T-GATv2 flow scheme in accordance with the present invention;
FIG. 7 is a schematic diagram of the fusion process of the present invention.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
In order to improve the accuracy of automatic sleep staging models and verify the effectiveness of non-spectral-domain graph convolution in the sleep field, the invention provides a sleep stage detection system based on a graph attention mechanism and space-time graph convolution, which achieves high-accuracy sleep staging on the SS3 data set of MASS. The system block diagram, shown in FIG. 1, comprises: a data preprocessing module, a characterization learning module, an ST-GATv2 module, and a stage identification module.
1. Data preprocessing module
1) First, the electroencephalogram signals labeled as unknown stages at the sleep start and end are removed. Second, the channel records are resampled to eliminate the influence of sampling frequency differences; for MASS, the sampling frequencies of all channels (128 Hz to 256 Hz) are unified to 256 Hz.
2) The preprocessed electroencephalogram signals are then organized by dimension. The sleep staging network is defined as a directed graph G = (V, E) representing the connections between the different electrodes, where V (the vertices of the graph) and E (the edges of the graph) represent the electroencephalogram electrodes and the connection states between adjacent electrodes, respectively. An adjacency matrix A is used to represent the connections between different electrodes, where V = {v_1, v_2, ..., v_C} represents the set of electrodes, v_c represents the c-th electrode channel in the set V, and C represents the total number of electrode channels. S^c = {s_1, s_2, ..., s_N} represents the electroencephalogram data sequence output by the c-th electrode channel (divided into sleep epochs of 30 seconds), where N is the total number of sleep epochs of the sequence output by each channel, s_n represents the n-th sleep epoch, and L is the sequence length of s_n.
Taking into account the continuity of sleep dynamics, each electroencephalogram data sequence is segmented using a sliding window, as shown in FIG. 2.
A sliding window with length T (T = 2d + 1, d > 0) and sliding step 1 is applied to the sequence S^c. After the sliding-window processing, the new signal sequence set is described by S'^c = {s'_1, s'_2, ..., s'_{N'}}, where the number of sleep stages N' = N - T + 1 decreases as T increases. To ensure a sufficient number of sleep stages after the sliding-window processing, we set T to 3 (d = 1), so that the current stage s_n in the electroencephalogram data sequence is combined with the adjacent next stage s_{n+1} and the last stage s_{n-1} to obtain the new sleep stage s'_{n'}; the current stage is the target stage.
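As a concrete illustration, the sliding-window grouping above can be sketched in NumPy as follows (the function and variable names are hypothetical; the patent itself provides no code):

```python
import numpy as np

def sliding_window_stages(epochs, T=3):
    """Group each 30-s sleep epoch with its neighbours via a length-T,
    step-1 sliding window (T = 2d + 1, here d = 1).

    epochs: array of shape (N, L), N sleep epochs of length L for one channel.
    Returns shape (N - T + 1, T, L): each new stage s'_n' stacks
    [last, current, next] epochs for T = 3.
    """
    assert T % 2 == 1 and T >= 3, "window length must be T = 2d + 1, d > 0"
    N = epochs.shape[0]
    return np.stack([epochs[n:n + T] for n in range(N - T + 1)])

# 10 toy epochs of length 4 -> 8 grouped stages
epochs = np.arange(40, dtype=float).reshape(10, 4)
windows = sliding_window_stages(epochs, T=3)
print(windows.shape)  # (8, 3, 4)
```

The middle slice of each window (`windows[k, 1]`) is the target stage whose label the classifier predicts.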
2. Characterization learning module
The characterization learning module adopts six 1D-CNN layers, with the Relu activation function layer as the main body, to extract features from the preprocessed data. After each group of two 1D-CNN layers is executed, a Maxpool layer is used to screen the extracted features, i.e., downscaling, removing redundant information and reducing computational cost. The Relu activation is defined as Relu(x) = max(0, x), where x is the input. The structure and parameters of the proposed CNN-based characterization learning network are shown in FIG. 3.
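The conv, Relu, and Maxpool stages of this pipeline can be illustrated with a minimal NumPy sketch (the toy kernels and sizes below are assumptions; the actual layer parameters are those of FIG. 3):

```python
import numpy as np

def relu(x):
    # Relu(x) = max(0, x)
    return np.maximum(0.0, x)

def conv1d(x, w):
    """Valid 1-D convolution (correlation) of signal x with kernel w."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def maxpool1d(x, size=2):
    """Non-overlapping max pooling: keep the largest value per window."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# One group of two conv+Relu layers followed by a Maxpool layer
x = np.array([0.0, 1.0, -1.0, 2.0, 0.5, -0.5, 1.5, 0.0])
w1 = np.array([1.0, -1.0])   # hypothetical learned kernels
w2 = np.array([0.5, 0.5])
h = relu(conv1d(relu(conv1d(x, w1)), w2))
y = maxpool1d(h, size=2)
print(y)  # [1.   1.25 0.75]
```

Stacking three such groups gives the six 1D-CNN layers of the module, each Maxpool halving the sequence length.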
3. The ST-GATv2 module includes an S-GATv2 sub-module and a T-GATv2 sub-module. To distinguish the conventional GATv2 from the timing GATv2 (T-GATv 2) proposed by the present invention, we rename the conventional GATv2 to S-GATv2 for highlighting its aggregation capability to graph space nodes.
1) S-GATv2 sub-module: its input is the feature set extracted by the characterization learning module, F = {f_1, f_2, ..., f_c, ...}, where f_c represents the feature of the c-th electrode channel; the module output is defined as F' = {f'_1, f'_2, ..., f'_c, ...}, where f'_c is an output feature (generally of a different dimension than the input feature). The attention coefficient e_ij between the i-th channel v_i and the j-th channel v_j is denoted e_ij = α^T LeakyRelu(W·[f_i || f_j]) (the main difference between the GAT and GATv2 networks), where W and α are trainable parameters and || represents the splicing operation.
To facilitate comparison of attention coefficients between different channels, the softmax function normalizes e_ij into the connection relation α_ij, defined as formula (1):

α_ij = softmax_j(e_ij) = exp(e_ij) / Σ_{k∈χ_i} exp(e_ik)    (1)

where LeakyRelu represents the activation function, α_ij represents the relationship between the i-th and j-th electrode channels, and χ_i represents the set of adjacent nodes of the i-th electrode channel. After the adjacency relations are obtained, the output feature of each node, with a sigmoid function, can be expressed as formula (2):
f'_i = σ( Σ_{j∈χ_i} α_ij·W·f̂_j )    (2)

where σ(·) represents the activation function and f̂_j represents the feature sequence corresponding to f_j;
to stabilize the self-attention mechanism learning process, a multi-head attention method is introduced. After defining the multi-headed attentiveness mechanism, a final output characteristic is obtained using a stitching or averaging operation, as shown in equations (3) and (4):
(3)
(4)
wherein H represents the number of independent attention mechanisms performed by equation (2) under different initialization parameters;represents the h independent attention output +.>Is a characteristic sequence of (2);/>represents the h independent attention output +.>Is a characteristic sequence of (2);a trainable parameter representing the h independent attention.
Formula (3) or (4) is used to concatenate or average these independent results to obtain the final output. The aggregation operation of multi-head S-GATv2 is shown in FIG. 4.
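A minimal NumPy sketch of one S-GATv2 head implementing formulas (1)-(2), with head averaging in the style of formula (4), may look as follows (the separate value matrix `Wv` and all shapes are illustrative assumptions, not the patent's exact parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def s_gatv2_head(F, adj, W, a, Wv):
    """One S-GATv2 attention head (sketch of formulas (1)-(2)).

    F:   (C, d_in) features, one row per electrode channel
    adj: (C, C) boolean adjacency; adj[i] marks the neighbour set chi_i
    W:   (d_att, 2*d_in) and a: (d_att,) score parameters (GATv2 order:
         a is applied AFTER the LeakyRelu)
    Wv:  (d_out, d_in) value transform (an assumed separate matrix)
    """
    C = F.shape[0]
    out = np.zeros((C, Wv.shape[0]))
    for i in range(C):
        nbrs = np.flatnonzero(adj[i])
        e = np.array([a @ leaky_relu(W @ np.concatenate([F[i], F[j]]))
                      for j in nbrs])           # e_ij
        alpha = softmax(e)                      # formula (1)
        out[i] = sigmoid(sum(al * (Wv @ F[j]) for al, j in zip(alpha, nbrs)))
    return out

# 4 channels in a ring, 3-dim features, 3 heads averaged (formula (4) style)
C, d_in, d_att, d_out, H = 4, 3, 5, 2, 3
F = rng.normal(size=(C, d_in))
adj = (np.roll(np.eye(C, dtype=bool), 1, axis=1)
       | np.roll(np.eye(C, dtype=bool), -1, axis=1))
heads = [s_gatv2_head(F, adj, rng.normal(size=(d_att, 2 * d_in)),
                      rng.normal(size=d_att), rng.normal(size=(d_out, d_in)))
         for _ in range(H)]
out = np.mean(heads, axis=0)
print(out.shape)  # (4, 2)
```

Concatenating `heads` along the feature axis instead of averaging would give the formula (3) variant.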
2) T-GATv2 sub-module: inspired by the S-GATv2 structure, the invention introduces the attention mechanism and the multi-head idea into adjacent sleep sequences/stages, thereby constructing the T-GATv2 sub-module. To represent the sleep stage transition process, the concept of an intermediate stage (extracted by T-GATv2 from two temporally adjacent stages) is introduced. The timing attention coefficient e^t_ij (the timing version of e_ij) is defined to represent the relation between the feature f_i^{t_n} at node i in the current stage t_n and the feature f_j^{t_{n-1}} at node j in the last stage t_{n-1}, as shown in formula (5):

e^t_ij = α^T LeakyRelu( W·[ f_i^{t_n} || f_j^{t_{n-1}} ] )    (5)
In view of formulas (2) and (4), as shown in FIG. 5, the proposed T-GATv2 module obtains the feature sequence f̃_i of the intermediate stage between two adjacent stages at node i:

f̃_i = σ( Σ_{j∈χ_i} β_ij·W·f̂_j^{t_{n-1}} )    (6)

where β_ij = softmax_j(e^t_ij) and f̂_j^{t_{n-1}} represents the feature sequence corresponding to f_j^{t_{n-1}};
the feature sequences f̃_i^h output by the H attention heads are averaged to give the multi-head output f̃_i:

f̃_i = (1/H) Σ_{h=1}^{H} f̃_i^h    (7)
The main flow of the T-GATv2 module uses formula (5) to calculate the normalized attention coefficients between the current stage and the last stage over the adjacent nodes of the target node, and then uses formulas (6) and (7) to aggregate the last-stage node features and obtain the intermediate-stage feature of the target node. As shown in FIG. 6, the T-GATv2 module extends the original three-stage set into a five-stage set. The two transitions among the last, current, and next stages share the same trainable parameters W, so the two intermediate stages are obtained in the same way.
After the intermediate-stage features f̃ are obtained, the S-GATv2 sub-module calculates the node features of the intermediate stage using formulas (1)-(4).
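The intermediate-stage computation of formulas (5)-(7) can be sketched as follows (again with assumed shapes and a separate value matrix `Wv`; this is an illustration, not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def t_gatv2_intermediate(F_cur, F_prev, adj, heads):
    """Intermediate-stage features between two adjacent stages
    (sketch of formulas (5)-(7)); heads = list of (W, a, Wv) triples."""
    C = F_cur.shape[0]
    per_head = []
    for W, a, Wv in heads:
        out = np.zeros((C, Wv.shape[0]))
        for i in range(C):
            nbrs = np.flatnonzero(adj[i])
            # formula (5): score current-stage f_i against last-stage f_j
            e = np.array([a @ leaky_relu(W @ np.concatenate([F_cur[i], F_prev[j]]))
                          for j in nbrs])
            beta = softmax(e)
            agg = sum(b * (Wv @ F_prev[j]) for b, j in zip(beta, nbrs))
            out[i] = 1.0 / (1.0 + np.exp(-agg))     # sigmoid, formula (6)
        per_head.append(out)
    return np.mean(per_head, axis=0)                # formula (7): head average

C, d, H = 4, 3, 2
F_cur, F_prev = rng.normal(size=(C, d)), rng.normal(size=(C, d))
adj = ~np.eye(C, dtype=bool)                        # fully connected, no self-loop
heads = [(rng.normal(size=(5, 2 * d)), rng.normal(size=5),
          rng.normal(size=(d, d))) for _ in range(H)]
F_mid = t_gatv2_intermediate(F_cur, F_prev, adj, heads)
print(F_mid.shape)  # (4, 3)
```

Running this once per adjacent pair (last/current and current/next) yields the two intermediate stages of the five-stage set.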
4. Stage identification module:
According to the feature sequences of the brain electrode channels at each stage output by the ST-GATv2 module, the extracted features are fused by dimension change (reshaping), the output dimension is matched with the sleep-type dimension by dimension compression and linear transformation, and the maximum value is taken as the final sleep stage classification result.
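The reshape, linear map, and argmax steps above can be sketched as follows (names such as `W_cls` are hypothetical):

```python
import numpy as np

def stage_identify(features, W_cls):
    """Reshape the fused channel features into one vector, map it to the
    sleep-type dimension with a linear transform, take the argmax.

    features: (C, d) per-channel features; W_cls: (n_classes, C*d), an
    assumed trainable matrix. Sleep classes could be e.g. W, N1, N2, N3, REM.
    """
    flat = features.reshape(-1)        # dimension-change fusion
    scores = W_cls @ flat              # dimension compression + linear change
    return int(np.argmax(scores))      # maximum value = predicted stage index

features = np.array([[1.0, 2.0], [3.0, 4.0]])     # flat = [1, 2, 3, 4]
W_cls = np.array([[0.0, 0.0, 0.0, 0.0],
                  [1.0, 1.0, 1.0, 1.0],
                  [0.1, 0.1, 0.1, 0.1]])
print(stage_identify(features, W_cls))  # 1  (scores [0, 10, 1])
```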
Furthermore, the invention also provides a correction function module. The output of the ST-GATv2 module serves as the input of the correction function module, and the corrected data are input to the stage identification module for sleep stage identification. Although the T-GATv2 module obtains time-level information of neighboring nodes, it cannot capture the hidden information in the differences between adjacent stages (which reflects the trend to some extent). To compensate for this, the correction function module corrects the features of the intermediate stage, according to formulas (8), (9) and (10):
M_ij = f_i^{t_n} - f_j^{t_{n-1}}    (8)

m_i = Σ_{j∈χ_i} α_ij·M_ij    (9)

func(i) = tanh( W_i·m_i )    (10)

where m_i is part of func(i), M_ij represents the difference between t_n and t_{n-1}, W_i is a trainable parameter, and tanh is an activation function aimed at compressing the value range to [-1, 1] (negative and positive values correspond to falling and rising trends, respectively). Finally, the collective output representation of T-GATv2 with the correction function is shown in formula (11):

f̃'_i = f̃_i + (1/H) Σ_{h=1}^{H} func^h(i)    (11)

where func^h(i) represents the multi-head version of func(i).
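Since the original formulas (8)-(10) are rendered as images in this extraction, the following NumPy sketch only follows the described ingredients (a stage difference M, a trainable scale `Wi`, and tanh squashing to [-1, 1]); it is a plausible reading, not the patent's exact formula:

```python
import numpy as np

def correction(F_cur, F_prev, F_mid, Wi):
    """Trend correction of the intermediate-stage features.

    A plausible reading of formulas (8)-(11): the difference M between the
    current-stage and last-stage features is scaled by a trainable Wi and
    squashed by tanh into [-1, 1] (negative = falling, positive = rising),
    then added to the intermediate-stage features.
    """
    M = F_cur - F_prev            # (8): difference between stages t_n, t_{n-1}
    func = np.tanh(Wi * M)        # (10): compress the trend to [-1, 1]
    return F_mid + func           # (11): corrected intermediate features

F_cur = np.array([[1.0, 0.5], [0.0, 2.0]])
F_prev = np.array([[0.5, 1.0], [1.0, 1.0]])
F_mid = np.zeros((2, 2))
out = correction(F_cur, F_prev, F_mid, Wi=1.0)
print(out)
```

With `F_mid` set to zero, the output is just the tanh-squashed trend, so its sign directly shows rising versus falling features.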
Further, the invention also provides a selective kernel layer module: the output of the ST-GATv2 module or the output of the correction function module serves as the input of this module, which, as proposed by Li et al., fuses the output results of different kernel sizes in a multi-branch input by means of a soft attention mechanism. Since the kernel sizes for the different stages (last stage, current stage, next stage, and intermediate stages) differ, we employ four kernel branches of different sizes for fusion. The selection-and-fusion process of the four branches is shown in FIG. 7:
Given an input data feature map X ∈ R^{C×H×W}, where C represents the number of feature maps and H and W represent their height and width, four copies of the selected-kernel-layer input data serve as the inputs of the four branches with different kernel sizes; each branch independently processes its input to obtain U_1, U_2, U_3 and U_4. The fused map U is obtained by adding the corresponding position elements of these matrices. In the fusion and linear mapping section, a global average pooling operation is used to fuse the global information into a vector g; the c-th element of g is represented as shown in formula (12):

g_c = (1/(H×W)) Σ_{p=1}^{H} Σ_{q=1}^{W} U_c(p, q)    (12)
After global average pooling, the compact feature z is obtained by formula (13), which provides weight guidance for the subsequent selection parameters:

z = Relu( BN( W_z·g ) )    (13)

where Relu is the activation function, BN(·) represents a batch normalization operation, and W_z is a trainable parameter. The above procedure covers the fusion and linear mapping operations.
In the selection step, the soft-attention weight of each channel is computed with softmax, as shown in formula (14):

d_c = exp(D_c·z) / ( exp(D_c·z) + exp(E_c·z) + exp(F_c·z) + exp(G_c·z) )    (14)

where D, E, F and G are the trainable parameters that map z to the output, D_c denotes the c-th row of D, and e_c, f_c and g_c are defined analogously with E_c, F_c and G_c in the numerator; d, e, f and g are the soft-attention vectors of U_1, U_2, U_3 and U_4, respectively, and d_c + e_c + f_c + g_c = 1. The c-th element of the final output V of the selected kernel layer is given by formula (15):

V_c = d_c·U_{1,c} + e_c·U_{2,c} + f_c·U_{3,c} + g_c·U_{4,c}    (15)
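The four-branch selective-kernel fusion of formulas (12)-(15) can be sketched as follows (the BN and linear map of formula (13) are simplified to a plain Relu, and `sel_params` stands in for the trainable D, E, F, G):

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_kernel(branches, sel_params):
    """Selective-kernel fusion of four branch outputs (sketch of (12)-(15)).

    branches:   list of four arrays U_1..U_4, each of shape (C, H, W)
    sel_params: (4, C, C) array standing in for the trainable D, E, F, G
                that map the compact feature z to per-branch channel weights.
    """
    U = np.sum(branches, axis=0)                 # element-wise fusion
    g = U.mean(axis=(1, 2))                      # (12): global average pooling
    z = np.maximum(0.0, g)                       # (13), simplified (no BN/linear)
    logits = sel_params @ z                      # rows D_c z, E_c z, F_c z, G_c z
    attn = softmax(logits, axis=0)               # (14): soft attention d, e, f, g
    V = sum(attn[k][:, None, None] * branches[k] for k in range(4))   # (15)
    return V, attn

C, Hh, Ww = 2, 3, 3
branches = [rng.normal(size=(C, Hh, Ww)) for _ in range(4)]
sel_params = rng.normal(size=(4, C, C))
V, attn = select_kernel(branches, sel_params)
print(V.shape, np.allclose(attn.sum(axis=0), 1.0))  # (2, 3, 3) True
```

The softmax over the branch axis guarantees that, per channel, the four attention weights sum to 1, matching the constraint after formula (14).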
The invention has been verified on the SS3 data set of the MASS database; compared with the baseline models it achieves higher accuracy, which fully illustrates its potential in sleep staging. The results are shown in Table 1 below.
Table 1 comparison of the present invention with baseline model.
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. The sleep stage detection system based on the graph attention mechanism and the space-time graph convolution is characterized by comprising a data preprocessing module, a characterization learning module, an ST-GATv2 module and a stage identification module;
the data preprocessing module is used for preprocessing data:
using a directed graph G = (V, E) to represent the connection between different brain electrode channels; wherein V = {v_1, v_2, ..., v_C} represents a set of electroencephalogram electrode channels, v_c represents the c-th electroencephalogram electrode channel in the set V, and C represents the total number of electroencephalogram electrode channels;
S^c = {s_1, s_2, ..., s_N} represents the electroencephalogram data sequence output by the c-th electroencephalogram electrode channel, wherein N is the total number of sleep slices of the electroencephalogram data sequence output by each channel, s_n represents the n-th sleep slice, and L is the sequence length of s_n;
a sliding window with window length T and sliding step 1 is applied to the sequence S^c, so that the current stage s_n in the electroencephalogram data sequence is combined with the adjacent next stage s_{n+1} and the last stage s_{n-1} to obtain a new sleep stage s'_{n'}; the new signal sequence set after sliding-window processing is denoted S'^c = {s'_1, s'_2, ..., s'_{N'}}, wherein T = 2d+1, d > 0;
The characterization learning module adopts 6 1D-CNNs taking a Relu activation function layer as a main body and is used for extracting characteristics of the preprocessed data;
the ST-GATv2 module comprises an S-GATv2 sub-module and a T-GATv2 sub-module;
the input of the S-GATv2 sub-module is the feature extracted by the characterization learning module, set as F = {f_1, f_2, ..., f_c, ...}, where f_c represents the feature of the c-th electroencephalogram electrode channel; the attention coefficient e_ij between the i-th and j-th electroencephalogram electrode channels is denoted e_ij = α^T LeakyRelu(W·[f_i || f_j]), wherein W and α are trainable parameters, || represents a splicing operation, and LeakyRelu denotes an activation function;
the connection relation α_ij between the i-th and j-th electroencephalogram electrode channels is expressed as:

α_ij = exp(e_ij) / Σ_{k∈χ_i} exp(e_ik)    (1)

wherein χ_i represents the set of adjacent nodes of the i-th electroencephalogram electrode channel; the output feature of each node, with a sigmoid function, is expressed as:

f'_i = σ( Σ_{j∈χ_i} α_ij·W·f̂_j )    (2)

wherein σ(·) represents the activation function and f̂_j represents the feature sequence corresponding to feature f_j;
the final output feature f'_i is obtained by splicing or averaging, as shown in formulas (3) and (4):

f'_i = ∥_{h=1}^{H} σ( Σ_{j∈χ_i} α_ij^h·W^h·f̂_j^h )    (3)

f'_i = σ( (1/H) Σ_{h=1}^{H} Σ_{j∈χ_i} α_ij^h·W^h·f̂_j^h )    (4)

wherein H represents the number of independent attention mechanisms performed by formula (2) under different initialization parameters; α_ij^h represents the connection relation of the h-th independent attention output and f̂_j^h represents its feature sequence;
the T-GATv2 sub-module is used for: calculating e^t_ij to represent the connection relation between the feature f_i^{t_n} of the i-th electroencephalogram electrode channel at the current stage t_n and the feature f_j^{t_{n-1}} on the j-th electroencephalogram electrode channel at the last stage t_{n-1}, as shown in formula (5):

e^t_ij = α^T LeakyRelu( W·[ f_i^{t_n} || f_j^{t_{n-1}} ] )    (5)

the feature sequence f̃_i of the intermediate stage between the two adjacent stages on the i-th electroencephalogram electrode channel is obtained according to:

f̃_i = σ( Σ_{j∈χ_i} β_ij·W·f̂_j^{t_{n-1}} )    (6)

wherein β_ij = softmax_j(e^t_ij) and f̂_j^{t_{n-1}} represents the feature sequence corresponding to feature f_j^{t_{n-1}};
the feature sequences f̃_i^h of the attention outputs at the H individual heads are averaged to give the output f̃_i:

f̃_i = (1/H) Σ_{h=1}^{H} f̃_i^h    (7)
after the intermediate-stage features f̃ are obtained, the S-GATv2 sub-module calculates the feature sequence of each electroencephalogram electrode channel in the intermediate stage using formulas (1)-(4);
the stage identification module is used for: fusing the extracted features by dimension change (reshaping) according to the feature sequences of the brain electrode channels at each stage output by the ST-GATv2 module, matching the output dimension with the sleep-type dimension by dimension compression and linear transformation, and taking the maximum value as the final sleep stage classification result.
2. The sleep stage detection system based on the graph attention mechanism and space-time graph convolution as claimed in claim 1, further comprising a correction function module; the output of the ST-GATv2 module is used as the input of the correction function module, and the corrected data are input to the stage identification module for sleep stage identification;
the correction function module corrects the features of the intermediate stage according to formulas (8), (9) and (10):

M_ij = f_i^{t_n} - f_j^{t_{n-1}}    (8)

m_i = Σ_{j∈χ_i} α_ij·M_ij    (9)

func(i) = tanh( W_i·m_i )    (10)

wherein m_i is part of func(i), M_ij represents the difference between t_n and t_{n-1}, W_i is a trainable parameter, and tanh is an activation function for compressing the value range to [-1, 1]; the final corrected feature representation is shown in formula (11):

f̃'_i = f̃_i + (1/H) Σ_{h=1}^{H} func^h(i)    (11)

wherein func^h(i) represents the multi-head version of func(i).
3. The sleep stage detection system based on the graph attention mechanism and space-time graph convolution as claimed in claim 1 or 2, further comprising a selective kernel layer module, which takes the output of the ST-GATv2 module or the output of the correction function module as its input and fuses the output results of different kernel sizes in the multi-branch input with a soft attention mechanism.
4. The sleep stage detection system based on a graph attention mechanism and space-time graph convolution as claimed in claim 1, wherein the data preprocessing module removes the electroencephalogram signals labeled as unknown stages at the start and end of sleep.
5. The sleep stage detection system based on a graph attention mechanism and space-time graph convolution as claimed in claim 1, wherein the data preprocessing module resamples the channel records to eliminate the effect of sampling-frequency differences.
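Claim 5's resampling step, sketched minimally: bring every channel to one common rate so the model sees uniform inputs. The patent does not specify the method or the rates; this sketch uses simple linear interpolation (a production system would typically use a polyphase filter), and the 200 Hz/100 Hz figures are illustrative only.

```python
import numpy as np

def resample_channel(signal, fs_in, fs_out, duration_s):
    # Linear-interpolation resampling of one channel to a common target rate.
    t_in = np.arange(signal.size) / fs_in
    t_out = np.arange(int(duration_s * fs_out)) / fs_out
    return np.interp(t_out, t_in, signal)

# Hypothetical example: one 30 s epoch recorded at 200 Hz, resampled to 100 Hz.
fs_in, fs_out, dur = 200, 100, 30
x = np.sin(2 * np.pi * 1.0 * np.arange(dur * fs_in) / fs_in)
y = resample_channel(x, fs_in, fs_out, dur)
```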
6. The sleep stage detection system based on graph attention mechanism and space-time graph convolution as claimed in claim 1, wherein the feature learning module applies a Maxpool layer after each group of two 1D-CNN layers to screen the extracted features, i.e. to reduce dimensionality; the ReLU activation is defined as ReLU(x) = max(0, x), where x is the input.
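Claim 6's feature-learning group (two 1D-CNN layers with ReLU, then a Maxpool layer) can be sketched with plain numpy. Kernel sizes, the pooling width, and the input length are assumptions for illustration, not the patent's configuration.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # ReLU(x) = max(0, x), as defined in the claim

def conv1d(x, kernel):
    # Single-channel 1D convolution, 'valid' padding.
    return np.convolve(x, kernel, mode="valid")

def maxpool1d(x, size=2):
    # Maxpool screens the extracted features, i.e. reduces dimensionality.
    n = x.size // size
    return x[: n * size].reshape(n, size).max(axis=1)

rng = np.random.default_rng(3)
x = rng.standard_normal(3000)  # e.g. a 30 s EEG epoch at 100 Hz (illustrative)

# One group: two 1D-CNN layers with ReLU, then a Maxpool layer.
k1, k2 = rng.standard_normal(5), rng.standard_normal(5)
h = relu(conv1d(x, k1))
h = relu(conv1d(h, k2))
h = maxpool1d(h, size=2)
```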
CN202311445079.4A 2023-11-02 2023-11-02 Sleep stage detection system based on graph attention mechanism and space-time graph convolution Active CN117158912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311445079.4A CN117158912B (en) 2023-11-02 2023-11-02 Sleep stage detection system based on graph attention mechanism and space-time graph convolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311445079.4A CN117158912B (en) 2023-11-02 2023-11-02 Sleep stage detection system based on graph attention mechanism and space-time graph convolution

Publications (2)

Publication Number Publication Date
CN117158912A CN117158912A (en) 2023-12-05
CN117158912B true CN117158912B (en) 2024-03-19

Family

ID=88941587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311445079.4A Active CN117158912B (en) 2023-11-02 2023-11-02 Sleep stage detection system based on graph attention mechanism and space-time graph convolution

Country Status (1)

Country Link
CN (1) CN117158912B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116584891A (en) * 2023-04-04 2023-08-15 华南理工大学 Sleep apnea syndrome detection method based on multi-level feature fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230196567A1 (en) * 2021-12-21 2023-06-22 Hospital on Mobile, Inc. Systems, devices, and methods for vital sign monitoring

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116584891A (en) * 2023-04-04 2023-08-15 华南理工大学 Sleep apnea syndrome detection method based on multi-level feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-Layer Graph Attention Network for Sleep Stage Classification Based on EEG; Qi Wang et al.; Sensors; Vol. 22, No. 23; pp. 1-17 *
Research on a sleep staging algorithm based on CNN and an attention mechanism; Yin Lu; China Master's Theses Full-text Database, Medicine and Health Sciences (No. 2); pp. 1-61 *
Research on an automatic sleep staging method based on combined multimodal ECG and EEG features; Lyu Juntong et al.; Life Science Instruments; Vol. 21, No. 1; pp. 41-40 *

Also Published As

Publication number Publication date
CN117158912A (en) 2023-12-05

Similar Documents

Publication Publication Date Title
Bi et al. Early Alzheimer’s disease diagnosis based on EEG spectral images using deep learning
Liu et al. EEG emotion recognition based on the attention mechanism and pre-trained convolution capsule network
CN110399857A (en) A kind of brain electricity emotion identification method based on figure convolutional neural networks
CN114266276B (en) Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
CN113128552B (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
CN111956208B (en) ECG signal classification method based on ultra-lightweight convolutional neural network
Wan et al. EEG fading data classification based on improved manifold learning with adaptive neighborhood selection
CN111091074A (en) Motor imagery electroencephalogram signal classification method based on optimal region common space mode
CN110781751A (en) Emotional electroencephalogram signal classification method based on cross-connection convolutional neural network
CN112932501B (en) Method for automatically identifying insomnia based on one-dimensional convolutional neural network
Zhao et al. Interactive local and global feature coupling for EEG-based epileptic seizure detection
Ma et al. An effective data enhancement method for classification of ECG arrhythmia
CN114781441B (en) EEG motor imagery classification method and multi-space convolution neural network model
CN116072265A (en) Sleep stage analysis system and method based on convolution of time self-attention and dynamic diagram
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN110811591A (en) Heart failure grading method based on heart rate variability
Li et al. GCNs–FSMI: EEG recognition of mental illness based on fine-grained signal features and graph mutual information maximization
CN117407748A (en) Electroencephalogram emotion recognition method based on graph convolution and attention fusion
CN117158912B (en) Sleep stage detection system based on graph attention mechanism and space-time graph convolution
CN112259228A (en) Depression screening method by dynamic attention network non-negative matrix factorization
CN113768474B (en) Anesthesia depth monitoring method and system based on graph convolution neural network
CN115758118A (en) Multi-source manifold embedding feature selection method based on electroencephalogram mutual information
CN115349821A (en) Sleep staging method and system based on multi-modal physiological signal fusion
CN115017960A (en) Electroencephalogram signal classification method based on space-time combined MLP network and application
CN114081492A (en) Electroencephalogram emotion recognition system based on learnable adjacency matrix

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant