CN117257324A - Atrial fibrillation detection method based on convolutional neural network and ECG (electrocardiogram) signals - Google Patents

Atrial fibrillation detection method based on convolutional neural network and ECG (electrocardiogram) signals

Info

Publication number
CN117257324A
Authority
CN
China
Prior art keywords
feature
layer
neural network
convolutional neural
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311558447.6A
Other languages
Chinese (zh)
Other versions
CN117257324B (en)
Inventor
吕国华
宋文廓
池强
程嘉玟
王美慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Weirong Information Technology Co ltd
Qilu University of Technology
Original Assignee
Shandong Weirong Information Technology Co ltd
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Weirong Information Technology Co ltd, Qilu University of Technology filed Critical Shandong Weirong Information Technology Co ltd
Priority to CN202311558447.6A priority Critical patent/CN117257324B/en
Publication of CN117257324A publication Critical patent/CN117257324A/en
Application granted granted Critical
Publication of CN117257324B publication Critical patent/CN117257324B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B 5/346 Analysis of electrocardiograms
    • A61B 5/349 Detecting specific parameters of the electrocardiograph cycle
    • A61B 5/361 Detecting fibrillation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application provides an atrial fibrillation detection method based on a convolutional neural network and an ECG signal. The invention comprises the following steps. S1: acquiring a training set and a test set; S2: constructing a CNN-RdNet convolutional neural network; S3: constructing a total loss function; S4: training the CNN-RdNet convolutional neural network to obtain a CNN-RdNet convolutional neural network model; S5: loading electrocardiographic data into the CNN-RdNet convolutional neural network model for one forward propagation, and outputting a probability value in the range [0,1] together with the accuracy. When the probability value is greater than 0.5, the prediction result is atrial fibrillation; when the probability value is less than 0.5 and greater than 0, the prediction result is not atrial fibrillation. The method achieves higher accuracy and requires a shorter running time.

Description

Atrial fibrillation detection method based on convolutional neural network and ECG (electrocardiogram) signals
Technical Field
The invention belongs to the technical field of new-generation information technology, and particularly relates to an atrial fibrillation detection method based on a convolutional neural network and an ECG signal.
Background
Atrial fibrillation is a common arrhythmia in medicine, in which the atria contract rapidly and irregularly, lose effective contraction and quiver irregularly. In recent years, deep learning has been widely used in the field of electrocardiographic monitoring, but the accuracy obtained when processing large numbers of data samples and the running time of deep learning models are not ideal; the application therefore proposes an atrial fibrillation detection method based on a convolutional neural network and an ECG signal to address these shortcomings.
Disclosure of Invention
In order to make up for the defects of the prior art, the invention provides an atrial fibrillation detection method based on a convolutional neural network and an ECG signal.
The technical scheme of the invention is as follows:
an atrial fibrillation detection method based on convolutional neural network and ECG signals comprises the following steps:
S1: acquiring a training set and a testing set;
S2: constructing a CNN-RdNet convolutional neural network, which first counts the number of annotated electrocardiograms input into it, and then sequentially performs the operations of dimension lifting; shallow feature extraction; focusing on shallow features; capturing and retaining the focused features; feature extraction; focusing on edge features and texture features; capturing and retaining the focused features; feature extraction; focusing on the heart beat frequency and the frequencies of the P-wave, QRS and ST-segment waveforms; capturing and retaining the focused features; feature extraction; dimension reduction and feature vector generation; feature mapping of the feature vector; and mapping of a parameter matrix to obtain a probability value in the range [0,1] and calculate the accuracy;
S3: constructing a total loss function of the CNN-RdNet convolutional neural network;
S4: training the CNN-RdNet convolutional neural network by using a training set and a total loss function to obtain a CNN-RdNet convolutional neural network model;
S5: loading the annotated electrocardiogram data into the CNN-RdNet convolutional neural network model obtained in step S4 for one forward propagation, and outputting a probability value in the range [0,1] and the accuracy; when the probability value is greater than 0.5, the annotated electrocardiogram input into the CNN-RdNet convolutional neural network model is an atrial fibrillation pattern and the prediction result is atrial fibrillation; when the probability value is less than 0.5 and greater than 0, the annotated electrocardiogram input into the CNN-RdNet convolutional neural network model is not an atrial fibrillation pattern and the prediction result is not atrial fibrillation.
Preferably, step S1 comprises the following specific steps: S1-1: acquiring 1000 records of clinical atrial fibrillation electrocardiographic signal data (an electrocardiographic signal is also called an ECG signal) from the PhysioNet data set to form an initial data set; S1-2: processing the electrocardiographic signals in the initial data set with Matlab R2021b software and exporting them to obtain electrocardiograms, then dividing the electrocardiograms in the proportion 7:3 to obtain a training set T and a test set S.
Preferably, step S1-2 comprises the following specific steps: S1-2-1: reading the 1000 electrocardiographic signals of the initial data set into Matlab R2021b software with the rdmat function, and converting the read electrocardiographic signals into visualized waveforms, namely electrocardiogram signals;
S1-2-2: marking the R waves of the electrocardiogram signals with Signal Labeler in Matlab R2021b software to obtain annotated electrocardiogram signals;
S1-2-3: exporting the annotated electrocardiogram signals from Matlab R2021b software to obtain annotated electrocardiograms, and dividing the annotated electrocardiograms into a training set T and a test set S in the proportion 7:3.
Preferably, in step S2, the CNN-RdNet convolutional neural network includes an input layer, a counter i, a convolution operation module, a feature extraction block i, a spatial attention module i, a maximum pooling layer i, a feature extraction block ii, a spatial attention module ii, a maximum pooling layer ii, a feature extraction block iii, a spatial attention module iii, a maximum pooling layer iii, a feature extraction block iv, a global maximum pooling layer, a prediction layer, a counter ii, and an output layer, which are sequentially connected;
the counter I is used for counting the number of annotated electrocardiograms input into the input layer of the CNN-RdNet convolutional neural network;
The convolution operation module is used for performing dimension lifting on the annotated electrocardiogram input into it to obtain an initial feature map;
the feature extraction block I is used for extracting shallow features from the initial feature map to obtain a shallow feature map I; the shallow layer characteristics comprise a P wave form, a QRS wave form and an ST wave form, and the extraction of the shallow layer characteristics is used for judging whether the wave form in the electrocardiogram to be detected is one of the P wave form, the QRS wave form and the ST wave form;
the spatial attention module I is used for focusing on the characteristics of the P wave waveform, the characteristics of the Q wave waveform and the S wave waveform in the QRS wave waveform, the peak position of the R wave waveform, the characteristics of the T wave waveform and the variation of the ST wave waveform in the shallow characteristic diagram I, so as to obtain a shallow characteristic diagram II; wherein the characteristics of the P-wave waveform include the shape, amplitude, and duration of the P-wave waveform; the characteristics of the Q wave waveform include depth and width, and the characteristics of the S wave waveform include depth and width; characteristics of the T wave waveform include characteristics of morphology, polarity and duration of the T wave waveform, and changes in the ST segment waveform include characteristics of abnormal elevation or abnormal depression of the ST segment;
the maximum pooling layer I is used for capturing and retaining the focused features in the shallow feature map II to obtain a shallow feature map III.
The feature extraction block II is used for extracting features which are captured and reserved in the shallow feature map III, so as to obtain a shallow feature map IV comprising edge features and texture features;
the space attention module II is used for focusing on edge features and texture features in the shallow feature map IV to obtain a shallow feature map V;
the maximum pooling layer II is used for capturing and retaining the important features in the shallow feature map V to obtain a shallow feature map VI;
the feature extraction block III is used for extracting features which are captured and reserved in the shallow feature map VI to obtain a deep feature map I, wherein the deep feature map I comprises the heart beating frequency, the frequencies of a P wave waveform, a QRS waveform and an ST segment waveform;
the space attention module III is used for focusing on the heart beating frequency and the frequencies of the P wave waveform, the QRS waveform and the ST wave waveform in the deep feature diagram I, and focusing on the correlation and the continuity among the P wave waveform, the QRS waveform and the ST wave waveform to obtain a deep feature diagram II;
the maximum pooling layer III is used for capturing and retaining the important features in the deep feature map II to obtain the deep feature map III;
the feature extraction block IV is used for extracting features which are captured and reserved in the deep feature map III, so as to obtain a deep feature map IV;
The global maximum pooling layer is used for reducing the dimension of the deep feature map IV and generating feature vectors;
the prediction layer comprises a fully connected layer and a Sigmoid layer; the fully connected layer (FC) is used for performing feature mapping on the feature vector output by the global maximum pooling layer to obtain a parameter matrix [64,2]; the Sigmoid layer is used for performing matrix mapping on the parameter matrix output by the fully connected layer to obtain a probability value in the range [0,1]; when the probability value is greater than 0.5, the annotated electrocardiogram input into the feature extraction block I is an atrial fibrillation pattern; when the probability value is less than 0.5 and greater than 0, the annotated electrocardiogram input into the feature extraction block I is not an atrial fibrillation pattern;
the counter II is used for counting the number of probability values greater than 0.5 and storing, as the accuracy, the ratio of that count to the number of annotated electrocardiograms input into the CNN-RdNet convolutional neural network.
Preferably, in step S2, the convolution operation module is a convolution layer composed of 64 convolution kernels of size 3×3.
Preferably, in step S2, the feature extraction block i includes a backbone network and a residual network, wherein: the backbone network comprises a first convolution block I, a first convolution block II, a second convolution block I, a first Concat layer, a third convolution block I, a second convolution block II, a fourth convolution block I and a second Concat layer which are connected in sequence; the first convolution block I is also connected with the second convolution block I and the first Concat layer respectively, and the first convolution block II is also connected with the first Concat layer; the third convolution block I is also connected with the fourth convolution block I and the second Concat layer; the second convolution block II is also connected with the second Concat layer; the input layer is also connected with the second Concat layer; the residual network comprises a fifth convolution block I, a third convolution block II, a sixth convolution block I and a fourth convolution block II, wherein the fifth convolution block I is connected with the third convolution block II, and the sixth convolution block I is connected with the fourth convolution block II; the input layer is connected with the fifth convolution block I, the third convolution block II is connected with the first Concat layer, the first Concat layer is connected with the sixth convolution block I, and the fourth convolution block II is connected with the second Concat layer.
Preferably, in step S2, the convolution block i is composed of a convolution layer having a convolution kernel size of 1×1 and an LReLU (leaky ReLU) layer, and the convolution block ii is composed of a convolution layer having a convolution kernel size of 3×3 and an LReLU layer.
Preferably, in step S2, the feature extraction block i, the feature extraction block ii, the feature extraction block iii, and the feature extraction block iv have the same structure.
Preferably, in step S2, the spatial attention module i, the spatial attention module ii and the spatial attention module iii have the same structure, and the spatial attention module i is an existing spatial attention module.
Preferably, in step S2, the maximum pooling layer i, the maximum pooling layer ii, the maximum pooling layer iii, and the global maximum pooling layer are all existing maximum pooling layers and differ only in the size of their pooling kernels: the maximum pooling layers i, ii and iii use 2×2 kernels, and the global maximum pooling layer uses a 7×7 kernel.
Preferably, in step S3, the total loss function $L_{\mathrm{total}}$ of the CNN-RdNet convolutional neural network is shown in formula (1); in formula (1), $l_n$ is the two-class (binary) cross-entropy loss function, shown in formula (2):

$L_{\mathrm{total}} = \frac{1}{N}\sum_{n=1}^{N} l_n$ (1)

$l_n = -\left[\, y_n \log(z_n) + (1-y_n)\log(1-z_n) \,\right]$ (2)

In formula (1), $z_n$ represents the predicted probability value output by the model for the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network and $y_n$ represents its actual label; $l_n$ represents the loss of the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network; $L_{\mathrm{total}}$ sums the losses of the 1st to the Nth annotated electrocardiograms input into the CNN-RdNet convolutional neural network and takes their average;

in formula (2), N represents the number of samples, $y_n$ represents the label of the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network, $z_n$ represents the predicted probability that the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network is an atrial fibrillation pattern, and $l_n$ represents the loss of the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network.
Preferably, step S4 specifically comprises the following steps: inputting the annotated electrocardiograms of the training set into the CNN-RdNet convolutional neural network; after each training segment (namely 1 epoch) is completed, the prediction layer of the CNN-RdNet convolutional neural network outputs the probability values, the actual labels and the accuracy; the total loss function of the CNN-RdNet convolutional neural network then computes the loss between the output probability values and the actual labels, namely the total loss of the CNN-RdNet convolutional neural network; the gradient is then optimized and back-propagated according to the total loss, the parameters are saved and the model parameters of the CNN-RdNet convolutional neural network are updated; when the number of training epochs reaches the preset 100, the parameters saved at the epoch with the highest output accuracy are taken as the final model parameters of the CNN-RdNet convolutional neural network, giving the CNN-RdNet convolutional neural network model.
Compared with the prior art, the invention has the following beneficial effects:
The application provides an atrial fibrillation detection method based on a convolutional neural network and ECG signals, which aims to overcome the unsatisfactory detection speed and accuracy of existing atrial fibrillation detection methods; it effectively increases the detection speed, reduces the detection time and improves the accuracy of atrial fibrillation detection. During atrial fibrillation detection, the CNN-RdNet convolutional neural network fully learns the features and the spatial attention modules focus on the key features, which effectively improves the ability of the feature extraction blocks to extract and learn features, so that more features favorable for detecting atrial fibrillation are available to the prediction layer; meanwhile, the added maximum pooling layers greatly reduce the computation of the atrial fibrillation detection method, saving computation time and increasing the detection speed.
Compared with the better-performing traditional method, the convolutional neural network (CNN) based atrial fibrillation detection method, the atrial fibrillation detection method of the application improves accuracy by 5.55% and shortens running time by 56.41% when the detection sample size reaches 60000.
Drawings
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a schematic diagram of a network structure of a CNN-RdNet convolutional neural network;
FIG. 3 is a schematic diagram of a network structure of the feature extraction block I;
FIG. 4 is a graph comparing the atrial fibrillation detection accuracy obtained by testing, on the test set S of the application, the atrial fibrillation detection method described in the application against five existing atrial fibrillation detection methods based on sample entropy, convolutional neural network (CNN), random forest (RF), frequency-domain features, and support vector machine (SVM);

FIG. 5 is a graph comparing the running time obtained by testing, on the test set S of the application, the atrial fibrillation detection method described in the application against the same five existing atrial fibrillation detection methods based on sample entropy, convolutional neural network (CNN), random forest (RF), frequency-domain features, and support vector machine (SVM).
Detailed Description
In order to enable those skilled in the art to better understand the technical solution of the present invention, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Term interpretation:
PhysioNet: full name Research Resource for Complex Physiologic Signals, interpreted as a research resource for complex physiological signals; website: https://physionet.org. Supported by the NIH, it is now managed by MIT's Laboratory for Computational Physiology.
The general flow chart of the atrial fibrillation detection method based on the CNN-RdNet convolutional neural network and the ECG signal is shown in fig. 1, and specifically comprises the following steps:
s1: acquiring a training set and a testing set:
S1-1: 1000 records of clinical atrial fibrillation electrocardiographic signal data (an electrocardiographic signal is also called an ECG signal) are obtained from the PhysioNet data set; these are existing data with standard heartbeat types and a sampling frequency of 1000 Hz, and they form the initial data set. The 1000 records of clinical atrial fibrillation electrocardiographic signal data in the initial data set are the electrocardiographic records of 1000 patients; under normal conditions one RR interval lasts 0.6-1 s, and half a minute (namely 30 s) of electrocardiographic data is taken as one electrocardiographic record.
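As a concreteness check on the segmentation just described, the sketch below takes one 30 s window (30 s × 1000 Hz = 30000 samples) from an ECG record. The synthetic record and the helper name are illustrative assumptions, not the patent's data or code.

```python
# A minimal sketch of the 30 s segmentation step described above.
import numpy as np

FS = 1000              # sampling frequency in Hz, as stated in the description
SEGMENT_SECONDS = 30   # half a minute of data per record

def take_segment(signal: np.ndarray, start_s: float = 0.0) -> np.ndarray:
    """Return one 30 s segment of a 1-D ECG signal sampled at FS."""
    start = int(start_s * FS)
    end = start + SEGMENT_SECONDS * FS
    if end > signal.shape[0]:
        raise ValueError("record shorter than the requested 30 s window")
    return signal[start:end]

# toy usage with a synthetic record (real data would come from PhysioNet)
record = np.random.randn(60 * FS)   # 60 s of fake samples
segment = take_segment(record)      # shape: (30000,)
print(segment.shape)
```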
S1-2: the electrocardiographic signals in the initial data set are processed with Matlab R2021b software and exported to obtain electrocardiograms, and the electrocardiograms are then divided in the proportion 7:3 to obtain a training set T and a test set S; the specific steps are as follows:
S1-2-1: the 1000 electrocardiographic signals of the initial data set are read into the existing Matlab R2021b software with the rdmat function, and the read electrocardiographic signals are converted into visualized waveforms, namely electrocardiogram signals;
S1-2-2: the R waves of the electrocardiogram signals are annotated with the Signal Labeler app of the Matlab R2021b Signal Processing Toolbox to obtain annotated electrocardiogram signals, which facilitates the subsequent segmentation of the annotated electrocardiogram signals;
S1-2-3: the annotated electrocardiogram signals are exported from Matlab R2021b software to obtain annotated electrocardiograms, and the annotated electrocardiograms are divided into a training set T and a test set S in the proportion 7:3, the training set T containing 700 annotated electrocardiograms and the test set S containing 300 annotated electrocardiograms.
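A minimal sketch of the 7:3 split described above follows; the file names and the fixed seed are illustrative assumptions (the patent exports the annotated electrocardiograms from Matlab before this step).

```python
# Shuffle-and-split sketch for the 7:3 training/test division.
import random

def split_dataset(items, train_ratio=0.7, seed=42):
    """Shuffle and split a list of items into train / test subsets."""
    rng = random.Random(seed)
    shuffled = items[:]                  # copy so the input list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

ecg_images = [f"ecg_{i:04d}.png" for i in range(1000)]   # hypothetical names
train_T, test_S = split_dataset(ecg_images)
print(len(train_T), len(test_S))                          # 700 300
```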
S2: constructing a CNN-RdNet convolutional neural network:
the CNN-RdNet convolutional neural network comprises an input layer, a counter I, a convolution operation module, a feature extraction block I, a spatial attention module I, a maximum pooling layer I, a feature extraction block II, a spatial attention module II, a maximum pooling layer II, a feature extraction block III, a spatial attention module III, a maximum pooling layer III, a feature extraction block IV, a global maximum pooling layer, a prediction layer, a counter II and an output layer, which are connected in sequence; the feature extraction block I, the feature extraction block II, the feature extraction block III and the feature extraction block IV have the same structure; the spatial attention module I, the spatial attention module II and the spatial attention module III have the same structure, and the spatial attention module I adopts an existing spatial attention module; the maximum pooling layer I, the maximum pooling layer II, the maximum pooling layer III and the global maximum pooling layer are all existing maximum pooling layers and differ only in the size of their pooling kernels: the maximum pooling layer I, the maximum pooling layer II and the maximum pooling layer III use 2×2 kernels, and the global maximum pooling layer uses a 7×7 kernel;
The counter I is used for counting the number of annotated electrocardiograms input into the input layer of the CNN-RdNet convolutional neural network;
the convolution operation module is used for performing dimension lifting on the annotated electrocardiogram input into it (of size 3×56×56) to obtain an initial feature map of size 64×56×56; the convolution operation module is a convolution layer composed of 64 convolution kernels of size 3×3;
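A minimal sketch of this dimension-lifting step is given below. The description is read here as one convolution with 64 kernels of size 3×3 (stride 1, padding 1), since the output is a 64×56×56 feature map; that reading, and the use of PyTorch, are assumptions rather than the patent's confirmed implementation.

```python
# Dimension lifting: 3-channel 56x56 input -> 64-channel 56x56 feature map.
import torch
import torch.nn as nn

conv_op_module = nn.Conv2d(in_channels=3, out_channels=64,
                           kernel_size=3, stride=1, padding=1)

x = torch.randn(1, 3, 56, 56)           # one annotated electrocardiogram image
initial_feature_map = conv_op_module(x)
print(initial_feature_map.shape)        # torch.Size([1, 64, 56, 56])
```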
the feature extraction block I is used for extracting shallow features from the initial feature map output by the convolution operation module to obtain a shallow feature map I of size 64×56×56; the shallow features comprise the P-wave waveform, the QRS waveform and the ST-segment waveform, and the extraction of shallow features is used for judging whether a waveform in the electrocardiogram to be detected is one of the P-wave waveform, the QRS waveform and the ST-segment waveform;
the spatial attention module I is used for focusing on the characteristics of the P wave waveform, the characteristics of the Q wave waveform and the S wave waveform in the QRS wave waveform, the peak position of the R wave waveform, the characteristics of the T wave waveform and the variation of the ST wave waveform in the shallow characteristic diagram I, so as to obtain a shallow characteristic diagram II; wherein the characteristics of the P-wave waveform include the shape, amplitude, and duration of the P-wave waveform; the characteristics of the Q wave waveform include depth and width, and the characteristics of the S wave waveform include depth and width; characteristics of the T wave waveform include characteristics of morphology, polarity and duration of the T wave waveform, and changes in the ST segment waveform include characteristics of abnormal elevation or abnormal depression of the ST segment;
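The patent states only that an existing spatial attention module is adopted. A common choice for such a module is the CBAM-style spatial attention (Woo et al., 2018), sketched below; it is offered as a plausible stand-in under that assumption, not as the patent's confirmed internals.

```python
# Standard CBAM-style spatial attention: channel-wise average and max maps,
# a 7x7 convolution, and a sigmoid gate that re-weights each spatial location.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)          # channel-wise average
        max_map = x.max(dim=1, keepdim=True).values    # channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                # emphasize key regions

feature_map = torch.randn(1, 64, 56, 56)
focused = SpatialAttention()(feature_map)              # same shape, re-weighted
```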
The maximum pooling layer I is used for capturing and retaining the focused features in the shallow feature map II output by the spatial attention module I to obtain a shallow feature map III of size 64×28×28.
The feature extraction block II is used for extracting the features captured and retained in the shallow feature map III output by the maximum pooling layer I, so as to obtain a shallow feature map IV comprising edge features and texture features, of size 64×28×28;
the space attention module II is used for focusing on edge features and texture features in the shallow feature map IV output by the feature extraction block II to obtain a shallow feature map V; the space attention module II can facilitate the application to pay attention to the obvious directional texture, such as vortex and spiral, which can display atrial fibrillation signals in the electrocardiogram;
the maximum pooling layer II is used for capturing and retaining the focused features in the shallow feature map V output by the spatial attention module II to obtain a shallow feature map VI; the maximum pooling layer II effectively captures and retains the important edge and texture features in an electrocardiogram while reducing the size of the shallow feature map VI to 64×14×14;
The feature extraction block III is used for extracting the features which are captured and reserved in the shallow feature map VI output by the maximum pooling layer II to obtain a deep feature map I, wherein the deep feature map I comprises the heart beating frequency, the frequencies of a P wave waveform, a QRS waveform and an ST wave waveform;
the space attention module III is used for focusing on the heart beating frequency and the frequencies of the P wave waveform, the QRS waveform and the ST wave waveform in the deep feature diagram I output by the feature extraction block III, and focusing on the correlation and the continuity among the P wave waveform, the QRS waveform and the ST wave waveform to obtain a deep feature diagram II;
the maximum pooling layer III is used for capturing and retaining the important focused features in the deep feature map II output by the spatial attention module III to obtain a deep feature map III of size 64×7×7;
the feature extraction block IV is used for extracting features which are captured and reserved in the deep feature map III output by the maximum pooling layer III, so as to obtain a deep feature map IV.
The global maximum pooling layer is used for reducing the dimension of the deep feature map IV output by the feature extraction block IV and generating a feature vector of size 64×1×1, i.e. a 64-dimensional vector, used for the subsequent classification or prediction task.
The prediction layer comprises a fully connected layer and a Sigmoid layer; the fully connected layer (FC) is used for performing feature mapping on the feature vector output by the global maximum pooling layer to obtain a parameter matrix [64,2], where 64 represents the dimension of the feature vector input to the fully connected layer and 2 represents the output dimension of the fully connected layer, namely the number of categories to be produced; the Sigmoid layer is used for mapping the parameter matrix output by the fully connected layer to a probability value in the range [0,1]; when the probability value is greater than 0.5, the annotated electrocardiogram input into the feature extraction block I is an atrial fibrillation pattern; when the probability value is less than 0.5 and greater than 0, the annotated electrocardiogram input into the feature extraction block I is not an atrial fibrillation pattern;
the counter II is used for counting the number of probability values greater than 0.5 and storing, as the accuracy, the ratio of that count to the number of annotated electrocardiograms input into the CNN-RdNet convolutional neural network;
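The sketch below makes the prediction layer and the counter-II bookkeeping concrete. The [64,2] parameter matrix is read as a fully connected layer from a 64-dimensional feature vector to 2 outputs, with the sigmoid of the atrial-fibrillation output taken as the probability in [0,1]; that reading is an assumption about the patent's exact mapping.

```python
# Hedged sketch of the prediction layer (FC + Sigmoid) and counter II.
import torch
import torch.nn as nn

fc = nn.Linear(64, 2)                     # learnable weight matrix of shape [2, 64]

def predict(feature_vectors: torch.Tensor) -> torch.Tensor:
    """Map (B, 64) feature vectors to atrial-fibrillation probabilities."""
    logits = fc(feature_vectors)
    return torch.sigmoid(logits[:, 1])    # probability of the AF class, in [0, 1]

vectors = torch.randn(8, 64)              # from the global maximum pooling layer
probs = predict(vectors)
n_af = (probs > 0.5).sum().item()         # counter II: predictions above 0.5
accuracy = n_af / vectors.shape[0]        # ratio stored as the accuracy
```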
the feature extraction block i comprises a backbone network and a residual network, wherein:
the backbone network comprises a first convolution block I, a first convolution block II, a second convolution block I, a first Concat layer, a third convolution block I, a second convolution block II, a fourth convolution block I and a second Concat layer which are connected in sequence; the first convolution block I is also connected with the second convolution block I and the first Concat layer respectively, and the first convolution block II is also connected with the first Concat layer; the third convolution block I is also connected with the fourth convolution block I and the second Concat layer; the second convolution block II is also connected with the second Concat layer; the input layer is also connected with the second Concat layer;

the residual network comprises a fifth convolution block I, a third convolution block II, a sixth convolution block I and a fourth convolution block II, wherein the fifth convolution block I is connected with the third convolution block II, and the sixth convolution block I is connected with the fourth convolution block II; the input layer is connected with the fifth convolution block I, the third convolution block II is connected with the first Concat layer, the first Concat layer is connected with the sixth convolution block I, and the fourth convolution block II is connected with the second Concat layer;
in the application, the convolution block I consists of a convolution layer with a convolution kernel size of 1×1 and an LReLU (leaky ReLU) layer, and the convolution block II consists of a convolution layer with a convolution kernel size of 3×3 and an LReLU layer;
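A minimal sketch of the two convolution block types follows, assuming 64 channels in and out (matching the 64-channel feature maps of the description) and an assumed leaky-ReLU slope of 0.1, which the patent does not specify.

```python
# Convolution block I (1x1 conv + LReLU) and block II (3x3 conv + LReLU).
import torch.nn as nn

def conv_block(kernel_size: int, channels: int = 64) -> nn.Sequential:
    """Convolution layer followed by a leaky ReLU (LReLU) activation."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2),
        nn.LeakyReLU(negative_slope=0.1),   # slope is an assumed value
    )

conv_block_I = conv_block(1)    # 1x1 kernel: fuses features across channels
conv_block_II = conv_block(3)   # 3x3 kernel: extracts spatial features
```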
the feature extraction block I is used for extracting shallow features from the initial feature map output by the convolution operation module to obtain a shallow feature map I of size 64×56×56; specifically:

the first convolution block I in the feature extraction block I performs feature fusion on the features of different channels of the input initial feature map to obtain a feature map A1 with richer features; the first convolution block II performs simple feature extraction on the P-wave, QRS and ST-segment waveforms in the feature map A1 output by the first convolution block I to obtain a feature map B1; the first Concat layer adds the feature map A1 and the feature map B1 to obtain a feature map C1; the second convolution block I fuses the features of different channels of the feature map C1 to obtain a feature map D1; the features extracted by the residual network supplement the features extracted by the backbone network: the fifth convolution block I performs feature fusion on the features of the input initial feature map other than the P-wave, QRS and ST-segment waveforms to obtain a feature map E1; the third convolution block II extracts features from the feature map E1 while retaining spatial information to obtain a feature map F1; the second Concat layer adds the feature map A1, the feature map C1, the feature map D1 and the feature map F1 to obtain a feature map G1; the third convolution block I fuses the features of different channels of the feature map G1 to obtain a feature map H1; the second convolution block II performs detailed feature extraction on the P-wave, QRS and ST-segment waveforms in the feature map H1 to obtain a feature map J1; the third Concat layer fuses the feature map H1 and the feature map J1 to obtain a feature map K1; the fourth convolution block I fuses the features of different channels of the feature map K1 again to obtain a feature map L1; meanwhile, the sixth convolution block I of the residual network fuses the shallow features of the feature map G1 other than the P-wave, QRS and ST-segment waveforms to obtain a feature map M1; the fourth convolution block II performs detailed feature extraction on the shallow features of the feature map M1 other than the P-wave, QRS and ST-segment waveforms to obtain a feature map N1; the fourth Concat layer adds the feature map L1, the feature map K1, the feature map H1, the feature map N1 and the initial feature map to obtain the shallow feature map I.
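The sketch below follows this data flow (backbone with four merge layers plus a residual path) and reuses conv_block from the previous sketch, which is assumed to be in scope. The patent names the merge layers "Concat" but describes them as adding feature maps; element-wise addition is used here so every map keeps 64 channels, and that choice is an assumption.

```python
# Hedged sketch of one feature extraction block (backbone + residual path).
import torch.nn as nn

class FeatureExtractionBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        cb1 = lambda: conv_block(1, channels)   # convolution block I (1x1)
        cb2 = lambda: conv_block(3, channels)   # convolution block II (3x3)
        # backbone: first I, first II, second I, third I, second II, fourth I
        self.b1, self.b2, self.b3 = cb1(), cb2(), cb1()
        self.b4, self.b5, self.b6 = cb1(), cb2(), cb1()
        # residual path: fifth I -> third II, sixth I -> fourth II
        self.r1, self.r2 = cb1(), cb2()
        self.r3, self.r4 = cb1(), cb2()

    def forward(self, x):
        a = self.b1(x)                  # feature map A
        b = self.b2(a)                  # feature map B
        c = a + b                       # first merge  -> C
        d = self.b3(c)                  # feature map D
        f = self.r2(self.r1(x))         # residual path -> F
        g = a + c + d + f               # second merge -> G
        h = self.b4(g)                  # feature map H
        j = self.b5(h)                  # feature map J
        k = h + j                       # third merge  -> K
        m = self.b6(k)                  # feature map L
        n = self.r4(self.r3(g))         # residual path -> N
        return m + k + h + n + x        # fourth merge plus the block input
```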
The feature extraction block II is used for continuing to extract shallow features from the shallow feature map III output by the maximum pooling layer I to obtain a shallow feature map IV of size 64×28×28; specifically:

the first convolution block I in the feature extraction block II performs feature fusion on the features of different channels of the shallow feature map III to obtain a feature map A2 with richer features; the first convolution block II performs rough feature extraction on the edge features and texture features in the feature map A2 output by the first convolution block I to obtain a feature map B2; the first Concat layer adds the feature map A2 and the feature map B2 to obtain a feature map C2; the second convolution block I fuses the features of different channels of the feature map C2 to obtain a feature map D2; the features extracted by the residual network supplement the features extracted by the backbone network: the fifth convolution block I performs feature fusion on the features of the input shallow feature map III other than the edge and texture features to obtain a feature map E2; the third convolution block II extracts features from the feature map E2 while retaining spatial information to obtain a feature map F2; the second Concat layer adds the feature map A2, the feature map C2, the feature map D2 and the feature map F2 to obtain a feature map G2; the third convolution block I fuses the features of different channels of the feature map G2 to obtain a feature map H2; the second convolution block II performs detailed feature extraction on the edge and texture features in the feature map H2 to obtain a feature map J2; the third Concat layer fuses the feature map H2 and the feature map J2 to obtain a feature map K2; the fourth convolution block I fuses the features of different channels of the feature map K2 again to obtain a feature map L2; meanwhile, the sixth convolution block I of the residual network fuses the features of the feature map G2 other than the edge and texture features to obtain a feature map M2; the fourth convolution block II performs detailed feature extraction on the shallow features of the feature map M2 other than the edge and texture features to obtain a feature map N2; the fourth Concat layer adds the feature map L2, the feature map K2, the feature map H2, the feature map N2 and the shallow feature map III to obtain the shallow feature map IV.
The feature extraction block III is used for continuing to extract features from the shallow feature map VI output by the maximum pooling layer II to obtain a deep feature map I of size 64×14×14; specifically:

the first convolution block I in the feature extraction block III performs feature fusion on the features of different channels of the input shallow feature map VI to obtain a feature map A3 with richer features; the first convolution block II performs rough extraction of the heart beat frequency and the frequencies of the P-wave, QRS and ST-segment waveforms in the feature map A3 to obtain a feature map B3; the first Concat layer adds the feature map A3 and the feature map B3 to obtain a feature map C3; the second convolution block I fuses the features of different channels of the feature map C3 to obtain a feature map D3; on the residual network, the fifth convolution block I performs feature fusion on the features of the input shallow feature map VI other than the heart beat frequency and the frequencies of the P-wave, QRS and ST-segment waveforms to obtain a feature map E3; the third convolution block II extracts features from the feature map E3 while retaining spatial information to obtain a feature map F3; the second Concat layer adds the feature map A3, the feature map C3, the feature map D3 and the feature map F3 to obtain a feature map G3; the third convolution block I fuses the features of different channels of the feature map G3 to obtain a feature map H3; the second convolution block II performs detailed extraction of the heart beat frequency and the frequencies of the P-wave, QRS and ST-segment waveforms in the feature map H3 to obtain a feature map J3; the third Concat layer fuses the feature map H3 and the feature map J3 to obtain a feature map K3; the fourth convolution block I fuses the features of different channels of the feature map K3 again to obtain a feature map L3; meanwhile, the sixth convolution block I of the residual network fuses the deep features of the feature map G3 other than the heart beat frequency and the waveform frequencies to obtain a feature map M3; the fourth convolution block II performs detailed feature extraction on the deep features of the feature map M3 other than the heart beat frequency and the waveform frequencies to obtain a feature map N3; the fourth Concat layer adds the feature map L3, the feature map K3, the feature map H3, the feature map N3 and the shallow feature map VI to obtain the deep feature map I.
The feature extraction block IV is used for continuing to extract deep features from the deep feature map III output by the maximum pooling layer III to obtain a deep feature map IV of size 64×7×7; specifically:

the first convolution block I in the feature extraction block IV performs feature fusion on the features of different channels of the deep feature map III to obtain a feature map A4 with richer features; the first convolution block II performs simple extraction of the correlation and continuity among the P-wave, QRS and ST-segment waveforms in the feature map A4 output by the first convolution block I to obtain a feature map B4; the first Concat layer adds the feature map A4 and the feature map B4 to obtain a feature map C4; the second convolution block I fuses the features of different channels of the feature map C4 to obtain a feature map D4; on the residual network, the fifth convolution block I performs feature fusion on the features of the input deep feature map III other than the correlation and continuity among the P-wave, QRS and ST-segment waveforms to obtain a feature map E4; the third convolution block II extracts features from the feature map E4 while retaining spatial information to obtain a feature map F4; the second Concat layer adds the feature map A4, the feature map C4, the feature map D4 and the feature map F4 to obtain a feature map G4; the third convolution block I fuses the features of different channels of the feature map G4 to obtain a feature map H4; the second convolution block II performs detailed feature extraction on the correlation and continuity among the P-wave, QRS and ST-segment waveforms in the feature map H4 to obtain a feature map J4; the third Concat layer fuses the feature map H4 and the feature map J4 to obtain a feature map K4; the fourth convolution block I fuses the features of different channels of the feature map K4 again to obtain a feature map L4; meanwhile, the sixth convolution block I of the residual network fuses the features of the feature map G4 other than the correlation and continuity among the P-wave, QRS and ST-segment waveforms to obtain a feature map M4; the fourth convolution block II performs detailed feature extraction on the deep features of the feature map M4 other than the correlation and continuity among the P-wave, QRS and ST-segment waveforms to obtain a feature map N4; the fourth Concat layer adds the feature map L4, the feature map K4, the feature map H4, the feature map N4 and the deep feature map III to obtain the deep feature map IV.
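Putting the modules together, the sketch below assembles the full CNN-RdNet pipeline, assuming the conv_block, FeatureExtractionBlock and SpatialAttention definitions from the earlier sketches are in scope. Tensor sizes follow the description: 3×56×56 input, 64 channels throughout, spatial size halved by each maximum pooling layer (56 to 28 to 14 to 7).

```python
# Hedged end-to-end sketch of CNN-RdNet (counters omitted for brevity).
import torch
import torch.nn as nn

class CNNRdNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, padding=1)            # dimension lifting
        self.feb = nn.ModuleList(FeatureExtractionBlock() for _ in range(4))
        self.att = nn.ModuleList(SpatialAttention() for _ in range(3))
        self.pool = nn.MaxPool2d(2)                           # 2x2 maximum pooling
        self.gpool = nn.MaxPool2d(7)                          # global 7x7 pooling
        self.fc = nn.Linear(64, 2)                            # prediction layer

    def forward(self, x):                                     # (B, 3, 56, 56)
        x = self.stem(x)                                      # (B, 64, 56, 56)
        for i in range(3):                                    # blocks I-III
            x = self.pool(self.att[i](self.feb[i](x)))        # 56->28->14->7
        x = self.feb[3](x)                                    # (B, 64, 7, 7)
        v = self.gpool(x).flatten(1)                          # (B, 64) vector
        return torch.sigmoid(self.fc(v)[:, 1])                # AF probability

model = CNNRdNet()
prob = model(torch.randn(1, 3, 56, 56))                       # one image
```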
S3: constructing a total loss function of the CNN-RdNet convolutional neural network:
wherein the total loss function $L_{\mathrm{total}}$ of the CNN-RdNet convolutional neural network is shown in formula (1), and $l_n$ in it is the two-class (binary) cross-entropy loss function shown in formula (2):

$L_{\mathrm{total}} = \frac{1}{N}\sum_{n=1}^{N} l_n$ (1)

$l_n = -\left[\, y_n \log(z_n) + (1-y_n)\log(1-z_n) \,\right]$ (2)

in formula (1), $z_n$ represents the predicted probability value output by the model for the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network and $y_n$ represents its actual label; $l_n$ represents the loss of the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network; $L_{\mathrm{total}}$ sums the losses of the 1st to the Nth annotated electrocardiograms input into the CNN-RdNet convolutional neural network and takes their average;

in formula (2), N represents the number of samples, $y_n$ represents the label of the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network, $z_n$ represents the predicted probability that the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network is an atrial fibrillation pattern, and $l_n$ represents the loss of the nth annotated electrocardiogram input into the CNN-RdNet convolutional neural network.
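A minimal sketch of formulas (1) and (2) is given below, written out by hand so the averaging over the N annotated electrocardiograms is explicit; torch.nn.BCELoss computes the same quantity, and the small epsilon is an assumed numerical-stability guard.

```python
# Binary cross-entropy total loss, as in formulas (1) and (2).
import torch

def total_loss(probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """probs: (N,) predicted AF probabilities; labels: (N,) 0/1 labels."""
    eps = 1e-7                                   # avoid log(0)
    p = probs.clamp(eps, 1 - eps)
    per_sample = -(labels * torch.log(p) + (1 - labels) * torch.log(1 - p))
    return per_sample.mean()                     # average over all N samples

probs = torch.tensor([0.9, 0.2, 0.7])
labels = torch.tensor([1.0, 0.0, 1.0])
print(total_loss(probs, labels))                 # matches nn.BCELoss()(probs, labels)
```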
S4: training the CNN-RdNet convolutional neural network by using a training set and a total loss function to obtain a CNN-RdNet convolutional neural network model:
the step S4 specifically comprises the following steps: inputting the annotated electrocardiograms of the training set into the CNN-RdNet convolutional neural network; after each training segment (namely 1 epoch) is completed, the prediction layer of the CNN-RdNet convolutional neural network outputs the probability values, the actual labels and the accuracy; the total loss function of the CNN-RdNet convolutional neural network then computes the loss between the output probability values and the actual labels, namely the total loss of the CNN-RdNet convolutional neural network; the gradient is then optimized and back-propagated according to the total loss, the parameters are saved and the model parameters of the CNN-RdNet convolutional neural network are updated; when the number of training epochs reaches the preset 100, the parameters saved at the epoch with the highest output accuracy are taken as the final model parameters of the CNN-RdNet convolutional neural network, giving the CNN-RdNet convolutional neural network model.
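The training procedure just described is sketched below under stated assumptions: the Adam optimizer, the learning rate and the data loader are illustrative choices the patent does not specify, and total_loss and CNNRdNet refer to the earlier sketches, assumed to be in scope.

```python
# Hedged sketch of step S4: 100 epochs, back-propagation on the total loss,
# and keeping the parameters of the epoch with the highest accuracy.
import copy
import torch

def train(model, loader, epochs: int = 100, lr: float = 1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_acc, best_params = -1.0, None
    for epoch in range(epochs):
        correct, total = 0, 0
        for images, labels in loader:            # annotated electrocardiograms
            probs = model(images)
            loss = total_loss(probs, labels.float())
            optimizer.zero_grad()
            loss.backward()                      # back-propagate the total loss
            optimizer.step()                     # update the model parameters
            correct += ((probs > 0.5) == labels.bool()).sum().item()
            total += labels.numel()
        acc = correct / total
        if acc > best_acc:                       # keep the best epoch's weights
            best_acc, best_params = acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_params)           # final model parameters
    return model
```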
S5: loading the annotated electrocardiogram data of the test set S into the CNN-RdNet convolutional neural network model obtained in step S4 for one forward propagation, and outputting a probability value in the range [0,1] and the accuracy; when the probability value is greater than 0.5, the annotated electrocardiogram input into the CNN-RdNet convolutional neural network model is an atrial fibrillation pattern and the prediction result is atrial fibrillation; when the probability value is less than 0.5 and greater than 0, the annotated electrocardiogram input into the CNN-RdNet convolutional neural network model is not an atrial fibrillation pattern and the prediction result is not atrial fibrillation.
In order to compare the atrial fibrillation detection effect of the method described in the application (shown in Table 1, Table 2, FIG. 4 and FIG. 5) with five existing atrial fibrillation detection methods, based respectively on sample entropy (from Computer Methods and Programs in Biomedicine), convolutional neural network-CNN (from Journal of Physics: Conference Series), random forest-RF (from Interson 2023), frequency-domain features, and support vector machine-SVM (from Biomedical Signal Processing and Control 2019), the atrial fibrillation detection method of the application is compared with these five methods on the same test set and with the same test strategy. Specifically, 100 annotated electrocardiograms are randomly selected each time from the 300 in the test set S and loaded into the CNN-RdNet convolutional neural network model for one forward propagation to complete one test; the test is repeated 200, 250, 300, 350, 400, 450, 500, 550 and 600 times, and the accuracy is output for each setting. Similarly, 100 annotated electrocardiograms randomly selected from the test set S are loaded into the models based on sample entropy, convolutional neural network-CNN, random forest-RF, frequency-domain features and support vector machine-SVM for one forward propagation to complete one test, likewise repeated 200, 250, 300, 350, 400, 450, 500, 550 and 600 times; the running times of the five existing atrial fibrillation detection methods are recorded at the same time. The accuracy test results are shown in Table 1 and FIG. 4, and the running time test results are shown in Table 2 and FIG. 5.
TABLE 1 (accuracy of each method over 20000-60000 test samples)
TABLE 2 (running time of each method over 20000-60000 test samples)
As can be seen from table 1 and fig. 4, compared with the five existing atrial fibrillation detection methods, the atrial fibrillation detection method of the present application performs better over 20000-60000 test samples. Specifically: as the number of test samples increases from 20000 to 60000, the accuracy of the atrial fibrillation detection method described in this application increases the most of any method, by 0.23. When the number of test samples is 20000 its accuracy reaches 0.72, and when the number of test samples is 60000 its accuracy reaches 0.95. At 60000 test samples its accuracy therefore exceeds that of the CNN-based atrial fibrillation detection method, the most accurate of the comparison methods, by (0.95 - 0.90)/0.90 × 100% ≈ 5.56%.
As can be seen from table 2 and fig. 5, compared with the five existing atrial fibrillation detection methods, the atrial fibrillation detection method of the present application also has the better running time over 20000-60000 test samples. Specifically: at every number of test samples in table 2, the time used by the atrial fibrillation detection method described in this application is the smallest of all methods, and as the number of test samples increases from 20000 to 60000 its running time also grows the least. When the number of test samples is 20000 the running time is 6 s, and when the number of test samples is 60000 the running time is 17 s. At 60000 test samples its running time is therefore obviously shorter than that of the CNN-based atrial fibrillation detection method, the fastest of the comparison methods, a reduction of (39 - 17)/39 × 100% ≈ 56.41%.

Claims (9)

1. An atrial fibrillation detection method based on a convolutional neural network and an ECG signal is characterized in that: the method comprises the following steps:
S1: acquiring a training set and a test set;
S2: constructing a CNN-RdNet convolutional neural network, the CNN-RdNet convolutional neural network first counting the number of the marked electrocardiograms input into it, and then sequentially carrying out the operations of dimension lifting; shallow feature extraction; focusing on shallow features; capturing and retaining the focused features; feature extraction; focusing on edge features and texture features; capturing and retaining the focused features; feature extraction; focusing on the heart beating frequency and the frequencies of the P wave, QRS and ST wave waveforms; capturing and retaining the focused features; feature extraction; dimension reduction and feature vector generation; feature mapping of the feature vector; and mapping of the parameter matrix to obtain a probability value in the range [0,1] and calculating the accuracy;
S3: constructing a total loss function of the CNN-RdNet convolutional neural network;
S4: training the CNN-RdNet convolutional neural network by using the training set and the total loss function to obtain a CNN-RdNet convolutional neural network model;
S5: loading the marked electrocardiographic data into the CNN-RdNet convolutional neural network model obtained in step S4 for one forward propagation, and outputting a probability value in the range [0,1] together with the accuracy; when the probability value is larger than 0.5, the marked electrocardiogram input into the CNN-RdNet convolutional neural network model shows an atrial fibrillation pattern, and the prediction result is atrial fibrillation; when the probability value is larger than 0 and smaller than 0.5, the marked electrocardiogram input into the CNN-RdNet convolutional neural network model does not show an atrial fibrillation pattern, and the prediction result is not atrial fibrillation.
2. The atrial fibrillation detection method based on convolutional neural network and ECG signal as claimed in claim 1, wherein: the step S1 comprises the following specific steps: S1-1: acquiring 1000 pieces of clinical atrial fibrillation electrocardiosignal data from a PhysioNet data set to form an initial data set;
S1-2: processing and exporting the electrocardiosignals in the initial data set by using Matlab R2021b software to obtain electrocardiograms, and dividing the electrocardiograms in the proportion of 7:3 to obtain a training set T and a test set S.
3. The atrial fibrillation detection method based on convolutional neural network and ECG signal as claimed in claim 2, wherein: the step S1-2 comprises the following specific steps:
S1-2-1: reading the 1000 electrocardiosignals of the initial data set into Matlab R2021b software by using the rdmat function, and converting the electrocardiosignals read by the rdmat function into visualized signals, namely electrocardiogram signals;
S1-2-2: marking the R waves of the electrocardiogram signals by using the Signal Labeler in Matlab R2021b software to obtain marked electrocardiogram signals;
S1-2-3: exporting the marked electrocardiogram signals from Matlab R2021b software to obtain marked electrocardiograms, and dividing the marked electrocardiograms into a training set T and a test set S in the proportion of 7:3.
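The claimed preparation is carried out in Matlab R2021b. Purely as a hedged illustration of the same workflow, a rough Python analogue using the wfdb package (a substitution; the rdmat and Signal Labeler tools themselves are Matlab-specific, and the record names, annotation extension and random seed here are assumptions) might read:

```python
import random
import wfdb

def build_splits(record_names, pn_dir, train_ratio=0.7, seed=0):
    """Read PhysioNet ECG records with their beat annotations and split
    them 7:3 into a training set T and a test set S."""
    records = []
    for name in record_names:
        rec = wfdb.rdrecord(name, pn_dir=pn_dir)      # raw electrocardiosignal
        ann = wfdb.rdann(name, "atr", pn_dir=pn_dir)  # beat annotations (R-peak sample indices)
        records.append((rec.p_signal, ann.sample))
    random.Random(seed).shuffle(records)
    cut = int(len(records) * train_ratio)             # 7:3 division
    return records[:cut], records[cut:]               # training set T, test set S
```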
4. The atrial fibrillation detection method based on convolutional neural network and ECG signal as claimed in claim 1, wherein: in the step S2, the CNN-RdNet convolutional neural network comprises an input layer, a counter I, a convolution operation module, a feature extraction block I, a spatial attention module I, a maximum pooling layer I, a feature extraction block II, a spatial attention module II, a maximum pooling layer II, a feature extraction block III, a spatial attention module III, a maximum pooling layer III, a feature extraction block IV, a global maximum pooling layer, a prediction layer, a counter II and an output layer which are sequentially connected;
The counter I is used for counting the number of the marked electrocardiograms input into the input layer of the CNN-RdNet convolutional neural network; the convolution operation module is used for carrying out dimension lifting on the marked electrocardiogram input into it to obtain an initial feature map of size 64 × 56; the feature extraction block I is used for extracting shallow features from the initial feature map to obtain a shallow feature map I; the spatial attention module I is used for focusing on the features of the P wave waveform, the features of the Q wave and S wave waveforms in the QRS waveform, the peak position of the R wave waveform, the features of the T wave waveform and the variation of the ST wave waveform in the shallow feature map I, so as to obtain a shallow feature map II; the maximum pooling layer I is used for capturing and retaining the focused features in the shallow feature map II to obtain a shallow feature map III; the feature extraction block II is used for extracting the features captured and retained in the shallow feature map III, so as to obtain a shallow feature map IV comprising edge features and texture features; the spatial attention module II is used for focusing on the edge features and texture features in the shallow feature map IV to obtain a shallow feature map V; the maximum pooling layer II is used for capturing and retaining the focused features in the shallow feature map V to obtain a shallow feature map VI; the feature extraction block III is used for extracting the features captured and retained in the shallow feature map VI to obtain a deep feature map I; the spatial attention module III is used for focusing on the heart beating frequency and the frequencies of the P wave, QRS and ST wave waveforms in the deep feature map I, and on the correlation and continuity among the P wave, QRS and ST wave waveforms, to obtain a deep feature map II; the maximum pooling layer III is used for capturing and retaining the focused features in the deep feature map II to obtain a deep feature map III; the feature extraction block IV is used for extracting the features captured and retained in the deep feature map III, so as to obtain a deep feature map IV; the global maximum pooling layer is used for reducing the dimension of the deep feature map IV and generating a feature vector; the prediction layer comprises a full connection layer and a Sigmoid layer; the full connection layer is used for carrying out feature mapping on the feature vector to obtain a parameter matrix [64,2]; the Sigmoid layer is used for mapping the parameter matrix to obtain a probability value in the range [0,1]; the counter II is used for counting the number of inputs whose probability value is larger than 0.5 and storing the accuracy.
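The module chain of claim 4 can be sketched schematically as follows. This is a hedged reading, not the literal network: the CBAM-style form of the spatial attention gate, the placeholder feature block (see the more detailed sketch following claim 7), the 1-channel 56×56 input and the single-logit prediction head (one simplified reading of the parameter matrix [64,2]) are all assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Assumed CBAM-style spatial attention gate; the claims do not fix this form."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(s))   # reweight features, keep shape

class FeatureBlock(nn.Module):
    """Placeholder for the feature extraction blocks of claims 6-8."""
    def __init__(self, c=64):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.LeakyReLU(0.1))

    def forward(self, x):
        return self.body(x)

class CNNRdNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(1, 64, kernel_size=3, padding=1)     # dimension lifting to 64 channels
        self.stages = nn.Sequential(
            FeatureBlock(), SpatialAttention(), nn.MaxPool2d(2),   # block/attention/pooling I
            FeatureBlock(), SpatialAttention(), nn.MaxPool2d(2),   # block/attention/pooling II
            FeatureBlock(), SpatialAttention(), nn.MaxPool2d(2),   # block/attention/pooling III
            FeatureBlock(),                                        # feature extraction block IV
        )
        self.gmp = nn.MaxPool2d(7)      # global maximum pooling over the 7x7 map
        self.fc = nn.Linear(64, 1)      # prediction layer: full connection (simplified head)
        self.sigmoid = nn.Sigmoid()     # maps to a probability value in [0, 1]

    def forward(self, x):               # x: (batch, 1, 56, 56), an assumed input size
        x = self.stages(self.stem(x))
        v = self.gmp(x).flatten(1)      # feature vector of length 64
        return self.sigmoid(self.fc(v))
```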
5. The atrial fibrillation detection method based on convolutional neural network and ECG signal as defined in claim 4, wherein: in step S2, the convolution operation module consists of a convolution layer with 64 convolution kernels of size 3×3.
6. The atrial fibrillation detection method based on convolutional neural network and ECG signal as defined in claim 4, wherein: in step S2, the feature extraction block I comprises a backbone network and a residual network, wherein: the backbone network comprises a first convolution block I, a first convolution block II, a second convolution block I, a first Concat layer, a third convolution block I, a second convolution block II, a fourth convolution block I and a second Concat layer which are connected in sequence; the first convolution block I is also connected with the second convolution block I and the first Concat layer respectively, and the first convolution block II is also connected with the first Concat layer; the third convolution block I is also connected with the fourth convolution block I and the second Concat layer; the second convolution block II is also connected with the second Concat layer; the input layer is also connected with the second Concat layer; the residual network comprises a fifth convolution block I, a third convolution block II, a sixth convolution block I and a fourth convolution block II, wherein the fifth convolution block I is connected with the third convolution block II, and the sixth convolution block I is connected with the fourth convolution block II; the input layer is connected with the fifth convolution block I, the third convolution block II is connected with the first Concat layer, the first Concat layer is connected with the sixth convolution block I, and the fourth convolution block II is connected with the second Concat layer.
7. The atrial fibrillation detection method based on convolutional neural network and ECG signal as defined in claim 4, wherein: in step S2, the convolution block I is composed of a convolution layer with a convolution kernel size of 1×1 and an LReLU (leaky ReLU) layer, and the convolution block II is composed of a convolution layer with a convolution kernel size of 3×3 and an LReLU layer.
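Combining claims 6 and 7, one plausible reading of the feature extraction block is sketched below. The channel widths and the final 1×1 projection are assumptions introduced only so that the sketch is runnable and maps 64 channels back to 64; the claims themselves do not fix these choices.

```python
import torch
import torch.nn as nn

def conv_block_i(c_in, c_out):
    # convolution block I of claim 7: 1x1 convolution followed by an LReLU layer
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=1), nn.LeakyReLU(0.1))

def conv_block_ii(c_in, c_out):
    # convolution block II of claim 7: 3x3 convolution followed by an LReLU layer
    return nn.Sequential(nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.LeakyReLU(0.1))

class FeatureBlockI(nn.Module):
    """One plausible reading of the topology of claim 6."""
    def __init__(self, c=64):
        super().__init__()
        self.conv1_i = conv_block_i(c, c)        # first convolution block I
        self.conv1_ii = conv_block_ii(c, c)      # first convolution block II
        self.conv2_i = conv_block_i(c, c)        # second convolution block I
        self.res1 = nn.Sequential(conv_block_i(c, c), conv_block_ii(c, c))       # fifth I + third II
        self.conv3_i = conv_block_i(4 * c, c)    # third convolution block I (reads Concat 1)
        self.conv2_ii = conv_block_ii(c, c)      # second convolution block II
        self.conv4_i = conv_block_i(c, c)        # fourth convolution block I
        self.res2 = nn.Sequential(conv_block_i(4 * c, c), conv_block_ii(c, c))   # sixth I + fourth II
        self.project = conv_block_i(5 * c, c)    # assumption: restore c channels after Concat 2

    def forward(self, x):
        y1 = self.conv1_i(x)
        y2 = self.conv1_ii(y1)
        y3 = self.conv2_i(y2)
        cat1 = torch.cat([y1, y2, y3, self.res1(x)], dim=1)        # first Concat layer
        z1 = self.conv3_i(cat1)
        z2 = self.conv2_ii(z1)
        z3 = self.conv4_i(z2)
        cat2 = torch.cat([z1, z2, z3, self.res2(cat1), x], dim=1)  # second Concat layer (+ input)
        return self.project(cat2)
```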
8. The atrial fibrillation detection method based on convolutional neural network and ECG signal as defined in claim 4, wherein: in the step S2, the structures of the feature extraction block I, the feature extraction block II, the feature extraction block III and the feature extraction block IV are the same.
9. The atrial fibrillation detection method based on convolutional neural network and ECG signal as defined in claim 4, wherein: in step S2, the pooling kernel sizes of the maximum pooling layer I, the maximum pooling layer II and the maximum pooling layer III are all 2×2, and the pooling kernel size of the global maximum pooling layer is 7×7.
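Under the 56×56 input assumed in the network sketch above, the claimed pooling sizes are mutually consistent, as the following shape check shows:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)       # 64-channel feature map (input size is an assumption)
pool = nn.MaxPool2d(kernel_size=2)   # maximum pooling layers I-III
y = pool(pool(pool(x)))              # 56 -> 28 -> 14 -> 7
v = nn.MaxPool2d(kernel_size=7)(y)   # global maximum pooling over the 7x7 map
print(y.shape, v.flatten(1).shape)   # torch.Size([1, 64, 7, 7]) torch.Size([1, 64])
```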
CN202311558447.6A 2023-11-22 2023-11-22 Atrial fibrillation detection method based on convolutional neural network and ECG (electrocardiogram) signals Active CN117257324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311558447.6A CN117257324B (en) 2023-11-22 2023-11-22 Atrial fibrillation detection method based on convolutional neural network and ECG (electrocardiogram) signals

Publications (2)

Publication Number Publication Date
CN117257324A true CN117257324A (en) 2023-12-22
CN117257324B CN117257324B (en) 2024-01-30

Family

ID=89206719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311558447.6A Active CN117257324B (en) Atrial fibrillation detection method based on convolutional neural network and ECG (electrocardiogram) signals

Country Status (1)

Country Link
CN (1) CN117257324B (en)

Citations (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170105643A1 (en) * 2015-06-23 2017-04-20 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Gpu-based parallel electrocardiogram signal analysis method, computer readable storage medium and device
WO2017072250A1 (en) * 2015-10-27 2017-05-04 CardioLogs Technologies An automatic method to delineate or categorize an electrocardiogram
JP2018000224A (en) * 2016-06-27 2018-01-11 公立大学法人会津大学 Respiration detection device, respiration detection method, and program for respiration detection
US20190059763A1 (en) * 2017-08-25 2019-02-28 Cambridge Heartwear Limited Method of detecting abnormalities in ecg signals
CN110037684A (en) * 2019-04-01 2019-07-23 上海数创医疗科技有限公司 Device based on the identification rhythm of the heart type for improving convolutional neural networks
CN110037685A (en) * 2019-04-01 2019-07-23 上海数创医疗科技有限公司 With the portable electrocardiograph for improving convolutional neural networks recognizer
CN110037682A (en) * 2019-04-01 2019-07-23 上海数创医疗科技有限公司 Method based on the identification rhythm of the heart type for improving convolutional neural networks
US20190236411A1 (en) * 2016-09-14 2019-08-01 Konica Minolta Laboratory U.S.A., Inc. Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
CN110379506A (en) * 2019-06-14 2019-10-25 杭州电子科技大学 The cardiac arrhythmia detection method of binaryzation neural network is used for ECG data
US20190328251A1 (en) * 2018-04-27 2019-10-31 Boe Technology Group Co., Ltd. Arrhythmia detection method, arrhythmia detection device and arrhythmia detection system
CN110680310A (en) * 2019-10-21 2020-01-14 北京航空航天大学 Electrocardiosignal atrial fibrillation detection method based on one-dimensional dense connection convolution network
CN110840402A (en) * 2019-11-19 2020-02-28 山东大学 Atrial fibrillation signal identification method and system based on machine learning
WO2020047750A1 (en) * 2018-09-04 2020-03-12 深圳先进技术研究院 Arrhythmia detection method and apparatus, electronic device, and computer storage medium
US20200205687A1 (en) * 2017-09-21 2020-07-02 Koninklijke Philips N.V. Detecting atrial fibrillation using short single-lead ecg recordings
US20200229728A1 (en) * 2019-01-22 2020-07-23 Industry-Academic Cooperation Foundation Chosun University R wave detection method using periodicity of electrocardiogram signal
US20200260980A1 (en) * 2017-11-27 2020-08-20 Lepu Medical Technology (Bejing) Co., Ltd. Method and device for self-learning dynamic electrocardiography analysis employing artificial intelligence
US20200289010A1 (en) * 2019-03-14 2020-09-17 University Of Seoul Industry Cooperation Foundation Method for classifying type of heartbeat and apparatus using the same
CN111700609A (en) * 2020-07-27 2020-09-25 郑州大学 Atrial fibrillation detection method, device and equipment based on short-time electrocardiosignals
US20200312459A1 (en) * 2017-12-19 2020-10-01 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Atrial fibrillation signal recognition method, apparatus and device
US20200342893A1 (en) * 2017-10-25 2020-10-29 Samsung Electronics Co., Ltd. Electronic device and control method therefor
CN111956211A (en) * 2020-07-29 2020-11-20 鲁东大学 Automatic detection method for atrial fibrillation of single lead electrocardiosignal
CN111990988A (en) * 2020-08-10 2020-11-27 北京航空航天大学 Electrocardiosignal atrial fibrillation detection device based on dense connection convolution cyclic neural network
CN112587153A (en) * 2020-12-08 2021-04-02 合肥工业大学 End-to-end non-contact atrial fibrillation automatic detection system and method based on vPPG signal
CN112617849A (en) * 2020-12-31 2021-04-09 山西三友和智慧信息技术股份有限公司 Atrial fibrillation detection and classification method based on CNN + LSTM
CN113171106A (en) * 2021-04-25 2021-07-27 安徽十锎信息科技有限公司 Electrocardio abnormality detection method based on VQ-VAE2 and deep neural network method
US20210369131A1 (en) * 2018-02-24 2021-12-02 Shanghai Yocaly Health Management Company Electrocardiogram information dynamic monitoring method and dynamic monitoring system
US20220004810A1 (en) * 2018-09-28 2022-01-06 Pavel Sinha Machine learning using structurally regularized convolutional neural network architecture
US20220044778A1 (en) * 2020-08-06 2022-02-10 Atsens Co., Ltd. Method and electronic apparatus for providing classification data of electrocardiogram signals
US20220039729A1 (en) * 2020-08-10 2022-02-10 Cardiologs Technologies Sas Electrocardiogram processing system for detecting and/or predicting cardiac events
CN114224351A (en) * 2022-01-13 2022-03-25 浙江好络维医疗技术有限公司 Atrial fibrillation identification method based on fusion of multiple deep learning models
WO2022119155A1 (en) * 2020-12-02 2022-06-09 재단법인 아산사회복지재단 Apparatus and method for diagnosing explainable multiple electrocardiogram arrhythmias
WO2022146057A1 (en) * 2020-12-29 2022-07-07 서울대학교병원 Method and apparatus for converting electrical biosignal data into numerical vectors, and method and apparatus for analyzing disease by using same
KR20230001096A (en) * 2021-06-28 2023-01-04 금오공과대학교 산학협력단 Classification method of atrial fibrillation and congestive heart failure using a convolutional artificial neural network
CN116898451A (en) * 2023-07-19 2023-10-20 福州大学 Method for realizing atrial fibrillation prediction by using neural network with multi-scale attention mechanism
CN116999063A (en) * 2023-08-03 2023-11-07 哈尔滨工业大学(威海) Method for realizing electrocardiographic atrial fibrillation detection based on signal decomposition and convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant