CN112001481A - P wave identification method based on adversarial learning, terminal device and storage medium - Google Patents

P wave identification method based on adversarial learning, terminal device and storage medium

Info

Publication number
CN112001481A
CN112001481A (application CN202010818810.3A)
Authority
CN
China
Prior art keywords
loss
output
wave
data
disc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010818810.3A
Other languages
Chinese (zh)
Inventor
李熙
徐拥军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Nalong Science & Technology Co ltd
Original Assignee
Xiamen Nalong Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Nalong Science & Technology Co ltd
Priority to CN202010818810.3A
Publication of CN112001481A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to a P wave identification method based on adversarial learning, a terminal device and a storage medium. The method comprises the following steps: S1: acquiring electrocardiogram data, marking the P waves in the electrocardiogram data, and forming a training set from the electrocardiogram data and the corresponding P-wave marker data; S2: constructing a generative adversarial network model, setting loss functions for the generator and the discriminator in the adversarial network model, and iteratively training the generator and the discriminator on the training set so as to minimize the loss function of the generator and maximize the loss function of the discriminator; S3: generating, with the trained generator, the P-wave marker data of the electrocardiogram data to be marked, thereby realizing P-wave identification for the electrocardiogram data. The invention realizes identification of the electrocardiogram P wave by means of the constructed generative adversarial network and solves the problem of poor P-wave identification performance in the prior art.

Description

P wave identification method based on adversarial learning, terminal device and storage medium
Technical Field
The invention relates to the field of electrocardiogram recognition, and in particular to a P-wave identification method based on adversarial learning, a terminal device and a storage medium.
Background
The P wave of the electrocardiogram is the atrial depolarization wave and represents the excitation of the left and right atria; analysis of the P wave is of great significance for the diagnosis and differential diagnosis of arrhythmia. In automatic electrocardiogram analysis, P-wave identification tends to face greater difficulty because, compared with the ventricular QRS-T waves, the P wave has low amplitude and low energy and often overlaps other waveforms. Existing P-wave identification methods are usually based on band-pass filtering and threshold comparison; their success rate on ordinary sinus P waves is acceptable when artifact interference is small, but their performance on unconventional P waves, such as ectopic atrial P waves, retrograde P waves and non-conducted P waves, is poor. The morphology of the P wave in these cases is difficult to express with fixed rules and can only be determined from the experience and intuition of human doctors. It is difficult to encode human experience and intuition directly on a machine; the problem can, however, be translated into building a conditional distribution model of the P wave. In recent years, the generative adversarial network (GAN) developed in the field of deep learning has shown excellent results in learning the distribution of image data, but, owing to the differences between electrocardiographic signals and natural images, such an application is not yet available.
Disclosure of Invention
In order to solve the above problems, the present invention provides a P-wave identification method based on adversarial learning, a terminal device and a storage medium.
The specific scheme is as follows:
a P-wave identification method based on adversarial learning comprises the following steps:
s1: acquiring electrocardiogram data, marking the P waves in the electrocardiogram data, and forming a training set from the electrocardiogram data and the corresponding P-wave marker data;
s2: constructing a generative adversarial network model, setting loss functions for the generator and the discriminator in the adversarial network model, and iteratively training the generator and the discriminator on the training set so that the loss function of the generator is minimized and the loss function of the discriminator is maximized;
s3: generating, with the trained generator, the P-wave marker data of the electrocardiogram data to be marked, thereby realizing P-wave identification for the electrocardiogram data.
Further, step S1 also includes cropping the electrocardiogram data and the corresponding P-wave marker data, the cropping method comprising:
(1) letting the array input_image be the array corresponding to the electrocardiogram data and the array real_image be the array corresponding to the P-wave marker data, converting input_image and real_image into tensors of dimension [1, 1, n, 1], where n denotes the sampling length of the original electrocardiogram data;
(2) stacking input_image and real_image in the first dimension and recording the stacked result as stacked_image, whose dimension is [2, 1, n, 1];
(3) randomly sampling the third dimension of stacked_image over the range 0 to n-s, where s denotes the crop size, then intercepting the s points of data starting from the sampled point and recording them as cropped_image, whose dimension is [2, 1, s, 1];
(4) splitting cropped_image in the first dimension and assigning the split results back to input_image and real_image as the cropped result.
Further, step S1 also includes normalizing the electrocardiogram data and the corresponding P-wave marker data.
Further, the loss function total_gen_loss of the generator is:
total_gen_loss=gan_loss+l1_loss
gan_loss=BinaryCrossentropy(disc_generated_output,ones_like_disc_generated_output)
l1_loss=reduce_mean(|gen_output-target|)
wherein disc_generated_output denotes the output tensor of the discriminator, ones_like_disc_generated_output denotes a tensor of the same format as disc_generated_output with all values equal to 1, BinaryCrossentropy denotes computation of the binary cross entropy, gen_output denotes the output tensor of the generator, target denotes the real P-wave marker data corresponding to the generator input, reduce_mean denotes taking the mean over all dimensions, and gan_loss and l1_loss are both intermediate variables of the loss function.
Further, the loss function total_disc_loss of the discriminator is:
total_disc_loss=real_loss+generated_loss
real_loss=BinaryCrossentropy(disc_real_output,ones_like_disc_real_output)
generated_loss=BinaryCrossentropy(disc_generated_output,zeros_like_disc_generated_output)
wherein disc_real_output is the output obtained after inputting the electrocardiogram data and the corresponding P-wave marker data into the discriminator, ones_like_disc_real_output is a tensor of the same format as disc_real_output with all values equal to 1, BinaryCrossentropy denotes computation of the binary cross entropy, disc_generated_output is the output obtained after inputting the electrocardiogram data and the P-wave marker data generated from it by the generator into the discriminator, zeros_like_disc_generated_output is a tensor of the same format as disc_generated_output with all values equal to 0, and real_loss and generated_loss are both intermediate variables of the loss function.
A P-wave identification terminal device based on adversarial learning comprises a processor, a memory and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to the embodiments of the present invention described above.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to the embodiments of the present invention described above.
By adopting the above technical scheme, the invention realizes identification of the electrocardiogram P wave through the constructed generative adversarial network, and solves the problem of poor P-wave identification performance in the prior art.
Drawings
Fig. 1 is a flowchart illustrating a first embodiment of the present invention.
Fig. 2 is a schematic diagram of P-wave marking in this embodiment.
Fig. 3 is a schematic structural diagram of the generator in this embodiment.
Fig. 4 is a schematic diagram showing an internal topology of an example of a coding unit in this embodiment.
Fig. 5 is a schematic diagram showing an internal topology of an example of a decoding unit in this embodiment.
FIG. 6 shows the topology of the generator model and the input-output dimensions of each layer in this embodiment.
Fig. 7 is a schematic structural diagram of the discriminator in this embodiment.
Fig. 8 shows the topology of the discriminator model and the input-output dimension of each layer in this embodiment.
Detailed Description
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. With these references, one of ordinary skill in the art will appreciate other possible implementations and advantages of the present invention.
The invention will now be further described with reference to the accompanying drawings and detailed description.
The first embodiment is as follows:
an embodiment of the present invention provides a P-wave identification method based on adversarial learning, as shown in fig. 1, comprising the following steps.
S1: electrocardiogram data are collected and the P waves in them are marked; the electrocardiogram data and the corresponding P-wave marker data together form the training set.
The electrocardiogram data may be single-lead data or multi-lead data, which is not limited herein, and the embodiment is described by taking the single-lead data as an example.
The marking may be done manually by a medical professional and then saved as a normalized position mask, as shown in fig. 2.
The electrocardiogram data refer to the electrocardiogram voltage data, and the P-wave marker data refer to the P-wave position mask. In this embodiment, the electrocardiographic data are read into memory and laid out as an array, denoted input_image, and the signal is resampled at a sampling rate of 360 Hz. The P-wave marker file is read into memory; the P-wave marker information is a 1-dimensional numerical sequence whose values are the offsets of the sampling points, and it needs to be converted into a continuous P-wave position mask. First, an array real_image is constructed with the same length as input_image but with all values equal to 0; then the P-wave marker sequence is traversed and, for each P-wave position point, the value of real_image is set to 1 within the sampling range from 0.055 second on the left to 0.055 second on the right. The array input_image then corresponds to the electrocardiogram and the array real_image corresponds to the P-wave marker file.
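As an illustration, this conversion from a sequence of P-wave offsets to a continuous position mask can be sketched as follows (a minimal NumPy sketch; the function name build_p_wave_mask, the variable names and the assumption that the offsets are sample indices at 360 Hz are ours, not the patent's code):

import numpy as np

FS = 360          # sampling rate after resampling (Hz)
HALF_WIN = 0.055  # half-width of the marked window, in seconds

def build_p_wave_mask(input_image, p_positions):
    # real_image: same length as input_image, all zeros
    real_image = np.zeros(len(input_image), dtype=np.float32)
    half = int(round(HALF_WIN * FS))          # 0.055 s on each side, about 20 samples
    for p in p_positions:                     # traverse the P-wave marker sequence
        left = max(0, int(p) - half)
        right = min(len(real_image), int(p) + half + 1)
        real_image[left:right] = 1.0          # mark the window around each P-wave position point
    return real_image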
Since the network model receives input tensors whose signal length is 2048 points, and the data length of an electrocardiogram is generally far greater than 2048 points, the data used for training need to be cropped. Random cropping is added in this embodiment to increase the robustness of the system, in particular the diversity of the cropping boundary conditions. The cropping process is as follows:
(1) input_image and real_image are converted into tensors of dimension [1, 1, n, 1], where n denotes the sampling length of the original electrocardiographic data.
(2) input_image and real_image are stacked in the first dimension, and the stacked result is recorded as stacked_image with dimension [2, 1, n, 1].
(3) The third dimension of stacked_image is randomly sampled over the range 0 to n-2048, and the 2048 points of data starting from the sampled point are intercepted and recorded as cropped_image, with dimension [2, 1, 2048, 1].
(4) cropped_image is split in the first dimension, and the split results are assigned back to input_image and real_image as the cropped result.
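The random-cropping steps (1)-(4) above can be sketched, for example, with TensorFlow-style tensor operations (the function random_crop_pair and its exact calls are an illustrative assumption, not the patent's code):

import tensorflow as tf

CROP_LEN = 2048  # signal length expected by the network

def random_crop_pair(input_image, real_image):
    n = len(input_image)
    # (1) convert each 1-D array into a [1, 1, n, 1] tensor
    x = tf.reshape(tf.convert_to_tensor(input_image, tf.float32), [1, 1, n, 1])
    y = tf.reshape(tf.convert_to_tensor(real_image, tf.float32), [1, 1, n, 1])
    # (2) stack along the first dimension -> stacked_image of shape [2, 1, n, 1]
    stacked_image = tf.concat([x, y], axis=0)
    # (3) sample a start offset in [0, n - CROP_LEN] and cut CROP_LEN points -> [2, 1, CROP_LEN, 1]
    start = tf.random.uniform([], minval=0, maxval=n - CROP_LEN + 1, dtype=tf.int32)
    cropped_image = stacked_image[:, :, start:start + CROP_LEN, :]
    # (4) split along the first dimension and reassign
    input_image, real_image = tf.split(cropped_image, num_or_size_splits=2, axis=0)
    return input_image, real_image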
In order to ensure uniformity of the data input to the model, this embodiment further normalizes the electrocardiogram data. The specific method is: the input_image values are converted to millivolt units and the mean over the third (length) dimension is subtracted, giving the normalized data.
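A minimal sketch of this normalization, assuming an illustrative device-specific scale factor adc_to_mv for converting raw units to millivolts:

def normalize(input_image, adc_to_mv=1.0):
    # scale to millivolts (adc_to_mv is device-specific and assumed here),
    # then subtract the mean along the third (length) dimension
    input_image = tf.cast(input_image, tf.float32) * adc_to_mv
    return input_image - tf.reduce_mean(input_image, axis=2, keepdims=True)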
S2: A generative adversarial network model is constructed, loss functions are set for the generator and the discriminator in the adversarial network model, and the generator and the discriminator are trained on the training set so that the loss function of the generator is minimized and the loss function of the discriminator is maximized.
The generative adversarial network is composed of a generator and a discriminator.
S21: a generator is constructed.
The generator is a deep neural network comprising coding units and decoding units; its structure is shown in fig. 3. It maps the input signal x (electrocardiogram data) to a probability sequence y representing the probability of a P wave occurring at each position, i.e. a P-wave position mask. In this embodiment a structure similar to U-Net is adopted, whose characteristic is that the decoding process does not lose detail information, which is particularly important for small-scale signals such as the P wave. Point-wise multiplication of the P-wave position mask with the original electrocardiogram yields the P-wave signal.
S211: a coding unit is defined.
The coding unit is a deep neural network layer that down-samples the electrocardiogram and extracts features; its construction method is defined as downsample(filters, size). The internal topology of an example coding unit is shown in fig. 4.
The input received in fig. 4 is a 4-dimensional tensor; Conv2D is a two-dimensional convolution layer, whose filters parameter indicates the number of convolution kernels, whose size parameter indicates the size of the convolution kernels, and whose strides parameter is the stride; BatchNormalization is a batch normalization layer; LeakyReLU is the activation function layer. These layers are supported by deep learning framework software, and their implementation details are not expanded here.
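For illustration, a coding unit built from the layers named above might look as follows in tf.keras; the kernel shape (1, size), the stride of 2 along the signal length and the optional batch-norm flag are assumptions consistent with the [batch_size, 1, 2048, 1] input format, not details given in the patent:

import tensorflow as tf

def downsample(filters, size, apply_batchnorm=True):
    # Conv2D -> BatchNormalization -> LeakyReLU; stride 2 along the signal length halves it
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2D(filters, (1, size), strides=(1, 2),
                                     padding='same', use_bias=False))
    if apply_batchnorm:
        block.add(tf.keras.layers.BatchNormalization())
    block.add(tf.keras.layers.LeakyReLU())
    return block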
S212: a decoding unit is defined.
The decoding unit performs up-sampling and image transformation on the encoded data; its construction method is defined as upsample(filters, size). The internal topology of an example decoding unit is shown in fig. 5.
In fig. 5, Conv2DTranspose is the transposed convolution layer and Dropout is a random dropout layer used to regularize the network.
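A corresponding decoding unit sketch under the same assumptions (the dropout rate of 0.5 and the final ReLU activation are illustrative choices):

def upsample(filters, size, apply_dropout=False):
    # Conv2DTranspose -> BatchNormalization -> (Dropout) -> ReLU; stride 2 doubles the signal length
    block = tf.keras.Sequential()
    block.add(tf.keras.layers.Conv2DTranspose(filters, (1, size), strides=(1, 2),
                                              padding='same', use_bias=False))
    block.add(tf.keras.layers.BatchNormalization())
    if apply_dropout:
        block.add(tf.keras.layers.Dropout(0.5))
    block.add(tf.keras.layers.ReLU())
    return block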
S213: a generator is constructed.
The input of the generator is a 4-dimensional tensor of format [batch_size, 1, 2048, 1]. The input tensor is encoded by the stack of coding units, then decoded by the stack of decoding units, and finally passed through a sigmoid activation to be converted into probability values.
The coding unit stack is configured as follows:
down_stack=[
downsample(64,4),
downsample(128,4),
downsample(256,4),
downsample(512,4),
downsample(512,4),
downsample(512,4),
downsample(512,4),
downsample(512,4),
]
the decoding unit stack is configured as follows:
up_stack=[
upsample(512,4),
upsample(512,4),
upsample(512,4),
upsample(512,4),
upsample(256,4),
upsample(128,4),
upsample(64,4),
]
the connection topology between the decoding unit and the encoding unit is as follows:
(1) the last layer of the coding unit is connected to the first layer of the decoding unit:
down_stack[7]->up_stack[0]
(2) the output of layer 7-i of the coding unit and the output of layer i-1 of the decoding unit are concatenated along the last tensor dimension and sent to layer i of the decoding unit: [down_stack[7-i], up_stack[i-1]] -> up_stack[i], for i in 1..7.
This kind of connection is called a skip connection.
The output of the last decoding unit is passed through a final up-sampling layer and a sigmoid activation, which maps the output values to the range 0-1 and thus realizes probability regression.
The complete model topology of the entire generator and the input-output dimensions of each layer are shown in fig. 6.
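A sketch of how the stacks and skip connections described above could be assembled follows (a tf.keras sketch under the same assumptions as the downsample/upsample sketches; the final transposed-convolution layer that restores the 2048-point length is our assumption, not the patent's verbatim model):

def build_generator():
    # input format [batch_size, 1, 2048, 1]
    inputs = tf.keras.layers.Input(shape=[1, 2048, 1])

    # stacked coding units (down_stack) and decoding units (up_stack) as configured above
    down_stack = [downsample(f, 4) for f in (64, 128, 256, 512, 512, 512, 512, 512)]
    up_stack = [upsample(f, 4) for f in (512, 512, 512, 512, 256, 128, 64)]

    # stacked encoding, remembering each intermediate output for the skip connections
    x = inputs
    skips = []
    for down in down_stack:
        x = down(x)
        skips.append(x)

    # stacked decoding: concatenate down_stack[7-i] with up_stack[i-1] and feed up_stack[i]
    for up, skip in zip(up_stack, reversed(skips[:-1])):
        x = up(x)
        x = tf.keras.layers.Concatenate()([x, skip])

    # final up-sampling back to 2048 points and sigmoid activation for probability regression
    last = tf.keras.layers.Conv2DTranspose(1, (1, 4), strides=(1, 2),
                                           padding='same', activation='sigmoid')
    return tf.keras.Model(inputs=inputs, outputs=last(x))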
S22: a discriminator is constructed.
The discriminator consists of a multi-layer convolutional neural network and a fully connected structure, as shown in fig. 7. Its input is a pair of signals X and Y, where X is electrocardiogram data and Y is P-wave marker data; its output is a probability value representing the probability that, given X, Y is "true" (manually labeled) P-wave marker data.
The input to the discriminator is two four-dimensional tensors of format [batch_size, 1, 2048, 1], corresponding to the electrocardiogram data and the P-wave marker data respectively. Point-wise multiplication of the two tensors yields the P-wave signal.
The P-wave marker data and the electrocardiogram data are concatenated along the last dimension and fused into one tensor. A series of feature-extraction and down-sampling operations are applied to this tensor to obtain an encoding vector. Finally, a convolution layer with a single convolution kernel performs regression, yielding the probability value that judges whether the P wave is true.
The topology of the discriminator model and the input-output dimensions of each layer are shown in fig. 8.
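For illustration, a discriminator consistent with the description above (concatenation along the last dimension, stacked down-sampling, a final single-kernel convolution) can be sketched as follows; the intermediate filter counts and padding layers are assumptions chosen so that the output format matches the [batch_size, 1, 254, 1] mentioned in step S23 below, and the final sigmoid reflects the stated probability output:

def build_discriminator():
    ecg = tf.keras.layers.Input(shape=[1, 2048, 1], name='input_image')       # electrocardiogram data
    mask = tf.keras.layers.Input(shape=[1, 2048, 1], name='target_image')     # P-wave marker data

    # concatenate along the last dimension and fuse into one tensor
    x = tf.keras.layers.Concatenate(axis=-1)([ecg, mask])
    # feature extraction and down-sampling (filter counts are illustrative): 2048 -> 256
    for filters in (64, 128, 256):
        x = downsample(filters, 4)(x)
    x = tf.keras.layers.ZeroPadding2D(padding=(0, 1))(x)                      # 256 -> 258
    x = tf.keras.layers.Conv2D(512, (1, 4), strides=1, use_bias=False)(x)     # 258 -> 255
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.ZeroPadding2D(padding=(0, 1))(x)                      # 255 -> 257
    # regression with a single convolution kernel -> probability map of format [batch_size, 1, 254, 1]
    prob = tf.keras.layers.Conv2D(1, (1, 4), strides=1, activation='sigmoid')(x)  # 257 -> 254
    return tf.keras.Model(inputs=[ecg, mask], outputs=prob)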
S23: a loss function is constructed.
(1) The loss function total_gen_loss of the generator is obtained by adding gan_loss and l1_loss.
gan_loss is the loss incurred when the discriminator does not classify the P-wave marker data produced by the generator as 1 (true); it is calculated as follows:
the output tensor of the discriminator is denoted disc_generated_output, with format [batch_size, 1, 254, 1]; a tensor of the same format as disc_generated_output with all values equal to 1 is constructed and denoted ones_like_disc_generated_output; then gan_loss = BinaryCrossentropy(disc_generated_output, ones_like_disc_generated_output), where BinaryCrossentropy denotes computation of the binary cross entropy.
l1_loss is the L1-norm (mean absolute) error between the P-wave marker data output by the generator and the real P-wave marker data; it is calculated as follows:
the output tensor of the generator is denoted gen_output and the real P-wave marker data are denoted target; then l1_loss = reduce_mean(|gen_output - target|), where reduce_mean denotes taking the mean over all dimensions.
The generator loss function is total_gen_loss = gan_loss + l1_loss.
The L1 regularization term is added so that the P wave generated by the generator resembles the real P wave in the image domain and so that training converges faster.
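As a sketch, the generator loss defined above can be written with tf.keras.losses.BinaryCrossentropy (note that the Keras loss object takes the target tensor first; the helper name generator_loss is ours, and the sigmoid output of the discriminator sketch means probabilities rather than logits are compared):

bce = tf.keras.losses.BinaryCrossentropy()   # binary cross entropy on probabilities

def generator_loss(disc_generated_output, gen_output, target):
    # gan_loss: the discriminator should have classified the generated mask as 1 (true)
    gan_loss = bce(tf.ones_like(disc_generated_output), disc_generated_output)
    # l1_loss: mean absolute error between generated and real P-wave marker data
    l1_loss = tf.reduce_mean(tf.abs(gen_output - target))
    total_gen_loss = gan_loss + l1_loss
    return total_gen_loss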
(2) The loss function total_disc_loss of the discriminator is obtained by adding real_loss and generated_loss.
real_loss is the loss incurred when the discriminator fails to classify the combination of the real P-wave marker data and the electrocardiogram data as true (1); it is calculated as follows:
real_loss = BinaryCrossentropy(disc_real_output, ones_like_disc_real_output), where disc_real_output is the output obtained after inputting the electrocardiogram data and the corresponding P-wave marker data into the discriminator, and ones_like_disc_real_output is a tensor of the same format as disc_real_output with all values equal to 1.
generated_loss is the loss incurred when the discriminator fails to classify the combination of the generator-produced P-wave marker data and the electrocardiogram data as false (0); it is calculated as follows:
generated_loss = BinaryCrossentropy(disc_generated_output, zeros_like_disc_generated_output), where disc_generated_output is the output obtained after inputting the electrocardiogram data and the P-wave marker data generated by the generator into the discriminator, and zeros_like_disc_generated_output is a tensor of the same format as disc_generated_output with all values equal to 0.
The total loss of the discriminator is total_disc_loss = real_loss + generated_loss.
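A matching sketch of the discriminator loss, reusing the bce object and helper-function style of the previous sketch:

def discriminator_loss(disc_real_output, disc_generated_output):
    # real combination should be classified as 1 (true)
    real_loss = bce(tf.ones_like(disc_real_output), disc_real_output)
    # generated combination should be classified as 0 (false)
    generated_loss = bce(tf.zeros_like(disc_generated_output), disc_generated_output)
    total_disc_loss = real_loss + generated_loss
    return total_disc_loss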
S24: The generator and the discriminator of the generative adversarial network model are iteratively trained on the training set so as to minimize the loss function of the generator and maximize the loss function of the discriminator.
The optimization goal of the generator is to minimize its loss function on the training set, i.e. to keep improving the generator parameters so that the P wave it generates is judged true by the discriminator; the optimization goal of the discriminator is to maximize the loss function, i.e. to keep improving the discriminator parameters so that it classifies the manually labeled (target) P wave as true and the generator's output as false.
During training, the generator and the discriminator are optimized iteratively and alternately. Although their optimization goals are completely opposed, a Nash equilibrium is finally reached, at which neither the generator nor the discriminator can be further improved, that is, the discriminator can no longer distinguish the P wave generated by the generator from the real P wave.
Each iteration specifically performs the following operations:
the tensor corresponding to the electrocardiogram data in the training set is denoted input_image, and the tensor corresponding to the P-wave marker data is denoted target;
input_image is fed into the generator, producing gen_output;
input_image and target are fed into the discriminator, producing disc_real_output;
input_image and gen_output are fed into the discriminator, producing disc_generated_output;
the generator loss gen_total_loss is calculated from disc_generated_output, gen_output and target;
the discriminator loss disc_total_loss is calculated from disc_real_output and disc_generated_output;
the gradient of gen_total_loss with respect to the generator parameters is calculated, and gradient descent is performed on the generator;
the gradient of disc_total_loss with respect to the discriminator parameters is calculated, and gradient descent is performed on the discriminator.
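The per-iteration operations listed above can be sketched as a single training step, reusing the model and loss sketches from earlier; the Adam optimizers and their hyperparameters are illustrative assumptions, since the patent only specifies gradient descent on the two losses:

generator = build_generator()
discriminator = build_discriminator()
# optimizers and learning rates are assumed, not specified in the patent
generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def train_step(input_image, target):
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        gen_output = generator(input_image, training=True)
        disc_real_output = discriminator([input_image, target], training=True)
        disc_generated_output = discriminator([input_image, gen_output], training=True)
        gen_total_loss = generator_loss(disc_generated_output, gen_output, target)
        disc_total_loss = discriminator_loss(disc_real_output, disc_generated_output)

    # gradients of each loss with respect to the corresponding parameters, then gradient descent
    gen_grads = gen_tape.gradient(gen_total_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_total_loss, discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(gen_grads, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_total_loss, disc_total_loss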
During the iterations, the loss curves of the generator and the discriminator are recorded; iteration is stopped when the loss curves no longer decrease appreciably, and the training of the network model is complete. The generator model can be serialized in h5 or json form for saving and loading.
S3: The P-wave marker data of the electrocardiogram data to be marked are generated by the trained generator, realizing P-wave identification of the electrocardiogram data.
Through the constructed generative adversarial network, the embodiment of the invention realizes identification of the electrocardiogram P wave and solves the problem of poor P-wave identification performance in the prior art.
Example two:
the invention also provides a P-wave identification terminal device based on adversarial learning, comprising a memory, a processor and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method embodiment described in the first embodiment above.
Further, as an executable solution, the adversarial-learning-based P-wave identification terminal device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The adversarial-learning-based P-wave identification terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the above structure is only an example of the adversarial-learning-based P-wave identification terminal device and does not constitute a limitation of it; the device may include more or fewer components than listed above, or combine certain components, or use different components; for example, it may further include input and output devices, network access devices, a bus, etc., which is not limited by the embodiments of the present invention.
Further, as an executable solution, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor is the control center of the adversarial-learning-based P-wave identification terminal device, and various interfaces and lines connect the parts of the entire device.
The memory can be used to store the computer program and/or modules, and the processor realizes the various functions of the adversarial-learning-based P-wave identification terminal device by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area can store the operating system and the application programs required for at least one function, and the data storage area can store data created according to use, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, memory, plug-in hard disk, smart media card (SMC), secure digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The invention also provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned method of an embodiment of the invention.
If the integrated modules/units of the adversarial-learning-based P-wave identification terminal device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and the like.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A P-wave identification method based on adversarial learning, characterized by comprising the following steps:
s1: acquiring electrocardiogram data, marking the P waves in the electrocardiogram data, and forming a training set from the electrocardiogram data and the corresponding P-wave marker data;
s2: constructing a generative adversarial network model, setting loss functions for the generator and the discriminator in the adversarial network model, and iteratively training the generator and the discriminator on the training set so as to minimize the loss function of the generator and maximize the loss function of the discriminator;
s3: generating, with the trained generator, the P-wave marker data of the electrocardiogram data to be marked, thereby realizing P-wave identification for the electrocardiogram data.
2. The adversarial-learning-based P-wave identification method of claim 1, wherein: step S1 further includes cropping the electrocardiogram data and the corresponding P-wave marker data, the cropping method comprising:
(1) letting the array input_image be the array corresponding to the electrocardiogram data and the array real_image be the array corresponding to the P-wave marker data, converting input_image and real_image into tensors of dimension [1, 1, n, 1], where n denotes the sampling length of the original electrocardiogram data;
(2) stacking input_image and real_image in the first dimension and recording the stacked result as stacked_image, whose dimension is [2, 1, n, 1];
(3) randomly sampling the third dimension of stacked_image over the range 0 to n-s, where s denotes the crop size, then intercepting the s points of data starting from the sampled point and recording them as cropped_image, whose dimension is [2, 1, s, 1];
(4) splitting cropped_image in the first dimension and assigning the split results back to input_image and real_image as the cropped result.
3. The adversarial-learning-based P-wave identification method of claim 1, wherein: step S1 further includes normalizing the electrocardiogram data and the corresponding P-wave marker data.
4. The adversarial-learning-based P-wave identification method of claim 1, wherein: the loss function total_gen_loss of the generator is:
total_gen_loss=gan_loss+l1_loss
gan_loss=BinaryCrossentropy(disc_generated_output,ones_like_disc_generated_output)
l1_loss=reduce_mean(|gen_output-target|)
wherein disc_generated_output denotes the output tensor of the discriminator, ones_like_disc_generated_output denotes a tensor of the same format as disc_generated_output with all values equal to 1, BinaryCrossentropy denotes computation of the binary cross entropy, gen_output denotes the output tensor of the generator, target denotes the real P-wave marker data corresponding to the generator input, reduce_mean denotes taking the mean over all dimensions, and gan_loss and l1_loss are both intermediate variables of the loss function.
5. The adversarial-learning-based P-wave identification method of claim 1, wherein: the loss function total_disc_loss of the discriminator is:
total_disc_loss=real_loss+generated_loss
real_loss=BinaryCrossentropy(disc_real_output,ones_like_disc_real_output)
generated_loss=BinaryCrossentropy(disc_generated_output,zeros_like_disc_generated_output)
wherein disc_real_output is the output obtained after inputting the electrocardiogram data and the corresponding P-wave marker data into the discriminator, ones_like_disc_real_output is a tensor of the same format as disc_real_output with all values equal to 1, BinaryCrossentropy denotes computation of the binary cross entropy, disc_generated_output is the output obtained after inputting the electrocardiogram data and the P-wave marker data generated from it by the generator into the discriminator, zeros_like_disc_generated_output is a tensor of the same format as disc_generated_output with all values equal to 0, and real_loss and generated_loss are both intermediate variables of the loss function.
6. A P-wave identification terminal device based on adversarial learning, characterized by comprising a processor, a memory and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
7. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN202010818810.3A 2020-08-14 2020-08-14 P wave identification method based on adversarial learning, terminal device and storage medium Pending CN112001481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010818810.3A CN112001481A (en) 2020-08-14 2020-08-14 P wave identification method based on adversarial learning, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010818810.3A CN112001481A (en) 2020-08-14 2020-08-14 P wave identification method based on adversarial learning, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN112001481A true CN112001481A (en) 2020-11-27

Family

ID=73473188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010818810.3A Pending CN112001481A (en) 2020-08-14 2020-08-14 P wave identification method based on adversarial learning, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN112001481A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019238560A1 (en) * 2018-06-12 2019-12-19 Tomtom Global Content B.V. Generative adversarial networks for image segmentation
CN109864737A (en) * 2019-04-02 2019-06-11 安徽心之声医疗科技有限公司 The recognition methods of P wave and system in a kind of electrocardiosignal
CN110558971A (en) * 2019-08-02 2019-12-13 苏州星空大海医疗科技有限公司 Method for generating countermeasure network electrocardiogram abnormity detection based on single target and multiple targets
CN111488911A (en) * 2020-03-15 2020-08-04 北京理工大学 Image entity extraction method based on Mask R-CNN and GAN

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210349718A1 (en) * 2020-05-08 2021-11-11 Black Sesame International Holding Limited Extensible multi-precision data pipeline for computing non-linear and arithmetic functions in artificial neural networks
US11687336B2 (en) * 2020-05-08 2023-06-27 Black Sesame Technologies Inc. Extensible multi-precision data pipeline for computing non-linear and arithmetic functions in artificial neural networks

Similar Documents

Publication Publication Date Title
Tang et al. Deep networks for robust visual recognition
CN109949255B (en) Image reconstruction method and device
Chen et al. Median filtering forensics based on convolutional neural networks
CN109165556B (en) Identity recognition method based on GRNN
WO2020108562A1 (en) Automatic tumor segmentation method and system in ct image
CN112132959B (en) Digital rock core image processing method and device, computer equipment and storage medium
CN111772619B (en) Heart beat identification method based on deep learning, terminal equipment and storage medium
CN110522440B (en) Electrocardiosignal recognition device based on grouping convolution neural network
CN113256592B (en) Training method, system and device of image feature extraction model
CN108764358A (en) A kind of Terahertz image-recognizing method, device, equipment and readable storage medium storing program for executing
CN110674824A (en) Finger vein segmentation method and device based on R2U-Net and storage medium
Elalfi et al. Artificial neural networks in medical images for diagnosis heart valve diseases
CN114707530A (en) Bimodal emotion recognition method and system based on multi-source signal and neural network
WO2021184195A1 (en) Medical image reconstruction method, and medical image reconstruction network training method and apparatus
CN111488810A (en) Face recognition method and device, terminal equipment and computer readable medium
CN110570394A (en) medical image segmentation method, device, equipment and storage medium
CN111968137A (en) Head CT image segmentation method and device, electronic device and storage medium
CN113838067A (en) Segmentation method and device of lung nodule, computing equipment and storable medium
CN114863225A (en) Image processing model training method, image processing model generation device, image processing equipment and image processing medium
CN114399510A (en) Skin lesion segmentation and classification method and system combining image and clinical metadata
CN112001481A (en) P wave identification method based on counterstudy, terminal equipment and storage medium
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN116543259A (en) Deep classification network noise label modeling and correcting method, system and storage medium
CN110613445A (en) DWNN framework-based electrocardiosignal identification method
CN114224354B (en) Arrhythmia classification method, arrhythmia classification device, and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination