EP4228510A1 - Intracardiac ECG noise detection and reduction

Intracardiac ECG noise detection and reduction

Info

Publication number
EP4228510A1
Authority
EP
European Patent Office
Prior art keywords
noise
ecg
signals
data
network
Prior art date
Legal status
Pending
Application number
EP21801993.3A
Other languages
German (de)
French (fr)
Inventor
Matityahu Amit
Stanislav Goldberg
Yariv Avraham Amos
Lior Botzer
Current Assignee
Biosense Webster Israel Ltd
Original Assignee
Biosense Webster Israel Ltd
Priority date
Filing date
Publication date
Application filed by Biosense Webster Israel Ltd filed Critical Biosense Webster Israel Ltd
Publication of EP4228510A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B 5/33 Heart-related electrical modalities, e.g. electrocardiography [ECG], specially adapted for cooperation with other devices
    • A61B 5/346 Analysis of electrocardiograms
    • A61B 5/367 Electrophysiological study [EPS], e.g. electrical activation mapping or electro-anatomical mapping
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B 2560/00 Constructional details of operational features of apparatus; Accessories for medical measuring apparatus
    • A61B 2560/02 Operational features
    • A61B 2560/0242 Operational features adapted to measure environmental factors, e.g. temperature, pollution
    • A61B 2560/0247 Operational features adapted to measure environmental factors, e.g. temperature, pollution, for compensation or correction of the measured physiological value

Definitions

  • the present invention is related to artificial intelligence and machine learning associated with intracardiac electrocardiogram (ECG) noise detection and reduction.
  • ECG signals and intracardiac ECG signals are often detected prior to and/or during a cardiac procedure.
  • ECG signals and intracardiac ECG signals can be used to identify potential locations of a heart where arrhythmia-causing signals originate.
  • an ECG or intracardiac ECG is a signal that describes the electrical activity of the heart.
  • ECG signals and intracardiac ECG signals may also be used to map portions of a heart.
  • Such interference may also result from the processing of areas of the signal with sharp changes, peaks, and/or pacing signals, including areas of high frequency and harmonics. Interference obscures the ECG and intracardiac ECG readings and degrades their accuracy. Therefore, a need exists to provide improved methods of identifying such features so that their effects may be removed from an electrical signal study, thereby allowing the electrical signals of the heart to be viewed.
  • FIG. 1 is a block diagram of an example system for remotely monitoring and communicating patient biometrics
  • FIG. 2 is a system diagram of an example of a computing environment in communication with network;
  • FIG. 3 is a block diagram of an example device in which one or more features of the disclosure can be implemented;
  • FIG. 4 illustrates a graphical depiction of an artificial intelligence system incorporating the example device of FIG. 3;
  • FIG. 5 illustrates a method performed in the artificial intelligence system of FIG. 4
  • FIG. 6 illustrates an example of the probabilities of a naive Bayes calculation
  • FIG. 7 illustrates an exemplary decision tree
  • FIG. 8 illustrates an exemplary random forest classifier
  • FIG. 9 illustrates an exemplary logistic regression
  • FIG. 10 illustrates an exemplary support vector machine
  • FIG. 11 illustrates an exemplary linear regression model
  • FIG. 12 illustrates an exemplary K-means clustering
  • FIG. 13 illustrates an exemplary ensemble learning algorithm
  • FIG. 14 illustrates an exemplary neural network
  • FIG. 15 illustrates a hardware based neural network
  • FIG. 16A illustrates an ECG signal that contains a P wave (due to atrial depolarization), a QRS complex (due to atrial repolarization and ventricular depolarization) and a T wave (due to ventricular repolarization);
  • FIG. 16B illustrates a frequency content of the baseline wander
  • FIG. 16C shows an ECG signal interfered by an EMG noise
  • FIG. 16D illustrates examples of power line noise
  • FIG. 16E illustrates a signal during ventricle activity including baseline wander
  • FIG. 16F illustrates the signal of FIG. 16E after baseline wander removal
  • FIG. 16G illustrates an example of high frequency noise and baseline wander for bipolar measurements
  • FIG. 17A is a diagram of an exemplary system in which one or more features of the disclosure subject matter can be implemented;
  • FIG. 17B illustrates an exemplary catheter placed in the right atria with bipolar intracardiac ECG signals
  • FIG. 18 is a depiction of an illustration of a lab
  • FIG. 19 illustrates signals and their respective frequencies that may be found within a specific lab
  • FIG. 20 illustrates a method for dealing with the described noise signals
  • FIG. 21 illustrates a method performed to denoise signals for a lab (A and B);
  • FIG. 22 illustrates contact noise examples recorded in a controlled aquarium environment
  • FIG. 23A illustrates deflection noise examples recorded in a controlled aquarium environment
  • FIG. 23B illustrates the deflection noise examples of FIG. 23A with an expanded x-axis to zoom in on features
  • FIG. 24 illustrates additional deflection noise examples
  • FIG. 25 illustrates a contact and deflection noise model
  • FIG. 26 illustrates a CNN inception model
  • FIG. 27 illustrates a second learning phase that may be implemented to capture the methods described herein.
  • FIG. 1 is a block diagram of an example system 100 for remotely monitoring and communicating patient biometrics (i.e., patient data).
  • the system 100 includes a patient biometric monitoring and processing apparatus 102 associated with a patient 104, a local computing device 106, a remote computing system 108, a first network 110 and a second network 120.
  • a monitoring and processing apparatus 102 may be an apparatus that is internal to the patient’s body (e.g., subcutaneously implantable).
  • the monitoring and processing apparatus 102 may be inserted into a patient via any applicable manner including orally injecting, surgical insertion via a vein or artery, an endoscopic procedure, or a laparoscopic procedure.
  • a monitoring and processing apparatus 102 may be an apparatus that is external to the patient.
  • the monitoring and processing apparatus 102 may include an attachable patch (e.g., that attaches to a patient’s skin).
  • the monitoring and processing apparatus 102 may also include a catheter with one or more electrodes, a probe, a blood pressure cuff, a weight scale, a bracelet or smart watch biometric tracker, a glucose monitor, a continuous positive airway pressure (CPAP) machine or virtually any device which may provide an input concerning the health or biometrics of the patient.
  • a monitoring and processing apparatus 102 may include both components that are internal to the patient and components that are external to the patient.
  • Example systems may, however, include a plurality of patient biometric monitoring and processing apparatuses.
  • a patient biometric monitoring and processing apparatus may be in communication with one or more other patient biometric monitoring and processing apparatuses. Additionally, or alternatively, a patient biometric monitoring and processing apparatus may be in communication with the network 110.
  • One or more monitoring and processing apparatuses 102 may acquire patient biometric data (e.g., electrical signals, blood pressure, temperature, blood glucose level or other biometric data) and receive at least a portion of the patient biometric data representing the acquired patient biometrics and additional information associated with the acquired patient biometrics from one or more other monitoring and processing apparatuses 102.
  • the additional information may be, for example, diagnosis information and/or additional information obtained from an additional device such as a wearable device.
  • Each monitoring and processing apparatus 102 may process data, including its own acquired patient biometrics as well as data received from one or more other monitoring and processing apparatuses 102.
  • network 110 is an example of a short-range network (e.g., local area network (LAN), or personal area network (PAN)).
  • Information may be sent, via short-range network 110, between the monitoring and processing apparatus 102 and local computing device 106 using any one of various short-range wireless communication protocols, such as Bluetooth, WiFi, Zigbee, Z-Wave, near field communications (NFC), ultraband, or infrared (IR).
  • Network 120 may be a wired network, a wireless network or include one or more wired and wireless networks.
  • a network 120 may be a long-range network (e.g., a wide area network (WAN), the Internet, or a cellular network).
  • Information may be sent, via network 120 using any one of various long-range wireless communication protocols (e.g., TCP/IP, HTTP, 3G, 4G/LTE, or 5G/New Radio).
  • the patient monitoring and processing apparatus 102 may include a patient biometric sensor 112, a processor 114, a user input (UI) sensor 116, a memory 118, and a transmitter-receiver (i.e., transceiver) 122.
  • the patient monitoring and processing apparatus 102 may continually or periodically monitor, store, process and communicate, via network 110, any number of various patient biometrics.
  • patient biometrics include electrical signals (e.g., ECG signals and brain biometrics), blood pressure data, blood glucose data and temperature data.
  • the patient biometrics may be monitored and communicated for treatment across any number of various diseases, such as cardiovascular diseases (e.g., arrhythmias, cardiomyopathy, and coronary artery disease) and autoimmune diseases (e.g., type I and type II diabetes).
  • Patient biometric sensor 112 may include, for example, one or more sensors configured to sense a type of patient biometric.
  • patient biometric sensor 112 may include an electrode configured to acquire electrical signals (e.g., heart signals, brain signals or other bioelectrical signals), a temperature sensor, a blood pressure sensor, a blood glucose sensor, a blood oxygen sensor, a pH sensor, an accelerometer and a microphone.
  • patient biometric monitoring and processing apparatus 102 may be an ECG monitor for monitoring ECG signals of a heart.
  • the patient biometric sensor 112 of the ECG monitor may include one or more electrodes for acquiring ECG signals.
  • the ECG signals may be used for treatment of various cardiovascular diseases.
  • the patient biometric monitoring and processing apparatus 102 may be a continuous glucose monitor (CGM) for monitoring blood glucose levels of a patient on a continual basis for treatment of various diseases, such as type I and type II diabetes.
  • the CGM may include a subcutaneously disposed electrode, which may monitor blood glucose levels from interstitial fluid of the patient.
  • the CGM may be, for example, a component of a closed-loop system in which the blood glucose data is sent to an insulin pump for calculated delivery of insulin without user intervention.
  • Transceiver 122 may include a separate transmitter and receiver. Alternatively, transceiver 122 may include a transmitter and receiver integrated into a single device.
  • Processor 114 may be configured to store patient data, such as patient biometric data in memory 118 acquired by patient biometric sensor 112, and communicate the patient data, across network 110, via a transmitter of transceiver 122. Data from one or more other monitoring and processing apparatus 102 may also be received by a receiver of transceiver 122, as described in more detail below.
  • the monitoring and processing apparatus 102 includes UI sensor 116 which may be, for example, a piezoelectric sensor or a capacitive sensor configured to receive a user input, such as a tapping or touching.
  • UI sensor 116 may be controlled to implement a capacitive coupling, in response to tapping or touching a surface of the monitoring and processing apparatus 102 by the patient 104.
  • Gesture recognition may be implemented via any one of various capacitive types, such as resistive capacitive, surface capacitive, projected capacitive, surface acoustic wave, piezoelectric and infra-red touching.
  • Capacitive sensors may be disposed at a small area or over a length of the surface such that the tapping or touching of the surface activates the monitoring device.
  • the processor 114 may be configured to respond selectively to different tapping patterns of the capacitive sensor (e.g., a single tap or a double tap), which may be the UI sensor 116, such that different tasks of the patch (e.g., acquisition, storing, or transmission of data) may be activated based on the detected pattern.
  • audible feedback may be given to the user from processing apparatus 102 when a gesture is detected.
  • the local computing device 106 of system 100 is in communication with the patient biometric monitoring and processing apparatus 102 and may be configured to act as a gateway to the remote computing system 108 through the second network 120.
  • the local computing device 106 may be, for example, a, smart phone, smartwatch, tablet or other portable smart device configured to communicate with other devices via network 120.
  • the local computing device 106 may be a stationary or standalone device, such as a stationary base station including, for example, modem and/or router capability, a desktop or laptop computer using an executable program to communicate information between the processing apparatus 102 and the remote computing system 108 via the PC's radio module, or a USB dongle.
  • Patient biometrics may be communicated between the local computing device 106 and the patient biometric monitoring and processing apparatus 102 using a short-range wireless technology standard (e.g., Bluetooth, WiFi, ZigBee, Z-wave and other short-range wireless standards) via the short-range wireless network 110, such as a local area network (LAN) (e.g., a personal area network (PAN)).
  • the local computing device 106 may also be configured to display the acquired patient electrical signals and information associated with the acquired patient electrical signals, as described in more detail below.
  • remote computing system 108 may be configured to receive at least one of the monitored patient biometrics and information associated with the monitored patient via network 120, which is a long-range network.
  • network 120 may be a wireless cellular network, and information may be communicated between the local computing device 106 and the remote computing system 108 via a wireless technology standard, such as any of the wireless technologies mentioned above.
  • the remote computing system 108 may be configured to provide (e.g., visually display and/or aurally provide) the at least one of the patient biometrics and the associated information to a healthcare professional (e.g., a physician).
  • FIG. 2 is a system diagram of an example of a computing environment 200 in communication with network 120.
  • the computing environment 200 is incorporated in a public cloud computing platform (such as Amazon Web Services or Microsoft Azure), a hybrid cloud computing platform (such as HP Enterprise OneSphere) or a private cloud computing platform.
  • computing environment 200 includes remote computing system 108 (hereinafter computer system), which is one example of a computing system upon which embodiments described herein may be implemented.
  • the remote computing system 108 may, via processors 220, which may include one or more processors, perform various functions. The functions may include analyzing monitored patient biometrics and the associated information and, according to physician-determined or algorithm-driven thresholds and parameters, providing (e.g., via display 266) alerts, additional information, or instructions. As described in more detail below, the remote computing system 108 may be used to provide (e.g., via display 266) healthcare personnel (e.g., a physician) with a dashboard of patient information, such that such information may enable healthcare personnel to identify and prioritize patients having more critical needs than others.
  • the computer system 210 may include a communication mechanism such as a bus 221 or other communication mechanism for communicating information within the computer system 210.
  • the computer system 210 further includes one or more processors 220 coupled with the bus 221 for processing the information.
  • the processors 220 may include one or more CPUs, GPUs, or any other processor known in the art.
  • the computer system 210 also includes a system memory 230 coupled to the bus 221.
  • the system memory 230 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only system memory (ROM) 231 and/or random-access memory (RAM) 232.
  • the system memory RAM 232 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the system memory ROM 231 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • the system memory 230 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 220.
  • a basic input/output system 233 containing routines to transfer information between elements within computer system 210, such as during start-up, may be stored in system memory ROM 231.
  • RAM 232 may comprise data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 220.
  • System memory 230 may additionally include, for example, operating system 234, application programs 235, other program modules 236 and program data 237.
  • the illustrated computer system 210 also includes a disk controller 240 coupled to the bus 221 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 241 and a removable media drive 242 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid-state drive).
  • the storage devices may be added to the computer system 210 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • the computer system 210 may also include a display controller 265 coupled to the bus 221 to control a monitor or display 266, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user.
  • the illustrated computer system 210 includes a user input interface 260 and one or more input devices, such as a keyboard 262 and a pointing device 261, for interacting with a computer user and providing information to the processor 220.
  • the pointing device 261 for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor 220 and for controlling cursor movement on the display 266.
  • the display 266 may provide a touch screen interface that may allow input to supplement or replace the communication of direction information and command selections by the pointing device 261 and/or keyboard 262.
  • the computer system 210 may perform a portion or each of the functions and methods described herein in response to the processors 220 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 230. Such instructions may be read into the system memory 230 from another computer readable medium, such as a hard disk 241 or a removable media drive 242.
  • the hard disk 241 may contain one or more data stores and data files used by embodiments described herein. Data store contents and data files may be encrypted to improve security.
  • the processors 220 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 230.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 210 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments described herein and for containing data structures, tables, records, or other data described herein.
  • the term computer readable medium as used herein refers to any non-transitory, tangible medium that participates in providing instructions to the processor 220 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media.
  • Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 241 or removable media drive 242.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 230.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 221. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • the computing environment 200 may further include the computer system 210 operating in a networked environment using logical connections to local computing device 106 and one or more other devices, such as a personal computer (laptop or desktop), mobile devices (e.g., patient mobile devices), a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 210.
  • computer system 210 may include modem 272 for establishing communications over a network 120, such as the Internet. Modem 272 may be connected to system bus 221 via network interface 270, or via another appropriate mechanism.
  • Network 120 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 210 and other computers (e.g., local computing device 106).
  • FIG. 3 is a block diagram of an example device 300 in which one or more features of the disclosure can be implemented.
  • the device 300 may be local computing device 106, for example.
  • the device 300 can include, for example, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer.
  • the device 300 includes a processor 302, a memory 304, a storage device 306, one or more input devices 308, and one or more output devices 310.
  • the device 300 can also optionally include an input driver 312 and an output driver 314. It is understood that the device 300 can include additional components not shown in FIG. 3 including an artificial intelligence accelerator.
  • the processor 302 includes a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core can be a CPU or a GPU.
  • the memory 304 is located on the same die as the processor 302, or is located separately from the processor 302.
  • the memory 304 includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.
  • the storage device 306 includes a fixed or removable storage means, for example, a hard disk drive, a solid-state drive, an optical disk, or a flash drive.
  • the input devices 308 include, without limitation, a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).
  • the output devices 310 include, without limitation, a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).
  • the input driver 312 communicates with the processor 302 and the input devices 308, and permits the processor 302 to receive input from the input devices 308.
  • the output driver 314 communicates with the processor 302 and the output devices 310, and permits the processor 302 to send output to the output devices 310. It is noted that the input driver 312 and the output driver 314 are optional components, and that the device 300 will operate in the same manner if the input driver 312 and the output driver 314 are not present.
  • the output driver 314 includes an accelerated processing device (“APD”) 316 which is coupled to a display device 318.
  • the APD accepts compute commands and graphics rendering commands from processor 302, processes those compute and graphics rendering commands, and provides pixel output to display device 318 for display.
  • the APD 316 includes one or more parallel processing units to perform computations in accordance with a single-instruction-multiple-data (“SIMD”) paradigm.
  • the functionality described as being performed by the APD 316 is additionally or alternatively performed by other computing devices having similar capabilities that are not driven by a host processor (e.g., processor 302) and that provide graphical output to a display device 318.
  • any processing system that performs processing tasks in accordance with a SIMD paradigm may perform the functionality described herein.
  • computing systems that do not perform processing tasks in accordance with a SIMD paradigm may also perform the functionality described herein.
  • FIG. 4 illustrates a graphical depiction of an artificial intelligence system 400 incorporating the example device of FIG. 3.
  • System 400 includes data 410, a machine 420, a model 430, a plurality of outcomes 440 and underlying hardware 450.
  • System 400 operates by using the data 410 to train the machine 420 while building a model 430 to enable a plurality of outcomes 440 to be predicted.
  • the system 400 may operate with respect to hardware 450.
  • the data 410 may be related to hardware 450 and may originate with apparatus 102, for example.
  • the data 410 may be on-going data, or output data associated with hardware 450.
  • the machine 420 may operate as, or be associated with, the controller or data collection mechanism for the hardware 450.
  • the model 430 may be configured to model the operation of hardware 450 and model the data 410 collected from hardware 450 in order to predict the outcome achieved by hardware 450.
  • hardware 450 may be configured to provide a certain desired outcome 440 from hardware 450.
  • FIG. 5 illustrates a method 500 performed in the artificial intelligence system of FIG. 4.
  • Method 500 includes collecting data from the hardware at step 510.
  • This data may include currently collected, historical or other data from the hardware.
  • this data may include measurements during a surgical procedure and may be associated with the outcome of the procedure.
  • the temperature of a heart may be collected and correlated with the outcome of a heart procedure.
  • method 500 includes training a machine on the hardware.
  • the training may include an analysis and correlation of the data collected in step 510.
  • the data of temperature and outcome may be trained to determine if a correlation or link exists between the temperature of the heart during the procedure and the outcome.
  • method 500 includes building a model on the data associated with the hardware. Building a model may include physical hardware or software modeling, algorithmic modeling, and the like, as will be described below. This modeling may seek to represent the data that has been collected and trained.
  • method 500 includes predicting the outcomes of the model associated with the hardware. This prediction of the outcome may be based on the trained model. For example, in the case of the heart, if a heart temperature between 97.7 and 100.2 degrees during the procedure produces a positive result, the outcome of a given procedure can be predicted based on the temperature of the heart during the procedure. While this model is rudimentary, it is provided for exemplary purposes and to increase understanding of the present invention.
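  • A minimal sketch of this rudimentary model, assuming the temperature is given in degrees Fahrenheit and that values outside the quoted range simply do not predict a positive result:

```python
def predict_outcome(heart_temperature_f: float) -> str:
    """Toy predictor for the example above: heart temperatures observed between
    97.7 and 100.2 degrees F during the procedure are associated with a positive outcome."""
    if 97.7 <= heart_temperature_f <= 100.2:
        return "positive"
    return "uncertain"   # outside the observed range, no positive outcome is predicted
```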
  • the present system and method operate to train the machine, build the model, and predict outcomes using algorithms. These algorithms may be used to solve the trained model and predict outcomes associated with the hardware. These algorithms may be divided generally into classification, regression, and clustering algorithms. For example, a classification algorithm is used in the situation where the dependent variable, which is the variable being predicted, is divided into classes, and the algorithm predicts a class, the dependent variable, for a given input. Thus, a classification algorithm is used to predict an outcome from a set number of fixed, predefined outcomes.
  • a classification algorithm may include naive Bayes algorithms, decision trees, random forest classifiers, logistic regressions, support vector machines and k nearest neighbors.
  • a naive Bayes algorithm follows the Bayes theorem, and follows a probabilistic approach. As would be understood, other probabilistic-based algorithms may also be used, and generally operate using similar probabilistic principles to those described below for the exemplary naive Bayes algorithm.
  • FIG. 6 illustrates an example of the probabilities of a naive Bayes calculation.
  • This naive Bayes algorithm, and Bayes algorithms generally, may be useful when needing to predict whether an input belongs to a given list of n classes or not.
  • the probabilistic approach may be used because the probabilities for all the n classes will be quite low.
  • Consider, for example, predicting whether a person plays golf, which depends on factors including the weather outside, shown in a first data set 610.
  • the first data set 610 illustrates the weather in a first column and an outcome of playing associated with that weather in a second column.
  • In the frequency table 620, the frequencies with which certain events occur are generated.
  • the frequency of a person playing or not playing golf in each of the weather conditions is determined. From there, a likelihood table is compiled to generate initial probabilities. For example, the probability of the weather being overcast is 0.29 while the general probability of playing is 0.64.
  • the posterior probabilities may be generated from the likelihood table 630. These posterior probabilities may be configured to answer questions about weather conditions and whether golf is played in those weather conditions. For example, the probability of golf being played given that it is sunny outside may be set forth by the Bayesian formula P(playing | sunny) = P(sunny | playing) x P(playing) / P(sunny).
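  • A minimal sketch of this calculation, assuming the frequencies of a standard 14-sample golf data set (consistent with the probabilities 0.29 and 0.64 quoted above):

```python
# Assumed frequencies from an illustrative 14-sample golf data set
p_sunny = 5 / 14            # P(weather = sunny); P(overcast) = 4/14, approx. 0.29 as quoted above
p_play = 9 / 14             # P(play = yes), approx. 0.64 as quoted above
p_sunny_given_play = 3 / 9  # P(sunny | play = yes)

# Bayes' rule: P(play | sunny) = P(sunny | play) * P(play) / P(sunny)
p_play_given_sunny = p_sunny_given_play * p_play / p_sunny
print(f"P(play | sunny) = {p_play_given_sunny:.2f}")   # 0.60
```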
  • a decision tree is a flowchart-like tree structure where each internal node denotes a test on an attribute and each branch represents the outcome of that test.
  • the leaf nodes contain the actual predicted labels.
  • the decision tree begins from the root of the tree with attribute values being compared until a leaf node is reached.
  • a decision tree can be used as a classifier when handling high dimensional data and when little time has been spent on data preparation.
  • Decision trees may take the form of a simple decision tree, a linear decision tree, an algebraic decision tree, a deterministic decision tree, a randomized decision tree, a nondeterministic decision tree, and a quantum decision tree.
  • An exemplary decision tree is provided below in FIG. 7.
  • FIG. 7 illustrates a decision tree, along the same structure as the Bayes example above, in deciding whether to play golf.
  • the first node 710 examines the weather providing sunny 712, overcast 714, and rain 716 as the choices to progress down the decision tree. If the weather is sunny, the leg of the tree is followed to a second node 720 examining the temperature. The temperature at node 720 may be high 722 or normal 724, in this example. If the temperature at node 720 is high 722, then the predicted outcome of “No” 723 golf occurs. If the temperature at node 720 is normal 724, then the predicted outcome of “Yes” 725 golf occurs.
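  • A sketch of the tree of FIG. 7 as plain conditional logic; only the sunny branch is spelled out above, so the outcomes for the overcast and rain branches are assumptions for illustration:

```python
def predict_golf(weather: str, temperature: str) -> str:
    """Mirror of FIG. 7: node 710 tests the weather, node 720 tests the temperature."""
    if weather == "sunny":
        # node 720: high temperature -> "No" (723), normal temperature -> "Yes" (725)
        return "No" if temperature == "high" else "Yes"
    if weather == "overcast":
        return "Yes"   # assumed outcome for the overcast branch 714
    return "No"        # assumed outcome for the rain branch 716
```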
  • a random forest classifier is a committee of decision trees, where each decision tree has been fed a subset of the attributes of the data and predicts on the basis of that subset. The mode of the actual predicted values of the decision trees is considered to provide the ultimate random forest answer.
  • the random forest classifier generally alleviates the overfitting that is present in a standalone decision tree, leading to a much more robust and accurate classifier.
  • FIG. 8 illustrates an exemplary random forest classifier for classifying the color of a garment.
  • the random forest classifier includes five decision trees 810-1, 810-2, 810-3, 810-4, and 810-5 (collectively or generally referred to as decision trees 810).
  • Each of the trees is designed to classify the color of the garment.
  • a discussion of each of the trees and decisions made is not provided, as each individual tree generally operates as the decision tree of FIG. 7.
  • three of the five trees (810-1, 810-2, 810-4) determine that the garment is blue, while one determines the garment is green (810-3) and the remaining tree determines the garment is red (810-5).
  • the random forest takes these actual predicted values of the five trees and calculates the mode of the actual predicted values to provide the random forest answer that the garment is blue.
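  • The mode-of-votes step can be sketched as follows; the five votes are taken directly from the garment example above:

```python
from collections import Counter

# Predictions of the five decision trees 810-1 through 810-5 in FIG. 8
tree_votes = ["blue", "blue", "green", "blue", "red"]

# The random forest answer is the mode of the individual tree predictions
forest_answer, count = Counter(tree_votes).most_common(1)[0]
print(forest_answer, count)   # "blue", 3
```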
  • Logistic Regression is another algorithm for binary classification tasks. Logistic regression is based on the logistic function, also called the sigmoid function. This S-shaped curve can take any real-valued number and map it between 0 and 1 asymptotically approaching those limits.
  • the logistic model may be used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead or healthy/sick. This can be extended to model several classes of events such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1 with the sum of the probabilities adding to one.
  • the log-odds (the logarithm of the odds) for the value labeled "1" is a linear combination of one or more independent variables ("predictors"); the independent variables can each be a binary variable (two classes, coded by an indicator variable) or a continuous variable (any real value).
  • the corresponding probability of the value labeled "1" can vary between 0 (certainly the value "0") and 1 (certainly the value "1"), hence the labeling; the function that converts log-odds to probability is the logistic function, hence the name.
  • the unit of measurement for the log-odds scale is called a logit, from logistic unit, hence the alternative names.
  • Analogous models with a different sigmoid function instead of the logistic function can also be used, such as the probit model; the defining characteristic of the logistic model is that increasing one of the independent variables multiplicatively scales the odds of the given outcome at a constant rate, with each independent variable having its own parameter; for a binary dependent variable this generalizes the odds ratio.
  • the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model).
  • the logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier.
  • FIG. 9 illustrates an exemplary logistic regression.
  • This exemplary logistic regression enables the prediction of an outcome based on a set of variables. For example, based on a person’s grade point average, and outcome of being accepted to a school may be predicted. The past history of grade point averages and the relationship with acceptance enables the prediction to occur.
  • the logistic regression of FIG. 9 enables the analysis of the grade point average variable 920 to predict the outcome 910 defined by 0 to 1. At the low end 930 of the S-shaped curve, the grade point average 920 predicts an outcome 910 of not being accepted. At the high end 940 of the S-shaped curve, the grade point average 920 predicts an outcome 910 of being accepted. Logistic regression may be used to predict house values, customer lifetime value in the insurance sector, etc.
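  • A minimal sketch of the admission example with illustrative (not fitted) coefficients; a real model would estimate the coefficients from the past history of grade point averages and acceptance outcomes:

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function mapping any real value into the 0-1 range."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative coefficients for log-odds = b0 + b1 * GPA (assumptions, not fitted values)
b0, b1 = -10.0, 3.0

gpa = np.array([2.0, 3.0, 3.9])
p_accept = sigmoid(b0 + b1 * gpa)        # probability of acceptance, between 0 and 1
accepted = (p_accept > 0.5).astype(int)  # a cutoff at 0.5 turns the probability into a class
```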
  • a support vector machine may be used to sort the data with the margins between two classes as far apart as possible. This is called maximum margin separation.
  • the SVM may account for the support vectors while plotting the hyperplane, unlike linear regression which uses the entire dataset for that purpose.
  • FIG. 10 illustrates an exemplary support vector machine.
  • data may be classified into two different classes represented as squares 1010 and triangles 1020.
  • SVM 1000 operates by drawing a random hyperplane 1030. This hyperplane 1030 is monitored by comparing the distance (illustrated with lines 1040) between the hyperplane 1030 and the closest data points 1050 from each class. The closest data points 1050 to the hyperplane 1030 are known as support vectors.
  • the hyperplane 1030 is drawn based on these support vectors 1050 and an optimum hyperplane has a maximum distance from each of the support vectors 1050. The distance between the hyperplane 1030 and the support vectors 1050 is known as the margin.
  • SVM 1000 may be used to classify data by using a hyperplane 1030, such that the distance between the hyperplane 1030 and the support vectors 1050 is maximum. Such an SVM 1000 may be used to predict heart disease, for example.
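  • A short sketch using scikit-learn's SVC with a linear kernel on two illustrative classes (the squares 1010 and triangles 1020 of FIG. 10); the data points are assumptions made up for the example:

```python
import numpy as np
from sklearn.svm import SVC

# Two toy classes: 0 for the "squares" 1010, 1 for the "triangles" 1020
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.0, 1.0],
              [6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")   # linear kernel: maximum-margin hyperplane 1030
clf.fit(X, y)

print(clf.support_vectors_)       # the support vectors 1050 that define the margin
print(clf.predict([[3.0, 3.0]]))  # classify a new point relative to the hyperplane
```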
  • K Nearest Neighbors refers to a set of algorithms that generally do not make assumptions on the underlying data distribution, and perform a reasonably short training phase. Generally, KNN uses many data points separated into several classes to predict the classification of a new sample point. Operationally, KNN specifies an integer N with a new sample. The N entries in the model of the system closest to the new sample are selected. The most common classification of these entries is determined, and that classification is assigned to the new sample. KNN generally requires the storage space to increase as the training set increases. This also means that the estimation time increases in proportion to the number of training points.
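  • A compact sketch of the KNN procedure described above (distance computation, selection of the N closest entries, majority vote); the Euclidean distance metric is an assumption, since the text does not fix one:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, sample, n_neighbors=3):
    """Return the most common class among the n_neighbors training points closest to sample."""
    distances = np.linalg.norm(np.asarray(train_X) - np.asarray(sample), axis=1)
    nearest = np.argsort(distances)[:n_neighbors]          # indices of the N closest entries
    return Counter(np.asarray(train_y)[nearest]).most_common(1)[0][0]
```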
  • In regression, the output is a continuous quantity, so regression algorithms may be used in cases where the target variable is a continuous variable.
  • FIG. 11 illustrates an exemplary linear regression model.
  • a predicted variable 1110 is modeled against a measured variable 1120.
  • a cluster of instances of the predicted variable 1110 and measured variable 1120 are plotted as data points 1130.
  • Data points 1130 are then fit with the best fit line 1140.
  • the best fit line 1140 is then used in subsequent predictions: given a measured variable 1120, the line 1140 is used to predict the predicted variable 1110 for that instance.
  • Linear regression may be used to model and predict in a financial portfolio, in salary forecasting, in real estate, and in traffic for arriving at an estimated time of arrival.
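  • The best fit line 1140 and a subsequent prediction can be sketched with a simple least-squares fit; the data points below are illustrative only:

```python
import numpy as np

measured = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # measured variable 1120
observed = np.array([1.1, 1.9, 3.2, 3.9, 5.1])   # corresponding values of the predicted variable 1110

slope, intercept = np.polyfit(measured, observed, deg=1)   # the best fit line 1140

new_measurement = 6.0
prediction = slope * new_measurement + intercept   # use the line to predict for a new instance
```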
  • Clustering algorithms may also be used to model and train on a data set.
  • the input is assigned into two or more clusters based on feature similarity.
  • Clustering algorithms generally learn the patterns and useful insights from data without any guidance. For example, clustering viewers into similar groups based on their interests, age, geography, etc. may be performed using unsupervised learning algorithms like K-means clustering.
  • K-means clustering generally is regarded as a simple unsupervised learning approach. In K-means clustering similar data points may be gathered together and bound in the form of a cluster. One method for binding the data points together is by calculating the centroid of the group of data points. In determining effective clusters, in K-means clustering the distance between each point from the centroid of the cluster is evaluated. Depending on the distance between the data point and the centroid, the data is assigned to the closest cluster. The goal of clustering is to determine the intrinsic grouping in a set of unlabeled data.
  • the ‘K’ in K-means stands for the number of clusters formed. The number of clusters (basically the number of classes in which new instances of data may be classified) may be determined by the user. This determination may be performed using feedback and viewing the size of the clusters during training, for example.
  • K-means is used mainly in cases where the data set has points which are distinct and well separated; otherwise, if the clusters are not separated, the modeling may render the clusters inaccurate. Also, K-means may be avoided in cases where the data set contains a high number of outliers or the data set is non-linear.
  • FIG. 12 illustrates a K-means clustering.
  • Once new cluster centroids are formed, an iteration, or series of iterations, may occur to enable the clusters to be minimized in size and the optimal centroid determined. Then, as new data points are measured, the new data points may be compared with the centroids and assigned to the cluster whose centroid is closest.
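  • A short sketch using scikit-learn's KMeans; the two-dimensional points and the choice of K = 2 are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 0.5],
              [8.0, 8.0], [8.5, 9.0], [9.0, 8.0]])   # illustrative data points

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)   # 'K' = 2 clusters

print(kmeans.cluster_centers_)                   # centroids after the iterations described above
print(kmeans.predict([[0.8, 1.2], [9.0, 9.0]]))  # new points are assigned to the closest centroid
```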
  • Ensemble learning algorithms may be used. These algorithms use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Ensemble learning algorithms perform the task of searching through a hypothesis space to find a suitable hypothesis that will make good predictions for a particular problem. Even if the hypothesis space contains hypotheses that are very well-suited for a particular problem, it may be very difficult to find a good hypothesis. Ensemble algorithms combine multiple hypotheses to form a better hypothesis. The term ensemble is usually reserved for methods that generate multiple hypotheses using the same base learner. The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner.
  • An ensemble is itself a supervised learning algorithm because it can be trained and then used to make predictions.
  • the trained ensemble therefore, represents a single hypothesis.
  • This hypothesis is not necessarily contained within the hypothesis space of the models from which it is built.
  • ensembles can be shown to have more flexibility in the functions they can represent. This flexibility can, in theory, enable them to over-fit the training data more than a single model would, but in practice, some ensemble techniques (especially bagging) tend to reduce problems related to over-fitting of the training data.
  • Some common types of ensembles include Bayes optimal classifier, bootstrap aggregating (bagging), boosting, Bayesian model averaging, Bayesian model combination, bucket of models and stacking.
  • FIG. 13 illustrates an exemplary ensemble learning algorithm where bagging is being performed in parallel 1310 and boosting is being performed sequentially 1320.
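  • A sketch of the two ensemble styles of FIG. 13 using scikit-learn; both default to decision-tree base learners, and the estimator counts are illustrative assumptions:

```python
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier

# Bagging trains its base learners on bootstrap samples independently (in parallel, as in 1310);
# boosting trains them sequentially, each one focusing on the errors of the previous one (1320).
bagging = BaggingClassifier(n_estimators=10)
boosting = AdaBoostClassifier(n_estimators=10)

# Either ensemble would then be fit and used like a single classifier, e.g.:
#   bagging.fit(X_train, y_train); bagging.predict(X_test)
```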
  • a neural network is a network or circuit of neurons, or, in a modern sense, an artificial neural network composed of artificial neurons or nodes.
  • the connections of the biological neuron are modeled as weights.
  • a positive weight reflects an excitatory connection, while negative values mean inhibitory connections.
  • Inputs are modified by a weight and summed using a linear combination.
  • An activation function may control the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be -1 and 1.
  • These artificial networks may be used for predictive modeling, adaptive control and applications and can be trained via a dataset. Self-learning resulting from experience can occur within networks, which can derive conclusions from a complex and seemingly unrelated set of information.
  • a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons.
  • a single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible.
  • Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.
  • a neural network in the case of artificial neurons called artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionistic approach to computation.
  • an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
  • neural networks are non-linear statistical data modeling or decision-making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
  • An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.
  • One classical type of artificial neural network is the recurrent Hopfield network.
  • the utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it.
  • Unsupervised neural networks can also be used to learn representations of the input that capture the salient characteristics of the input distribution, and more recently, deep learning algorithms, which can implicitly learn the distribution function of the observed data. Learning in neural networks is particularly useful in applications where the complexity of the data or task makes the design of such functions by hand impractical.
  • Neural networks can be used in different fields. The tasks to which artificial neural networks are applied tend to fall within the following broad categories: function approximation, or regression analysis, including time series prediction and modeling; classification, including pattern and sequence recognition, novelty detection and sequential decision making; and data processing, including filtering, clustering, blind signal separation and compression.
  • Application areas of ANNs include nonlinear system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering. For example, it is possible to create a semantic profile of user's interests emerging from pictures trained for object recognition.
  • FIG. 14 illustrates an exemplary neural network.
  • In the neural network, there is an input layer represented by a plurality of inputs, such as 1410-1 and 1410-2.
  • the inputs 1410-1, 1410-2 are provided to a hidden layer depicted as including nodes 1420-1, 1420-2, 1420-3, 1420-4. These nodes 1420-1, 1420-2, 1420-3, 1420-4 are combined to produce an output 1430 in an output layer.
  • the neural network performs simple processing via the hidden layer of simple processing elements, nodes 1420-1, 1420-2, 1420-3, 1420-4, which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.
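  • A minimal NumPy forward pass matching the topology of FIG. 14 (two inputs, four hidden nodes, one output); the weights are random placeholders rather than trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # activation keeping the output between 0 and 1

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])             # inputs 1410-1 and 1410-2

W1 = rng.normal(size=(4, 2))          # weights into hidden nodes 1420-1 .. 1420-4
b1 = np.zeros(4)
W2 = rng.normal(size=(1, 4))          # weights into the output 1430
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)         # inputs are weighted, summed and passed through the activation
output = sigmoid(W2 @ hidden + b2)    # single output in the output layer
```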
  • the neural network of FIG. 14 may be implemented in hardware. As illustrated in FIG. 15 a hardware based neural network is depicted.
  • ECG signals are often detected prior to and/or during a cardiac procedure.
  • ECG signals can be used to identify potential locations of a heart where arrhythmia causing signals originate from.
  • an ECG is a signal that describes the electrical activity of the heart.
  • ECG signals may also be used to map portions of a heart.
  • when physicians use an ECG to study heart activity, an accounting for the interference needs to occur in order to isolate the electrical signals from the heart.
  • Such interference may also result from the processing of areas of the signal with sharp changes, peaks, and/or pacing signals, including areas of high frequency and harmonics. Interference obscures the ECG readings and degrades their accuracy. Therefore, a need exists to provide improved methods of identifying such features so that their effects may be removed from an electrical signal study, thereby allowing the electrical signals of the heart to be viewed.
  • An ECG signal is generated by contraction (depolarization) and relaxation (repolarization) of atrial and ventricular muscles of the heart. As shown by signal 1602 in FIG. 16A, an ECG signal contains a P wave (due to atrial depolarization), a QRS complex (due to atrial repolarization and ventricular depolarization) and a T wave (due to ventricular repolarization).
  • electrodes can be placed at specific positions on the human body or can be positioned within a human body via a catheter.
  • Artifacts (e.g., noise) in electrical signals can include baseline wander, powerline interference, electromyogram (EMG) noise, etc. These noise signals may include site base noise and other additive noise.
  • Baseline wander or baseline drift occurs where the base axis (x-axis) of a signal appears to ‘wander’ or move up and down rather than be straight. This may cause the entire signal to shift from its normal base.
  • the baseline wander is caused by improper electrode contact (e.g., electrode-skin impedance), patient movement, and cyclical movement (e.g., respiration).
  • FIG. 16B shows a typical ECG signal 1612 affected by baseline wander. As shown in the example of FIG. 16B, the frequency content of the baseline wander is in the range of 0.5 Hz. However, increased movement of the body during exercise or a stress test increases the frequency content of the baseline wander.
  • a Finite Impulse Response (FIR) high-pass, zero-phase, forward-backward filter with a cut-off frequency of 0.5 Hz can be used to estimate and remove the baseline in the ECG signal.
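  • A sketch of such a filter with SciPy; the sampling rate and filter length are assumptions, and filtfilt provides the zero-phase forward-backward application described above:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 1000.0      # assumed sampling rate of the ECG recording, in Hz
cutoff = 0.5     # 0.5 Hz cut-off frequency, as described above
numtaps = 1001   # assumed FIR length (odd, so the high-pass design is valid)

taps = firwin(numtaps, cutoff, fs=fs, pass_zero=False)   # high-pass FIR filter

def remove_baseline_wander(ecg: np.ndarray) -> np.ndarray:
    """Apply the FIR high-pass forward and backward (zero phase) to remove baseline wander.
    The ECG segment should be at least a few seconds long at this rate for filtfilt's default padding."""
    return filtfilt(taps, 1.0, ecg)
```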
  • Electromagnetic fields caused by a powerline represent a common noise source in electronic signals such as ECGs, as well as in any other bioelectrical signal recorded from a patient’s body.
  • Such noise is characterized by, for example, 50 or 60 Hz sinusoidal interference, possibly accompanied by a number of harmonics.
  • Such narrowband noise renders the analysis and interpretation of the ECG more difficult since the delineation of low-amplitude waveforms becomes unreliable and spurious waveforms may be introduced. It may be necessary to remove powerline interference from ECG signals as it superimposes the low frequency ECG waves like P wave and T wave.
  • muscle noise can interfere in many electrical signal applications, such as ECG applications, as low amplitude waveforms can become obscured. Muscle noise is, in contrast to baseline wander and 50/60 Hz interference, not removed by narrowband filtering, but presents a different filtering problem as the spectral content of muscle activity considerably overlaps that of the PQRST complex. As an ECG signal is a repetitive signal, techniques can be used to reduce muscle noise in a manner similar to the processing of evoked potentials.
  • FIG. 16C shows an ECG signal 1630 interfered by an EMG noise 1632.
  • Instruments for measuring electrical signals such as ECG signals often detect electrical interference corresponding to a line, or mains, frequency. Line frequencies in most countries, though nominally set at 50 Hz or 60 Hz, may vary by several percent from these nominal values.
  • Various techniques for removing electrical interference from electrical signals can be implemented. Several of these techniques use one or more low-pass or notch filters. For example, a system for variable filtering of noise in ECG signals may be implemented. The system may have a plurality of low-pass filters including, for example, one filter with a 3 dB point at approximately 50 Hz and a second low-pass filter with a 3 dB point at approximately 5 Hz.
  • a system for rejecting a line frequency component of an electronic signal may be implemented by passing the signal through two serially linked notch filters.
  • a system with a notch filter that may have either or both low-pass and high-pass coefficients for removing line frequency components from an ECG signal may be implemented.
  • the system may also support removal of burst noise and calculate a heart rate from the notch filter output.
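  • As an illustrative, non-limiting sketch (not part of the original disclosure), line-frequency interference and a few of its harmonics could be suppressed with zero-phase notch filters as below, assuming SciPy is available; the line frequency, quality factor, and harmonic count are illustrative assumptions:

    # Sketch: zero-phase IIR notch filtering at the mains frequency and harmonics.
    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    def remove_powerline(signal, fs, line_hz=50.0, n_harmonics=3, q=30.0):
        out = np.asarray(signal, dtype=float)
        for k in range(1, n_harmonics + 1):
            f0 = k * line_hz
            if f0 >= fs / 2:           # skip harmonics at or beyond the Nyquist frequency
                break
            b, a = iirnotch(f0, q, fs=fs)
            out = filtfilt(b, a, out)  # forward-backward to avoid phase distortion
        return out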
  • a system with several units for removing interference may be implemented.
  • the units may include a mean value unit to generate an average signal over several cardiac cycles, a subtracting unit to subtract the average signal from the input signal to generate a residual signal, a filter unit to provide a filtered signal from the residual signal, and/or an addition unit to add the filtered signal to the average signal.
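  • An illustrative, non-limiting sketch (not part of the original disclosure) of such an averaging arrangement is given below in Python/NumPy; it assumes the cardiac cycles have already been detected, time-aligned, and cut to equal length, and the low-pass cutoff is an illustrative assumption:

    # Sketch: average beat ("mean value unit"), residual ("subtracting unit"),
    # low-pass filtered residual ("filter unit"), and restored output ("addition unit").
    import numpy as np
    from scipy.signal import butter, filtfilt

    def average_and_restore(beats, fs, cutoff_hz=40.0):
        beats = np.asarray(beats, dtype=float)       # shape: (n_beats, beat_length)
        mean_beat = beats.mean(axis=0)               # average signal over several cycles
        residuals = beats - mean_beat                # residual signal per beat
        b, a = butter(4, cutoff_hz, btype="low", fs=fs)
        filtered = filtfilt(b, a, residuals, axis=-1)
        return mean_beat + filtered                  # filtered residual added back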
  • an analog-to-digital (A/D) converter may provide noise rejection by synchronizing a clock of the converter with a phase locked loop set to the line frequency.
  • biometric patient monitors may use surface electrodes to make measurements of bioelectric potentials such as ECG or electroencephalogram (EEG).
  • the fidelity of these measurements is limited by the effectiveness of the connection of the electrode to the patient.
  • the resistance of the electrode system to the flow of electric currents, known as the electric impedance, characterizes the effectiveness of the connection.
  • the higher the impedance the lower the fidelity of the measurement.
  • Several mechanisms may contribute to lower fidelity.
  • FIG. 16D illustrates examples of power line noise.
  • FIG. 16D illustrates a body surface lead and signals from the mapping catheter including bipolar, unipolar distal, and unipolar proximal signals.
  • the area indicated in gray around the signals contains the signals of interest illustrating the power line noise; in particular, the dots on the unipolar distal and unipolar proximal signals indicate further areas of noise.
  • FIG. 16E illustrates a signal during ventricle activity including baseline wander.
  • FIG. 16E includes the MAP 1-2 signal at the top, followed by the MAP 1 and MAP 2 signals.
  • FIG. 16F illustrates the signal of FIG. 16E after baseline wander removal.
  • FIG. 16F again includes the MAP 1-2 signal at the top, followed by the MAP 1 and MAP 2 signals, after baseline wander removal.
  • FIG. 16G illustrates an example of high frequency noise and baseline wander for bipolar measurements.
  • FIG. 17A is a diagram of an exemplary system 1720 in which one or more features of the disclosed subject matter can be implemented. All or parts of system 1720 may be used to collect information for a training dataset and/or all or parts of system 1720 may be used to implement a trained model.
  • System 1720 may include components, such as a catheter 1740, that are configured to damage tissue areas of an intra-body organ. The catheter 1740 may also be further configured to obtain biometric data including electronic signals.
  • catheter 1740 is shown to be a point catheter, it will be understood that a catheter of any shape that includes one or more elements (e.g., electrodes) may be used to implement the embodiments disclosed herein.
  • System 1720 includes a probe 1721, having shafts that may be navigated by a physician 1730 into a body part, such as heart 1726, of a patient 1728 lying on a table 1729.
  • a physician 1730 may insert shaft 1722 through a sheath 1723, while manipulating the distal end of the shafts 1722 using a manipulator 1732 near the proximal end of the catheter 1740 and/or deflection from the sheath 1723.
  • catheter 1740 may be fitted at the distal end of shafts 1722.
  • Catheter 1740 may be inserted through sheath 1723 in a collapsed state and may be then expanded within heart 1726. Catheter 1740 may include at least one ablation electrode 1747 and a catheter needle 1748, as further disclosed herein.
  • catheter 1740 may be configured to ablate tissue areas of a cardiac chamber of heart 1726.
  • Inset 1745 shows catheter 1740 in an enlarged view, inside a cardiac chamber of heart 1726.
  • catheter 1740 may include at least one ablation electrode 1747 coupled onto the body of the catheter.
  • multiple elements may be connected via splines that form the shape of the catheter 1740.
  • One or more other elements may be provided and may be any elements configured to ablate or to obtain biometric data and may be electrodes, transducers, or one or more other elements.
  • the ablation electrodes such as electrode 1747, may be configured to provide energy to tissue areas of an intra-body organ such as heart 1726.
  • the energy may be thermal energy and may cause damage to the tissue area starting from the surface of the tissue area and extending into the thickness of the tissue area.
  • biometric data may include one or more of LATs, electrical activity, topology, bipolar mapping, dominant frequency, impedance, or the like.
  • the local activation time may be a point in time of a threshold activity corresponding to a local activation, calculated based on a normalized initial starting point.
  • Electrical activity may be any applicable electrical signals that may be measured based on one or more thresholds and may be sensed and/or augmented based on signal to noise ratios and/or other filters.
  • a topology may correspond to the physical structure of a body part or a portion of a body part and may correspond to changes in the physical structure relative to different parts of the body part or relative to different body parts.
  • a dominant frequency may be a frequency or a range of frequency that is prevalent at a portion of a body part and may be different in different portions of the same body part.
  • the dominant frequency of a pulmonary vein of a heart may be different than the dominant frequency of the right atrium of the same heart.
  • Impedance may be the resistance measurement at a given area of a body part.
  • the probe 1721, and catheter 1740 may be connected to a console 1724.
  • Console 1724 may include a processor 1741, such as a general-purpose computer, with suitable front end and interface circuits 1738 for transmitting and receiving signals to and from catheter, as well as for controlling the other components of system 1720.
  • processor 1741 may be further configured to receive biometric data, such as electrical activity, and determine if a given tissue area conducts electricity.
  • the processor may be external to the console 1724 and may be located, for example, in the catheter, in an external device, in a mobile device, in a cloud-based device, or may be a standalone processor.
  • processor 1741 may include a general-purpose computer, which may be programmed in software to carry out the functions described herein.
  • the software may be downloaded to the general-purpose computer in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
  • FIG. 17A may be modified to implement the embodiments disclosed herein.
  • the disclosed embodiments may similarly be applied using other system components and settings.
  • system 1720 may include additional components, such as elements for sensing electrical activity, wired or wireless connectors, processing and display devices, or the like.
  • a display connected to a processor may be located at a remote location such as a separate hospital or in separate healthcare provider networks.
  • the system 1720 may be part of a surgical system that is configured to obtain anatomical and electrical measurements of a patient’s organ, such as a heart, and to perform a cardiac ablation procedure.
  • a surgical system is the CARTO® system sold by Biosense Webster.
  • the system 1720 may also, and optionally, obtain biometric data such as anatomical measurements of the patient’s heart using ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) or other medical imaging techniques known in the art.
  • the system 1720 may obtain electrical measurements using catheters, electrocardiograms (EKGs) or other sensors that measure electrical properties of the heart.
  • the biometric data including anatomical and electrical measurements may then be stored in a memory 1742 of the mapping system 1720, as shown in FIG. 17 A.
  • the biometric data may be transmitted to the processor 1741 from the memory 1742.
  • the biometric data may be transmitted to a server 1760, which may be local or remote, using a network 1762.
  • Network 1762 may be any network or system generally known in the art such as an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between the mapping system 1720 and the server 1760.
  • the network 1762 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11 or any other wired connection generally known in the art. Wireless connections may be implemented using WiFi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 1762.
  • server 1760 may be implemented as a physical server. In other instances, server 1760 may be implemented as a virtual server at a public cloud computing provider (e.g., Amazon Web Services (AWS)®).
  • Control console 1724 may be connected, by a cable 1739, to body surface electrodes 1743, which may include adhesive skin patches that are affixed to the patient 1728.
  • the processor in conjunction with a current tracking module, may determine position coordinates of the catheter 1740 inside the body part (e.g., heart 1726) of a patient. The position coordinates may be based on impedances or electromagnetic fields measured between the body surface electrodes 1743 and the electrode 1748 or other electromagnetic components of the catheter 1740. Additionally, or alternatively, location pads may be located on the surface of bed 1729 and may be separate from the bed 1729.
  • Processor 1741 may include real-time noise reduction circuitry typically configured as a field programmable gate array (FPGA), followed by an analog-to-digital (A/D) ECG (electrocardiograph) or EMG (electromyogram) signal conversion integrated circuit.
  • the processor 1741 may pass the signal from an A/D ECG or EMG circuit to another processor and/or can be programmed to perform one or more functions disclosed herein.
  • Control console 1724 may also include an input/output (I/O) communications interface that enables the control console to transfer signals from, and/or transfer signals to electrode 1747.
  • processor 1741 may facilitate the presentation of a body part rendering 1735 to physician 1730 on a display 1727, and store data representing the body part rendering 1735 in a memory 1742.
  • Memory 1742 may comprise any suitable volatile and/or nonvolatile memory, such as random-access memory or a hard disk drive.
  • medical professional 1730 may be able to manipulate a body part rendering 1735 using one or more input devices such as a touch pad, a mouse, a keyboard, a gesture recognition apparatus, or the like.
  • an input device may be used to change the position of catheter 1740 such that rendering 1735 is updated.
  • display 1727 may include a touchscreen that can be configured to accept inputs from medical professional 1730, in addition to presenting a body part rendering 1735.
  • FIG. 17B illustrates an exemplary catheter 1750 placed in the right atria with bipolar intracardiac ECG signals 1780 via the intracardiac ECG leads 1770.
  • In FIG. 18 there is a depiction of an illustration 1800 of a lab.
  • Illustration 1800 includes many specific electronic devices that introduce noise into the measurements.
  • There are also external (outside of the lab) sources that influence electronic signals within the lab.
  • These include, for example, power banks for the hospital if the lab is included within a hospital.
  • Air conditioning units may be located above the lab.
  • Other devices providing or increasing noise may be located in adjacent rooms.
  • each lab is unique with respect to internal and external noise signals. That is, one lab may include air conditioners, another power banks, and in fact, each lab may have a particular configuration of the monitors and other internal devices.
  • FIG. 19 illustrates signals and their respective frequencies that may be found within a specific lab. For example, fluorescent noise around 200 Hz, power noise 50/60 Hz and the respective harmonics of these signals.
  • each lab has a different typical spectrum of noise and by characterizing the typical noise signals on a laboratory basis, the present system and method may design a filter set specific to each particular lab. Reviewing the ECG and the ICEG has shown that each lab has its own unique noise pattern. Therefore, there is a need for a method to generate a lab-specific noise algorithm.
  • FIG. 20 illustrates a method 2000 for dealing with the described noise signals.
  • Current concepts for dealing with these types of noise are based on generating "one method fits all" ECG/ICEG denoising algorithms for all ECG processing systems currently deployed in the field.
  • Method 2000 includes collecting ECG plus noise signals at step 2010.
  • method 2000 applies a set of filters (the same filters for all labs).
  • method 2000 provides the cleaned ECG signals.
  • the method described herein is based on generating a specific denoising algorithm for each specific lab, consequently reducing impact of denoising on ECG/ICEG signals morphology.
  • data is collected on the types of noise for each lab, both internal and external sources of noise. This includes power lines, converters, X-ray machines and even the CARTO ACT, for example.
  • the transformers located below the floor in some labs, as well as noise from other external sources are also collected.
  • a method 2100 may be performed to denoise signals for a lab.
  • Method 2100A may be employed for a first lab.
  • Method 2100A includes collecting ECG plus noise signals at step 2110A.
  • method 2100A ensures that enough data is collected to build a lab noise profile for lab A.
  • method 2100A applies a set of filters (a specific filter designed for lab A).
  • method 2100A provides the cleaned ECG signals.
  • method 2100B may be employed for a second lab.
  • Method 2100B includes collecting ECG plus noise signals at step 2110B.
  • method 2100B ensures that enough data is collected to build a lab noise profile for lab B.
  • method 2100B applies a set of filters (a specific filter designed for lab B). The set of filters applied in step 2130A and step 2130B may be different as each filter set is dependent upon the noise found within the respective lab.
  • method 2100B provides the cleaned ECG signals.
  • In order to address site-based noise, the data from each specific lab is collected at one of steps 2120 depending on which lab is being tested.
  • ECG/ICEG collection is performed by aggregating each lab's recordings into a database.
  • each case that is uploaded to the database includes the identification of the institute and the specific lab.
  • the data may be recorded with filtering.
  • the raw data may also be collected, i.e., ECG/ICEG signals without filtering, and the WCT recording (channel 21) that includes all the lab base noise may also be collected.
  • the data on each specific WS may be collected. For example, there may be specific disk space for the collection. As discussed with respect to FIG.
  • a local algorithm may run within the disk space.
  • the ECG/ICEG denoising algorithm may train on the data.
  • a specific algorithm may be designed and generated to filter the lab noise at step 2130. In order to do so, an FFT may be performed on all the ECG/ICEG signals to determine the typical lab noise. As biologic noise is different between patients, and the lab noise is consistent in specific labs, the correlations on the data (e.g., above 30) provide the ability to distinguish between biologic noise and lab noise.
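  • By way of an illustrative, non-limiting sketch (not part of the original disclosure), a typical lab noise spectrum could be estimated by averaging FFT magnitudes over many recordings from the same lab, so that lab-consistent spectral lines (e.g., mains harmonics) stand out while patient-specific content averages down; the peak-picking threshold below is an illustrative assumption:

    # Sketch: average FFT magnitude over recordings to characterize lab noise.
    import numpy as np

    def lab_noise_spectrum(recordings, fs):
        # recordings: list of equal-length 1-D ECG/ICEG arrays from one lab.
        spectra = [np.abs(np.fft.rfft(r)) for r in recordings]
        mean_spectrum = np.mean(spectra, axis=0)
        freqs = np.fft.rfftfreq(len(recordings[0]), d=1.0 / fs)
        return freqs, mean_spectrum

    def prominent_noise_peaks(freqs, spectrum, factor=5.0):
        # Frequencies whose averaged magnitude stands well above the median level.
        return freqs[spectrum > factor * np.median(spectrum)]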
  • machine learning may be performed to enable the machine to learn the coherent noise.
  • An algorithm may be generated in the cloud or a specific application on the workstation. The algorithm may be based on an autoencoder, for example. This may include LSTM and/or CNN architectures, as described hereinabove. Once generated, the trained model may be deployed for the specific lab to provide clean ECG from that lab at step 2140.
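  • A non-limiting sketch (not part of the original disclosure) of an autoencoder-style denoiser of the kind mentioned above is shown below, assuming TensorFlow/Keras; the use of 1-D convolutions, the layer sizes, and the window length are illustrative assumptions rather than the disclosed architecture:

    # Sketch: small 1-D convolutional denoising autoencoder for ECG/ICEG segments.
    from tensorflow.keras import layers, models

    def build_denoising_autoencoder(window=512, channels=1):
        inp = layers.Input(shape=(window, channels))
        # Encoder: compress the noisy segment.
        x = layers.Conv1D(16, 9, padding="same", activation="relu")(inp)
        x = layers.MaxPooling1D(2)(x)
        x = layers.Conv1D(32, 9, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(2)(x)
        # Decoder: reconstruct the clean segment.
        x = layers.Conv1D(32, 9, padding="same", activation="relu")(x)
        x = layers.UpSampling1D(2)(x)
        x = layers.Conv1D(16, 9, padding="same", activation="relu")(x)
        x = layers.UpSampling1D(2)(x)
        out = layers.Conv1D(channels, 9, padding="same")(x)
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="mse")
        return model

    # Training pairs would be (noisy segment, clean segment), e.g. clean recordings
    # with recorded lab noise added, as described elsewhere in this document.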
  • the resulting filter for each lab may be presented to the user (physician) to approve the algorithm results based on clinical data. Such presentation may include the raw data, previous filtering algorithm, and other filtering algorithms to enable ease of approval.
  • the trained algorithm or the frequencies of the specific lab noise may be loaded into the CARTO® system (i.e., to the ECG/ICEG presenting and storing system).
  • the presence of environment related additive noise in recorded signals may include, for example, power noise, contact noise, and deflection noise.
  • Contact noise may include noise created by catheter collision during data collection.
  • Deflection noise may include noise created by discharges of static electricity during catheter deflection.
  • CARTO's detections of physiological features (e.g., mapping annotations within contact affected points ECG) will likely produce artifacts and thus affect clinical understanding of CARTO® maps.
  • This method will allow the user to filter out contact noise affected CARTO® points automatically, to produce CARTO® maps free of artifacts.
  • the present method allows generation of a set of noise samples in a "quiet lab" condition that may be added to real-life signals, allowing the generation of a practically unlimited number of real-life noise samples.
  • a quiet lab may be used with low or no noise.
  • Samples may be recorded in a sterile environment (also known as an aquarium). In this environment, intentional electrode collisions and catheter deflections may be provided.
  • the collected noise signals may be embedded into ECG data by storing time references as annotations to noise segments.
  • data may be collected by configuring the system in "quiet lab” with a minimal number of possible noises or ideally free of any noise.
  • a set of signals may be generated and recorded with specific noise of interest, assuming that the noise of interest is additive and has no dependence on the signals or other noises in the system.
  • the needed, required, or desired number of samples of the noise may be provided by embedding the collected noise samples in real-life systems' signal recordings, i.e., by addition of recorded noise samples to real-life signal recordings.
  • the signals that may be provided as additive noise include, but are not limited to, power noise, contact noise, and deflection noise.
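  • The following is an illustrative, non-limiting sketch (not part of the original disclosure) of embedding a recorded noise sample into a real-life recording by addition, under the stated assumption that the noise is additive; the random placement and scaling are illustrative choices:

    # Sketch: add a recorded noise sample to a clean recording and keep the
    # (start, end) time reference as an annotation of the noise segment.
    import numpy as np

    rng = np.random.default_rng(0)

    def embed_noise(clean, noise, scale=1.0):
        clean = np.asarray(clean, dtype=float)
        noise = np.asarray(noise, dtype=float)
        if len(noise) >= len(clean):
            raise ValueError("noise sample must be shorter than the clean recording")
        start = int(rng.integers(0, len(clean) - len(noise)))
        augmented = clean.copy()
        augmented[start:start + len(noise)] += scale * noise
        return augmented, (start, start + len(noise))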
  • noise detection may occur using a deep autoencoder with a fully connected (dense) layer.
  • Contact noise is a distinctive non-clinical artifact caused when catheter electrodes are in contact with each other. This contact may occur when two or more electrodes of the catheter or different catheters intersect, i.e., touch each other.
  • the present description presents a method to detect such a noise in CARTO® points ECG and to filter out those points from CARTO® map.
  • intracardiac ECG signals may be modeled as a linear combination of the signal and several noise components as described in the following Equation:
  • ICECG = Signal + DN + CN + PN + muscle artifact + ⋯, where DN is deflection noise, CN is contact noise, and PN is powerline noise, for example.
  • the noise component is created based on manual operation of the user.
  • FIG. 22 illustrates contact noise examples recorded in a controlled aquarium environment. As illustrated, a number of signals may be recorded over a 2.5 second interval. The signals are displayed on the respective plots; twenty-four signals may be recorded. Map 1-4 may be provided in this example as the mapping catheter, and these mapping catheter signals may be provided in the first four plots 2202, 2204, 2206, 2208. Signals P1-P20 are provided from the different Penta-Ray catheter electrodes and are provided in the remaining 20 plots 2210, 2212, 2214, 2216, 2218, 2220, 2222, 2224, 2226, 2228, 2230, 2232, 2234, 2236, 2238, 2240, 2242, 2244, 2246, 2248.
  • Map 1, the mapping catheter electrode 1 represented in plot 2202, is touching Penta-Ray electrode 5 represented in plot 2218 and electrode 6 represented in plot 2220 (P5, P6).
  • the plot 2202 is affected by the signals represented in plots 2218, 2220.
  • Map 2 represented in plot 2204 touches P8 represented in plot 2224 illustrating the contact noise.
  • the signals represent the contact noise of each of the catheter contacting situations.
  • the plot 2204 is affected by the signals represented in plot 2224.
  • Data is collected using an ICEG and ECG data collection by aggregating lab clinical recordings into a database. This may be a backup of the CARTO® data to CARTONET, for example.
  • a designated GUI allows manual marking of contact start and end times per electrode of each CARTO® point, consequently dividing the data into two classes. This may operate as described above with respect to a binary classification.
  • One of the classes is contact noise present in points (CN).
  • the other class includes ECG points free of contact noise (FR).
  • training and evaluating may occur on the model.
  • the training may include a deep learning model architecture that is based on a deep autoencoder network.
  • the output of the encoder is connected to a dense layer in order to classify contact noise per channel.
  • the model may be trained on 5 GB of IC ECG data.
  • the model allows the classification of each CARTO® point into (CN, FR) classes during the clinical procedure. This enables the user to filter out CN classified CARTO® points and to display CARTO® maps free of contact noise induced artifacts.
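  • An illustrative, non-limiting sketch (not part of the original disclosure) of an encoder followed by dense layers for per-channel contact-noise classification (CN vs. FR) is given below, assuming TensorFlow/Keras; layer sizes and the segment length are assumptions:

    # Sketch: encoder (as in a deep autoencoder) with a dense classification head.
    from tensorflow.keras import layers, models

    def build_contact_noise_classifier(segment_len=512):
        inp = layers.Input(shape=(segment_len, 1))
        x = layers.Conv1D(16, 7, strides=2, padding="same", activation="relu")(inp)
        x = layers.Conv1D(32, 7, strides=2, padding="same", activation="relu")(x)
        x = layers.Conv1D(64, 7, strides=2, padding="same", activation="relu")(x)
        x = layers.Flatten()(x)
        x = layers.Dense(64, activation="relu")(x)          # fully connected (dense)
        out = layers.Dense(1, activation="sigmoid")(x)      # CN vs. FR
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model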
  • For deflection noise detection, an LSTM deep network is described. As set forth above, the task of finding noise affected points is rather cumbersome and is performed manually if at all. As a result, CARTO's detections of physiological features (e.g. mapping annotations within deflection affected points ECG) will likely produce artifacts and thus affect clinical understanding of CARTO® maps. This method will allow the user to filter out deflection noise affected CARTO® points automatically, to produce CARTO® maps free of artifacts. Deflection noise appears as chaotic peaks when the catheter is deflected by a clinical specialist. This disclosure presents a method to detect such noise in CARTO® points ECG and to filter out those points from the CARTO® map.
  • FIG. 23 A illustrates deflection noise examples recorded in a controlled aquarium environment. These data samples may be provided with random start times and having random durations.
  • FIG. 23B illustrates deflection noise examples of FIG. 23A with an increased x-axis to zoom in on features from the FIG. 23A depictions.
  • the bottom plot (orange) 2310 (also shown as zoomed plot 2310.1 in FIG. 23B) illustrates three high frequency bursts that indicate the three times the catheter was deflected.
  • the upper plot (green) 2320 (also shown as zoomed plot 2320.1 in FIG. 23B) represents a signal free from deflection or contact noise.
  • the middle plot (blue) 2330 (also shown as zoomed plot 2330.1 in FIG. 23B) illustrates a signal that is the sum of deflection and contact noise.
  • Data is collected using an ICEG (deflection noise manifests only in the ICEG) and ECG data collection by aggregating lab clinical recordings into a database.
  • This may be a backup of the CARTO® data to CARTONET, for example.
  • a designated GUI allows manual marking of deflection start and end times per electrode of each CARTO® point, consequently dividing the data into two classes. This may operate as described above with respect to a binary classification.
  • One of the classes is deflection noise present in point's ECG (DN).
  • the other class includes ECG point free of deflection noise (FR).
  • training and evaluating may occur on the model.
  • the training may include a deep learning model architecture that is based on an LSTM deep network.
  • A three-layer LSTM network is used to capture a feature representation of the deflection noise.
  • the last layer's output is connected to a dense, fully connected layer in order to predict the presence of deflection noise.
  • the model may be trained on 10 GB of BS ECG and IC ECG data.
  • the model allows the classification of each CARTO® point into (DN, FR) classes during the clinical procedure. This enables the user to filter out DN classified CARTO® points and to display CARTO® maps free of deflection noise induced artifacts.
  • FIG. 24 illustrates additional deflection noise 2450 examples. The deflection noise 2450 is illustrated in these beats, whereas in prior beats the deflection noise does not exist or is reduced. As illustrated, the first two plots 2452, 2454 represent the body surface ECG and the next ten plots 2456, 2458, 2460, 2462, 2464, 2466, 2468, 2470, 2472, 2474 represent 10 bipolar channels of the intracardiac ECG during the deflection noise. The deflection noise 2450 is illustrated on each of the signals.
  • a contact and deflection noise model 2500 is provided in FIG. 25 illustrating an LSTM network.
  • the input data may include unipolar intracardiac ECG including Penta, Lasso and the like, and a mapping catheter with 2-4 unipolar electrodes.
  • Model 2500 includes LSTM1 input as an input layer 2510.
  • LSTM1 performs LSTM 2520.
  • dropoutl occurs at step 2530.
  • LSTM2 is performed at step 2540.
  • Dropout2 occurs at step 2550.
  • LSTM3 is performed at step 2560. Dropout3 occurs at step 2570.
  • a densel connected layer occurs at step 2580.
  • Dropout4 occurs at step 2590.
  • a dense2 connected layer occurs at step 2595.
  • the output includes a per sample classification of 0, 1, 2, where 0 is a normal signal, 1 is contact noise and 2 is deflection noise.
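  • A non-limiting sketch (not part of the original disclosure) of such a stacked LSTM arrangement is given below, assuming TensorFlow/Keras; the unit counts, dropout rates, segment length, and channel count are illustrative assumptions:

    # Sketch: three LSTM layers, each followed by dropout, then two dense layers
    # producing a per-sample classification (0 normal, 1 contact, 2 deflection).
    from tensorflow.keras import layers, models

    def build_lstm_noise_model(timesteps=2500, n_channels=8, n_classes=3):
        inp = layers.Input(shape=(timesteps, n_channels))       # LSTM1 input
        x = layers.LSTM(64, return_sequences=True)(inp)         # LSTM1
        x = layers.Dropout(0.2)(x)                              # dropout1
        x = layers.LSTM(64, return_sequences=True)(x)           # LSTM2
        x = layers.Dropout(0.2)(x)                              # dropout2
        x = layers.LSTM(64, return_sequences=True)(x)           # LSTM3
        x = layers.Dropout(0.2)(x)                              # dropout3
        x = layers.Dense(32, activation="relu")(x)              # dense1
        x = layers.Dropout(0.2)(x)                              # dropout4
        out = layers.Dense(n_classes, activation="softmax")(x)  # dense2, per sample
        model = models.Model(inp, out)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model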
  • a CNN inception model 2600 is provided in FIG. 26.
  • the input data may include unipolar intracardiac ECG including Penta, Lasso and the like, and a mapping catheter with 2-4 unipolar electrodes.
  • the input data may include position information (x,y,z), angular movement, and movement or displacement.
  • Input 1 may be the ECG signals and input 2 may be the position information (x,y,z), angular movement, and movement or displacement of the catheter.
  • Model 2600 includes an inputl in input layer 2605.
  • a 2D convolution is performed at step 2610. This may be performed three-fold in 2610a, 2610b, 2610c.
  • Each respective conv2d (2610a, 2610b, 2610c) is then normalized in a batch normalization at step 2615.
  • These batch normalizations are concatenated in a first concatenation at step 2620.
  • a second set of 2D convolutions is performed at step 2625. This may be performed three-fold in 2625a, 2625b, 2625c.
  • Each respective conv2d (2625a, 2625b, 2625c) is then normalized in a batch normalization at step 2630.
  • a third set of 2D convolutions is performed at step 2640. This may be performed three-fold in 2640a, 2640b, 2640c. Each respective conv2d (2640a, 2640b, 2640c) is then normalized in a batch normalization at step 2645. These batch normalizations are concatenated in a third concatenation at step 2650.
  • a fourth set of 2D convolutions is performed at step 2655. This may be performed three-fold in 2655a, 2655b, 2655c. Each respective conv2d (2655a, 2655b, 2655c) is then normalized in a batch normalization at step 2660.
  • These batch normalizations are concatenated in a fourth concatenation at step 2665.
  • the data is then flattened in a first flattening at step 2670 and output to a first dense filtering at step 2675.
  • a second input layer is provided at step 2680 which feeds a second dense filter at step 2685.
  • the outputs of the first dense filter at step 2675 and the second dense filter at step 2685 are concatenated at step 2690, which is then provided to a third dense filter at step 2695.
  • the output includes a per sample classification of 0, 1, 2, where 0 is a normal signal, 1 is contact noise and 2 is deflection noise.
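  • A non-limiting sketch (not part of the original disclosure) of a two-input, inception-style CNN in the spirit of FIG. 26 is given below, assuming TensorFlow/Keras; the kernel sizes, filter counts, input shapes, and the per-segment (rather than strictly per-sample) output are illustrative assumptions:

    # Sketch: four blocks of three parallel Conv2D branches with batch
    # normalization and concatenation, flattened and merged with a dense branch
    # of the position/movement input before the final classifier.
    from tensorflow.keras import layers, models

    def inception_block(x, filters=16):
        branches = []
        for k in (1, 3, 5):                      # assumed kernel sizes
            b = layers.Conv2D(filters, (k, k), padding="same", activation="relu")(x)
            b = layers.BatchNormalization()(b)
            branches.append(b)
        return layers.Concatenate()(branches)

    def build_inception_noise_model(height=32, width=32, pos_features=9, n_classes=3):
        ecg_in = layers.Input(shape=(height, width, 1), name="input1")  # ECG signals
        pos_in = layers.Input(shape=(pos_features,), name="input2")     # position data
        x = ecg_in
        for _ in range(4):                       # four inception-style blocks
            x = inception_block(x)
        x = layers.Flatten()(x)
        x = layers.Dense(64, activation="relu")(x)        # first dense branch
        p = layers.Dense(16, activation="relu")(pos_in)   # second dense branch
        merged = layers.Concatenate()([x, p])
        out = layers.Dense(n_classes, activation="softmax")(merged)
        model = models.Model([ecg_in, pos_in], out)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model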
  • FIG. 27 illustrates a second learning phase 2700 that may be implemented to capture the methods described herein.
  • the second learning phase 2700 may be run iteratively.
  • Method 2700 includes recording the noise in the laboratory in step 2710. This recording is described herein above.
  • At step 2720, noise samples are added to clean intracardiac signals. The noise samples being added are described in detail above.
  • At step 2730, a model is built. This model may include a neural network, for example. The model may be built on the additive assumption of the noise being evaluated on real life data as illustrated in the Equation above. Steps 2710, 2720 and 2730 have been described herein above for various additive signals.
  • the model may be evaluated on general data sets and may be compared to manual annotations. This retraining of the model provides a second learning phase. The retraining may be iteratively repeated until a specificity and sensitivity of the model above a desired level are achieved.
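  • The iterative retraining loop could be sketched as follows (illustrative and non-limiting, not part of the original disclosure); the train/predict functions and the target levels are hypothetical placeholders for whichever model and data pipeline is used:

    # Sketch: retrain and re-evaluate against manual annotations until the
    # model's sensitivity and specificity exceed the desired levels.
    import numpy as np

    def sensitivity_specificity(y_true, y_pred):
        y_true = np.asarray(y_true).astype(bool)
        y_pred = np.asarray(y_pred).astype(bool)
        tp = np.sum(y_true & y_pred)
        tn = np.sum(~y_true & ~y_pred)
        fp = np.sum(~y_true & y_pred)
        fn = np.sum(y_true & ~y_pred)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        return sens, spec

    def second_learning_phase(train_fn, predict_fn, eval_data, eval_labels,
                              target_sens=0.95, target_spec=0.95, max_rounds=10):
        sens = spec = 0.0
        for _ in range(max_rounds):
            train_fn()                          # retrain, e.g. on augmented data
            sens, spec = sensitivity_specificity(eval_labels, predict_fn(eval_data))
            if sens >= target_sens and spec >= target_spec:
                break
        return sens, spec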
  • Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Abstract

A system and method for detecting and reducing noise in an ECG environment is disclosed. The system and method include inputting data regarding the ECG and ECG noise into a database, the database including data on other ECG patients and their respective signals, modeling the noise of the ECG in a quiet environment to provide samples to train a model to identify noise in an ECG system, and identifying the noise signals within the ECG data and removing the noise from the signals. The noise may include per site noise signals, additive noises, contact noise and deflection noise. The quiet environment may include an aquarium.

Description

INTRACARDIAC ECG NOISE DETECTION AND REDUCTION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 63/091,186, filed October 13, 2020, the contents of which are incorporated herein by reference.
FIELD OF INVENTION
[0002] The present invention is related to artificial intelligence and machine learning associated with intracardiac electrocardiogram (ECG) noise detection and reduction.
BACKGROUND
[0003] Electrical signals such as electrocardiogram (ECG) and intracardiac ECG signals are often detected prior to and/or during a cardiac procedure. For example, ECG signals and intracardiac ECG signals can be used to identify potential locations of a heart where arrhythmia causing signals originate from. Generally, an ECG or intracardiac ECG is a signal that describes the electrical activity of the heart. ECG signals and intracardiac ECG signals may also be used to map portions of a heart. When physicians use an ECG or intracardiac ECG to study heart activity, the interference must be accounted for in order to isolate the electrical signals of the heart. Such interference may also result from the processing of areas of the signal with sharp changes, peaks, and/or pacing signals including areas of high frequency and harmonics. Interference obscures the accuracy of the ECG and intracardiac ECG readings. Therefore, a need exists to provide improved methods of identifying features so that the effects of such features may be removed from an electrical signal study thereby allowing the electrical signals of the heart to be viewed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings, wherein like reference numerals in the figures indicate like elements, and wherein:
[0005] FIG. 1 is a block diagram of an example system for remotely monitoring and communicating patient biometrics;
[0006] FIG. 2 is a system diagram of an example of a computing environment in communication with network; [0007] FIG. 3 is a block diagram of an example device in which one or more features of the disclosure can be implemented;
[0008] FIG. 4 illustrates a graphical depiction of an artificial intelligence system incorporating the example device of FIG. 3;
[0009] FIG. 5 illustrates a method performed in the artificial intelligence system of FIG. 4;
[0010] FIG. 6 illustrates an example of the probabilities of a naive Bayes calculation;
[0011] FIG. 7 illustrates an exemplary decision tree;
[0012] FIG. 8 illustrates an exemplary random forest classifier;
[0013] FIG. 9 illustrates an exemplary logistic regression;
[0014] FIG. 10 illustrates an exemplary support vector machine;
[0015] FIG. 11 illustrates an exemplary linear regression model;
[0016] FIG. 12 illustrates an exemplary K-means clustering;
[0017] FIG. 13 illustrates an exemplary ensemble learning algorithm;
[0018] FIG. 14 illustrates an exemplary neural network;
[0019] FIG. 15 illustrates a hardware based neural network;
[0020] FIG. 16A illustrates an ECG signal that contains a P wave (due to atrial depolarization), a QRS complex (due to atrial repolarization and ventricular depolarization) and a T wave (due to ventricular repolarization);
[0021] FIG. 16B illustrates a frequency content of the baseline wander;
[0022] FIG. 16C shows an ECG signal interfered by an EMG noise;
[0023] FIG. 16D illustrates examples of power line noise;
[0024] FIG. 16E illustrates a signal during ventricle activity including baseline wander;
[0025] FIG. 16F illustrates the signal of FIG. 16E after baseline wander removal;
[0026] FIG. 16G illustrates an example of high frequency noise and baseline wander for bipolar measurements;
[0027] FIG. 17A is a diagram of an exemplary system in which one or more features of the disclosed subject matter can be implemented;
[0028] FIG. 17B illustrates an exemplary catheter placed in the right atria with bipolar intracardiac ECG signals;
[0029] FIG. 18 is a depiction of an illustration of a lab;
[0030] FIG. 19 illustrates signals and their respective frequencies that may be found within a specific lab;
[0031] FIG. 20 illustrates a method for dealing with the described noise signals; [0032] FIG. 21 illustrates a method performed to denoise signals for a lab (A and B);
[0033] FIG. 22 illustrates contact noise examples recorded in a controlled aquarium environment;
[0034] FIG. 23A illustrates deflection noise examples recorded in a controlled aquarium environment;
[0035] FIG. 23B illustrates deflection noise examples of FIG. 23A with an increased x-axis to zoom in on features;
[0036] FIG. 24 illustrates additional deflection noise examples;
[0037] FIG. 25 illustrates a contact and deflection noise model;
[0038] FIG. 26 illustrates a CNN inception model; and
[0039] FIG. 27 illustrates a second learning phase that may be implemented to capture the methods described herein.
DETAILED DESCRIPTION
[0040] Systems and methods for providing improved methods of identifying features so that the effects of such features may be removed from an electrical signal study thereby allowing the electrical signals of the heart to be viewed are described.
[0041] FIG. 1 is a block diagram of an example system 100 for remotely monitoring and communicating patient biometrics (i.e., patient data). In the example illustrated in FIG. 1, the system 100 includes a patient biometric monitoring and processing apparatus 102 associated with a patient 104, a local computing device 106, a remote computing system 108, a first network 110 and a second network 120.
[0042] According to an embodiment, a monitoring and processing apparatus 102 may be an apparatus that is internal to the patient’s body (e.g., subcutaneously implantable). The monitoring and processing apparatus 102 may be inserted into a patient via any applicable manner including orally injecting, surgical insertion via a vein or artery, an endoscopic procedure, or a laparoscopic procedure.
[0043] According to an embodiment, a monitoring and processing apparatus 102 may be an apparatus that is external to the patient. For example, as described in more detail below, the monitoring and processing apparatus 102 may include an attachable patch (e.g., that attaches to a patient’s skin). The monitoring and processing apparatus 102 may also include a catheter with one or more electrodes, a probe, a blood pressure cuff, a weight scale, a bracelet or smart watch biometric tracker, a glucose monitor, a continuous positive airway pressure (CPAP) machine or virtually any device which may provide an input concerning the health or biometrics of the patient. [0044] According to an embodiment, a monitoring and processing apparatus 102 may include both components that are internal to the patient and components that are external to the patient.
[0045] A single monitoring and processing apparatus 102 is shown in FIG. 1. Example systems may, however, include a plurality of patient biometric monitoring and processing apparatuses. A patient biometric monitoring and processing apparatus may be in communication with one or more other patient biometric monitoring and processing apparatuses. Additionally, or alternatively, a patient biometric monitoring and processing apparatus may be in communication with the network 110.
[0046] One or more monitoring and processing apparatuses 102 may acquire patient biometric data (e.g., electrical signals, blood pressure, temperature, blood glucose level or other biometric data) and receive at least a portion of the patient biometric data representing the acquired patient biometrics and additional information associated with acquired patient biometrics from one or more other monitoring and processing apparatuses 102. The additional information may be, for example, diagnosis information and/or additional information obtained from an additional device such as a wearable device. Each monitoring and processing apparatus 102 may process data, including its own acquired patient biometrics as well as data received from one or more other monitoring and processing apparatuses 102.
[0047] In FIG. 1, network 110 is an example of a short-range network (e.g., local area network (LAN), or personal area network (PAN)). Information may be sent, via short-range network 110, between a monitoring and processing apparatus 102 and local computing device 106 using any one of various short-range wireless communication protocols, such as Bluetooth, WiFi, Zigbee, Z-Wave, near field communications (NFC), ultraband, or infrared (IR).
[0048] Network 120 may be a wired network, a wireless network or include one or more wired and wireless networks. For example, a network 120 may be a long-range network (e.g., wide area network (WAN), the internet, or a cellular network,). Information may be sent, via network 120 using any one of various long-range wireless communication protocols (e.g., TCP/IP, HTTP, 3G, 4G/LTE, or 5G/New Radio).
[0049] The patient monitoring and processing apparatus 102 may include a patient biometric sensor 112, a processor 114, a user input (UI) sensor 116, a memory 118, and a transmitter-receiver (i.e., transceiver) 122. The patient monitoring and processing apparatus 102 may continually or periodically monitor, store, process and communicate, via network 110, any number of various patient biometrics. Examples of patient biometrics include electrical signals (e.g., ECG signals and brain biometrics), blood pressure data, blood glucose data and temperature data. The patient biometrics may be monitored and communicated for treatment across any number of various diseases, such as cardiovascular diseases (e.g., arrhythmias, cardiomyopathy, and coronary artery disease) and autoimmune diseases (e.g., type I and type II diabetes).
[0050] Patient biometric sensor 112 may include, for example, one or more sensors configured to sense a type of patient biometrics. For example, patient biometric sensor 112 may include an electrode configured to acquire electrical signals (e.g., heart signals, brain signals or other bioelectrical signals), a temperature sensor, a blood pressure sensor, a blood glucose sensor, a blood oxygen sensor, a pH sensor, an accelerometer and a microphone.
[0051] As described in more detail below, patient biometric monitoring and processing apparatus 102 may be an ECG monitor for monitoring ECG signals of a heart. The patient biometric sensor 112 of the ECG monitor may include one or more electrodes for acquiring ECG signals. The ECG signals may be used for treatment of various cardiovascular diseases.
[0052] In another example, the patient biometric monitoring and processing apparatus 102 may be a continuous glucose monitor (CGM) for continuously monitoring blood glucose levels of a patient on a continual basis for treatment of various diseases, such as type I and type II diabetes. The CGM may include a subcutaneously disposed electrode, which may monitor blood glucose levels from interstitial fluid of the patient. The CGM may be, for example, a component of a closed-loop system in which the blood glucose data is sent to an insulin pump for calculated delivery of insulin without user intervention.
[0053] Transceiver 122 may include a separate transmitter and receiver. Alternatively, transceiver 122 may include a transmitter and receiver integrated into a single device.
[0054] Processor 114 may be configured to store patient data, such as patient biometric data in memory 118 acquired by patient biometric sensor 112, and communicate the patient data, across network 110, via a transmitter of transceiver 122. Data from one or more other monitoring and processing apparatus 102 may also be received by a receiver of transceiver 122, as described in more detail below.
[0055] According to an embodiment, the monitoring and processing apparatus 102 includes UI sensor 116 which may be, for example, a piezoelectric sensor or a capacitive sensor configured to receive a user input, such as a tapping or touching. For example, UI sensor 116 may be controlled to implement a capacitive coupling, in response to tapping or touching a surface of the monitoring and processing apparatus 102 by the patient 104. Gesture recognition may be implemented via any one of various capacitive types, such as resistive capacitive, surface capacitive, projected capacitive, surface acoustic wave, piezoelectric and infra-red touching. Capacitive sensors may be disposed at a small area or over a length of the surface such that the tapping or touching of the surface activates the monitoring device.
[0056] As described in more detail below, the processor 114 may be configured to respond selectively to different tapping patterns of the capacitive sensor (e.g., a single tap or a double tap), which may be the UI sensor 116, such that different tasks of the patch (e.g., acquisition, storing, or transmission of data) may be activated based on the detected pattern. In some embodiments, audible feedback may be given to the user from processing apparatus 102 when a gesture is detected.
[0057] The local computing device 106 of system 100 is in communication with the patient biometric monitoring and processing apparatus 102 and may be configured to act as a gateway to the remote computing system 108 through the second network 120. The local computing device 106 may be, for example, a smart phone, smartwatch, tablet or other portable smart device configured to communicate with other devices via network 120. Alternatively, the local computing device 106 may be a stationary or standalone device, such as a stationary base station including, for example, modem and/or router capability, a desktop or laptop computer using an executable program to communicate information between the processing apparatus 102 and the remote computing system 108 via the PC's radio module, or a USB dongle. Patient biometrics may be communicated between the local computing device 106 and the patient biometric monitoring and processing apparatus 102 using a short-range wireless technology standard (e.g., Bluetooth, WiFi, ZigBee, Z-wave and other short-range wireless standards) via the short-range wireless network 110, such as a local area network (LAN) (e.g., a personal area network (PAN)). In some embodiments, the local computing device 106 may also be configured to display the acquired patient electrical signals and information associated with the acquired patient electrical signals, as described in more detail below.
[0058] In some embodiments, remote computing system 108 may be configured to receive at least one of the monitored patient biometrics and information associated with the monitored patient via network 120, which is a long-range network. For example, if the local computing device 106 is a mobile phone, network 120 may be a wireless cellular network, and information may be communicated between the local computing device 106 and the remote computing system 108 via a wireless technology standard, such as any of the wireless technologies mentioned above. As described in more detail below, the remote computing system 108 may be configured to provide (e.g., visually display and/or aurally provide) the at least one of the patient biometrics and the associated information to a healthcare professional (e.g., a physician). [0059] FIG. 2 is a system diagram of an example of a computing environment 200 in communication with network 120. In some instances, the computing environment 200 is incorporated in a public cloud computing platform (such as Amazon Web Services or Microsoft Azure), a hybrid cloud computing platform (such as HP Enterprise OneSphere) or a private cloud computing platform.
[0060] As shown in FIG. 2, computing environment 200 includes remote computing system 108 (hereinafter computer system), which is one example of a computing system upon which embodiments described herein may be implemented.
[0061] The remote computing system 108 may, via processors 220, which may include one or more processors, perform various functions. The functions may include analyzing monitored patient biometrics and the associated information and, according to physician- determined or algorithm driven thresholds and parameters, providing (e.g., via display 266) alerts, additional information, or instructions. As described in more detail below, the remote computing system 108 may be used to provide (e.g., via display 266) healthcare personnel (e.g., a physician) with a dashboard of patient information, such that such information may enable healthcare personnel to identify and prioritize patients having more critical needs than others.
[0062] As shown in FIG. 2, the computer system 210 may include a communication mechanism such as a bus 221 or other communication mechanism for communicating information within the computer system 210. The computer system 210 further includes one or more processors 220 coupled with the bus 221 for processing the information. The processors 220 may include one or more CPUs, GPUs, or any other processor known in the art.
[0063] The computer system 210 also includes a system memory 230 coupled to the bus 221 for storing information and instructions to be executed by processors 220. The system memory 230 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only system memory (ROM) 231 and/or random-access memory (RAM) 232. The system memory RAM 232 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The system memory ROM 231 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 230 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 220. A basic input/output system 233 (BIOS) may contain routines to transfer information between elements within computer system 210, such as during start-up, that may be stored in system memory ROM 231. RAM 232 may comprise data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 220. System memory 230 may additionally include, for example, operating system 234, application programs 235, other program modules 236 and program data 237.
[0064] The illustrated computer system 210 also includes a disk controller 240 coupled to the bus 221 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 241 and a removable media drive 242 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid-state drive). The storage devices may be added to the computer system 210 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
[0065] The computer system 210 may also include a display controller 265 coupled to the bus 221 to control a monitor or display 266, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user. The illustrated computer system 210 includes a user input interface 260 and one or more input devices, such as a keyboard 262 and a pointing device 261, for interacting with a computer user and providing information to the processor 220. The pointing device 261, for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor 220 and for controlling cursor movement on the display 266. The display 266 may provide a touch screen interface that may allow input to supplement or replace the communication of direction information and command selections by the pointing device 261 and/or keyboard 262.
[0066] The computer system 210 may perform a portion or each of the functions and methods described herein in response to the processors 220 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 230. Such instructions may be read into the system memory 230 from another computer readable medium, such as a hard disk 241 or a removable media drive 242. The hard disk 241 may contain one or more data stores and data files used by embodiments described herein. Data store contents and data files may be encrypted to improve security. The processors 220 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 230. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
[0067] As stated above, the computer system 210 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments described herein and for containing data structures, tables, records, or other data described herein. The term computer readable medium as used herein refers to any non-transitory, tangible medium that participates in providing instructions to the processor 220 for execution. A computer readable medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 241 or removable media drive 242. Non-limiting examples of volatile media include dynamic memory, such as system memory 230. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 221. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
[0068] The computing environment 200 may further include the computer system 210 operating in a networked environment using logical connections to local computing device 106 and one or more other devices, such as a personal computer (laptop or desktop), mobile devices (e.g., patient mobile devices), a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 210. When used in a networking environment, computer system 210 may include modem 272 for establishing communications over a network 120, such as the Internet. Modem 272 may be connected to system bus 221 via network interface 270, or via another appropriate mechanism.
[0069] Network 120, as shown in FIGs. 1 and 2, may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 210 and other computers (e.g., local computing device 106).
[0070] FIG. 3 is a block diagram of an example device 300 in which one or more features of the disclosure can be implemented. The device 300 may be local computing device 106, for example. The device 300 can include, for example, a computer, a gaming device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer. The device 300 includes a processor 302, a memory 304, a storage device 306, one or more input devices 308, and one or more output devices 310. The device 300 can also optionally include an input driver 312 and an output driver 314. It is understood that the device 300 can include additional components not shown in FIG. 3 including an artificial intelligence accelerator.
[0071] In various alternatives, the processor 302 includes a central processing unit (CPU), a graphics processing unit (GPU), a CPU and GPU located on the same die, or one or more processor cores, wherein each processor core can be a CPU or a GPU. In various alternatives, the memory 304 is located on the same die as the processor 302, or is located separately from the processor 302. The memory 304 includes a volatile or non-volatile memory, for example, random access memory (RAM), dynamic RAM, or a cache.
[0072] The storage device 306 includes a fixed or removable storage means, for example, a hard disk drive, a solid-state drive, an optical disk, or a flash drive. The input devices 308 include, without limitation, a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biometric scanner, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals). The output devices 310 include, without limitation, a display, a speaker, a printer, a haptic feedback device, one or more lights, an antenna, or a network connection (e.g., a wireless local area network card for transmission and/or reception of wireless IEEE 802 signals).
[0073] The input driver 312 communicates with the processor 302 and the input devices 308, and permits the processor 302 to receive input from the input devices 308. The output driver 314 communicates with the processor 302 and the output devices 310, and permits the processor 302 to send output to the output devices 310. It is noted that the input driver 312 and the output driver 314 are optional components, and that the device 300 will operate in the same manner if the input driver 312 and the output driver 314 are not present. The output driver 314 includes an accelerated processing device (“APD”) 316 which is coupled to a display device 318. The APD accepts compute commands and graphics rendering commands from processor 302, processes those compute and graphics rendering commands, and provides pixel output to display device 318 for display. As described in further detail below, the APD 316 includes one or more parallel processing units to perform computations in accordance with a single-instruction-multiple-data (“SIMD”) paradigm. Thus, although various functionality is described herein as being performed by or in conjunction with the APD 316, in various alternatives, the functionality described as being performed by the APD 316 is additionally or alternatively performed by other computing devices having similar capabilities that are not driven by a host processor (e.g., processor 302) and that provide graphical output to a display device 318. For example, it is contemplated that any processing system that performs processing tasks in accordance with a SIMD paradigm may perform the functionality described herein. Alternatively, it is contemplated that computing systems that do not perform processing tasks in accordance with a SIMD paradigm may perform the functionality described herein.
[0074] FIG. 4 illustrates a graphical depiction of an artificial intelligence system 400 incorporating the example device of FIG. 3. System 400 includes data 410, a machine 420, a model 430, a plurality of outcomes 440 and underlying hardware 450. System 400 operates by using the data 410 to train the machine 420 while building a model 430 to enable a plurality of outcomes 440 to be predicted. The system 400 may operate with respect to hardware 450. In such a configuration, the data 410 may be related to hardware 450 and may originate with apparatus 102, for example. For example, the data 410 may be on-going data, or output data associated with hardware 450. The machine 420 may operate as the controller or data collection device associated with the hardware 450, or be otherwise associated therewith. The model 430 may be configured to model the operation of hardware 450 and model the data 410 collected from hardware 450 in order to predict the outcome achieved by hardware 450. Using the outcome 440 that is predicted, hardware 450 may be configured to provide a certain desired outcome 440 from hardware 450.
[0075] FIG. 5 illustrates a method 500 performed in the artificial intelligence system of FIG. 4. Method 500 includes collecting data from the hardware at step 510. This data may include currently collected, historical or other data from the hardware. For example, this data may include measurements during a surgical procedure and may be associated with the outcome of the procedure. For example, the temperature of a heart may be collected and correlated with the outcome of a heart procedure.
[0076] At step 520, method 500 includes training a machine on the hardware. The training may include an analysis and correlation of the data collected in step 510. For example, in the case of the heart, the machine may be trained on the temperature and outcome data to determine whether a correlation or link exists between the temperature of the heart during the procedure and the outcome.
[0077] At step 530, method 500 includes building a model on the data associated with the hardware. Building a model may include physical hardware or software modeling, algorithmic modeling, and the like, as will be described below. This modeling may seek to represent the data that has been collected and trained.
[0078] At step 540, method 500 includes predicting the outcomes of the model associated with the hardware. This prediction of the outcome may be based on the trained model. For example, in the case of the heart, if a heart temperature between 97.7 and 100.2 degrees during the procedure produces a positive result from the procedure, the outcome can be predicted in a given procedure based on the temperature of the heart during the procedure. While this model is rudimentary, it is provided for exemplary purposes and to increase understanding of the present invention.
[0079] The present system and method operate to train the machine, build the model, and predict outcomes using algorithms. These algorithms may be used to solve the trained model and predict outcomes associated with the hardware. These algorithms may be divided generally into classification, regression, and clustering algorithms.
[0080] For example, a classification algorithm is used in the situation where the dependent variable, which is the variable being predicted, is divided into classes, and the algorithm predicts a class, i.e., the dependent variable, for a given input. Thus, a classification algorithm is used to predict an outcome from a set number of fixed, predefined outcomes. Classification algorithms may include naive Bayes algorithms, decision trees, random forest classifiers, logistic regressions, support vector machines and k nearest neighbors.
[0081] Generally, a naive Bayes algorithm follows the Bayes theorem, and follows a probabilistic approach. As would be understood, other probabilistic-based algorithms may also be used, and generally operate using similar probabilistic principles to those described below for the exemplary naive Bayes algorithm.
[0082] FIG. 6 illustrates an example of the probabilities of a naive Bayes calculation. The probability approach of Bayes theorem essentially means that, instead of jumping straight into the data, the algorithm has a set of prior probabilities for each of the classes for the target. After the data is entered, the naive Bayes algorithm may update the prior probabilities to form a posterior probability. This is given by the formula: posterior = (prior × likelihood) / evidence.
[0083] This naive Bayes algorithm, and Bayes algorithms generally, may be useful when there is a need to predict whether an input belongs to a given list of n classes or not. The probabilistic approach may be used because the probabilities for all of the n classes will be quite low.
[0084] For example, as illustrated in FIG. 6, consider a person playing golf, which depends on factors including the weather outside, shown in a first data set 610. The first data set 610 illustrates the weather in a first column and an outcome of playing associated with that weather in a second column. In the frequency table 620, the frequencies with which certain events occur are generated. In frequency table 620, the frequency of a person playing or not playing golf in each of the weather conditions is determined. From there, a likelihood table 630 is compiled to generate initial probabilities. For example, the probability of the weather being overcast is 0.29 while the general probability of playing is 0.64.
[0085] The posterior probabilities may be generated from the likelihood table 630. These posterior probabilities may be configured to answer questions about weather conditions and whether golf is played in those weather conditions. For example, the probability of it being sunny outside and golf being played may be set forth by the Bayesian formula:
P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny). According to likelihood table 630:
P(Sunny | Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P(Yes) = 9/14 = 0.64.
Therefore, P(Yes | Sunny) = 0.33 * 0.64 / 0.36, or approximately 0.60 (60%).
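As a minimal illustrative sketch (not part of the disclosed embodiments), the posterior calculation above can be reproduced in Python directly from the likelihood table values; the function name is hypothetical.

```python
# Sketch of the naive Bayes posterior calculation from the FIG. 6 golf example.
def bayes_posterior(p_evidence_given_class, p_class, p_evidence):
    """posterior = (prior x likelihood) / evidence"""
    return p_evidence_given_class * p_class / p_evidence

p_sunny_given_yes = 3 / 9    # P(Sunny | Yes), likelihood
p_yes = 9 / 14               # P(Yes), prior probability of playing
p_sunny = 5 / 14             # P(Sunny), evidence

p_yes_given_sunny = bayes_posterior(p_sunny_given_yes, p_yes, p_sunny)
print(f"P(Yes | Sunny) = {p_yes_given_sunny:.2f}")  # ~0.60
```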
[0086] Generally, a decision tree is a flowchart-like tree structure where each internal node denotes a test on an attribute and each branch represents the outcome of that test. The leaf nodes contain the actual predicted labels. The decision tree begins from the root of the tree, with attribute values being compared until a leaf node is reached. A decision tree can be used as a classifier when handling high dimensional data and when little time has been spent on data preparation. Decision trees may take the form of a simple decision tree, a linear decision tree, an algebraic decision tree, a deterministic decision tree, a randomized decision tree, a nondeterministic decision tree, and a quantum decision tree. An exemplary decision tree is provided below in FIG. 7.
[0087] FIG. 7 illustrates a decision tree, along the same structure as the Bayes example above, in deciding whether to play golf. In the decision tree, the first node 710 examines the weather providing sunny 712, overcast 714, and rain 716 as the choices to progress down the decision tree. If the weather is sunny, the leg of the tree is followed to a second node 720 examining the temperature. The temperature at node 720 may be high 722 or normal 724, in this example. If the temperature at node 720 is high 722, then the predicted outcome of “No” 723 golf occurs. If the temperature at node 720 is normal 724, then the predicted outcome of “Yes” 725 golf occurs.
[0088] Further, from the first node 710, an outcome of overcast 714 leads directly to the predicted outcome of “Yes” 715 golf.
[0089] From the first node 710, an outcome of rain 716 results in the third node 730 again examining temperature. If the temperature at third node 730 is normal 732, then “Yes” 733 golf is played. If the temperature at third node 730 is low 734, then “No” 735 golf is played.
[0090] From this decision tree, a golfer plays golf if the weather is overcast 715, in normal temperature sunny weather 725, and in normal temperature rainy weather 733, while the golfer does not play if there are sunny high temperatures 723 or low rainy temperatures 735.
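As a minimal sketch, the decision tree of FIG. 7 can be expressed as nested conditionals; the function name, attribute names, and string values are illustrative assumptions mirroring the figure.

```python
# Sketch of the FIG. 7 golf decision tree as nested conditionals.
def play_golf(weather: str, temperature: str) -> str:
    if weather == "overcast":          # node 710 -> outcome 715
        return "Yes"
    if weather == "sunny":             # node 710 -> node 720
        return "No" if temperature == "high" else "Yes"    # outcomes 723 / 725
    if weather == "rain":              # node 710 -> node 730
        return "Yes" if temperature == "normal" else "No"  # outcomes 733 / 735
    raise ValueError("unknown weather value")

print(play_golf("sunny", "normal"))  # "Yes"
```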
[0091] A random forest classifier is a committee of decision trees, where each decision tree has been fed a subset of the attributes of data and predicts on the basis of that subset. The mode of the actual predicted values of the decision trees is considered to provide the ultimate random forest answer. The random forest classifier, generally, alleviates overfitting, which is present in a standalone decision tree, leading to a much more robust and accurate classifier.
[0092] FIG. 8 illustrates an exemplary random forest classifier for classifying the color of a garment. As illustrated in FIG. 8, the random forest classifier includes five decision trees 8101, 8102, 8103, 8104, and 8105 (collectively or generally referred to as decision trees 810). Each of the trees is designed to classify the color of the garment. A discussion of each of the trees and decisions made is not provided, as each individual tree generally operates as the decision tree of FIG. 7. In the illustration, three of the five trees (8101, 8102, 8104) determine that the garment is blue, while one determines the garment is green (8103) and the remaining tree determines the garment is red (8105). The random forest takes these actual predicted values of the five trees and calculates the mode of the actual predicted values to provide the random forest answer that the garment is blue.
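A minimal sketch of the voting step described above, assuming the five per-tree predictions of FIG. 8 are already available; only the mode (majority vote) calculation is shown.

```python
# Sketch of the random forest voting step: the mode of the individual tree
# predictions is taken as the ensemble answer, as in FIG. 8.
from collections import Counter

tree_predictions = ["blue", "blue", "green", "blue", "red"]  # trees 8101..8105
forest_answer, _ = Counter(tree_predictions).most_common(1)[0]
print(forest_answer)  # "blue"
```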
[0093] Logistic Regression is another algorithm for binary classification tasks. Logistic regression is based on the logistic function, also called the sigmoid function. This S-shaped curve can take any real-valued number and map it between 0 and 1 asymptotically approaching those limits. The logistic model may be used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead or healthy/sick. This can be extended to model several classes of events such as determining whether an image contains a cat, dog, lion, etc. Each object being detected in the image would be assigned a probability between 0 and 1 with the sum of the probabilities adding to one.
[0094] In the logistic model, the log-odds (the logarithm of the odds) for the value labeled "1" is a linear combination of one or more independent variables ("predictors"); the independent variables can each be a binary variable (two classes, coded by an indicator variable) or a continuous variable (any real value). The corresponding probability of the value labeled "1" can vary between 0 (certainly the value "0") and 1 (certainly the value "1"), hence the labeling; the function that converts log-odds to probability is the logistic function, hence the name. The unit of measurement for the log-odds scale is called a logit, from logistic unit, hence the alternative names. Analogous models with a different sigmoid function instead of the logistic function can also be used, such as the probit model; the defining characteristic of the logistic model is that increasing one of the independent variables multiplicatively scales the odds of the given outcome at a constant rate, with each independent variable having its own parameter; for a binary dependent variable this generalizes the odds ratio.
[0095] In a binary logistic regression model, the dependent variable has two levels (categorical). Outputs with more than two values are modeled by multinomial logistic regression and, if the multiple categories are ordered, by ordinal logistic regression (for example the proportional odds ordinal logistic model). The logistic regression model itself simply models probability of output in terms of input and does not perform statistical classification (it is not a classifier), though it can be used to make a classifier, for instance by choosing a cutoff value and classifying inputs with probability greater than the cutoff as one class, below the cutoff as the other; this is a common way to make a binary classifier.
[0096] FIG. 9 illustrates an exemplary logistic regression. This exemplary logistic regression enables the prediction of an outcome based on a set of variables. For example, based on a person’s grade point average, an outcome of being accepted to a school may be predicted. The past history of grade point averages and the relationship with acceptance enables the prediction to occur. The logistic regression of FIG. 9 enables the analysis of the grade point average variable 920 to predict the outcome 910 defined by 0 to 1. At the low end 930 of the S-shaped curve, the grade point average 920 predicts an outcome 910 of not being accepted, while at the high end 940 of the S-shaped curve, the grade point average 920 predicts an outcome 910 of being accepted. Logistic regression may be used to predict house values, customer lifetime value in the insurance sector, etc.
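A minimal sketch of the logistic (sigmoid) mapping described above; the coefficients a and b are illustrative placeholders rather than values from the disclosure.

```python
# Sketch of the logistic function: the log-odds are a linear combination of
# the predictor(s) and the sigmoid maps them to a probability between 0 and 1.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict_acceptance(gpa: float, a: float = 4.0, b: float = -12.0) -> float:
    return sigmoid(a * gpa + b)   # probability of acceptance for this GPA

print(predict_acceptance(2.0))  # low end 930 of the S-shaped curve
print(predict_acceptance(3.8))  # high end 940 of the S-shaped curve
```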
[0097] A support vector machine (SVM) may be used to sort the data with the margins between two classes as far apart as possible. This is called maximum margin separation. The SVM may account for the support vectors while plotting the hyperplane, unlike linear regression which uses the entire dataset for that purpose.
[0098] FIG. 10 illustrates an exemplary support vector machine. In the exemplary SVM 1000, data may be classified into two different classes represented as squares 1010 and triangles 1020. SVM 1000 operates by drawing a random hyperplane 1030. This hyperplane 1030 is monitored by comparing the distance (illustrated with lines 1040) between the hyperplane 1030 and the closest data points 1050 from each class. The closest data points 1050 to the hyperplane 1030 are known as support vectors. The hyperplane 1030 is drawn based on these support vectors 1050 and an optimum hyperplane has a maximum distance from each of the support vectors 1050. The distance between the hyperplane 1030 and the support vectors 1050 is known as the margin.
[0099] SVM 1000 may be used to classify data by using a hyperplane 1030, such that the distance between the hyperplane 1030 and the support vectors 1050 is maximum. Such an SVM 1000 may be used to predict heart disease, for example.
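A minimal sketch of maximum-margin classification using scikit-learn's SVC (assumed available in the environment); the two-feature toy data stands in for the squares and triangles of FIG. 10.

```python
# Sketch of a linear maximum-margin classifier; the support vectors are the
# points closest to the hyperplane, as described for FIG. 10.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],    # class 0 ("squares")
              [4.0, 4.5], [4.5, 4.0], [5.0, 5.0]])   # class 1 ("triangles")
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")   # linear hyperplane with maximum margin
clf.fit(X, y)
print(clf.support_vectors_)  # the points that define the margin
print(clf.predict([[3.0, 3.0]]))
```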
[0100] K Nearest Neighbors (KNN) refers to a set of algorithms that generally do not make assumptions on the underlying data distribution, and perform a reasonably short training phase. Generally, KNN uses many data points separated into several classes to predict the classification of a new sample point. Operationally, KNN specifies an integer N with a new sample. The N entries in the model of the system closest to the new sample are selected. The most common classification of these entries is determined, and that classification is assigned to the new sample. KNN generally requires the storage space to increase as the training set increases. This also means that the estimation time increases in proportion to the number of training points.
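A minimal sketch of the KNN procedure described above using plain NumPy; the helper name and toy data are illustrative.

```python
# Sketch of K nearest neighbors: the N closest training points vote on the
# class of a new sample.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, n_neighbors=3):
    distances = np.linalg.norm(X_train - x_new, axis=1)   # distance to each point
    nearest = np.argsort(distances)[:n_neighbors]         # indices of N closest
    votes = Counter(y_train[nearest].tolist())
    return votes.most_common(1)[0][0]                     # most common class

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.9, 1.0])))  # 1
```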
[0101] In regression algorithms, the output is a continuous quantity, so regression algorithms may be used in cases where the target variable is a continuous variable. Linear regression is a general example of regression algorithms. Linear regression may be used to estimate real-world quantities (cost of houses, number of calls, total sales and so forth) based on the continuous variable(s). A relationship between the variables and the outcome is established by fitting the best line (hence linear regression). This best fit line is known as the regression line and is represented by a linear equation Y = a*X + b. Linear regression is best used in approaches involving a low number of dimensions.
[0102] FIG. 11 illustrates an exemplary linear regression model. In this model, a predicted variable 1110 is modeled against a measured variable 1120. A cluster of instances of the predicted variable 1110 and measured variable 1120 are plotted as data points 1130. Data points 1130 are then fit with the best fit line 1140. The best fit line 1140 is then used in subsequent predictions: given a measured variable 1120, the line 1140 is used to predict the predicted variable 1110 for that instance. Linear regression may be used to model and predict in a financial portfolio, salary forecasting, real estate and in traffic in arriving at an estimated time of arrival.
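A minimal sketch of fitting the best fit line Y = a*X + b by least squares with NumPy; the data points are illustrative only.

```python
# Sketch of fitting a regression line to data points and using it to predict.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # measured variable (1120)
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])   # predicted variable (1110)

a, b = np.polyfit(x, y, deg=1)            # slope and intercept of line 1140
print(f"Y = {a:.2f}*X + {b:.2f}")
print(a * 6.0 + b)                        # prediction for a new measurement
```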
[0103] Clustering algorithms may also be used to model and train on a data set. In clustering, the input is assigned into two or more clusters based on feature similarity. Clustering algorithms generally learn the patterns and useful insights from data without any guidance. For example, clustering viewers into similar groups based on their interests, age, geography, etc. may be performed using unsupervised learning algorithms like K- means clustering.
[0104] K-means clustering generally is regarded as a simple unsupervised learning approach. In K-means clustering, similar data points may be gathered together and bound in the form of a cluster. One method for binding the data points together is by calculating the centroid of the group of data points. In determining effective clusters, K-means clustering evaluates the distance between each point and the centroid of the cluster. Depending on the distance between the data point and the centroid, the data is assigned to the closest cluster. The goal of clustering is to determine the intrinsic grouping in a set of unlabeled data. The ‘K’ in K-means stands for the number of clusters formed. The number of clusters (basically the number of classes in which new instances of data may be classified) may be determined by the user. This determination may be performed using feedback and viewing the size of the clusters during training, for example.
[0105] K-means is used mainly in cases where the data set has points which are distinct and well separated; otherwise, if the clusters are not separated, the modeling may render the clusters inaccurate. Also, K-means may be avoided in cases where the data set contains a high number of outliers or the data set is non-linear.
[0106] FIG. 12 illustrates a K-means clustering. In K-means clustering, the data points are plotted, and the K value is assigned. For example, for K=2 in FIG. 12, the data points are plotted as shown in depiction 1210. The points are then assigned to similar centers at step 1220. The cluster centroids are identified as shown in 1230. Once centroids are identified, the points are reassigned to the cluster to provide the minimum distance between the data point and the respective cluster centroid, as illustrated in 1240. Then a new centroid of the cluster may be determined, as illustrated in depiction 1250. As the data points are reassigned to a cluster and new cluster centroids are formed, an iteration, or series of iterations, may occur to enable the clusters to be minimized in size and the optimal centroid determined. Then, as new data points are measured, the new data points may be compared with the centroids and clusters to identify with a cluster.
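A minimal sketch of the K-means iteration described above (assign points to the nearest centroid, recompute centroids, repeat); empty-cluster handling is omitted for brevity and the data are illustrative.

```python
# Sketch of the K-means loop: assignment step followed by centroid update.
import numpy as np

def kmeans(points, k=2, iterations=10, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # distance of every point to every centroid, then nearest assignment
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

pts = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
labels, centroids = kmeans(pts, k=2)
print(labels, centroids)
```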
[0107] Ensemble learning algorithms may be used. These algorithms use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Ensemble learning algorithms perform the task of searching through a hypothesis space to find a suitable hypothesis that will make good predictions with a particular problem. Even if the hypothesis space contains hypotheses that are very well- suited for a particular problem, it may be very difficult to find a good hypothesis. Ensemble algorithms combine multiple hypotheses to form a better hypothesis. The term ensemble is usually reserved for methods that generate multiple hypotheses using the same base learner. The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner.
[0108] Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model, so ensembles may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. Fast algorithms such as decision trees are commonly used in ensemble methods, for example, random forests, although slower algorithms can benefit from ensemble techniques as well.
[0109] An ensemble is itself a supervised learning algorithm because it can be trained and then used to make predictions. The trained ensemble, therefore, represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built. Thus, ensembles can be shown to have more flexibility in the functions they can represent. This flexibility can, in theory, enable them to over-fit the training data more than a single model would, but in practice, some ensemble techniques (especially bagging) tend to reduce problems related to over-fitting of the training data.
[0110] Empirically, ensemble algorithms tend to yield better results when there is a significant diversity among the models. Many ensemble methods, therefore, seek to promote diversity among the models they combine. Although non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees). Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt to dumb-down the models in order to promote diversity.
[0111] The number of component classifiers of an ensemble has a great impact on the accuracy of prediction. A priori determination of ensemble size and the volume and velocity of big data streams make this even more crucial for online ensemble classifiers. A theoretical framework suggests that there is an ideal number of component classifiers for an ensemble such that having more or fewer than this number of classifiers would deteriorate the accuracy. The theoretical framework shows that using the same number of independent component classifiers as class labels gives the highest accuracy.
[0112] Some common types of ensembles include Bayes optimal classifier, bootstrap aggregating (bagging), boosting, Bayesian model averaging, Bayesian model combination, bucket of models and stacking. FIG. 13 illustrates an exemplary ensemble learning algorithm where bagging is being performed in parallel 1310 and boosting is being performed sequentially 1320.
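A minimal sketch of bootstrap aggregating (bagging) as named above, assuming scikit-learn's DecisionTreeClassifier as the base learner; each tree is trained on a bootstrap resample and the ensemble takes a majority vote.

```python
# Sketch of bagging: bootstrap resamples, one tree per resample, majority vote.
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bagging_fit_predict(X, y, X_new, n_estimators=5, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))        # bootstrap resample
        tree = DecisionTreeClassifier().fit(X[idx], y[idx])
        votes.append(tree.predict(X_new))
    votes = np.array(votes)                               # (n_estimators, n_new)
    return [Counter(col.tolist()).most_common(1)[0][0] for col in votes.T]

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
print(bagging_fit_predict(X, y, np.array([[0.5], [4.5]])))
```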
[0113] A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. Inputs are modified by a weight and summed using a linear combination. An activation function may control the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be -1 and 1.
[0114] These artificial networks may be used for predictive modeling, adaptive control and other applications and can be trained via a dataset. Self-learning resulting from experience can occur within networks, which can derive conclusions from a complex and seemingly unrelated set of information.
[0115] For completeness, a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses and other connections are possible. Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion.
[0116] Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.
[0117] A neural network (NN), in the case of artificial neurons called an artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network. In more practical terms, neural networks are non-linear statistical data modeling or decision-making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
[0118] An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.
[0119] One classical type of artificial neural network is the recurrent Hopfield network. The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. Unsupervised neural networks can also be used to learn representations of the input that capture the salient characteristics of the input distribution, and more recently, deep learning algorithms, which can implicitly learn the distribution function of the observed data. Learning in neural networks is particularly useful in applications where the complexity of the data or task makes the design of such functions by hand impractical.
[0120] Neural networks can be used in different fields. The tasks to which artificial neural networks are applied tend to fall within the following broad categories: function approximation, or regression analysis, including time series prediction and modeling; classification, including pattern and sequence recognition, novelty detection and sequential decision making; and data processing, including filtering, clustering, blind signal separation and compression.
[0121] Application areas of ANNs include nonlinear system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering. For example, it is possible to create a semantic profile of a user's interests emerging from pictures trained for object recognition.
[0122] FIG. 14 illustrates an exemplary neural network. In the neural network there is an input layer represented by a plurality of inputs, such as 14101 and 14102. The inputs 14101, 14102 are provided to a hidden layer depicted as including nodes 14201, 14202, 14203, 14204. These nodes 14201, 14202, 14203, 14204 are combined to produce an output 1430 in an output layer. The neural network performs simple processing via the hidden layer of simple processing elements, nodes 14201, 14202, 14203, 14204, which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters.
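A minimal sketch of a forward pass through a network shaped like FIG. 14 (two inputs, four hidden nodes, one output); the weights are random placeholders that a real network would learn from data.

```python
# Sketch of a forward pass: weighted sums followed by a sigmoid activation
# that keeps each output between 0 and 1.
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(2, 4))   # inputs 14101, 14102 -> nodes 14201..14204
b_hidden = np.zeros(4)
W_out = rng.normal(size=(4, 1))      # hidden nodes -> output 1430
b_out = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(x @ W_hidden + b_hidden)
    return sigmoid(hidden @ W_out + b_out)

print(forward(np.array([0.3, 0.7])))
```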
[0123] The neural network of FIG. 14 may be implemented in hardware. As illustrated in FIG. 15, a hardware-based neural network is depicted.
[0124] Electrical signals such as electrocardiogram (ECG) signals are often detected prior to and/or during a cardiac procedure. For example, ECG signals can be used to identify potential locations of a heart where arrhythmia causing signals originate from. Generally, an ECG is a signal that describes the electrical activity of the heart. ECG signals may also be used to map portions of a heart. When physicians use an ECG to study heart activity, an accounting for the interference needs to occur in order to isolate the electrical signals from the heart. Such interference may also result from the processing of areas of the signal with sharp changes, peaks, and/or pacing signals including areas of high frequency and harmonics. Interference obscures the accuracy of the ECG readings. Therefore, a need exists to provide improved methods of identifying features so that the effects of such features may be removed from an electrical signal study thereby allowing the electrical signals of the heart to be viewed.
[0125] An ECG signal is generated by contraction (depolarization) and relaxation (repolarization) of atrial and ventricular muscles of the heart. As shown by signal 1602 in FIG. 16A, an ECG signal contains a P wave (due to atrial depolarization), a QRS complex (due to atrial repolarization and ventricular depolarization) and a T wave (due to ventricular repolarization). In order to record an ECG signal, electrodes can be placed at specific positions on the human body or can be positioned within a human body via a catheter. Artifacts (e.g., noise) are unwanted signals that are merged with electronic signals, such as ECG signals, and sometimes create obstacles for the diagnosis and/or treatment of a cardiac condition. Artifacts in electrical signals can include baseline wander, powerline interference, electromyogram (EMG) noise, etc. These noise signals may include site base noise and other additive noise.
[0126] Baseline wander or baseline drift occurs where the base axis (x-axis) of a signal appears to ‘wander’ or move up and down rather than be straight. This may cause the entire signal to shift from its normal base. In ECG signals, baseline wander is caused by improper electrode contact (e.g., electrode-skin impedance), patient movement, and cyclical movement (e.g., respiration). FIG. 16B shows a typical ECG signal 1612 affected by baseline wander. As shown in the example of FIG. 16B, the frequency content of the baseline wander is in the range of 0.5 Hz. However, increased movement of the body during exercise or a stress test increases the frequency content of baseline wander. According to implementations, given that the baseline signal is a low frequency signal, a Finite Impulse Response (FIR) high-pass zero phase forward-backward filter with a cut-off frequency of 0.5 Hz can be used to estimate and remove the baseline in the ECG signal.
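A minimal sketch of the FIR high-pass zero phase forward-backward baseline removal described above, using SciPy; the sampling rate and filter length are assumptions, not values from the disclosure.

```python
# Sketch of baseline wander removal: FIR high-pass at 0.5 Hz applied forward
# and backward so the net filter has zero phase.
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 1000.0                    # assumed sampling rate in Hz
numtaps = 2001                 # odd length so the FIR high-pass design is valid
taps = firwin(numtaps, cutoff=0.5, pass_zero=False, fs=fs)  # high-pass FIR

def remove_baseline(ecg: np.ndarray) -> np.ndarray:
    # forward-backward filtering cancels the filter's phase delay
    return filtfilt(taps, [1.0], ecg)

t = np.arange(0, 10, 1 / fs)
wander = 0.5 * np.sin(2 * np.pi * 0.3 * t)           # synthetic baseline drift
ecg_with_wander = np.sin(2 * np.pi * 1.2 * t) + wander
clean = remove_baseline(ecg_with_wander)
```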
[0127] Electromagnetic fields caused by a powerline represent a common noise source in electronic signals such as ECGs, as well as in any other bioelectrical signal recorded from a patient’s body. Such noise is characterized by, for example, 50 or 60 Hz sinusoidal interference, possibly accompanied by a number of harmonics. Such narrowband noise renders the analysis and interpretation of the ECG more difficult since the delineation of low-amplitude waveforms becomes unreliable and spurious waveforms may be introduced. It may be necessary to remove powerline interference from ECG signals as it superimposes on low frequency ECG waves such as the P wave and T wave.
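A minimal sketch of removing 50/60 Hz powerline interference with a notch filter, using SciPy's iirnotch; the sampling rate and quality factor are assumptions.

```python
# Sketch of powerline interference removal with a narrow notch at the mains
# frequency, applied with zero phase.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0          # assumed sampling rate in Hz
f0 = 50.0            # powerline frequency to remove (60.0 where applicable)
Q = 30.0             # quality factor: larger Q gives a narrower notch
b, a = iirnotch(w0=f0, Q=Q, fs=fs)

def remove_powerline(ecg: np.ndarray) -> np.ndarray:
    return filtfilt(b, a, ecg)   # zero-phase application of the notch

t = np.arange(0, 5, 1 / fs)
noisy = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * f0 * t)
clean = remove_powerline(noisy)
```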
[0128] The presence of muscle noise can interfere with many electrical signal applications such as ECG applications, as low amplitude waveforms can become obscured. Muscle noise is, in contrast to baseline wander and 50/60 Hz interference, not removed by narrowband filtering, but presents a different filtering problem as the spectral content of muscle activity considerably overlaps that of the PQRST complex. As an ECG signal is a repetitive signal, techniques can be used to reduce muscle noise in a manner similar to the processing of evoked potentials. FIG. 16C shows an ECG signal 1630 interfered with by an EMG noise 1632.
[0129] Instruments for measuring electrical signals such as ECG signals often detect electrical interference corresponding to a line, or mains, frequency. Line frequencies in most countries, though nominally set at 50 Hz or 60 Hz, may vary by several percent from these nominal values.
[0130] Various techniques for removing electrical interference from electrical signals can be implemented. Several of these techniques make use of one or more low-pass or notch filters. For example, a system for variable filtering of noise in ECG signals may be implemented. The system may have a plurality of low pass filters including, for example, one filter with a 3 dB point at approximately 50 Hz and a second low pass filter with a 3 dB point at approximately 5 Hz.
[0131] According to another example, a system for rejecting a line frequency component of an electronic signal may be implemented by passing the signal through two serially linked notch filters. A system with a notch filter that may have either or both low-pass and high-pass coefficients for removing line frequency components from an ECG signal may be implemented. The system may also support removal of burst noise and calculate a heart rate from the notch filter output.
[0132] According to another example, a system with several units for removing interference may be implemented. The units may include a mean value unit to generate an average signal over several cardiac cycles, a subtracting unit to subtract the average signal from the input signal to generate a residual signal, a filter unit to provide a filtered signal from the residual signal, and/or an addition unit to add the filtered signal to the average signal.
[0133] According to another example, an analog-to-digital (A/D) converter may provide noise rejection by synchronizing a clock of the converter with a phase locked loop set to the line frequency.
[0134] Additionally, biometric (e.g., biopotential) patient monitors may use surface electrodes to make measurements of bioelectric potentials such as ECG or electroencephalogram (EEG). The fidelity of these measurements is limited by the effectiveness of the connection of the electrode to the patient. The resistance of the electrode system to the flow of electric currents, known as the electric impedance, characterizes the effectiveness of the connection. Typically, the higher the impedance, the lower the fidelity of the measurement. Several mechanisms may contribute to lower fidelity.
[0135] Signals from electrodes with high impedances are subject to thermal noise (or so- called Johnson noise), voltages that increase with the square root of the impedance value. In addition, biopotential electrodes tend to have voltage noises in excess of that predicted by Johnson. Also, amplifier systems making measurements from biopotential electrodes can have degraded performance at higher electrode impedances. The impairments are characterized by poor common mode rejection, which tends to increase the contamination of the bioelectric signal by noise sources such as patient motion and electronic equipment that may be in use on or around the patient. These noise sources are particularly prevalent in the operating theatre and may include equipment such as electrosurgical units (ESU), cardiopulmonary bypass pumps (CPB), electric motor-driven surgical saws, lasers, and other sources.
[0136] During a cardiac procedure, it is often desirable to measure electrode impedances continuously in real time while a patient is being monitored. To do this, a very small electric current is typically injected through the electrodes and the resulting voltage measured, thereby establishing the impedance using Ohm's law. This current may be injected using DC or AC sources. It is often not possible to separate voltage due to the electrode impedance from voltage artifacts arising from interference. Interference tends to increase the measured voltage and thus the apparent measured impedance, causing the biopotential measurement system to falsely detect higher impedances than are actually present. Often such monitoring systems have maximum impedance threshold limits that may be programmed to prevent their operation when they detect impedances in excess of these limits. This is particularly true of systems that make measurements of very small voltages, such as the EEG. Such systems require very low electrode impedances.
[0137] FIG. 16D illustrates examples of power line noise. FIG. 16D illustrates a body surface lead and signals from the mapping catheter including bipolar, unipolar distal, and unipolar proximal signals. The areas indicated in gray around the signals are the signals of interest illustrating the power line noise; in particular, the dots on the unipolar distal signal and unipolar proximal signal indicate further areas of noise.
[0138] FIG. 16E illustrates a signal during ventricle activity including baseline wander. FIG. 16E includes the MAP 1-2 signal at the top, followed by the MAP 1 and MAP 2 signals.
[0139] FIG. 16F illustrates the signal of FIG. 16E after baseline wander removal. FIG. 16F again includes the MAP 1-2 signal at the top, with the MAP 1 and MAP 2 signals after baseline removal.
[0140] FIG. 16G illustrates an example of high frequency noise and baseline wander for bipolar measurements.
[0141] FIG. 17A is a diagram of an exemplary system 1720 in which one or more features of the disclosed subject matter can be implemented. All or parts of system 1720 may be used to collect information for a training dataset and/or all or parts of system 1720 may be used to implement a trained model. System 1720 may include components, such as a catheter 1740, that are configured to damage tissue areas of an intra-body organ. The catheter 1740 may also be further configured to obtain biometric data including electronic signals. Although catheter 1740 is shown to be a point catheter, it will be understood that a catheter of any shape that includes one or more elements (e.g., electrodes) may be used to implement the embodiments disclosed herein. System 1720 includes a probe 1721, having shafts that may be navigated by a physician 1730 into a body part, such as heart 1726, of a patient 1728 lying on a table 1729. According to embodiments, multiple probes may be provided; however, for purposes of conciseness, a single probe 1721 is described herein, but it will be understood that probe 1721 may represent multiple probes. As shown in FIG. 17A, physician 1730 may insert shaft 1722 through a sheath 1723, while manipulating the distal end of the shaft 1722 using a manipulator 1732 near the proximal end of the catheter 1740 and/or deflection from the sheath 1723. As shown in an inset 1725, catheter 1740 may be fitted at the distal end of shaft 1722. Catheter 1740 may be inserted through sheath 1723 in a collapsed state and may be then expanded within heart 1726. Catheter 1740 may include at least one ablation electrode 1747 and a catheter needle 1748, as further disclosed herein.
[0142] According to embodiments, catheter 1740 may be configured to ablate tissue areas of a cardiac chamber of heart 1726. Inset 1745 shows catheter 1740 in an enlarged view, inside a cardiac chamber of heart 1726. As shown, catheter 1740 may include at least one ablation electrode 1747 coupled onto the body of the catheter. According to other embodiments, multiple elements may be connected via splines that form the shape of the catheter 1740. One or more other elements (not shown) may be provided and may be any elements configured to ablate or to obtain biometric data and may be electrodes, transducers, or one or more other elements.
[0143] According to embodiments disclosed herein, the ablation electrodes, such as electrode 1747, may be configured to provide energy to tissue areas of an intra-body organ such as heart 1726. The energy may be thermal energy and may cause damage to the tissue area starting from the surface of the tissue area and extending into the thickness of the tissue area.
[0144] According to embodiments disclosed herein, biometric data may include one or more of LATs, electrical activity, topology, bipolar mapping, dominant frequency, impedance, or the like. The local activation time may be a point in time of a threshold activity corresponding to a local activation, calculated based on a normalized initial starting point. Electrical activity may be any applicable electrical signals that may be measured based on one or more thresholds and may be sensed and/or augmented based on signal to noise ratios and/or other filters. A topology may correspond to the physical structure of a body part or a portion of a body part and may correspond to changes in the physical structure relative to different parts of the body part or relative to different body parts. A dominant frequency may be a frequency or a range of frequencies that is prevalent at a portion of a body part and may be different in different portions of the same body part. For example, the dominant frequency of a pulmonary vein of a heart may be different than the dominant frequency of the right atrium of the same heart. Impedance may be the resistance measurement at a given area of a body part.
[0145] As shown in FIG. 17A, the probe 1721 and catheter 1740 may be connected to a console 1724. Console 1724 may include a processor 1741, such as a general-purpose computer, with suitable front end and interface circuits 1738 for transmitting and receiving signals to and from the catheter, as well as for controlling the other components of system 1720. In some embodiments, processor 1741 may be further configured to receive biometric data, such as electrical activity, and determine if a given tissue area conducts electricity. According to an embodiment, the processor may be external to the console 1724 and may be located, for example, in the catheter, in an external device, in a mobile device, in a cloud-based device, or may be a standalone processor.
[0146] As noted above, processor 1741 may include a general-purpose computer, which may be programmed in software to carry out the functions described herein. The software may be downloaded to the general-purpose computer in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory. The example configuration shown in FIG. 17A may be modified to implement the embodiments disclosed herein. The disclosed embodiments may similarly be applied using other system components and settings. Additionally, system 1720 may include additional components, such as elements for sensing electrical activity, wired or wireless connectors, processing and display devices, or the like.
[0147] According to an embodiment, a display connected to a processor (e.g., processor 1741) may be located at a remote location such as a separate hospital or in separate healthcare provider networks. Additionally, the system 1720 may be part of a surgical system that is configured to obtain anatomical and electrical measurements of a patient’s organ, such as a heart, and to perform a cardiac ablation procedure. An example of such a surgical system is the CARTO® system sold by Biosense Webster.
[0148] The system 1720 may also, and optionally, obtain biometric data such as anatomical measurements of the patient’s heart using ultrasound, computed tomography (CT), magnetic resonance imaging (MRI) or other medical imaging techniques known in the art. The system 1720 may obtain electrical measurements using catheters, electrocardiograms (EKGs) or other sensors that measure electrical properties of the heart. The biometric data including anatomical and electrical measurements may then be stored in a memory 1742 of the mapping system 1720, as shown in FIG. 17A. The biometric data may be transmitted to the processor 1741 from the memory 1742. Alternatively, or in addition, the biometric data may be transmitted to a server 1760, which may be local or remote, using a network 1762.
[0149] Network 1762 may be any network or system generally known in the art such as an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between the mapping system 1720 and the server 1760. The network 1762 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11 or any other wired connection generally known in the art. Wireless connections may be implemented using WiFi, WiMAX, Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 1762.
[0150] In some instances, the server 1760 may be implemented as a physical server. In other instances, server 1760 may be implemented as a virtual server by a public cloud computing provider (e.g., Amazon Web Services (AWS)®).
[0151] Control console 1724 may be connected, by a cable 1739, to body surface electrodes 1743, which may include adhesive skin patches that are affixed to the patient 1728. The processor, in conjunction with a current tracking module, may determine position coordinates of the catheter 1740 inside the body part (e.g., heart 1726) of the patient. The position coordinates may be based on impedances or electromagnetic fields measured between the body surface electrodes 1743 and the electrode 1748 or other electromagnetic components of the catheter 1740. Additionally, or alternatively, location pads may be located on the surface of table 1729 and may be separate from the table 1729.
[0152] Processor 1741 may include real-time noise reduction circuitry typically configured as a field programmable gate array (FPGA), followed by an analog-to-digital (A/D) ECG (electrocardiograph) or EMG (electromyogram) signal conversion integrated circuit. The processor 1741 may pass the signal from an A/D ECG or EMG circuit to another processor and/or can be programmed to perform one or more functions disclosed herein.
[0153] Control console 1724 may also include an input/output (I/O) communications interface that enables the control console to transfer signals from, and/or transfer signals to electrode 1747.
[0154] During a procedure, processor 1741 may facilitate the presentation of a body part rendering 1735 to physician 1730 on a display 1727, and store data representing the body part rendering 1735 in a memory 1742. Memory 1742 may comprise any suitable volatile and/or nonvolatile memory, such as random-access memory or a hard disk drive. In some embodiments, medical professional 1730 may be able to manipulate a body part rendering 1735 using one or more input devices such as a touch pad, a mouse, a keyboard, a gesture recognition apparatus, or the like. For example, an input device may be used to change the position of catheter 1740 such that rendering 1735 is updated. In alternative embodiments, display 1727 may include a touchscreen that can be configured to accept inputs from medical professional 1730, in addition to presenting a body part rendering 1735.
[0155] FIG. 17B illustrates an exemplary catheter 1750 placed in the right atrium with bipolar intracardiac ECG signals 1780 via the intracardiac ECG leads 1770.
[0156] As set forth above, noise is an issue that causes constant concern in ECG measurements. This noise requires denoising plus removal of additive noises including contact and deflection noise. For example, referring to FIG. 18, there is a depiction of an illustration 1800 of a lab. Illustration 1800 includes many specific electronic devices that contribute noise to the measurements. There are many monitors, machines, light sources, and other devices commonly found within the internal lab environment. In addition, there may be sources external to the lab that influence electronic signals within the lab. For example, power banks for the hospital (if the lab is included within a hospital, for example) may be located below the floor of the lab. Air conditioning units may be located above the lab. Other devices providing or increasing noise may be located in adjacent rooms. All of the internal and external machines may affect the environment of the ECG signal within the lab. Further, each lab is unique with respect to internal and external noise signals. That is, one lab may include air conditioning units, another power banks, and, in fact, each lab may have a particular configuration of monitors and other internal devices.
[0157] FIG. 19 illustrates signals and their respective frequencies that may be found within a specific lab. For example, fluorescent noise around 200 Hz, power noise at 50/60 Hz, and the respective harmonics of these signals may be present. As illustrated in FIG. 19, and for the present discussion, each lab has a different typical spectrum of noise, and by characterizing the typical noise signals on a per-laboratory basis, the present system and method may design a filter set specific to each particular lab. Reviewing the ECG and the ICEG has shown that each lab has its own unique noise pattern. Therefore, there is a need to define a method to generate a lab-specific noise algorithm.
[0158] FIG. 20 illustrates a method 2000 for dealing with the described noise signals. Current concepts for dealing with these types of noise are based on generating "one method fits all" ECG/ICEG denoising algorithms for all ECG processing systems currently deployed in the field. Method 2000 includes collecting ECG plus noise signals at step 2010. At step 2020, method 2000 applies a set of filters (the same filters for all labs). At step 2030, method 2000 provides the cleaned ECG signals.
[0159] The method described herein is based on generating a specific denoising algorithm for each specific lab, consequently reducing the impact of denoising on ECG/ICEG signal morphology. In order to provide unique noise algorithms for each lab, data is collected on the types of noise for each lab, from both internal and external sources of noise. This includes power lines, converters, X-ray machines and even the CARTO ACT, for example. The transformers located below the floor in some labs, as well as noise from other external sources, are also collected.
[0160] For example, in FIG. 21, a method 2100 may be performed to denoise signals for a lab. Method 2100A may be employed for a first lab. Method 2100A includes collecting ECG plus noise signals at step 2110A. At step 2120A, method 2100A ensures that enough data is collected to build a lab noise profile for lab A. At step 2130A, method 2100A applies a set of filters (a specific filter designed for lab A). At step 2140A, method 2100A provides the cleaned ECG signals.
[0161] Similarly, as shown in FIG. 21, method 2100B may be employed for a second lab. Method 2100B includes collecting ECG plus noise signals at step 2110B. At step 2120B, method 2100B ensures that enough data is collected to build a lab noise profile for lab B. At step 2130B, method 2100B applies a set of filters (a specific filter set designed for lab B). The set of filters applied in step 2130A and step 2130B may be different as each filter set is dependent upon the noise found within the respective lab. At step 2140B, method 2100B provides the cleaned ECG signals.
[0162] In order to provide the system of FIG. 4 with data from each lab to address site-based noise, the data from each specific lab is collected at one of steps 2120, depending on which lab is being tested. ECG/ICEG collection is performed by aggregating each lab's recordings into a database. By backing up CARTO® data to CARTONET, each case that is uploaded to the database includes the identification of the institute and the specific lab. The data may be recorded with filtering. The raw data may also be collected, i.e., ECG/ICEG signals without filtering, and the WCT recording (channel 21) that includes all the lab base noise may also be collected. Alternatively, the data on each specific WS may be collected. For example, there may be specific disk space for the collection. As discussed with respect to FIG. 4, after collection a local algorithm may run within the disk space. The ECG/ICEG denoising algorithm may train on the data. A specific algorithm may be designed and generated to filter the lab noise at step 2130. In order to do so, an FFT may be performed on all the ECG/ICEG signals to determine the typical lab noise. As biologic noise is different between patients, and the lab noise is consistent in specific labs, the correlations on the data (e.g., above 30) provide the ability to distinguish between biologic noise and lab noise. As described with respect to FIG. 4, machine learning may be performed to enable the machine to learn the coherent noise. An algorithm may be generated in the cloud or in a specific application on the workstation. The algorithm may be based on an autoencoder, for example. This may include LSTM and/or CNN architectures, as described hereinabove. Once generated, the trained model may be deployed for the specific lab to provide clean ECG from that lab at step 2140.
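A minimal sketch of the FFT-based characterization described above: averaging magnitude spectra over many recordings from one lab so that coherent lab noise peaks (e.g., 50/60 Hz and harmonics, or ~200 Hz fluorescent noise) stand out over patient-specific biologic content; function and variable names and the sampling rate are illustrative assumptions.

```python
# Sketch of estimating a lab's typical noise spectrum by averaging the
# magnitude spectra of many recordings collected in that lab.
import numpy as np

def lab_noise_spectrum(recordings, fs=1000.0):
    """recordings: list of 1-D arrays of equal length recorded in one lab."""
    spectra = [np.abs(np.fft.rfft(r)) for r in recordings]
    mean_spectrum = np.mean(spectra, axis=0)                 # coherent peaks persist
    freqs = np.fft.rfftfreq(len(recordings[0]), d=1 / fs)
    return freqs, mean_spectrum

# Frequencies with consistently high averaged magnitude across cases can then
# seed a lab-specific filter set (e.g., notch filters at those frequencies).
```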
[0163] The resulting filter for each lab may be presented to the user (physician) to approve the algorithm results based on clinical data. Such presentation may include the raw data, previous filtering algorithm, and other filtering algorithms to enable ease of approval. Upon approval, the trained algorithm (or the frequencies of the specific lab noise) may be loaded into the CARTO® system (i.e., to the ECG/ICEG presenting and storing system).
[0164] In addition to lab-specific noise, other additive noises may also be found within ECG signals. The present description provides an automatic noise sample generation technique for additive noises. In solving denoising issues, including solving such issues using AI, it is often important to harvest enough samples to provide good coverage of possible examples of a specific noise.
[0165] Environment-related additive noise present in recorded signals may include, for example, power noise, contact noise, and deflection noise. Contact noise may include noise created by catheter collision during data collection. Deflection noise may include noise created by discharges of static electricity during catheter deflection.
[0166] Currently, some additive noise signals are hard to detect and are cumbersome to annotate. The task of finding noise-affected points is performed manually, if at all; for example, a number of reviewers may be deployed, with each reviewer reviewing hours of ECG recordings to find and annotate only a few noise events.
[0167] As a result, CARTO®'s detection of physiological features (e.g., mapping annotations within the ECG of contact-affected points) will likely produce artifacts and thus affect clinical understanding of CARTO® maps. The present method allows the user to filter out contact-noise-affected CARTO® points automatically, to produce CARTO® maps free of artifacts.
[0168] The present method allows generation of a set of noise samples in a "quiet lab" condition that may be added to real-life signals, allowing the generation of a practically unlimited number of real-life noise samples. For example, a quiet lab with low or no noise may be used. Samples may be recorded in a sterile environment (also known as an aquarium). In this environment, intentional electrode collisions and catheter deflections may be induced. The collected noise signals may be embedded into ECG data by storing time references as annotations to the noise segments.

[0169] Referring again to FIG. 4, data may be collected by configuring the system in a "quiet lab" with a minimal number of possible noises or, ideally, free of any noise. A set of signals may be generated and recorded with the specific noise of interest, assuming that the noise of interest is additive and does not depend on the signals or on other noises in the system. The needed, required, or desired number of noise samples may then be provided by embedding the collected noise samples in real-life systems' signal recordings, i.e., by adding the recorded noise samples to those recordings. The signals that may be provided as additive noise include, but are not limited to, power noise, contact noise, and deflection noise.
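As a non-limiting illustration, the following Python sketch shows how quiet-lab ("aquarium") noise clips could be embedded into a clean real-life recording under the additive assumption, while keeping the insertion times as annotations. The scaling range, the random placement, and the function name are assumptions of the sketch rather than details of the disclosure.

```python
# Illustrative sketch only: embed quiet-lab ("aquarium") noise clips into a
# clean real-life recording under the additive assumption, returning the
# insertion times as annotations for later training and evaluation.
import numpy as np


def embed_noise(clean_signal, noise_clips, rng=None, scale_range=(0.5, 1.5)):
    """Return (augmented_signal, annotations).

    clean_signal : 1-D array holding a real-life ECG/ICEG recording.
    noise_clips  : list of 1-D arrays recorded in the quiet lab
                   (e.g., contact noise, deflection noise).
    annotations  : list of (start_index, end_index, clip_index) tuples.
    """
    if rng is None:
        rng = np.random.default_rng()
    augmented = clean_signal.astype(float).copy()
    annotations = []
    for i, clip in enumerate(noise_clips):
        if len(clip) >= len(clean_signal):
            continue  # clip does not fit into this recording
        start = int(rng.integers(0, len(clean_signal) - len(clip)))
        gain = rng.uniform(*scale_range)  # vary the noise amplitude (assumed)
        augmented[start:start + len(clip)] += gain * np.asarray(clip, dtype=float)
        annotations.append((start, start + len(clip), i))
    return augmented, annotations
```

The tuple annotation format here is only for the sketch; in practice the time references would follow the annotation convention of the recording system described above.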
[0170] For contact noise, for example, noise detection may be performed using a deep autoencoder with a fully connected (dense) layer. Contact noise is a distinctive non-clinical artifact caused when catheter electrodes are in contact with each other. This contact may occur when two or more electrodes of the same catheter or of different catheters intersect, i.e., touch each other. The present description presents a method to detect such noise in the ECG of CARTO® points and to filter those points out of the CARTO® map.
[0171] By way of example, intracardiac ECG signals may be modeled as a linear combination of the signal and several noise components as described in the following Equation:
ICECG = Signal + DN + CN + PN + muscle artifact + ..., where DN is deflection noise, CN is contact noise, and PN is powerline noise, for example.
[0172] In some cases, the noise component is created by manual operation performed by the user.
[0173] FIG. 22 illustrates contact noise examples recorded in a controlled aquarium environment. As illustrated, 24 signals may be recorded over a 2.5 second interval and displayed on respective plots. Signals Map 1-4 are provided in this example by the mapping catheter; these mapping catheter signals are shown in the first four plots 2202, 2204, 2206, 2208. Signals P1-P20 are provided by the electrodes of the Penta-Ray catheter and are shown in the remaining 20 plots 2210, 2212, 2214, 2216, 2218, 2220, 2222, 2224, 2226, 2228, 2230, 2232, 2234, 2236, 2238, 2240, 2242, 2244, 2246, 2248.
[0174] In this example, Map 1 (mapping catheter electrode 1), represented in plot 2202, is touching Penta-Ray electrodes 5 and 6 (P5, P6), represented in plots 2218 and 2220. The plot 2202 is affected by the signals represented in plots 2218 and 2220.

[0175] Map 2, represented in plot 2204, touches P8, represented in plot 2224, illustrating the contact noise. The signals represent the contact noise of each of the catheter contact situations. The plot 2204 is affected by the signal represented in plot 2224.
[0176] Data is collected using ICEG and ECG data collection by aggregating lab clinical recordings into a database. This may be a backup of the CARTO® data to CARTONET, for example. A designated GUI allows manual marking of contact start and end times per electrode of each CARTO® point, consequently dividing the data into two classes. This may operate as described above with respect to a binary classification. One class contains points in which contact noise is present (CN). The other class includes ECG points free of contact noise (FR). From the database, as described above with respect to FIG. 4, the model may be trained and evaluated. The deep learning model architecture is based on a deep autoencoder network; for example, three 2D convolutional layers serve as an encoder and three similar 2D convolutional layers serve as a decoder. The output of the encoder is connected to a dense layer in order to classify contact noise per channel. In such a configuration, the model may be trained on 5 GB of IC ECG data.
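The following Keras sketch illustrates one possible realization of the described architecture: a three-layer 2D convolutional encoder, a mirrored three-layer 2D convolutional decoder, and a dense head on the encoder output that classifies contact noise per channel. The input shape, filter counts, kernel sizes, and pooling layers are assumptions added for the sketch and are not specified by the disclosure.

```python
# Illustrative Keras sketch: a 2-D convolutional autoencoder (three Conv2D
# encoder layers, three Conv2D decoder layers) whose encoder output also
# feeds a dense head classifying contact noise (CN) vs. noise-free (FR)
# per channel. Shapes, filter counts and pooling are assumed values.
from tensorflow.keras import layers, Model

N_CHANNELS, N_SAMPLES = 24, 2500  # assumed: 24 electrodes, 2.5 s at 1 kHz

inp = layers.Input(shape=(N_CHANNELS, N_SAMPLES, 1))

# Encoder: three 2D convolutional layers
x = layers.Conv2D(16, (3, 7), padding="same", activation="relu")(inp)
x = layers.MaxPooling2D((1, 2))(x)
x = layers.Conv2D(32, (3, 7), padding="same", activation="relu")(x)
x = layers.MaxPooling2D((1, 2))(x)
encoded = layers.Conv2D(64, (3, 7), padding="same", activation="relu")(x)

# Decoder: three 2D convolutional layers mirroring the encoder
y = layers.Conv2D(32, (3, 7), padding="same", activation="relu")(encoded)
y = layers.UpSampling2D((1, 2))(y)
y = layers.Conv2D(16, (3, 7), padding="same", activation="relu")(y)
y = layers.UpSampling2D((1, 2))(y)
reconstruction = layers.Conv2D(1, (3, 7), padding="same",
                               name="reconstruction")(y)

# Dense classification head on the encoder output: one CN/FR score per channel
z = layers.Flatten()(encoded)
cn_per_channel = layers.Dense(N_CHANNELS, activation="sigmoid", name="cn")(z)

model = Model(inp, [reconstruction, cn_per_channel])
model.compile(optimizer="adam",
              loss={"reconstruction": "mse", "cn": "binary_crossentropy"})
```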
[0177] The model allows the classification of each CARTO® point into the (CN, FR) classes during the clinical procedure. This enables the user to filter out CN-classified CARTO® points and to display CARTO® maps free of contact-noise-induced artifacts.
[0178] For deflection noise detection, an LSTM deep network is described. As set forth above, the task of finding noise-affected points is rather cumbersome and is performed manually, if at all. As a result, CARTO®'s detection of physiological features (e.g., mapping annotations within the ECG of deflection-affected points) will likely produce artifacts and thus affect clinical understanding of CARTO® maps. The present method allows the user to filter out deflection-noise-affected CARTO® points automatically, to produce CARTO® maps free of artifacts. Deflection noise appears as chaotic peaks when the catheter is deflected by the clinical specialist. This disclosure presents a method to detect such noise in the ECG of CARTO® points and to filter those points out of the CARTO® map.
[0179] FIG. 23A illustrates deflection noise examples recorded in a controlled aquarium environment. These data samples may be provided with random start times and random durations. FIG. 23B illustrates the deflection noise examples of FIG. 23A with an expanded x-axis to zoom in on features from the FIG. 23A depictions. In particular, the bottom plot (orange) 2310 (also shown as zoomed plot 2310.1 in FIG. 23B) represents deflection noise recorded in an aquarium. The bottom plot illustrates three high frequency bursts that indicate the three times the catheter was deflected. The upper plot (green) 2320 (also shown as zoomed plot 2320.1 in FIG. 23B) represents a signal free from deflection or contact noise. The middle plot (blue) 2330 (also shown as zoomed plot 2330.1 in FIG. 23B) illustrates a signal that is the sum of deflection and contact noise.
[0180] Data is collected using ICEG (deflection noise manifests only in the ICEG) and ECG data collection by aggregating lab clinical recordings into a database. This may be a backup of the CARTO® data to CARTONET, for example. A designated GUI allows manual marking of deflection start and end times per electrode of each CARTO® point, consequently dividing the data into two classes. This may operate as described above with respect to a binary classification. One class contains points whose ECG includes deflection noise (DN). The other class includes ECG points free of deflection noise (FR). From the database, as described above with respect to FIG. 4, the model may be trained and evaluated. The deep learning model architecture is based on an LSTM deep network; for example, a three-layer LSTM network captures a feature representation of the deflection noise. The output of the last layer is connected to a dense fully connected layer in order to predict the presence of deflection noise. The model may be trained on 10 GB of BS ECG and IC ECG data.
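A minimal Keras sketch of such a network is shown below: three stacked LSTM layers whose final output feeds a dense fully connected layer predicting the presence of deflection noise (DN) versus a noise-free point (FR). The sequence length, channel count, and layer widths are assumptions of the sketch, not values from the disclosure.

```python
# Illustrative Keras sketch: three stacked LSTM layers whose last output
# feeds a dense fully connected layer predicting deflection noise (DN)
# vs. noise-free (FR). Sequence length, channel count and layer widths
# are assumed values.
from tensorflow.keras import layers, Model

SEQ_LEN, N_IC_CHANNELS = 2500, 12  # assumed: 2.5 s at 1 kHz, 12 IC ECG channels

inp = layers.Input(shape=(SEQ_LEN, N_IC_CHANNELS))
x = layers.LSTM(64, return_sequences=True)(inp)
x = layers.LSTM(64, return_sequences=True)(x)
x = layers.LSTM(64)(x)                          # last layer: final state only
out = layers.Dense(1, activation="sigmoid")(x)  # probability of DN for the point

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```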
[0181] The model allows the classification of each CARTO® point into the (DN, FR) classes during the clinical procedure. This enables the user to filter out DN-classified CARTO® points and to display CARTO® maps free of deflection-noise-induced artifacts.
[0182] FIG. 24 illustrates additional examples of deflection noise 2450. The deflection noise 2450 is illustrated in beats where it occurs, whereas in prior beats the deflection noise does not exist or is reduced. As illustrated, the first two plots 2452, 2454 represent the body surface ECG, and the next ten plots 2456, 2458, 2460, 2462, 2464, 2466, 2468, 2470, 2472, 2474 represent 10 bipolar channels of the intracardiac ECG during the deflection noise. The deflection noise 2450 is illustrated on each of the signals.
[0183] A contact and deflection noise model 2500 is provided in FIG. 25, illustrating an LSTM network. The input data may include unipolar intracardiac ECG, including Penta, Lasso and the like, and a mapping catheter with 2-4 unipolar electrodes. Model 2500 includes an LSTM1 input as an input layer 2510. LSTM1 performs an LSTM operation at step 2520. Then dropout1 occurs at step 2530. LSTM2 is performed at step 2540. Dropout2 occurs at step 2550. LSTM3 is performed at step 2560. Dropout3 occurs at step 2570. A dense1 connected layer occurs at step 2580. Dropout4 occurs at step 2590. A dense2 connected layer occurs at step 2595. The output includes a per-sample classification of 0, 1, 2, where 0 is a normal signal, 1 is contact noise, and 2 is deflection noise.

[0184] A CNN inception model 2600 is provided in FIG. 26. The input data may include unipolar intracardiac ECG, including Penta, Lasso and the like, and a mapping catheter with 2-4 unipolar electrodes. The input data may also include position information (x, y, z), angular movement, and movement or displacement. For example, input 1 may be the ECG signals and input 2 may be the position information (x, y, z), angular movement, and movement or displacement of the catheter.
[0185] Model 2600 includes an input1 in input layer 2605. A 2D convolution is performed at step 2610. This may be performed three-fold in 2610a, 2610b, 2610c. Each respective conv2d (2610a, 2610b, 2610c) is then normalized in a batch normalization at step 2615. These batch normalizations are concatenated in a first concatenation at step 2620. A second set of 2D convolutions is performed at step 2625. This may be performed three-fold in 2625a, 2625b, 2625c. Each respective conv2d (2625a, 2625b, 2625c) is then normalized in a batch normalization at step 2630. These batch normalizations are concatenated in a second concatenation at step 2635. A third set of 2D convolutions is performed at step 2640. This may be performed three-fold in 2640a, 2640b, 2640c. Each respective conv2d (2640a, 2640b, 2640c) is then normalized in a batch normalization at step 2645. These batch normalizations are concatenated in a third concatenation at step 2650. A fourth set of 2D convolutions is performed at step 2655. This may be performed three-fold in 2655a, 2655b, 2655c. Each respective conv2d (2655a, 2655b, 2655c) is then normalized in a batch normalization at step 2660. These batch normalizations are concatenated in a fourth concatenation at step 2665. The data is then flattened in a first flattening at step 2670 and output to a first dense layer at step 2675. Meanwhile, a second input layer is provided at step 2680, which feeds a second dense layer at step 2685. The outputs of the first dense layer at step 2675 and the second dense layer at step 2685 are concatenated at step 2690, and the result is provided to a third dense layer at step 2695. The output includes a per-sample classification of 0, 1, 2, where 0 is a normal signal, 1 is contact noise, and 2 is deflection noise.
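For illustration, the following Keras sketch approximates the inception-style arrangement just described: four blocks of three parallel 2D convolutions, each batch-normalized and concatenated, followed by flattening into a first dense layer; a second input carrying the position/movement features feeds a second dense layer; the two are concatenated and passed to a third dense layer producing the three-class output. The kernel sizes, filter counts, the pooling added to keep the sketch small, and the single classification per input window (rather than per sample) are simplifying assumptions, not details of the disclosure.

```python
# Illustrative Keras sketch of an inception-style classifier along the lines
# of model 2600: four blocks of three parallel Conv2D branches, each batch-
# normalized and concatenated, flattened into a first dense layer; a second
# input with position/movement features feeds a second dense layer; both are
# concatenated into a third dense layer with a three-class output.
# Kernel sizes, filter counts, pooling and input shapes are assumed values.
from tensorflow.keras import layers, Model

N_CHANNELS, N_SAMPLES = 24, 2500   # assumed ECG input shape
N_POS_FEATURES = 9                 # assumed position/angle/displacement features


def conv_block(x, filters=16):
    """Three parallel Conv2D branches, each batch-normalized, then concatenated."""
    branches = []
    for kernel in ((1, 3), (1, 7), (1, 15)):     # assumed kernel sizes
        b = layers.Conv2D(filters, kernel, padding="same", activation="relu")(x)
        branches.append(layers.BatchNormalization()(b))
    return layers.Concatenate()(branches)


ecg_in = layers.Input(shape=(N_CHANNELS, N_SAMPLES, 1), name="input1")
x = ecg_in
for _ in range(4):                      # four convolution/normalization/concat blocks
    x = conv_block(x)
    x = layers.MaxPooling2D((1, 2))(x)  # pooling added only to keep the sketch small
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu", name="dense1")(x)

pos_in = layers.Input(shape=(N_POS_FEATURES,), name="input2")
p = layers.Dense(16, activation="relu", name="dense2")(pos_in)

merged = layers.Concatenate()([x, p])
out = layers.Dense(3, activation="softmax", name="dense3")(merged)
# 0 = normal signal, 1 = contact noise, 2 = deflection noise (per input window)

model = Model([ecg_in, pos_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```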
[0186] FIG. 27 illustrates a second learning phase 2700 that may be implemented to capture the methods described herein. The second learning phase 2700 may be run iteratively. Method 2700 includes recording the noise in the laboratory in step 2710. This recording is described hereinabove. In step 2720, noise samples are added to clean intracardiac signals. The noise samples being added are described in detail above. In step 2730, a model is built. This model may include a neural network, for example. The model may be built on the assumption that the noise is additive, as illustrated in the Equation above, and may be evaluated on real-life data. Steps 2710, 2720 and 2730 have been described hereinabove for various additive signals. At step 2740, the model may be evaluated on general data sets and may be compared to manual annotations. This retraining of the model provides a second learning phase. The retraining may be iteratively repeated until the specificity and sensitivity of the model exceed a desired level.
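The iterative second learning phase may be summarized, purely as a schematic sketch, by the following loop, in which the callables passed in stand in for the recording, augmentation, model-building, and evaluation steps described above, and the sensitivity/specificity targets are placeholder values rather than values from the disclosure.

```python
# Schematic sketch of the iterative second learning phase (steps 2710-2740).
# The callables passed in stand in for the recording, augmentation, training
# and evaluation steps described above; the target values are placeholders.
def second_learning_phase(record_lab_noise, embed_noise, build_model, evaluate,
                          clean_signals, general_dataset, manual_annotations,
                          target_sensitivity=0.95, target_specificity=0.95,
                          max_iterations=10):
    """Repeat record/augment/train/evaluate until the targets are met."""
    model = None
    for _ in range(max_iterations):
        noise_clips = record_lab_noise()                    # step 2710
        training_set = [embed_noise(signal, noise_clips)    # step 2720
                        for signal in clean_signals]
        model = build_model(training_set)                   # step 2730
        sensitivity, specificity = evaluate(model,          # step 2740
                                            general_dataset,
                                            manual_annotations)
        if (sensitivity >= target_sensitivity
                and specificity >= target_specificity):
            break
    return model
```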
[0187] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random-access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

Claims

What is claimed is:
1. A method for detecting and reducing noise in an ECG environment, the method comprising: inputting data regarding the ECG and ECG noise into a database, the database including data on other ECG patients and their respective signals; modeling the noise of the ECG in a quiet environment to provide samples to train a model to identify noise in an ECG system; and identifying the noise signals within the ECG data and removing the noise from the signals.
2. The method of claim 1 wherein the noise includes per site noise signals.
3. The method of claim 1 wherein the noise includes additive noises.
4. The method of claim 1 wherein the noise includes contact noise.
5. The method of claim 1 wherein the noise includes deflection noise.
6. The method of claim 1 wherein the quiet environment includes an aquarium.
7. The method of claim 1 wherein the identifying includes per site noise.
8. A method for removing noise specific to a laboratory, the method comprising: building a laboratory profile by measuring signals within the laboratory; designing a laboratory specific filter to apply to signals collected within the laboratory; and measuring signals within the lab and applying the designed filter to the signals to remove laboratory specific noise from the signals.
9. The method of claim 8 wherein the noise includes per site noise signals.
10. The method of claim 8 wherein the noise includes additive noises.
11. The method of claim 8 wherein the noise includes contact noise.
12. The method of claim 8 wherein the noise includes deflection noise.
13. The method of claim 8 wherein the quiet environment includes an aquarium.
14. A system for detecting and reducing noise in an ECG environment, the system comprising: a plurality of mapping catheters capable of measuring signals in an ECG; a plurality of penta-ray catheters capable of measuring signals in the ECG; and a signal processor and database cooperatively operating to process and record signals measured on at least a portion of the plurality of mapping catheters and plurality of penta-ray catheters, at least a portion of the plurality of mapping catheters and plurality of penta-ray catheters inputting data regarding the ECG and ECG noise into the database, the database including data on other ECG patients and their respective signals; the processor modeling the noise of the ECG in a quiet environment to provide samples to train a model to identify noise in an ECG system; and the processor identifying the noise signals within the ECG data and removing the noise from the signals.
15. The system of claim 14 wherein the noise includes additive noises.
16. The system of claim 14 wherein the noise includes contact noise.
17. The system of claim 14 wherein the noise includes deflection noise.
18. The system of claim 14 wherein the quiet environment includes an aquarium.
19. The system of claim 14 wherein the identifying includes per site noise.
20. The system of claim 14, wherein the signals with the noise removed are output as measured ECG signals.
EP21801993.3A 2020-10-13 2021-10-13 Intracardiac ecg noise detection and reduction Pending EP4228510A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063091186P 2020-10-13 2020-10-13
PCT/IB2021/059387 WO2022079622A1 (en) 2020-10-13 2021-10-13 Intracardiac ecg noise detection and reduction

Publications (1)

Publication Number Publication Date
EP4228510A1 true EP4228510A1 (en) 2023-08-23

Family

ID=78500660

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21801993.3A Pending EP4228510A1 (en) 2020-10-13 2021-10-13 Intracardiac ecg noise detection and reduction

Country Status (5)

Country Link
EP (1) EP4228510A1 (en)
JP (1) JP2023544895A (en)
CN (1) CN116367779A (en)
IL (1) IL301721A (en)
WO (2) WO2022079621A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140278171A1 (en) * 2013-03-15 2014-09-18 Robert James Kahlke Frequency Adaptive Line Voltage Filters
US10751001B2 (en) * 2013-12-03 2020-08-25 General Electric Company Systems and methods for tracking and analysis of electrical-physiological interference

Also Published As

Publication number Publication date
WO2022079621A1 (en) 2022-04-21
JP2023544895A (en) 2023-10-25
CN116367779A (en) 2023-06-30
WO2022079622A1 (en) 2022-04-21
IL301721A (en) 2023-05-01


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230331

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)