WO2021152603A1 - System and method for classification of strain echocardiograms - Google Patents

System and method for classification of strain echocardiograms

Info

Publication number
WO2021152603A1
Authority
WO
WIPO (PCT)
Prior art keywords
video segments
spatio-temporal measurements
measurements
systolic
Prior art date
Application number
PCT/IL2021/050119
Other languages
English (en)
Inventor
Dan Adam
Hanan KHAMIS
Zvi Friedman
Amir YAHAV
Hannah ORNSTEIN
Alon BEGIN
Ehud TAL
Grigoriy ZURAKHOV
Avraham Nahmany
Ivgeni KUCHEROV
Nahum Smirin
Original Assignee
Technion Research & Development Foundation Limited
Priority date
Filing date
Publication date
Application filed by Technion Research & Development Foundation Limited filed Critical Technion Research & Development Foundation Limited
Publication of WO2021152603A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/0014 Biomedical image inspection using an image reference approach
    • G06T 7/0016 Biomedical image inspection using an image reference approach involving temporal comparison
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0883 Detecting organic movements or changes, e.g. tumours, cysts, swellings for diagnosis of the heart
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/48 Diagnostic techniques
    • A61B 8/485 Diagnostic techniques involving measuring strain or elastic properties
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac

Definitions

  • the present invention relates to the field of machine learning methods. More particularly, the present invention relates to machine learning classification of strain echocardiograms.
  • Echocardiographic assessment of the left ventricular function is a highly- specialized task typically performed manually and with substantial inter-rater variability. Clinical utility depends entirely on the skill of users who are trained in image acquisition, analysis, and interpretation.
  • Automated machine-learning methods may aid in the interpretation of a high volume of cardiac ultrasound images, reduce variability, and improve diagnostic accuracy. Therefore, machine learning methods offer the potential to improve the accuracy and reliability of echocardiography-based detection, which is central to modern diagnosis and management of heart disease.
  • a system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program code, the program code executable by the at least one hardware processor to receive, as input, a plurality of video segments associated with echocardiogram of a plurality of human subjects; calculate, from each of said video segments, a set of spatio-temporal measurements representing left ventricle (LV) strain over at least one cardiac cycle; at a training stage, train a machine learning model on a training set comprising: (i) said sets of spatio-temporal measurements, and (ii) labels associated with a classification of each of said spatio-temporal measurements as one of: normal, pathological, and artefactual; and at an inference stage, apply said trained machine learning model to a target set of spatio-temporal measurements, to classify said target set as one of: normal, pathological, and artefactual.
  • a method comprising: receiving, as input, a plurality of video segments associated with echocardiogram of a plurality of human subjects; calculating, from each of said video segments, a set of spatio-temporal measurements representing left ventricle (LV) strain over at least one cardiac cycle; at a training stage, training a machine learning model on a training set comprising: (i) said sets of spatio-temporal measurements, and (ii) labels associated with a classification of each of said spatio-temporal measurements as one of: normal, pathological, and artefactual; and at an inference stage, applying said trained machine learning model to a target set of spatio-temporal measurements, to classify said target set as one of: normal, pathological, and artefactual.
  • a computer program product comprising program code executable to: receive, as input, a plurality of video segments associated with echocardiogram of a plurality of human subjects; calculate, from each of said video segments, a set of spatio-temporal measurements representing left ventricle (LV) strain over at least one cardiac cycle; at a training stage, train a machine learning model on a training set comprising: (i) said sets of spatio-temporal measurements, and (ii) labels associated with a classification of each of said spatio-temporal measurements as one of: normal, pathological, and artefactual; and at an inference stage, apply said trained machine learning model to a target set of spatio-temporal measurements, to classify said target set as one of: normal, pathological, and artefactual.
  • the calculating comprises selecting at least one of: said video segments representing long-axis views and said video segments representing short-axis views.
  • the calculating further comprises classifying said selected video segments into one of: 2 chamber view, 3 chamber view, and 4 chamber view.
  • the measurements comprise at least one feature associated with said LV strain selected from the group consisting of: time-to-peak, time-to-post-systolic-peak, systolic-peak value, and post-systolic peak value.
  • the calculating further comprises segmenting, in each frame of said selected video segments, said LV into a plurality of regions of interest (ROI) selected from the group consisting of: apical segment, mid-wall segment, basal segment, epicardium layer, myocardium layer, and endocardium layer.
  • the set of spatio-temporal measurements represent measurements associated with at least some of said ROIs.
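As a minimal sketch of how segmental ROIs might be derived, assuming a tracked wall contour ordered from base to apex and an equal arc-length split into basal, mid-wall, and apical segments (the function and the equal-split rule are illustrative assumptions, not the disclosed segmentation network):

```python
import numpy as np

def split_wall_into_segments(contour, n_segments=3):
    """Split an ordered LV wall contour (base -> apex) into equal
    arc-length segments, e.g. basal, mid-wall and apical ROIs."""
    contour = np.asarray(contour, dtype=float)
    # Cumulative arc length along the contour.
    steps = np.linalg.norm(np.diff(contour, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(steps)])
    total = arc[-1]
    # Assign each point to a segment by its normalized arc position.
    labels = np.minimum((arc / total * n_segments).astype(int), n_segments - 1)
    return [contour[labels == k] for k in range(n_segments)]

# Hypothetical straight-line wall of 10 points for illustration.
wall = np.stack([np.zeros(10), np.linspace(0.0, 9.0, 10)], axis=1)
basal, mid, apical = split_wall_into_segments(wall)
```

The same split could be applied per layer (epicardium, myocardium, endocardium) to obtain transmural ROIs.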
  • Fig. 1 illustrates the functional steps in an automated assessment of cardiac function based on machine learning-based analysis of echocardiography results in a subject, in accordance with some embodiments of the present invention.
  • Fig. 2A shows standard echocardiogram views.
  • Fig. 2B shows an example of a machine learning process according to some embodiments of the present invention.
  • FIG. 2C shows an example of a pre-processing stage according to some embodiments
  • FIG. 3 shows exemplary left ventricle strain curve analysis, in accordance with some embodiments.
  • FIG. 4 shows a block diagram of an exemplary computing device, according to an embodiment of the present invention.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
  • Ultrasound imaging of the heart is known as echocardiography.
  • a sonographer typically rotates an ultrasound transducer at different windows in order to get different cross sections of the heart. These cross-sections allow a medical practitioner to visualize different heart structures at different angles.
  • the long axis views and the short axis views constitute the standard two categories of these cross-sections. Although many of the same cardiac structures are shown in both categories, long axis views and short axis views are orthogonal to one another.
  • Long axis views taken from an apical imaging window include the two chamber (2CH), three chamber (3CH), and four chamber views (4CH). All three of these long axis views include the left ventricle, left atrium, and mitral valve.
  • the right ventricle, right atrium, and tricuspid valve can be visualized in the four chamber view, while the aortic valve is shown in the three chamber view.
  • Short axis views from the left parasternal imaging window include the mitral valve (MV), papillary muscle (PM), and apical (AP) views. All three of these short axis views show the left ventricle and, partially, the right ventricle walls, whereas the mitral valve is included in the MV view and the papillary muscles are shown in the PM view.
  • Echocardiograms may thus be used to assess both cardiac structure and function and to detect diseases, wherein different analyses rely on different views.
  • the four chamber (4CH) view is commonly used to calculate the left ventricular volume
  • the apical view is commonly used to diagnose hypertrophic cardiomyopathy which is a genetic heart disease that causes thickening of the heart muscle.
  • the same echo view may appear differently in different patients, as body composition can affect image quality; for example, higher levels of body adipose tissue may lead to lower quality images.
  • the same view may contain different background information if different ultrasound machines were used. Different sonographers may capture the same views using slightly different techniques which can affect the view as well. Additionally, ultrasounds suffer from speckle noise so even frames of the same echo clip appear differently from one another.
  • Speckle tracking echocardiography is a non-invasive technique for the assessment of left ventricle (LV) function, which mainly provides global and regional time strain curves (TSCs). Amplitude and profile analysis of these TSCs, such as global and segmental peak systolic strain, allows detection of various malfunctions of the LV myocardium.
  • the present disclosure provides for an automated machine learning-based method for calculating and classifying strain curves in echocardiography imaging.
  • the present disclosure provides for automated assessment of left ventricular (LV) morphology and function. In some embodiments, the assessment is based, at least in part, on calculating and classifying strain curves in echocardiograms, e.g., into normal, pathological, and artefactual strain curves.
  • the classification results of the present disclosure are used in clinical decision making in the context of, e.g., cardiac risk assessment and diagnoses.
  • the present disclosure provides for, at a first stage, automatic classification of echocardiograms into the different anatomical views using a trained machine learning classifier.
  • the present disclosure provides for estimating mechanical changes of the myocardium along a cardiac cycle, to produce segmental and/or transmural strain measurements.
  • the present disclosure provides for a supervised machine learning model for the classification of spatio-temporal strain measurements (e.g., strain curves) into artefactual or physiological patterns, to allow reliable determination prior to clinical diagnosis.
  • the supervised learning model allows determining the reliability of transmural and segmental TSCs.
  • Fig. 1 illustrates the functional steps in an automated assessment of cardiac function based on machine learning-based analysis of echocardiography results in a subject, in accordance with some embodiments of the present invention.
  • the present disclosure receives, as input, a plurality of echocardiography imaging results associated with a plurality of subjects.
  • the input comprises a plurality of echocardiogram video segments associated with the subjects, wherein each cardiogram represents the heart structure of a subject during, e.g., one or more cardiac cycles.
  • the present disclosure provides for an image preprocessing stage of classifying the plurality of cardiograms into a plurality of views commonly associated with echocardiograms.
  • the preprocessing stage provides for classifying the echocardiograms into long axis views and short axis views.
  • the long- and short-axis classification may be based on a machine learning approach.
  • Fig. 2B illustrates an example of a machine learning process according to some embodiments, which employs several steps as will be further detailed with reference to Figs. 2B and 2C below.
  • the image preprocessing stage further provides for classifying the long axis views into one of several common views, e.g., 2 chamber, 3 chamber, and 4 chamber views (as can be seen in Fig. 2).
  • the image preprocessing stage is performed by applying one or more trained machine learning models, e.g., 2-dimensional Convolutional Neural Network (CNN), to the echocardiograms.
  • the classification into views is based, at least in part, on image transformations (e.g. morphological transformations) which enhance intra-class similarities vs. inter-class differences.
  • image preprocessing is performed with respect to each frame in the echocardiograms.
  • the image preprocessing stage further provides for data cleaning, normalization, and/or standardization, for example, by employing classic adaptive filters, partial differential equation-based filters, filtering algorithms in the transform domain, and/or speckle-removing neural networks.
  • the preprocessing comprises removing those cardiograms containing, e.g., color representations and/or divided (multiple) images.
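As one hedged illustration of the cleaning step, a plain median filter (a classic speckle reducer) followed by min-max normalization; the adaptive, PDE-based, transform-domain, or neural filters mentioned above would stand in for this simple choice in practice:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_frame(frame):
    """Illustrative cleaning step: a 3x3 median filter as a simple
    speckle reducer, followed by min-max normalization to [0, 1]."""
    filtered = median_filter(frame.astype(float), size=3)
    lo, hi = filtered.min(), filtered.max()
    return (filtered - lo) / (hi - lo) if hi > lo else np.zeros_like(filtered)

# Hypothetical noisy grayscale frame for illustration.
noisy = np.random.default_rng(0).integers(0, 255, size=(32, 32)).astype(float)
clean = preprocess_frame(noisy)
```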
  • the present disclosure then provides for a subsequent image processing stage which segments the heart structures in the classified views into, e.g., regions of interest (ROI).
  • the segmentation stage is performed by applying one or more trained machine learning models, e.g., a CNN, to the echocardiograms.
  • An additional convolutional network may be included in the framework, which may act as a regularization agent (an auxiliary network), allowing the main CNN model to better converge over the training data, while also improving generalization capabilities.
  • the regularization agent may be for example, a Convolutional Auto-Encoder (CAE).
  • the segmentation is based, at least in part, on a detection of the endocardial boundary in a frame of the echocardiogram.
  • the ROI segmentation comprises segmenting a left ventricle (LV).
  • the ROI segmentation further comprises, e.g., segmental and/or transmural regions of the LV.
  • the segmental ROIs comprise specified regions of the LV, e.g., left and/or right apical, mid-wall, and/or basal segments.
  • transmural ROIs comprise specified layers, e.g., epicardium, myocardium, and/or endocardium layers.
  • the segmentation is performed with respect to each frame in the echocardiograms.
  • the segmented ROIs are continuously tracked frame-to-frame within each cardiogram video segment.
  • the segmented views are used to calculate a plurality of strain curves that are region- and/or layer-specific.
  • the strain curves are calculated with respect to each frame in cardiogram video segments.
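The frame-wise strain computation above can be sketched as Lagrangian strain over the length of a tracked segment (the helper name and the example lengths are illustrative; the disclosed tracking itself is not shown):

```python
import numpy as np

def time_strain_curve(lengths):
    """Lagrangian strain per frame from a tracked segment length L(t):
    strain(t) = (L(t) - L(0)) / L(0). Shortening during systole yields
    negative longitudinal strain, as is conventional."""
    lengths = np.asarray(lengths, dtype=float)
    return (lengths - lengths[0]) / lengths[0]

# Hypothetical tracked segment lengths (mm) over one cardiac cycle.
lengths = [50.0, 45.0, 40.0, 42.0, 50.0]
tsc = time_strain_curve(lengths)  # peak systolic strain of -0.2, i.e. -20%
```

Computing one such curve per ROI (segmental and/or transmural) yields the region- and layer-specific strain curves referred to above.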
  • the calculations may comprise strain-related features, e.g., peak global longitudinal systolic strain (PGLSS) and/or peak regional longitudinal systolic strains (PRLSS).
  • PGLSS may be calculated for, e.g., full wall width and/or any specified layer, e.g., the sub-endocardial layer only.
  • PRLSS may be calculated for, e.g., one or more of the apical, mid, and basal regions.
  • a machine learning model may be trained to classify strain curves, such as those produced in step 108, into one or more classes, e.g., normal, pathological, and/or artefactual.
  • a supervised machine learning model of the present disclosure provides for a classification of spatio-temporal strain measurements into artefactual or physiological patterns, to allow reliable determination prior to clinical diagnosis.
  • a machine learning model of the present disclosure may be trained on a training set comprising the strain curves produced in step 108 with respect to the plurality of echocardiograms.
  • the trained machine learning model may be applied to a target set of strain curves and/or strain-related features, to classify target curves as one of normal, pathological and artefactual.
  • the machine learning model is trained on a training set comprising a plurality of echocardiograms associated with a plurality of subjects.
  • the training set comprises a plurality of strain curves associated with at least some of said echocardiograms.
  • the training set comprises a set of features representing each strain curve, e.g., one or more of time-to-peak, time-to-post- systolic-peak, systolic-peak value, post-systolic peak value.
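The per-curve features listed above could be extracted as in the following sketch, assuming the end-systole frame index is known (e.g., from valve-event timing); the helper name and that assumption are illustrative, not from the disclosure:

```python
import numpy as np

def strain_features(tsc, es_index):
    """Extract time-to-peak, systolic-peak value, time-to-post-systolic-peak
    and post-systolic peak value from a time strain curve, assuming
    es_index marks end-systole. Peaks are taken as minima, since
    longitudinal shortening yields negative strain."""
    tsc = np.asarray(tsc, dtype=float)
    sys_peak_t = int(np.argmin(tsc[:es_index + 1]))
    post = tsc[es_index + 1:]
    post_peak_t = es_index + 1 + int(np.argmin(post)) if post.size else None
    return {
        "time_to_peak": sys_peak_t,
        "systolic_peak_value": tsc[sys_peak_t],
        "time_to_post_systolic_peak": post_peak_t,
        "post_systolic_peak_value": tsc[post_peak_t] if post_peak_t is not None else None,
    }

# Hypothetical curve with a post-systolic peak after end-systole (index 2).
feats = strain_features([0.0, -0.1, -0.2, -0.22, -0.05, 0.0], es_index=2)
```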
  • the strain curves and/or extracted and/or selected features are labeled as, e.g., normal, pathological, and artefactual strain curves.
  • the strain curves are labeled manually, e.g., by a specialist, based on a combination of, e.g., visual observation and/or characteristic features defined in a labeling protocol.
  • a trained machine learning model of the present disclosure may be applied to calculated strain curves associated with a target one or more echocardiograms of a subject, to classify the strain curves into, e.g., normal, pathological, and artefactual strain curves.
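The train-then-classify flow above can be sketched with a toy stand-in for the disclosed supervised model, here a nearest-centroid classifier over strain-curve feature vectors (the function names and the 2-D features are hypothetical; any supervised learner, e.g. a neural network, could be trained on the same labeled set):

```python
import numpy as np

CLASSES = ["normal", "pathological", "artefactual"]

def train_centroids(X, y):
    """Training stage: one centroid per labeled class of feature vectors."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    return {c: X[y == c].mean(axis=0) for c in CLASSES if (y == c).any()}

def classify(centroids, x):
    """Inference stage: assign the class of the nearest centroid."""
    x = np.asarray(x, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical 2-D feature vectors (time-to-peak, systolic-peak value).
X = [[10, -0.20], [11, -0.19], [14, -0.05], [15, -0.04]]
y = ["normal", "normal", "pathological", "pathological"]
model = train_centroids(X, y)
label = classify(model, [10.5, -0.21])  # nearest to the "normal" centroid
```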
  • Figs. 2B and 2C show examples of a machine learning method according to some embodiments of the present invention.
  • the process may start with receiving an input video (Step 2010) and preprocessing it in the preprocessing stage (Step 2020), further described with reference to Fig. 2C below.
  • the preprocessed video is then decimated (Step 2030), for example, into 5 smaller clips.
  • each of the clips (e.g., 5 clips) is processed to extract features, and each clip's resultant features are then passed through a fully connected neural network which acts as a classifier (Step 2050).
  • this results in a number of vectors equal to the number of clips (in the example of Fig. 2B, 5 vectors are obtained), each containing the network's class predictions for the respective clip.
  • the final step is aggregating the results through a decision function (Step 2060), which determines the final clip classification (Step 2070).
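The clip pipeline of Fig. 2B (decimation, per-clip class scores, aggregation through a decision function) can be sketched as follows; averaging the per-clip scores before taking the arg-max is one plausible decision function, stated here as an assumption rather than the disclosed one:

```python
import numpy as np

def decimate(frames, n_clips=5):
    """Step 2030: split a video (sequence of frames) into n_clips shorter clips."""
    return np.array_split(np.asarray(frames), n_clips)

def decide(per_clip_scores):
    """Step 2060: average the per-clip class-score vectors and take the
    arg-max as the final clip classification (Step 2070). Majority voting
    over per-clip arg-maxes would be an equally plausible choice."""
    return int(np.argmax(np.mean(np.asarray(per_clip_scores, dtype=float), axis=0)))

# Hypothetical class-score vectors from 5 clips over 3 classes.
scores = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
          [0.8, 0.1, 0.1], [0.5, 0.4, 0.1]]
clips = decimate(list(range(10)))   # 5 clips of 2 "frames" each
final_class = decide(scores)        # class 0 wins on the averaged scores
```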
  • the preprocessing may include the following steps:
  • all frames may be extracted from the input video file and converted from RGB to grayscale (Step 2024).
  • a central portion of each frame may be cropped; for example, the central 300x300 pixels may be cropped from each frame.
  • each frame may be downsampled to, for example, 100x100 pixels by bicubic interpolation.
  • the downsampled frames may then serve as input to step 2030 in Fig. 2B.
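A minimal sketch of the frame preprocessing just described (grayscale conversion, central 300x300 crop, bicubic downsampling to 100x100); `scipy.ndimage.zoom` with cubic spline order and the luma weights are stand-in choices, not specified by the disclosure:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(frame_rgb, crop=300, out=100):
    """RGB -> grayscale, central crop (e.g. 300x300), then cubic-spline
    downsampling (e.g. to 100x100) as a stand-in for bicubic interpolation."""
    # Standard luma weights for the RGB-to-grayscale conversion.
    gray = frame_rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
    h, w = gray.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    cropped = gray[top:top + crop, left:left + crop]
    return zoom(cropped, out / crop, order=3)

# Hypothetical blank 480x640 RGB frame for illustration.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
small = preprocess(frame)
```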
  • Computing device 400 may include a controller 402 that may be, for example, a central processing unit processor (CPU), a chip or any suitable computing or computational device, an operating system 404, a memory 420, a storage 430, at least one input device 435 and at least one output device 440.
  • Operating system 404 may be or may include any code segment designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 400, for example, scheduling execution of programs. Operating system 404 may be a commercial operating system.
  • Memory 420 may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
  • Memory 420 may be or may include a plurality of possibly different memory units.
  • Executable code 425 may be any executable code, e.g., an application, a program, a process, task or script. Executable code 425 may be executed by controller 402 possibly under control of operating system 404. For example, executable code 425 may be an application for image classification. Where applicable, executable code 425 may carry out operations described herein in real-time. Computing device 400 and executable code 425 may be configured to update, process and/or act upon information at the same rate the information, or a relevant event, are received. In some embodiments, more than one computing device 400 may be used. For example, a plurality of computing devices that include components similar to those included in computing device 400 may be connected to a network and used as a system. For example, classification of echocardiograms may be performed in real-time by executable code 425 when executed on one or more computing devices such as computing device 400.
  • Storage 430 may be or may include, for example, a hard disk drive, a floppy disk drive, a Compact Disk (CD) drive, a CD-Recordable (CD-R) drive, a universal serial bus (USB) device or other suitable removable and/or fixed storage unit.
  • Content may be stored in storage 430 and may be loaded from storage 430 into memory 420 where it may be processed by controller 402.
  • In some embodiments, some of the components shown in Fig. 4 may be omitted.
  • memory 420 may be a non-volatile memory having the storage capacity of storage 430. Accordingly, although shown as a separate component, storage 430 may be embedded or included in memory 420.
  • Input devices 435 may be or may include a mouse, a keyboard, a touch screen or pad, different sensors connected to computing device 400, such as an imager, or any other suitable input device. It will be recognized that any suitable number of input devices may be operatively connected to computing device 400 as shown by block 435.
  • Output devices 440 may include one or more displays, speakers and/or any other suitable output devices. It will be recognized that any suitable number of output devices may be operatively connected to computing device 400 as shown by block 440.
  • Any applicable input/output (I/O) devices may be connected to computing device 400 as shown by blocks 435 and 440.
  • For example, a network interface card (NIC), a modem, a printer or facsimile machine, or a universal serial bus (USB) connection may be operatively connected to computing device 400.
  • Embodiments of the invention may include an article such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which, when executed by a processor or controller, carry out methods disclosed herein.
  • a storage medium such as memory 420, computer-executable instructions such as executable code 425 and a controller such as controller 402.
  • the non-transitory storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), rewritable compact disk (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs), such as a dynamic RAM (DRAM), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, including programmable storage devices.
  • a system may include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi-purpose or specific processors or controllers, a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units.
  • a system may additionally include other suitable hardware components and/or software components.
  • a system may include or may be, for example, a personal computer, a desktop computer, a mobile computer, a laptop computer, a notebook computer, a terminal, a workstation, a server computer, a Personal Digital Assistant (PDA) device, a tablet computer, a network device, or any other suitable computing device.
  • the computing device of Fig. 4, or components thereof, may for example be used in various steps of the method of Fig. 1. For example, at step 100, inputs such as echocardiography imaging results associated with a plurality of subjects may be received via an input device, such as input device 435 in Fig. 4, and at steps 108 and 110, the machine learning model may run on a processor or controller, such as controller 402 in Fig. 4.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Cardiology (AREA)
  • Physiology (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

A system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having program code stored thereon, the program code being executable by the at least one hardware processor to: receive, as input, a plurality of video segments associated with an echocardiogram of a plurality of human subjects; compute, from each of said video segments, a set of spatio-temporal measurements representing the deformation of the left ventricle (LV) over at least one cardiac cycle; at a training stage, train a machine learning model on a training set comprising: (i) said sets of spatio-temporal measurements, and (ii) labels associated with a classification of each of said spatio-temporal measurements as: normal, pathological, or artifactual; and at an inference stage, apply said trained machine learning model to a target set of spatio-temporal measurements, to classify said target set as: normal, pathological, or artifactual.
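The claimed pipeline (per-cycle strain measurements → trained classifier → normal / pathological / artifact label) can be illustrated with a minimal sketch. This is not the patented implementation: the two hand-picked features (peak systolic strain, frame-to-frame roughness), the nearest-centroid classifier, the synthetic curves, and all function names (`strain_features`, `make_curve`, etc.) are assumptions standing in for the actual machine-learning model and speckle-tracking-derived measurements described in the claims.

```python
# Illustrative sketch only -- NOT the patent's actual model. Every name and
# design choice here is a hypothetical stand-in for the claimed ML pipeline.
import math
import random

def strain_features(curve):
    """Reduce one LV strain-vs-time curve (one cardiac cycle) to two features:
    peak systolic strain (the most negative value, since longitudinal strain
    dips below zero at peak systole) and a crude roughness score (the sum of
    absolute frame-to-frame differences, which noisy tracking inflates)."""
    peak = min(curve)
    roughness = sum(abs(b - a) for a, b in zip(curve, curve[1:]))
    return (peak, roughness)

def train(examples):
    """Training stage (nearest-centroid stand-in for the claimed model):
    examples is a list of (curve, label) pairs, labels drawn from
    {'normal', 'pathological', 'artifact'}. Returns label -> feature centroid."""
    by_label = {}
    for curve, label in examples:
        by_label.setdefault(label, []).append(strain_features(curve))
    return {
        label: tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
        for label, feats in by_label.items()
    }

def classify(model, curve):
    """Inference stage: assign the label whose feature centroid is nearest."""
    f = strain_features(curve)
    return min(model, key=lambda label: math.dist(f, model[label]))

def make_curve(peak_strain, noise_amp, n=20):
    """Synthetic strain curve: a smooth systolic dip to -peak_strain percent,
    optionally corrupted by tracking-artifact-like noise."""
    return [-peak_strain * math.sin(math.pi * t / (n - 1))
            + random.uniform(-noise_amp, noise_amp) for t in range(n)]

random.seed(0)
training = ([(make_curve(20, 0.5), "normal") for _ in range(10)]
            + [(make_curve(5, 0.5), "pathological") for _ in range(10)]
            + [(make_curve(20, 5.0), "artifact") for _ in range(10)])
model = train(training)
predictions = [classify(model, make_curve(20, 0.5)),    # deep, smooth dip
               classify(model, make_curve(5, 0.5)),     # shallow, smooth dip
               classify(model, make_curve(20, 5.0))]    # deep but very noisy
```

In practice the cited literature suggests the trained model would be a supervised learner (e.g. a neural network) over full segmental strain curves rather than two scalar features; the sketch only mirrors the claim's train-then-infer structure and its three-way label set.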
PCT/IL2021/050119 2020-02-02 2021-02-02 System and method for classification of strain echocardiograms WO2021152603A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062969109P 2020-02-02 2020-02-02
US62/969,109 2020-02-02

Publications (1)

Publication Number Publication Date
WO2021152603A1 (fr) 2021-08-05

Family

ID=77078617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2021/050119 WO2021152603A1 (fr) 2021-02-02 System and method for classification of strain echocardiograms

Country Status (1)

Country Link
WO (1) WO2021152603A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019178404A1 (fr) * 2018-03-14 2019-09-19 The Regents Of The University Of California Évaluation automatisée de la fonction cardiaque par échocardiographie

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KHAMIS HANAN, YAHAV AMIR, FRIEDMAN ZVI, D'HOOGE JAN, ADAM DAN: "Supervised learning approach for tracking quality determination of transmural and segmental time strain curves: A feasibility study", JOURNAL OF BIOMEDICAL ENGINEERING AND INFORMATICS, vol. 3, no. 2, 26 June 2017 (2017-06-26), pages 43 - 54, XP055845228, DOI: 10.5430/jbei.v3n2p43 *
MADANI ALI, ARNAOUT RAMY; MOFRAD MOHAMMAD; ARNAOUT RIMA: "Fast and accurate view classification of echocardiograms using deep learning", NPJ DIGITAL MEDICINE, vol. 1, no. 6, 21 March 2018 (2018-03-21), pages 1 - 8, XP055651829, DOI: 10.1038/s41746-017-0013-1 *
TABASSIAN MAHDI; SUNDERJI IMRAN; ERDEI TAMAS; SANCHEZ-MARTINEZ SERGIO; DEGIOVANNI ANNA; MARINO PAOLO; FRASER ALAN G; D'HOOGE JAN: "Diagnosis of heart failure with preserved ejection fraction: machine learning of spatiotemporal variations in left ventricular deformation", JOURNAL OF THE AMERICAN SOCIETY OF ECHOCARDIOGRAPHY, vol. 31, no. 12, 1 December 2018 (2018-12-01), pages 1272 - 84, XP085552384, DOI: 10.1016/j.echo.2018.07.013 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024116201A1 (fr) * 2022-11-29 2024-06-06 Aarogyaai Innovations Pvt. Ltd. Classification of pathogen lineages and sub-lineages
CN115761381A (zh) * 2022-12-14 2023-03-07 安徽鲲隆康鑫医疗科技有限公司 Echocardiogram classification method and classification device
CN115761381B (zh) * 2022-12-14 2023-11-07 安徽鲲隆康鑫医疗科技有限公司 Echocardiogram classification method and classification device

Similar Documents

Publication Publication Date Title
Kusunose et al. A deep learning approach for assessment of regional wall motion abnormality from echocardiographic images
Litjens et al. State-of-the-art deep learning in cardiovascular image analysis
US10702247B2 (en) Automatic clinical workflow that recognizes and analyzes 2D and doppler modality echocardiogram images for automated cardiac measurements and the diagnosis, prediction and prognosis of heart disease
Kusunose et al. Utilization of artificial intelligence in echocardiography
Zhu et al. Comparative analysis of active contour and convolutional neural network in rapid left-ventricle volume quantification using echocardiographic imaging
US11301996B2 (en) Training neural networks of an automatic clinical workflow that recognizes and analyzes 2D and doppler modality echocardiogram images
Hernandez et al. Deep learning in spatiotemporal cardiac imaging: A review of methodologies and clinical usability
Zamzmi et al. Harnessing machine intelligence in automatic echocardiogram analysis: Current status, limitations, and future directions
US20220012875A1 (en) Systems and Methods for Medical Image Diagnosis Using Machine Learning
CN112435247B (zh) Patent foramen ovale detection method, system, terminal, and storage medium
CN113012173A (zh) Cardiac-MRI-based cardiac segmentation model and pathology classification model training, cardiac segmentation, and pathology classification method and apparatus
Liu et al. Cardiac magnetic resonance image segmentation based on convolutional neural network
Kim et al. Automatic segmentation of the left ventricle in echocardiographic images using convolutional neural networks
WO2021152603A1 (fr) System and method for classification of strain echocardiograms
Shalbaf et al. Automatic classification of left ventricular regional wall motion abnormalities in echocardiography images using nonrigid image registration
Moal et al. Explicit and automatic ejection fraction assessment on 2D cardiac ultrasound with a deep learning-based approach
Farhad et al. Cardiac phase detection in echocardiography using convolutional neural networks
WO2024126468A1 (fr) Echocardiogram classification by machine learning
Ragnarsdottir et al. Interpretable prediction of pulmonary hypertension in newborns using echocardiograms
Elwazir et al. Fully automated mitral inflow doppler analysis using deep learning
Wu et al. Biomedical video denoising using supervised manifold learning
Nageswari et al. Preserving the border and curvature of fetal heart chambers through TDyWT perspective geometry wrap segmentation
Shalbaf et al. Automatic assessment of regional and global wall motion abnormalities in echocardiography images by nonlinear dimensionality reduction
Farhad et al. Deep learning based cardiac phase detection using echocardiography imaging
Patel et al. Arterial parameters and elasticity estimation in common carotid artery using deep learning approach

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21747151; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21747151; Country of ref document: EP; Kind code of ref document: A1)