EP3261024B1 - Method and system for vascular disease detection using recurrent neural networks - Google Patents

Method and system for vascular disease detection using recurrent neural networks

Info

Publication number
EP3261024B1
EP3261024B1 (Application EP17174063.2A)
Authority
EP
European Patent Office
Prior art keywords
cross
lstm
resolution
section image
sampling point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP17174063.2A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP3261024A3 (en)
EP3261024A2 (en)
Inventor
Dorin Comaniciu
Mehmet A. Gulsun
Yefeng Zheng
Puneet Sharma
Bogdan Georgescu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Healthcare GmbH
Original Assignee
Siemens Healthcare GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare GmbH filed Critical Siemens Healthcare GmbH
Publication of EP3261024A2 publication Critical patent/EP3261024A2/en
Publication of EP3261024A3 publication Critical patent/EP3261024A3/en
Application granted granted Critical
Publication of EP3261024B1 publication Critical patent/EP3261024B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/191Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19173Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/248Character recognition characterised by the processing or recognition method involving plural approaches, e.g. verification by template match; Resolving confusion among similar patterns, e.g. "O" versus "Q"
    • G06V30/2504Coarse or fine approaches, e.g. resolution of ambiguities or multiscale approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30172Centreline of tubular or elongated structure

Definitions

  • the present invention relates to vascular disease detection and characterization in medical images, and more particularly, to vascular disease detection and characterization in medical images using recurrent neural networks.
  • the present invention provides a method and system for automated vascular disease detection and characterization in medical images using recurrent neural networks.
  • Embodiments of the present invention exploit sequential image context information embedded along vascular branches to detect and characterize vascular abnormalities in computed tomography angiography (CTA) images of a patient using recurrent neural networks (RNN).
  • the present invention relates to a method and system for automated vascular disease detection and characterization in medical images using recurrent neural networks.
  • Embodiments of the present invention are described herein to give a visual understanding of the vascular disease detection method and a method for classifying medical images using recurrent neural networks.
  • a digital image is often composed of digital representations of one or more objects (or shapes).
  • the digital representation of an object is often described herein in terms of identifying and manipulating the objects.
  • Such manipulations are virtual manipulations accomplished in the memory or other circuitry / hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
  • Embodiments of the present invention detect and characterize vascular abnormalities, such as stenosis, plaques, etc., using sequential image context along a centerline of a vascular branch within a recurrent neural network (RNN) architecture.
  • FIG. 1 illustrates an example of vascular disease along a coronary artery.
  • the coronary artery 100 contains vascular abnormalities 102, 104, and 106, each of which is a non-calcified plaque.
  • the non-calcified plaques 102, 104, and 106 are classified as moderate, critical, and mild, respectively.
  • Embodiments of the present invention detect locations of such plaques and stenosis, as well as characterize their type and/or severity.
  • FIG. 2 illustrates a method for vascular disease detection using a recurrent neural network according to an embodiment of the present invention.
  • a 3D computed tomography angiography (CTA) image of the patient is received.
  • the CTA image includes at least one vessel of interest (e.g., coronary artery, renal artery, cerebral artery, etc.) of the patient.
  • the CTA image can be received directly from a CT scanner or can be received by loading a previously stored CTA image.
  • a centerline of a vessel is detected in the CTA image.
  • the centerline of the vessel is automatically detected in the CTA image, for example using a centerline tracing method or a machine learning based centerline detection method.
  • the vessel centerline can be detected in the CTA image using a combined model-driven and data-driven method, as described in U.S. Patent No. 9,129,417, entitled "Method and System for Coronary Artery Centerline Extraction", which is incorporated herein by reference in its entirety.
  • the vessel centerline can be detected in the CTA image using the method described in U.S. Patent Application No.
  • 2D cross section images are extracted from the 3D CTA image at a plurality of sampling points along the vessel centerline.
  • the vessel centerline is sampled to establish a plurality of sample points along the vessel centerline.
  • a uniform sampling distribution may be used to define evenly spaced sampling points, but the present invention is not limited thereto and other possible sampling distributions may be used depending on the vessel of interest.
  • a respective 2D cross-section image is extracted from the 3D CTA image at each sampling point on the vessel centerline.
  • Each 2D cross-section image can be a predetermined size image centered at the respective sampling point and aligned with a tangent direction to the vessel centerline at that sampling point.
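As a minimal sketch of this extraction step (assuming the CTA volume is available as a NumPy array with approximately isotropic 1 mm voxels; the function name extract_cross_section, the patch size, and the in-plane spacing are illustrative choices, not values from the patent):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_cross_section(volume, center_vox, tangent, patch_size=32, spacing_mm=0.5):
    """Sample a 2D cross-section patch from a 3D volume, centered at a
    centerline sampling point and oriented perpendicular to the centerline
    tangent at that point.

    volume     : 3D numpy array (z, y, x), assumed ~isotropic 1 mm voxels
    center_vox : (z, y, x) coordinates of the sampling point in voxels
    tangent    : centerline direction vector at that point
    patch_size : output patch width/height in pixels
    spacing_mm : in-plane sampling distance of the patch in mm
    """
    t = np.asarray(tangent, dtype=float)
    t /= np.linalg.norm(t)
    # Build two in-plane axes orthogonal to the tangent (the cross-section plane).
    helper = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(t, helper); u /= np.linalg.norm(u)
    v = np.cross(t, u)
    # Regular grid of in-plane offsets (in mm) centered at the sampling point.
    half = (patch_size - 1) / 2.0
    grid = (np.arange(patch_size) - half) * spacing_mm
    gu, gv = np.meshgrid(grid, grid, indexing="ij")
    # Offsets along u and v added to the center (1 mm voxels assumed for brevity).
    coords = (np.asarray(center_vox, dtype=float)[:, None, None]
              + u[:, None, None] * gu + v[:, None, None] * gv)
    # Trilinear interpolation of the volume at the plane coordinates.
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Usage sketch, given hypothetical per-point centerline positions and tangents:
# patches = [extract_cross_section(vol, p, tg) for p, tg in zip(points, tangents)]
```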
  • each sampling point along the vessel centerline is classified based on the extracted 2D cross-section images using a trained recurrent neural network (RNN).
  • the classification of each sampling point by the RNN depends not only on the cross-section image extracted at that sampling point, but also on the cross-section images extracted at the other sampling points as well.
  • RNNs have typically been used for prediction tasks from sequential information with multiple time points.
  • the cross-section images extracted from various spatial locations (i.e., respective centerline sampling points) in the CTA image are input to an RNN, which processes the spatial sequence of images as if they were a dynamic time-sequence of images.
  • RNN architectures, such as long short-term memory (LSTM) or gated recurrent unit (GRU), can be used to classify the sampling points of the vessel centerline.
  • FIG. 3A illustrates an RNN architecture using a bi-directional LSTM and a convolutional neural network (CNN) for vessel abnormality detection according to an embodiment of the present invention.
  • 2D cross-section images 312, 314, 316, and 318 are extracted at sampling points 302, 304, 306, and 308, respectively, along a vessel centerline 300.
  • Each of the 2D cross-section images 312, 314, 316, and 318 can be considered as corresponding to a time point of an RNN input.
  • the 2D cross-section images 312, 314, 316, and 318 are connected to an RNN layer 330 through a convolutional neural network (CNN) 320.
  • a fully connected layer may be used in place of the CNN 320.
  • the CNN 320 takes an image of a fixed size. Accordingly, the 2D cross-section images 312, 314, 316, and 318 input to the CNN 320 can be predetermined-size image patches extracted from larger 2D cross-section images.
  • the CNN 320 encodes each input image 312, 314, 316, and 318 into a feature vector that is a high-level semantic representation of the input image, and the feature vector extracted for each 2D cross-section image 312, 314, 316, and 318 by the CNN 320 is input to the RNN layer 330. It is to be understood that the same trained CNN 320 (i.e., having the same learned weights) is applied to each of the 2D cross-section images 312, 314, 316, and 318.
  • the RNN layer 330 inputs the feature vector extracted by the CNN 320 for each of the 2D cross-section images 312, 314, 316, and 318 and outputs a classification result for each of the corresponding sampling points 302, 304, 306, and 308 of the vessel centerline 300.
  • the classification result may be a binary classification of normal or abnormal, as shown in FIG. 3A .
  • the classification result may also be a multi-class label, e.g., for plaque type classification (e.g., calcified, non-calcified, mixed), or a continuous value, e.g., for stenosis grading regression.
  • the RNN layer is implemented as a bi-directional LSTM 330.
  • a bi-directional RNN is constructed where both the original (forward direction) and reversed (backward direction) inputs are fed into the RNN.
  • FIG. 3B illustrates the RNN architecture of FIG. 3A with a detailed depiction of the operation of the bi-directional LSTM layer 330. As shown in FIG. 3B, the bi-directional LSTM layer 330 includes a forward direction LSTM layer 430a and a backward direction LSTM layer 430b.
  • the forward LSTM layer 430a and the backward LSTM layer 430b are first and second trained LSTMs with different learned weights.
  • the features extracted by the CNN 320 for each 2D cross-section image 312, 314, 316, and 318 are input to both the forward LSTM layer 430a and the backward LSTM layer 430b.
  • the forward LSTM layer starts by classifying a first sampling point (e.g., sampling point 302) in the sequence based on the corresponding 2D cross-section image (e.g., image 312), and then sequentially classifies each subsequent sampling point in a forward direction (e.g., from ostium to distal end) along the vessel centerline based on the corresponding 2D cross-section image and image information from the cross-section images corresponding to the previously classified sampling points.
  • the backward LSTM layer starts by classifying a final sampling point (e.g., sampling point 308) in the sequence based on the corresponding 2D cross-section image (e.g., image 318), and then sequentially classifies each preceding sampling point in a backward direction (e.g., from distal end to ostium) along the centerline based on the corresponding 2D cross-section image and image information from the cross-section images corresponding to the previously classified sampling points. That is, the forward LSTM layer starts at one end of the vessel centerline 300 and works forward, and the backward LSTM layer starts at the other end of the vessel centerline 300 and works backward.
  • the forward LSTM output and the backward LSTM output for each sampling point are combined (e.g., by concatenating, summing, or averaging (weighted or unweighted) the forward and backward outputs) in order to determine the final classification results 332, 334, 336, and 338.
  • If the forward and backward LSTMs directly output the classification labels, the results of the two LSTMs can be summed or averaged.
  • Alternatively, one or two neural network layers can be added between the LSTM output and the final classification label. In this case, the outputs of the two LSTMs can be concatenated into a longer feature vector that serves as input to the additional neural network layers.
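A minimal PyTorch sketch of the architecture described above, i.e., a shared CNN encoder applied per sampling point followed by a bi-directional LSTM and a per-point classification head (the class name VesselSequenceClassifier, layer sizes, and other hyperparameters are illustrative assumptions, not the patent's trained network):

```python
import torch
import torch.nn as nn

class VesselSequenceClassifier(nn.Module):
    """Shared CNN encoder per cross-section patch, bi-directional LSTM over
    the centerline sequence, and a classification head per sampling point."""

    def __init__(self, num_classes=2, feat_dim=128, hidden_dim=64):
        super().__init__()
        # The same CNN weights are applied to every cross-section image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Bi-directional LSTM: forward and backward passes over the sequence.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Forward and backward outputs are concatenated (2 * hidden_dim),
        # then mapped to a label for each sampling point.
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, patches):
        # patches: (batch, seq_len, 1, H, W) -- one patch per centerline point
        b, s, c, h, w = patches.shape
        feats = self.cnn(patches.reshape(b * s, c, h, w)).reshape(b, s, -1)
        seq_out, _ = self.lstm(feats)      # (batch, seq_len, 2 * hidden_dim)
        return self.head(seq_out)          # per-point class scores

# Usage sketch: 40 sampling points, 32 x 32 cross-section patches.
model = VesselSequenceClassifier(num_classes=2)
logits = model(torch.randn(1, 40, 1, 32, 32))   # -> (1, 40, 2)
```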
  • the RNN/LSTM architecture shown in FIG. 3A and 3B may be modified to incorporate multi-scale image information, as described below in connection with the method of FIG. 6 .
  • each sampling point can be classified as normal or abnormal.
  • the RNN architecture may output a multi-class label for each sampling point for plaque type classification.
  • each sampling point may be classified as one of normal, calcified plaque, non-calcified plaque, or mixed plaque.
  • the RNN architecture may output a numerical value for each sampling point. This numerical value can be used to determine the abnormal or normal classification as well as a severity (e.g., mild, moderate, critical) for each sampling point.
  • the classification results may be visualized in various ways.
  • locations along the vessel centerline classified as abnormal can be provided to the user or highlighted in a CTA image displayed on a display device, along with characterizations of the type of plaque and/or the severity of the abnormality.
  • the numerical values determined for the sampling points can be converted to a color map which can be displayed to provide a visualization of vascular disease along the entire vessel.
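One possible way to render such a per-point color map, sketched with matplotlib (the scores array and the chosen colormap are illustrative assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-sampling-point severity scores (0 = normal, 1 = critical).
scores = np.clip(np.random.rand(40), 0.0, 1.0)

fig, ax = plt.subplots(figsize=(8, 1.5))
# Render the 1D score sequence as a color strip along the vessel centerline.
im = ax.imshow(scores[np.newaxis, :], aspect="auto", cmap="RdYlGn_r",
               vmin=0.0, vmax=1.0)
ax.set_yticks([])
ax.set_xlabel("centerline sampling point (ostium to distal end)")
fig.colorbar(im, ax=ax, label="predicted severity")
plt.show()
```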
  • input from a static CTA image is formulated as a dynamic sequence.
  • At each sampling point along the vessel centerline, a 2D image patch is sampled perpendicular to the centerline trajectory. Moving along the centerline from the ostium to the distal end, a dynamic sequence of cross-section images is generated.
  • the length of the sequence may vary from one case to another.
  • Most machine learning algorithms only accept an input vector with a fixed length.
  • the input image size for most machine learning algorithms is set as a hyperparameter, which is fixed after training. However, the object being detected may have significantly different size across patients or even within the same sequence.
  • the present inventors have recognized that a method that can handle variations in object size as well as input sequence length is desirable.
  • a window-based approach (with a fixed window length) is often used to handle an input with variable length. For example, it is possible to consider only a current frame together with the n frames before and after the current frame, resulting in an observation window of 2n + 1 frames. With an input of a fixed length, many existing machine learning algorithms (e.g., AdaBoost or Support Vector Machine) can be applied. However, the length of the event (e.g., coronary artery stenosis) being detected may vary significantly, and a fixed window may be too small for some datasets, but too large for others.
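For contrast, a sketch of this fixed-window formulation, assuming one feature vector per frame (padding the ends of the sequence is an illustrative choice, not something specified here):

```python
import numpy as np

def make_windows(features, n=2):
    """Stack each frame with its n neighbors on both sides, giving a fixed
    2n + 1 frame observation window per position (edge frames are padded)."""
    features = np.asarray(features)                 # (seq_len, feat_dim)
    padded = np.pad(features, ((n, n), (0, 0)), mode="edge")
    return np.stack([padded[i:i + 2 * n + 1].ravel()
                     for i in range(len(features))])

# Each row can now be fed to a fixed-input classifier (e.g., an SVM).
windows = make_windows(np.random.rand(40, 16), n=2)   # -> (40, 5 * 16)
```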
  • Embodiments of the present invention use LSTM to handle input with variable length for vascular disease (e.g., coronary plaque) detection, as illustrated in FIGS. 3A and 3B .
  • LSTM considers all input up to the current frame to make a prediction. Furthermore, as illustrated in FIGS. 3A and 3B, embodiments of the present invention use a bi-directional LSTM, which utilizes two LSTM layers, one handling forward observation of a sequence and one handling backward observation of the sequence. Accordingly, by using the bi-directional LSTM, the whole input is observed to predict whether a current position (sampling point) contains coronary plaque or not.
  • a CNN is used to encode each input image into a feature vector, which is a high level semantic representation of the input image.
  • the CNN takes an image with a fixed size.
  • the object being detected/classified may have significantly different sizes across patients or even within the same sequence.
  • the optimal field-of-view (FoV) to perform classification for each sequence or each frame may be different.
  • a recurrent neural network is an architecture that can handle input with variable length.
  • FIG. 4 illustrates an RNN 400 and an unrolled representation 410 of the RNN 400. Different from a conventional network, an RNN contains a feedback loop in its memory cell, as shown in RNN 400 of FIG. 4. As shown by the unrolled representation 410 of the RNN 400 in FIG. 4, given an input sequence of length t, an RNN can be "unrolled" t times to generate a loopless architecture matching the input length.
  • An unrolled network has t + 1 layers and each layer is identical (i.e., each layer shares the same learned weights).
  • an RNN can be trained based on ground truth training samples with back-propagation, similar to a conventional feed-forward neural network. The only difference in the training is that the weights of each copy of the network need to be averaged to ensure that all copies are identical after the update.
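A minimal sketch of such an unrolled RNN with shared weights, using a plain PyTorch RNN cell (an illustration of the unrolling concept only, not the network of the embodiments):

```python
import torch
import torch.nn as nn

class TinyRNN(nn.Module):
    """A single RNN cell applied repeatedly: every 'unrolled' step reuses the
    same weights, so gradients from all time steps update one parameter set."""

    def __init__(self, in_dim=8, hid_dim=16, out_dim=2):
        super().__init__()
        self.cell = nn.RNNCell(in_dim, hid_dim)    # shared across all steps
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, x):                          # x: (batch, seq_len, in_dim)
        h = torch.zeros(x.size(0), self.cell.hidden_size, device=x.device)
        outputs = []
        for t in range(x.size(1)):                 # unroll over the sequence
            h = self.cell(x[:, t], h)              # feedback loop: h feeds back in
            outputs.append(self.out(h))
        return torch.stack(outputs, dim=1)         # (batch, seq_len, out_dim)

# Training proceeds with ordinary back-propagation through the unrolled steps.
model = TinyRNN()
y = model(torch.randn(4, 10, 8))                   # -> (4, 10, 2)
```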
  • One challenge for training is that, during the gradient back-propagation phase, the gradient signal can end up being multiplied a large number of times (as many as the number of time steps), which can cause the gradient to vanish or explode. The long short-term memory (LSTM) architecture described below mitigates this problem.
  • FIG. 5 illustrates an LSTM network.
  • the LSTM network 500 includes an input gate 502, an output gate 504, and a forget gate 506, which control the input, output, and memory state, respectively.
  • At time step t, the memory state is C_{t-1}, the output state is h_{t-1}, and the input state is X_t.
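Written out, the gated update that FIG. 5 depicts follows the standard LSTM equations, sketched below with NumPy (the generic textbook formulation with an assumed weight matrix W and bias b, not learned weights from the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step with input, forget, and output gates.
    W maps the concatenated [h_{t-1}, x_t] to the four gate pre-activations."""
    z = np.concatenate([h_prev, x_t]) @ W + b      # (4 * hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input / forget / output gates
    g = np.tanh(g)                                 # candidate memory content
    c_t = f * c_prev + i * g                       # forget gate scales old memory
    h_t = o * np.tanh(c_t)                         # output gate scales new output
    return h_t, c_t

# Toy dimensions: hidden size 4, input size 3.
hidden, inp = 4, 3
W = np.random.randn(hidden + inp, 4 * hidden) * 0.1
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(np.random.randn(inp), h, c, W, b)
```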
  • each sampling point is considered to be a "time step" (and each 2D cross-section image patch the corresponding input) even though the sampling points are a spatial sequence, not a time sequence.
  • LSTM has typically been used for applications involving 1D data sequences.
  • a compact high-level representation of the image is extracted, e.g., using a CNN (as shown in FIGS. 3A and 3B ).
  • the CNN takes an image with fixed size as input (otherwise padding or cropping is performed to normalize the size).
  • the object of interest may have quite different sizes in different sequences.
  • the size of the object may even vary within the same sequence.
  • a fixed field-of-view (FoV) may not be optimal to handle such a large variation.
  • multiscale image information can be incorporated in the LSTM framework.
  • FIG. 6 illustrates a method for classifying a sequence of medical images using a multiscale spatio-temporal LSTM according to an embodiment of the present invention.
  • Although the method of FIG. 6 is described using an LSTM architecture, the method is not limited to LSTM and may be applied to RNNs in general.
  • the method of FIG. 6 can be used in step 208 of FIG. 2 with the architecture of FIGS. 3A and 3B to perform detection and characterization of vascular disease.
  • the present invention is not limited thereto, and the method of FIG. 6 may be applied to 2D or 3D medical images to perform other medical image classification tasks as well.
  • a sequence of medical images is received.
  • the sequence of medical images is the 2D cross-section images (or image patches) extracted at the sampling points of the vessel centerline.
  • the sequence can be a sequence of images extracted at various spatial locations from a static image (as in the vascular disease detection embodiment), or a time sequence of 2D or 3D medical images.
  • the medical images can be received directly from a medical image acquisition device or may be received by loading a previously stored sequence of medical images.
  • an image pyramid with multiple reduced resolution images is generated for each image in the sequence of medical images.
  • An image pyramid is a scale space representation of the input image data.
  • an image pyramid with multiple reduced resolution images is generated for each 2D cross-section image.
  • a three-level image pyramid may have reduced resolution images at 8 mm, 4 mm, and 2 mm resolution, from coarse to fine.
  • An image patch with a fixed size in pixels (or voxels) actually has a different field-of-view (FoV) at different resolutions. For example, a patch with 15 x 15 pixels has a FoV of 120 x 120 mm² at 8 mm resolution, but a FoV of 30 x 30 mm² at 2 mm resolution.
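A sketch of building such an image pyramid and of the field-of-view arithmetic, assuming a 2D cross-section image as a NumPy array with known pixel spacing (the 0.5 mm input spacing and the build_pyramid helper are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(image, spacing_mm, target_resolutions_mm=(8.0, 4.0, 2.0)):
    """Resample a 2D image to a set of coarser resolutions (coarse to fine)."""
    levels = {}
    for res in target_resolutions_mm:
        factor = spacing_mm / res          # < 1 downsamples to coarser spacing
        levels[res] = zoom(image, factor, order=1)
    return levels

# Field of view of a fixed-size patch at each resolution:
# a 15 x 15 pixel patch spans 15 * resolution mm per side.
for res in (8.0, 4.0, 2.0):
    print(f"15 x 15 patch at {res} mm resolution -> FoV {15 * res:.0f} x {15 * res:.0f} mm")

pyramid = build_pyramid(np.random.rand(256, 256), spacing_mm=0.5)
```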
  • the sequence of medical images is classified based on the multiple reduced resolution images generated for each image in the sequence of medical images using a trained LSTM architecture.
  • the LSTM architecture outputs a classification result for each "time step" (e.g., each sampling point in the vascular disease detection embodiment).
  • the LSTM architecture utilizes a bi-directional LSTM.
  • FIGS. 7, 8, and 9 illustrate alternative embodiments for incorporating the multi-resolution image information into the LSTM framework.
  • FIG. 7 illustrates a concatenated multiscale spatio-temporal LSTM according to an embodiment of the present invention.
  • a concatenation operation 708 is used to concatenate CNN features extracted from image patches 702, 704, and 706 with different image resolutions (e.g., 8 mm, 4 mm, and 2 mm, respectively) into one long feature vector.
  • the concatenated feature vector then is input to the LSTM via the input gate 710 of the LSTM.
  • An advantage of this embodiment is that there is no change to the LSTM architecture, but it is trained using concatenated features extracted from multi-resolution input images.
  • a disadvantage with this embodiment is that the most discriminative features may be overwhelmed by less discriminative features, which can limit the overall improvement on recognition accuracy achieved.
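A minimal PyTorch sketch of this concatenation variant, assuming one CNN feature vector per pyramid level at each sampling point (the class name and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class ConcatMultiscaleLSTM(nn.Module):
    """Concatenate per-scale CNN features into one long vector per time step,
    then run an otherwise unchanged LSTM over the resulting sequence."""

    def __init__(self, feat_dim=64, num_scales=3, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim * num_scales, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, scale_feats):
        # scale_feats: list of (batch, seq_len, feat_dim), one entry per scale
        x = torch.cat(scale_feats, dim=-1)   # (batch, seq_len, num_scales*feat_dim)
        out, _ = self.lstm(x)
        return self.head(out)

feats = [torch.randn(1, 40, 64) for _ in range(3)]   # e.g., 8 / 4 / 2 mm levels
logits = ConcatMultiscaleLSTM()(feats)                # -> (1, 40, 2)
```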
  • FIG. 8 illustrates a gated multiscale spatio-temporal LSTM according to an embodiment of the present invention.
  • the "gating" of the LSTM can be extended to the image pyramid images.
  • a separate gate can be used (and a separate gating function learned) for each pyramid level.
  • gates 812, 814, and 816 control the input of the CNN features for the image patches 802, 804, and 806, respectively, with different image resolutions (e.g., 8 mm, 4 mm, and 2 mm, respectively).
  • a summation operation 820 is used to add the input signals from the different resolutions together, which does not change the input dimension (different from the concatenation operation of FIG. 7 ).
  • This embodiment allows the LSTM to be trained to select the right "scale" by ignoring information from other resolutions.
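A sketch of this gated variant, with a separately learned gate per pyramid level and a summation that keeps the input dimension unchanged (modeling each gate as a sigmoid layer is an illustrative simplification of the LSTM-style gating described above):

```python
import torch
import torch.nn as nn

class GatedMultiscaleLSTM(nn.Module):
    """One learned gate per pyramid level; gated per-scale features are summed,
    so the LSTM input dimension is the same as for a single-scale input."""

    def __init__(self, feat_dim=64, num_scales=3, hidden_dim=64, num_classes=2):
        super().__init__()
        # A separate gating function is learned for each pyramid level.
        self.gates = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
             for _ in range(num_scales)])
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, scale_feats):
        # scale_feats: list of (batch, seq_len, feat_dim), one per scale
        gated = [g(f) * f for g, f in zip(self.gates, scale_feats)]
        x = torch.stack(gated, dim=0).sum(dim=0)   # summation keeps feat_dim
        out, _ = self.lstm(x)
        return self.head(out)

feats = [torch.randn(1, 40, 64) for _ in range(3)]
logits = GatedMultiscaleLSTM()(feats)              # -> (1, 40, 2)
```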
  • FIG. 9 illustrates an integrated concatenated and gated multiscale spatio-temporal LSTM according to an embodiment of the present invention.
  • In the embodiments of FIGS. 7 and 8, the integration of multiscale image information is limited to the input and a single LSTM is used to perform learning.
  • In the embodiment of FIG. 9, the multiscale image information is integrated more tightly. As shown in FIG. 9, a respective LSTM (LSTM 1 912, LSTM 2 914, and LSTM 3 916) is trained for each pyramid level image 902, 904, and 906, and the output of a lower level LSTM is fed to the next level LSTM.
  • the higher level LSTM takes two inputs: one from the image at its corresponding pyramid level and the other from the lower level LSTM. These two input signals can be integrated by either concatenation or gating.
  • the lower level LSTM output can be used to control the gates of the higher level LSTM.
  • the higher level LSTM takes one input from its own pyramid level image, but the gates are controlled by three signals: current input, its own previous output, and current output of the lower level LSTM.
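A sketch of this integrated variant using the concatenation option, with one LSTM per pyramid level and the lower-level output fed to the next level (controlling the higher-level gates directly, as in the alternative just described, would require a custom LSTM cell; the class name and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class StackedMultiscaleLSTM(nn.Module):
    """One LSTM per pyramid level (coarse to fine); each higher-level LSTM
    receives its own pyramid-level features concatenated with the output of
    the LSTM below it."""

    def __init__(self, feat_dim=64, num_scales=3, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstms = nn.ModuleList()
        for level in range(num_scales):
            in_dim = feat_dim if level == 0 else feat_dim + hidden_dim
            self.lstms.append(nn.LSTM(in_dim, hidden_dim, batch_first=True))
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, scale_feats):
        # scale_feats: list ordered coarse -> fine, each (batch, seq_len, feat_dim)
        lower_out = None
        for level, feats in enumerate(scale_feats):
            x = feats if lower_out is None else torch.cat([feats, lower_out], dim=-1)
            lower_out, _ = self.lstms[level](x)     # feeds the next (finer) level
        return self.head(lower_out)                 # prediction from the top LSTM

feats = [torch.randn(1, 40, 64) for _ in range(3)]   # e.g., 8 / 4 / 2 mm levels
logits = StackedMultiscaleLSTM()(feats)               # -> (1, 40, 2)
```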
  • the classification results are output.
  • the classification results for each "time step" (or each spatial location) (e.g., each sampling point on the vessel centerline) can be output, for example, by displaying the classification results on a display device of a computer system.
  • the detection accuracy of vascular disease can be improved in the presence of varying vessel (e.g., coronary artery) size.
  • the method of FIG. 6 can also be used for other medical imaging classification / detection applications, as well.
  • the methods of FIG. 2 and FIG. 6 are utilized to detect and characterize vascular disease (abnormalities) from a CTA image of a patient. These methods may be similarly applied for other vascular classification applications, as well. For example, the above described methods may be similarly applied to perform vessel lumen and outer wall cross-sectional diameter or area estimation. Most quantitative vascular analysis methods use a 3D model of the vessel to calculate diameter/area. Estimating vessel diameter directly from the image data can remove the need to compute a 3D model of the vessel, which is a very time consuming task. The above described methods may also be similarly applied to detecting image artifacts, such as motion artifacts, blooming artifacts due to calcification and/or metal implants, and partial volume effects.
  • Automated detection of vessel image artifacts in medical images as soon as they are acquired can greatly benefit the clinical workflow by flagging such images for re-acquisition while the patient is still on the table.
  • the detection of artifacts may also be used to provide feedback to the image reconstruction algorithm to determine an optimal set of reconstruction parameters.
  • the above described methods may also be similarly applied to detection of vessel centerline leakage. Methods have been proposed to extract vessel centerlines in medical images with very high sensitivity, but at the expense of false branches leaking into nearby structures or other vessels. There is a need to distinguish true centerlines from leakage.
  • the above described methods can be used to train a centerline leakage detector.
  • the above described methods may also be similarly applied to detection of vascular devices, such as stents, grafts, coils, and clips in the vasculature.
  • Computer 1002 contains a processor 1004, which controls the overall operation of the computer 1002 by executing computer program instructions which define such operation.
  • the computer program instructions may be stored in a storage device 1012 (e.g., magnetic disk) and loaded into memory 1010 when execution of the computer program instructions is desired.
  • An image acquisition device 1020, such as a CT scanner, can be connected to the computer 1002 to input image data to the computer 1002. It is possible to implement the image acquisition device 1020 and the computer 1002 as one device. It is also possible that the image acquisition device 1020 and the computer 1002 communicate wirelessly through a network. In a possible embodiment, the computer 1002 can be located remotely with respect to the image acquisition device 1020 and the method steps described herein can be performed as part of a server or cloud based service. In this case, the method steps may be performed on a single computer or distributed between multiple networked computers.
  • the computer 1002 also includes one or more network interfaces 1006 for communicating with other devices via a network.
  • the computer 1002 also includes other input/output devices 1008 that enable user interaction with the computer 1002 (e.g., display, keyboard, mouse, speakers, buttons, etc.).
  • Such input/output devices 1008 may be used in conjunction with a set of computer programs as an annotation tool to annotate volumes received from the image acquisition device 1020.
  • FIG. 10 is a high level representation of some of the components of such a computer for illustrative purposes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
EP17174063.2A 2016-06-23 2017-06-01 Method and system for vascular disease detection using recurrent neural networks Active EP3261024B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662353888P 2016-06-23 2016-06-23
US15/429,806 US9767557B1 (en) 2016-06-23 2017-02-10 Method and system for vascular disease detection using recurrent neural networks

Publications (3)

Publication Number Publication Date
EP3261024A2 EP3261024A2 (en) 2017-12-27
EP3261024A3 EP3261024A3 (en) 2018-03-14
EP3261024B1 true EP3261024B1 (en) 2019-12-25

Family

ID=59828800

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17174063.2A Active EP3261024B1 (en) 2016-06-23 2017-06-01 Method and system for vascular disease detection using recurrent neural networks

Country Status (3)

Country Link
US (2) US9767557B1 (zh)
EP (1) EP3261024B1 (zh)
CN (1) CN107545269B (zh)

Families Citing this family (98)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8917925B1 (en) * 2014-03-28 2014-12-23 Heartflow, Inc. Systems and methods for data and model-driven image reconstruction and enhancement
KR20180003123A (ko) * 2016-06-30 2018-01-09 삼성전자주식회사 메모리 셀 유닛 및 메모리 셀 유닛들을 포함하는 순환 신경망
US11621075B2 (en) * 2016-09-07 2023-04-04 Koninklijke Philips N.V. Systems, methods, and apparatus for diagnostic inferencing with a multimodal deep memory network
US10176388B1 (en) 2016-11-14 2019-01-08 Zoox, Inc. Spatial and temporal information for semantic segmentation
US10331368B2 (en) * 2017-04-03 2019-06-25 Gyrfalcon Technology Inc. MLC based magnetic random access memory used in CNN based digital IC for AI
US10552733B2 (en) * 2017-04-03 2020-02-04 Gyrfalcon Technology Inc. Memory subsystem in CNN based digital IC for artificial intelligence
US10331367B2 (en) * 2017-04-03 2019-06-25 Gyrfalcon Technology Inc. Embedded memory subsystems for a CNN based processing unit and methods of making
US10546234B2 (en) 2017-04-03 2020-01-28 Gyrfalcon Technology Inc. Buffer memory architecture for a CNN based processing unit and creation methods thereof
US10534996B2 (en) 2017-04-03 2020-01-14 Gyrfalcon Technology Inc. Memory subsystem in CNN based digital IC for artificial intelligence
US10296824B2 (en) 2017-04-03 2019-05-21 Gyrfalcon Technology Inc. Fabrication methods of memory subsystem used in CNN based digital IC for AI
US10331999B2 (en) * 2017-04-03 2019-06-25 Gyrfalcon Technology Inc. Memory subsystem in CNN based digital IC for artificial intelligence
US10475214B2 (en) * 2017-04-05 2019-11-12 General Electric Company Tomographic reconstruction based on deep learning
US11037330B2 (en) * 2017-04-08 2021-06-15 Intel Corporation Low rank matrix compression
US11030394B1 (en) * 2017-05-04 2021-06-08 Amazon Technologies, Inc. Neural models for keyphrase extraction
CN107527069A (zh) * 2017-08-22 2017-12-29 京东方科技集团股份有限公司 图像处理方法、装置、电子设备及计算机可读介质
WO2019070570A1 (en) * 2017-10-06 2019-04-11 Tellus You Care, Inc. NON-CONTACT ACTIVITY DETECTION NETWORK FOR CARE OF OLDER PEOPLE
US11114206B2 (en) 2017-10-06 2021-09-07 Tellus You Care, Inc. Vital signs with non-contact activity sensing network for elderly care
CN107730497B (zh) * 2017-10-27 2021-09-10 哈尔滨工业大学 一种基于深度迁移学习的血管内斑块属性分析方法
US10762637B2 (en) 2017-10-27 2020-09-01 Siemens Healthcare Gmbh Vascular segmentation using fully convolutional and recurrent neural networks
DE102017010397A1 (de) * 2017-11-09 2019-05-09 FotoFinder Systems GmbH Verfahren zur Auswertung des Hautbildes eines Menschen im Rahmen der Hautkrebs-Vorsorgeuntersuchung sowie Vorrichtung zu dessen Betrieb
CN107895369B (zh) * 2017-11-28 2022-11-22 腾讯科技(深圳)有限公司 图像分类方法、装置、存储介质及设备
JP2019097591A (ja) * 2017-11-28 2019-06-24 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
US10258304B1 (en) 2017-11-29 2019-04-16 Siemens Healthcare Gmbh Method and system for accurate boundary delineation of tubular structures in medical images using infinitely recurrent neural networks
WO2019134753A1 (en) * 2018-01-08 2019-07-11 Siemens Healthcare Gmbh Biologically-inspired network generation
CN108399409B (zh) * 2018-01-19 2019-06-18 北京达佳互联信息技术有限公司 图像分类方法、装置及终端
CN108280827B (zh) * 2018-01-24 2020-11-24 北京红云视界技术有限公司 基于深度学习的冠状动脉病变自动检测方法、***和设备
US10657426B2 (en) 2018-01-25 2020-05-19 Samsung Electronics Co., Ltd. Accelerating long short-term memory networks via selective pruning
EP3749215A4 (en) 2018-02-07 2021-12-01 Atherosys, Inc. DEVICE AND METHOD FOR CONTROLLING THE ULTRASONIC RECORDING OF THE PERIPHERAL ARTERIES IN THE TRANSVERSAL PLANE
US10896508B2 (en) * 2018-02-07 2021-01-19 International Business Machines Corporation System for segmentation of anatomical structures in cardiac CTA using fully convolutional neural networks
CN108470359A (zh) * 2018-02-11 2018-08-31 艾视医疗科技成都有限公司 一种糖尿病性视网膜眼底图像病变检测方法
CN110310287B (zh) * 2018-03-22 2022-04-19 北京连心医疗科技有限公司 基于神经网络的危及器官自动勾画方法、设备和存储介质
CN108510506A (zh) * 2018-04-14 2018-09-07 深圳市图智能科技有限公司 一种管状结构图像分割方法
US11367222B2 (en) 2018-04-20 2022-06-21 Hewlett-Packard Development Company, L.P. Three-dimensional shape classification and retrieval using convolutional neural networks and majority vote
EP3787480A4 (en) * 2018-04-30 2022-01-26 Atherosys, Inc. METHOD AND APPARATUS FOR THE AUTOMATIC DETECTION OF ATHEROMAS IN PERIPHERAL ARTERIES
EP3564862A1 (en) * 2018-05-03 2019-11-06 Siemens Aktiengesellschaft Determining influence of attributes in recurrent neural networks trained on therapy prediction
US11836997B2 (en) * 2018-05-08 2023-12-05 Koninklijke Philips N.V. Convolutional localization networks for intelligent captioning of medical images
CN108629773B (zh) * 2018-05-10 2021-06-18 北京红云智胜科技有限公司 建立训练识别心脏血管类型的卷积神经网络数据集的方法
CN108830155B (zh) * 2018-05-10 2021-10-15 北京红云智胜科技有限公司 一种基于深度学习的心脏冠状动脉分割及识别的方法
CA3100495A1 (en) 2018-05-16 2019-11-21 Benevis Informatics, Llc Systems and methods for review of computer-aided detection of pathology in images
CN110070534B (zh) * 2018-05-22 2021-11-23 深圳科亚医疗科技有限公司 用于基于血管图像自动获取特征序列的方法和预测血流储备分数的装置
US10937549B2 (en) 2018-05-22 2021-03-02 Shenzhen Keya Medical Technology Corporation Method and device for automatically predicting FFR based on images of vessel
CN108776779B (zh) * 2018-05-25 2022-09-23 西安电子科技大学 基于卷积循环网络的sar序列图像目标识别方法
CN108921024A (zh) * 2018-05-31 2018-11-30 东南大学 基于人脸特征点信息与双网络联合训练的表情识别方法
WO2019233812A1 (en) * 2018-06-07 2019-12-12 Agfa Healthcare Nv Sequential segmentation of anatomical structures in 3d scans
CN108960303B (zh) * 2018-06-20 2021-05-07 哈尔滨工业大学 一种基于lstm的无人机飞行数据异常检测方法
US10776923B2 (en) 2018-06-21 2020-09-15 International Business Machines Corporation Segmenting irregular shapes in images using deep region growing
US10643092B2 (en) 2018-06-21 2020-05-05 International Business Machines Corporation Segmenting irregular shapes in images using deep region growing with an image pyramid
DE112019001959T5 (de) * 2018-06-21 2021-01-21 International Business Machines Corporation Segmentieren unregelmässiger formen in bildern unter verwendung von tiefem bereichswachstum
KR102154652B1 (ko) 2018-07-04 2020-09-11 서울대학교산학협력단 순환신경망을 이용한 구간혈압 추정 방법 및 그 방법을 구현하기 위한 구간 혈압 추정 장치
CN109035226B (zh) * 2018-07-12 2021-11-23 武汉精测电子集团股份有限公司 基于LSTM模型的Mura缺陷检测方法
EP3593722A1 (en) * 2018-07-13 2020-01-15 Neuroanalytics Pty. Ltd. Method and system for identification of cerebrovascular abnormalities
CN110794254B (zh) * 2018-08-01 2022-04-15 北京映翰通网络技术股份有限公司 一种基于强化学习的配电网故障预测方法及***
CN110599444B (zh) * 2018-08-23 2022-04-19 深圳科亚医疗科技有限公司 预测血管树的血流储备分数的设备、***以及非暂时性可读存储介质
CN110490927B (zh) * 2018-08-23 2022-04-12 深圳科亚医疗科技有限公司 用于为图像中的对象生成中心线的方法、装置和***
DE102018214325A1 (de) * 2018-08-24 2020-02-27 Siemens Healthcare Gmbh Verfahren und Bereitstellungseinheit zum Bereitstellen eines virtuellen tomographischen Schlaganfall-Nachfolgeuntersuchungsbildes
CN109389045B (zh) * 2018-09-10 2021-03-02 广州杰赛科技股份有限公司 基于混合时空卷积模型的微表情识别方法与装置
KR102170733B1 (ko) 2018-09-21 2020-10-27 서울대학교산학협력단 주기적 생체신호의 잡음을 제거하는 방법 및 그 장치
EP3637428A1 (en) 2018-10-12 2020-04-15 Siemens Healthcare GmbH Natural language sentence generation for radiology reports
CN109359698B (zh) * 2018-10-30 2020-07-21 清华大学 基于长短时记忆神经网络模型的漏损识别方法
US10964017B2 (en) * 2018-11-15 2021-03-30 General Electric Company Deep learning for arterial analysis and assessment
CN109567793B (zh) * 2018-11-16 2021-11-23 西北工业大学 一种面向心律不齐分类的ecg信号处理方法
CN109753049B (zh) * 2018-12-21 2021-12-17 国网江苏省电力有限公司南京供电分公司 一种源网荷互动工控***的异常指令检测方法
US10813612B2 (en) 2019-01-25 2020-10-27 Cleerly, Inc. Systems and method of characterizing high risk plaques
EP3928285A4 (en) * 2019-02-19 2022-12-07 Cedars-Sinai Medical Center CALCIUM-FREE CT ANGIOGRAPHY SYSTEMS AND METHODS
US11308362B2 (en) * 2019-03-26 2022-04-19 Shenzhen Keya Medical Technology Corporation Method and system for generating a centerline for an object, and computer readable medium
KR102202029B1 (ko) 2019-04-18 2021-01-13 서울대학교산학협력단 순환신경망을 이용한 구간혈압 추정 방법 및 그 방법을 구현하기 위한 구간 혈압 추정 장치
CN110222759B (zh) * 2019-06-03 2021-03-30 中国医科大学附属第一医院 一种冠状动脉易损斑块自动识别***
CN110276748B (zh) * 2019-06-12 2022-12-02 上海移视网络科技有限公司 心肌缺血区域的血流速和血流储备分数的分析方法
CN110349143B (zh) * 2019-07-08 2022-06-14 上海联影医疗科技股份有限公司 一种确定管状组织感兴趣区的方法、装置、设备及介质
CN110378412B (zh) * 2019-07-17 2021-07-27 湖南视比特机器人有限公司 基于局部几何特征序列建模的二维轮廓形状识别分类方法
WO2021008697A1 (en) * 2019-07-17 2021-01-21 Siemens Healthcare Gmbh 3d vessel centerline reconstruction from 2d medical images
US11931195B2 (en) 2019-07-22 2024-03-19 Siemens Healthineers Ag Assessment of coronary artery calcification in angiographic images
WO2021012225A1 (en) * 2019-07-24 2021-01-28 Beijing Didi Infinity Technology And Development Co., Ltd. Artificial intelligence system for medical diagnosis based on machine learning
CN114365188A (zh) * 2019-08-16 2022-04-15 未艾医疗技术(深圳)有限公司 基于vrds ai下腔静脉影像的分析方法及产品
CN110490863B (zh) * 2019-08-22 2022-02-08 北京红云智胜科技有限公司 基于深度学习的检测冠脉造影有无完全闭塞病变的***
US11200976B2 (en) 2019-08-23 2021-12-14 Canon Medical Systems Corporation Tracking method and apparatus
EP3796210A1 (en) * 2019-09-19 2021-03-24 Siemens Healthcare GmbH Spatial distribution of pathological image patterns in 3d image data
CN110517279B (zh) * 2019-09-20 2022-04-05 北京深睿博联科技有限责任公司 头颈血管中心线提取方法及装置
US11195273B2 (en) * 2019-10-11 2021-12-07 International Business Machines Corporation Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory
US11417424B2 (en) 2019-10-11 2022-08-16 International Business Machines Corporation Disease detection from weakly annotated volumetric medical images using convolutional long short-term memory and multiple instance learning
CN110808096B (zh) * 2019-10-30 2022-04-19 北京邮电大学 基于卷积神经网络的心脏病变自动检测***
US11348291B2 (en) * 2019-11-29 2022-05-31 Shanghai United Imaging Intelligence Co., Ltd. System and method for reconstructing magnetic resonance images
CN111062963B (zh) * 2019-12-16 2024-03-26 上海联影医疗科技股份有限公司 一种血管提取方法、***、设备及存储介质
CN111161240B (zh) * 2019-12-27 2024-03-05 上海联影智能医疗科技有限公司 血管分类方法、装置、计算机设备和可读存储介质
US20220392065A1 (en) 2020-01-07 2022-12-08 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
US11969280B2 (en) 2020-01-07 2024-04-30 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
KR20220124217A (ko) 2020-01-07 2022-09-13 클리어리, 인크. 의료 영상 분석, 진단, 위험도 층화, 의사 결정 및/또는 질환 추적을 위한 시스템, 방법 및 디바이스
CN111369528B (zh) * 2020-03-03 2022-09-09 重庆理工大学 基于深度卷积网络的冠状动脉血管造影图像狭窄区域标示方法
CN111681226B (zh) * 2020-06-09 2024-07-12 上海联影医疗科技股份有限公司 基于血管识别的目标组织定位方法和装置
CN111598870B (zh) * 2020-05-15 2023-09-15 北京小白世纪网络科技有限公司 基于卷积神经网络端对端推理计算冠状动脉钙化比的方法
CN111797196B (zh) * 2020-06-01 2021-11-02 武汉大学 一种结合注意力机制lstm和神经主题模型的服务发现方法
CN111815599B (zh) * 2020-07-01 2023-12-15 上海联影智能医疗科技有限公司 一种图像处理方法、装置、设备及存储介质
KR102387928B1 (ko) * 2020-07-06 2022-04-19 메디컬아이피 주식회사 의료영상을 기초로 인체 조직을 분석하는 방법 및 그 장치
CN111738735B (zh) * 2020-07-23 2021-07-13 腾讯科技(深圳)有限公司 一种图像数据处理方法、装置和相关设备
US11678853B2 (en) * 2021-03-09 2023-06-20 Siemens Healthcare Gmbh Multi-task learning framework for fully automated assessment of coronary artery disease
EP4084011A1 (en) 2021-04-30 2022-11-02 Siemens Healthcare GmbH Computer-implemented method and evaluation system for evaluating at least one image data set of an imaging region of a patient, computer program and electronically readable storage medium
US20230289963A1 (en) 2022-03-10 2023-09-14 Cleerly, Inc. Systems, devices, and methods for non-invasive image-based plaque analysis and risk determination
CN116863013B (zh) * 2023-05-26 2024-04-12 新疆生产建设兵团医院 基于人工智能的扫描图像处理方法及其***

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3037432B2 (ja) * 1993-11-01 2000-04-24 カドラックス・インク 光波オーブンによる食物調理方法および調理装置
US6285992B1 (en) * 1997-11-25 2001-09-04 Stanley C. Kwasny Neural network based methods and systems for analyzing complex data
JP5337018B2 (ja) 2006-03-22 2013-11-06 ヴォルケイノウ・コーポレーション 分類基準に従った自動プラーク特性決定に基づく自動病変解析
US7940974B2 (en) * 2006-11-21 2011-05-10 General Electric Company Method and system for adjusting 3D CT vessel segmentation
US7953266B2 (en) 2007-02-06 2011-05-31 Siemens Medical Solutions Usa, Inc. Robust vessel tree modeling
US20100004526A1 (en) 2008-06-04 2010-01-07 Eigen, Inc. Abnormality finding in projection images
US9761004B2 (en) * 2008-09-22 2017-09-12 Siemens Healthcare Gmbh Method and system for automatic detection of coronary stenosis in cardiac computed tomography data
JP2011135651A (ja) 2009-12-22 2011-07-07 Panasonic Electric Works Co Ltd 電力供給システム
US8526699B2 (en) * 2010-03-12 2013-09-03 Siemens Aktiengesellschaft Method and system for automatic detection and classification of coronary stenoses in cardiac CT volumes
JP5309109B2 (ja) * 2010-10-18 2013-10-09 富士フイルム株式会社 医用画像処理装置および方法、並びにプログラム
CN101996329B (zh) * 2010-11-17 2012-10-31 沈阳东软医疗***有限公司 一种对血管形变区域的检测装置和方法
US9449241B2 (en) * 2011-02-23 2016-09-20 The Johns Hopkins University System and method for detecting and tracking a curvilinear object in a three-dimensional space
US9129417B2 (en) 2012-02-21 2015-09-08 Siemens Aktiengesellschaft Method and system for coronary artery centerline extraction
CN103337071B (zh) * 2013-06-19 2016-03-30 北京理工大学 基于结构重建的皮下静脉三维可视化装置及方法
WO2016054779A1 (en) * 2014-10-09 2016-04-14 Microsoft Technology Licensing, Llc Spatial pyramid pooling networks for image processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
CN107545269B (zh) 2021-06-08
US20170372475A1 (en) 2017-12-28
EP3261024A3 (en) 2018-03-14
CN107545269A (zh) 2018-01-05
US9881372B2 (en) 2018-01-30
US9767557B1 (en) 2017-09-19
EP3261024A2 (en) 2017-12-27

Similar Documents

Publication Publication Date Title
EP3261024B1 (en) Method and system for vascular disease detection using recurrent neural networks
US10762637B2 (en) Vascular segmentation using fully convolutional and recurrent neural networks
Maharjan et al. A novel enhanced softmax loss function for brain tumour detection using deep learning
US10664979B2 (en) Method and system for deep motion model learning in medical images
US10258304B1 (en) Method and system for accurate boundary delineation of tubular structures in medical images using infinitely recurrent neural networks
US6366684B1 (en) Image processing method and system involving contour detection steps
CN110490863B (zh) 基于深度学习的检测冠脉造影有无完全闭塞病变的***
Al-Ayyoub et al. Determining the type of long bone fractures in x-ray images
CN111784671A (zh) 基于多尺度深度学习的病理图像病灶区域检测方法
CN111815599A (zh) 一种图像处理方法、装置、设备及存储介质
SivaSai et al. An automated segmentation of brain MR image through fuzzy recurrent neural network
US20050018890A1 (en) Segmentation of left ventriculograms using boosted decision trees
Manning et al. Image analysis and machine learning-based malaria assessment system
Li et al. Image segmentation based on improved unet
CN114782302A (zh) 基于医学图像预测生理相关参数的***和方法
Blackledge et al. Object detection and classification with applications to skin cancer screening
WO2022096867A1 (en) Image processing of intravascular ultrasound images
Lainé et al. Carotid artery wall segmentation in ultrasound image sequences using a deep convolutional neural network
EP4343680A1 (en) De-noising data
Castillo et al. RSNA bone-age detection using transfer learning and attention mapping
Kabani et al. Ejection fraction estimation using a wide convolutional neural network
Blackledge et al. Texture classification using fractal geometry for the diagnosis of skin cancers
Khowaja et al. Supervised method for blood vessel segmentation from coronary angiogram images using 7-D feature vector
EP4009227A1 (en) Local spectral-covariance computation and display
Agafonova et al. Meningioma detection in MR images using convolutional neural network and computer vision methods

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20170601

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIC1 Information provided on ipc code assigned before grant

Ipc: G06K 9/68 20060101ALI20180208BHEP

Ipc: G06K 9/00 20060101ALI20180208BHEP

Ipc: G06K 9/62 20060101ALI20180208BHEP

Ipc: G06T 7/60 20170101ALI20180208BHEP

Ipc: G06N 3/02 20060101AFI20180208BHEP

Ipc: G06K 9/46 20060101ALI20180208BHEP

Ipc: G06T 7/00 20170101ALI20180208BHEP

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20181221

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/00 20170101ALI20190627BHEP

Ipc: G06K 9/46 20060101ALI20190627BHEP

Ipc: G06N 3/04 20060101ALI20190627BHEP

Ipc: G06K 9/68 20060101ALI20190627BHEP

Ipc: G06N 3/08 20060101ALI20190627BHEP

Ipc: G06K 9/62 20060101ALI20190627BHEP

Ipc: G06K 9/00 20060101ALI20190627BHEP

Ipc: G06N 3/02 20060101AFI20190627BHEP

Ipc: G06T 7/60 20170101ALI20190627BHEP

INTG Intention to grant announced

Effective date: 20190724

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1217949

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200115

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602017010012

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200325

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200326

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200325

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200520

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200425

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602017010012

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1217949

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

26N No opposition filed

Effective date: 20200928

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200601

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200601

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191225

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20220630

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230710

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240617

Year of fee payment: 8

REG Reference to a national code

Ref country code: DE

Ref legal event code: R081

Ref document number: 602017010012

Country of ref document: DE

Owner name: SIEMENS HEALTHINEERS AG, DE

Free format text: FORMER OWNER: SIEMENS HEALTHCARE GMBH, MUENCHEN, DE