CN118072065A - Artificial intelligence system and method for defining and visualizing placement of a catheter using a patient coordinate system

Info

Publication number
CN118072065A
Authority
CN
China
Prior art keywords
image
tube
patient
coordinate system
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311567680.0A
Other languages
Chinese (zh)
Inventor
帕尔·泰格泽什
Z·赫兹格
杨鸿绪
Z·基斯
B·P·奇里亚
P·达拉尔
A·巴恩
吉雷沙·拉奥
B·赫克尔
P·戈斯瓦米
D·周
戈帕尔·阿维纳什
L·费伦齐
凯特琳·奈
N·汤普森奥菲尔德
E·尼尔
S·瓦达赫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GE Precision Healthcare LLC
Original Assignee
GE Precision Healthcare LLC
Priority claimed from U.S. patent application Ser. No. 18/385,448 (published as US20240164845A1)
Application filed by GE Precision Healthcare LLC
Publication of CN118072065A

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an image processing system (1300) and method. The image processing system (1300) includes a display (1324), a processor (1312), and a memory (1313). The memory (1313) stores processor-executable code (220) that, when executed by the processor (1312), causes: receiving an image (1701) of a region of interest of a patient, wherein a medical catheter, tube, or line (1705) is disposed within the region of interest; detecting the medical catheter, tube, or line (1705) within the image (1701); generating a patient coordinate system (1700) relative to an anatomy of the patient within the image (1701); generating a combined image (1702) by superimposing on the image (1701) a first graphical marker (1734) indicative of an end (1707) of the medical catheter, tube, or line (1705) and a second graphical marker (1736) indicative of the patient coordinate system (1700); and displaying the combined image (1702) on the display (1324). Further, the system (1300) evaluates common visible complications associated with CVC placement, including but not limited to hydrothorax, pneumothorax, and pneumomediastinum, as well as CVC position changes between x-rays taken at different times.

Description

Artificial intelligence system and method for defining and visualizing placement of a catheter using a patient coordinate system
Cross Reference to Related Applications
The present application claims priority from U.S. provisional patent application Ser. No. 63/427,646, filed November 23, 2022, which is expressly incorporated herein by reference in its entirety for all purposes.
Technical Field
The subject matter disclosed herein relates to medical image processing, and more particularly, to systems and methods for visualizing placement of medical tubes or lines.
Background
Medical imaging may be used to visualize a medically placed tube or line (e.g., chest tube, nasogastric tube, endotracheal tube, vascular line, Peripherally Inserted Central Catheter (PICC), catheter, etc.). However, it may be difficult for medical personnel (e.g., doctors, radiologists, technicians, etc.) to visualize these medically placed tubes or lines. Furthermore, medical personnel may not be sufficiently trained or experienced, which may hamper their ability to identify a medically placed tube or line and determine whether it is properly placed. In addition, medical personnel may have to manually evaluate typical complications associated with tube/line placement and manually make measurements (which may be time consuming) to determine whether a medically placed tube or line is properly placed. If a medically placed tube or line is misplaced, however, rapid intervention is required to move the tube or line into the proper position for patient safety.
To aid in the visualization and placement of a tube within a patient, certain systems and methods have been developed, such as those disclosed in U.S. Patent No. 11,410,341 (the '341 patent), entitled "System And Method For Visualizing Placement Of A Medical Tube Or Line," the entire contents of which are expressly incorporated herein by reference for all purposes. In the '341 patent, Artificial Intelligence (AI) is trained for use as part of an image processing system that includes a display, a processor, and memory. The memory stores processor-executable code for the trained AI that, when executed by the processor, causes: receiving an image of a region of interest of a patient, wherein a medical tube (e.g., an endotracheal tube (ETT), nasogastric tube (NGT), or peripherally inserted central catheter (PICC) line) is disposed within the region of interest; detecting the medical tube or line within the image; generating a combined image by overlaying on the image a first graphical marker indicating an end of the medical tube or line and a reference point of the patient anatomy; and displaying the combined image on a display.
While the above-described devices serve important medical purposes, their placement or misplacement may also have adverse side effects or complications. Thus, when such devices are present in an image, a user may want to verify whether typical complications associated with a given device exist. For example, if the presence of one or more central venous access catheter/pulmonary artery catheter (CVC/PAC) devices is detected in the image, it is useful to check the image for the presence of their typical complications, including, but not limited to, indwelling guidewires, pneumothorax, hydrothorax, hematoma, pneumomediastinum, and pericardial effusion.
Thus, while the determination of the tube or line end and the reference point provides the user with enough information to determine whether the tip of the tube or line is in the correct location, for proper placement of a CVC/PAC it is also desirable to show the full course of the CVC/PAC from end to end, as well as to provide information about any complications that may arise from placement of the device.
Furthermore, the assessment of the correct location of the CVC/PAC may be affected by the positioning and/or size of the patient. More specifically, the patient may be rotated relative to the main axis and/or main plane of the image, and thus the measurement of the vertical distance between the points of interest should preferably be made in a direction appropriate for the patient's position. Furthermore, the optimal distance and/or location of the CVC/PAC may depend on the patient's size, such that the optimal distance for a large adult may be too long for a small or pediatric patient. To account for these differences in the body size of individual patients, the distance may be measured in units appropriate for the particular patient's body size. For example, the distance between the carina and the CVC/PAC tip may be measured in units of vertebral bodies, as suggested by Baskin et al., "Cavoatrial Junction and Central Venous Anatomy: Implications for Central Venous Access Tip Position," Journal of Vascular and Interventional Radiology 2008;19:359–365, the entire contents of which are incorporated herein by reference for all purposes.
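By way of illustration only, the following is a minimal sketch of how such a patient-specific distance unit might be derived, assuming vertebral-body midpoints and tip/carina coordinates are already available from a landmark detector; the function names and coordinate values are hypothetical and do not reflect any claimed implementation.

```python
import numpy as np

def vertebral_unit_mm(vertebra_midpoints_mm: np.ndarray) -> float:
    """Estimate one patient-specific distance unit as the mean spacing
    between adjacent vertebral-body midpoints (ordered superior to inferior)."""
    spacings = np.linalg.norm(np.diff(vertebra_midpoints_mm, axis=0), axis=1)
    return float(spacings.mean())

def distance_in_vertebral_units(tip_mm, carina_mm, unit_mm) -> float:
    """Express the tip-to-carina Euclidean distance in vertebral-body units."""
    d = np.linalg.norm(np.asarray(tip_mm, float) - np.asarray(carina_mm, float))
    return float(d / unit_mm)

# Hypothetical landmark outputs (x, y) in millimeters:
midpoints = np.array([[100.0, 40.0], [101.0, 62.0], [102.5, 84.0], [103.0, 107.0]])
unit = vertebral_unit_mm(midpoints)
print(f"tip-to-carina: {distance_in_vertebral_units((130, 95), (118, 60), unit):.2f} vertebral-body units")
```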
Thus, in order to enable a physician/user to more easily achieve proper placement of a CVC/PAC within a patient, it is desirable to have a system and method that provides information about the path of the CVC/PAC and the location of the tip within the patient's anatomy relative to an anatomical reference point, as well as information about the location of any complications related to the intended location of the CVC/PAC within the patient's anatomy.
Disclosure of Invention
In accordance with one aspect of exemplary embodiments of the present disclosure, generally, with respect to a CVC model, the systems and methods of the present disclosure help confirm proper CVC tube placement, provide early detection of misplaced CVCs that may lead to complications, and provide early detection of potential complications arising from and/or interfering with CVC placement.
According to another aspect of exemplary embodiments of the present disclosure, additional benefits provided by the system and method are as follows:
- Visualization of the CVC course and of the CVC end. These visualizations are critical because they help the user determine whether the CVC is properly placed.
- Visualization of anatomical reference landmarks, such as the carina and the airways (trachea and mainstem bronchi), and of the region of correct end position.
- Visualization of the measured Euclidean distance between the catheter (CVC) distal end and an anatomical reference landmark (e.g., the carina), and calculation of its vertical and horizontal components in the patient coordinate system, which may help the user determine whether the CVC is correctly placed along the axes of the patient coordinate system. The measurement units may be based on anatomical structures presented in the image (e.g., vertebral bodies or the trachea), so that the length of the CVC along these patient axes, and the distance of the distal end to the anatomical reference landmark projected along the patient's vertical axis, may be visualized in patient-specific measurement units (see the decomposition sketch following this list).
- Display of the system-determined patient coordinate system and related measurement information.
- Visualization of the measured Euclidean distance between the distal end and the desired location and/or anatomical reference landmark (e.g., the carina), and calculation of its vertical and horizontal components in the patient coordinate system, to help the user determine whether the CVC is correctly placed; if these values are abnormal, triaging the image for the radiologist and sending an alert to the bedside team.
- Detection of the laterality of the catheter (CVC), tube, or line and of the access site (IJ versus PICC versus subclavian versus femoral, etc.) to help prevent incorrect line documentation.
- Comparison of the current image with previous images/measurements to help identify tube movement or migration between images.
- Detection of common visible complications associated with CVC placement, such as pneumothorax, hydrothorax, pneumomediastinum, pericardial effusion, indwelling guidewires, and the like.
- Creation and display of structured reports on catheter (CVC and/or PAC) position, misplacement, complications, and guidewire retention.
All of the above operations can be performed on multiple CVC tubes present in the image.
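As a purely illustrative sketch of the component decomposition referenced in the list above (not the claimed implementation), the tip-to-carina offset can be projected onto the patient's vertical axis and its orthogonal horizontal axis, assuming those detections are already available:

```python
import numpy as np

def patient_axis_components(tip, carina, vertical_axis):
    """Decompose the tip-to-carina offset into Euclidean distance plus
    components along the patient's vertical and horizontal axes."""
    v = np.asarray(vertical_axis, float)
    v /= np.linalg.norm(v)               # unit vector along the spine
    h = np.array([-v[1], v[0]])          # orthogonal horizontal axis
    offset = np.asarray(tip, float) - np.asarray(carina, float)
    return float(np.linalg.norm(offset)), float(offset @ v), float(offset @ h)

# Hypothetical detections; the patient is rotated about 10 degrees in the image
# (image y grows inferiorly, so the spine axis is (sin theta, cos theta)):
theta = np.deg2rad(10.0)
dist, vert, horiz = patient_axis_components((512, 640), (470, 560),
                                            (np.sin(theta), np.cos(theta)))
print(f"euclidean={dist:.1f}px  vertical={vert:.1f}px  horizontal={horiz:.1f}px")
```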
According to yet another aspect of exemplary embodiments of the present disclosure, the systems and methods are based on Deep Learning (DL), an Artificial Intelligence (AI) technique, to detect in an image, and to visualize in a combined image formed from the image and the information obtained from it by DL/AI, the following (a rendering sketch follows the list below):
- CVC and PAC lines and their ends.
- The correct destination location and/or area of the ends of the CVC and PAC catheters.
- Anatomical structures related to the destination location of the tip (e.g., the carina and the airways (trachea and mainstem bronchi)).
- Determination of a patient coordinate system for defining the correct destination location, and illustration of the vertical axis of the patient coordinate system.
- Determination of patient-specific measurement units (e.g., distances), and graphical representation of the positions of various anatomical structures and CVC/PAC structures on the displayed vertical axis of the patient coordinate system using the patient-specific units.
- Misplaced catheter locations, such as, but not limited to, a catheter in too high or too low a location, a catheter placed in the wrong vessel, kinking in the catheter, etc.
- The presence and location of complications, such as, but not limited to, pneumothorax and hematoma.
- The presence and location of an indwelling guidewire, and/or other "never events".
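As one hedged illustration of the combined-image visualization enumerated above, the sketch below overlays a tip marker (first graphical marker), a carina marker, and the patient's vertical axis on a radiograph using matplotlib; the synthetic image, coordinates, and output path are hypothetical stand-ins for detector outputs.

```python
import numpy as np
import matplotlib.pyplot as plt

def render_combined_image(xray, tip, carina, vertical_axis, out_path):
    """Overlay graphical markers and the patient vertical axis on the image."""
    v = np.asarray(vertical_axis, float)
    v /= np.linalg.norm(v)
    fig, ax = plt.subplots()
    ax.imshow(xray, cmap="gray")
    ax.plot(*tip, marker="o", color="red", linestyle="", label="catheter tip")
    ax.plot(*carina, marker="x", color="cyan", linestyle="", label="carina")
    t = np.linspace(-200, 200, 2)        # axis segment through the carina
    ax.plot(carina[0] + t * v[0], carina[1] + t * v[1], "--", color="yellow",
            label="patient vertical axis")
    ax.legend(loc="lower right")
    ax.set_axis_off()
    fig.savefig(out_path, bbox_inches="tight", dpi=150)
    plt.close(fig)

render_combined_image(np.zeros((1024, 1024)), (512, 640), (470, 560),
                      (0.17, 0.98), "combined.png")
```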
According to yet another aspect of exemplary embodiments of the present disclosure, the system and method will calculate and visualize:
- The horizontal axis of the patient coordinate system.
- The Euclidean distance between the catheter tip and the desired positioning location/area and/or anatomical reference landmark (e.g., the carina).
- Differences in the Euclidean distance between the catheter tip and the desired anatomical reference landmark (e.g., the carina) between x-rays obtained at different times.
- The vertical and horizontal components of the tip-carina Euclidean distance in the image coordinate system.
- The vertical and horizontal components of the tip-carina Euclidean distance in the patient coordinate system.
- A change in tip position compared to a previously saved scan accessible by the system.
- Creation and display of warning messages related to misplacement, complications, and indwelling guidewires.
- Classification of the detected critical conditions for CVCs and/or PACs.
- All types and intended-use applications of central venous and pulmonary artery catheters, with access points inserted from either side (left, right) into the jugular, subclavian, or peripheral (elbow, thigh) veins.
According to yet another aspect of exemplary embodiments of the present disclosure, a medical image processing system includes: a display; a processor; and a memory storing processor-executable code that, when executed by the processor, causes the processor to: receive an image of a region of interest of a patient, wherein a medical tube or line is disposed within the region of interest; detect the medical tube or line within the image; detect a reference marker within the region of interest within the image, wherein the reference marker is within the patient; generate a patient coordinate system for the image; generate a combined image by superimposing on the image a first graphical marker indicating the position of the end of the medical tube or line and a second graphical marker indicating the position of the reference marker; generate, using the patient coordinate system, an indication of the position of the end of the medical tube relative to the reference marker in the combined image; and present the combined image on the display.
According to yet another aspect of exemplary embodiments of the present disclosure, an imaging system includes: a radiation source; a detector alignable with the radiation source; a display for presenting information to a user; and a controller connected to the display and operable to control operation of the radiation source and the detector to generate image data, the controller comprising an image processing system having a processor and a memory storing processor-executable code that, when executed by the processor, causes: receiving an image of a region of interest of a patient, wherein a medical catheter, tube, or line is disposed within the region of interest; detecting the medical catheter, tube, or line within the image; detecting a reference marker within the region of interest within the image, wherein the reference marker is within the patient; generating a combined image by superimposing on the image a first graphical marker indicative of an end of the medical catheter, tube, or line, a second graphical marker indicative of the reference marker, and a third graphical marker indicative of a patient coordinate system; and displaying the combined image on the display.
According to yet another aspect of exemplary embodiments of the present disclosure, a method for medical image processing includes: receiving, via a processor, an image of a region of interest of a patient, wherein a medical catheter, tube, or line is disposed within the region of interest; detecting, via the processor, the medical catheter, tube, or line within the image; detecting, via the processor, a number of reference markers within the region of interest within the image, wherein the reference markers are each within the patient; generating, via the processor, a patient coordinate system relative to the detected reference markers; generating, via the processor, a combined image by superimposing on the image a first graphical marker indicative of an end of the medical catheter, tube, or line and a second graphical marker indicative of a reference marker; and causing, via the processor, the combined image to be displayed on a display.
These and other exemplary aspects, features and advantages of the present invention will become apparent from the following detailed description, which is to be read in connection with the accompanying drawings.
Drawings
The drawings illustrate the best mode presently contemplated for practicing the invention.
In the drawings:
Fig. 1 is a schematic diagram of a condition comparator according to an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic diagram of an embodiment of a clinical progress analysis device according to an exemplary embodiment of the present disclosure.
Fig. 3 is a schematic diagram of an embodiment of a learning neural network according to an exemplary embodiment of the present disclosure.
Fig. 4 is a schematic diagram of an embodiment of a processor platform configured to execute example machine readable instructions to implement the components disclosed and described herein, according to an example embodiment of the present disclosure.
Fig. 5 is a flowchart of an embodiment of a method for determining placement of a medically placed tube or line within a region of interest according to an exemplary embodiment of the present disclosure.
Fig. 6 is a first example of a combined image identifying a catheter within a patient according to an exemplary embodiment of the present disclosure.
Fig. 7 is a second example of a combined image identifying a catheter within a patient according to an exemplary embodiment of the present disclosure.
Fig. 8 is a first schematic view of a user interface with a combined image identifying a catheter within a patient according to an exemplary embodiment of the present disclosure.
Fig. 9 is a second schematic view of a user interface with a combined image identifying a catheter within a patient according to an exemplary embodiment of the present disclosure.
Fig. 10 is a schematic illustration of a combined image including anatomical reference points for defining patient-specific distance units according to an exemplary embodiment of the present disclosure.
Fig. 11 is a schematic illustration of a combined image identifying a catheter within a patient using patient-specific measurement units according to an exemplary embodiment of the present disclosure.
Detailed Description
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present subject matter, the articles "a," "an," "the," and "said" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements other than the listed elements. Furthermore, any numerical examples in the following discussion are intended to be non-limiting, and thus additional values, ranges, and percentages are within the scope of the disclosed embodiments.
Imaging devices (e.g., gamma cameras, positron emission tomography (PET) scanners, computed tomography (CT) scanners, X-ray machines, fluoroscopes, magnetic resonance (MR) imagers, ultrasound scanners, etc.) generate medical images (e.g., Digital Imaging and Communications in Medicine (DICOM) images) representing a body part (e.g., organ, tissue, etc.) to diagnose and/or treat disease. A medical image may include volume data comprising voxels associated with the body part captured in the medical image. Medical image visualization software allows a clinician to segment, annotate, measure, and/or report functional or anatomical characteristics at various locations of the medical image. In some examples, the clinician may identify a region of interest in the medical image using medical image visualization software.
Acquisition, processing, quality control, analysis, and storage of medical image data plays an important role in diagnosis and treatment of patients in a healthcare environment. The medical imaging workflow and the devices involved in the workflow may be configured, monitored and updated throughout the operation of the medical imaging workflow and the devices. Machine and/or deep learning may be used to help configure, monitor, and update medical imaging workflows and devices.
Certain examples provide and/or facilitate improved imaging devices, thereby increasing diagnostic accuracy and/or coverage. Certain examples facilitate improved image reconstruction and further processing, thereby increasing diagnostic accuracy.
Some examples provide an image processing apparatus including an artificial intelligence (AI) system. For example, the AI system may detect, segment, and quantify lesions. The AI system may produce a discrete positive or negative output for detection, segmentation, etc. For example, the AI system may instantiate machine learning and/or other artificial intelligence to detect, segment, and analyze the presence of a medical device (e.g., a medically placed tube or line). For example, the AI system may instantiate machine learning and/or other artificial intelligence to detect an end of a medically placed tube or line, detect an anatomical reference landmark, determine a position of the medically placed tube or line relative to the reference marker or anatomical reference landmark, measure a distance between the end of the medically placed tube or line and the reference marker, and determine whether the tube or line is properly placed.
For example, machine learning techniques (whether deep learning networks or other experiential/observational learning systems) may be used to locate objects in images, understand speech and convert speech to text, and improve the relevance of search engine results. Deep learning is a subset of machine learning that uses a set of algorithms to model high-level abstractions in data using a deep graph with multiple processing layers (including linear and nonlinear transformations). While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify "good" features for analysis. Using a multi-layer architecture, machines employing deep learning techniques may process raw data better than machines employing conventional machine learning techniques. Using different layers of evaluation or abstraction facilitates examining the data for groups of highly correlated values or distinctive themes.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The term "deep learning" is a machine learning technique that utilizes multiple data processing layers to identify various structures in data sets and to classify those data sets with high accuracy. The deep learning network may be a training network (e.g., a training network model or device) that learns patterns based on a plurality of inputs and outputs. The deep learning network may be a deployed network (e.g., a deployed network model or device) that is generated by a training network and that provides output in response to input.
The term "supervised learning" is a deep learning training method that provides machine with classified data from a human source. The term "unsupervised learning" is a deep learning training method that does not give the machine classified data, but rather makes the machine available for anomaly detection. The term "semi-supervised learning" is a deep learning training approach in which a machine is provided with a small amount of classified data from a human source, as opposed to a larger amount of unclassified data available to the machine.
The term "representation learning" is a field of methods that transform raw data into representations or features that can be utilized in machine learning tasks. In supervised learning, features are learned via a marker input.
The term "convolutional neural network" or "CNN" is a bioheuristic network for detecting, segmenting and identifying interconnected data of related objects and regions in a dataset in deep learning. CNNs evaluate raw data in multiple arrays, divide the data into a series of stages, and examine learned features in the data.
The term "transfer learning" is the process by which a machine stores information used when properly or improperly solving one problem to solve another problem of the same or similar nature as the first problem. Transfer learning may also be referred to as "induction learning". For example, the transfer learning may utilize data from previous tasks.
The term "active learning" is a machine learning process in which a machine selects a set of examples to receive training data, rather than passively receiving examples selected by an external entity. For example, when machine learning, rather than relying solely on an external human expert or external system to identify and provide examples, a machine may be allowed to select examples for which machine determination would be most useful for learning.
The term "computer-aided detection" or "computer-aided diagnosis" refers to a computer that analyzes medical images for the purpose of suggesting a possible diagnosis.
Some examples use neural networks and/or other machine learning to implement new workflows for image and associated patient analysis, including generating alerts based on radiological findings that may be generated and delivered at the point of care of a radiological examination. Some examples use Artificial Intelligence (AI) algorithms to process one or more imaging exams (e.g., an image or set of images) and provide an alert based on automated exam analysis. Alerts (e.g., including notifications, recommendations, other actions, etc.) may be intended for the technologist, clinical team providers (e.g., nurses, doctors, etc.), radiologists, administrative staff, operations staff, and/or even the patients for whom the exams are obtained. For example, an alert may be used to indicate one or more quality control and/or radiological findings in the exam image data, or the lack thereof.
In some examples, the AI algorithm may be (1) embedded within the imaging device, (2) run on a mobile device (e.g., tablet, smartphone, laptop, other handheld or mobile computing device, etc.), and/or (3) run in the cloud (e.g., internal or external) and deliver the alert via a web browser (e.g., which may appear on a radiology system, mobile device, computer, etc.). Such a configuration may be vendor-neutral and compatible with conventional imaging systems. For example, if the AI processor is running on a mobile device and/or in the "cloud", the configuration may receive the image as follows: (a) directly from the X-ray and/or other imaging system (e.g., set up as a secondary push destination such as a Digital Imaging and Communications in Medicine (DICOM) node, etc.); (b) by tapping into a Picture Archiving and Communication System (PACS) destination for redundant image access; (c) by retrieving image data via a sniffer method (e.g., pulling a DICOM image from the system once it is generated); etc.
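For instance, option (a) above, a secondary DICOM push destination, could be sketched with the open-source pynetdicom library roughly as follows; the AE title, port, storage path, and the enqueue hook are all hypothetical assumptions, not a description of any particular vendor's interface.

```python
import os
from pynetdicom import AE, evt, AllStoragePresentationContexts

os.makedirs("incoming", exist_ok=True)

def handle_store(event):
    """Receive a pushed DICOM instance and hand it off to the AI pipeline."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(os.path.join("incoming", f"{ds.SOPInstanceUID}.dcm"),
               write_like_original=False)
    # enqueue_for_ai_analysis(ds)  # hypothetical hook into the AI processor
    return 0x0000                  # DICOM "Success" status

ae = AE(ae_title="AI_NODE")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```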
Certain examples provide apparatus, systems, methods, etc. for determining the progression of a disease and/or other condition based on the output of an algorithm instantiated using and/or driven by an Artificial Intelligence (AI) model, such as a deep learning network model, a machine learning network model, etc. For example, the presence of a medically placed tube or line (e.g., chest tube, nasogastric tube, endotracheal tube, vascular line, peripherally inserted central catheter, central venous access catheter, pulmonary artery catheter, etc.) may be determined based on the output of the AI detection algorithm. Further, placement of a catheter, medical tube, or wire within a region of interest (e.g., lung, stomach, vascular system, etc.) may be determined based on the output of the AI detection (e.g., whether the catheter is properly placed).
Accordingly, certain examples provide systems and methods for detecting a medically placed catheter, tube, or line within a region of interest of a patient, and for determining whether the catheter, tube, or line is properly placed within the region of interest, based on AI algorithms applied to patient data. Example methods include detecting the presence of a medically placed catheter, tube, or line in an image; detecting the end of the medically placed catheter, tube, or line in the image; detecting at least one anatomical reference landmark in the image; determining an anatomical or patient coordinate system using the at least one anatomical reference landmark; determining whether the end of the medically placed catheter, tube, or line is properly placed with respect to the anatomical reference landmark; providing a visual representation of the patient coordinate system and of the position of the end of the catheter, tube, or line with respect to that visual representation; and/or providing a notification to a physician regarding whether the medically placed tube or line is properly placed with respect to the anatomical reference landmark. In certain embodiments, the AI system may: detect one or more anatomical reference landmarks, e.g., for defining a patient coordinate system employed in determining proper placement of a catheter, tube, or line relative to the patient coordinate system, optionally in combination with patient-specific or patient-defined axes and measurement units; detect the presence of a medically placed catheter, tube, or line with reference to the patient coordinate system; graphically label the medically placed catheter, tube, or line with a color graphic overlay on the patient coordinate system; detect the end (e.g., distal end) of the medically placed catheter, tube, or line with reference to the patient coordinate system; graphically label that end; graphically label the one or more anatomical reference landmarks; calculate the distance between the end of the medically placed catheter, tube, or line and the desired location of the end; and/or calculate and provide a confidence metric (e.g., for the accuracy of the calculated distance, for the determination of the presence of the medically placed catheter, tube, or line, for the detection of its end, for the detected anatomical reference landmarks, etc.). The AI system is trained based on images with and without a medically placed catheter, tube, or line, images with a properly placed catheter, tube, or line, images with a misplaced catheter, tube, or line, images with one or more anatomical reference landmarks, and/or images without the one or more anatomical reference landmarks.
For example, a patient in an intensive care setting receives chest x-rays (or x-rays of other regions) to monitor the placement of a medically placed catheter, tube, or line. If the catheter, tube, or line is misplaced, the medical team may need to intervene quickly to properly place the medical catheter, tube, or line. The artificial intelligence system may detect the presence of the medically placed catheter, tube, or line, detect the end of the medically placed catheter, tube, or line, detect one or more anatomical reference landmarks, determine a patient coordinate system using the locations of the detected reference landmarks, and evaluate whether the catheter, tube, or line is properly placed. For example, an alert may be generated at the point of care, on a device (e.g., imaging device, imaging workstation, etc.), and output to notify the clinical care team and/or otherwise provide instructions (e.g., a notification that the catheter is or is not properly placed, or instructions to remove the catheter, tube, or line, to displace it in a certain direction, etc.).
The techniques described herein provide a faster means of determining whether a medically placed catheter, tube, or line is improperly placed, and of specifying the location of proper placement of the catheter, tube, or line. This enables faster intervention to ensure that the catheter, tube, or line is in the proper position for patient safety. In addition, this relieves some of the burden on the medical team (especially those who may be untrained or inexperienced) providing assistance to the patient.
Deep learning is a class of machine learning techniques employing representation learning methods that allow machines to be given raw data and determine the representation required for data classification. Deep learning uses a back-propagation algorithm for changing internal parameters (e.g., node weights) of the deep learning machine to determine the structure in the dataset. Deep learning machines may utilize a variety of multi-layer architectures and algorithms. For example, while machine learning involves identifying features to be used to train a network, deep learning processes raw data to identify features of interest without external identification.
Deep learning in a neural network environment includes a number of interconnected nodes called neurons. Input neurons activated by external sources activate other neurons based on connections to those other neurons controlled by machine parameters. The neural network functions in a manner based on its own parameters. Learning improves machine parameters and, in general, improves connections between neurons in a network so that the neural network functions in a desired manner.
Deep learning with convolutional neural networks uses convolutional filters to segment data to locate and identify learned observable features in the data. Each filter or layer of the CNN architecture transforms the input data to increase the selectivity and invariance of the data. This abstraction of the data allows the machine to focus on the features in the data that it attempts to classify and ignore irrelevant background information.
The operation of deep learning is based on the understanding that many datasets include high-level features, which in turn include low-level features. For example, when examining an image, rather than looking for an object directly, it is more efficient to look for edges, which form motifs, which form parts, which in turn form the object being sought. These hierarchies of features can be found in many different forms of data, such as speech and text.
Learned observable features include objects and quantifiable regularities that the machine learns during supervised learning. A machine provided with a large set of well-classified data is better equipped to distinguish and extract the features pertinent to successful classification of new data.
Deep learning machines that utilize transfer learning can correctly connect data features to certain classifications affirmed by a human expert. Conversely, the same machine may update the parameters for classification when a human expert points out a classification error. For example, settings and/or other configuration information can be guided by learned use of settings and/or other configuration information, and as a system is used more (e.g., repeatedly and/or by multiple users), the number of variations and/or other possibilities for settings and/or other configuration information can be reduced for a given situation.
For example, an exemplary deep learning neural network may be trained on an expert-classified data set that has been further annotated for object localization. This data set builds the initial parameters of the neural network, and this constitutes the supervised learning stage. During the supervised learning stage, the neural network may be tested on whether the desired behavior has been achieved.
Once the desired neural network behavior has been achieved (e.g., the machine has been trained to operate according to specified thresholds, etc.), the machine may be deployed for use (e.g., the machine is tested using "real" data, etc.). During operation, the neural network classifications may be confirmed or rejected (e.g., by an expert user, an expert system, a reference database, etc.) to continue improving neural network behavior. The exemplary neural network is then in a state of transfer learning, in that the classification parameters determining the neural network's behavior are updated based on ongoing interactions. In some examples, the neural network may provide direct feedback to another process. In some examples, the data output by the neural network is first buffered (e.g., via the cloud, etc.) and validated before being provided to another process.
Deep learning machines using convolutional neural networks (CNNs) may be used for image analysis. Stages of CNN analysis may be used for face recognition in natural images, computer-aided diagnosis (CAD), etc.
High-quality medical image data may be acquired using one or more imaging modalities, such as x-ray, computed tomography (CT), molecular imaging and computed tomography (MICT), magnetic resonance imaging (MRI), and the like. Medical image quality is often affected not by the machine producing the image but by the patient. For example, patient movement during an MRI can create blurred or distorted images that can prevent accurate diagnosis.
Interpretation of medical images, regardless of quality, is only a recent development. Medical images are largely interpreted by physicians, but these interpretations may be subjective and affected by the physician's experience in the field and/or state of fatigue. Image analysis via machine learning may support a healthcare practitioner's workflow.
For example, deep learning machines may provide computer-aided detection support to improve their image analysis with respect to image quality and classification. However, issues faced by deep learning machines when applied to the medical field often lead to numerous misclassifications. For example, deep learning machines must overcome small training data sets and require iterative adjustments.
For example, deep learning machines may be used to determine the quality of medical images with minimal training. Semi-supervised and unsupervised deep learning machines may be used to quantitatively measure quality aspects of images. For example, a deep learning machine may be utilized after an image has been acquired to determine whether the quality of the image is sufficient for diagnosis. Supervised deep learning machines may also be used for computer-aided diagnosis. For example, supervised learning may help reduce susceptibility to misclassification.
The deep learning machine may utilize transfer learning when interacting with a physician to offset the small data sets available in supervised training. These deep learning machines may improve their computer-aided diagnosis over time through training and transfer learning.
Referring now to fig. 1, as also disclosed in U.S. Patent No. 11,410,341 (the '341 patent), entitled "System And Method For Visualizing Placement Of A Medical Tube Or Line," the entire contents of which are expressly incorporated herein by reference for all purposes, an example condition comparator apparatus 100 is shown as including a plurality of inputs 110, 115 (e.g., medical images or medical image data), an Artificial Intelligence (AI) system 120, and an output comparator 130. Each input 110, 115 is provided to the AI system 120, which classifies images and/or other information in the respective input 110, 115 to identify a condition in the input 110, 115 and generates an indication of the identified condition based on the input 110, 115. In certain embodiments, the AI system 120 can classify images and/or other information in the respective inputs 110, 115 to identify a medically placed catheter, tube, or line (e.g., chest tube, nasogastric tube, endotracheal tube, vascular line, peripherally inserted central catheter, central venous access catheter, pulmonary artery catheter, etc.), and to identify anatomical reference markers related to the catheter, tube, or line and its desired placement. Using the example comparator device 100, it may be determined whether the end of a catheter, tube, or line is properly placed in the proper location or region, or within an anatomical structure/anatomical region of interest of the patient, relative to a patient coordinate system (fig. 5) defined by one or more detected anatomical reference markers. In particular, both the end of the catheter, tube, or line and the anatomical reference marker may be located, and it may be determined, using the determined patient coordinate system, whether the end of the tube or line is correctly placed relative to the anatomical reference marker. The distance between the end of the tube or line and the anatomical reference marker may be measured via the patient coordinate system, using absolute or patient-specific distance units, to determine whether the end of the catheter, tube, or line is properly placed. Confidence metrics (e.g., for the calculated distance, for the determination of the presence of a medically placed catheter, tube, or line, for the accuracy of the detected end of the catheter, tube, or line, for the detection of anatomical reference markers, etc.) may be calculated and/or provided via user-perceptible notifications, or stored for further reference. In addition, a notification or alarm may be provided as to whether the medically placed catheter, tube, or line is properly placed. If the tube or line is not properly placed, additional instructions relating to moving the catheter, tube, or line in a certain direction may be provided via the patient coordinate system.
Further, using the example comparator device 100, it may be determined whether any complications are detected within the patient anatomy, where the detected complications may be non-biological (e.g., an indwelling guidewire or another failure arising from the catheter, tube, or line) or biological (e.g., pneumothorax, hematoma, etc.). If a complication is detected, a notification may be provided to the physician regarding the presence and location of the complication relative to the patient coordinate system.
Fig. 2 illustrates an example clinical progress analysis device 200 that may be configured based on the example condition comparator device 100 of fig. 1. The example apparatus 200 includes a data source 210, an Artificial Intelligence (AI) system 220, a data store 230, a comparator 240, an output generator 250, and a trigger 260. The inputs 110, 115 may be provided to the AI system 220 by the data source 210 (e.g., a storage device incorporated into and/or otherwise connected to the apparatus 200, an imaging device, etc.).
The example AI system 220 processes the inputs over time to associate the inputs from the data source 210 with classifications. Accordingly, the AI system 220 processes the input image data and/or other data to identify a condition in the input data and classifies the condition according to one or more states (e.g., catheter, tube, or line present; catheter, tube, or line absent; anatomical reference marker present or absent; a patient coordinate system (figs. 6 and 7) determined using anatomical reference markers and/or anatomical landmarks to define proper placement of the catheter, tube, or line; misplacement of the catheter, tube, or line; presence of complications; absence of complications), as specified by equations, thresholds, and/or other criteria. In certain embodiments, the AI system 220 processes the input image data and/or other data to detect a medically placed catheter, tube, or line, determine whether the end of the medically placed catheter, tube, or line is properly placed, identify the location of the end of the catheter, tube, or line relative to a patient coordinate system based on anatomical reference markers and/or anatomical landmarks of the patient, and determine the presence of any complications in the patient anatomy bearing on proper placement of the end of the catheter, tube, or line. The output of the AI system 220 can be stored in the data store 230, for example.
Over time, classifications made by the AI system 220 with respect to the same type of input 110, 115 from the data source 210 (e.g., lung MR images of the same patient acquired at times t0 and t1, etc.) may be generated and stored in the data store 230. The classifications are provided to a comparator 240, which compares classifications at two or more different times (e.g., before and after insertion of a catheter, tube, or line) to identify a medically placed catheter, tube, or line and to determine whether the end of the medically placed catheter, tube, or line is properly placed. For example, at time t0 the catheter, tube, or line may not be present in the region of interest, and at time t1 or later the end of the catheter, tube, or line may be placed at a location within the region of interest/acceptable area defined using the patient coordinate system (and may or may not be placed correctly).
Comparator 240 provides a result indicating a trend/progression. In certain embodiments, the comparator 240 provides a result indicative of the placement of the end of the medically placed catheter, tube, or line. The output generator 250 converts the results into outputs, such as alarms, commands, or adjustments to patient care, that can be displayed, stored, or provided to another system for further processing (e.g., point-of-care alarm systems, imaging/radiology workstations, computer-aided diagnosis (CAD) processors, scheduling systems, medical devices, etc.).
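A minimal sketch of such a between-exam check, assuming tip coordinates from times t0 and t1 that have already been expressed in the same patient coordinate system, might look as follows; the tolerance is an illustrative placeholder, not a clinical threshold.

```python
import numpy as np

MIGRATION_TOLERANCE_MM = 10.0  # illustrative review threshold only

def assess_tip_migration(tip_t0_mm, tip_t1_mm, tolerance_mm=MIGRATION_TOLERANCE_MM):
    """Flag catheter-tip migration between two exams of the same patient."""
    delta = np.asarray(tip_t1_mm, float) - np.asarray(tip_t0_mm, float)
    displacement = float(np.linalg.norm(delta))
    return displacement, displacement > tolerance_mm

displacement, flagged = assess_tip_migration((118.0, 60.0), (118.5, 74.0))
if flagged:
    print(f"ALERT: tip moved {displacement:.1f} mm since the prior exam")
```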
Trigger 260 coordinates actions among the data source 210, AI system 220, data store 230, comparator 240, and output generator 250. The trigger 260 may initiate an input of data from the data source 210 to the AI system 220, a comparison by the comparator 240 of results from the data store 230, and an output by the output generator 250. Thus, the trigger 260 acts as a coordinator among the elements of the apparatus 200.
Fig. 3 illustrates an example implementation of the AI system 220 for processing image data used by an AI model to quantify a condition (e.g., placement of a catheter, tube, or line). The example implementation of the AI system 220 enables annotation of one or more images including an organ region and a region of interest within the organ region. The example AI system 220 of fig. 3 includes an image segmenter 1010, a mask combiner 1020, a distance computer 1030, and an example condition comparator 1040.
The example image segmenter 1010 identifies a first mask and a second mask in an input image. For example, the image segmenter 1010 processes the image to segment the region of interest within the organ region identified in the image to obtain the first mask. The first mask is a segmentation mask, i.e., a filter that includes the region of interest in the image and excludes the rest of the image. For example, a mask may be applied to the image data to exclude all regions except the region of interest. The mask may be obtained using a convolutional neural network model (e.g., a generative adversarial network, etc.). The image segmenter 1010 further processes the image to segment the organ region according to one or more criteria to obtain the second mask. For example, the second mask may represent the organ region, an area of the organ region outside the region of interest, and so on.
For example, if the organ region is a lung (and surrounding regions such as the trachea) and the region of interest is a tube or line identified in the trachea, a first mask is generated to identify the medically placed tube or line and a second mask is generated to identify the entire organ region. In another embodiment, if the organ region is the stomach and the region of interest is a tube or line identified in the stomach, a first mask is generated to identify the medically placed tube or line and a second mask is generated to identify the entire organ region. For example, if the organ region is the heart (and surrounding regions such as veins or other vasculature) and the region of interest is a catheter or line identified in veins or other vasculature near the heart, a first mask is generated to identify the medically placed catheter or line and a second mask is generated to identify the entire organ region. Thus, with respect to a medically placed catheter, tube, or line, a first mask is generated for the catheter, tube, or line, and a second mask is generated for the entire organ region (e.g., vasculature, heart, lung, stomach, trachea, chest, pleural cavity, etc.) in which the catheter, tube, or line is placed.
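As an illustrative sketch only, the first and second masks might be applied with simple boolean arrays as follows; the hand-made masks stand in for the output of a segmentation network.

```python
import numpy as np

def apply_masks(image, tube_mask, organ_mask):
    """Apply the first (tube/line) and second (organ-region) masks,
    zeroing everything outside each region."""
    tube_only = np.where(tube_mask, image, 0)    # first mask: keep only the line
    organ_only = np.where(organ_mask, image, 0)  # second mask: keep the organ
    coverage = tube_mask.sum() / max(organ_mask.sum(), 1)  # tube fraction of organ
    return tube_only, organ_only, float(coverage)

img = np.random.rand(8, 8)
tube = np.zeros((8, 8), bool);  tube[2:6, 3] = True
organ = np.zeros((8, 8), bool); organ[1:7, 1:7] = True
_, _, frac = apply_masks(img, tube, organ)
print(f"tube occupies {frac:.1%} of the organ region")
```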
The example mask combiner 1020 combines the first and second masks and associates the areas with annotation terms in the image. For example, the annotation may be a relative qualifying term used to produce the quantification. For example, a mask area may be combined with descriptive terms (such as blurred, scattered, dense, etc.) to calculate relative density values for the region of interest and the organ area in the image. For example, image areas (e.g., areas from frontal and lateral images, etc.) may be combined to produce a volumetric measure.
The example distance computer 1030 determines the distance between the end of the identified tube or line and the anatomical reference marker (or determines the position of the tube or line relative to the reference marker). The distance may be calculated by identifying one or more attributes, structures, or landmarks (such as the carina, the transverse processes, and/or the spine) within the image.
In one exemplary embodiment for operating the example distance computer 1030, the anatomy presented in the images 1701 and/or 1702 may be used in a variety of ways to form the patient coordinate system 1700 (figs. 6 and 7) for providing distance information. In a first exemplary embodiment, the AI system 220, the example distance computer 1030, and/or the processor 1312 (fig. 4) may define one or more local or patient coordinate systems 1700 proximate to the carina 1712 by detecting and displaying the midpoints of one or more identified or first anatomical landmarks (e.g., the patient's vertebral bodies/vertebrae 1704 and the "vertical axis" 1706 they form) within the anatomical structure represented within the image 1701. The AI system 220 and/or processor 1312 may similarly define a "horizontal axis" 1708 formed by the midpoints of one or more other or second anatomical landmarks (e.g., the transverse processes 1710 of the vertebral bodies/vertebrae 1704). Such a horizontal axis 1708 and vertical axis 1706 are produced at each vertebral level. This embodiment selects a local vertical axis 1706 to be used for the measurement based on one or more vertebrae 1704 near the location of the carina 1712 and/or the end of the device. The detected horizontal axis 1708 and vertical axis 1706 may be fine-tuned to ensure that the two directions are exactly perpendicular to each other. Such a local or patient coordinate system 1700 may be useful in cases of severe spinal conditions, where the shape of the spine deviates significantly from a straight line, and thus the local "vertical" axis may differ substantially at different locations within the patient anatomy.
According to another exemplary embodiment of the present disclosure, to define a vertical axis 1706 for a thoracic image by employing the spinal axis, the AI system 220/processor 1312 may identify, as associated first anatomical landmarks, the midpoint of the upper edge of the uppermost thoracic vertebra 1704 (typically the first thoracic vertebra) and the midpoint of the lower edge of the lowermost thoracic vertebra 1704 (typically the twelfth thoracic vertebra), such that a line extending between them can be used as the vertical axis 1706 of the coordinate system 1700. Further, to define the horizontal axis 1708, the AI system 220/processor 1312 can calculate a straight line orthogonal to the vertical axis 1706 that extends through the carina 1712, e.g., a second anatomical landmark. In a modified version of this process, the AI system 220/processor 1312 may detect the edges of a plurality of vertebrae 1704 and calculate the midpoint of each vertebra 1704, and then fit a straight line to the set of points. In this process, the AI system 220/processor 1312 may determine the overall angle of patient rotation and adapt the utilized coordinate system to the patient's position.
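The straight-line fit and patient-rotation estimate described above can be sketched with a total-least-squares fit (via SVD) to the vertebral-body midpoints; the coordinates below are hypothetical and the sketch is illustrative rather than the claimed method.

```python
import numpy as np

def fit_patient_axes(vertebra_midpoints):
    """Fit a line through vertebral-body midpoints to get the patient's
    vertical axis, the orthogonal horizontal axis, and the rotation angle
    relative to the image's vertical direction."""
    pts = np.asarray(vertebra_midpoints, float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    vertical = vt[0] / np.linalg.norm(vt[0])   # principal direction = spine
    if vertical[1] < 0:                        # orient superior->inferior (y down)
        vertical = -vertical
    horizontal = np.array([-vertical[1], vertical[0]])  # exactly perpendicular
    rotation_deg = float(np.degrees(np.arctan2(vertical[0], vertical[1])))
    return vertical, horizontal, rotation_deg

v, h, rot = fit_patient_axes([[100, 40], [103, 62], [106, 85], [109, 107]])
print(f"patient rotated {rot:.1f} degrees from the image vertical")
```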
According to yet another exemplary embodiment of the present disclosure for determining the vertical axis 1706 for a chest image, the AI system 220/processor 1312 may use the position of the trachea 1714 detected by the AI system as the associated anatomical landmark. One potential embodiment defines the vertical axis between the carina 1712 (e.g., a first anatomical landmark) and the uppermost midpoint of the trachea 1714 (e.g., a second anatomical landmark). An alternative embodiment segments the entire trachea 1714 and fits a straight line to the detected tracheal points. To define the horizontal axis 1708, the AI system 220/processor 1312 calculates a line orthogonal to the vertical axis 1706 through the carina 1712.
In yet another exemplary embodiment of the present disclosure, the patient coordinate system 1700 may be determined by the AI system 220/processor 1312 using a deep-learning regression model that predicts the patient rotation angle directly from the entire x-ray image (e.g., image 1701).
The example condition comparator 1040 uses the patient coordinate system to compare the distance or measured position of the catheter tip against a preset distance or desired position (e.g., according to predetermined rules) for the type of catheter, tube, or line and/or the anatomy/anatomical region/region of interest in which the catheter, tube, or line is placed. The preset distance may be provided in absolute units or in patient-specific units (described below). Based on this comparison, the example condition comparator 1040 may determine whether the end of the catheter, tube, or line is properly positioned relative to the anatomical reference landmarks. The determination, and the patient coordinate system used to make it, may be annotated onto the medical image in order to provide the physician with direct and clear information about the catheter and the position of the catheter tip, and about any positional changes that need to be made to the catheter tip for proper placement, and/or to detect any movement of the catheter tip over time when separate images of the region of interest and the catheter are obtained at different times.
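A minimal sketch of the comparator's rule check follows; the device types and distance bounds are invented placeholders for the predetermined rules mentioned above and carry no clinical meaning.

```python
# Hypothetical placement rules: allowed tip-below-carina distance per device,
# expressed in patient-specific vertebral-body (VB) units.
PLACEMENT_RULES_VB = {"CVC": (0.5, 2.0), "PAC": (1.0, 3.0)}

def check_placement(device: str, tip_below_carina_vb: float) -> str:
    """Compare the measured tip position against the preset range."""
    lo, hi = PLACEMENT_RULES_VB[device]
    if tip_below_carina_vb < lo:
        return "misplaced: too high"
    if tip_below_carina_vb > hi:
        return "misplaced: too low"
    return "correctly placed"

print(check_placement("CVC", 1.2))  # correctly placed
print(check_placement("CVC", 2.6))  # misplaced: too low
```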
Accordingly, the AI system 220 may be configured to annotate medical images or related sets of medical images for AI/machine learning/deep learning/CAD algorithm training to quantify the condition. Such methods are consistent, repeatable methods that can replace the currently common subjective methods, enabling automatic, accurate detection of the presence of a medically placed catheter, tube, or line and of its placement.
While an example implementation is described in connection with what is disclosed in U.S. Patent No. 11,410,341 (the '341 patent), entitled "System and Method for Visualizing Placement of a Medical Tube or Line," the entire contents of which are expressly incorporated herein by reference for all purposes, the elements, processes, and/or devices described in the '341 patent may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Furthermore, the components disclosed and described herein may be implemented by hardware, machine-readable instructions, software, firmware, and/or any combination of hardware, machine-readable instructions, software, and/or firmware. Thus, for example, the components disclosed and described herein may be implemented by analog and/or digital circuitry, logic circuitry, a programmable processor, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), and/or a Field Programmable Logic Device (FPLD). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of these components is expressly defined herein to include a tangible computer-readable storage device or storage disk storing the software and/or firmware, such as a memory, a Digital Versatile Disk (DVD), a Compact Disk (CD), a Blu-ray disk, etc.
Fig. 4 is a block diagram of an example processor platform 1300 configured to execute at least the instructions of fig. 5 (described below) to implement the example components disclosed and described herein. The processor platform 1300 may be, for example, a server, a personal computer, a mobile device (e.g., a cellular telephone, a smart phone, or a tablet such as an iPad™), a Personal Digital Assistant (PDA), an internet appliance, or any other type of computing device, such as one forming part of a digital imaging system, e.g., an imaging system 1350 that incorporates the processor platform 1300 as the controller 1352 or part of the controller 1352, or that is operatively connected to the controller 1352. The imaging system 1350 further includes a radiation source 1354 (e.g., an X-ray source) and a detector 1356 alignable with the radiation source 1354 (such as by being mounted with the radiation source 1354 to a gantry 1358), each selectively positionable and operable by the controller 1352 to obtain image data utilized by the processor platform 1300 to generate an anatomical or target image 1701 (fig. 6) of the region of interest and a combined image 1702 (fig. 7) for presentation on a connected output device or display 1324.
The processor platform 1300 of the illustrated example includes a processor 1312. The processor 1312 of the illustrated example is hardware. For example, the processor 1312 may be implemented as an integrated circuit, logic circuit, microprocessor, or controller from any desired product family or manufacturer.
The processor 1312 of the illustrated example includes a local memory 1313 (e.g., a cache). The example processor 1312 of fig. 4 executes at least the instructions of fig. 5 to implement the system, infrastructure, display, and associated methods of training and implementing the method 1600, such as the example data source 210, AI system 220, data store 230, comparator 240, output generator 250, trigger 260, and the like. The processor 1312 of the illustrated example communicates with a main memory including a volatile memory 1314 and a non-volatile memory 1316 via a bus 1318. The volatile memory 1314 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 1316 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1314, 1316 is controlled by a memory controller.
The processor platform 1300 of the illustrated example also includes interface circuitry 1320. The interface circuit 1320 may be implemented by any type of interface standard, such as an Ethernet interface, a Universal Serial Bus (USB), and/or a PCI Express interface.
In the illustrated example, one or more input devices 1322 are connected to the interface circuit 1320. Input device(s) 1322 allow a user to input data and commands into processor 1312. The input device may be implemented by, for example, a sensor, a microphone, a camera (still or video camera, RGB or depth, etc.), a keyboard, buttons, a mouse, a touch screen, a touch pad, a trackball, an isopoint, and/or a speech recognition system.
One or more output devices 1324 are also operatively (e.g., wired or wirelessly) connected to the interface circuit 1320 of the illustrated example. The output devices 1324 may be implemented, for example, by a display device (e.g., a Light Emitting Diode (LED) display, an Organic Light Emitting Diode (OLED) display, a liquid crystal display, a cathode ray tube (CRT) display, or a touch screen), a haptic output device, and/or a speaker. Thus, the interface circuit 1320 of the illustrated example typically includes a graphics driver card, a graphics driver chip, or a graphics driver processor.
The interface circuit 1320 of the illustrated example also includes communication devices such as a transmitter, receiver, transceiver, modem, and/or network interface card to facilitate exchange of data with external machines (e.g., any kind of computing device) via a network 1326 (e.g., an Ethernet connection, a Digital Subscriber Line (DSL), a telephone line, a coaxial cable, a cellular telephone system, etc.).
The processor platform 1300 of the illustrated example also includes one or more mass storage devices 1328 for storing software and/or data. Examples of such mass storage devices 1328 include floppy disk drives, hard disk drives, optical disk drives, blu-ray disc drives, RAID systems, and Digital Versatile Disk (DVD) drives.
The encoded instructions 1332 of fig. 4 may be stored in the mass storage device 1328, in the volatile memory 1314, in the non-volatile memory 1316, and/or on a removable tangible computer-readable storage medium such as a CD or DVD.
A flowchart representative of example machine readable instructions for implementing the components disclosed and described herein in the example method 1600 is shown in connection with at least fig. 5. In an example, machine-readable instructions comprise a program executed by a processor (such as processor 1312 shown in example processor platform 1300 discussed in connection with fig. 4). The program(s) may be embodied in machine-readable instructions stored on a tangible computer-readable storage medium, such as a CD-ROM, a floppy disk, a hard drive, a Digital Versatile Disk (DVD), a blu-ray disk, or a memory associated with the processor 1312, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 1312 and/or embodied in firmware or dedicated hardware. Additionally, although the example program is described with reference to a flowchart illustrated in connection with at least FIG. 5, many other methods of implementing the components disclosed and described herein may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Although the flowchart of at least fig. 5 depicts example operations in the order shown, these operations are not exhaustive and are not limited to the order shown. In addition, various changes and modifications may be made by one skilled in the art within the spirit and scope of the disclosure. For example, blocks shown in the flowcharts may be performed in alternative orders or may be performed in parallel.
As described above, at least the example processes of fig. 5 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a Read Only Memory (ROM), a Compact Disk (CD), a Digital Versatile Disk (DVD), a cache, a Random Access Memory (RAM), and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term tangible computer-readable storage medium is expressly defined to include any type of computer-readable storage device and/or storage disk and to exclude propagating signals and transmission media. As used herein, "tangible computer-readable storage medium" and "tangible machine-readable storage medium" are used interchangeably. Additionally or alternatively, at least the example processes of fig. 5 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random access memory, and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and transmission media. As used herein, when the phrase "at least" is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term "comprising" is open-ended.
As described above, these techniques may be used to identify a medically placed tube or line and to determine whether it is properly placed. For example, the medically placed tube or line may be an endotracheal tube, and the correct placement of the endotracheal tube within the trachea (e.g., relative to the carina) may be determined. In another example, the medically placed tube or line may be a nasogastric tube, and the correct placement of the nasogastric tube within the stomach may be determined. In another example, the medically placed line may be a vascular line (e.g., a PICC line, a central venous catheter (CVC), a pulmonary artery catheter (PAC), etc.), and the proper placement of the vascular line within a given vasculature may be determined. In yet another example, the medically placed line may be a chest tube, and the correct placement of the chest tube within the chest (in particular, the pleural space) may be determined. These examples are intended to be non-limiting, and any other tube or line inserted within a region of interest of the body may be identified and its proper placement determined.
Fig. 5 is a flow chart of an embodiment of a method 1600 of operation of the AI system 220 and/or the processor 1312 for determining placement of a medically placed catheter, tube, or line within a region of interest. One or more steps of the method may be performed by the processor platform 1300 of fig. 4, and one or more steps may be performed concurrently or in a different order than shown in fig. 5. The method 1600 includes receiving or obtaining an image 1701 (e.g., a chest image) of a patient including a region of interest (ROI) (block 1602). The image may include a medically placed catheter, tube, or line inserted into the region of interest, and may be provided while the catheter, tube, or line is being placed in the patient. The method 1600 further includes receiving or obtaining input regarding the type of catheter, tube, or line to be detected (e.g., CVC or PAC) and/or the region of interest (e.g., a central vein or artery) into which the catheter, tube, or line is to be inserted (block 1603). The input may be a user-defined distance or rule defining the correct placement of the end of the medically placed catheter, tube, or line relative to a reference or anatomical location (e.g., the carina). In some embodiments, the input may simply be the type of catheter, tube, or line and/or the desired region of interest for its proper placement. Based on this input, certain defined distances or rules (e.g., left, right, above, and/or below a particular anatomical location) may be utilized that define the correct placement of the end of the particular catheter, tube, or line within the particular region of interest (e.g., a particular range of distances of a CVC or PAC above the carina). The method 1600 also includes detecting the catheter, tube, or line within the image using the techniques described above (block 1604). The method 1600 includes identifying the end (e.g., distal end) of the catheter, tube, or line within the region of interest in the image (block 1606). The method 1600 also includes identifying anatomical reference markers within the image (block 1608). The anatomical reference markers will vary based on the type of catheter, tube, or line utilized and the region of interest in which it is disposed. For example, for an endotracheal tube, the anatomical reference marker may be the carina of the trachea. For a nasogastric tube, the anatomical reference marker may be a location within the stomach below the gastroesophageal junction. For a vascular line, the anatomical reference marker may be the carina of the trachea or a location proximate the superior vena cava, the inferior vena cava, or the right atrium, etc. The determination of the location of the anatomical reference markers also includes a determination of the anatomical landmarks or structures utilized in forming the patient coordinate system 1700 (block 1609), which may include the anatomical reference markers previously determined based on the type of catheter, tube, or line utilized and the region of interest within the image 1701, and/or landmarks separate from those reference markers, according to one or more of the previously described processes.
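By way of illustration only, the following Python sketch traces the flow of blocks 1602-1610 with stand-in detector functions; the helper names, pixel scale, and acceptance range are hypothetical assumptions, not the disclosed models:

```python
import numpy as np

# Stand-in detectors: in the disclosed system these would be the AI models of
# blocks 1604-1609; here they return fixed, hypothetical values so the flow runs.
def detect_tube_tip(image):        return np.array([300.0, 260.0])   # blocks 1604/1606
def detect_landmark(image):        return np.array([290.0, 310.0])   # block 1608
def patient_vertical_axis(image):  return np.array([0.05, 1.0])      # block 1609

def assess_image(image, device="CVC", acceptable_above_cm=(-1.0, 2.0)):
    tip = detect_tube_tip(image)
    landmark = detect_landmark(image)
    v = patient_vertical_axis(image)
    v = v / np.linalg.norm(v)
    # Signed offset of the tip above (+) the landmark along the body axis
    # (+y is down in image coordinates, so v points head-to-feet) -- block 1610
    offset_cm = float((landmark - tip) @ v) * 0.02   # assumed 0.2 mm/pixel scale
    lo, hi = acceptable_above_cm
    return {"device": device,
            "tip_above_landmark_cm": round(offset_cm, 2),
            "acceptable": lo <= offset_cm <= hi}     # blocks 1620/1622

print(assess_image(np.zeros((512, 512))))
```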
After identifying the end of the catheter, tube, or line, the anatomical reference marker, and the patient coordinate system 1700, the method 1600 includes measuring the distance between the end of the catheter, tube, or line and an acceptable position or range of positions of the tip relative to the anatomical reference marker (block 1610). The distance may be expressed as the Euclidean length between the end and the desired location or range of locations, and/or as respective horizontal and vertical distance components relative to the patient coordinate system 1700. The method 1600 includes generating a combined image 1702 (figs. 6 and 7) having an indication of the end of the catheter, tube, or line, the anatomical reference marker, the patient coordinate system, and/or the measured distance identified in the combined image 1702, such as in the form of an overlay 1703 (block 1612). Generating the combined image 1702 comprising the x-ray image 1701 and the overlay 1703 includes superimposing various markers on the received patient image 1701. For example, a color code (e.g., a color-coded graphics overlay) may be superimposed on the detected catheter, tube, or line 1705. In certain embodiments, the patient may have more than one catheter, tube, or line, and the catheter, tube, or line of interest is color coded. A graphical marker may be superimposed on the image to indicate the end 1707 of the catheter, tube, or line. Another graphical marker may be superimposed on the image to indicate the anatomical reference marker location 1709. The graphical markers may have the same shape or different shapes; non-limiting examples of such shapes include hollow circles or other oval shapes, hollow rectilinear shapes, hollow triangular shapes, and the like. The different graphical markers and the tube may be color coded with different colors; for example, the graphical marker of the tube or line end, the graphical marker of the anatomical reference marker, and the tube or line may be green, blue, and yellow, respectively. Graphical markers may also be superimposed on the image to indicate the patient coordinate system 1700, the distance 1720 between the end of the tube or line and the anatomical reference marker once calculated, and the horizontal and vertical components 1722, 1724 of the distance 1720 as determined relative to the patient coordinate system 1700. The graphical indication of the distance may also include its measured value. The method 1600 also includes displaying the combined image on a display (block 1614). The combined image may be displayed in real time to medical personnel so that they can adjust the placement of the tube or line as needed. In some implementations, the combined image may be displayed as a DICOM image.
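The horizontal and vertical components can be obtained by projecting the pixel offset onto the patient axes. A minimal sketch, assuming a hypothetical detector calibration in place of the known geometric magnification:

```python
import numpy as np

def decompose_offset(tip_xy, target_xy, vertical_axis, mm_per_px=0.2):
    """Project the tip-to-target offset onto the patient axes (block 1610).

    mm_per_px is an assumed calibration; a real system would derive it
    from the image's geometric magnification.
    """
    v = np.asarray(vertical_axis, float)
    v = v / np.linalg.norm(v)
    h = np.array([-v[1], v[0]])                  # horizontal axis, orthogonal to v
    d = np.asarray(target_xy, float) - np.asarray(tip_xy, float)
    vertical_mm = float(d @ v) * mm_per_px       # component 1724
    horizontal_mm = float(d @ h) * mm_per_px     # component 1722
    euclidean_mm = float(np.hypot(vertical_mm, horizontal_mm))  # distance 1720
    return horizontal_mm, vertical_mm, euclidean_mm

print(decompose_offset((300, 260), (290, 210), (0.0, 1.0)))
# -> (2.0, -10.0, ~10.2) with the assumed 0.2 mm/px scale
```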
In certain embodiments, the method 1600 includes calculating one or more respective confidence metrics (block 1616). The confidence measure may be for the calculated distance, for a determination of the presence of a medically placed device (e.g., a tube or wire), for an accuracy of detecting an end of a tube or wire, and/or for an accuracy of detecting an anatomical reference marker. The confidence measure may include a confidence level or confidence interval. The confidence measures may be stored for future reference. In some embodiments, the method 1600 may include providing one or more of the confidence metrics to the user (block 1618). For example, the confidence measures may be displayed on the combined image or provided on a separate device (e.g., the user's device). In some embodiments, the confidence measures may be written in a standard or private information tag (e.g., DICOM) and visible in a subsequent information system (e.g., PACS) to which the image is sent.
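One simple way such confidence measures might be aggregated and turned into an interval — an illustrative assumption, since the disclosure does not fix a formula — is to let the weakest detection stage dominate and to propagate localization uncertainty into the distance:

```python
import math

def overall_confidence(stage_confidences):
    """Conservative aggregate of per-stage confidences in [0, 1]:
    the weakest stage (tube, tip, landmark, ...) dominates."""
    return min(stage_confidences.values())

def distance_interval(distance_mm, sigma_tip_mm, sigma_landmark_mm, z=1.96):
    """Approximate 95% interval for the measured distance, propagating
    independent tip and landmark localization uncertainties."""
    sigma = math.hypot(sigma_tip_mm, sigma_landmark_mm)
    return distance_mm - z * sigma, distance_mm + z * sigma

conf = overall_confidence({"tube": 0.97, "tip": 0.91, "landmark": 0.88})
print(conf, distance_interval(18.0, 1.5, 2.0))   # 0.88 (13.1, 22.9)
```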
In some embodiments, in determining whether the end of the medically placed tube or line is properly placed (e.g., via artificial intelligence and/or deep learning network models), the method 1600 includes comparing the measured distance between the end of the tube or line and the anatomical reference marker to a desired threshold (such as one stored in memory 1328 for proper placement of the particular medical device during the associated procedure) (block 1620) and determining whether the distance is acceptable (block 1622). The desired threshold may represent an acceptable range of distances between the end of the tube or line and the anatomical reference landmark for the tube or line to be placed correctly. For example, for an endotracheal tube, the desired threshold may be 2 centimeters (cm) to 3 cm above the carina (e.g., the anatomical reference marker). For a nasogastric tube, the desired threshold may be a range of distances below the gastroesophageal junction. For a Central Venous Catheter (CVC), the desired threshold may be a range of distances above or below a reference marker, such as the carina or the right atrium. If the measured distance is not acceptable, the method 1600 includes providing a misplacement indication perceptible to the user (block 1624). The indication may be provided on the display where the combined image is shown or on another device (e.g., the user's device). The indication may be text stating that the tube or line is misplaced. In certain embodiments, the text may be more specific and state that the tube or line is too high (e.g., above the desired 2 cm to 3 cm range for endotracheal tube placement) or too low (e.g., less than 2 cm above the carina for endotracheal tube placement). In certain embodiments, the text may provide additional instructions (e.g., to raise or lower the end of the tube or line by a given distance). In some embodiments, the text may be color coded (e.g., in orange or red) to further indicate misplacement. In some embodiments, the indication may be provided via one or more graphical markers or color coding of the tube or line displayed on the combined image. For example, one or more of the graphical markers (e.g., for the end of the tube or line, for the anatomical reference marker, and/or for the measured distance therebetween) and/or the tube or line may be color coded a particular color (e.g., red or orange) to indicate misplacement. Alternatively or in addition, one or more of the graphical markers may flash if the tube or line is misplaced. If the measured distance is acceptable, the method 1600 includes providing a user-perceptible indication of proper placement of the tube or line (block 1626). The indication may be provided on the display where the combined image is shown or on another device (e.g., the user's device). The indication of proper placement may be text stating that the tube or line is properly placed. In some implementations, the indication of proper placement may be provided via color coding of one or more graphical markers or of the tube or line displayed on the combined image (e.g., all graphical markers and/or the tube or line may be color coded green). In some embodiments, the indication of proper placement or misplacement may be written in a standard or private information tag (e.g., DICOM) and be visible in a subsequent information system (e.g., PACS) to which the image is sent.
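As a hedged sketch of the DICOM tagging mentioned above (assuming the pydicom library; the private tag numbers, creator string, and values are illustrative, not a published convention):

```python
from pydicom.dataset import Dataset

ds = Dataset()                        # in practice, the acquired DICOM dataset
ds.add_new((0x0011, 0x0010), "LO", "PLACEMENT_AI")             # private creator
ds.add_new((0x0011, 0x1001), "LO", "CVC tip 18 mm above carina")
ds.add_new((0x0011, 0x1002), "CS", "CORRECT")                  # CORRECT | MISPLACED
ds.add_new((0x0011, 0x1003), "DS", "0.91")                     # confidence metric
print(ds)
```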
In certain embodiments, the determination of the end of a medically placed tube or wire may be done manually by medical personnel viewing the displayed combined image.
In yet another exemplary embodiment of the method 1600, in block 1625, the AI system 220 and/or the processor 1312 may also be configured to detect any complications, e.g., non-biological complications (such as "never events"), including but not limited to retained guidewires, and/or biological complications, including but not limited to pneumothorax, hemothorax, etc., using, in part, the previously determined information regarding the position of the tube or line, the position of the end of the tube or line, the applicable anatomical reference markers, and/or the patient coordinate system. If one or more complications are detected by the AI system 220 and/or the processor 1312, then in block 1627, an alert may be provided to the user/physician regarding the presence, location, and type of the detected complication, such as by employing a user-perceptible indication similar to that discussed with respect to block 1626.
Figs. 6 and 7 are exemplary embodiments of a combined image 1702 (e.g., a DICOM image), such as produced by the method 1600 in block 1612, that identifies a catheter, tube, or line 1705 within a patient and may be displayed on a display or other output device 1324. As depicted, the combined image 1702 is a chest image 1701 of a patient showing a CVC 1705 disposed within a central vein and includes an overlay 1703. The overlay 1703 includes a first graphical marker 1734 (e.g., a circle) superimposed on the chest image 1701 that indicates the position of the tip or end 1707 of the CVC 1705. A second graphical marker 1736 (e.g., a solid circle with a chevron) superimposed on the chest image indicates a desired placement location 1737 for the tip 1707, which may be determined relative to, or be the same as, a reference or anatomical landmark location 1709 (e.g., vertebrae 1704, carina 1712, or trachea 1714, which may be the same or different for each anatomical location 1709). A third graphical marker 1738 indicates the distance 1720 (e.g., Euclidean distance) between the end 1707 of the CVC 1705 and the second graphical marker 1736, and the measured distance value 1740 accompanies the graphical marker 1738. Optionally as part of the third marker 1738, the overlay 1703 also shows the patient coordinate system 1700, including the vertical axis 1706 and the horizontal axis 1708, together with the horizontal component 1722 and the vertical component 1724 of the distance 1720 determined relative to the patient coordinate system 1700 and accompanied by their individual values 1740.
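A minimal rendering sketch of such an overlay, using matplotlib with hypothetical coordinates standing in for the detected positions:

```python
import numpy as np
import matplotlib.pyplot as plt

image = np.random.rand(512, 512)            # placeholder for x-ray image 1701
tip = (300, 260)                            # hypothetical detected tip 1707 (x, y)
target = (290, 210)                         # hypothetical desired location 1737

fig, ax = plt.subplots()
ax.imshow(image, cmap="gray")
ax.add_patch(plt.Circle(tip, 10, fill=False, color="yellow", lw=2))    # marker 1734
ax.add_patch(plt.Circle(target, 10, fill=False, color="lime", lw=2))   # marker 1736
ax.plot([tip[0], target[0]], [tip[1], target[1]], color="cyan", lw=1.5)  # marker 1738
dist_px = float(np.hypot(tip[0] - target[0], tip[1] - target[1]))
ax.annotate(f"{dist_px:.0f} px", xy=((tip[0] + target[0]) / 2,
                                     (tip[1] + target[1]) / 2), color="cyan")
ax.set_axis_off()
fig.savefig("combined_image_1702.png", bbox_inches="tight")
```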
Exemplary embodiments of the present disclosure may also use different methods to determine the patient coordinate system 1700 and the distance units to be used during the calculation. In the previously described embodiments of the AI system 220 and/or the processor 1312 and associated method 1600, absolute distance units (e.g., millimeters (mm), centimeters (cm), inches (in), etc.) may be used to provide the distance 1720 based on a known geometric magnification of the image 1701 (as shown in figs. 6 and 7), e.g., the horizontal component 1722 and the vertical component 1724 of the distance 1720 between the tip 1707 and the anatomical landmark 1709, which are determined relative to the patient coordinate system 1700 and accompanied by their individual values 1740.
However, the assessment of the correct location of the CVC 1705 may be affected by the patient's positioning and/or size. For example, the patient may be rotated relative to the main axis and/or plane of the image 1701, and thus the measurement of the vertical distance between a point of interest (such as the end 1707 of the CVC 1705) and one or more anatomical reference markers (e.g., the carina) should preferably be made in a direction appropriate to the patient's position and/or orientation. Furthermore, the optimal distance and/or location of the CVC 1705 may depend on the patient's size, such that the optimal distance for a large adult may be too long for a small or pediatric patient. To account for these differences in individual patient size, the distance between the CVC 1705 and/or end 1707 and the marker 1709 may be measured in units 1800 appropriate to the particular patient's size. More specifically, in another exemplary embodiment of the AI system 220/processor 1312 and associated method 1600, certain anatomical reference locations 1709 (such as vertebrae 1704) may be located within the image 1701 and their spacing used to define patient-specific distance units 1800. For example, referring now to figs. 10 and 11, the distance between the center points 1802 of successive intervertebral discs 1806 represented within the image 1701 may be defined as a patient-specific vertebral body unit 1804. The distance between the desired location 1737 and/or anatomical location 1709 (e.g., carina 1712) and the CVC tip 1707 may then be measured in patient-specific vertebral units 1804, as suggested by Baskin et al. in "Cavoatrial Junction and Central Venous Anatomy: Implications for Central Venous Access Tip Position," Journal of Vascular and Interventional Radiology 2008;19:359–365, the entire contents of which are expressly incorporated herein by reference for all purposes. Furthermore, the patient-specific distance units 1800 may also be used in conjunction with the patient midline or vertical axis 1706 of the patient coordinate system 1700 (defined, in one exemplary embodiment, by the center points 1802 of the vertebrae 1704) to help determine whether the end 1707 is to the left or right of the midline 1706, with the horizontal component 1722 expressed in the patient-specific units 1800. Further, the AI system 220 and/or processor 1312 and associated method 1600 may illustrate the positions of the intervertebral discs 1806, which indicate the size of the patient-specific vertebral units 1804 and allow a user to evaluate distances in these units relative to the horizontal axis 1708 and/or vertical axis 1706, i.e., the horizontal component 1722 or the vertical component 1724, or both. In another embodiment, the AI system 220 and/or the processor 1312 may detect the centers 1802 of the intervertebral discs 1806 and display them as points 1808 presented on the image 1701 on the display 1324 and connected to form a partial midline or vertical axis 1706 of the patient. In this way, the spacing between the points 1808 indicates the vertebral units 1804, and the displayed points 1808 may serve as a patient-specific ruler to assess the location of the CVC 1705 using the vertebral units 1804 represented on the midline/vertical axis 1706.
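A brief sketch of this patient-specific scaling, with hypothetical disc-center detections standing in for the AI system's outputs:

```python
import numpy as np

def vertebral_unit_scale(disc_centers_xy):
    """Mean spacing between consecutive disc centers = 1 vertebral unit (px)."""
    gaps = np.linalg.norm(np.diff(np.asarray(disc_centers_xy, float), axis=0), axis=1)
    return float(gaps.mean())

def to_vertebral_units(distance_px, disc_centers_xy):
    return distance_px / vertebral_unit_scale(disc_centers_xy)

# Example: discs roughly 55 px apart, so a 120 px tip offset is ~2.2 units
centers = np.column_stack([np.full(8, 256.0), 100 + 55 * np.arange(8)])
print(f"{to_vertebral_units(120.0, centers):.1f} vertebral units")
```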
In certain other embodiments, confidence metrics (e.g., depicted as confidence levels) in the measured distances generated by the artificial intelligence, such as those calculated in block 1616 of the method 1600, are also displayed. In certain embodiments, the tube 1705, the first graphical marker 1734, and/or the second graphical marker 1736 may be color coded (e.g., yellow, green, and red). The combined image 1702 may include a header (not shown) containing information related to the image 1702. For example, the header may include information regarding the type of tube or line 1705 (e.g., CVC), whether the placement of the tube is correct relative to predetermined placement parameters, such as those stored in memory 1313, and the calculated distance 1720 between the end 1707 of the tube 1705 and the desired placement location 1737 and/or anatomical location 1709. In some embodiments, the header may include an indication of whether a tube or line was detected. In certain implementations, one or more confidence metrics may be displayed on the combined image 1702, such as confidence metrics for one or more of the calculated distance, the determination of the presence of a medically placed tube or line, the accuracy of detecting the end of the tube or line, and/or the accuracy of detecting the anatomical reference markers.
Figs. 8 and 9 are schematic illustrations of exemplary embodiments of a user interface 1752 presented in conjunction with a combined image 1702 that identifies a catheter, tube, or line 1705 within a patient and may be displayed on a display or output device 1324. As depicted in fig. 8, the combined image 1702 is a chest image 1701 of a patient showing a CVC 1705 disposed within a central vein. A first graphical marker 1734 (e.g., a circle) superimposed on the chest image 1701 indicates the position of the end 1707 of the CVC 1705. A second graphical marker 1736 (e.g., a circle) superimposed on the chest image 1701 indicates a desired placement location of the end 1707 and/or a reference or anatomical location 1709 (e.g., the carina). A third graphical marker 1738 indicates the distance 1720 between the end 1707 of the CVC 1705 and the desired location and/or reference or anatomical location 1709, including the horizontal component 1722 and the vertical component 1724. The measured value 1740 and/or the components of the distance 1720 accompany the graphical marker 1738. As described above, in certain embodiments, the tube 1705, the graphical marker 1734, and/or the graphical marker 1736 may be color coded (e.g., yellow, green, and red). The user interface 1752 includes controls 1753 for changing the presentation of the image 1702 on the display 1324, and one or more indications 1754 of the analysis by the AI 120/220 and/or the processor 1312 regarding one or more of detection of the CVC 1705 and the CVC end 1707, the distance of the end 1707 of the CVC 1705 (e.g., indicated by the marker 1734) from the desired location 1737 and/or anatomical location 1709 (e.g., indicated by the marker 1736), and the placement acceptability of the end 1707. This calculated distance can be tracked between different x-rays to assess the change in CVC position over time. As depicted in fig. 8, the indication 1754 indicates that the tip 1707 is detected and in the correct position. In addition, the indication 1754 also lists typical potential complications associated with placement of the device (e.g., CVC) presented in the image 1702 within the anatomical region (e.g., chest) represented within the image 1702 and provides an automatic assessment of each of those conditions. In particular, in fig. 8, the AI system 120/220 does not detect the presence of the listed complications.
With respect to fig. 9, the indication 1754 provides information regarding the results of the analysis of the image 1702 by the AI system 120/220, which shows that the end of the tube or line is not properly positioned relative to the desired location and/or the reference or anatomical location 1709, and that a complication in the form of a hydrothorax has been detected and located in the right lung. In certain implementations, the indication 1754 may state that the tube or line 1705 is correctly placed or misplaced, provide an indication of the position error, or provide instructions to correct the position.
From the foregoing, it should be appreciated that the above-disclosed methods, apparatus, and articles of manufacture have been disclosed to monitor, process, and improve the operation of imaging and/or other medical systems using a variety of deep learning and/or other machine learning techniques.
Thus, certain examples facilitate image acquisition and analysis via a portable imaging device at the point of care, such as at the point of patient imaging. If an image should be re-captured, further analysis performed immediately, and/or other critical findings explored sooner rather than later, the example systems, devices, and methods disclosed and described herein can facilitate such actions to automate analysis, simplify workflow, and improve patient care.
Some examples provide specially configured imaging devices that can acquire images and operate as decision support tools at the point of care for a critical care team. Certain examples provide imaging devices that serve as medical equipment to provide and/or facilitate diagnosis at the point of care, to detect radiological findings, and the like. The device may trigger critical alerts for radiologists and/or critical care teams to draw immediate attention to the patient. The device also enables classification of patients following their examination (such as in a screening environment), where a negative test allows the patient to go home, while a positive test requires the patient to be seen by a physician before going home.
In some examples, mobile devices and/or cloud products enable vendor-neutral solutions, providing point-of-care alerts on any digital x-ray system (e.g., fully integrated, upgrade kit, etc.). In some examples, embedded AI algorithms executing on mobile imaging systems (such as mobile x-ray machines and the like) provide point-of-care alerts in real time during and/or after image acquisition.
By hosting the AI on the imaging device, a mobile x-ray system may be used in rural areas, for example, where there is no hospital information technology network, or even on mobile trucks that bring imaging to a patient community. In addition, if there is a long delay in sending the image to a server or cloud, the AI may instead execute on the imaging device and return its output to the device for further action. Rather than having the x-ray technologist move on to the next patient, with the x-ray device no longer at the bedside with the clinical care team, image processing, analysis, and output can occur in real time (or substantially real time, allowing for some data transfer/retrieval, processing, and output latency) to provide relevant notifications to the clinical care team while the team and the device are still at or near the patient. For example, trauma situations require rapid treatment decisions, and certain examples mitigate the delay introduced by other clinical decision support tools.
Mobile X-ray systems travel throughout a hospital to the patient's bedside (e.g., emergency room, operating room, intensive care unit, etc.). Within a hospital, network communications may be unreliable in "dead" zones (e.g., basements, rooms with electrical signal interference or blockage, etc.). If, for example, an X-ray device relies on establishing a Wi-Fi connection to push an image to a server or cloud hosting an AI model and then waits to receive the AI output back, the patient is put at risk should a critical alert fail to arrive when needed. In addition, if a network or power interruption affects communication, the AI operating on the imaging device may continue to function as a standalone mobile processing unit.
Examples of alerts generated for general radiology may include critical alerts (e.g., for mobile x-ray, etc.), such as tube and line placement, pleural effusion, lung lobe collapse, retained CVC guidewire, pneumoperitoneum, mediastinal effusion, pneumonia, etc.; screening alerts (e.g., for stationary x-ray, etc.), such as tuberculosis, pulmonary nodules, etc.; and quality alerts (e.g., for mobile and/or stationary x-ray, etc.), such as patient positioning, clipped anatomy, insufficient technique, image artifacts, etc.
Thus, certain examples improve the accuracy of artificial intelligence algorithms. Some examples take into account patient medical information as well as image data to more accurately predict the existence of critical findings, emergency findings, and/or other problems.
Although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the appended claims.
Technical effects of the disclosed subject matter include providing systems and methods that utilize AI (e.g., a deep learning network) to determine whether a medically placed tube or line is properly placed (e.g., relative to anatomical reference landmarks) within a region of interest. The systems and methods may provide real-time feedback on whether a medically placed tube or line is misplaced in a more accurate and faster manner, permitting quick intervention, if needed, to move the tube or line into proper position for patient safety.
This written description uses examples to disclose the subject matter, including the best mode, and also to enable any person skilled in the art to practice the subject matter, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the presently disclosed subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.

Claims (15)

1. A medical image processing system, the medical image processing system comprising:
a. A display;
b. A processor; and
c. a memory storing processor executable code that, when executed by the processor, causes:
i. receiving an image of a region of interest of a patient, wherein a medical tube or line is disposed within the region of interest;
ii. detecting the medical tube or line within the image;
iii. detecting a reference marker within the region of interest within the image, wherein the reference marker is within the patient;
iv. generating a patient coordinate system for the image;
v. generating a combined image by superimposing a first graphical marker on the image indicating the position of the end of the medical tube or line and superimposing a second graphical marker on the image indicating the position of the reference marker;
vi. generating an indication of the position of the end of the medical tube or line relative to the reference marker in the combined image using the patient coordinate system; and
vii. presenting the combined image on the display.
2. The medical image processing system of claim 1, wherein to generate the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. determining a vertical axis of the patient coordinate system within the image; and
b. a horizontal axis of the patient coordinate system within the image is determined.
3. The medical image processing system of claim 2, wherein to determine the vertical axis of the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. detecting one or more first anatomical landmarks presented in the image; and
b. the vertical axis is determined for the patient coordinate system relative to the one or more first anatomical landmarks.
4. The medical image processing system of claim 3, wherein to generate the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. detecting one or more vertebrae as the one or more first anatomical landmarks within the image;
b. the vertical axis of the patient coordinate system is determined using the one or more vertebrae.
5. The medical image processing system of claim 2, wherein to determine the horizontal axis of the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. detecting one or more second anatomical landmarks presented in the image; and
b. the horizontal axis is determined relative to the one or more second anatomical landmarks.
6. The image processing system of claim 2, wherein to generate the indication of the position of the end of the medical tube or line relative to the reference marker in the combined image using the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. generating, in the combined image, an indication of a vertical distance between the first graphical marker and the second graphical marker along the vertical axis; and
b. generating, in the combined image, an indication of a horizontal distance between the first graphical marker and the second graphical marker along the horizontal axis.
7. The image processing system of claim 2, wherein to generate the indication of the position of the end of the medical tube or line relative to the reference marker in the combined image using the patient coordinate system, the processor executable code, when executed by the processor, causes: a third graphical marker is generated in the combined image, the third graphical marker representing the vertical axis and the horizontal axis of the patient coordinate system in the combined image.
8. The image processing system of claim 1, wherein to superimpose the first graphical marker on the image to indicate a position of an end of the medical tube or line, the processor executable code, when executed by the processor, causes: the first graphical marker is generated to show the location of the entire length of the medical tube or line, including the end of the medical tube or line, in the combined image.
9. The medical image processing system of claim 1, wherein to generate the patient coordinate system, the processor executable code, when executed by the processor, causes: a measurement unit associated with the patient coordinate system is generated, wherein the measurement unit is selected from an absolute measurement unit and a patient-specific measurement unit.
10. The medical image processing system of claim 1, wherein the processor executable code is artificial intelligence.
11. The medical image processing system of claim 10, wherein the artificial intelligence, when executed by the processor, causes: the placement of the end of the medical tube or wire in the image is determined relative to a threshold for proper placement of the end of the medical tube or wire.
12. An imaging system, the imaging system comprising:
a. A radiation source;
b. a detector alignable with the radiation source;
c. a display for presenting information to a user; and
d. a controller connected to the display and operable to control operation of the radiation source and the detector to generate image data, the controller comprising an image processing system comprising:
a. a processor; and
b. a memory storing processor executable code that, when executed by the processor, causes:
i. receiving an image of a region of interest of a patient, wherein a medical catheter, tube or line is disposed within the region of interest;
ii. detecting the medical catheter, tube or line within the image;
iii. detecting a reference marker within the region of interest within the image, wherein the reference marker is within the patient;
iv. generating a patient coordinate system relative to the patient's anatomy within the image;
v. generating a combined image by superimposing a first graphical marker indicative of an end of the medical catheter, tube or line on the image, superimposing a second graphical marker indicative of the reference marker on the image, and superimposing a third graphical marker indicative of the patient coordinate system on the image; and
vi. displaying the combined image on the display.
13. The imaging system of claim 12, wherein to generate the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. determining a vertical axis of the patient coordinate system within the image; and
b. a horizontal axis of the patient coordinate system within the image is determined.
14. The imaging system of claim 12, wherein to generate an indication of the position of the end of the medical tube or line relative to the reference marker in the combined image using the patient coordinate system, the processor executable code, when executed by the processor, causes:
a. generating, in the combined image, an indication of a vertical distance between the first graphical marker and the second graphical marker along the vertical axis; and
b. generating, in the combined image, an indication of a horizontal distance between the first graphical marker and the second graphical marker along the horizontal axis.
15. The imaging system of claim 14, wherein to generate the indication of the position of the end of the medical tube or line relative to the reference marker in the combined image using the patient coordinate system, the processor executable code, when executed by the processor, causes: the third graphical indicia is generated in the combined image, the third graphical indicia representing the vertical axis and the horizontal axis of the patient coordinate system in the combined image.
CN202311567680.0A 2022-11-23 2023-11-23 Artificial intelligence system and method for defining and visualizing placement of a catheter using a patient coordinate system Pending CN118072065A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US63/427,646 2022-11-23
US18/385,448 US20240164845A1 (en) 2022-11-23 2023-10-31 Artificial Intelligence System and Method for Defining and Visualizing Placement of a Catheter in a Patient Coordinate System Together with an Assessment of Typical Complications
US18/385,448 2023-10-31

Publications (1)

Publication Number Publication Date
CN118072065A true CN118072065A (en) 2024-05-24

Family

ID=91096109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311567680.0A Pending CN118072065A (en) 2022-11-23 2023-11-23 Artificial intelligence system and method for defining and visualizing placement of a catheter using a patient coordinate system

Country Status (1)

Country Link
CN (1) CN118072065A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination