WO2024145341A1 - Systems and methods for generating 3d navigation interfaces for medical procedures - Google Patents

Systems and methods for generating 3D navigation interfaces for medical procedures

Info

Publication number
WO2024145341A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
model
patient
view
data
Prior art date
Application number
PCT/US2023/086007
Other languages
French (fr)
Inventor
Mitchell DOUGHTY
Mark GARIBALDI
Michael Jones
Sida LI
Richard Mahoney
Govinda PAYYAVULA
Simon P. Dimaio
Original Assignee
Intuitive Surgical Operations, Inc.
Priority date
Filing date
Publication date
Application filed by Intuitive Surgical Operations, Inc.
Publication of WO2024145341A1

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/25User interfaces for surgical systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/102Modelling of surgical devices, implants or prosthesis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/105Modelling of the patient, e.g. for ligaments or bones
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/107Visualisation of planned trajectories or target regions
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2051Electromagnetic tracking systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2061Tracking techniques using shape-sensors, e.g. fiber shape sensors with Bragg gratings
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2074Interface software
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • A61B2034/301Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364Correlation of different images or relation of image positions in respect to the body
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • A61B2090/372Details of monitor hardware
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • A61B2090/374NMR or MRI
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • A61B2090/376Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/50Supports for surgical instruments, e.g. articulated arms
    • A61B2090/502Headgear, e.g. helmet, spectacles
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/46Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B8/461Displaying means of special interest
    • A61B8/466Displaying means of special interest adapted to display 3D data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Definitions

  • Minimally invasive medical techniques are intended to reduce the amount of tissue that is damaged during medical procedures, thereby reducing patient recovery time, discomfort, and harmful side effects.
  • Such minimally invasive techniques may be performed through natural orifices in a patient anatomy or through one or more surgical incisions. Through these natural orifices or incisions, physicians may insert minimally invasive medical instruments (including surgical, diagnostic, therapeutic, and/or biopsy instruments) to reach a target tissue location.
  • One such minimally invasive technique is to use a flexible and/or steerable elongate device, such as a flexible catheter, which can be inserted into anatomic passageways and navigated toward a region of interest within the patient anatomy.
  • Past medical techniques use two-dimensional displays/interfaces when navigating a patient volume, and when determining where to perform specific procedures (e.g., biopsies, ablation, etc.).
  • displays/interfaces may lack details useful for the physician, especially when performing a complex procedure.
  • a two-dimensional display may require a physician to mentally map two-dimensional information to a mental/imagined three-dimensional construct to determine where to proceed, which is mentally taxing on a physician and can lead to mistakes during operation.
  • conventional displays or interfaces may fail to provide adequate information to a physician on their surroundings when performing a medical procedure. As such, a physician may be unaware of important details during the medical procedure.
  • a computer-implemented method for generating a three-dimensional (3D) navigation interface for a robotically-assisted medical procedure is provided.
  • the method may be implemented via one or more local or remote processors, servers, sensors, transceivers, memory units, and/or other electronic or electrical components.
  • the method may include: (i) receiving, by one or more processors, a 3D model representative of a volume of a patient; (ii) receiving, by the one or more processors, two-dimensional (2D) data representative of one or more 2D images corresponding to at least a portion of the volume of the patient; (iii) generating, by the one or more processors, co-registered operation data relating the 3D model to the 2D data; (iv) generating, by the one or more processors, a 3D navigation interface, including generating a display of at least a portion of the 3D model based on the co-registered operation data; and (v) causing, by the one or more processors, a display device to display the 3D navigation interface to a user.
  • a system for generating an interactive view for a robotically-assisted medical procedure may include one or more processors; a communication unit; a display device; and a non-transitory computer-readable medium coupled to the one or more processors and the communication unit and storing instructions thereon that, when executed by the one or more processors, cause the system to: (i) receive a 3D model representative of a volume of a patient; (ii) generate a 2D view of the volume of the patient, the 2D view representative of a 2D imaging modality when positioned at a particular apparatus projection angle; (iii) determine a display orientation for the 2D view relative to the user; (iv) register the 2D view to the 3D model such that both the 2D view and the 3D model share the display orientation; (v) cause the display device to simultaneously display the 2D view and the 3D model to a user in accordance with the shared display orientation; (vi) receive control input from the user; and (vii) update at least the 2D view of the volume of the patient based on the control input.
  • FIGS. 2A and 2B are diagrams of different views of an example environment in which a 3D model of an internal volume for a patient and 2D images are displayed to assist a user in performing a medical procedure, according to some examples.
  • FIG. 5 is an example flow diagram for generating a 3D navigation interface for a robotically-assisted medical procedure, according to some examples.
  • FIG. 6 is another example flow diagram for generating a 3D navigation interface for a robotically-assisted medical procedure, according to some examples.
  • FIG. 8B is a simplified diagram of a medical tool within the flexible elongate device of FIG. 8A, according to some examples.
  • the systems and methods described herein may provide a number of improvements through the generation and use of a 3D navigation environment.
  • the 3D navigation environment may allow a user to navigate the environment as described herein even when not present in the physical environment.
  • additional users may be able to observe a process in real time without interrupting a first (e.g., primary) user.
  • HMD: head-mounted display
  • 3D information may improve the ergonomics by allowing a user a free and natural range of motion to function as though the user is in a normal environment while still providing the benefits of a virtual environment.
  • a system may improve the mental mapping by a user by reducing reliance on a 2D display and instead displaying the 3D model in conjunction with 2D information where appropriate, such that a user may rotate, modify, and otherwise adjust the display to a preferred level of comfort.
  • the medical instrument 104 is an instrument configured and prepared to be manipulated by a user (by way of the robotic-assisted platform 102 and/or the HMD 110) when performing a medical procedure on a patient.
  • the medical instrument 104 may be a flexible elongate device (e.g., a catheter), as described in more detail with regard to FIGS. 8A and 8B below.
  • the medical instrument 104 includes, is part of, or is a medical instrument as described in more detail below with regard to FIG. 7.
  • the HMD 110 is a device designed to be mounted on a user’s head, and to display information to the user in an extended reality (XR) view 110R.
  • the HMD 110 may use XR techniques such as by presenting a mixed reality (MR) view, an augmented reality (AR) view, a virtual reality (VR) view, etc.
  • the HMD 110 includes one or more processors, a display device, memory, sensors, controllers, etc., and may be communicatively coupled to the robotic-assisted platform 102 and/or medical instrument 104.
  • processors of the HMD 110 may perform various operations as described herein and may cause the display device to generate, display, modify, or otherwise manipulate elements of the XR view 110R based on movements or indications from the user.
  • the HMD 110 may receive feedback, inputs, and/or indications from the robotic-assisted platform 102 and/or medical instrument 104 (and/or other components not shown in FIG. 1), and may generate, display, modify, or otherwise manipulate the elements of the XR view 110R.
  • the HMD 110 may determine that the user is indicating to zoom in or out, respectively, on an element, and may subsequently modify the XR view 110R accordingly.
  • a user may use multiple methods for transmitting manual inputs 106 according to the above interchangeably.
  • a user may use movement sensors to provide manual input 106 by way of hand gestures, before switching to a mouse for finer control.
  • the XR view 110R may include a 3D model 112R of an internal volume of the patient.
  • the internal volume may be or include a particular organ (e.g., a lung), a bodily system (e.g., the respiratory system), a larger area of a patient (e.g., the chest of a patient), the entirety of the patient, etc.
  • the HMD 110 displays various airways in the lungs of a patient.
  • the 3D model 112R may include one or more landmarks 116R, such as a lesion or other similarly identifiable feature.
  • the display device 210 may generate and/or display a path to the landmark 216.
  • the path may be generated along the centerlines of the modeled pathways.
  • the display device 210 may calculate a path using the centerlines of the airways (a shortest-path sketch over such a centerline graph follows this list).
  • the medical instrument 204 and/or a computing device associated with the medical instrument 204 may determine the amount of force applied to the medical instrument 204 and may subsequently stop applying force if the force reaches a threshold quantity.
  • the display device 210 may transmit a signal that causes the real world imaging device 220 to follow a similar movement path (e.g., rotating the actual C-arm).
  • the display device 210 may display a prompt to the user, asking the user to confirm the movement path and/or displaying the movement path before causing the imaging device 220 to follow the path.
  • the computer 320 may receive the shape sensor data 355 at a streaming application 340 and may modify, update, and/or otherwise manipulate the shape sensor data 355 to generate shape sensor data 345.
  • the computer 320 may capture video data from the assembly 302 through use of a recording device (e.g., a capture card) according to a video handler 347, or may otherwise manipulate already captured video data using the video handler 347.
  • the computer 320 may retrieve 3D assets 349 including 3D models, 3D targets, etc. as defined, generated, or otherwise supplied by the user O, a team associated with the computer 320, a patient, etc.
  • the 3D assets may include assets associated with the volume of the patient (e.g., models generated from 3D volume data, instrument video data, etc.), pre-generated internal models, and/or other similar data as described herein.
  • the computer 320 similarly transmits relevant data from the streaming application 340 to the XR device 310 and, by extension, to the user O.
  • the computer 320 may similarly receive and/or transmit the shape sensor data 355 via a TCP/IP socket connection, a Wi-Fi connection, a Bluetooth connection, etc. (a socket-streaming sketch follows this list).
  • the XR device 310 receives the relevant data, such as shape sensor data 345 from the computer 320 via an extended reality application 330.
  • the XR device may generate the imaging modality data 338 based on simulated imaging modalities in the XR application 330 and/or from one or more imaging modalities associated with an imaging device of the assembly 302 (e.g., imaging device 220 as described above with regard to FIGS. 2A and 2B).
  • the imaging modality data 338 may further be based on a desired position and/or orientation of the imaging device (e.g., as indicated by the user and/or depicted by the XR device as a virtual representation of the imaging device in a virtual space).
  • the XR device 310 may retrieve, render, generate, and/or otherwise display 3D models to a user according to model data 332 based on the XR shape sensor data 335, video data 336, 3D assets 339, user input 334, etc.
  • the XR device may additionally generate a navigation interface 310R similar to XR view 110R and/or navigation interface 210R based on the data processed and/or generated by the XR application 330.
  • FIG. 4 depicts an example architecture for a system 400 similar to the architecture and system of FIG. 3.
  • the system 400 is accessed by multiple users, and the XR device and the assembly are directly communicatively coupled rather than connected through an intermediary computing device.
  • the system includes an XR device 410 used at least by a user O and an assembly 402.
  • the XR device 410 and/or the assembly 402 may include, be, or resemble the XR device 310 and/or assembly 302, respectively, as described above with regard to FIG. 3.
  • the assembly 402 may include a system application 450 that functions similarly to the system data streaming application 350 as described with regard to FIG. 3 above.
  • the system 400 may receive input from the user directly (e.g., via one or more input devices associated with the console 403 and/or assembly 402) and/or indirectly (e.g., via one or more controllers, sensors, etc. of the XR device 410) and may update the assembly 402, console 403, system application 450, and/or any stored data accordingly.
  • the imaging data 461 may be 2D or 3D imaging data (e.g., CT data, CBCT data, fluoroscopy data, (Radial) EBUS data, etc.) as described herein.
  • the system application 450 may communicate the data with the XR application 430 via a TCP/IP socket connection, a Wi-Fi connection, a Bluetooth connection, etc.
  • the XR application 430 may function similarly to the XR application 330 as described above with regard to FIG. 3.
  • the model data 432, user inputs 434, shape sensor data 435, video data 436, and/or 3D assets 439 may resemble the corresponding data as described above.
  • the XR application 430 may generate and/or receive 3D data 431 (e.g., patient volume data, pre-operative data such as CT scan data, intra-operative data such as CBCT data, etc.) or 2D data 441 (pre-operative data such as fluoroscopy data, intra-operative data such as R-EBUS data, etc.).
  • the XR application 430 may also receive and/or generate user interface data 444 from the assembly 402 and/or console 403, such as commands for a medical instrument associated with the assembly 402, or XR toolkit elements.
  • the system 400 includes one or more additional users Q via a multiplayer sync module 445.
  • the system 400 may include additional XR devices similar to XR device 410 and/or computing devices for the additional user(s) Q to use.
  • the system 400 automatically displays the navigation interface 410R as generated for the user O and the XR device 410.
  • the system 400 does not register and/or accept commands or instructions from the additional users Q. As such, the additional users Q are only able to observe.
  • any of a computed tomography (CT) scan device, a cone-beam computed tomography (CBCT) scan device, a magnetic resonance imaging (MRI) scan device, a positron emission tomography (PET) scan device, a tomosynthesis device, etc. may generate the 3D model. Further, the device that generates the 3D model may generate the model prior to the medical procedure (e.g., immediately before, hours before, days before, etc.) or during the medical procedure.
  • the robotic-assisted platform receives 2D data representative of one or more 2D images corresponding to at least a portion of the volume of the patient.
  • the 2D images may include 2D x-ray images, such as fluoroscopic images, which may be captured by a C-arm. Additionally or alternatively, the 2D images may include a synthetic 2D image generated from the 3D model and representative of a 2D x-ray image captured by a C-arm.
  • the C-arm may be associated with the environment 100 or may be a separate C-arm that transmits the 2D images to the environment 100.
  • the co-registered operation data may include optical markers and/or fiducials for visual tracking with a 2D or 3D imaging sensor (e.g., an RGB camera, infrared camera, etc.).
  • the co-registered operation data may include a common reference frame that may be, for example, a surgical reference frame for a user, a patient reference frame for a patient, an observer reference frame for an observer, etc.
  • the co-registered operation data may allow the robotic-assisted platform 102 to orientate, overlay, and/or otherwise generate components of the view, such as a 2D view, a 3D model, etc.
  • the robotic-assisted platform 102 generates a 3D navigation interface.
  • generating the 3D navigation interface includes generating a display of at least a portion of the 3D model based on the co-registered operation data.
  • the robotic-assisted platform 102 may display or cause a display device to display the 3D model in a predetermined position relative to the patient.
  • the robotic-assisted platform 102 may display or cause a display device to display the 3D model above the patient (e.g., as depicted in FIGS. 2A and 2B), adjacent to the patient, superimposed with the patient, etc.
  • the robotic-assisted platform 102 may cause the display device to adjust the position of the 3D model based on user inputs. For example, the robotic-assisted platform 102 may ensure that the display device displays the 3D model as remaining in position above the patient when the user moves the view within the 3D navigation interface. Alternatively, the robotic-assisted platform 102 may move the 3D model display in response to receiving an indication from the user to move the display (e.g., the user drags the display elsewhere, the user inputs a particular command to change positioning, etc.).
  • the robotic-assisted platform 102 generates the 3D navigation interface by additionally generating a display of one or more 2D images based at least on the co-registered operation data.
  • the robotic-assisted platform 102 may generate the 3D navigation interface by orienting the 2D images such that an orientation of the 2D images is based at least on the 3D model.
  • the robotic-assisted platform 102 may orient the 2D images and the 3D model such that the 2D images and the 3D model share an orientation from the user perspective, allowing the user to more easily identify shared locations, landmarks, etc. between the 2D images and 3D model.
  • the robotic-assisted platform 102 may orient the 2D images relative to the 3D model such that the 2D images rotate or tilt as the user moves the 3D model (e.g., a top down view of the 2D images and a front view of the 3D model may still rotate as the 3D model rotates).
  • the 3D navigation interface includes a live video feed from an instrument of the environment 100.
  • the 3D navigation interface may include a feed from one or more imaging devices associated with a flexible elongate device as described herein.
  • the live video feed may be a 2D video feed or a 3D video feed.
  • the robotic-assisted platform 102 may cause the display device to align the view of the live video feed with the 3D model.
  • the robotic-assisted platform 102 may generate the 3D navigation interface such that the video feed and the 3D model share an orientation, as described above with regard to the 2D view.
  • the robotic-assisted platform 102 may superimpose some or all of the video feed with the 3D model or otherwise indicate where the video feed is displaying with regard to the 3D model.
  • the robotic-assisted platform 102 may display the video feed in various orientations.
  • the robotic-assisted platform 102 may display the video feed directly behind a patient and/or 3D model (e.g., similar to 2D instrument video 219 as described above with regard to FIGS. 2A and 2B), oriented depending on the current pose of the virtual instrument (e.g., similar to oriented 2D view 218 as described above with regard to FIGS. 2A and 2B), oriented depending on the position of the user, oriented above and/or below the 3D model, oriented superimposed with the 3D model, etc.
  • the robotic-assisted platform 102 further receives sensor data from one or more sensors configured to generate data associated with the patient, the internal volume of the patient, a portion of the internals of the patient, etc.
  • the robotic-assisted platform 102 may receive 2D sensor data or 3D sensor data.
  • the sensor data may include any of: computed tomography (CT) data, cone-beam computed tomography (CBCT) data, catheter data, endoscope video data, magnetic resonance imaging (MRI) data, C-arm data, radial endobronchial ultrasound (EBUS) data, a combination of data types, and/or any other such data as described herein.
  • the robotic-assisted platform 102 additionally or alternatively receives navigation information regarding the patient, the internal volume of the patient, a portion of the internals of the patient, etc.
  • the navigation information may include a historical navigation in the patient.
  • the historical navigation may include navigation history during the current session, navigation history during past sessions for the same patient, generalized and/or normalized navigation history for the volume in general (e.g., various navigation paths taken by the physician in similar cases for the lungs), etc.
  • the robotic-assisted platform 102 may cause the display device to display the navigation history as drawn or otherwise generated paths along the 3D model, each path indicating a past navigation path.
  • each path includes an indication of when the navigation occurred, relative to other paths and/or according to the actual time or date of the navigation.
  • the historical navigation may include the history for past sessions as separate maps, as lists of passed landmarks, as descriptions of distances traveled or turns taken, etc.
  • the robotic-assisted platform 102 may cause the display device to display a subset of the navigation history according to user preferences, user indications, a current user task, etc.
  • the navigation history may additionally or alternatively include a navigation path representative of a recommended path for the instrument in the volume of the patient.
  • a physician may use the robotic-assisted platform 102 or an external computing device to generate the navigation path prior to the medical procedure or during the medical procedure.
  • the robotic-assisted platform 102 may generate a navigation path based on indications from the user, navigation history, etc. using machine learning techniques and/or a trained neural network to predict a preferred path for the user.
  • the navigation information additionally or alternatively includes visual or auditory elements representative of landmarks (e.g., distinct and/or easily recognizable locations in the patient) and/or one or more sampled tissue locations (e.g., locations at which an instrument of robotic-assisted platform 102 has sampled tissue) in the patient.
  • the robotic-assisted platform 102 and/or the display device modifies at least a portion of the 3D navigation interface based on control input(s) received from the user.
  • the control input(s) may modify the 3D navigation interface directly (e.g., moving interface elements, zooming in or out on elements, shifting a user perspective or view, etc.) or responsive to a physical element moving (e.g., modifying model display in response to the user moving an instrument in the patient, etc.).
  • the user may drag a virtual representation of an instrument within the boundary of the 3D navigation interface to indicate where a corresponding instrument is to move within the patient volume.
  • the user may tap an indication of a virtual target in the volume of the patient, and the robotic-assisted platform 102 causes the instrument including the sensor to follow a path to the indicated target.
  • the robotic-assisted platform 102 causes the instrument to follow a path according to the navigation information.
  • the robotic-assisted platform 102 automatically generates a path in response to the user indication using navigation history, landmarks, tissue sample locations, etc. and follows the generated path.
  • the robotic-assisted platform 102 modifies the visibility of parts of the 3D navigation interface depending on the current task, user context, etc. For example, the robotic-assisted platform 102 may determine that at least one element is distracting and/or likely to distract the user. In response, the robotic-assisted platform 102 may dim the visibility of the element (e.g., by reducing the opacity, reducing lighting, washing out colors of the element, etc.) to allow the user to better focus on other elements. In further examples, the robotic-assisted platform 102 instead changes the XR type in response to determining that an element is distracting a user.
  • the robotic-assisted platform 102 may determine that another individual in the user view is distracting the user and may change from an AR view to a VR view that does not show the other individual.
  • the robotic-assisted platform 102 may determine that an object or element is distracting or likely to distract the user based on a user indication, via a trained machine learning algorithm to predict distraction, etc.
  • dimming the visibility of the element or of a broader portion of the interface may occur naturally with and/or in conjunction with changing the XR type.
  • additional second users may observe the 3D navigation interface as the first user performs the medical procedure.
  • the robotic-assisted platform 102 generates a second 3D navigation interface and displays the second interface to the second users.
  • the second 3D navigation interface may include a reduced set of information, such as removing past navigation history, particular patient information, etc.
  • the robotic-assisted platform 102 may modify the second 3D navigation interface in response to inputs from the first user, but may avoid modifying the first 3D navigation interface and/or the second 3D navigation interface in response to inputs from the second user(s).
  • Referring to FIG. 6, a flow diagram depicts an example method 600 for generating a 3D navigation interface for a robotically-assisted medical procedure.
  • although the method 600 is described below with regard to environment 100 and components thereof as illustrated in FIG. 1, it will be understood that other similarly suitable imaging devices and components may be used instead.
  • the robotic-assisted platform 102 receives a 3D model representative of a volume of a patient.
  • the 3D model may be of a particular organ (e.g., the lungs) of a patient or a portion of an organ, a larger system (e.g., the respiratory system) of a patient, an entirety of a patient anatomy, etc.
  • a component of the robotic- assisted platform 102 may generate the 3D model and transmit the model within the environment 100.
  • an element outside the environment 100 may generate the 3D model and transmit the model to the environment 100.
  • any of a computed tomography (CT) scan device, a cone-beam computed tomography (CBCT) scan device, a magnetic resonance imaging (MRI) scan device, a positron emission tomography (PET) scan device, a tomosynthesis device, etc. may generate the 3D model.
  • the device that generates the 3D model may generate the model prior to the medical procedure (e.g., immediately before, hours before, days before, etc.) or during the medical procedure. Similar to FIG. 5, the system may display the 3D model above a patient and/or a view of a patient, adjacent to a patient and/or view, superimposed with a patient and/or view, etc.
  • the view of the patient is a view of the physical patient seen via the display device, such as in an AR view.
  • the view of the patient is a representation of the patient, such as one generated in a VR view.
  • the 3D model includes a virtual apparatus that rotates to match a positioning of a corresponding physical apparatus.
  • the virtual apparatus is offset from the physical apparatus by the same distance the 3D model is offset from the patient view.
  • the virtual apparatus is superimposed with the physical apparatus even though the remainder of the 3D model is not superimposed with the patient view.
  • the imaging modality may include at least one of: (i) C-arm imaging, (ii) CT imaging, (iii) MRI imaging, (iv) CBCT imaging, (v) EBUS imaging, or (vi) any other similarly appropriate imaging modality.
  • the robotic-assisted platform 102 receives an additional image (e.g., a CT image, etc.) and/or generates the 2D view such that the image diverges from the patient volume.
  • the robotic-assisted platform 102 may, in response to determining that the image diverges from the reality of the patient volume, transmit a request for an updated image using the same or a different imaging modality (e.g., a CBCT image to replace a CT image).
  • the robotic-assisted platform 102 determines a display orientation for the 2D view relative to the user.
  • the robotic-assisted platform 102 determines the display orientation for the 2D view based on input from the user, based on a registration or co-registration process, automatically based on the 3D model, based on a patient position, etc.
  • the robotic-assisted platform 102 may reduce the spacing between the component dashes, so that a user continues at a safe speed. In some such examples, the robotic-assisted platform 102 indicates that a user is moving at an unsafe or maximum speed when the dashed line transforms into a solid line.
  • FIGS. 7-9B depict diagrams of a medical system that may be used for manipulating a medical instrument according to any of the methods and systems described above, in some examples.
  • medical system 700 may include a manipulator assembly 702 that controls the operation of a medical instrument 704 in performing various procedures on a patient P.
  • Medical instrument 704 may extend into an internal site within the body of patient P via an opening in the body of patient P.
  • the manipulator assembly 702 may be teleoperated, nonteleoperated, or a hybrid teleoperated and non-teleoperated assembly with one or more degrees of freedom of motion that may be motorized and/or one or more degrees of freedom of motion that may be non-motorized (e.g., manually operated).
  • the manipulator assembly 702 may be mounted to and/or positioned near a patient table T.
  • a master assembly 706 allows an operator O (e.g., a surgeon, a clinician, a physician, or other user) to control the manipulator assembly 702.
  • the master assembly 706 allows the operator O to view the procedural site or other graphical or informational displays.
  • the manipulator assembly 702 may be excluded from the medical system 700 and the instrument 704 may be controlled directly by the operator O.
  • the manipulator assembly 702 may be manually controlled by the operator O. Direct operator control may include various handles and operator interfaces for handheld operation of the instrument 704.
  • the master assembly 706 may be located at a surgeon’s console which is in proximity to (e.g., in the same room as) a patient table T on which patient P is located, such as at the side of the patient table T. In some examples, the master assembly 706 is remote from the patient table T, such as in in a different room or a different building from the patient table T.
  • the master assembly 706 may include one or more control devices for controlling the manipulator assembly 702.
  • the control devices may include any number of a variety of input devices, such as joysticks, trackballs, scroll wheels, directional pads, buttons, data gloves, trigger-guns, hand-operated controllers, voice recognition devices, motion or presence sensors, and/or the like.
  • the master assembly 706 may be or include an extended reality (XR) device, such as a virtual reality (VR) device, an augmented reality (AR) device, a mixed reality (MR) device, or any other such device as described herein.
  • the manipulator assembly 702 supports the medical instrument 704 and may include a kinematic structure of links that provide a set-up structure.
  • the links may include one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place) and/or one or more servo controlled links (e.g., one or more links that may be controlled in response to commands, such as from a control system 712).
  • the manipulator assembly 702 may include a plurality of actuators (e.g., motors) that drive inputs on the medical instrument 704 in response to commands, such as from the control system 712.
  • the actuators may include drive systems that move the medical instrument 704 in various ways when coupled to the medical instrument 704.
  • Actuators can also be used to move an articulable end effector of medical instrument 704, such as for grasping tissue in the jaws of a biopsy device and/or the like, or may be used to move or otherwise control tools (e.g., imaging tools, ablation tools, biopsy tools, electroporation tools, etc.) that are inserted within the medical instrument 704.
  • the manipulator assembly 702 may include or be a robotic-assisted platform as described in more detail above with regard to FIGS. 1-4.
  • the medical instrument 704 may be or include elements of a medical instrument as described above with regard to FIGS. 1-4.
  • Display system 710 may also display an image of the procedural site and medical instruments, which may be captured by the visualization system.
  • the medical system 700 provides a perception of telepresence to the operator O.
  • images captured by an imaging device at a distal portion of the medical instrument 704 may be presented by the display system 710 to provide the perception of being at the distal portion of the medical instrument 704 to the operator O.
  • the input to the master assembly 706 provided by the operator O may move the distal portion of the medical instrument 704 in a manner that corresponds with the nature of the input (e.g., distal tip turns right when a trackball is rolled to the right) and results in corresponding change to the perspective of the images captured by the imaging device at the distal portion of the medical instrument 704.
  • the perception of telepresence for the operator O is maintained as the medical instrument 704 is moved using the master assembly 706.
  • the operator O can manipulate the medical instrument 704 and hand controls of the master assembly 706 as if viewing the workspace in substantially true presence, simulating the experience of an operator that is physically manipulating the medical instrument 704 from within the patient anatomy.
  • the display system 710 may present virtual images of a procedural site that are created using image data recorded pre-operatively (e.g., prior to the procedure performed by the medical instrument system 800) or intra-operatively (e.g., concurrent with the procedure performed by the medical instrument system 800), such as image data created using computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), fluoroscopy, thermography, ultrasound, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like.
  • the virtual images may include two-dimensional, three-dimensional, or higher-dimensional (e.g., including, for example, time-based or velocity-based information) images.
  • one or more models are created from pre-operative or intra-operative image data sets and the virtual images are generated using the one or more models.
  • Medical system 700 may further include operations and support systems (not shown) such as illumination systems, steering control systems, irrigation systems, and/or suction systems.
  • the medical system 700 may include more than one manipulator assembly and/or more than one master assembly.
  • the exact number of manipulator assemblies may depend on the medical procedure and space constraints within the procedural room, among other factors. Multiple master assemblies may be co-located, or they may be positioned in separate locations. Multiple master assemblies may allow more than one operator to control one or more manipulator assemblies in various combinations.
  • the instrument body 912 may be coupled to an instrument carriage 906.
  • the instrument carriage 906 may be mounted to an insertion stage 908 that is fixed within the surgical environment 900.
  • the insertion stage 908 may be movable but have a known location (e.g., via a tracking sensor or other tracking device) within surgical environment 900.
  • Instrument carriage 906 may be a component of a manipulator assembly (e.g., manipulator assembly 702) that couples to the medical instrument 904 to control insertion motion (e.g., motion along an insertion axis A) and/or motion of the distal end 918 of the elongate device 910 in multiple directions, such as yaw, pitch, and/or roll.
  • the instrument carriage 906 or insertion stage 908 may include actuators, such as servomotors, that control motion of instrument carriage 906 along the insertion stage 908.
  • a sensor device 920 which may be a component of the sensor system 708, may provide information about the position of the instrument body 912 as it moves relative to the insertion stage 908 along the insertion axis A.
  • the sensor device 920 may include one or more resolvers, encoders, potentiometers, and/or other sensors that measure the rotation and/or orientation of the actuators controlling the motion of the instrument carriage 906, thus indicating the motion of the instrument body 912.
  • the insertion stage 908 has a linear track as shown in FIGS. 9A and 9B.
  • the insertion stage 908 may have a curved track or a combination of curved and linear track sections.
  • FIG. 9A shows the instrument body 912 and the instrument carriage 906 in a retracted position along the insertion stage 908.
  • the proximal point 916 is at a position L0 on the insertion axis A.
  • the location of the proximal point 916 may be set to a zero value and/or other reference value to provide a base reference (e.g., corresponding to the origin of a desired reference frame) to describe the position of the instrument carriage 906 along the insertion stage 908.
  • the distal end 918 of the elongate device 910 may be positioned just inside an entry orifice of patient P.
  • the instrument body 912 and the instrument carriage 906 have advanced along the linear track of insertion stage 908, and the distal end 918 of the elongate device 910 has advanced into patient P.
  • the proximal point 916 is at a position L1 on the insertion axis A.
  • the rotation and/or orientation of the actuators measured by the sensor device 920 indicating movement of the instrument carriage 906 along the insertion stage 908 and/or one or more position sensors associated with instrument carriage 906 and/or the insertion stage 908 may be used to determine the position L1 of the proximal point 916 relative to the position L0.
  • the position L1 may further be used as an indicator of the distance or insertion depth to which the distal end 918 of the elongate device 910 is inserted into the passageway(s) of the anatomy of patient P.
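
The insertion-depth bookkeeping described for FIGS. 9A and 9B reduces to simple arithmetic once the carriage position along insertion axis A is read from the sensor device. The short Python sketch below illustrates this under the assumption of a hypothetical encoder reading; none of the class or parameter names are from the disclosure.

```python
class InsertionTracker:
    """Tracks insertion depth of the elongate device along insertion axis A."""

    def __init__(self, reference_position_mm: float):
        # Position L0: the proximal point with the carriage retracted, used as
        # the zero/base reference for the desired reference frame.
        self.reference_position_mm = reference_position_mm

    def insertion_depth(self, current_position_mm: float) -> float:
        """Return the depth (L1 - L0) to which the distal end has advanced."""
        return current_position_mm - self.reference_position_mm


# Example usage with a hypothetical carriage position read from an encoder.
tracker = InsertionTracker(reference_position_mm=0.0)           # L0 set to a zero value
depth_mm = tracker.insertion_depth(current_position_mm=142.5)   # carriage at position L1
print(f"Estimated insertion depth: {depth_mm:.1f} mm")
```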
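
As referenced in the excerpt about calculating a path along airway centerlines, one common way to generate such a path is a shortest-path search over a centerline graph. The sketch below uses Dijkstra's algorithm on a toy adjacency structure; it is an illustration of the general technique, not the path-generation method claimed here.

```python
import heapq
from typing import Dict, List, Tuple

# Hypothetical centerline graph: node -> list of (neighbor, segment_length_mm).
CenterlineGraph = Dict[str, List[Tuple[str, float]]]


def shortest_centerline_path(graph: CenterlineGraph, start: str, target: str) -> List[str]:
    """Dijkstra search returning the sequence of centerline nodes from start to target."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, length in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + length, neighbor, path + [neighbor]))
    return []  # no route found


# Toy airway tree: trachea branching toward a lesion-adjacent segment.
airways = {
    "trachea": [("left_main", 50.0), ("right_main", 45.0)],
    "right_main": [("rb1", 30.0)],
    "rb1": [("lesion_segment", 20.0)],
    "left_main": [],
    "lesion_segment": [],
}
print(shortest_centerline_path(airways, "trachea", "lesion_segment"))
```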
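
The excerpts above also mention relaying shape-sensor data between the streaming application and the XR application over a TCP/IP socket connection. The sketch below shows one way such a relay could frame messages (length-prefixed JSON); the framing and helper names are assumptions, not part of the disclosure.

```python
import json
import socket
import struct


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket (blocking)."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed while reading")
        data += chunk
    return data


def send_shape_sensor_sample(sock: socket.socket, sample: dict) -> None:
    """Send one shape-sensor sample as a length-prefixed JSON message."""
    payload = json.dumps(sample).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_shape_sensor_sample(sock: socket.socket) -> dict:
    """Receive one length-prefixed JSON message (e.g., in the XR application)."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))


# Example payload the streaming application might forward to the XR device:
# {"timestamp": 1703872800.0, "points": [[0.0, 0.0, 0.0], [1.2, 0.4, 3.1]]}
```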

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

Systems and methods are described for generating a 3D navigation interface for a robotically-assisted medical procedure. The method may include: (i) receiving a 3D model representative of a volume of a patient; (ii) receiving 2D data representative of one or more 2D images corresponding to at least a portion of the volume of the patient; (iii) generating co-registered operation data relating the 3D model to the 2D data; (iv) generating a 3D navigation interface, including generating a display of at least a portion of the 3D model based on the co-registered operation data; and (v) causing a display device to display the 3D navigation interface to a user. The method may be implemented by a system including a head mounted device displaying the 3D navigation interface in an extended reality view.

Description

SYSTEMS AND METHODS FOR GENERATING 3D NAVIGATION INTERFACES
FOR MEDICAL PROCEDURES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to and the benefit of the filing date of provisional U.S. Patent Application No. 63/477,752 entitled “SYSTEMS AND METHODS FOR GENERATING 3D NAVIGATION INTERFACES FOR MEDICAL PROCEDURES,” filed on December 29, 2022. The entire contents of the provisional application are hereby expressly incorporated herein by reference.
FIELD
[0002] Disclosed examples relate to three-dimensional navigation systems. In particular, the disclosed examples relate to systems and methods for generating and modifying 3D navigation systems for performing medical procedures on a patient.
BACKGROUND
[0003] Minimally invasive medical techniques are intended to reduce the amount of tissue that is damaged during medical procedures, thereby reducing patient recovery time, discomfort, and harmful side effects. Such minimally invasive techniques may be performed through natural orifices in a patient anatomy or through one or more surgical incisions. Through these natural orifices or incisions, physicians may insert minimally invasive medical instruments (including surgical, diagnostic, therapeutic, and/or biopsy instruments) to reach a target tissue location. One such minimally invasive technique is to use a flexible and/or steerable elongate device, such as a flexible catheter, which can be inserted into anatomic passageways and navigated toward a region of interest within the patient anatomy.
[0004] Past medical techniques use two-dimensional displays/interfaces when navigating a patient volume, and when determining where to perform specific procedures (e.g., biopsies, ablation, etc.). However, because the internals of a patient are three-dimensional, such displays/interfaces may lack details useful for the physician, especially when performing a complex procedure. For example, a two-dimensional display may require a physician to mentally map two-dimensional information to a mental/imagined three-dimensional construct to determine where to proceed, which is mentally taxing on a physician and can lead to mistakes during operation. Similarly, conventional displays or interfaces may fail to provide adequate information to a physician on their surroundings when performing a medical procedure. As such, a physician may be unaware of important details during the medical procedure.
SUMMARY
[0005] The following presents a simplified summary of various examples described herein and is not intended to identify key or critical elements or to delineate the scope of the claims.
[0006] In some examples, a computer-implemented method for generating a three-dimensional (3D) navigation interface for a robotically-assisted medical procedure is provided. The method may be implemented via one or more local or remote processors, servers, sensors, transceivers, memory units, and/or other electronic or electrical components. The method may include: (i) receiving, by one or more processors, a 3D model representative of a volume of a patient; (ii) receiving, by the one or more processors, two-dimensional (2D) data representative of one or more 2D images corresponding to at least a portion of the volume of the patient; (iii) generating, by the one or more processors, co-registered operation data relating the 3D model to the 2D data; (iv) generating, by the one or more processors, a 3D navigation interface, including generating a display of at least a portion of the 3D model based on the co-registered operation data; and (v) causing, by the one or more processors, a display device to display the 3D navigation interface to a user.
[0007] In further examples, a system for generating a 3D navigation interface for a robotically- assisted medical procedure is provided. The system may include one or more processors; a communication unit; a display device; and a non-transitory computer-readable medium coupled to the one or more processors and the communication unit and storing instructions thereon that, when executed by the one or more processors, cause the system to: (i) receive a 3D model representative of a volume of a patient; (ii) receive 2D data representative of one or more 2D images corresponding to at least a portion of the volume of the patient; (iii) generate co-registered operation data relating the 3D model to the 2D data; (iv) generate a 3D navigation interface, including generating a display of at least a portion of the 3D model based on the co-registered operation data; and (v) cause the display device to display the 3D navigation interface to a user.
[0008] In still further examples, a method for generating an interactive view for a robotically- assisted medical procedure is provided. The method may be implemented via one or more local or remote processors, servers, sensors, transceivers, memory units, and/or other electronic or electrical components. The method may include: (i) receiving, by one or more processors, a 3D model representative of a volume of a patient; (ii) generating, by the one or more processors, a 2D view of the volume of the patient, the 2D view representative of a 2D imaging modality when positioned at a particular apparatus projection angle; (iii) determining a display orientation for the 2D view relative to the user; (iv) registering the 2D view to the 3D model such that both the 2D view and the 3D model share the display orientation; (v) causing, by the one or more processors, a display device to simultaneously display the 2D view and the 3D model to a user in accordance with the shared display orientation; (vi) receiving, by the one or more processors, control input from the user; and (vii) updating, by the one or more processors, at least the 2D view of the volume of the patient based on the control input.
[0009] In yet further examples, a system for generating an interactive view for a robotically- assisted medical procedure is provided. The system may include one or more processors; a communication unit; a display device; and a non-transitory computer-readable medium coupled to the one or more processors and the communication unit and storing instructions thereon that, when executed by the one or more processors, cause the system to: (i) receive a 3D model representative of a volume of a patient; (ii) generate a 2D view of the volume of the patient, the 2D view representative of a 2D imaging modality when positioned at a particular apparatus projection angle; (iii) determine a display orientation for the 2D view relative to the user; (iv) register the 2D view to the 3D model such that both the 2D view and the 3D model share the display orientation; (v) cause the display device to simultaneously display the 2D view and the 3D model to a user in accordance with the shared display orientation; (vi) receive control input from the user; and (vii) update at least the 2D view of the volume of the patient based on the control input.
[0010] It is to be understood that both the foregoing general description and the following detailed description are illustrative and explanatory in nature and are intended to provide an understanding of the present disclosure without limiting the scope of the present disclosure. In that regard, additional aspects, features, and advantages of the present disclosure will be apparent to one skilled in the art from the following detailed description.
BRIEF DESCRIPTIONS OF THE DRAWINGS
[0011] FIG. 1 is a diagram of an example environment in which a user may perform a medical procedure using a medical device, according to some examples.
[0012] FIGS. 2A and 2B are diagrams of different views of an example environment in which a 3D model of an internal volume for a patient and 2D images are displayed to assist a user in performing a medical procedure, according to some examples.
[0013] FIG. 3 depicts an example architecture for a system that assists a user in performing a medical procedure, according to some examples.
[0014] FIG. 4 depicts another example architecture for a system that assists a user in performing a medical procedure, according to some examples.
[0015] FIG. 5 is an example flow diagram for generating a 3D navigation interface for a robotically-assisted medical procedure, according to some examples.
[0016] FIG. 6 is another example flow diagram for generating a 3D navigation interface for a robotically-assisted medical procedure, according to some examples.
[0017] FIG. 7 is a simplified diagram of a medical system in which techniques disclosed herein may be implemented, according to some examples.
[0018] FIG. 8A is a simplified diagram of a medical instrument system, including a flexible elongate device, which may be used in connection with the techniques disclosed herein, according to some examples.
[0019] FIG. 8B is a simplified diagram of a medical tool within the flexible elongate device of FIG. 8A, according to some examples.
[0020] FIGS. 9A and 9B are simplified diagrams of side views of a patient coordinate space including a medical instrument mounted on an insertion assembly, according to some examples.
[0021] Examples of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating examples of the present disclosure and not for purposes of limiting the same.
DETAILED DESCRIPTION
[0022] In the following description, specific details are set forth describing some examples consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the examples. It will be apparent, however, to one skilled in the art that some examples may be practiced without some or all of these specific details. The specific examples disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one example may be incorporated into other examples unless specifically described otherwise or if the one or more features would make an example nonfunctional. In some instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the examples.
[0023] This disclosure describes various instruments and portions of instruments in terms of their state in three-dimensional space. As used herein, the term “position” refers to the location of an object or a portion of an object in a three-dimensional space (e.g., three degrees of translational freedom along Cartesian x-, y-, and z-coordinates). As used herein, the term “orientation” refers to the rotational placement of an object or a portion of an object (e.g., one or more degrees of rotational freedom such as, roll, pitch, and yaw). As used herein, the term “pose” refers to the position of an object or a portion of an object in at least one degree of translational freedom and to the orientation of that object or portion of the object in at least one degree of rotational freedom (e.g., up to six total degrees of freedom). As used herein, the term “shape” refers to a set of poses, positions, and/or orientations measured along an object. As used herein, the term “distal” refers to a position that is closer to a procedural site and the term “proximal” refers to a position that is further from the procedural site. Accordingly, the distal portion or distal end of an instrument is closer to a procedural site than a proximal portion or proximal end of the instrument when the instrument is being used as designed to perform a procedure.
[0024] This disclosure may relate to using a mixed reality head mounted display (HMD) or other display device to provide a three-dimensional (3D) navigation volume and input interface. In particular, the disclosed systems and methods include techniques for providing a 3D interface by generating co-registered data that relates a 3D model of a patient volume to 2D images of at least a portion of the patient volume. The 3D interface can display relevant graphics/images, and possibly other information (e.g., historical data, annotations, etc.), to a physician preparing for or conducting a medical procedure. As such, the physician may more easily determine relevant portions of the patient's anatomy and navigate an anatomical map without concern for mismatch between images and the patient model, and without having to mentally map a graphical representation of a 3D model to different portions of 2D images (e.g., 2D intraoperative images). Similarly, the physician maintains awareness of the surrounding physical area in which the operation is performed, and therefore can more easily track and predict the manner in which assisting medical tools or devices move or interact with the patient. Similarly, the disclosed systems and methods include techniques for generating a 2D view of a patient volume while maintaining an orientation relative to the physician. As such, the physician may modify or otherwise interact with the 2D view without requiring the physician to remember or incorporate changes in orientation between views, images, modifications, etc.
[0025] The systems and methods described herein may provide a number of improvements through the generation and use of a 3D navigation environment. For example, the 3D navigation environment may allow a user to navigate the environment as described herein even when not present in the physical environment. Further, by generating and using a 3D navigation environment, additional users may be able to observe a process in real time without interrupting a first (e.g., primary) user.
[0026] Similarly, the systems and methods described herein may offer improvements in spatial awareness for a user by visualizing and displaying 3D content in a 3D setting rather than a 2D medium. Moreover, the use of precise hand and/or joint tracking may provide the option for a user to navigate the 3D environment more precisely rather than relying solely on windows, icons, menus, and other such traditional 2D methodologies.
[0027] Further, the introduction of the instant systems and methods through a head-mounted display (HMD) for 3D information may improve the ergonomics by allowing a user a free and natural range of motion to function as though the user is in a normal environment while still providing the benefits of a virtual environment. For example, a system may improve the mental mapping by a user by reducing reliance on a 2D display and instead displaying the 3D model in conjunction with 2D information where appropriate, such that a user may rotate, modify, and otherwise adjust the display to a preferred level of comfort.
[0028] It will be understood that such improvements do not constitute an exhaustive list, and other improvements will be clear according to the various examples discussed herein.
[0029] Referring first to FIG. 1, an example environment 100 is illustrated in which a user may perform a medical procedure using a medical device as described in more detail herein. In particular, the environment 100 includes a robotic-assisted platform 102, a medical instrument 104, and a head mounted display (HMD) 110. It will be understood that the environment 100 is an example, and alternative examples may be envisioned that include additional, fewer, or alternative components. For example, depending on the example, the robotic-assisted platform 102 may include or omit the monitor and manual input console depicted in FIG. 1.
[0030] The robotic-assisted platform 102 is a tool to assist a physician in performing a medical procedure on a patient, such as an endoluminal procedure (e.g., a minimally invasive lung biopsy or ablation procedure). In some examples, the robotic-assisted platform 102 includes at least some of the components of FIG. 7 (as described in more detail below), such as a manipulator assembly, a control system, a sensor system, a display system, and/or a master assembly. Depending on the example, the robotic-assisted platform 102 may receive commands from a user by way of the HMD 110. In the example of FIG. 1, the robotic-assisted platform 102 is physically connected to and capable of manipulating the medical instrument 104 in response to received commands. Depending on the example, the robotic-assisted platform 102 may include a monitor to present information and/or an input console for a user to manipulate a medical instrument 104 as described below.
[0031] The medical instrument 104 is an instrument configured and prepared to be manipulated by a user (by way of the robotic-assisted platform 102 and/or the HMD 110) when performing a medical procedure on a patient. The medical instrument 104 may be a flexible elongate device (e.g., a catheter), as described in more detail with regard to FIGS. 8A and 8B below. Similarly, in some examples, the medical instrument 104 includes, is part of, or is a medical instrument as described in more detail below with regard to FIG. 7.
[0032] The HMD 110 is a device designed to be mounted on a user's head, and to display information to the user in an extended reality (XR) view 110R. Depending on the example, the HMD 110 may use XR techniques such as by presenting a mixed reality (MR) view, an augmented reality (AR) view, a virtual reality (VR) view, etc. In some examples, the HMD 110 includes one or more processors, a display device, memory, sensors, controllers, etc., and may be communicatively coupled to the robotic-assisted platform 102 and/or medical instrument 104. Depending on the example, processors of the HMD 110 may perform various operations as described herein and may cause the display device to generate, display, modify, or otherwise manipulate elements of the XR view 110R based on movements or indications from the user. Similarly, the HMD 110 may receive feedback, inputs, and/or indications from the robotic-assisted platform 102 and/or medical instrument 104 (and/or other components not shown in FIG. 1), and may generate, display, modify, or otherwise manipulate the elements of the XR view 110R.
[0033] In some examples, the HMD 110 receives control input according to one or more manual inputs 106 from the user by way of a controller, a trackball, a keyboard, a mouse, a touchscreen device, touch sensors, movement sensors, accelerometers, gyroscopes, positional sensors, etc. Depending on the example, the HMD 110 may receive the manual input 106 by detecting movement of a user, such as hand gestures, head movement, etc., and possibly also by interpreting a particular action based on the movement. For example, if a user spreads a thumb and a pointer finger apart or brings the thumb and pointer finger together, the HMD 110 may determine that the user is indicating to zoom in or out, respectively, on an element, and may subsequently modify the XR view 110R accordingly. In further examples, a user may use multiple methods for transmitting manual inputs 106 according to the above interchangeably. For example, a user may use movement sensors to provide manual input 106 by way of hand gestures, before switching to a mouse for finer control.
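As one illustration of how such gesture-based manual input might be interpreted, the following Python sketch maps the change in thumb-to-index separation to a zoom command. The PinchState type, the jitter threshold, and the apply_zoom callback are illustrative assumptions and are not part of the disclosed examples.

```python
# Illustrative sketch only: maps a thumb/index "pinch" gesture to a zoom factor.
# The PinchState type, threshold value, and apply_zoom callback are hypothetical.
from dataclasses import dataclass


@dataclass
class PinchState:
    prev_separation: float  # thumb-to-index distance (meters) in the previous frame


def update_zoom(pinch: PinchState, thumb_xyz, index_xyz, apply_zoom) -> PinchState:
    """Convert the change in finger separation into a multiplicative zoom factor."""
    separation = sum((t - i) ** 2 for t, i in zip(thumb_xyz, index_xyz)) ** 0.5
    if pinch.prev_separation > 1e-6:
        zoom_factor = separation / pinch.prev_separation  # >1 zooms in, <1 zooms out
        if abs(zoom_factor - 1.0) > 0.02:  # ignore jitter below roughly 2% change
            apply_zoom(zoom_factor)
    return PinchState(prev_separation=separation)
```

In practice a gesture pipeline of this kind would be driven per frame by the HMD's hand-tracking sensors, with the same structure reused for other gestures (drag, tap, rotate).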
[0034] The XR view 110R may include a 3D model 112R of an internal volume of the patient. Depending on the example, the internal volume may be or include a particular organ (e.g., a lung), a bodily system (e.g., the respiratory system), a larger area of a patient (e.g., the chest of a patient), the entirety of the patient, etc. In the example of FIG. 1, the HMD 110 displays various airways in the lungs of a patient. Depending on the example, the 3D model 112R may include one or more landmarks 116R, such as a lesion or other similarly identifiable feature. In some examples, the HMD 110 may generate indications of the landmark(s) 116R, highlight the landmark(s) 116R, enlarge the landmark(s) 116R, and/or otherwise emphasize the landmark(s) 116R in the XR view 110R to allow a user to more easily detect the landmark(s) 116R.
[0035] Further, the XR view 110R may include a virtual representation of the medical instrument 104 as a virtual instrument 114R. Depending on the example, the virtual instrument 114R may substantially match the medical instrument 104, or may be a simplified version of the medical instrument 104, such as a simple shape (e.g., a line) that follows the contours of the medical instrument, etc. In some examples, the user may be able to provide a manual input 106 to the HMD 110 by interacting with the virtual instrument 114R.
[0036] In further examples, the HMD 110 may communicate the manual input 106 received from the user to the robotic-assisted platform 102 and/or the medical instrument 104. As such, the robotic-assisted platform 102 may manipulate the medical instrument 104 and/or the medical instrument may perform various functionalities based on the manual input 106 from the user in the XR view 110R. For example, the user may manipulate (interact with) the virtual instrument 114R to "drag" the virtual instrument 114R along the model 112R. The physical medical instrument 104 may then follow the path of the dragged virtual instrument 114R. In other examples, the HMD 110 may display a confirmation message in the XR view 110R before the physical medical instrument 104 follows the path in question. In still other examples, the user may tap a landmark 116R or other location on the 3D model 112R, and the medical instrument 104 may follow a path to reach the indicated location.
[0037] In some examples, the HMD 110 may display the 3D model and/or available commands or instructions to the user upon startup of the application or device. When the user selects commands or performs actions according to the instructions, the HMD 110 may display further elements in the XR view 110R, as explained in more detail below with regard to FIGS. 2A and 2B.
[0038] It will be understood that, although the illustrative environment 100 of FIG. 1 depicts an HMD 110 as displaying information to a user, an alternative example of environment 100 may use a different display device, such as a 3D computer, a handheld XR device, a mobile computing device, etc.
[0039] Referring next to FIGS. 2A and 2B, example AR views 200A and 200B of an environment 200 are illustrated. In some examples, AR view 200B is a zoomed-in representation of AR view 200A. Both AR views 200A and 200B depict a 3D model positioned above a patient, including a physical medical instrument and a virtual medical instrument.
[0040] In particular, the environment 200 includes a medical instrument 204, a display device 210 depicting a navigation interface 210R, and an imaging device 220. It will be understood that the AR views 200A and 200B are views of an example, and alternative examples may be envisioned that include additional, fewer, or alternative components. The navigation interface 210R may include a 2D view 211, a 3D model 212, a 2D instrument projection 213, the virtual instrument 214, a 2D landmark projection 215, an oriented 2D view 218, and a 2D instrument video 219. Depending on the example, a navigation interface 210R may include more, fewer, or alternative elements as compared to those shown and described herein. For example, the navigation interface 210R may include an interface panel with commands, instructions, patient information, preoperative image data, etc. In some examples, elements of the navigation interface 210R include or are elements of the XR view 110R as described above with regard to FIG. 1. For example, the display device 210 may include the HMD 110, the navigation interface 210R may include the XR view 110R, the medical instrument 204 may include the medical instrument 104, the 3D model 212 may include the 3D model 112R, the virtual instrument 214 may include the virtual instrument 114R, the landmark 216 may include the landmark 116R, etc. As such, alternative examples described with regard to FIG. 1 may similarly apply to the components of FIGS. 2A and 2B as appropriate.
[0041] In some examples, the display device 210 receives an image depicting a 2D view 211 of the patient volume, taken by an imaging device such as imaging device 220. Depending on the example, the imaging device 220 may be a C-arm that performs x-ray fluoroscopy, a computed tomography (CT) imaging device, a cone-beam computed tomography (CBCT) imaging device, a magnetic resonance imaging (MRI) device, a positron emission tomography (PET) imaging device, a tomosynthesis imaging device, a combination of devices, and/or any other similar imaging device or devices. The display device 210 may display the 2D view 211 in response to receiving the image and/or in response to a command by the user. Depending on the example, the display device 210 may display the 2D view 211 such that the 2D view 211 is superimposed with the patient body, is above the 3D model 212, is below the 3D model 212, is adjacent to the 3D model 212, etc.
[0042] It will be understood that, although the examples described herein refer to particular components of the environment 200 (e.g., display device 210) as performing particular processes, other components may perform similar or identical processes, depending on the example. For example, the robotic-assisted platform 102 or other components of the environment 100 may perform various processes in place of or in addition to various components of the environment 200 as described above.
[0043] In some examples, the display device 210 determines that there is a divergence between the 2D view 211 and the 3D model 212 or the patient volume. In such examples, the display device 210 may update the 2D view 211 by causing a component of the environment 200 to take an intraoperative image (e.g., the imaging device 220) to replace the 2D view 211. In further examples, the display device 210 replaces the 2D view 211 with another type of 2D data (e.g., replacing a CT image with CBCT volumetric data).
[0044] In further examples, the 2D view 211 may include 2D projections of various elements in the 3D model 212. For example, the virtual instrument 214 in the 3D model 212 may be represented in the 2D view 211 by a 2D instrument projection 213. In some examples, the display device 210 adjusts the 2D instrument projection 213 in accordance with changes to the positioning, angle, etc. of the medical instrument 204 and/or the virtual instrument 214. Similarly, the display device 210 may project a landmark 216 to the 2D view 211 as a 2D landmark projection 215. In some examples, the landmark 216 and/or the 2D landmark projection 215 indicate a location to which the user is to guide the medical instrument 204, and tapping on the landmark 216 or the 2D landmark projection 215 may cause the medical instrument 204 to navigate to the indicated location, as described in more detail with regard to FIG. 1 above.
[0045] In some examples, the display device 210 may generate and/or display a path to the landmark 216. Depending on the example, the display device 210 may generate the path using the centerline of the model pathways. For example, in a 3D model 212 of a lung, the display device 210 may calculate a path using the centerline of the air pathways. In further examples, the medical instrument 204 and/or a computing device associated with the medical instrument 204 may determine the amount of force applied to the medical instrument 204 and may subsequently determine to stop applying force (and, in response, stop applying force) if the force reaches a threshold quantity.
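One way such centerline-based path planning and force limiting could be realized is sketched below, under the assumption that the airway centerlines are stored as a weighted graph of named branch points; the graph structure, node names, and force threshold are hypothetical and not taken from the disclosure.

```python
# Illustrative sketch: shortest path along airway centerlines plus a force cutoff.
# The centerline graph layout and the 0.5 N force limit are assumptions for illustration.
import heapq


def centerline_path(graph, start, target):
    """Dijkstra over centerline segments; graph[node] = [(neighbor, length_mm), ...]."""
    best = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        dist, node = heapq.heappop(queue)
        if node == target:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        for neighbor, length in graph.get(node, []):
            new_dist = dist + length
            if new_dist < best.get(neighbor, float("inf")):
                best[neighbor] = new_dist
                prev[neighbor] = node
                heapq.heappush(queue, (new_dist, neighbor))
    return None  # no route through the modeled pathways


def should_stop_insertion(measured_force_n: float, limit_n: float = 0.5) -> bool:
    """Stop applying force once the measured tip force reaches the configured limit."""
    return measured_force_n >= limit_n
```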
[0046] Depending on the example, the user may cause the medical instrument 204 to take samples of the landmark 216 and/or another location (lesion, tissue, etc.) to which the medical instrument 204 navigates. For example, the medical instrument 204 may take images of the landmark 216, collect biopsy samples of the landmark 216, collect histology samples via rapid histology sampling, etc. In some such examples, the display device 210 associates the sample and/or sample information with the sample location in the 3D model 212 and/or the entire 3D model 212. The display device 210 may determine whether a tissue is malignant (e.g., by scraping and logging the tissue material prior to sending to a lab for analysis) and may mark the location with the information in question.
[0047] In further examples, the display device 210 may generate a post-procedure report representing the samples correlated with precise locations in response to an indication from the user and/or determination by the display device 210. In some examples, the procedure report may include the navigation history such that a user can replay portions of and/or the entirety of the procedure after the operation. The procedure report may be based on virtual simulated data as well as real data received from the medical instrument 204.
[0048] In some examples, the navigation interface 210R includes a view into the actual patient volume. For example, the navigation interface 210R may include an oriented 2D view 218 and/or a 2D instrument video 219. In some examples, the oriented 2D view 218 may depict the current view from the medical instrument 204, similar to the 2D instrument video 219, but oriented according to the pose of the virtual instrument 214. For example, the oriented 2D view 218 may be displayed in an orientation corresponding to the pointing direction of the virtual instrument 214. As another example, the oriented 2D view 218 may depict the current view from the medical instrument 204 oriented according to the user position. For example, the oriented 2D view 218 may include a live camera view or feed oriented to always face the user. In further examples, the oriented 2D view 218 may additionally or alternatively depict a virtual 2D view of a particular point and/or portion generated from the 3D model 212. For example, in an example of AR view 200B, the oriented 2D view 218 depicts a generated 2D view of the 3D model based on the position and/or location of the virtual instrument 214 and/or medical instrument 204. In some examples, the display device 210 automatically generates and/or updates the oriented 2D view 218 as the medical instrument 204 and/or virtual instrument 214 moves and/or the 3D model 212 otherwise changes. In further examples, the display device 210 generates and/or updates the oriented 2D view 218 in response to an indication from the user. Similarly, the display device 210 may display the 2D instrument video 219 in the navigation interface 210R, depicting a live feed from the medical instrument 204. In some examples, the oriented 2D view 218 comprises an image captured by the medical instrument 204 at a location indicated by a user (e.g., a landmark 216, a sample location, etc.). Similar to the 2D view 211, the display device 210 may display the 2D instrument video in the navigation interface 210R such that the 2D instrument video 219 is offset from the 3D model 212 and/or the patient, superimposed with the 3D model 212 and/or the patient, etc. It will be understood that, although FIG. 2B depicts an oriented 2D view 218, the display device 210 may additionally or alternatively generate a 3D view of portions of the model, such as a CBCT volume according to a sweep angle.
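A minimal sketch of how a view panel might be kept facing the user (the "oriented to always face the user" behavior above) is given below; the head and panel positions, the up vector, and the use of a simple billboard rotation are assumptions for illustration only.

```python
# Illustrative sketch: orient a 2D view panel so its forward axis points at the user's head.
# Assumes the panel is not directly above or below the user (up and forward not parallel).
import numpy as np


def billboard_rotation(panel_pos, head_pos, world_up=(0.0, 1.0, 0.0)):
    """Return a 3x3 rotation matrix whose third column (forward) faces the user."""
    forward = np.asarray(head_pos, float) - np.asarray(panel_pos, float)
    forward /= np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.column_stack([right, up, forward])  # columns are the panel's local axes
```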
[0049] In some examples, the display device 210 generates co-registered operation data relating the 2D view 211 or other 2D data to the 3D model 212. The co-registered operation data may be data relating 2D data to 3D data, such as where similar and/or matching structures (e.g., entry/exit points, landmarks, junctures, etc.) are. In further examples, the co-registered operation data may include a per-pixel mapping between 2D data (e.g., a 2D image, 2D coordinate system, etc.) and 3D data (e.g., a 3D model, 3D coordinate system, etc.). Similarly, the co-registered operation data may include optical markers and/or fiducials for visual tracking with a 2D or 3D imaging sensor (e.g., an RGB camera, infrared camera, etc.). In some examples, the display device 210 may use 3D shape sensor data and/or external electromagnetic tracking data to perform the co-registration as described herein. Depending on the example, the display device 210 may continually or dynamically update the co-registration based on data measured throughout a procedure (e.g., via visual identification of landmarks within the 3D model 212 according to 2D data or the 2D view 211).
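To make the notion of co-registered operation data concrete, the following sketch shows one possible data structure: a rigid transform from the 3D model frame to the imager frame plus a pinhole-style projection for mapping 3D model points to 2D pixels. The field names and the simplified projection are assumptions, not the co-registration format described above.

```python
# Illustrative sketch of co-registered operation data: a transform from the 3D model
# frame to the 2D image frame plus an intrinsic projection. All fields are assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class CoRegistration:
    model_to_image: np.ndarray   # 4x4 rigid transform, 3D model frame -> imager frame
    intrinsics: np.ndarray       # 3x3 projection matrix for the 2D imaging modality

    def project(self, point_model_xyz):
        """Map a 3D model point to a 2D pixel in the co-registered image."""
        p = self.model_to_image @ np.append(np.asarray(point_model_xyz, float), 1.0)
        uvw = self.intrinsics @ p[:3]
        return uvw[:2] / uvw[2]  # pixel (u, v)
```

A per-pixel mapping, optical markers, or a shared reference frame could be layered on top of the same record; the point of the sketch is only that the 2D and 3D data are related through an explicit, queryable transform.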
[0050] In further examples, the co-registered operation data may include a common reference frame that may be, for example, a surgical reference frame for a user, a patient reference frame for a patient, an observer reference frame for an observer, etc. As such, the co-registered operation data may allow a display device 210 to orientate, overlay, and/or otherwise generate components of the view, such as the 2D view 211, the 3D model 212, etc.
[0051] In some examples, the display device 210 may automatically generate the co-registered data using machine learning, image analysis, optical recognition, and/or other similar techniques. In further examples, the display device 210 may prompt the user to interact with one or more targets on the 3D model 212 and the 2D view 211, and may generate the co-registered operation data based on the interactions. Depending on the example, the interactions may be touching the 3D model 212 and/or the 2D view 211, dragging the virtual instrument 214 to a location, moving a physical controller to a location, and/or any other similar method of interaction with the virtual environment. Similarly, the display device 210 may prompt the user to interact with the 3D model 212 to determine a reference frame for the patient volume. In further examples, the co-registration process may involve matching measured points on the patient to the 2D view 211 and/or the 3D model 212 through the use of rigid and/or non-rigid transformations. Measured points may be generated using landmarks in the anatomy, electromagnetic coils scanned and tracked during a medical procedure, a shape sensor system, and/or other similar techniques. Depending on the example, registering the measured points to points in the 3D model 212 may be done according to an iterative closest point (ICP) technique. ICP and other registration techniques are described in PCT Application Publication No. WO2017/030913 and PCT Application Publication No. WO2017/030915, both filed August 14, 2015, which are incorporated by reference herein in their entirety.
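A bare-bones rigid ICP loop is sketched below to illustrate the point-matching step mentioned above; it assumes the measured points and model points are given as N×3 and M×3 arrays, uses brute-force nearest-neighbor matching, and omits the non-rigid and robust variants described in the cited publications.

```python
# Bare-bones rigid ICP sketch (nearest-neighbor matching + SVD/Kabsch alignment).
# Array shapes and the iteration count are illustrative; non-rigid variants are omitted.
import numpy as np


def icp(measured, model_pts, iterations=20):
    """Align measured (Nx3) points to model_pts (Mx3); returns rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    src = measured.copy()
    for _ in range(iterations):
        # Closest model point for every measured point (brute force for clarity).
        d = np.linalg.norm(src[:, None, :] - model_pts[None, :, :], axis=2)
        matched = model_pts[np.argmin(d, axis=1)]
        # Best rigid transform for this correspondence set.
        src_c, tgt_c = src - src.mean(0), matched - matched.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:  # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = matched.mean(0) - R_step @ src.mean(0)
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```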
[0052] The system may use the co-registration data and/or the reference frame to determine a display orientation for the 2D view 211 and/or the 3D model 212. In some examples, the 2D view 211 and the 3D model 212 share an orientation, and the display device 210 displays the 2D view 211 and the 3D model 212 simultaneously according to the shared orientation. When a user updates one of the 2D view 211 or the 3D model 212 (e.g., by manipulating and/or modifying the virtual views), the display device 210 may automatically adjust the other view so that the 2D view 211 and the 3D model 212 continue to share an orientation. In further examples, the display device 210 may adjust one but not the other in response to an indication from the user.
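A minimal sketch of keeping the two views locked to one shared orientation is shown below; the view objects, their set_orientation method, and the quaternion representation are assumptions rather than the disclosed interface.

```python
# Illustrative sketch: keep the 2D view and the 3D model locked to one shared orientation.
# The view objects and their set_orientation(quaternion) method are hypothetical.
class SharedOrientationController:
    def __init__(self, view_2d, model_3d):
        self._views = (view_2d, model_3d)

    def on_user_rotate(self, source_view, new_orientation_quat, linked=True):
        """Apply the rotation to the manipulated view and, if linked, mirror it to the other."""
        source_view.set_orientation(new_orientation_quat)
        if linked:
            for view in self._views:
                if view is not source_view:
                    view.set_orientation(new_orientation_quat)
```

Setting linked=False corresponds to the case above where the user indicates that one view should be adjusted without the other.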
[0053] Depending on the example, the user may toggle an element of the navigation interface 210R on or off. In some examples, the display device 210 in conjunction with a platform controlling the medical device (e.g., robotic-assisted platform 102 as described with regard to FIG. 1 above) continues to monitor changes in the 3D model 212 and/or 2D view 211 and updates the appropriate element such that both views continue to share an orientation and/or user-designated marks when the user toggles one back on. Depending on the example, the display device 210 may automatically toggle the elements on or off if the display device 210 detects that the user is distracted by various elements. In further examples, the display device 210 may cause elements to turn opaque or translucent depending on the task at hand or a user context. For example, if a user is dragging the virtual instrument 214 through the 3D model 212, the display device 210 may cause the virtual instrument 214 to turn opaque upon the user making contact with the virtual instrument 214. Similarly, if a user turns away from some elements, the system may cause the element to turn translucent at least temporarily. In further examples, the display device 210 modifies an XR mode between AR, VR, and MR depending on the situation. For example, if the display device 210 and/or another component of the environment 200 detects a sufficient quantity of motion in the theater, the display device 210 may determine that the motion is likely to distract the user and switch from an AR mode to a VR mode, blocking the visual aspects out.
[0054] In some examples, the navigation interface 210R may include a virtual representation of the imaging device 220. Depending on the example, the display device 210 may display the virtual imaging device such that the virtual imaging device superimposes the imaging device 220, is located adjacent to the imaging device 220, or is offset from the 3D model 212 by the same amount that the imaging device 220 is offset from the patient. Further, the display device 210 may tie the virtual imaging device and the imaging device 220 together such that moving one may transmit a signal that causes the other to follow the same path. For example, if a user manipulates the virtual imaging device (e.g., rotating a virtual C-arm), the display device 210 may transmit a signal that causes the real world imaging device 220 to follow a similar movement path (e.g., rotating the actual C-arm). Depending on the example, the display device 210 may display a prompt to the user, asking the user to confirm the movement path and/or displaying the movement path before causing the imaging device 220 to follow the path.
[0055] In further examples, the user instructs the display device 210 to generate a virtual 2D view of the patient volume. For example, the 2D view 211 may be a virtual 2D view generated from the 3D model 212 instead of a 2D x-ray image generated from the imaging device 220. Depending on the example, the user may move or position the virtual imaging device and cause the display device 210 to generate a predicted 2D view using the positioning and/or angle of the virtual imaging device. The user can subsequently decide whether to (i) confirm the predicted 2D view by causing the imaging device 220 to move to the matching location and image the patient or (ii) discard the predicted 2D view and move the virtual imaging device to a different location. In some such examples, the addition of the virtual determination using a virtual imaging device reduces the ionizing radiation exposure for the patient and/or others in the vicinity by reducing the required number of scans using the imaging device 220.
[0056] FIG. 3 depicts an example architecture 300 for a system similar to a system implemented in environment 100 or environment 200 as depicted in FIGS. 1-2B above. In particular, the block diagram depicts a system 300 including various input devices such as an XR device 310 worn by a user O, a computer 320, and an assembly 302. Depending on the example, the XR device 310 may include, be, or resemble the HMD 110 and/or display device 210 as depicted in FIGS. 1-2B. Similarly, the assembly 302 may include, be, or resemble the robotic-assisted platform 102 as depicted in FIG. 1, and may include or be communicatively coupled with a medical instrument such as medical instrument 104 or medical instrument 204.
[0057] The assembly 302 may manipulate a medical instrument and/or other device including one or more sensors that may generate shape sensor data 355 representative of a shape of the medical instrument within patient anatomy. Depending on the example, the shape sensor data 355 may include data related to the shape of the instrument, gathered by one or more shape sensors for the instrument (e.g., a fiber optic shape sensor, EM sensors, etc.). The assembly 302 may stream the shape sensor data from the assembly 302 to an intermediary computing device, such as computer 320, via a system data streaming application 350. In further examples, the assembly 302 may receive and/or transmit the shape sensor data 355 via a TCP/IP socket connection, a Wi-Fi connection, a Bluetooth connection, etc.
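One plausible framing of such a shape-sensor stream over a TCP/IP socket connection is sketched below; the length-prefixed JSON message format, host, and port are assumptions for illustration and are not the protocol used by the system data streaming application 350.

```python
# Illustrative sketch: stream shape sensor samples as length-prefixed JSON over TCP.
# The host, port, and message fields are assumptions, not the disclosed protocol.
import json
import socket
import struct


def stream_shape_samples(samples, host="127.0.0.1", port=9000):
    """Send each sample (e.g., a list of [x, y, z] points along the instrument) in order."""
    with socket.create_connection((host, port)) as sock:
        for sample in samples:
            payload = json.dumps({"points_xyz": sample}).encode("utf-8")
            sock.sendall(struct.pack("!I", len(payload)) + payload)  # 4-byte length prefix
```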
[0058] The computer 320 may receive the shape sensor data 355 at a streaming application 340 and may modify, update, and/or otherwise manipulate the shape sensor data 355 to generate shape sensor data 345. Similarly, depending on the example, the computer 320 may capture video data from the assembly 302 through use of a recording device (e.g., a capture card) according to a video handler 347, or may otherwise manipulate already captured video data using the video handler 347. Similarly, the computer 320 may retrieve 3D assets 349 including 3D models, 3D targets, etc. as defined, generated, or otherwise supplied by the user O, a team associated with the computer 320, a patient, etc. In some implementations, the 3D assets may include assets associated with the volume of the patient (e.g., models generated from 3D volume data, instrument video data, etc.), pre-generated internal models, and/or other similar data as described herein. The computer 320 similarly transmits relevant data to the XR device 310 and, as such, the user O from the streaming application 340. Depending on the example, the computer 320 may similarly receive and/or transmit the shape sensor data 355 via a TCP/IP socket connection, a Wi-Fi connection, a Bluetooth connection, etc.
[0059] The XR device 310 receives the relevant data, such as shape sensor data 345 from the computer 320 via an extended reality application 330. The XR device 310 may modify, update, and/or otherwise manipulate the shape sensor data 345 to generate XR shape sensor data 335, video data 336 (e.g., from a medical instrument such as an endoscope, from a virtual medical instrument, etc. via a real-time transport protocol (RTP) such as WebRTC), and/or 3D assets 339 (e.g., models, targets, virtual keyboard, etc.) as described herein. Further, the XR device 310 may generate and/or receive imaging modality data 338. Depending on the example, the XR device may generate the imaging modality data 338 based on simulated imaging modalities in the XR application 330 and/or from one or more imaging modalities associated with an imaging device of the assembly 302 (e.g., imaging device 220 as described above with regard to FIGS. 2A and 2B). In examples in which the XR device generates the imaging modality data 338 based on simulated imaging modalities (e.g., from a virtual representation of an imaging device), the imaging modality data 338 may further be based on a desired position and/or orientation of the imaging device (e.g., as indicated by the user and/or depicted by the XR device as a virtual representation of the imaging device in a virtual space). In examples in which the XR device generates and/or receives the imaging modality data 338 from an imaging device (e.g., imaging device 220), the imaging modality data 338 may include or be based on an image of the patient from the respective device (e.g., an X-ray image, a CT scan, etc.).
[0060] Similarly, the XR device 310 may retrieve, render, generate, and/or otherwise display 3D models to a user according to model data 332 based on the XR shape sensor data 335, video data 336, 3D assets 339, user input 334, etc. The XR device may additionally generate a navigation interface 310R similar to XR view 110R and/or navigation interface 210R based on the data processed and/or generated by the XR application 330.
[0061] FIG. 4 depicts an example architecture for a system 400 similar to the architecture and system of FIG. 3. In the example of FIG. 4, the system 400 is accessed by multiple users, and the XR device and the assembly are directly communicatively coupled rather than connected through an intermediary computing device. In some examples, the system includes an XR device 410 used at least by a user O and an assembly 402. The XR device 410 and/or the assembly 402 may include, be, or resemble the XR device 310 and/or assembly 302, respectively, as described above with regard to FIG. 3.
[0062] In some examples, the assembly 402 may include a system application 450 that functions similarly to the system data streaming application 350 as described with regard to FIG. 3 above. Unlike the assembly 302 and system data streaming application 350, however, the assembly 402 and system application 450 may communicate directly and bidirectionally with the XR device 410 and an XR application 430 rather than using an intermediate computing device. Further, the assembly 402 may include a console 403 for a user to input, update, generate, and/or otherwise manipulate data received from the XR device 410, generated by the assembly 402, etc. In some examples, the console 403 receives console interactions 464 to manipulate the assembly 402, the console 403, display data in the XR device 410, etc. For example, the console 403 may receive inputs from a user directly (e.g., via a controller, keyboard, mouse, etc. associated with the console 403 and/or assembly 402) and/or indirectly (e.g., via one or more controllers, sensors, etc. of the XR device 410) and may update the assembly 402, console 403, system application 450, and/or any stored data accordingly.
[0063] The assembly 402 generates shape sensor data 455 similar to the shape sensor data 355. In some examples, the assembly 402 generates and/or receives additional data based on information from the XR device 410, such as 3D data 431, 2D data 441, co-registration information between the 3D data 431 and the 2D data 441, etc. Similarly, the assembly 402 may generate video data 456 (e.g., from a medical instrument such as an endoscope, from a virtual medical instrument, etc. via a real-time transport protocol (RTP) such as WebRTC, UDP socket communication, etc.), system event data 458, imaging data 461, etc. In some examples, the system event data 458 is gathered via an API associated with the console 403 and/or assembly 402. Depending on the example, the imaging data 461 may be 2D or 3D imaging data (e.g., CT data, CBCT data, fluoroscopy data, (Radial) EBUS data, etc.) as described herein. Depending on the example, the system application 450 may communicate the data with the XR application 430 via a TCP/IP socket connection, a Wi-Fi connection, a Bluetooth connection, etc.
[0064] The XR application 430 may function similarly to the XR application 330 as described above with regard to FIG. 3. In particular, the model data 432, user inputs 434, shape sensor data 435, video data 436, and/or 3D assets 439 may resemble the corresponding data as described above. Similarly, the XR application 430 may generate and/or receive 3D data 431 (e.g., patient volume data, pre-operative data such as CT scan data, intra-operative data such as CBCT data, etc.) or 2D data 441 (pre-operative data such as fluoroscopy data, intra-operative data such as R-EBUS data, etc.). The XR application 430 may also receive and/or generate user interface data 444 from the assembly 402 and/or console 403, such as commands for a medical instrument associated with the assembly 402, or XR toolkit elements.
[0065] In some examples, the system 400 includes one or more additional users Q via a multiplayer sync module 445. In some such examples, the system 400 may include additional XR devices similar to XR device 410 and/or computing devices for the additional user(s) Q to use. In some such examples, the system 400 automatically displays the navigation interface 410R as generated for the user O and the XR device 410. In further such examples, the system 400 does not register and/or accept commands or instructions from the additional users Q. As such, the additional users Q are only able to observe. In further examples, the additional users Q can view a modified navigation interface (e.g., with fewer elements and/or information) that the additional users Q can modify to a degree (e.g., changing perspective, changing angle, toggling various UI elements, etc.) without affecting the first (e.g., primary) navigation interface 410R. In still further examples, the additional users Q can cause notifications to appear in the first navigation interface 410R (e.g., a chat window, an alert, a question, highlighting, etc.). Similarly, the additional users Q and/or user O may annotate the navigation interface to leave notes, telestrations, drawn objects, etc.
[0066] In some examples, the system 400 provides a different navigation interface to the additional users Q based on a role and/or permissions granted to the users Q. For example, a student observer may only see the 3D data 431 and 2D data 441, while a member of the patient care team may see details regarding a patient’s vitals.
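A minimal sketch of such role-based filtering of interface elements follows; the role names, element tags, and permission sets are hypothetical examples rather than roles defined by the disclosure.

```python
# Illustrative sketch: filter navigation-interface elements by observer role.
# Role names and element tags are hypothetical examples, not defined by the disclosure.
ROLE_PERMISSIONS = {
    "primary":   {"3d_model", "2d_view", "instrument", "vitals", "annotations"},
    "care_team": {"3d_model", "2d_view", "vitals"},
    "student":   {"3d_model", "2d_view"},
}


def elements_for(role: str, all_elements: dict) -> dict:
    """Return only the interface elements the given role is permitted to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {tag: element for tag, element in all_elements.items() if tag in allowed}
```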
[0067] Referring next to FIG. 5, a flow diagram depicts an example method 500 for generating a 3D navigation interface for a robotically-assisted medical procedure. Although the method 500 is described below with regard to environment 100 and components thereof as illustrated in FIG. 1, it will be understood that other similarly suitable devices and components may be used instead, including those shown in FIG. 3 and/or FIG. 4.
[0068] At block 502, the robotic-assisted platform 102 receives a 3D model representative of a volume of a patient. Depending on the example, the 3D model may be of a particular organ (e.g., the lungs) of a patient or a portion of an organ, a larger system (e.g., the respiratory system) of a patient, an entirety of a patient anatomy, etc. In some examples, a component of the environment 100 may generate the 3D model and transmit the model within the environment 100. In further examples, an element outside the environment 100 may generate the 3D model and transmit the model to the environment 100. Depending on the example, any of a computed tomography (CT) scan device, a cone-beam computed tomography (CBCT) scan device, a magnetic resonance imaging (MRI) scan device, a positron emission tomography (PET) scan device, a tomosynthesis device, etc. may generate the 3D model. Further, the device that generates the 3D model may generate the model prior to the medical procedure (e.g., immediately before, hours before, days before, etc.) or during the medical procedure.
[0069] At block 504, the robotic-assisted platform 102 receives 2D data representative of one or more 2D images corresponding to at least a portion of the volume of the patient. In some examples, the 2D images may include 2D x-ray images, such as fluoroscopic images, which may be captured by a C-arm. Additionally or alternatively, the 2D images may include a synthetic 2D image generated from the 3D model and representative of a 2D x-ray image captured by a C-arm. Depending on the example, the C-arm may be associated with the environment 100 or may be a separate C-arm that transmits the 2D images to the environment 100. The robotic-assisted platform 102 may additionally track the C-arm (e.g., the C-arm movement, C-arm pose, C-arm position, etc.). The robotic-assisted platform 102 may track the C-arm by way of a sensor located on the C-arm (e.g., using an accelerometer, a gyroscope, a positional sensor, etc.) or by way of a head mounted display (HMD), as described herein.
[0070] At block 506, the robotic-assisted platform 102 generates co-registered operation data relating the 3D model to the 2D data. The co-registered operation data may be particular data of interest relating 2D data to 3D data, such as where similar and/or matching structures (e.g., entry/exit points, landmarks, junctures, etc.) are. In further examples, the co-registered operation data may include a per-pixel mapping between 2D data (e.g., a 2D image, 2D coordinate system, etc.) and 3D data (e.g., a 3D model, 3D coordinate system, etc.). Similarly, the co-registered operation data may include optical markers and/or fiducials for visual tracking with a 2D or 3D imaging sensor (e.g., an RGB camera, infrared camera, etc.). In further examples, the co-registered operation data may include a common reference frame that may be, for example, a surgical reference frame for a user, a patient reference frame for a patient, an observer reference frame for an observer, etc. As such, the co-registered operation data may allow the robotic-assisted platform 102 to orientate, overlay, and/or otherwise generate components of the view, such as a 2D view, a 3D model, etc.
[0071] In some examples, the robotic-assisted platform 102 may automatically generate the co-registered data using machine learning, image analysis, optical recognition, and/or other similar techniques. In further examples, the robotic-assisted platform 102 may prompt the user to interact with one or more targets on the 3D model and/or the 2D data, and may generate the co-registered data based on the interactions. Depending on the example, the interactions may be touching the 3D model and/or the 2D data, dragging a virtual instrument to the location, moving a physical controller to the location, and/or any other similar method of interaction with the virtual environment. In further examples, the robotic-assisted platform 102 may use 3D shape sensor data and/or external electromagnetic tracking data to perform the co-registration as described herein. Depending on the example, the robotic-assisted platform 102 may continually or dynamically update the co-registration based on data measured throughout a procedure (e.g., via visual identification of landmarks within a 3D model according to 2D data, etc.).
[0072] At block 508, the robotic-assisted platform 102 generates a 3D navigation interface. In some examples, generating the 3D navigation interface includes generating a display of at least a portion of the 3D model based on the co-registered operation data. The robotic-assisted platform 102 may display or cause a display device to display the 3D model in a predetermined position relative to the patient. For example, the robotic-assisted platform 102 may display or cause a display device to display the 3D model above the patient (e.g., as depicted in FIGS. 2A and 2B), adjacent to the patient, superimposed with the patient, etc. In further examples, the robotic-assisted platform 102 may cause the display device to adjust the position of the 3D model based on user inputs. For example, the robotic-assisted platform 102 may ensure that the display device displays the 3D model as remaining in position above the patient when the user moves the view within the 3D navigation interface. Alternatively, the robotic-assisted platform 102 may move the 3D model display in response to receiving an indication from the user to move the display (e.g., the user drags the display elsewhere, the user inputs a particular command to change positioning, etc.).
[0073] In further examples, the robotic-assisted platform 102 generates the 3D navigation interface by additionally generating a display of one or more 2D images based at least on the co-registered operation data. The robotic-assisted platform 102 may generate the 3D navigation interface by orienting the 2D images such that an orientation of the 2D images is based at least on the 3D model. As such, the robotic-assisted platform 102 may orient the 2D images and the 3D model such that the 2D images and the 3D model share an orientation from the user perspective, allowing the user to more easily identify shared locations, landmarks, etc. between the 2D images and 3D model. Alternatively, the robotic-assisted platform 102 may orient the 2D images relative to the 3D model such that the 2D images rotate or tilt as the user moves the 3D model (e.g., a top down view of the 2D images and a front view of the 3D model may still rotate as the 3D model rotates).
[0074] In further examples, the 3D navigation interface includes a live video feed from an instrument of the environment 100. For example, the 3D navigation interface may include a feed from one or more imaging devices associated with a flexible elongate device as described herein. Depending on the example, the live video feed may be a 2D video feed or a 3D video feed. Further, the robotic-assisted platform 102 may cause the display device to align the view of the live video feed with the 3D model. For example, the robotic-assisted platform 102 may generate the 3D navigation interface such that the video feed and the 3D model share an orientation, as described above with regard to the 2D view. Additionally or alternatively, the robotic-assisted platform 102 may superimpose some or all of the video feed with the 3D model or otherwise indicate where the video feed is displaying with regard to the 3D model. Depending on the example, the robotic-assisted platform 102 may display the video feed in various orientations. For example, the robotic-assisted platform 102 may display the video feed directly behind a patient and/or 3D model (e.g., similar to 2D instrument video 219 as described above with regard to FIGS. 2A and 2B), oriented depending on the current pose of the virtual instrument (e.g., similar to oriented 2D view 218 as described above with regard to FIGS. 2A and 2B), oriented depending on the position of the user, oriented above and/or below the 3D model, oriented superimposed with the 3D model, etc.
[0075] At block 510, the robotic-assisted platform 102 causes a display device to display the 3D navigation interface to a user. In some examples, the display device is an HMD. In such examples, the robotic-assisted platform 102 causes the HMD to display the 3D navigation interface in a form of extended reality (XR). Depending on the example, the robotic-assisted platform 102 may cause the HMD to display the 3D navigation interface as (i) a mixed reality (MR) navigation interface, (ii) an augmented reality (AR) navigation interface, or (iii) a virtual reality (VR) navigation interface.
[0076] In some examples, the robotic-assisted platform 102 further receives sensor data from one or more sensors configured to generate data associated with the patient, the internal volume of the patient, a portion of the internals of the patient, etc. Depending on the example, the robotic-assisted platform 102 may receive 2D sensor data or 3D sensor data. The sensor data may include any of: computed tomography (CT) data, cone-beam computed tomography (CBCT) data, catheter data, endoscope video data, magnetic resonance imaging (MRI) data, C-arm data, radial endobronchial ultrasound (EBUS) data, a combination of data types, and/or any other such data as described herein.
[0077] In further examples, the robotic-assisted platform 102 additionally or alternatively receives navigation information regarding the patient, the internal volume of the patient, a portion of the internals of the patient, etc. In some such examples, the navigation information may include a historical navigation in the patient. The historical navigation may include navigation history during the current session, navigation history during past sessions for the same patient, generalized and/or normalized navigation history for the volume in general (e.g., various navigation paths taken by the physician in similar cases for the lungs), etc. Depending on the example, the robotic-assisted platform 102 may cause the display device to display the navigation history as drawn or otherwise generated paths along the 3D model, each path indicating a past navigation path. In some such examples, each path includes an indication of when the navigation occurred, relative to other paths and/or according to the actual time or date of the navigation. Additionally or alternatively, the historical navigation may include the history for past sessions as separate maps, as lists of passed landmarks, as descriptions of distances traveled or turns taken, etc. In further such examples, the robotic-assisted platform 102 may cause the display device to display a subset of the navigation history according to user preferences, user indications, a current user task, etc.
[0078] Similarly, the navigation history may additionally or alternatively include a navigation path representative of a recommended path for the instrument in the volume of the patient. Depending on the example, a physician may use the robotic-assisted platform 102 or an external computing device to generate the navigation path prior to the medical procedure or during the medical procedure. In further examples, the robotic-assisted platform 102 may generate a navigation path based on indications from the user, navigation history, etc. using machine learning techniques and/or a trained neural network to predict a preferred path for the user. In some examples, the navigation information additionally or alternatively includes visual or auditory elements representative of landmarks (e.g., distinct and/or easily recognizable locations in the patient) and/or one or more sampled tissue locations (e.g., locations at which an instrument of robotic-assisted platform 102 has sampled tissue) in the patient.
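As a minimal illustration of generating a recommended path over a branching anatomy, the following sketch runs a breadth-first search over a branch-level airway graph. The graph representation and branch names are hypothetical; a real planner could instead weight edges by distance, branch diameter, or clinical risk.

```python
from collections import deque

def recommend_path(airway_graph, start, target):
    """Breadth-first search over a branch-level airway graph.

    `airway_graph` is assumed to map a branch ID to the IDs of its connected branches;
    the returned list is the shortest branch sequence from `start` to `target`,
    or None if the target is unreachable."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        branch = frontier.popleft()
        if branch == target:
            # Walk the predecessor chain back to the start to recover the path.
            path = []
            while branch is not None:
                path.append(branch)
                branch = came_from[branch]
            return list(reversed(path))
        for neighbor in airway_graph.get(branch, ()):
            if neighbor not in came_from:
                came_from[neighbor] = branch
                frontier.append(neighbor)
    return None

# Hypothetical example: trachea -> right main bronchus -> target segment.
airways = {"trachea": ["left_main", "right_main"],
           "right_main": ["trachea", "rb1", "rb2"],
           "rb2": ["right_main", "rb2a"]}
print(recommend_path(airways, "trachea", "rb2a"))  # ['trachea', 'right_main', 'rb2', 'rb2a']
```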
[0079] In further examples, the robotic-assisted platform 102 and/or the display device modifies at least a portion of the 3D navigation interface based on control input(s) received from the user. The control input(s) may modify the 3D navigation interface directly (e.g., moving interface elements, zooming in or out on elements, shifting a user perspective or view, etc.) or responsive to a physical element moving (e.g., modifying model display in response to the user moving an instrument in the patient, etc.).
[0080] Depending on the example, the user may provide the control input(s) using a controller, a trackball, a keyboard, a mouse, a touchscreen device, touch sensors, movement sensors, accelerometers, gyroscopes, positional sensors, etc. As such, the user may provide the control input(s) by way of an input to a physical device (e.g., pressing a button, moving a joystick, rolling a trackball, etc.) and/or by way of interacting with a virtual object. For example, the robotic-assisted platform 102 may detect that a user moves a hand to interact with an object displayed in a virtual space via the display device, and may cause the object to move in the user field of view accordingly. In some examples, the user may drag a virtual representation of an instrument within the boundary of the 3D navigation interface to indicate where a corresponding instrument is to move within the patient volume. In further examples, the user may tap an indication of a virtual target in the volume of the patient, and the robotic-assisted platform 102 causes the instrument including the sensor to follow a path to the indicated target. In some examples, the robotic-assisted platform 102 causes the instrument to follow a path according to the navigation information. In other examples, the robotic-assisted platform 102 automatically generates a path in response to the user indication using navigation history, landmarks, tissue sample locations, etc. and follows the generated path.
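A simplified sketch of how such control inputs might be routed is shown below. `event`, `platform`, and every method called on them are illustrative placeholders rather than interfaces of the system described above; the intent is only to show the split between direct manipulation, tap-to-navigate with confirmation, and view adjustment.

```python
def handle_control_input(event, platform):
    """Route a user control input to the matching interface action.

    `event` carries the input kind and payload; `platform` is assumed to expose
    path planning, confirmation prompts, and instrument/view commands."""
    if event.kind == "drag_virtual_instrument":
        # Direct manipulation: mirror the dragged virtual tip with the physical instrument.
        platform.command_instrument_tip(event.new_tip_position_model_frame)
    elif event.kind == "tap_target":
        # Indirect manipulation: plan a path first, then ask the user to confirm it.
        path = platform.plan_path(to_target=event.target_id)
        if platform.prompt_user_to_confirm(path):
            platform.follow_path(path)
    elif event.kind == "adjust_view":
        # Pure interface change: no physical motion is commanded.
        platform.update_view(zoom=event.zoom, rotation=event.rotation)
```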
[0081] In some examples, the robotic-assisted platform 102 modifies the visibility of parts of the 3D navigation interface depending on the current task, user context, etc. For example, the robotic-assisted platform 102 may determine that at least one element is distracting and/or likely to distract the user. In response, the robotic-assisted platform 102 may dim the visibility of the element (e.g., by reducing the opacity, reducing a lighting, washing out colors of the element, etc.) to allow the user to better focus on other elements. In further examples, the robotic-assisted platform 102 instead changes the XR type in response to determining that an element is distracting a user. For example, the robotic-assisted platform 102 may determine that another individual in the user view is distracting the user and may change from an AR view to a VR view that does not show the other individual. The robotic-assisted platform 102 may determine that an object or element is distracting or likely to distract the user based on a user indication, via a trained machine learning algorithm to predict distraction, etc. In some examples, dimming the visibility of the element or of a broader portion of the interface may occur naturally with and/or in conjunction with changing the XR type.
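The dimming behavior described above can be illustrated with a small opacity-adjustment helper. The element names, the 0-to-1 distraction scores, and the threshold below are assumptions for this example; the scores could come from a user indication or a trained classifier, as noted above.

```python
def dim_distracting_elements(scene, distraction_scores, threshold=0.7, dim_opacity=0.25):
    """Reduce the opacity of any interface element whose distraction score exceeds
    the threshold, leaving the rest of the scene untouched.

    `scene` maps element names to display properties; `distraction_scores` maps the same
    names to a 0..1 score."""
    for name, element in scene.items():
        if distraction_scores.get(name, 0.0) > threshold:
            element["opacity"] = min(element.get("opacity", 1.0), dim_opacity)
    return scene

# Hypothetical usage: only the flagged overlay is dimmed.
scene = {"3d_model": {"opacity": 1.0}, "bystander_overlay": {"opacity": 1.0}}
dim_distracting_elements(scene, {"bystander_overlay": 0.9})
# scene["bystander_overlay"]["opacity"] is now 0.25; the 3D model is unchanged.
```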
[0082] In still further examples, additional second users (e.g., students, supervising physicians, observers, etc.) may observe the 3D navigation interface as the first user performs the medical procedure. In some such examples, the robotic-assisted platform 102 generates a second 3D navigation interface and displays the second interface to the second users. Depending on the example, the second 3D navigation interface may include a reduced set of information, such as removing past navigation history, particular patient information, etc. Similarly, the robotic-assisted platform 102 may modify the second 3D navigation interface in response to inputs from the first user, but may avoid modifying the first 3D navigation interface and/or the second 3D navigation interface in response to inputs from the second user(s).
[0083] Referring next to FIG. 6, a flow diagram depicts an example method 600 for generating a 3D navigation interface for a robotically-assisted medical procedure. Although the method 600 is described below with regard to environment 100 and components thereof as illustrated in FIG. 1, it will be understood that other similarly suitable imaging devices and components may be used instead.
[0084] At block 602, the robotic-assisted platform 102 receives a 3D model representative of a volume of a patient. Depending on the example, the 3D model may be of a particular organ (e.g., the lungs) of a patient or a portion of an organ, a larger system (e.g., the respiratory system) of a patient, an entirety of a patient anatomy, etc. In some examples, a component of the robotic-assisted platform 102 may generate the 3D model and transmit the model within the environment 100. In further examples, an element outside the environment 100 may generate the 3D model and transmit the model to the environment 100. Depending on the example, any of a computed tomography (CT) scan device, a cone-beam computed tomography (CBCT) scan device, a magnetic resonance imaging (MRI) scan device, a positron emission tomography (PET) scan device, a tomosynthesis device, etc. may generate the 3D model. Further, the device that generates the 3D model may generate the model prior to the medical procedure (e.g., immediately before, hours before, days before, etc.) or during the medical procedure. Similar to FIG. 5, the system may display the 3D model above a patient and/or a view of a patient, adjacent to a patient and/or view, superimposed with a patient and/or view, etc.
[0085] In some examples, the view of the patient is a view of the physical patient seen via the display device, such as in an AR view. In other examples, the view of the patient is a representation of the patient, such as one generated in a VR view. In further examples, the 3D model includes a virtual apparatus that rotates to match a positioning of a corresponding physical apparatus. In some such examples, the virtual apparatus is offset from the physical apparatus by the same distance the 3D model is offset from the patient view. In further examples, the virtual apparatus is superimposed with the physical apparatus even though the remainder of the 3D model is not superimposed with the patient view.
[0086] At block 604, the robotic-assisted platform 102 generates a 2D view of the volume of the patient. In some examples, the 2D view is representative of a 2D imaging modality when positioned at a particular apparatus projection angle. For example, the 2D view may be an x-ray image or representative of an x-ray image taken by an imaging modality (e.g., a C-arm) at a given angle. In further examples, the 2D view includes at least one of a synthetic fluoroscopy image, a 2D x-ray image (e.g., from a C-arm), an EBUS image, etc. Similarly, the imaging modality may include at least one of: (i) C-arm imaging, (ii) CT imaging, (iii) MRI imaging, (iv) CBCT imaging, (v) EBUS imaging, or (vi) any other similarly appropriate imaging modality.
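One way to approximate such a synthetic fluoroscopy view from a 3D dataset is a simple digitally-reconstructed-radiograph projection, sketched below under the assumption that a CT volume is available as a NumPy array. A parallel-ray sum stands in for true cone-beam C-arm geometry, and the axis conventions are illustrative only.

```python
import numpy as np
from scipy.ndimage import rotate

def synthetic_fluoro(ct_volume: np.ndarray, projection_angle_deg: float) -> np.ndarray:
    """Approximate a 2D fluoroscopic view of a CT volume at a given projection angle.

    Rotates the volume about its first (cranio-caudal) axis and sums attenuation along
    the anterior-posterior axis, i.e., a parallel-beam approximation of the real beam."""
    rotated = rotate(ct_volume, projection_angle_deg, axes=(1, 2), reshape=False, order=1)
    projection = rotated.sum(axis=1)          # integrate attenuation along the beam axis
    projection -= projection.min()            # normalize to a displayable 0..1 range
    return projection / max(projection.max(), 1e-9)

# e.g., drr = synthetic_fluoro(ct, projection_angle_deg=30.0) for a 30-degree apparatus angle.
```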
[0087] In some examples, the robotic-assisted platform 102 receives an additional image (e.g., a CT image, etc.) and/or generates the 2D view such that the image diverges from the patient volume. In response to determining that the image diverges from the reality of the patient volume, the robotic-assisted platform 102 may transmit a request for an updated image, using the same or a different imaging modality (e.g., a CBCT image to replace a CT image). [0088] At block 606, the robotic-assisted platform 102 determines a display orientation for the 2D view relative to the user. In some examples, the robotic-assisted platform 102 determines the display orientation for the 2D view based on input from the user, based on a registration or co-registration process, automatically based on the 3D model, based on a patient position, etc.
[0089] At block 608, the robotic-assisted platform 102 registers the 2D view to the 3D model such that both the 2D view and the 3D model share the display orientation. In some examples, the robotic-assisted platform 102 orients the 2D view and the 3D model relative to the user such that the shared display orientation is from a user perspective. For example, the robotic-assisted platform 102 may orient the 2D view and the 3D model such that the shared orientation depicts a first person view from the user perspective, a third person view from the user perspective (e.g., over the shoulder), a top down view of the patient from the user perspective, a task view representative of at least a portion of the volume of the patient based on a current task performed by the user, or another similar orientation.
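A minimal sketch of keeping the 2D view and the 3D model in a shared, user-relative orientation is shown below. It assumes the 2D image is placed on a quad in the same scene as the model and that the user perspective is expressed as a 3x3 rotation matrix; both assumptions, and the names, are for illustration only.

```python
import numpy as np

def apply_shared_orientation(model_vertices, view_corners, user_rotation):
    """Rotate the 3D model and the 2D view's placement quad by the same 3x3 rotation
    so both remain aligned to the user's perspective.

    `model_vertices` is (N, 3), `view_corners` is (4, 3) giving where the 2D image plane
    sits in the shared scene, and `user_rotation` is a 3x3 rotation matrix."""
    rotated_model = model_vertices @ user_rotation.T
    rotated_view = view_corners @ user_rotation.T
    return rotated_model, rotated_view

# Example: a 90-degree turn about the vertical axis, e.g., for an over-the-shoulder view.
theta = np.pi / 2
user_rotation = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                          [0.0,           1.0, 0.0],
                          [-np.sin(theta), 0.0, np.cos(theta)]])
```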
[0090] In some examples, the robotic-assisted platform 102 may similarly scale or otherwise modify the 3D model and/or the 2D view in the shared orientation based on the user and/or user activity. For example, the robotic-assisted platform 102 may scale the 3D model relative to a movement speed of an elongate flexible device and/or another instrument, a navigation context for the volume of the patient, user-indicated precision preferences, a current user task, etc. In some such examples, scaling the 3D model includes modifying a navigation path displayed to the user based on the movement speed (e.g., by changing spacing between component dashes of a dashed line indicating a navigation path based on the movement speed). For example, as the user increases the movement speed of the elongate flexible device, the robotic-assisted platform 102 may reduce the spacing between the component dashes, so that a user continues at a safe speed. In some such examples, the robotic-assisted platform 102 indicates that a user is moving at an unsafe or maximum speed by transforming the dashed line into a solid line.
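As a concrete illustration of the speed-dependent dash spacing described above, the sketch below maps movement speed to the gap between dashes and collapses to a solid line at the maximum safe speed. The units, the maximum safe speed, and the function name are illustrative assumptions, not values prescribed by the described system.

```python
def navigation_dash_spacing(speed_mm_s, max_safe_speed_mm_s=10.0, max_spacing_mm=6.0):
    """Map instrument movement speed to the gap between dashes of the displayed path.

    Faster motion shrinks the gaps; at or above the (assumed) maximum safe speed the
    spacing reaches zero, i.e., the dashed path is rendered as a solid line."""
    fraction = min(speed_mm_s / max_safe_speed_mm_s, 1.0)
    return max_spacing_mm * (1.0 - fraction)

assert navigation_dash_spacing(0.0) == 6.0    # slow motion: widely spaced dashes
assert navigation_dash_spacing(10.0) == 0.0   # at max safe speed: solid line
```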
[0091] At block 610, the robotic-assisted platform 102 causes a display device to simultaneously display the 2D view and the 3D model to a user in accordance with the shared display orientation. For example, the robotic-assisted platform 102 may display the 2D view above the 3D model, below the 3D model, behind the 3D model, superimposed with the 3D model, etc. (e.g., similar to 2D image 211 and 3D model 212 as described above with regard to FIGS. 2A and 2B). In some examples, the display device is an HMD, and the HMD displays the 2D view and the 3D model in XR, such as MR, AR, or VR. In further examples, the display device is a 3D computing device.
[0092] At block 612, the robotic-assisted platform 102 receives control input from the user. Depending on the example, the control input may include an input for moving an apparatus (e.g., an instrument, an external imaging device, etc.) associated with the robotic-assisted platform 102. For example, the control input may be a control input for rotating a C-arm. In some examples, by moving the apparatus, both the physical apparatus and a virtual representation of the apparatus move to reflect the inputs. In further examples, the robotic-assisted platform 102 moves the virtual representation of the apparatus as indicated in more detail below with regard to block 614, but does not move the physical apparatus until receiving a confirmation and/or indication from the user.
[0093] In some examples, the control input includes user manipulation (e.g., direct interaction) with a virtual representation of an elongate flexible device (e.g., the virtual apparatus as described above with regard to block 604). In some such examples, the robotic-assisted platform 102 causes the elongate flexible device to move based on the user manipulation of the virtual representation responsive to receiving the control input. In further examples, the control input includes an interaction by a user with a physical location or a virtual representation of the physical location, such as tapping a virtual representation of a location along the 3D model. In some such examples, the robotic-assisted platform 102 automatically generates a virtual path to the location in response to receiving the control input. The robotic-assisted platform 102 may prompt the user to confirm the path and/or cause the elongate flexible device to follow the path.
[0094] At block 614, the robotic-assisted platform 102 updates at least the 2D view of the volume of the patient based on the control input. In examples in which the control input includes an input for moving an apparatus, the robotic-assisted platform 102 updates the 2D view to correspond to a new position of the apparatus. For example, the robotic-assisted platform 102 may update the 2D view after controlling a C-arm to rotate to a new angle. In further examples, the robotic-assisted platform 102 updates the 3D model or both the 2D view and the 3D model. In some such examples, the system updates the 2D view and the 3D model while maintaining the shared orientation between both. In further examples, the user indicates to the robotic-assisted platform 102 that the robotic-assisted platform 102 should modify one of the 2D view and the 3D model, but not the other. Depending on the example, the robotic-assisted platform 102 may modify one while maintaining the shared orientation (e.g., where the modification does not involve changing the orientation), or the robotic-assisted platform 102 breaks the shared orientation (e.g., where the modification does include changing the orientation). Similarly, the robotic-assisted platform 102 may cause the display device to display instructions regarding how to rotate/reorient at least one of the 2D view or the 3D model. In some examples, the instructions may display after the user chooses to break the shared orientation, automatically, in response to a user indication, etc.
[0095] In some examples, the robotic-assisted platform 102 embeds histology sample images in the 3D model. For example, the robotic-assisted platform 102 may capture a 2D histology sample image at a fork in the model (or any other location), and the robotic-assisted platform 102 may embed the 2D histology sample at the fork in the model (or any other location). As such, a user can quickly and accurately determine what location a particular image indicates in the 3D model.
[0096] In further examples, the robotic-assisted platform 102 generates a procedure report in response to an indication from the user. Depending on the example, the indication may be or include a request from the user, an indication that the medical procedure is complete, an indication that the medical procedure is starting, etc. The procedure report may include navigation information representative of at least one of the 2D view or the 3D model of the volume of the patient. Depending on the example, the navigation information may include: (i) navigation guidance information representative of a recommended path for a sensor in the volume of the patient, (ii) a historical navigation representative of past navigation of a sensor in the volume of the patient, (iii) the shared display orientation of the 2D view and the 3D model, (iv) a slicing of the 3D model (e.g., a 2D image of a single view of the 3D model), (v) histology data representative of at least one of the 2D view or the 3D model, or (vi) any other similar information. Similarly, the navigation information may include any combination of the above.
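The sketch below assembles the navigation items listed above into a single JSON procedure report. The function signature and field names are illustrative assumptions; each argument stands in for data the platform would already hold, and only the kinds of information named in the paragraph above are included.

```python
import json
from datetime import datetime, timezone

def build_procedure_report(patient_id, shared_orientation, navigation_history,
                           recommended_path, model_slices, histology_entries):
    """Assemble the navigation-related pieces of a procedure report into one JSON document.

    All arguments are placeholders for JSON-serializable data (e.g., nested lists and dicts)."""
    report = {
        "patient_id": patient_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "shared_display_orientation": shared_orientation,   # e.g., a 3x3 rotation as nested lists
        "navigation_history": navigation_history,           # past sensor paths in the volume
        "recommended_path": recommended_path,                # recommended path for the sensor
        "model_slices": model_slices,                        # 2D slicings of the 3D model
        "histology": histology_entries,                      # histology data keyed by model location
    }
    return json.dumps(report, indent=2)
```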
[0097] Similar to FIG. 5 above, the robotic-assisted platform 102 may allow for other, second users to view the displayed 2D view and/or 3D model. In some examples, the robotic-assisted platform 102 generates a second user orientation for a second user different from the first user orientation. Subsequently, the robotic-assisted platform 102 may cause a second display device (e.g., another HMD, a computing device, etc.) to display the 2D view and/or 3D model in accordance with the second user orientation to the second user. In further examples, the robotic-assisted platform 102 displays the first orientation to the second user, but does not respond to control inputs from the second user.
[0098] It will be understood that, although FIGS. 5 and 6 are described above with regard to particular components, other components may perform some of the functionality described above as appropriate. For example, in some examples the HMD 110 of FIG. 1 may perform some of the functionality described. Similarly, in further examples, other components not described with regard to FIGS. 1, 3, or 4 may perform the functionality as described above.
[0099] FIGS. 7-9B depict diagrams of a medical system that may be used for manipulating a medical instrument according to any of the methods and systems described above, in some examples.
[0100] FIG. 7 is a simplified diagram of a medical system 700 according to some examples. The medical system 700 may be suitable for use in, for example, surgical, diagnostic (e.g., biopsy), or therapeutic (e.g., ablation, electroporation, etc.) procedures. While some examples are provided herein with respect to such procedures, any reference to medical or surgical instruments and medical or surgical methods is non-limiting. The systems, instruments, and methods described herein may be used for animals, human cadavers, animal cadavers, portions of human or animal anatomy, non-surgical diagnosis, as well as for industrial systems, general or special purpose robotic systems, general or special purpose teleoperational systems, or robotic medical systems.
[0101] As shown in FIG. 7, medical system 700 may include a manipulator assembly 702 that controls the operation of a medical instrument 704 in performing various procedures on a patient P. Medical instrument 704 may extend into an internal site within the body of patient P via an opening in the body of patient P. The manipulator assembly 702 may be teleoperated, nonteleoperated, or a hybrid teleoperated and non-teleoperated assembly with one or more degrees of freedom of motion that may be motorized and/or one or more degrees of freedom of motion that may be non-motorized (e.g., manually operated). The manipulator assembly 702 may be mounted to and/or positioned near a patient table T. A master assembly 706 allows an operator O (e.g., a surgeon, a clinician, a physician, or other user) to control the manipulator assembly 702. In some examples, the master assembly 706 allows the operator O to view the procedural site or other graphical or informational displays. In some examples, the manipulator assembly 702 may be excluded from the medical system 700 and the instrument 704 may be controlled directly by the operator O. In some examples, the manipulator assembly 702 may be manually controlled by the operator O. Direct operator control may include various handles and operator interfaces for handheld operation of the instrument 704.
[0102] The master assembly 706 may be located at a surgeon’s console which is in proximity to (e.g., in the same room as) a patient table T on which patient P is located, such as at the side of the patient table T. In some examples, the master assembly 706 is remote from the patient table T, such as in a different room or a different building from the patient table T. The master assembly 706 may include one or more control devices for controlling the manipulator assembly 702. The control devices may include any number of a variety of input devices, such as joysticks, trackballs, scroll wheels, directional pads, buttons, data gloves, trigger-guns, hand-operated controllers, voice recognition devices, motion or presence sensors, and/or the like. In some examples, the master assembly 706 may be or include an extended reality (XR) device, such as a virtual reality (VR) device, an augmented reality (AR) device, a mixed reality (MR) device, or any other such device as described herein.
[0103] The manipulator assembly 702 supports the medical instrument 704 and may include a kinematic structure of links that provide a set-up structure. The links may include one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place) and/or one or more servo controlled links (e.g., one or more links that may be controlled in response to commands, such as from a control system 712). The manipulator assembly 702 may include a plurality of actuators (e.g., motors) that drive inputs on the medical instrument 704 in response to commands, such as from the control system 712. The actuators may include drive systems that move the medical instrument 704 in various ways when coupled to the medical instrument 704. For example, one or more actuators may advance medical instrument 704 into a naturally or surgically created anatomic orifice. Actuators may control articulation of the medical instrument 704, such as by moving the distal end (or any other portion) of medical instrument 704 in multiple degrees of freedom. These degrees of freedom may include three degrees of linear motion (e.g., linear motion along the X, Y, Z Cartesian axes) and three degrees of rotational motion (e.g., rotation about the X, Y, Z Cartesian axes). One or more actuators may control rotation of the medical instrument about a longitudinal axis. Actuators can also be used to move an articulable end effector of medical instrument 704, such as for grasping tissue in the jaws of a biopsy device and/or the like, or may be used to move or otherwise control tools (e.g., imaging tools, ablation tools, biopsy tools, electroporation tools, etc.) that are inserted within the medical instrument 704. Depending on the example, the manipulator assembly 702 may include or be a robotic-assisted platform as described in more detail above with regard to FIGS. 1-4. Similarly, the medical instrument 704 may be or include elements of a medical instrument as described above with regard to FIGS. 1-4.
[0104] The medical system 700 may include a sensor system 708 with one or more sub-systems for receiving information about the manipulator assembly 702 and/or the medical instrument 704. Such sub-systems may include a position sensor system (e.g., that uses electromagnetic (EM) sensors or other types of sensors that detect position or location); a shape sensor system for determining the position, orientation, speed, velocity, pose, and/or shape of a distal end and/or of one or more segments along a flexible body of the medical instrument 704; a visualization system (e.g., using a color imaging device, an infrared imaging device, an ultrasound imaging device, an x-ray imaging device, a fluoroscopic imaging device, a computed tomography (CT) imaging device, a magnetic resonance imaging (MRI) imaging device, or some other type of imaging device) for capturing images, such as from the distal end of medical instrument 704 or from some other location; and/or actuator position sensors such as resolvers, encoders, potentiometers, and the like that describe the rotation and/or orientation of the actuators controlling the medical instrument 704.
[0105] The medical system 700 may include a display system 710 for displaying an image or representation of the procedural site and the medical instrument 704. Display system 710 and master assembly 706 may be oriented so physician O can control medical instrument 704 and master assembly 706 with the perception of telepresence. In some examples, although the display system 710 and the master assembly 706 are depicted in FIG. 7 as separate blocks, both the display system 710 and the master assembly 706 may be part of the same device and/or operation control system.
[0106] In some examples, the medical instrument 704 may include a visualization system, which may include an image capture assembly that records a concurrent or real-time image of a procedural site and provides the image to the operator O through one or more displays of display system 710. The image capture assembly may include various types of imaging devices. The concurrent image may be, for example, a two-dimensional image or a three-dimensional image captured by an endoscope positioned within the anatomical procedural site. In some examples, the visualization system may include endoscopic components that may be integrally or removably coupled to medical instrument 704. Additionally or alternatively, a separate endoscope, attached to a separate manipulator assembly, may be used with medical instrument 704 to image the procedural site. The visualization system may be implemented as hardware, firmware, software or a combination thereof which interact with or are otherwise executed by one or more computer processors, such as of the control system 712.
[0107] Display system 710 may also display an image of the procedural site and medical instruments, which may be captured by the visualization system. In some examples, the medical system 700 provides a perception of telepresence to the operator O. For example, images captured by an imaging device at a distal portion of the medical instrument 704 may be presented by the display system 710 to provide the perception of being at the distal portion of the medical instrument 704 to the operator O. The input to the master assembly 706 provided by the operator O may move the distal portion of the medical instrument 704 in a manner that corresponds with the nature of the input (e.g., distal tip turns right when a trackball is rolled to the right) and results in corresponding change to the perspective of the images captured by the imaging device at the distal portion of the medical instrument 704. As such, the perception of telepresence for the operator O is maintained as the medical instrument 704 is moved using the master assembly 706. The operator O can manipulate the medical instrument 704 and hand controls of the master assembly 706 as if viewing the workspace in substantially true presence, simulating the experience of an operator that is physically manipulating the medical instrument 704 from within the patient anatomy.
[0108] In some examples, the display system 710 may present virtual images of a procedural site that are created using image data recorded pre-operatively (e.g., prior to the procedure performed by the medical instrument system 800) or intra-operatively (e.g., concurrent with the procedure performed by the medical instrument system 800), such as image data created using computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), fluoroscopy, thermography, ultrasound, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like. The virtual images may include two-dimensional, three-dimensional, or higher-dimensional (e.g., including, for example, time-based or velocity-based information) images. In some examples, one or more models are created from pre-operative or intra-operative image data sets and the virtual images are generated using the one or more models.
[0109] In some examples, for purposes of image-guided medical procedures, display system 710 may display a virtual image that is generated based on tracking the location of medical instrument 704. For example, the tracked location of the medical instrument 704 may be registered (e.g., dynamically referenced) with the model generated using the pre-operative or intra-operative images, with different portions of the model corresponding with different locations of the patient anatomy. As the medical instrument 704 moves through the patient anatomy, the registration is used to determine portions of the model corresponding with the location and/or perspective of the medical instrument 704 and virtual images are generated using the determined portions of the model. This may be done to present the operator O with virtual images of the internal procedural site from viewpoints of medical instrument 704 that correspond with the tracked locations of the medical instrument 704.
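The registration step described above can be illustrated as a single homogeneous-transform composition. The sketch below assumes 4x4 matrices and hypothetical variable names, and it deliberately leaves out how the registration itself is computed.

```python
import numpy as np

def instrument_pose_in_model_frame(tracked_pose_sensor_frame, registration_transform):
    """Express a tracked instrument pose in the model's coordinate frame.

    Both arguments are 4x4 homogeneous transforms: the tracked pose in the sensor/tracker
    frame and a previously computed registration between that frame and the model.
    The result can be used to pick which portion of the model to render as the virtual view."""
    return registration_transform @ tracked_pose_sensor_frame

# e.g., model_T_tip = instrument_pose_in_model_frame(tracker_T_tip, model_T_tracker)
```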
[0110] The medical system 700 may also include the control system 712, which may include processing circuitry that implements some or all of the methods or functionality discussed herein. The control system 712 may include at least one memory and at least one processor for controlling the operations of the manipulator assembly 702, the medical instrument 704, the master assembly 706, the sensor system 708, and/or the display system 710. Control system 712 may include instructions (e.g., a non-transitory machine-readable medium storing the instructions) that, when executed by the at least one processor, configure the at least one processor to implement some or all of the methods or functionality discussed herein. While the control system 712 is shown as a single block in FIG. 7, the control system 712 may include two or more separate data processing circuits with one portion of the processing being performed at the manipulator assembly 702, another portion of the processing being performed at the master assembly 706, and/or the like. In some examples, the control system 712 may include other types of processing circuitry, such as application-specific integrated circuits (ASICs) and/or field-programmable gate arrays (FPGAs). The control system 712 may be implemented using hardware, firmware, software, or a combination thereof. [0111] In some examples, the control system 712 may receive feedback from the medical instrument 704, such as force and/or torque feedback. Responsive to the feedback, the control system 712 may transmit signals to the master assembly 706. In some examples, the control system 712 may transmit signals instructing one or more actuators of the manipulator assembly 702 to move the medical instrument 704. In some examples, the control system 712 may transmit informational displays regarding the feedback to the display system 710 for presentation or perform other types of actions based on the feedback.
[0112] The control system 712 may include a virtual visualization system to provide navigation assistance to operator O when controlling the medical instrument 704 during an image-guided medical procedure. Virtual navigation using the virtual visualization system may be based upon an acquired pre-operative or intra-operative dataset of anatomic passageways of the patient P. The control system 712 or a separate computing device may convert the recorded images, using programmed instructions alone or in combination with operator inputs, into a model of the patient anatomy. The model may include a segmented two-dimensional or three-dimensional composite representation of a partial or an entire anatomic organ or anatomic region. An image data set may be associated with the composite representation. The virtual visualization system may obtain sensor data from the sensor system 708 that is used to compute an (e.g., approximate) location of the medical instrument 704 with respect to the anatomy of patient P. The sensor system 708 may be used to register and display the medical instrument 704 together with the pre-operatively or intra-operatively recorded images. For example, PCT Publication WO 2016/161298 (published December 1, 2016, and titled “Systems and Methods of Registration for Image Guided Surgery”), which is incorporated by reference herein in its entirety, discloses example systems.
[0113] During a virtual navigation procedure, the sensor system 708 may be used to compute the (e.g., approximate) location of the medical instrument 704 with respect to the anatomy of patient P. The location can be used to produce both macro-level (e.g., external) tracking images of the anatomy of patient P and virtual internal images of the anatomy of patient P. The system may include one or more electromagnetic (EM) sensors, fiber optic sensors, and/or other sensors to register and display a medical instrument together with pre-operatively recorded medical images. For example, U.S. Patent No. 8,300,131 (filed May 13, 2011, and titled “Medical System Providing Dynamic Registration of a Model of an Anatomic Structure for Image-Guided Surgery”), which is incorporated by reference herein in its entirety, discloses example systems. [0114] Medical system 700 may further include operations and support systems (not shown) such as illumination systems, steering control systems, irrigation systems, and/or suction systems. In some examples, the medical system 700 may include more than one manipulator assembly and/or more than one master assembly. The exact number of manipulator assemblies may depend on the medical procedure and space constraints within the procedural room, among other factors. Multiple master assemblies may be co-located, or they may be positioned in separate locations. Multiple master assemblies may allow more than one operator to control one or more manipulator assemblies in various combinations.
[0115] FIG. 8A is a simplified diagram of a medical instrument system 800 according to some examples. The medical instrument system 800 includes a flexible elongate device 802 (also referred to as elongate device 802), a drive unit 804, and a medical tool 826 that collectively are an example of a medical instrument 704 of a medical system 700. The medical system 700 may be a teleoperated system, a non-teleoperated system, or a hybrid teleoperated and non-teleoperated system, as explained with reference to FIG. 7. A visualization system 831, tracking system 830, and navigation system 832 are also shown in FIG. 8A and are example components of the control system 712 of the medical system 700. In some examples, the medical instrument system 800 may be used for non-teleoperational exploratory procedures or in procedures involving traditional manually operated medical instruments, such as endoscopy. The medical instrument system 800 may be used to gather (e.g., measure) a set of data points corresponding to locations within anatomic passageways of a patient, such as patient P.
[0116] The elongate device 802 is coupled to the drive unit 804. The elongate device 802 includes a channel 821 through which the medical tool 826 may be inserted. The elongate device 802 navigates within patient anatomy to deliver the medical tool 826 to a procedural site. The elongate device 802 includes a flexible body 816 having a proximal end 817 and a distal end 818. In some examples, the flexible body 816 may have an approximately 3 mm outer diameter. Other flexible body outer diameters may be larger or smaller.
[0117] Medical instrument system 800 may include the tracking system 830 for determining the position, orientation, speed, velocity, pose, and/or shape of the flexible body 816 at the distal end 818 and/or of one or more segments 824 along flexible body 816, as will be described in further detail below. The tracking system 830 may include one or more sensors and/or imaging devices. The flexible body 816, such as the length between the distal end 818 and the proximal end 817, may include multiple segments 824. The tracking system 830 may be implemented using hardware, firmware, software, or a combination thereof. In some examples, the tracking system 830 is part of control system 712 shown in FIG. 7.
[0118] Tracking system 830 may track the distal end 818 and/or one or more of the segments 824 of the flexible body 816 using a shape sensor 822. The shape sensor 822 may include an optical fiber aligned with the flexible body 816 (e.g., provided within an interior channel of the flexible body 816 or mounted externally along the flexible body 816). In some examples, the optical fiber may have a diameter of approximately 800 μm. In other examples, the diameter may be larger or smaller. The optical fiber of the shape sensor 822 may form a fiber optic bend sensor for determining the shape of flexible body 816. Optical fibers including Fiber Bragg Gratings (FBGs) may be used to provide strain measurements in structures in one or more dimensions. Various systems and methods for monitoring the shape and relative position of an optical fiber in three dimensions, which may be applicable in some examples, are described in U.S. Patent Application Publication No. 2006/0013523 (filed July 13, 2005 and titled “Fiber optic position and shape sensing device and method relating thereto”); U.S. Patent No. 7,772,541 (filed on March 12, 2008 and titled “Fiber Optic Position and/or Shape Sensing Based on Rayleigh Scatter”); and U.S. Patent No. 8,773,350 (filed on Sept. 2, 2010 and titled “Optical Position and/or Shape Sensing”), which are all incorporated by reference herein in their entireties. Sensors in some examples may employ other suitable strain sensing techniques, such as Rayleigh scattering, Raman scattering, Brillouin scattering, and fluorescence scattering.
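The following sketch is not the fiber-optic method itself, only a simplified planar illustration of the underlying idea: integrating per-segment curvature (such as strain-derived measurements from FBGs could provide) into a reconstructed centerline. The segment length and curvature values are arbitrary, and a full implementation would work in three dimensions with measured bend directions.

```python
import numpy as np

def planar_shape_from_curvature(curvatures_per_m, segment_length_m=0.01):
    """Integrate per-segment curvature into a 2D centerline as a simplified stand-in for
    full 3D fiber-shape reconstruction.

    `curvatures_per_m[i]` is the signed curvature of segment i in 1/m; each segment is
    `segment_length_m` long. Returns an (N+1, 2) array of x, y positions along the body."""
    heading = 0.0
    points = [np.zeros(2)]
    for kappa in curvatures_per_m:
        heading += kappa * segment_length_m          # curvature integrates to heading change
        step = np.array([np.cos(heading), np.sin(heading)]) * segment_length_m
        points.append(points[-1] + step)
    return np.vstack(points)

# A constant curvature of 10 1/m over 50 one-centimeter segments bends the body
# through 10 * 0.5 = 5 radians in total.
shape = planar_shape_from_curvature([10.0] * 50)
```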
[0119] In some examples, the shape of the flexible body 816 may be determined using other techniques. For example, a history of the position and/or pose of the distal end 818 of the flexible body 816 can be used to reconstruct the shape of flexible body 816 over an interval of time (e.g., as the flexible body 816 is advanced or retracted within a patient anatomy). In some examples, the tracking system 830 may alternatively and/or additionally track the distal end 818 of the flexible body 816 using a position sensor system 820. Position sensor system 820 may be a component of an EM sensor system with the position sensor system 820 including one or more position sensors. Although the position sensor system 820 is shown as being near the distal end 818 of the flexible body 816 to track the distal end 818, the number and location of the position sensors of the position sensor system 820 may vary to track different regions along the flexible body 816. In one example, the position sensors include conductive coils that may be subjected to an externally generated electromagnetic field. Each coil of position sensor system 820 may produce an induced electrical signal having characteristics that depend on the position and orientation of the coil relative to the externally generated electromagnetic field. The position sensor system 820 may measure one or more position coordinates and/or one or more orientation angles associated with one or more portions of flexible body 816. In some examples, the position sensor system 820 may be configured and positioned to measure six degrees of freedom, e.g., three position coordinates X, Y, Z and three orientation angles indicating pitch, yaw, and roll of a base point. In some examples, the position sensor system 820 may be configured and positioned to measure five degrees of freedom, e.g., three position coordinates X, Y, Z and two orientation angles indicating pitch and yaw of a base point. Further description of a position sensor system, which may be applicable in some examples, is provided in U.S. Patent No. 6,380,432 (filed August 11, 1999 and titled “Six-Degree of Freedom Tracking System Having a Passive Transponder on the Object Being Tracked”), which is incorporated by reference herein in its entirety.
[0120] In some examples, the tracking system 830 may alternately and/or additionally rely on a collection of pose, position, and/or orientation data stored for a point of an elongate device 802 and/or medical tool 826 captured during one or more cycles of alternating motion, such as breathing. This stored data may be used to develop shape information about the flexible body 816. In some examples, a series of position sensors (not shown), such as EM sensors like the sensors in position sensor 820 or some other type of position sensors may be positioned along the flexible body 816 and used for shape sensing. In some examples, a history of data from one or more of these position sensors taken during a procedure may be used to represent the shape of elongate device 802, particularly if an anatomic passageway is generally static.
[0121] FIG. 8B is a simplified diagram of the medical tool 826 within the elongate device 802 according to some examples. The flexible body 816 of the elongate device 802 may include the channel 821 sized and shaped to receive the medical tool 826. In some examples, the medical tool 826 may be used for procedures such as diagnostics, imaging, surgery, biopsy, ablation, illumination, irrigation, suction, electroporation, etc. Medical tool 826 can be deployed through channel 821 of flexible body 816 and operated at a procedural site within the anatomy. The medical tool 826 may be, for example, an image capture probe, a biopsy tool (e.g., a needle, grasper, brush, etc.), an ablation tool (e.g., a laser ablation tool, radio frequency (RF) ablation tool, cryoablation tool, thermal ablation tool, heated liquid ablation tool, etc.), an electroporation tool, and/or another surgical, diagnostic, or therapeutic tool. In some examples, the medical tool 826 may include an end effector having a single working member such as a scalpel, a blunt blade, an optical fiber, an electrode, and/or the like. Other types of end effectors may include, for example, forceps, graspers, scissors, staplers, clip appliers, and/or the like. Other end effectors may further include electrically activated end effectors such as electrosurgical electrodes, transducers, sensors, and/or the like.
[0122] The medical tool 826 may be a biopsy tool used to remove sample tissue or a sampling of cells from a target anatomic location. In some examples, the biopsy tool is a flexible needle. The biopsy tool may further include a sheath that can surround the flexible needle to protect the needle and interior surface of the channel 821 when the biopsy tool is within the channel 821. The medical tool 826 may be an image capture probe that includes a distal portion with a stereoscopic or monoscopic camera that may be placed at or near the distal end 818 of flexible body 816 for capturing images (e.g., still or video images). The captured images may be processed by the visualization system 831 for display and/or provided to the tracking system 830 to support tracking of the distal end 818 of the flexible body 816 and/or one or more of the segments 824 of the flexible body 816. The image capture probe may include a cable for transmitting the captured image data that is coupled to an imaging device at the distal portion of the image capture probe. In some examples, the image capture probe may include a fiber-optic bundle, such as a fiberscope, that couples to a more proximal imaging device of the visualization system 831. The image capture probe may be single-spectral or multi-spectral, for example, capturing image data in one or more of the visible, near-infrared, infrared, and/or ultraviolet spectrums. The image capture probe may also include one or more light emitters that provide illumination to facilitate image capture. In some examples, the image capture probe may use ultrasound, x-ray, fluoroscopy, CT, MRI, or other types of imaging technology.
[0123] In some examples, the image capture probe is inserted within the flexible body 816 of the elongate device 802 to facilitate visual navigation of the elongate device 802 to a procedural site and then is replaced within the flexible body 816 with another type of medical tool 826 that performs the procedure. In some examples, the image capture probe may be within the flexible body 816 of the elongate device 802 along with another type of medical tool 826 to facilitate simultaneous image capture and tissue intervention, such as within the same channel 821 or in separate channels. A medical tool 826 may be advanced from the opening of the channel 821 to perform the procedure (or some other functionality) and then retracted back into the channel 821 when the procedure is complete. The medical tool 826 may be removed from the proximal end 817 of the flexible body 816 or from another optional instrument port (not shown) along flexible body 816.
[0124] In some examples, the elongate device 802 may include integrated imaging capability rather than utilize a removable image capture probe. For example, the imaging device (or fiber-optic bundle) and the light emitters may be located at the distal end 818 of the elongate device 802. The flexible body 816 may include one or more dedicated channels that carry the cable(s) and/or optical fiber(s) between the distal end 818 and the visualization system 831. Here, the medical instrument system 800 can perform simultaneous imaging and tool operations.
[0125] In some examples, the medical tool 826 is capable of controllable articulation. The medical tool 826 may house cables (which may also be referred to as pull wires), linkages, or other actuation controls (not shown) that extend between its proximal and distal ends to controllably bend the distal end of medical tool 826, such as discussed herein for the flexible elongate device 802. The medical tool 826 may be coupled to a drive unit 804 and the manipulator assembly 702. In these examples, the elongate device 802 may be excluded from the medical instrument system 800 or may be a flexible device that does not have controllable articulation. Steerable instruments or tools, applicable in some examples, are further described in detail in U.S. Patent No. 7,916,681 (filed on Oct. 4, 2005 and titled “Articulated Surgical Instrument for Performing Minimally Invasive Surgery with Enhanced Dexterity and Sensitivity”) and U.S. Patent No. 9,259,274 (filed Sept. 30, 2008 and titled “Passive Preload and Capstan Drive for Surgical Instruments”), which are incorporated by reference herein in their entireties.
[0126] The flexible body 816 of the elongate device 802 may also or alternatively house cables, linkages, or other steering controls (not shown) that extend between the drive unit 804 and the distal end 818 to controllably bend the distal end 818 as shown, for example, by broken dashed line depictions 819 of the distal end 818 in FIG. 8A. In some examples, at least four cables are used to provide independent up-down steering to control a pitch of the distal end 818 and left-right steering to control a yaw of the distal end 818. In these examples, the flexible elongate device 802 may be a steerable catheter. Examples of steerable catheters, applicable in some examples, are described in detail in PCT Publication WO 2019/018436 (published Jan. 24, 2019 and titled “Flexible Elongate Device Systems and Methods”), which is incorporated by reference herein in its entirety.
[0127] In examples where the elongate device 802 and/or medical tool 826 are actuated by a teleoperational assembly (e.g., the manipulator assembly 702), the drive unit 804 may include drive inputs that removably couple to and receive power from drive elements, such as actuators, of the teleoperational assembly. In some examples, the elongate device 802 and/or medical tool 826 may include gripping features, manual actuators, or other components for manually controlling the motion of the elongate device 802 and/or medical tool 826. The elongate device 802 may be steerable or, alternatively, the elongate device 802 may be non-steerable with no integrated mechanism for operator control of the bending of distal end 818. In some examples, one or more channels 821 (which may also be referred to as lumens), through which medical tools 826 can be deployed and used at a target anatomical location, may be defined by the interior walls of the flexible body 816 of the elongate device 802.
[0128] In some examples, the medical instrument system 800 (e.g., the elongate device 802 or medical tool 826) may include a flexible bronchial instrument, such as a bronchoscope or bronchial catheter, for use in examination, diagnosis, biopsy, and/or treatment of a lung. The medical instrument system 800 may also be suited for navigation and treatment of other tissues, via natural or surgically created connected passageways, in any of a variety of anatomic systems, including the colon, the intestines, the kidneys and kidney calices, the brain, the heart, the circulatory system including vasculature, and/or the like.
[0129] The information from the tracking system 830 may be sent to the navigation system 832, where the information may be combined with information from the visualization system 831 and/or pre-operatively obtained models to provide the physician, clinician, surgeon, or other operator with real-time position information. In some examples, the real-time position information may be displayed on the display system 710 for use in the control of the medical instrument system 800. In some examples, the navigation system 832 may utilize the position information as feedback for positioning medical instrument system 800. Various systems for using fiber optic sensors to register and display a surgical instrument with surgical images, applicable in some examples, are provided in U.S. Patent No. 8,300,131 (filed May 13, 2011 and titled “Medical System Providing Dynamic Registration of a Model of an Anatomic Structure for Image-Guided Surgery”), which is incorporated by reference herein in its entirety.
[0130] FIGS. 9A and 9B are simplified diagrams of side views of a patient coordinate space including a medical instrument mounted on an insertion assembly according to some examples. As shown in FIGS. 9A and 9B, a surgical environment 900 may include a patient P positioned on the patient table T. Patient P may be stationary within the surgical environment 900 in the sense that gross patient movement is limited by sedation, restraint, and/or other means. Cyclic anatomic motion, including respiration and cardiac motion, of patient P may continue. Within surgical environment 900, a medical instrument 904 is used to perform a medical procedure which may include, for example, surgery, biopsy, ablation, illumination, irrigation, suction, or electroporation. The medical instrument 904 may also be used to perform other types of procedures, such as a registration procedure to associate the position, orientation, and/or pose data captured by the sensor system 708 to a desired (e.g., anatomical or system) reference frame. The medical instrument 904 may be, for example, the medical instrument 704. In some examples, the medical instrument 904 may include an elongate device 910 (e.g., a catheter) coupled to an instrument body 912. Elongate device 910 includes one or more channels sized and shaped to receive a medical tool.
[0131] Elongate device 910 may also include one or more sensors (e.g., components of the sensor system 708). In some examples, a shape sensor 914 may be fixed at a proximal point 916 on the instrument body 912. The proximal point 916 of the shape sensor 914 may be movable with the instrument body 912, and the location of the proximal point 916 with respect to a desired reference frame may be known (e.g., via a tracking sensor or other tracking device). The shape sensor 914 may measure a shape from the proximal point 916 to another point, such as a distal end 918 of the elongate device 910. The shape sensor 914 may be aligned with the elongate device 910 (e.g., provided within an interior channel or mounted externally). In some examples, the shape sensor 914 may use optical fibers to generate shape information for the elongate device 910.
[0132] In some examples, position sensors (e.g., EM sensors) may be incorporated into the medical instrument 904. A series of position sensors may be positioned along the flexible elongate device 910 and used for shape sensing. Position sensors may be used alternatively to the shape sensor 914 or with the shape sensor 914, such as to improve the accuracy of shape sensing or to verify shape information. [0133] Elongate device 910 may house cables, linkages, or other steering controls that extend between the instrument body 912 and the distal end 918 to controllably bend the distal end 918. In some examples, at least four cables are used to provide independent up-down steering to control a pitch of distal end 918 and left-right steering to control a yaw of distal end 918. The instrument body 912 may include drive inputs that removably couple to and receive power from drive elements, such as actuators, of a manipulator assembly.
[0134] The instrument body 912 may be coupled to an instrument carriage 906. The instrument carriage 906 may be mounted to an insertion stage 908 that is fixed within the surgical environment 900. Alternatively, the insertion stage 908 may be movable but have a known location (e.g., via a tracking sensor or other tracking device) within surgical environment 900. Instrument carriage 906 may be a component of a manipulator assembly (e.g., manipulator assembly 702) that couples to the medical instrument 904 to control insertion motion (e.g., motion along an insertion axis A) and/or motion of the distal end 918 of the elongate device 910 in multiple directions, such as yaw, pitch, and/or roll. The instrument carriage 906 or insertion stage 908 may include actuators, such as servomotors, that control motion of instrument carriage 906 along the insertion stage 908.
[0135] A sensor device 920, which may be a component of the sensor system 708, may provide information about the position of the instrument body 912 as it moves relative to the insertion stage 908 along the insertion axis A. The sensor device 920 may include one or more resolvers, encoders, potentiometers, and/or other sensors that measure the rotation and/or orientation of the actuators controlling the motion of the instrument carriage 906, thus indicating the motion of the instrument body 912. In some examples, the insertion stage 908 has a linear track as shown in FIGS. 9A and 9B. In some examples, the insertion stage 908 may have a curved track or have a combination of curved and linear track sections.
[0136] FIG. 9A shows the instrument body 912 and the instrument carriage 906 in a retracted position along the insertion stage 908. In this retracted position, the proximal point 916 is at a position L0 on the insertion axis A. The location of the proximal point 916 may be set to a zero value and/or other reference value to provide a base reference (e.g., corresponding to the origin of a desired reference frame) to describe the position of the instrument carriage 906 along the insertion stage 908. In the retracted position, the distal end 918 of the elongate device 910 may be positioned just inside an entry orifice of patient P. Also in the retracted position, the data captured by the sensor device 920 may be set to a zero value and/or other reference value (e.g., L=0). In FIG. 9B, the instrument body 912 and the instrument carriage 906 have advanced along the linear track of insertion stage 908, and the distal end 918 of the elongate device 910 has advanced into patient P. In this advanced position, the proximal point 916 is at a position L1 on the insertion axis A. In some examples, the rotation and/or orientation of the actuators measured by the sensor device 920 indicating movement of the instrument carriage 906 along the insertion stage 908 and/or one or more position sensors associated with instrument carriage 906 and/or the insertion stage 908 may be used to determine the position L1 of the proximal point 916 relative to the position L0. In some examples, the position L1 may further be used as an indicator of the distance or insertion depth to which the distal end 918 of the elongate device 910 is inserted into the passageway(s) of the anatomy of patient P.
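A minimal sketch of converting carriage sensor readings into insertion depth relative to the retracted reference position is shown below. The encoder counts and the calibration factor are illustrative values, not parameters of the described system.

```python
def insertion_depth_mm(encoder_counts, counts_at_retracted, mm_per_count):
    """Convert carriage encoder counts into the insertion distance of the proximal point
    along the insertion axis, relative to the retracted reference position L0.

    `counts_at_retracted` is the encoder reading captured at the retracted position
    (where the reference value is set to zero); `mm_per_count` is an assumed track calibration."""
    return (encoder_counts - counts_at_retracted) * mm_per_count

# e.g., 12000 counts beyond the retracted reference at 0.01 mm/count is a 120 mm insertion:
assert insertion_depth_mm(112000, 100000, 0.01) == 120.0
```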
[0137] One or more components of the examples discussed in this disclosure, such as control system 712, may be implemented in software for execution on one or more processors of a computer system. The software may include code that when executed by the one or more processors, configures the one or more processors to perform various functionalities as discussed herein. The code may be stored in a non-transitory computer readable storage medium (e.g., a memory, magnetic storage, optical storage, solid-state storage, etc.). The computer readable storage medium may be part of a computer readable storage device, such as an electronic circuit, a semiconductor device, a semiconductor memory device, a read only memory (ROM), a flash memory, an erasable programmable read only memory (EPROM); a floppy diskette, a CD-ROM, an optical disk, a hard disk, or other storage device. The code may be downloaded via computer networks such as the Internet, Intranet, etc. for storage on the computer readable storage medium. The code may be executed by any of a wide variety of centralized or distributed data processing architectures. The programmed instructions of the code may be implemented as a number of separate programs or subroutines, or they may be integrated into a number of other aspects of the systems described herein. The components of the computing systems discussed herein may be connected using wired and/or wireless connections. In some examples, the wireless connections may use wireless communication protocols such as Bluetooth, near-field communication (NFC), Infrared Data Association (IrDA), home radio frequency (HomeRF), IEEE 802.11, Digital Enhanced Cordless Telecommunications (DECT), and wireless medical telemetry service (WMTS). [0138] Various general-purpose computer systems may be used to perform one or more processes, methods, or functionalities described herein. Additionally or alternatively, various specialized computer systems may be used to perform one or more processes, methods, or functionalities described herein. In addition, a variety of programming languages may be used to implement one or more of the processes, methods, or functionalities described herein.
[0139] While certain examples have been described above and shown in the accompanying drawings, it is to be understood that such examples are merely illustrative and not restrictive, and that this disclosure is not limited to the specific constructions and arrangements shown and described, since various other alternatives, modifications, and equivalents will be appreciated by those of ordinary skill in the art.
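For readers who prefer a procedural outline, the following non-limiting sketch illustrates one way the co-registration and display flow described in this disclosure could be organized in software: receive a 3D model of a patient volume, receive 2D image data, relate the two, and render both with a shared orientation. All class, function, and parameter names are hypothetical placeholders, the registration step is shown as a stub, and the sketch is not a definitive implementation of any claim below.

```python
# Non-limiting sketch of a 3D navigation interface pipeline.
# All names are hypothetical; the registration and rendering back ends are
# assumed to exist and are represented here by placeholders.
from dataclasses import dataclass
from typing import Any, List


@dataclass
class CoRegisteredData:
    # 4x4 homogeneous transform relating the 3D model frame to the 2D image frame.
    model_to_image_transform: List[List[float]]


def co_register(model_3d: Any, images_2d: List[Any]) -> CoRegisteredData:
    # Placeholder registration: a real system might track imaging-device pose
    # or use image-based registration to compute this transform.
    identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    return CoRegisteredData(model_to_image_transform=identity)


def build_navigation_interface(model_3d: Any, images_2d: List[Any], display: Any) -> None:
    registration = co_register(model_3d, images_2d)
    # Render the 3D model and the 2D images in a shared orientation so that a
    # user sees both in a consistent frame of reference.
    display.render(model_3d, images_2d, registration.model_to_image_transform)
```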

Claims

WHAT IS CLAIMED:
1. A method for generating a three-dimensional (3D) navigation interface for a robotically-assisted medical procedure, the method comprising: receiving, by one or more processors, a 3D model representative of a volume of a patient; receiving, by the one or more processors, two-dimensional (2D) data representative of one or more 2D images corresponding to at least a portion of the volume of the patient; generating, by the one or more processors, co-registered operation data relating the 3D model to the 2D data; generating, by the one or more processors, a 3D navigation interface, including generating a display of at least a portion of the 3D model based on the co-registered operation data; and causing, by the one or more processors, a display device to present the 3D navigation interface to a user.
2. The method of claim 1, wherein generating the 3D navigation interface includes: generating a display of the one or more 2D images based at least on the co-registered operation data.
3. The method of claim 2, wherein generating the 3D navigation interface further includes: orienting the one or more 2D images such that an orientation of the one or more 2D images is based at least on the 3D model.
4. The method of claim 1, further comprising: receiving, from one or more sensors configured to generate data associated with the volume of the patient, 3D sensor data; wherein generating the co-registered operation data includes generating co-registered 3D data based on the 3D sensor data.
5. The method of claim 4, wherein the 3D sensor data includes at least one of: computed tomography (CT) data, cone-beam computed tomography (CBCT) data, catheter data, endoscope video data, or magnetic resonance imaging (MRI) data.
6. The method of claim 1, further comprising: receiving, from one or more sensors configured to generate data associated with the volume of the patient, 2D sensor data; wherein generating the co-registered operation data includes generating co-registered 2D data based on the 2D sensor data.
7. The method of claim 6, wherein the 2D sensor data includes at least one of: endoscope video data, C-arm data, or radial endobronchial ultrasound (EBUS) data.
8. The method of claim 1, further comprising: receiving navigation information including historical navigation information representative of past navigation of a sensor in the volume of the patient; and wherein generating the 3D navigation interface further includes generating the 3D navigation interface based on the historical navigation information.
9. The method of claim 1, further comprising: receiving navigation information including navigation guidance information representative of a recommended path for a sensor in the volume of the patient; and wherein generating the 3D navigation interface further includes generating the 3D navigation interface based on the navigation guidance information.
10. The method of claim 1, further comprising: receiving navigation information including one or more visual or auditory elements representative of landmarks in the volume of the patient; and wherein generating the 3D navigation interface further includes generating the 3D navigation interface based on the one or more visual or auditory elements.
11. The method of claim 1, further comprising: receiving navigation information including one or more sampled tissue locations in the volume of the patient; and wherein generating the 3D navigation interface further includes generating the 3D navigation interface based on the navigation information.
12. The method of any one of claims 1-11, wherein the one or more 2D images comprise one or more 2D x-ray images.
13. The method of claim 12, wherein the one or more 2D x-ray images comprise fluoroscopic images captured by a C-arm.
14. The method of claim 13, wherein generating the co-registered operation data includes: tracking a pose of the C-arm.
15. The method of claim 14, wherein tracking the pose of the C-arm includes: tracking the pose of the C-arm using a head mounted display (HMD).
16. The method of claim 14, wherein tracking the pose of the C-arm includes: tracking the pose of the C-arm using a sensor on the C-arm.
17. The method of any one of claims 1-11, wherein causing the display device to present the 3D navigation interface includes: causing a head mounted display (HMD) to present the 3D navigation interface as one of: (i) a mixed reality (MR) navigation interface, (ii) an augmented reality (AR) navigation interface, or (iii) a virtual reality (VR) navigation interface.
18. The method of any one of claims 1-11, wherein the 3D model comprises a 3D lung model, and wherein causing the display device to present the 3D navigation interface includes causing the display device to present at least the portion of the 3D lung model: (i) above the patient, (ii) adjacent to the patient, or (iii) superimposed with the patient.
19. The method of any one of claims 1-11, wherein the 3D model is generated prior to the medical procedure via at least one of: (i) a computed tomography (CT) scan device; (ii) a magnetic resonance imaging (MRI) device, or (iii) a positron emission tomography (PET) scan device.
20. The method of any one of claims 1-11, wherein the 3D model is generated during the medical procedure via at least one of: (i) a cone-beam computed tomography (CBCT) scan device or (ii) a tomosynthesis device.
21. The method of any one of claims 1-11, wherein the 3D navigation interface includes at least one of a live 2D video feed or a live 3D video feed.
22. The method of claim 21, wherein generating the 3D navigation interface further includes: aligning a camera view of the at least one of the live 2D video feed or the live 3D video feed with the 3D model.
23. The method of any one of claims 1-11, further comprising: modifying at least a portion of the 3D navigation interface based on a control input received from the user while the user navigates the patient volume.
24. The method of claim 23, further comprising: receiving the control input from a console operated by the user, wherein the control input includes one or more inputs made by the user via at least one of: (i) a controller, (ii) a trackball, (iii) a keyboard, (iv) a mouse, or (v) a touchscreen device.
25. The method of claim 23, further comprising: receiving the control input based on user movements detected by at least one of: (i) touch sensors, (ii) movement sensors, (iii) accelerometers, (iv) gyroscopes, or (v) positional sensors.
26. The method of claim 25, wherein the user movements are movements representative of the user dragging a virtual representation of a sensor within bounds of the 3D navigation interface.
27. The method of claim 25, wherein the user movements include an indication of a virtual target in the volume of the patient.
28. The method of any one of claims 1-11, further comprising: modifying visibility of at least a portion of the 3D navigation interface depending on a current task or context for the user.
29. The method of claim 28, wherein modifying the visibility includes: determining, based on the current task or context for the user, that at least one element is potentially distracting for the user; and at least one of dimming a visibility of the at least one element or changing an extended reality type of the display device such that the user does not see the at least one element.
30. The method of any one of claims 1-11, wherein the user is a first user, the display device is a first display device, and the 3D navigation interface is a first 3D navigation interface, and wherein the method further comprises: generating a second 3D navigation interface, wherein the second 3D navigation interface includes a reduced set of information compared to the first 3D navigation interface; and causing a second display device to present the second 3D navigation interface to a second user.
31. A system for generating a three-dimensional (3D) navigation interface for a robotically-assisted medical procedure, the system comprising: one or more processors; a communication unit; a display device; and a non-transitory computer-readable medium coupled to the one or more processors and the communication unit and storing instructions thereon that, when executed by the one or more processors, cause the system to perform a method according to any one of claims 1-30.
32. A non-transitory computer-readable medium storing instructions thereon that, when executed by one or more processors, cause the one or more processors to perform a method according to any one of claims 1-30.
33. A method for generating an interactive view for a robotically-assisted medical procedure, the method comprising: receiving, by one or more processors, a three-dimensional (3D) model representative of a volume of a patient; generating, by the one or more processors, a two-dimensional (2D) view of the volume of the patient, the 2D view representative of a 2D imaging modality when positioned at a particular apparatus projection angle; determining a display orientation for the 2D view relative to a user; registering the 2D view to the 3D model such that both the 2D view and the 3D model share the display orientation; causing, by the one or more processors, a display device to simultaneously present the 2D view and the 3D model to the user in accordance with the shared display orientation; receiving, by the one or more processors, control input from the user; and updating, by the one or more processors, at least the 2D view of the volume of the patient based on the control input.
34. The method of claim 33, further comprising: orienting the 2D view and the 3D model relative to the user such that the shared display orientation is representative of at least one of: (i) a first person view relative to the user, (ii) a third person view relative to the user, (iii) a top down view of the patient, or (iv) a task view representative of the volume of the patient based on a current task performed by the user.
35. The method of claim 34, further comprising: scaling the 3D model relative to at least one of: (i) a movement speed of an elongate flexible device in the volume of the patient, (ii) a navigation context for the volume of the patient, (iii) a user-indicated precision preference, or (iv) a current user task.
36. The method of claim 35, wherein scaling the 3D model includes modifying an indication corresponding to a navigation path based on the movement speed.
37. The method of claim 33, further comprising: updating the 2D view and the 3D model while maintaining the shared display orientation.
38. The method of claim 33, further comprising: embedding 2D histology sample images in the 3D model based on a location of the volume of the patient in which the 2D histology sample images were captured.
39. The method of claim 33, further comprising: generating a procedure report including navigation information representative of at least one of the 2D view or the 3D model of the volume of the patient.
40. The method of claim 39, wherein the navigation information includes at least one of:
(i) navigation guidance information representative of a recommended path for a sensor in the volume of the patient,
(ii) historical navigation information representative of past navigation of a sensor in the volume of the patient,
(iii) the shared display orientation of the 2D view and the 3D model,
(iv) a slicing of the 3D model, or
(v) histology data representative of at least one of the 2D view or the 3D model.
41. The method of claim 33, wherein the control input includes user manipulation of a virtual representation of an elongate flexible device, and wherein the method further comprises: responsive to receiving the control input, causing the elongate flexible device to move in accordance with the user manipulation of the virtual representation.
42. The method of claim 33, wherein the control input includes an interaction by the user with a physical location or a virtual representation of the physical location, the method further comprising: automatically generating a virtual path to the physical location responsive to receiving the control input.
43. The method of claim 33, further comprising: receiving a computed tomography (CT) image during the medical procedure; determining that the CT image diverges from the volume; and transmitting a request for a cone-beam computed tomography (CBCT) image to replace the CT image.
44. The method of any one of claims 33-43, wherein the control input includes a control input for rotating a C-arm.
45. The method of claim 44, wherein updating at least the 2D view includes: updating at least the 2D view to correspond to a new angle of the C-arm after rotation.
46. The method of claim 44, wherein the control input for rotating the C-arm is a control input for rotating a virtual C-arm.
47. The method of claim 44, wherein the control input for rotating the C-arm is a control input for rotating a physical C-arm.
48. The method of any one of claims 33-43, wherein the 2D view includes at least one of: (i) a synthetic fluoroscopy image, (ii) a 2D fluoroscopy image from a C-arm, or (iii) an endobronchial ultrasound (EBUS) image.
49. The method of any one of claims 33-43, wherein causing the display device to simultaneously present the 2D view and the 3D model includes causing a head mounted display (HMD) to present the 2D view and the 3D model in extended reality (XR).
50. The method of any one of claims 33-43, wherein the 3D model comprises a 3D lung model, and wherein the 3D lung model is presented above a view of the patient.
51. The method of claim 50, wherein the view of the patient is a view of the physical patient in an augmented reality (AR) view.
52. The method of claim 50, wherein the view of the patient is a representation of the patient in a virtual reality (VR) view.
53. The method of any one of claims 33-43, wherein the 3D model is generated prior to the medical procedure via at least one of: (i) a computed tomography (CT) scan device; (ii) a magnetic resonance imaging (MRI) device, or (iii) a positron emission tomography (PET) scan device.
54. The method of any one of claims 33-43, wherein the 3D model is generated during the medical procedure via at least one of: (i) a cone-beam computed tomography (CBCT) scan device or (ii) a tomosynthesis device.
55. The method of any one of claims 33-43, wherein causing the display device to simultaneously present the 2D view and the 3D model includes: causing the display device to present instructions of how to rotate for reorientation in at least one of the 2D view or the 3D model.
56. The method of any one of claims 33-43, wherein the 3D model includes a virtual apparatus that rotates to match a positioning of a corresponding physical apparatus.
57. The method of any one of claims 33-43, wherein the 2D imaging modality includes at least one of:
(i) C-arm imaging,
(ii) computed tomography (CT) imaging, or
(iii) endobronchial ultrasound (EBUS) imaging.
58. The method of any one of claims 33-43, wherein the user is a first user, the display device is a first display device, and the shared display orientation is a first user orientation, and wherein the method further comprises: generating a second user orientation for a second user, wherein the second user orientation differs from the first user orientation; and causing a second display device to present the 2D view and the 3D model in accordance with the second user orientation to the second user.
59. A system for generating an interactive view for a robotically-assisted medical procedure, the system comprising: one or more processors; a communication unit; a display device; and a non-transitory computer-readable medium coupled to the one or more processors and the communication unit and storing instructions thereon that, when executed by the one or more processors, cause the system to perform a method according to any one of claims 33-58.
60. A non-transitory computer-readable medium storing instructions thereon that, when executed by one or more processors, cause the one or more processors to perform a method according to any one of claims 33-58.
PCT/US2023/086007 2022-12-29 2023-12-27 Systems and methods for generating 3d navigation interfaces for medical procedures WO2024145341A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263477752P 2022-12-29 2022-12-29
US63/477,752 2022-12-29

Publications (1)

Publication Number Publication Date
WO2024145341A1 true WO2024145341A1 (en) 2024-07-04

Family

ID=89853485

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/086007 WO2024145341A1 (en) 2022-12-29 2023-12-27 Systems and methods for generating 3d navigation interfaces for medical procedures

Country Status (1)

Country Link
WO (1) WO2024145341A1 (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6380432B2 (en) 1999-10-22 2002-04-30 Elsicon Inc. Materials for inducing alignment in liquid crystals and liquid crystal displays
US20060013523A1 (en) 2004-07-16 2006-01-19 Luna Innovations Incorporated Fiber optic position and shape sensing device and method relating thereto
US7772541B2 (en) 2004-07-16 2010-08-10 Luna Innnovations Incorporated Fiber optic position and/or shape sensing based on rayleigh scatter
US7916681B2 (en) 2005-05-20 2011-03-29 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for communication channel error rate estimation
US8300131B2 (en) 2009-09-10 2012-10-30 Fujifilm Corporation Image pickup device for wide dynamic range at a high frame rate
US8773350B2 (en) 2011-08-31 2014-07-08 Sharp Kabushiki Kaisha Sensor circuit and electronic apparatus
US9259274B2 (en) 2008-09-30 2016-02-16 Intuitive Surgical Operations, Inc. Passive preload and capstan drive for surgical instruments
WO2016161298A1 (en) 2015-04-02 2016-10-06 Ensco International Incorporated Bail mounted guide
WO2017030915A1 (en) 2015-08-14 2017-02-23 Intuitive Surgical Operations, Inc. Systems and methods of registration for image-guided surgery
WO2017030913A2 (en) 2015-08-14 2017-02-23 Intuitive Surgical Operations, Inc. Systems and methods of registration for image-guided surgery
US20190011703A1 (en) * 2016-07-25 2019-01-10 Magic Leap, Inc. Imaging modification, display and visualization using augmented and virtual reality eyewear
WO2019018436A1 (en) 2017-07-17 2019-01-24 Desktop Metal, Inc. Additive fabrication using variable build material feed rates
US20190350659A1 (en) * 2016-12-08 2019-11-21 Intuitive Surgical Operations, Inc, Systems and methods for navigation in image-guided medical procedures
US20210077047A1 (en) * 2017-07-08 2021-03-18 Vuze Medical Ltd. Apparatus and methods for use with image-guided skeletal procedures
US20220270247A1 (en) * 2021-02-24 2022-08-25 Siemens Healthcare Gmbh Apparatus for moving a medical object and method for providing a control instruction

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6380432B2 (en) 1999-10-22 2002-04-30 Elsicon Inc. Materials for inducing alignment in liquid crystals and liquid crystal displays
US20060013523A1 (en) 2004-07-16 2006-01-19 Luna Innovations Incorporated Fiber optic position and shape sensing device and method relating thereto
US7772541B2 (en) 2004-07-16 2010-08-10 Luna Innnovations Incorporated Fiber optic position and/or shape sensing based on rayleigh scatter
US7916681B2 (en) 2005-05-20 2011-03-29 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for communication channel error rate estimation
US9259274B2 (en) 2008-09-30 2016-02-16 Intuitive Surgical Operations, Inc. Passive preload and capstan drive for surgical instruments
US8300131B2 (en) 2009-09-10 2012-10-30 Fujifilm Corporation Image pickup device for wide dynamic range at a high frame rate
US8773350B2 (en) 2011-08-31 2014-07-08 Sharp Kabushiki Kaisha Sensor circuit and electronic apparatus
WO2016161298A1 (en) 2015-04-02 2016-10-06 Ensco International Incorporated Bail mounted guide
WO2017030915A1 (en) 2015-08-14 2017-02-23 Intuitive Surgical Operations, Inc. Systems and methods of registration for image-guided surgery
WO2017030913A2 (en) 2015-08-14 2017-02-23 Intuitive Surgical Operations, Inc. Systems and methods of registration for image-guided surgery
US20190011703A1 (en) * 2016-07-25 2019-01-10 Magic Leap, Inc. Imaging modification, display and visualization using augmented and virtual reality eyewear
US20190350659A1 (en) * 2016-12-08 2019-11-21 Intuitive Surgical Operations, Inc, Systems and methods for navigation in image-guided medical procedures
US20210077047A1 (en) * 2017-07-08 2021-03-18 Vuze Medical Ltd. Apparatus and methods for use with image-guided skeletal procedures
WO2019018436A1 (en) 2017-07-17 2019-01-24 Desktop Metal, Inc. Additive fabrication using variable build material feed rates
US20220270247A1 (en) * 2021-02-24 2022-08-25 Siemens Healthcare Gmbh Apparatus for moving a medical object and method for providing a control instruction

Similar Documents

Publication Publication Date Title
US20230346487A1 (en) Graphical user interface for monitoring an image-guided procedure
US12011232B2 (en) Systems and methods for using tracking in image-guided medical procedure
CN110325138B (en) System and method for intelligent seed registration
US20230200790A1 (en) Graphical user interface for displaying guidance information in a plurality of modes during an image-guided procedure
US20240041531A1 (en) Systems and methods for registering elongate devices to three-dimensional images in image-guided procedures
JP7118890B2 (en) Systems and methods for using registered fluoroscopic images in image-guided surgery
EP3182875B1 (en) Systems for display of pathological data in an image guided procedure
US20210100627A1 (en) Systems and methods related to elongate devices
CN116421309A (en) System and method for navigation in image guided medical procedures
US20210401508A1 (en) Graphical user interface for defining an anatomical boundary
US20210259783A1 (en) Systems and Methods Related to Registration for Image Guided Surgery
WO2024145341A1 (en) Systems and methods for generating 3d navigation interfaces for medical procedures
US20230099522A1 (en) Elongate device references for image-guided procedures
WO2023055723A1 (en) Navigation assistance for an instrument