GB2614025A - Surgery guidance system

Surgery guidance system

Info

Publication number
GB2614025A
GB2614025A (Application GB2112741.0A)
Authority
GB
United Kingdom
Prior art keywords
orientation
virtual environment
displacement sensor
visual representation
patient
Prior art date
Legal status
Granted
Application number
GB2112741.0A
Other versions
GB202112741D0 (en)
GB2614025B (en)
Inventor
Rezaei Haddad Ali
Current Assignee
Neuronav Ltd
Original Assignee
Neuronav Ltd
Priority date
Filing date
Publication date
Application filed by Neuronav Ltd
Priority to GB2112741.0A
Publication of GB202112741D0
Priority to PCT/EP2022/074921 (WO2023036848A1)
Publication of GB2614025A
Application granted
Publication of GB2614025B
Status: Active

Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
            • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
              • A61B2034/101 Computer-aided simulation of surgical operations
                • A61B2034/102 Modelling of surgical devices, implants or prosthesis
                • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
              • A61B2034/107 Visualisation of planned trajectories or target regions
            • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
              • A61B2034/2046 Tracking techniques
                • A61B2034/2048 Tracking techniques using an accelerometer or inertia sensor
                • A61B2034/2051 Electromagnetic tracking systems
                • A61B2034/2055 Optical tracking systems
                • A61B2034/2065 Tracking using image or pattern recognition
              • A61B2034/2068 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
          • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
            • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
              • A61B2090/364 Correlation of different images or relation of image positions in respect to the body
                • A61B2090/365 Correlation of different images or relation of image positions in respect to the body, augmented reality, i.e. correlating a live optical image with another image
              • A61B90/37 Surgical systems with images on a monitor during operation
                • A61B2090/372 Details of monitor hardware
            • A61B90/50 Supports for surgical instruments, e.g. articulated arms
              • A61B2090/502 Headgear, e.g. helmet, spectacles
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V20/00 Scenes; Scene-specific elements
            • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
          • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
            • G06V2201/03 Recognition of patterns in medical or anatomical images
              • G06V2201/034 Recognition of patterns in medical or anatomical images of medical instruments

Abstract

An augmented reality surgery system comprises a display and a displacement sensing system comprising at least one displacement sensor fixed relative to a body part of a patient. A processing system is operable to: generate a virtual environment based on a physical environment imaged by a camera 205; receive image data including an image of the body part in the physical environment; receive scan data corresponding to a 3D model of the body part; process the image and scan data to determine a position/orientation for the 3D model in the virtual environment; and introduce the 3D model to the virtual environment. The processing system is also operable to: for each sensor, determine an origin point and orientation of the sensor co-ordinate axes in the virtual environment using image data; at a first time, render a visual representation of the virtual environment including the 3D model and output to a display 203; receive data from the sensor corresponding to movement of the body part in the physical environment; modify position, orientation and/or form of the 3D model based on the received data and determined position/orientation; and, at a second time, render a visual representation including the modified model and output to the display.

Description

AUGMENTED REALITY SURGICAL NAVIGATION SYSTEM
Technical Field
An augmented reality surgical navigation system is disclosed in which a computer-generated image of a portion of a patient derived from scan data is displayed to the surgeon in alignment with that portion of the patient, or an image of that portion of the patient, during a surgical procedure to provide assistance to the surgeon. The invention is particularly concerned with monitoring movement of the patient, tissue displacement and tracking of surgical instruments during surgery and adjusting the display of the computer-generated image in order to maintain alignment between the computer-generated image and the portion of the patient during the surgical procedure.
Background
Many fields of surgery, such as cranial and spinal neurosurgery, require a high precision of execution to avoid significant complications. For example, in cranial neurosurgery the placement of extra ventricular drains is a common emergency neurosurgical procedure, used to release raised pressure in the brain. Placement of an emergency external ventricular drain is often performed free hand by a neurosurgeon. Approximately 1 in 5 drains need to be subsequently revised due to misplacement, and with repeated insertions the likelihood of brain bleeds increases by 40%. Studies have shown that image-guided placement leads to improved accuracy of the external ventricular drain tip, reducing the risk of malposition by over 50%. In spinal surgery, a unique pitfall arises when the incorrect vertebral level is exposed or operated upon, known as wrong level surgery, or when fixation screws are incorrectly placed. To avoid this, the current convention is to perform checks with plain radiographs. These are resource intensive, requiring X-ray machine use, a radiographer, further anaesthetic time and consumables such as drapes. Current checklist site-verification systems have been shown to have a very weak effect on reducing wrong site or placement errors. Real-time imaging feedback provided by augmented reality systems has the potential to reduce this risk. Studies have shown that augmented reality systems generally give higher accuracy placement of pedicle screws compared to conventional navigation. This again improves patient outcomes and reduces further revision work.
Likewise, surgery which requires differentiation between diseased tissues and healthy tissues can be simplified and made more effective if the diseased tissue is identified beforehand and a surgical navigation system highlights this tissue to the surgeon, rather than forcing the surgeon to inspect and determine this during the procedure.
During surgical procedures which utilise augmented reality to provide this navigational information, there is a requirement for optical alignment between a view of a virtual object within a virtual environment generated by the augmented reality system and a corresponding object in a physical environment (e.g. a surgical theatre). This may involve aligning a virtual model of a body part in the virtual environment with the actual body part in the physical environment. It can be critical that the virtual model of the body part remains in accurate alignment with the actual body part at all points during surgery.
Optical alignment with the head of a patient can be achieved by using facial recognition techniques to identify landmark points on the face of a patient. Information to be displayed to the surgeon can be rendered in positions relative to these points. However, for infection control purposes and to maintain sterility, the body is usually draped during surgery, which makes tracking of these landmark points during surgery using facial recognition techniques impossible. As a result, if the body moves during surgery, alignment between the virtual model of the body part in the virtual environment and the actual body part in the physical environment will be lost.
It is therefore desirable to find an alternative way of tracking the motion of a body part of a patient during surgery without relying on images of the body part. Furthermore, it is highly desirable to be able to perform level checks, track instruments, assess screw placement and assess spinal tissue changes post-fixation using augmented reality rather than plain radiographs, thus reducing radiation exposure to both patient and staff.
Summary
According to a first aspect of the present invention, there is provided an augmented reality surgery system having a camera for imaging a view of a physical environment and a display for displaying a virtual environment. A displacement sensing system has one or more displacement sensors for fixing relative to a body part of a patient in the physical environment. The displacement sensing system outputs measurement data for each displacement sensor corresponding to translational and rotational movement of that displacement sensor relative to an origin and co-ordinate system defined by that displacement sensor. The augmented reality surgery system also includes a processing system which generates the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment. The processing system receives image data from the camera, the image data including an image of the body part of the patient in the physical environment, and scan data corresponding to a three-dimensional model of the body part of the patient. The processing system processes the image data and the scan data to determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduces the three-dimensional model of the body part into the virtual environment with the determined position and orientation. The processing system determines, for each displacement sensor, the position in the virtual environment corresponding to the position of the origin point in the physical environment and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment using image data for one or more images from the camera. The processing system then renders a first visual representation of the virtual environment including the model of the body part and outputs the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at a first time. The processing system subsequently receives measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment and modifies at least one of the position, orientation and form of the model in the virtual environment based on the received measurement data, the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the physical environment, and renders a second visual representation of the virtual environment including the modified model of the body part and outputs the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at a second time. In this way, by initially registering the position and orientation in the virtual environment of one or more displacement sensors fixed to a body part using imaging techniques at a first time before surgery commences, measurement data from the displacement sensing system can be used to track movement or reconfiguration of the body part at a second time during surgery.
According to a second aspect of the invention, there is provided an augmented reality surgery system having a camera for imaging a view of a physical environment, a display for displaying a view of a virtual environment, and a displacement sensing system having one or more displacement sensors for fixing relative to a surgical instrument in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and co-ordinate system. The augmented reality surgery system also includes a processing system which generates the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment, and for each displacement sensor determines the position of the origin and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera. The processing system then introduces a three-dimensional model of the surgical instrument into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment. At a first time, the processing system renders a first visual representation of the virtual environment including the three-dimensional model of the surgical instrument and outputs the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time. Following receiving measurement data from the displacement sensor corresponding to movement of the surgical instrument in the physical environment, the processing system modifies at least one of the position and orientation of the three-dimensional model of the surgical instrument in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment, and at a second time renders a second visual representation of the virtual environment including the modified model of the surgical instrument and outputs the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
In an example of the present invention, the displacement sensing system is an electromagnetic tracking system and the displacement sensors are probes whose translational and rotational movement is tracked in six dimensions. To assist the optical registration of the probes, optical fiducial markers are provided on each probe, positioned in relation to the origin point in dependence on the co-ordinate system of the probe.
The augmented reality surgical navigation system could be used during neurosurgery to allow a virtual image of the head of a patient to track movement of the head of the patient during surgery. Alternatively, the augmented reality surgical navigation system could be used during spinal surgery, in which the virtual image is modified to take account of relative movement of spinal vertebrae pre- and post-fixation, and to track the trajectory of screw insertions into the vertebrae.
Further features and advantages of the invention will become apparent from the following description of embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1A shows a perspective view of a surgeon performing cranial neurosurgery assisted by an augmented reality surgical navigation system in accordance with examples described herein.
Figure 1B shows a perspective view of a surgeon performing cranial neurosurgery whereby a probe or instrument is tracked via the augmented reality surgical navigation system as it enters the tissue, allowing the surgeon to follow and visualise its trajectory towards a desired target in accordance with examples described herein.
Figure 2 shows a schematic view of a displacement sensor with surface optical fiducial markers that forms part of the augmented reality surgical navigation system of Figure 1.
Figure 3A schematically shows the functional components of the augmented reality surgical navigation system for co-registering a 3D virtual model in a virtual environment with a corresponding body part of a patient in the physical environment.
Figure 3B is a flow chart showing steps performed to generate a point cloud from the 3D virtual model in which each point represents a landmark feature on the surface of the 3D virtual model.
Figure 3C is a flow chart showing operations performed to position the 3D virtual model in the virtual environment in a position and orientation corresponding to the position and orientation of the corresponding body part in the physical environment.
Figure 4A schematically shows the functional components of the augmented reality surgical navigation system for registering the location of an origin point and the orientation of co-ordinate axes associated with a displacement sensor in the virtual environment.
Figure 4B is a flow chart showing operations performed to determine the position in the virtual environment corresponding to the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment.
Figure 5 schematically shows the operation of the augmented reality surgical navigation system during surgery.
Figures 6A and 6B show perspective views of a surgeon performing spinal surgery assisted by an augmented reality surgical navigation system in accordance with examples described herein.
Detailed Description
Overview
Figure 1A shows a surgeon 1 carrying out a surgical procedure on a patient 3. In this example, the surgical procedure involves cranial neurosurgery on the brain, on blood vessels or on nerves located in the skull or near the brain. Such neurosurgical procedures require high precision to avoid significant complications, and the surgeon 1 makes use of an Augmented Reality (AR) surgical navigation system to assist with carrying out the neurosurgical procedure with such high precision.
In this example, the AR surgical navigation system includes a head-mounted AR display device 5. More particularly, in this example the AR display device 5 is a Microsoft HoloLens 2 device. The AR display device 5 senses the physical environment of the head-mounted AR display device, e.g. a surgical theatre, and generates a virtual environment corresponding to the physical environment using a first co-ordinate system, which will hereafter be called the virtual environment co-ordinate system.
The surgeon 1 holds a stylet 13 and during surgery, as shown in Figure 1B, the head-mounted AR display device 5 presents an image 15 to the surgeon 1 of a three-dimensional model of the head of the patient that is derived from scan data and positioned and oriented within the virtual environment of the AR surgical navigation system so as to match the position and orientation of the head of the patient 3 in the physical environment. The image is rendered from a viewpoint in the virtual environment corresponding to the position and orientation of the head-mounted AR device 5. In this way, the displayed image is superimposed over the corresponding portion of the physical body of the patient 3 in the field of view of the surgeon 1. Further, the head-mounted AR display device 5 presents information detailing the trajectory and distance to a target location.
The AR surgical navigation system also includes a displacement sensing system, which in this example is an electromagnetic tracking system in which movement of a displacement sensor 7 in six degrees of freedom (three translational and three rotational) is monitored using a field generator 9, which generates an electromagnetic field that induces currents in the displacement sensor 7 that can be analysed to determine the sensed movements. In this example, the displacement sensor 7 is a substantially planar device, as shown in Figure 2, having optical fiducial markers 21a, 21b and 21c (such as AprilTag, ARTag, ARToolkit, ArUco and the like) positioned thereon in a configuration such that a cartesian co-ordinate system for the sensed movement corresponds to a first axis 23a aligned with a line joining the first optical fiducial marker 21a and the second optical fiducial marker 21b, a second axis 23b aligned with a line joining the first optical fiducial marker 21a and the third fiducial marker 21c, and a third axis 23c aligned with a line perpendicular to the plane of the displacement sensor 7 and passing through the first optical fiducial marker 21a. Displacement sensors 7 can be fixed to the forehead of the patient 3, although other positions on the patient 3 are possible, and to surgical devices.
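The sensor co-ordinate frame implied by this marker arrangement can be recovered from the three marker centres with a simple orthogonalisation. The sketch below is illustrative only and assumes the 3D marker centres have already been located; the function name and frame conventions are not taken from the patent.
```python
import numpy as np

def sensor_axes_from_markers(p_a, p_b, p_c):
    """Build an orthonormal co-ordinate frame for the displacement sensor
    from the 3D centres of its three fiducial markers (21a, 21b, 21c).

    Marker 21a is taken as the origin; the first axis points from 21a to 21b,
    the second from 21a to 21c (orthogonalised), and the third is the normal
    to the sensor plane through 21a.
    """
    p_a, p_b, p_c = (np.asarray(p, dtype=float) for p in (p_a, p_b, p_c))
    x_axis = p_b - p_a
    x_axis /= np.linalg.norm(x_axis)
    # Second in-plane direction, made orthogonal to the first.
    v = p_c - p_a
    y_axis = v - np.dot(v, x_axis) * x_axis
    y_axis /= np.linalg.norm(y_axis)
    # Normal to the sensor plane, passing through marker 21a.
    z_axis = np.cross(x_axis, y_axis)
    # Columns of R are the sensor axes expressed in the measurement frame.
    R = np.column_stack((x_axis, y_axis, z_axis))
    return p_a, R  # origin point and rotation matrix
```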
In this example, the AR surgical navigation system also includes a fixed display 11 that displays the field of view of the surgeon 1, including both an image from a camera in the AR display device 5 that captures the view of the surgeon 1 of the physical environment and the displayed image corresponding with the same view of the virtual environment. For example, as shown in Figure 1A, the fixed display 11 may show a first image of the head of the patient 3 captured in the physical environment with a second image of the brain of the patient 3 captured from the virtual environment, with the first and second images having matching positions and orientations.
The AR surgical navigation system includes a processing system having three different modes of operation, namely a patient registration mode, a displacement sensor registration mode and a surgery mode.
The patient registration mode involves, before surgery, identifying the position and orientation in the virtual environment corresponding to the head of the patient 3, and then introducing the three-dimensional model derived from the previously scanned data of the head of the patient 3 into the virtual environment at a position and orientation matching the identified position and orientation for the physical head of the patient 3. As will be discussed in more detail hereafter, in this example the patient registration mode uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
The displacement sensor registration mode involves determining the position and orientation in the virtual world corresponding to the position and orientation of one or more six-dimensional displacement sensors (three axes of translation and three axes of rotation) which can be fixed relative to the head of the patient, for example on the forehead of the patient, or on a surgical device. In this way, detected movement of the displacement sensor can be converted into a corresponding translation and/or rotation in the virtual world of the corresponding object to which it is fixed. As will be discussed in more detail hereafter, in this example the displacement sensor registration mode also uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
The surgery mode involves a rendered image of the three-dimensional model being displayed to the surgeon 1 from a viewpoint corresponding to the position and orientation of the AR display device 5. During the surgery mode, movement of the displacement sensor 7 fixed to the head of the patient 3 is sensed and converted by the AR surgical navigation system to a corresponding translation and/or rotation of the three-dimensional model of the head of the patient 3 in the virtual environment so as to maintain the rendered image being superimposed over the corresponding part of the head of the patient 3 in the field of view of the surgeon 1 even if the head of the patient 3 moves. Further, movement of a displacement sensor fixed relative to a surgical device such as the stylet 13 is used to track movement of a virtual model of the surgical device in the virtual environment. As will be described in more detail hereafter, the surgery mode does not require the displacement sensor 7 to be in the field of view of the surgeon 1 wearing the AR display device 5. This is advantageous because during surgery the head of the patient 3 is typically draped for hygiene reasons and therefore the displacement sensor 7 is not in the field of view of the surgeon 1. In addition, this is advantageous because during surgery a surgical device may be inserted into body tissue and accordingly the tip of the surgical device is no longer visible to the surgeon 1.
An overview of the functionality in each of these modes will now be described in more detail.
Patient Registration Mode
Figure 3A schematically illustrates the functionality of the surgical navigation system in the patient registration mode. As shown, a head-mounted augmented reality (AR) device 201 (corresponding to the AR device 5 of Figure 1) has a display 203, a camera 205 having associated depth sensing capabilities and a device pose calculator 207. The camera 205 takes images of the field of view of the surgeon 1 and the device pose calculator 207 processes those images to determine the position and orientation in the virtual environment corresponding to the position and orientation of the AR device 201 in the physical environment of the AR surgical navigation system.
In this example, scan data 209 corresponding to a pre-operative scan of the portion of the head of the patient is input to a processing system in which the scan data is processed by a 3D model generation module 211 to generate a virtual three-dimensional model of the portion of the head of the patient using a second co-ordinate system, which will hereafter be referred to as the scan model coordinate system. In this embodiment, the pre-operative scan data is received in Digital Imaging and Communications in Medicine (DICOM) format which stores slice-based images. The pre-operative scan data may be from CT scans, MRI scans, ultrasound scans, or any other known medical imaging procedure.
In this example, the slice-based images of the DICOM files are processed to construct a plurality of three-dimensional (3D) scan models each concentrating on a different aspect of the scan data using conventional segmentation filtering techniques to identify and separate anatomical features within the 2D DICOM images. One of the constructed 3D scan models is of the exterior surface of the head of the patient. Other 3D scan models may represent internal features of the head and brain such as brain tissue, blood vessels, the skull of the patient, etc. All these other models are constructed using the scan model co-ordinate system and the same scaling so as to allow the scan models to be selectively exchanged.
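As an illustration of this step only, the sketch below builds a single surface model from a DICOM series using a plain intensity threshold in place of the "conventional segmentation filtering techniques" referred to above; the pydicom/scikit-image pipeline, threshold value and function name are assumptions rather than the patent's implementation.
```python
import numpy as np
import pydicom
from pathlib import Path
from skimage import measure

def dicom_series_to_surface(dicom_dir, threshold=-300.0):
    """Load a DICOM series and extract a triangulated surface at a given
    intensity threshold (roughly the skin/air boundary on CT).

    A real pipeline would segment skin, skull, vessels, brain tissue, etc.
    into separate models sharing the same scan model co-ordinate system.
    """
    slices = [pydicom.dcmread(str(p)) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Convert stored values to Hounsfield units where rescale tags exist.
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    volume = volume * slope + intercept
    # Voxel spacing (z, y, x) so the mesh comes out in millimetres.
    dz = abs(float(slices[1].ImagePositionPatient[2]) - float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    verts, faces, normals, _ = measure.marching_cubes(volume, level=threshold, spacing=(dz, dy, dx))
    return verts, faces, normals
```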
The patient registration process then generates a 3D point cloud of landmark points in the 3D scan model of the exterior surface of the head. In particular, referring to Figure 3B, the patient registration mode renders, at S201, an image of the 3D model of the exterior surface of the head of the patient under virtual lighting conditions and from the viewpoint of a virtual camera positioned directly in front of the face of the patient 3 to create two-dimensional image data corresponding to an image of the face of the patient 3. A pre-trained facial detection model analyses, at S203, this image of the face of the patient 3 to obtain a set of 2D landmark points relating to features of the face of the patient 3. These landmark points are then converted, at S205, into a 3D set of points using ray tracing or casting. In particular, the position of the virtual camera is used to project a ray from a 2D landmark point in the image plane of the virtual camera into virtual model space, and the collision point of this ray with the 3D model of the exterior surface of the head of the patient corresponds to the corresponding 3D position of that landmark point. In this way, the 3D point cloud of landmark points corresponding to positions of facial landmarks is generated.
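A minimal sketch of this ray-casting step follows, assuming the head surface is available as a trimesh mesh, the facial detection model has already produced 2D landmark pixel positions, and the virtual camera is described by an intrinsic matrix K and a pose; all names are illustrative, not the patent's API.
```python
import numpy as np
import trimesh  # mesh is expected to be a trimesh.Trimesh

def landmarks_to_point_cloud(mesh, landmarks_2d, K, cam_pos, cam_R):
    """Cast a ray through each 2D facial landmark (pixel co-ordinates in the
    virtual camera's image plane) and keep the first hit on the head surface.

    mesh         : trimesh.Trimesh of the exterior head surface (scan model frame)
    landmarks_2d : (N, 2) pixel co-ordinates from the facial detection model
    K            : 3x3 intrinsic matrix of the virtual camera
    cam_pos      : virtual camera position in the scan model frame
    cam_R        : 3x3 rotation taking camera-frame directions into the model frame
    """
    K_inv = np.linalg.inv(K)
    points_3d = []
    for u, v in landmarks_2d:
        # Direction of the ray in camera co-ordinates, then into model space.
        d_cam = K_inv @ np.array([u, v, 1.0])
        d_model = cam_R @ (d_cam / np.linalg.norm(d_cam))
        locations, _, _ = mesh.ray.intersects_location(
            ray_origins=[cam_pos], ray_directions=[d_model])
        if len(locations):
            # The nearest intersection is the visible surface point.
            nearest = locations[np.argmin(np.linalg.norm(locations - cam_pos, axis=1))]
            points_3d.append(nearest)
    return np.array(points_3d)
```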
The patient registration process then compares the 3D point cloud with image data for an image of the face of the patient captured by the camera 205 to determine, at 213, transform data for transforming the 3D model into a co-ordinate system relative to the camera 205, which will hereafter be referred to as the camera co-ordinate system. In particular, the patient registration process determines a translation vector and rotation matrix which matches the position and orientation of the 3D point cloud with the position and orientation of the head of the patient in the captured image.
More particularly, with reference to Figure 3C, after the AR camera 205 captures, at S301, an image of the physical environment, the patient registration process processes, at S303, the corresponding image data to detect faces within the image, and optical fiducial markers in the image that indicate which of the detected faces belongs to the patient 3, as only the patient 3 has optical fiducial markers on them. The optical fiducial markers are useful because there may be one or more persons other than the patient 3 within the image. The patient registration process then identifies, at S305, a 2D set of landmark points of the face of the patient 3 within the captured image using the same pre-trained facial detection model as was used to process the rendered image of the 3D model of the exterior surface of the head. The patient registration process then aligns, at S307, the 3D point cloud of landmark points for the 3D model and the 2D set of landmark points from the captured image using a conventional pose estimation process, which determines a translation vector and a rotation matrix that positions and orientates the 3D model in the field of view of the AR camera 205 co-registered with the physical head of the patient 3. This is a well-studied problem in computer vision known as a perspective n-point (PNP) problem. In this way, the translation vector and the rotation matrix form transform data.
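The perspective n-point step could, for example, be expressed with OpenCV's solver; the sketch below assumes the AR camera intrinsics K are known and that the 3D landmark cloud and the 2D image landmarks are index-aligned.
```python
import cv2
import numpy as np

def register_head(points_3d, landmarks_2d, K, dist_coeffs=None):
    """Solve the perspective-n-point problem: find the rotation and translation
    that map the 3D landmark point cloud (scan model frame) onto the 2D
    landmarks detected in the AR camera image.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(landmarks_2d, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix, model frame -> camera frame
    return R, tvec.reshape(3)    # transform data in the camera co-ordinate system
```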
The determined translation vector and rotation matrix are relative to the field of view of the camera when capturing the image. It will be appreciated that the head-mounted AR device moves within the physical environment, and therefore a further transformation is required to determine compensated transform data to locate the 3D model in the virtual environment at a position and orientation that matches the position and orientation of the head of the patient 3 in the physical environment. Accordingly, as shown in Figure 3A, the patient registration process then transforms, at 215, the 3D model into the virtual environment co-ordinate system. In particular, the patient registration process determines a compensated translation vector and a compensated rotation matrix which matches the position and orientation of the 3D point cloud with the position and orientation in the virtual environment that corresponds to the position and orientation of the head in the physical environment.
Returning to Figure 3C, more particularly the patient registration process receives, at S309, data from the device pose calculator 207 that identifies the position and the orientation within the virtual environment that corresponds to the position and orientation of the camera 205 in the physical environment when the image was captured. The patient registration process then calculates, at S311, a compensating translation vector and rotation matrix using the data provided by the device pose calculator 207. The compensating translation vector and rotation matrix form compensated transform data to transform the location and orientation of the 3D model of the exterior surface of the head into a location and orientation in the virtual environment, defined using the virtual environment co-ordinate system, so that the position and orientation of the 3D model in the virtual environment matches the position and orientation of the head of the patient 3 in the real world.
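Conceptually, the compensation is a composition of two rigid transforms: the PnP result (model to camera) chained with the device pose (camera to virtual environment). A sketch, with assumed matrix conventions:
```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def compensate_for_camera_pose(R_model_to_cam, t_model_to_cam,
                               R_cam_to_world, t_cam_to_world):
    """Chain the PnP result (model -> camera) with the device pose reported by
    the pose calculator (camera -> virtual environment) to obtain the
    compensated transform placing the 3D model in the virtual environment.
    """
    T_model_to_cam = to_homogeneous(R_model_to_cam, t_model_to_cam)
    T_cam_to_world = to_homogeneous(R_cam_to_world, t_cam_to_world)
    T_model_to_world = T_cam_to_world @ T_model_to_cam
    return T_model_to_world[:3, :3], T_model_to_world[:3, 3]
```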
In this example, the patient registration process then estimates the accuracy of the co-registration of the physical head and the virtual model. In particular, depth data from the depth sensors associated with the camera 205 is used to construct, at S313, a point cloud of landmark points of the physical face of the patient from the 2D set of landmark points determined from the captured image, and then the accuracy of the overlap of the point cloud of landmark points of the physical face of the patient determined from the captured image and the point cloud of landmark points of the virtual model is calculated, at S315, for example by calculating the sum of the cartesian distance between corresponding points of each point cloud.
To improve accuracy of co-registration, the patient registration process repeats, at S317, the above processing for multiple captured images, while the head of the patient is immobilised but the camera 205 may be mobile, until sufficient accuracy is achieved. This assessment of accuracy can involve averaging the previous best-fits, and determining, at S317, whether the difference between current and previous averaged fits has fallen below a defined accuracy threshold and hence converged on some suitable alignment. At this point, co-registration is deemed to be sufficiently accurate and the averaged compensated translation vector and rotation matrix can be used to introduce the 3D models into the virtual environment.
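A simplified sketch of the accuracy and convergence checks described above follows. Averaging the flattened transforms is a simplification (a production system would average rotations properly, e.g. via quaternions), and the threshold value is arbitrary; these are assumptions, not the patent's numbers.
```python
import numpy as np

def registration_error(virtual_points, measured_points):
    """Sum of Euclidean distances between corresponding landmark points of the
    virtual model and of the physical face (from the depth sensor)."""
    diff = np.asarray(virtual_points, float) - np.asarray(measured_points, float)
    return float(np.sum(np.linalg.norm(diff, axis=1)))

def has_converged(fits, threshold=1e-3):
    """Average the per-frame compensated transforms (here, flattened 4x4
    matrices) and stop once the running average changes by less than the
    accuracy threshold."""
    if len(fits) < 2:
        return False
    current = np.mean(fits, axis=0)
    previous = np.mean(fits[:-1], axis=0)
    return np.linalg.norm(current - previous) < threshold
```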
Displacement Sensor Registration Process
Figure 4A schematically illustrates the functionality of the surgical navigation system in the displacement sensor registration mode. As discussed previously, each displacement sensor 7 is equipped with optical fiducial markers 21a-21c which are positioned to define the origin and axes of a co-ordinate system in which the displacement sensor 7 measures movement in six directions (three translational and three rotational), hereafter referred to as the displacement sensor co-ordinate system. The aim of the displacement sensor registration process is to identify the position of the origin of the displacement sensor co-ordinate system, and the orientation of the axes of the displacement sensor co-ordinate system, in the virtual environment.
In this example, as illustrated in Figure 4B, the displacement sensor registration process receives, at step S401, an image of the physical environment, which may be an image used for patient registration, together with associated depth data and identifies, at step S403, the optical fiducial markers on the displacement sensor 7, which can be done using available software from libraries such as OpenCV combined with helper libraries which provide specific implementations for the optical fiducial marker used. In this way, the 2D image co-ordinates of the fiducial markers in the image, and their correct orientation based on the patterns forming the optical fiducial markers 21a-21c, are provided.
Based on the positions and orientations of the optical fiducial markers 21a-21c in the image, the displacement sensor registration process then uses pose estimation to determine, at S405, a rotational transformation required to rotate the plane of the displacement sensor 7 containing the optical fiducial markers to align with the object plane of the camera 205. From this rotation transformation, the displacement sensor registration process determines, at S407, a rotation matrix transforming the displacement sensor co-ordinate system to the AR camera co-ordinate system. In addition, the depth data corresponding to the optical fiducial markers is used to calculate a 3D position of the origin of the displacement sensor 7 in the AR camera coordinate system.
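For fiducial markers of the ArUco family, this detection and pose-estimation step could look like the sketch below, which uses the legacy cv2.aruco interface (OpenCV 4.6 and earlier; newer releases expose the same functionality through cv2.aruco.ArucoDetector and per-marker solvePnP). The marker dictionary and printed marker size are assumptions.
```python
import cv2
import numpy as np

def locate_markers_in_camera_frame(image, K, dist_coeffs, marker_length_m=0.01):
    """Detect the sensor's ArUco fiducial markers and estimate each marker's
    pose relative to the AR camera.

    marker_length_m is the printed side length of each marker; the returned
    rotation/translation vectors are per-marker poses in the AR camera
    co-ordinate system.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length_m, K, dist_coeffs)
    # One rotation/translation per detected marker, ordered as in `ids`.
    return ids.flatten(), rvecs.reshape(-1, 3), tvecs.reshape(-1, 3)
```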
The determined origin position and rotation matrix are relative to the field of view of the camera 205 when capturing the image. The displacement sensor registration process generates, at S409, a compensating translation vector and rotation matrix using a location and orientation of the camera 205, provided by the device pose calculator 207, for when the image was captured. The compensating translation vector and rotation matrix convert the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor 7 from the AR camera co-ordinate system to the virtual environment co-ordinate system, so that the position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment matches the position and orientation of the displacement sensor 7 in the physical environment.
The displacement sensor registration process can then be repeated, at S413, using multiple different images captured by the camera 205 to acquire an average position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment.
Surgery Mode
Figure 5 schematically illustrates the functionality of the surgical navigation system in the surgery mode. 3D models 211 are transformed into the virtual environment co-ordinate system. For 3D models generated from scan data, this involves using the transformation data generated during the patient registration process, so that its position and orientation in the virtual environment matches the position and orientation of the head of the patient 3 in the physical environment. For 3D models corresponding to surgical devices, this involves determining the position and orientation of the surgical device in the virtual environment based on the determined position and orientation of the displacement sensor 7 attached to the surgical device.
Each 3D model then undergoes a further transformation based on the readings from the displacement sensor 7, and then the resultant transformed model is output to a rendering engine 235. The pose calculator 207 outputs data indicating the position and orientation in the virtual environment corresponding to the position and orientation of the AR camera 205 in the physical environment. This allows the rendering engine 235 to render two-dimensional image data corresponding to the view of a virtual camera, whose optical specification matches the optical specification of the AR camera 205, positioned and orientated in the virtual environment with the position and orientation indicated by the pose calculator 207. The resultant two-dimensional image data is then output by the rendering engine 235 to the display 203 for viewing by the surgeon. More particularly, the display of the head-mounted AR device 5 is a semi-transparent display device enabling the displayed image to be superimposed in the view of the surgeon 1.
If the head of the patient 3 moves during surgery, then the displacement sensor 7 attached to the head will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system. Based on the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate axes in the virtual environment co-ordinate system determined in the displacement sensor registration process, the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model in the virtual environment. In this way, co-registration between the head of the patient in the physical environment and the 3D model in the virtual environment is maintained.
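The conversion from a six-degree-of-freedom sensor reading to a displacement transformation in the virtual environment can be sketched as follows, assuming the reading arrives as a rotation matrix and translation vector expressed in the sensor's own co-ordinate system; the rotation is applied about the registered sensor origin. The conventions and names are illustrative, not taken from the patent.
```python
import numpy as np

def sensor_reading_to_world_transform(p0_world, R_axes_world,
                                      R_delta_sensor, t_delta_sensor):
    """Convert a 6-DOF sensor reading (rotation R and translation t expressed
    in the displacement sensor's own co-ordinate system) into a 4x4 transform
    in virtual environment co-ordinates.

    p0_world     : registered position of the sensor origin in the virtual environment
    R_axes_world : registered orientation of the sensor axes (sensor -> world rotation)
    """
    # Express the sensed rotation and translation in world co-ordinates.
    R_world = R_axes_world @ R_delta_sensor @ R_axes_world.T
    t_world = R_axes_world @ t_delta_sensor
    # Rotate about the sensor origin, then translate: x' = R_world (x - p0) + p0 + t_world
    T = np.eye(4)
    T[:3, :3] = R_world
    T[:3, 3] = p0_world - R_world @ p0_world + t_world
    return T

def apply_to_model(T, vertices_world):
    """Apply the displacement transform to the model's vertices (N x 3)."""
    homo = np.hstack([vertices_world, np.ones((len(vertices_world), 1))])
    return (homo @ T.T)[:, :3]
```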
It will be appreciated that at the start of the cranial neurosurgery, the surgeon 1 may make a visual check that the virtual image of one or more of the 3D models is aligned with the head of the patient 3 before draping the head of the patient 3 for surgery. After draping, the surgeon 1 is reliant on the transformations based on the sensor readings from the displacement sensor 7 to maintain alignment.
As a surgical device, e.g. the stylet 13, moves during surgery, the displacement sensor attached to the surgical device will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system. Based on the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor coordinate axes in the virtual environment co-ordinate system determined in the displacement sensor registration process, the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model of the surgical device in the virtual environment.
In this example, as shown in Figure 1B, a target location can be identified in the 3D models of the head of the patient 3, and data corresponding to a distance and trajectory between the tip of the stylet 13 and the target location can be calculated and displayed superimposed over the rendered image of the virtual environment. This data may be displayed with or without a rendered virtual model of the stylet 13.
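The displayed guidance values can be computed directly from the tracked poses; a sketch, assuming the stylet tip position and pointing direction have already been derived from the virtual model of the stylet:
```python
import numpy as np

def tip_to_target(tip_position, tip_direction, target_position):
    """Distance from the stylet tip to the target, and the angle between the
    stylet's current heading and the straight line to the target (both in
    virtual environment co-ordinates)."""
    to_target = np.asarray(target_position, float) - np.asarray(tip_position, float)
    distance = float(np.linalg.norm(to_target))
    heading = np.asarray(tip_direction, float)
    heading = heading / np.linalg.norm(heading)
    cos_angle = float(np.clip(np.dot(heading, to_target / distance), -1.0, 1.0))
    angle_deg = float(np.degrees(np.arccos(cos_angle)))
    return distance, angle_deg
```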
While the above description has related to maintaining the co-registration of the head of a patient 3 in the physical environment and a 3D model of the head of the patient 3 in the virtual environment in case the head of the patient moves during surgery, the augmented reality surgical navigation system can also be used to modify the form of a 3D model during surgery to take account of modifications to the corresponding body part during the surgical procedure or to track the trajectory of instruments such as spinal screws. For example, Figures 6A and 6B illustrate a surgeon 1 carrying out spinal surgery in which vertebrae of the spine are physically moved relative to each other. In this example, the 3D model is accordingly of the relevant vertebrae of the spine, with each vertebra having a respective sub-model. Before surgery, a displacement sensor 7 is fixed relative to each vertebra, and the position of the origin and the orientation of the co-ordinate axes of each displacement sensor in the virtual environment is determined in the manner described above. During surgery, as the surgeon 1 physically inserts a screw using the electromagnetically tracked tool A, the insertion and fixation of the screw moves the vertebrae relative to each other, and so the displacement sensor 7 on each vertebra will move with that vertebra respectively. The displacement readings for each displacement sensor can then be converted to transformation data for transforming the position and orientation of the corresponding sub-model in the virtual environment. In this way, the form of the 3D model is modified during surgery to take account of the relative movement of the vertebrae as a result of surgical manipulation.
In addition, the trajectory and position of the spinal screws B will be tracked within the vertebrae.
Modifications and Further Embodiments
In the illustrated embodiments, the head-mounted AR device is a Microsoft HoloLens 2 device. It will be appreciated that other head-mounted AR devices that display a virtual environment in association with a physical environment could be used. Further, examples of the augmented reality surgical navigation system need not include a head-mounted AR device as alternatively a fixed camera could be used, for example in conjunction with the fixed display 11, with the displacement sensing system being used to maintain registration between a 3D model in a virtual environment and a corresponding body part in a physical environment.
While the displacement sensing system of the illustrated embodiment is an electromagnetic tracking system, other displacement sensing systems that do not rely upon image analysis to track movement in six dimensions could be used. For example, a six-axis gyroscopic system could be used.
The addition of optical fiducial markers on the displacement sensors assists in the registration of the position of the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment. The optical fiducial markers are not, however, essential, as the displacement sensor may be designed to have sufficient landmark points to enable registration to be performed.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. For example, [add possibilities]. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (13)

  1. An augmented reality surgery system comprising: a camera for imaging a view of a physical environment; a display for displaying a view of a virtual environment; a displacement sensing system having one or more displacement sensors to be fixed relative to a body part of a patient in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and coordinate system; a processing system operable to: generate the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment; receive image data from the camera, the image data including an image of the body part of the patient in the physical environment; receive scan data corresponding to a three-dimensional model of the body part of the patient; process the image data and the scan data to determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduce the three-dimensional model of the body part into the virtual environment with the determined position and orientation; for each displacement sensor, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera; at a first time, render a first visual representation of the virtual environment including the three-dimensional model of the body part and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time; receive measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment; modify at least one of the position, orientation and form of the three-dimensional model in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and at a second time, render a second visual representation of the virtual environment including the modified model of the body part and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
  2. The augmented reality surgery system of claim 1, wherein each displacement sensor comprises a plurality of optical fiducial markers positioned relative to the origin and co-ordinate axes of the displacement sensor, wherein the processing system is arranged to identify the positions of the plurality of optical fiducial markers in the one or more images and to determine the position of the origin point and the orientation of the co-ordinate axes of the displacement sensor in the physical environment based on the identified positions.
  3. The augmented reality surgery system of claim 1 or claim 2, wherein the displacement sensing system is an electromagnetic tracking system.
  4. The augmented reality surgery system of any preceding claim, wherein the camera and the display form part of a head-mounted AR device, and the augmented reality surgery system is operable to detect the position and orientation of the head-mounted AR device in the physical environment.
  5. The augmented reality surgery system of any preceding claim, wherein the programming instructions for determining the pose of the patient comprise programming instructions configured to: receive a patient image from the camera; and identify landmark points in the patient image.
  6. The augmented reality surgery system of claim 5, wherein the programming instructions for registration between the model and the patient comprise programming instructions configured to: use perspective n-point methods to calculate the transformation matrix required to map the model to the landmark points identified in the patient image.
  7. The augmented reality surgery system of claim 5 or claim 6, wherein the programming instructions for identifying the landmark points comprise programming instructions configured to: automatically identify features by a pre-trained model.
  8. The system of any preceding claim, wherein the model received by the processor is a 3D point cloud constructed from DICOM data using automatic landmark recognition.
  9. The system of any preceding claim, wherein the visual representation of the model is a 3D holographic rendering constructed from the DICOM data.
  10. An augmented reality surgery system comprising: a camera for imaging a view of a physical environment; a display for displaying a view of a virtual environment; a displacement sensing system having one or more displacement sensors for fixing relative to a surgical device in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and coordinate system; a processing system operable to: generate the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment; for each displacement sensor fixed to a surgical device, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera; introduce a three-dimensional model of the surgical device into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment so the position and orientation of the three-dimensional model of the surgical device in the virtual environment matches the position and orientation of the surgical device in the physical environment; at a first time, render a first visual representation of the virtual environment including at least one of the three-dimensional model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time; receive measurement data from the displacement sensor corresponding to movement of the surgical device in the physical environment; modify at least one of the position and orientation of the three-dimensional model of the surgical device in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and at a second time, render a second visual representation of the virtual environment including at least one of the modified model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
  11. An augmented reality surgery system according to claim 10, wherein the displacement sensor is fixed to a stylet.
  12. A computer program for an augmented reality surgical system according to any of claims 1 to 9, the computer program comprising instructions that, when executed by the processing system, cause the processing system to:
    receive image data from the camera, the image data including an image of the body part of the patient in the physical environment;
    receive scan data corresponding to a three-dimensional model of the body part of the patient;
    process the image data and the scan data to determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduce the three-dimensional model of the body part into the virtual environment with the determined position and orientation;
    for each displacement sensor, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera;
    at a first time, render a first visual representation of the virtual environment including the three-dimensional model of the body part and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time;
    receive measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment;
    modify at least one of the position, orientation and form of the three-dimensional model in the virtual environment based on the received measurement data, the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and
    at a second time, render a second visual representation of the virtual environment including the modified model of the body part and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
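Reduced to pose bookkeeping, the render / measure / modify / render cycle of claim 12 might look like the loop below; the rendering and display steps are stood in for by a print statement, and the example measurement values are made up for illustration.

```python
# Illustrative loop only: rendering/display are represented by print, and the example
# measurements are fabricated; nothing here is taken from a real displacement sensor.
import numpy as np

def track_body_part(initial_pose, sensor_to_world, measurements):
    """Yield the updated body-part model pose after each displacement-sensor measurement."""
    pose = initial_pose
    world_from_sensor_inv = np.linalg.inv(sensor_to_world)
    for translation, rotation in measurements:
        displacement = np.eye(4)                 # displacement in the sensor's own axes
        displacement[:3, :3] = rotation
        displacement[:3, 3] = translation
        pose = sensor_to_world @ displacement @ world_from_sensor_inv @ pose
        yield pose                               # used to render the next visual representation

# Made-up example: a single 5 mm shift along the sensor's x-axis, no rotation
example_measurements = [(np.array([5.0, 0.0, 0.0]), np.eye(3))]
for pose in track_body_part(np.eye(4), np.eye(4), example_measurements):
    print(pose[:3, 3])   # stands in for rendering the second visual representation
```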
  13. A computer program for an augmented reality surgical system according to claim 10 or claim 11, the computer program comprising instructions that, when executed by the processing system, cause the processing system to:
    for each displacement sensor fixed to a surgical device, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera;
    introduce a three-dimensional model of the surgical device into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment, so that the position and orientation of the three-dimensional model of the surgical device in the virtual environment match the position and orientation of the surgical device in the physical environment;
    at a first time, render a first visual representation of the virtual environment including at least one of the three-dimensional model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time;
    receive measurement data from the displacement sensor corresponding to movement of the surgical device in the physical environment;
    modify at least one of the position and orientation of the three-dimensional model of the surgical device in the virtual environment based on the received measurement data, the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and
    at a second time, render a second visual representation of the virtual environment including at least one of the modified model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
GB2112741.0A 2021-09-07 2021-09-07 Augmented reality surgical navigation system Active GB2614025B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2112741.0A GB2614025B (en) 2021-09-07 2021-09-07 Augmented reality surgical navigation system
PCT/EP2022/074921 WO2023036848A1 (en) 2021-09-07 2022-09-07 Augmented reality surgical navigation system

Publications (3)

Publication Number Publication Date
GB202112741D0 (en) 2021-10-20
GB2614025A (en) 2023-06-28
GB2614025B (en) 2023-12-27

Family

ID=78076860

Country Status (2)

Country Link
GB (1) GB2614025B (en)
WO (1) WO2023036848A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993794A (en) * 2023-08-02 2023-11-03 德智鸿(上海)机器人有限责任公司 Virtual-real registration method and device for augmented reality surgery assisted navigation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8325614B2 (en) * 2010-01-05 2012-12-04 Jasper Wireless, Inc. System and method for connecting, configuring and testing new wireless devices and applications

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160225192A1 (en) * 2015-02-03 2016-08-04 Thales USA, Inc. Surgeon head-mounted display apparatuses
US20170258526A1 (en) * 2016-03-12 2017-09-14 Philipp K. Lang Devices and methods for surgery
WO2019215550A1 (en) * 2018-05-10 2019-11-14 3M Innovative Properties Company Simulated orthodontic treatment via augmented visualization in real-time
WO2020163358A1 (en) * 2019-02-05 2020-08-13 Smith & Nephew, Inc. Computer-assisted arthroplasty system to improve patellar performance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pepe, Antonio, et al., "A Marker-Less Registration Approach for Mixed Reality-Aided Maxillofacial Surgery: a Pilot Evaluation", Journal of Digital Imaging, Springer International Publishing, Cham, vol. 32, no. 6, 4 September 2019, pages 1008-1018, XP037047699, ISSN: 0897-1889, DOI: 10.1007/s10278-019-00272-6 *

Also Published As

Publication number Publication date
WO2023036848A1 (en) 2023-03-16
GB202112741D0 (en) 2021-10-20
GB2614025B (en) 2023-12-27
