CN110946654A - Bone surgery navigation system based on multimode image fusion - Google Patents

Bone surgery navigation system based on multimode image fusion

Info

Publication number
CN110946654A
CN110946654A (application CN201911336544.4A; granted as CN110946654B)
Authority
CN
China
Prior art keywords
image
surgical instrument
registration
subject
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911336544.4A
Other languages
Chinese (zh)
Other versions
CN110946654B (en)
Inventor
李海
王腾飞
王宏志
江海河
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN201911336544.4A priority Critical patent/CN110946654B/en
Publication of CN110946654A publication Critical patent/CN110946654A/en
Application granted granted Critical
Publication of CN110946654B publication Critical patent/CN110946654B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2055Optical tracking systems
    • A61B2034/2057Details of tracking cameras
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2065Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides an orthopedic surgery navigation system based on multimode image fusion, comprising: a binocular vision positioning camera for acquiring, in real time, the spatial coordinates of passive infrared reflection marker balls placed on the surgical instrument and on the skin surface of the subject; an image segmentation module for accurately segmenting the vertebra in the initial image; a spatial coordinate registration module for registering the spatial coordinates with the image coordinates; a surgical instrument calibration module for establishing the conversion relation between the coordinates of the surgical instrument tip and the coordinates of the passive infrared reflection marker balls placed at the end of the surgical instrument; a multimode image registration module for registering the intraoperative cone-beam CT image to the preoperative CT image; and a visualization interface module for building the three-dimensional image and displaying the position of the surgical instrument within the subject image in real time during the operation. The invention can accurately locate, during the operation, the real-time position of the surgical instrument inside the subject, provides accurate and reliable navigation for orthopedic surgery, effectively reduces surgical risk and operating time, and is simple to operate and highly applicable.

Description

Bone surgery navigation system based on multimode image fusion
Technical Field
The invention relates to an orthopedic surgery navigation system based on multimode image fusion, and belongs to the fields of medical imaging, computer vision, and image processing.
Background
In traditional surgery, the doctor cannot directly observe the exact position of a surgical instrument once it enters the human body; the surgeon usually judges the anatomy from experience and then positions and guides the instrument with the help of preoperative CT images or intraoperative X-ray images. The market therefore urgently needs a new surgical navigation system that guides the surgical instrument quickly and accurately to a specific position and assists the completion of complex operations. With the development of computers, medical imaging, high-precision measurement and related technologies, surgical navigation systems based on multimodal image guidance have gradually emerged and become a new research hotspot in high-end medical instrument development. Surgical navigation technology uses medical imaging, three-dimensional localization, computer visualization and related techniques to track and display, in real time, the three-dimensional spatial position and motion of the surgical instrument and the relevant part of the human body, so that the instrument inside the body can be monitored in real time and the surgeon can operate accurately.
Currently common surgical navigation systems fall mainly into two types: fiducial-point-based registration and C-arm-based guidance. Systems based on fiducial-point registration usually complete the registration between the patient's preoperative images and the actual surgical space by manually selecting fiducial points; the surgeon typically has to pick 30-50 fiducial points with a surgical probe, after which the coordinate-space registration is computed from these points and its accuracy is evaluated. If the accuracy does not meet the surgical requirements, the surgeon must repeat the fiducial-point selection and registration. Because of the complexity of human anatomy and individual differences, the selection accuracy of the fiducial points is often hard to guarantee. C-arm-based systems use intraoperative X-ray images for guidance, and the image information can be acquired and updated in real time; however, intraoperative X-ray images carry only two-dimensional information and do not provide the complete three-dimensional structure of the surgical site.
Therefore, both researchers and surgeons wish to complete surgical guidance with intraoperative three-dimensional images, which provide the patient's complete intraoperative three-dimensional anatomy, and to achieve fast preoperative-to-intraoperative image registration with a high-precision registration algorithm. This is also the development direction and research focus of the international surgical navigation field. The related art discloses an orthopedic surgery navigation system (Chinese patent application No. 200710066724.6) that relies on an intramedullary nail and tracks the position of the surgical instrument by determining the relative position of the instrument and the nail; however, that system is mainly used for fracture reduction with closed intramedullary nailing and has certain limitations.
Disclosure of Invention
The invention solves the following problem: it overcomes the defects of the prior art and provides an orthopedic surgery navigation system based on multimode image fusion that can accurately locate the real-time position of the surgical instrument inside the patient during the operation, provides accurate and reliable navigation for orthopedic surgery, is simple and convenient to operate, and has high applicability.
The technical scheme of the invention is as follows: an orthopedic surgery navigation system based on multimode image fusion integrates three-dimensional visualization of medical images, accurate vertebra segmentation, spatial coordinate registration, and registration of preoperative three-dimensional CT with intraoperative CBCT; a multimode-image, binocular-vision surgical instrument tracking and positioning system is constructed, and the position and motion trajectory of the surgical instrument are displayed on the subject image in real time.
The invention relates to an orthopedic surgery navigation system based on multimode image fusion, which comprises:
the binocular vision positioning camera is used for acquiring, in real time, the spatial coordinates of the passive infrared reflection marker balls placed on the surgical instrument and on the skin surface of the subject; a plurality of passive infrared reflection marker balls are arranged on the surgical instrument and on the subject, respectively; based on the binocular vision principle, the binocular vision positioning camera emits infrared light, receives the infrared light reflected by the passive infrared reflection marker balls, and calculates their three-dimensional spatial coordinates; from the obtained three-dimensional spatial coordinates of the marker balls, the spatial positions of the surgical instrument and the subject are obtained and sent to the spatial coordinate registration module and the surgical instrument calibration module, respectively;
the image segmentation module is used for accurately segmenting the vertebra in the initial image; the three-dimensionally reconstructed vertebra is accurately segmented and labeled with a segmentation algorithm based on multi-scale local-region level sets, using prior information on the human spine, so that the three-dimensional structure of the subject's surgical site is obtained quickly, and the preoperative segmented image is sent to the multimode image registration module;
the spatial coordinate registration module is used for registering the three-dimensional spatial coordinates of the passive infrared reflection marker balls on the subject's skin surface with the subject's CT image coordinates; a subject image is acquired by CT scanning with a plurality of passive infrared reflection marker balls fixed on the subject's skin surface; from these marker balls and the binocular vision positioning camera, their three-dimensional image coordinates in the CT are obtained, and the relation between the CT image coordinate system and the subject's spatial coordinate system is established; from this relation, the conversion between the three-dimensional spatial coordinates of the marker balls on the subject's skin surface and the subject's CT image coordinates is obtained, a space-to-image conversion matrix is established, and the matrix is sent to the visualization interface module;
the surgical instrument calibration module is used for obtaining three-dimensional space coordinates of a plurality of passive infrared reflection marker spheres placed at the tail end of the surgical instrument based on a binocular vision positioning camera according to a surgical instrument tip calibration method of space transformation invariance and spherical surface constraint; determining the three-dimensional space coordinates of the surgical instrument tip in a surgical instrument coordinate system through the rotational translation transformation relation among the three-dimensional space coordinates; according to the three-dimensional space coordinates of the tip of the surgical instrument in the surgical instrument coordinate system and the three-dimensional space coordinates of the plurality of passive infrared reflection marker balls placed at the tail end of the surgical instrument, the spatial relationship between the surgical instrument coordinate system and the CT image coordinate system of the subject is established, and the spatial relationship is sent to a visual interface module;
the multimode image registration module is used for acquiring the three-dimensional structure of the intraoperative subject's surgical site from the spine segmentation images of the intraoperative cone-beam CT and the preoperative CT obtained by the image segmentation module, and registering it with the preoperative three-dimensional CT segmentation image containing the subject's detailed anatomy, so as to fuse the preoperative and intraoperative subject images; using a hybrid registration algorithm based on ICP (Iterative Closest Point) and B-splines, a rough registration image is first obtained with the global point-registration ICP algorithm; the rough registration image is then taken as the input image and B-spline registration is performed to obtain a fine local registration image, i.e., the registration image between the intraoperative cone-beam CT and the preoperative CT, which is sent to the visualization interface module;
the visual interface module is used for obtaining real-time three-dimensional space coordinates of the tip of the surgical instrument based on the three-dimensional space coordinates of the plurality of passive infrared reflection marker balls at the tail end of the surgical instrument and the space relation between the surgical instrument coordinate system and the subject CT image coordinate system; and converting the real-time three-dimensional space coordinates of the surgical instrument tip into three-dimensional image coordinates based on the space coordinate registration module, thereby fusing the three-dimensional image coordinates of the surgical tip into a registration image, displaying the position and the motion track of the surgical instrument in the subject image in real time, and outputting from multiple angles.
In the binocular vision positioning camera, the number of the plurality of passive infrared reflection marker balls arranged on each of the surgical instrument and the subject is four.
In the surgical instrument calibration module, the surgical instrument tip calibration method based on spatial transformation invariance and spherical constraint is specifically realized as follows:
(1) fixing four infrared passive reflection marker balls with known three-dimensional spatial coordinates on the surgical instrument to be calibrated;
(2) placing the tip of the surgical instrument at the center of a calibration target, acquiring the three-dimensional spatial position information of the motion track of the infrared passive reflection marker balls with the binocular vision positioning camera, and transmitting the acquired track to the computer terminal;
(3) repeating step (2) multiple times (at least three), with a different motion track each time, to obtain at least three groups of three-dimensional spatial position information of the infrared passive reflection marker balls; calculating the rotation-translation matrices of the surgical instrument between the motion tracks, calculating the coordinate vector of the rotation-invariant spatial point corresponding to each rotation-translation matrix, and taking the average value;
(4) forming a transformation matrix from the three-dimensional coordinate vectors of all the infrared passive reflection marker balls in one group and the three-dimensional coordinate vector of the instrument tip, which describes the position relation between the surgical instrument tip and the infrared passive reflection marker balls;
(5) transforming this position relation between the surgical instrument tip and the infrared passive reflection marker balls into the three-dimensional spatial coordinates of the surgical instrument according to the transformation matrix of the position relation, thereby establishing the spatial relation between the surgical instrument coordinate system and the subject's CT image coordinate system.
In the multimode image registration module, the hybrid registration algorithm based on ICP (Iterative Closest Point) and B-splines is divided into two parts: a global rough registration image is obtained with the global point-registration ICP algorithm; then the global rough registration image and the reference image are taken as input images and B-spline registration is performed to obtain a fine local registration image. The specific implementation is as follows:
(1) acquiring the CT image to be registered and the reference cone-beam CT image;
(2) performing image segmentation on the image to be registered and on the reference image, segmenting the target object in each, and extracting coordinate points, the number of coordinate points of the image to be registered being the same as that of the reference image;
(3) performing point-set-based registration on the point-set data of the image to be registered and of the reference image with the global point-registration ICP algorithm to obtain the global rough registration image;
(4) taking the global rough registration image and the reference image as input images and performing B-spline registration to obtain the fine local registration image, i.e., the registration image between the intraoperative cone-beam CT and the preoperative CT.
Step (3) is specifically realized as follows:
(1) finding the closest point in the reference image point set for each point of the point set of the image to be registered, solving the rotation-translation matrix from these closest-point pairs, and transforming the point set of the image to be registered with the obtained matrix to produce a new floating point set;
(2) taking the new floating point set obtained in step (1) as the initial point set for the next round of calculation, and repeating until the optimal transformation matrix is obtained;
(3) applying the optimal transformation matrix obtained in step (2) to the reference image, performing operations such as image interpolation, searching for the closest points (which may yield mismatched point pairs), and calculating the error;
(4) repeating steps (1) to (3) for a number of iterations until the error meets the requirement.
Step (4) is specifically realized as follows:
(1) computing the similarity measure between the global rough registration image and the reference image, and setting the initial control points;
(2) if the similarity obtained in step (1) has not reached the convergence threshold, computing the transformation field from the control points, and transforming and interpolating the registration image with this field to obtain a new registration image;
(3) repeating steps (1) and (2) to update the similarity measure, and ending the iteration when the optimal solution or the threshold is reached.
Compared with the prior art, the invention has the advantages that:
(1) the invention can accurately locate the real-time position of the surgical instrument inside the patient during the operation, provides accurate and reliable navigation for orthopedic surgery, and is simple to operate and highly applicable.
(2) In the invention, the passive infrared reflection marker ball is fixed on the surface of the subject, so that the tracking is accurate in positioning and the subject is not hurt;
(3) the surgical instrument tip calibration method based on space transformation invariance and spherical surface constraint has the advantages of higher calibration speed and higher precision;
(4) the multimode image registration method combines point-set ICP registration with voxel-based B-spline registration; the point-set registration result serves as the initial value of the voxel-based registration, which makes the registration fast, while the voxel-based stage keeps the precision high, so images can be registered quickly and accurately, meeting the needs of real-time orthopedic surgical navigation.
(5) The multimode image registration module of the invention fuses and registers images in different forms, so that more three-dimensional structural information of the surgical site of the subject is obtained, and the precision is higher.
Drawings
FIG. 1 is a block diagram of a navigation system for a multi-mode image orthopedic surgery;
FIG. 2 is a detailed schematic diagram of an example of a navigation system for multi-modality imaging bone surgery;
FIG. 3 is a schematic view of surgical path planning;
FIG. 4 is a schematic diagram of a binocular vision camera recognizing a passive infrared reflection marker ball;
FIG. 5 is a flow chart of an image registration method;
fig. 6 is a visualization interface of the surgical navigation system.
Detailed Description
The details of the respective methods involved in the technical solutions of the present invention are described below with reference to the accompanying drawings.
FIG. 1 is a block diagram of the multimode image-guided orthopedic surgery navigation system developed by the invention, comprising a binocular vision positioning camera, an image segmentation module, a spatial coordinate registration module, a surgical instrument calibration module, a multimode image registration module, and a visualization interface module.
The binocular vision positioning camera acquires, in real time, the spatial coordinates of the passive infrared reflection marker balls placed on the surgical instrument and on the skin surface of the subject; four passive infrared reflection marker balls are arranged on the surgical instrument and on the subject, respectively.
The precision error of the binocular vision positioning camera is corrected with a calibration target; the residual error is at the sub-millimeter level.
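For illustration only, the following minimal sketch (not taken from the patent) shows how a calibrated stereo pair could triangulate the center of a single reflective marker ball from its pixel coordinates in the left and right images, using OpenCV's triangulatePoints; the intrinsic matrix, the 120 mm baseline, and the pixel values are assumed example data.

```python
import numpy as np
import cv2

def triangulate_marker(P_left, P_right, uv_left, uv_right):
    """Triangulate one marker ball from its pixel coordinates in both views.

    P_left, P_right : 3x4 camera projection matrices from stereo calibration.
    uv_left, uv_right : (u, v) pixel centroids of the reflective marker blob.
    Returns the marker center in the left-camera (tracker) frame.
    """
    pts_l = np.asarray(uv_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(uv_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()

# Assumed example calibration: identical intrinsics, right camera offset by a
# 120 mm baseline along the x axis.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-120.0], [0.0], [0.0]])])
print(triangulate_marker(P_left, P_right, (700.0, 520.0), (580.0, 520.0)))
```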
The image segmentation module accurately segments the vertebra in the initial image: the three-dimensionally reconstructed vertebra is accurately segmented and labeled with a segmentation algorithm based on multi-scale local-region level sets, so that the three-dimensional structure of the subject's surgical site is obtained quickly and a reasonable surgical plan can be formulated.
The spatial coordinate registration module obtains the image coordinates of the four passive infrared reflection marker balls in the scanned subject image, obtains their spatial coordinates through the binocular vision system, and establishes the relation between the image coordinates and the spatial coordinates, thereby registering the spatial coordinate system of the surgical instrument with the subject's CT image coordinate system.
The surgical instrument calibration module uses a surgical instrument tip calibration method based on spatial transformation invariance and spherical constraint; the method only needs to statically acquire the marker-point tracks for a small number (at least 3) of acquisitions with the instrument tip held fixed, and determines the coordinates of the surgical instrument tip in the surgical instrument coordinate system through the rotation-translation transformation relations among the coordinate systems.
The multimode image registration module acquires three-dimensional structural information of an operation position of a subject in operation by utilizing cone beam CT, and registers the three-dimensional structural information with a preoperative three-dimensional CT image containing detailed anatomical structural information of the subject so as to multimodally fuse preoperative and intraoperative subject images.
The visualization interface module displays, in real time, the position and motion trajectory of the surgical instrument in the subject image and guides the operation.
FIG. 2 shows an example set-up of the orthopedic surgery navigation system developed by the invention, including: a binocular vision positioning camera for acquiring, in real time, the spatial coordinates of the passive infrared reflection marker balls placed on the surgical instrument and on the subject's skin surface; an image segmentation module for accurately segmenting the vertebra in the initial image; a spatial coordinate registration module for registering the spatial coordinates with the image coordinates; a surgical instrument calibration module for establishing the conversion relation between the coordinates of the surgical instrument tip and the coordinates of the passive infrared reflection marker balls placed at the end of the instrument; a multimode image registration module for registering the intraoperative cone-beam CT image to the preoperative CT image; and a visualization interface module for building the three-dimensional image and displaying the position of the surgical instrument in the subject image in real time during the operation.
The spatial coordinates of the passive infrared reflection marker balls placed on the surgical instrument and on the skin surface of the subject are acquired in real time with the binocular vision positioning camera. Four passive infrared reflection marker balls are arranged on the surgical instrument and on the subject, respectively, with coordinates P = {p1, p2, p3, p4} and Q = {q1, q2, q3, q4}.
The precision of the binocular vision camera was verified by measuring the coordinates of 7 points with a handheld probe and computing their mean and standard deviation (see Table 1); the measurement precision of the system fully meets the sub-millimeter precision required for surgery.
TABLE 1
[Table 1: mean and standard deviation of the measured coordinates of the 7 points — reproduced as an image in the original document]
To obtain three-dimensional anatomical structure information, the image segmentation module performs three-dimensional reconstruction and segmentation of the CT image. Each CT slice is preprocessed by filtering, enhancement, equalization and similar operations to reduce image noise; image segmentation is then performed to extract the region of interest from the slices, the vertebra in the initial image is accurately segmented, and the three-dimensionally reconstructed vertebra is accurately segmented and labeled with the segmentation algorithm based on multi-scale local-region level sets; intra-slice and inter-slice interpolation is applied to the segmented images to improve the spatial resolution; finally, the three-dimensional anatomical structure of the surgical site is reconstructed with the Marching Cubes algorithm, so that the three-dimensional structure of the subject's surgical site is obtained quickly and a reasonable surgical plan can be formulated.
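As an illustration of the reconstruction step alone, the sketch below extracts a triangulated surface from an already segmented binary vertebra volume with scikit-image's marching_cubes (named marching_cubes_lewiner in older releases); the 0.5 mm voxel spacing and the toy spherical mask stand in for real segmented CT data and are assumptions, and the level-set segmentation itself is not shown.

```python
import numpy as np
from skimage import measure

def reconstruct_surface(vertebra_mask, voxel_spacing_mm):
    """Extract a triangulated surface mesh from a binary vertebra segmentation.

    vertebra_mask    : 3D array, 1 inside the segmented vertebra, 0 elsewhere.
    voxel_spacing_mm : (z, y, x) voxel size, so the mesh is returned in mm.
    """
    verts, faces, normals, _ = measure.marching_cubes(
        vertebra_mask.astype(np.float32), level=0.5, spacing=voxel_spacing_mm)
    return verts, faces, normals

# Toy stand-in for a segmented vertebra: a sphere in a 64^3 volume, 0.5 mm voxels.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2
verts, faces, _ = reconstruct_surface(mask, (0.5, 0.5, 0.5))
print(verts.shape, faces.shape)
```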
From the CT images acquired before the operation and the reconstructed, segmented three-dimensional structure data, the path of the surgical instrument can be simulated intuitively; the segmented vertebra image and the planned surgical treatment scheme are shown in FIG. 3, which safeguards the correct implementation of intraoperative surgical navigation.
The spatial coordinate registration module obtains the image coordinates of the four passive infrared reflection marker balls in the scanned subject image. As shown in FIG. 4, the spatial coordinates are established through the binocular vision system, the image coordinates of the four marker balls are obtained in the CT-scanned subject image, and the transformation matrix between the image coordinates and the spatial coordinates is calculated, thereby establishing the relation between the two and registering the spatial coordinate system of the surgical instrument with the subject's CT image coordinate system.
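The patent does not spell out the solver for this space-to-image conversion matrix; a minimal sketch of one standard choice, the SVD-based (Kabsch) least-squares rigid fit over the four corresponding marker-ball positions, is given below with hypothetical marker coordinates.

```python
import numpy as np

def fit_rigid_transform(space_pts, image_pts):
    """Least-squares rigid transform mapping tracker-space points to CT-image points.

    space_pts, image_pts : (N, 3) corresponding marker-ball centers (N = 4 here).
    Returns a 4x4 homogeneous matrix T with image ~ T @ space.
    """
    space_pts = np.asarray(space_pts, float)
    image_pts = np.asarray(image_pts, float)
    cs, ci = space_pts.mean(0), image_pts.mean(0)
    H = (space_pts - cs).T @ (image_pts - ci)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = ci - R @ cs
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical data: P from the binocular tracker (mm), Q measured in the CT image (mm).
P = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = P @ R_true.T + np.array([10.0, 20.0, 5.0])
T = fit_rigid_transform(P, Q)
rmse = np.sqrt(np.mean(np.sum((P @ T[:3, :3].T + T[:3, 3] - Q) ** 2, axis=1)))
print(np.round(T, 3), "fiducial registration RMSE:", round(rmse, 6))
```

Three non-collinear markers already determine such a fit; the fourth adds redundancy, and the residual gives a quick check on registration quality.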
Surgical instrument calibration module: the accurate positioning of the surgical instrument, and in particular of its tip, is the core of the surgical navigation system and the key to its success. The precise positioning of the surgical instrument tip is usually accomplished by precisely calibrating the coordinates of the tip.
The first calibration method has to be completed with a precision target and depends too heavily on the marker points of that target, so the calibration result lacks both accuracy and stability. The second calibration method exploits the rigidity of the surgical instrument; it must acquire a large number of widely distributed track points to fit the spherical equation and requires large motion amplitudes while the instrument pivots around its tip, which causes small movements of the tip itself and degrades the calibration precision.
Therefore, a surgical instrument tip calibration method based on spatial transformation invariance and spherical constraint is developed. The method accurately calibrates the instrument tip from only a small number (at least 3) of statically acquired marker-point tracks with the instrument tip held fixed, and it retains good calibration precision even for surgical instruments with poor rigidity. The calibration comprises the following steps, with a numerical sketch given after the steps:
Step 1: fixing four infrared passive reflection marker balls with known three-dimensional spatial coordinates on the surgical instrument to be calibrated;
Step 2: placing the tip of the surgical instrument at the center of the calibration target, acquiring the three-dimensional spatial position information of the motion track of the infrared passive reflection marker balls with the binocular vision positioning camera, and transmitting the acquired track to the computer terminal;
Step 3: repeating Step 2 three times with a different motion track each time to obtain three groups of three-dimensional spatial position information of the infrared passive reflection marker balls; calculating the rotation-translation matrices of the surgical instrument between the motion tracks, calculating the coordinate vector of the rotation-invariant spatial point corresponding to each rotation-translation matrix, and taking the average value;
Step 4: forming a transformation matrix from the three-dimensional coordinate vectors of all the infrared passive reflection marker balls in one group and the three-dimensional coordinate vector of the instrument tip, which describes the position relation between the surgical instrument tip and the infrared passive reflection marker balls;
Step 5: transforming this position relation between the surgical instrument tip and the infrared passive reflection marker balls into the three-dimensional spatial coordinates of the surgical instrument according to the transformation matrix of the position relation, thereby establishing the spatial relation between the surgical instrument coordinate system and the subject's CT image coordinate system.
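The following is a minimal numerical sketch of the invariant-point idea behind these steps, written as a standard least-squares pivot calibration: the rigid pose of the marker set is fitted for each static acquisition with the tip held at the target center, and the tip offset and pivot point are solved in one joint least-squares system instead of averaging per-transform invariant points as in Step 3. The reference marker geometry, the poses and the tip offset are all synthetic, and the sketch is not claimed to be the patent's exact procedure.

```python
import numpy as np

def rigid_fit(A, B):
    """SVD (Kabsch) fit: rotation R and translation t with B ~ A @ R.T + t."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def pivot_calibrate(marker_sets, ref_markers):
    """Solve for the tip offset in the instrument (marker) frame.

    marker_sets : list of (4, 3) tracker-space marker coordinates, one per static
                  pose acquired with the tip held at the calibration-target center.
    ref_markers : (4, 3) marker coordinates expressed in the instrument's own frame.
    Returns (tip_in_instrument_frame, pivot_in_tracker_frame).
    """
    A_rows, b_rows = [], []
    for M in marker_sets:
        R, t = rigid_fit(ref_markers, np.asarray(M, float))  # instrument pose this shot
        A_rows.append(np.hstack([R, -np.eye(3)]))            # R @ p_tip - b = -t
        b_rows.append(-t)
    x, *_ = np.linalg.lstsq(np.vstack(A_rows), np.hstack(b_rows), rcond=None)
    return x[:3], x[3:]

# Synthetic check: a known tip offset and three poses pivoting about a fixed target.
rng = np.random.default_rng(0)
ref = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 30]], float)
true_tip, target = np.array([5.0, -2.0, 150.0]), np.array([100.0, 50.0, 300.0])
poses = []
for _ in range(3):
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    ang = rng.uniform(0.3, 1.0)
    Kx = np.array([[0, -axis[2], axis[1]],
                   [axis[2], 0, -axis[0]],
                   [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(ang) * Kx + (1 - np.cos(ang)) * (Kx @ Kx)  # Rodrigues
    t = target - R @ true_tip                 # keeps the tip exactly on the target
    poses.append(ref @ R.T + t)
tip, pivot = pivot_calibrate(poses, ref)
print(np.round(tip, 3), np.round(pivot, 3))   # ~ true_tip and ~ target
```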
The calibration error of the surgical instrument was verified in 50 experiments; compared with the traditional method, the proposed method reduces the calibration error from 0.62 mm to 0.27 mm.
Multimode image registration module: accurate three-dimensional spatial registration is one of the important conditions for the surgical navigation system to complete its navigation task successfully. Aiming at this clinical requirement, the invention develops a new registration algorithm based on a mixture of ICP (Iterative Closest Point) and B-splines, which significantly improves registration efficiency while guaranteeing registration accuracy.
The registration algorithm has two main parts: first, point-set registration of the point-set data of the image to be registered and of the reference image is performed with the global point-registration ICP algorithm to obtain a global rough registration image; then the global rough registration image and the reference image are taken as input images and B-spline registration is performed to obtain a fine local registration image, i.e., the registration image between the intraoperative cone-beam CT and the preoperative CT.
The registration steps, shown in FIG. 5, comprise:
Step 1: normalizing the acquired floating and reference images, namely the CT data and the cone-beam CT data, and extracting and sampling their point sets;
Step 2: performing ICP point-set registration with the sampled data to obtain a transformation matrix;
Step 3: transforming and interpolating the floating image with the transformation matrix obtained from the ICP point-set registration to obtain the ICP global registration image, which serves as the initial value for the B-spline registration;
Step 4: performing B-spline registration with the ICP-registered image and the initial reference image as the two registration images, and outputting the registration result once the optimal deformation parameters are obtained.
Because the point-set registration result is used as the initial value of the B-spline registration, the speed of the registration algorithm is greatly improved while the high precision of voxel-based registration is retained. The algorithm acquires the three-dimensional structure of the subject's surgical site during the operation with cone-beam CT and registers it with the preoperative three-dimensional CT images containing the subject's detailed anatomy, so as to fuse the preoperative and intraoperative subject images multimodally.
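Below is a compact sketch of the coarse stage only: an ICP loop over the segmented vertebra point sets, pairing points with a k-d tree and refitting a rigid transform by SVD in each iteration. The point clouds are synthetic, and the closing comment merely indicates one assumed way (e.g., SimpleITK) in which the resulting rigid transform could seed the voxel-based B-spline refinement; it is not the patent's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def _rigid_fit(A, B):
    """SVD-based least-squares rotation R and translation t with B ~ A @ R.T + t."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp_rigid(moving_pts, fixed_pts, iterations=50, tol=1e-6):
    """Coarsely align the intra-operative (CBCT) point set to the pre-operative (CT)
    point set.  Returns a 4x4 rigid transform and the final RMS closest-point error."""
    src = np.asarray(moving_pts, float).copy()
    dst = np.asarray(fixed_pts, float)
    tree = cKDTree(dst)
    T, prev_err, err = np.eye(4), np.inf, np.inf
    for _ in range(iterations):
        dists, idx = tree.query(src)           # closest-point correspondences
        R, t = _rigid_fit(src, dst[idx])       # best rigid fit for this pairing
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                           # accumulate the overall transform
        err = np.sqrt(np.mean(dists ** 2))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T, err

# Synthetic check: the "CBCT" cloud is a rotated, shifted copy of the "CT" cloud.
rng = np.random.default_rng(1)
ct_pts = rng.normal(size=(500, 3)) * [30.0, 20.0, 60.0]
ang = 0.2
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0,          0.0,         1.0]])
cbct_pts = ct_pts @ Rz.T + [5.0, -3.0, 8.0]
T_rigid, rms = icp_rigid(cbct_pts, ct_pts)
print(np.round(T_rigid, 3), "final RMS:", round(rms, 4))
# T_rigid would then serve only as the initial value of the voxel-based B-spline
# stage (for instance SimpleITK's BSplineTransformInitializer together with an
# ImageRegistrationMethod using a mutual-information metric).
```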
The visual interface module is used for obtaining real-time three-dimensional space coordinates of the tip of the surgical instrument based on the three-dimensional space coordinates of the plurality of passive infrared reflection marker balls at the tail end of the surgical instrument and the space relation between the surgical instrument coordinate system and the subject CT image coordinate system; the real-time three-dimensional spatial coordinates of the surgical instrument tip are converted to three-dimensional image coordinates based on a spatial coordinate registration module, thereby fusing the three-dimensional image coordinates of the surgical tip into a registered image.
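The coordinate chain just described can be condensed into the short sketch below: the calibrated tip offset is mapped into tracker space through the instrument's current pose and then into CT image coordinates through the space-to-image matrix from the spatial coordinate registration module; all matrices and values here are purely illustrative.

```python
import numpy as np

def tip_in_image(tip_offset, instrument_pose, space_to_image):
    """Map the calibrated tip position into CT-image coordinates.

    tip_offset      : (3,) tip position in the instrument (marker) frame,
                      from the surgical instrument calibration module.
    instrument_pose : 4x4 pose of the instrument frame in tracker space, refitted
                      every frame from the four marker balls.
    space_to_image  : 4x4 matrix from the spatial coordinate registration module.
    """
    tip_h = np.append(np.asarray(tip_offset, float), 1.0)  # homogeneous coordinates
    tip_tracker = instrument_pose @ tip_h                   # instrument -> tracker space
    tip_image = space_to_image @ tip_tracker                # tracker space -> CT image
    return tip_image[:3]

# Illustrative numbers only.
pose = np.eye(4)
pose[:3, 3] = [100.0, 50.0, 300.0]
s2i = np.eye(4)
s2i[:3, 3] = [-20.0, -20.0, 0.0]
print(tip_in_image([5.0, -2.0, 150.0], pose, s2i))
```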
FIG. 6 shows an example of the visualization interface module, which displays in real time a three-dimensional image of the subject's skeleton and two-dimensional images in several orientations; the green point is the preoperatively set target point that the surgical instrument is to reach. The interface displays the position and motion trajectory of the surgical instrument in the subject image in real time and feeds them back to the surgeon from multiple angles to guide the operation.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (6)

1. An orthopedic surgery navigation system based on multimode image fusion, characterized by comprising:
the binocular vision positioning camera is used for acquiring, in real time, the spatial coordinates of the passive infrared reflection marker balls placed on the surgical instrument and on the skin surface of the subject; a plurality of passive infrared reflection marker balls are arranged on the surgical instrument and on the subject, respectively; based on the binocular vision principle, the binocular vision positioning camera emits infrared light, receives the infrared light reflected by the passive infrared reflection marker balls, and calculates their three-dimensional spatial coordinates; from the obtained three-dimensional spatial coordinates of the marker balls, the spatial positions of the surgical instrument and the subject are obtained and sent to the spatial coordinate registration module and the surgical instrument calibration module, respectively;
the image segmentation module is used for accurately segmenting the vertebra in the initial image; the three-dimensionally reconstructed vertebra is accurately segmented and labeled with a segmentation algorithm based on multi-scale local-region level sets, using prior information on the human spine, so that the three-dimensional structure of the subject's surgical site is obtained quickly, and the preoperative segmented image is sent to the multimode image registration module;
the spatial coordinate registration module is used for registering the three-dimensional spatial coordinates of the passive infrared reflection marker balls on the subject's skin surface with the subject's CT image coordinates; a subject image is acquired by CT scanning with a plurality of passive infrared reflection marker balls fixed on the subject's skin surface; from these marker balls and the binocular vision positioning camera, their three-dimensional image coordinates in the CT are obtained, and the relation between the CT image coordinate system and the subject's spatial coordinate system is established; from this relation, the conversion between the three-dimensional spatial coordinates of the marker balls on the subject's skin surface and the subject's CT image coordinates is obtained, a space-to-image conversion matrix is established, and the matrix is sent to the visualization interface module;
the surgical instrument calibration module is used for obtaining three-dimensional space coordinates of a plurality of passive infrared reflection marker spheres placed at the tail end of the surgical instrument based on a binocular vision positioning camera according to a surgical instrument tip calibration method of space transformation invariance and spherical surface constraint; determining the three-dimensional space coordinates of the surgical instrument tip in a surgical instrument coordinate system through the rotational translation transformation relation among the three-dimensional space coordinates; according to the three-dimensional space coordinates of the tip of the surgical instrument in the surgical instrument coordinate system and the three-dimensional space coordinates of the plurality of passive infrared reflection marker balls placed at the tail end of the surgical instrument, the spatial relationship between the surgical instrument coordinate system and the CT image coordinate system of the subject is established, and the spatial relationship is sent to a visual interface module;
the multimode image registration module is used for acquiring the three-dimensional structure of the intraoperative subject's surgical site from the spine segmentation images of the intraoperative cone-beam CT and the preoperative CT obtained by the image segmentation module, and registering it with the preoperative three-dimensional CT segmentation image containing the subject's detailed anatomy, so as to fuse the preoperative and intraoperative subject images; using a hybrid registration algorithm based on ICP (Iterative Closest Point) and B-splines, a rough registration image is first obtained with the global point-registration ICP algorithm; the rough registration image is then taken as the input image and B-spline registration is performed to obtain a fine local registration image, i.e., the registration image between the intraoperative cone-beam CT and the preoperative CT, which is sent to the visualization interface module;
the visual interface module is used for obtaining real-time three-dimensional space coordinates of the tip of the surgical instrument based on the three-dimensional space coordinates of the plurality of passive infrared reflection marker balls at the tail end of the surgical instrument and the space relation between the surgical instrument coordinate system and the subject CT image coordinate system; and converting the real-time three-dimensional space coordinates of the surgical instrument tip into three-dimensional image coordinates based on the space coordinate registration module, thereby fusing the three-dimensional image coordinates of the surgical tip into a registration image, displaying the position and the motion track of the surgical instrument in the subject image in real time, and outputting from multiple angles.
2. The multi-mode image fusion-based bone surgery navigation system according to claim 1, characterized in that: in the binocular vision positioning camera, the number of the plurality of passive infrared reflection marker balls arranged on each of the surgical instrument and the subject is four.
3. The multi-mode image fusion-based bone surgery navigation system according to claim 1, characterized in that: in the surgical instrument calibration module, the surgical instrument tip calibration method based on spatial transformation invariance and spherical constraint is specifically realized as follows:
(1) fixing four infrared passive reflection marker balls with known three-dimensional spatial coordinates on the surgical instrument to be calibrated;
(2) placing the tip of the surgical instrument at the center of a calibration target, acquiring the three-dimensional spatial position information of the motion track of the infrared passive reflection marker balls with the binocular vision positioning camera, and transmitting the acquired track to the computer terminal;
(3) repeating step (2) multiple times with a different motion track each time to obtain three groups of three-dimensional spatial position information of the infrared passive reflection marker balls; calculating the rotation-translation matrices of the surgical instrument between the motion tracks, calculating the coordinate vector of the rotation-invariant spatial point corresponding to each rotation-translation matrix, and taking the average value;
(4) forming a transformation matrix from the three-dimensional coordinate vectors of all the infrared passive reflection marker balls in one group and the three-dimensional coordinate vector of the instrument tip, which describes the position relation between the surgical instrument tip and the infrared passive reflection marker balls;
(5) transforming this position relation between the surgical instrument tip and the infrared passive reflection marker balls into the three-dimensional spatial coordinates of the surgical instrument according to the transformation matrix of the position relation, thereby establishing the spatial relation between the surgical instrument coordinate system and the subject's CT image coordinate system.
4. The multi-mode image fusion-based bone surgery navigation system according to claim 1, characterized in that: in the multimode image registration module, the hybrid registration algorithm based on ICP (Iterative Closest Point) and B-splines is divided into two parts: a global rough registration image is obtained with the global point-registration ICP algorithm; then the global rough registration image and the reference image are taken as input images and B-spline registration is performed to obtain a fine local registration image; the specific implementation is as follows:
(1) acquiring the CT image to be registered and the reference cone-beam CT image;
(2) performing image segmentation on the image to be registered and on the reference image, segmenting the target object in each, and extracting coordinate points, the number of coordinate points of the image to be registered being the same as that of the reference image;
(3) performing point-set-based registration on the point-set data of the image to be registered and of the reference image with the global point-registration ICP algorithm to obtain the global rough registration image;
(4) taking the global rough registration image and the reference image as input images and performing B-spline registration to obtain the fine local registration image, i.e., the registration image between the intraoperative cone-beam CT and the preoperative CT.
5. The multi-modality image fusion-based bone surgery navigation system according to claim 4, characterized in that step (3) is specifically realized as follows:
(1) finding the closest point in the reference image point set for each point of the point set of the image to be registered, solving the rotation-translation matrix from these closest-point pairs, and transforming the point set of the image to be registered with the obtained matrix to produce a new floating point set;
(2) taking the new floating point set obtained in step (1) as the initial point set for the next round of calculation, and repeating until the optimal transformation matrix is obtained;
(3) applying the optimal transformation matrix obtained in step (2) to the reference image, performing operations such as image interpolation, searching for the closest points (which may yield mismatched point pairs), and calculating the error;
(4) repeating steps (1) to (3) for a number of iterations until the error meets the requirement.
6. The multi-modality image fusion-based bone surgery navigation system according to claim 4, characterized in that step (4) is specifically realized as follows:
(1) computing the similarity measure between the global rough registration image and the reference image, and setting the initial control points;
(2) if the similarity obtained in step (1) has not reached the convergence threshold, computing the transformation field from the control points, and transforming and interpolating the registration image with this field to obtain a new registration image;
(3) repeating steps (1) and (2) to update the similarity measure, and ending the iteration when the optimal solution or the threshold is reached.
CN201911336544.4A 2019-12-23 2019-12-23 Bone surgery navigation system based on multimode image fusion Active CN110946654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911336544.4A CN110946654B (en) 2019-12-23 2019-12-23 Bone surgery navigation system based on multimode image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911336544.4A CN110946654B (en) 2019-12-23 2019-12-23 Bone surgery navigation system based on multimode image fusion

Publications (2)

Publication Number Publication Date
CN110946654A true CN110946654A (en) 2020-04-03
CN110946654B CN110946654B (en) 2022-02-08

Family

ID=69983424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911336544.4A Active CN110946654B (en) 2019-12-23 2019-12-23 Bone surgery navigation system based on multimode image fusion

Country Status (1)

Country Link
CN (1) CN110946654B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111588476A (en) * 2020-05-18 2020-08-28 苏州立威新谱生物科技有限公司 Method and system for realizing multi-zone surgical operation control and readable storage medium
CN111588469A (en) * 2020-05-18 2020-08-28 四川大学华西医院 Ophthalmic robot end effector guidance and positioning system
CN111658065A (en) * 2020-05-12 2020-09-15 北京航空航天大学 Digital guide system for mandible cutting operation
CN111743618A (en) * 2020-08-05 2020-10-09 哈尔滨梓滨科技有限公司 Binocular optics-based bipolar electric coagulation forceps positioning device and method
CN111768497A (en) * 2020-06-29 2020-10-13 深圳大学 Three-dimensional reconstruction method, device and system of head dynamic virtual model
CN111887988A (en) * 2020-07-06 2020-11-06 罗雄彪 Positioning method and device of minimally invasive interventional operation navigation robot
CN112908455A (en) * 2021-03-04 2021-06-04 苏州迪凯尔医疗科技有限公司 Surgical instrument precision verification method
CN113040908A (en) * 2021-02-02 2021-06-29 武汉联影智融医疗科技有限公司 Registration method, device, computer equipment and storage medium for surgical navigation
CN113057734A (en) * 2021-03-12 2021-07-02 上海微创医疗机器人(集团)股份有限公司 Surgical system
CN113143463A (en) * 2021-03-16 2021-07-23 上海交通大学 Operation navigation device, system, calibration method, medium and electronic equipment
CN113262048A (en) * 2021-04-25 2021-08-17 深影医疗科技(深圳)有限公司 Spatial registration method and device, terminal equipment and intraoperative navigation system
CN113313754A (en) * 2020-12-23 2021-08-27 南京凌华微电子科技有限公司 Bone saw calibration method and system in surgical navigation
CN113554710A (en) * 2020-04-24 2021-10-26 西门子(深圳)磁共振有限公司 Calibration method, system and storage medium of 3D camera in medical image system
WO2021217713A1 (en) * 2020-04-26 2021-11-04 深圳市鑫君特智能医疗器械有限公司 Surgical navigation system, computer for performing surgical navigation method, and storage medium
CN113610826A (en) * 2021-08-13 2021-11-05 推想医疗科技股份有限公司 Puncture positioning method and device, electronic device and storage medium
CN113662662A (en) * 2021-07-30 2021-11-19 北京天智航医疗科技股份有限公司 Data precision detection method and device, storage medium and electronic equipment
CN113855288A (en) * 2021-11-01 2021-12-31 杭州柳叶刀机器人有限公司 Image generation method, image generation device, electronic equipment and storage medium
CN113855236A (en) * 2021-09-03 2021-12-31 北京长木谷医疗科技有限公司 Method and system for tracking and moving surgical robot
CN114052907A (en) * 2021-11-22 2022-02-18 南京普爱医疗设备股份有限公司 Surgical navigation positioning system and registration method thereof
CN114066947A (en) * 2020-07-30 2022-02-18 杭州三坛医疗科技有限公司 Image registration method and image registration device
CN114145846A (en) * 2021-12-06 2022-03-08 北京理工大学 Operation navigation method and system based on augmented reality assistance
CN114224485A (en) * 2021-11-01 2022-03-25 中国医学科学院北京协和医院 Navigation system for open type spinal vertebral plate decompression operation and control method
CN114587584A (en) * 2022-03-04 2022-06-07 杭州湖西云百生科技有限公司 Navigation system visualization method and system for improving orthopedics nail implantation operation safety
CN115089293A (en) * 2022-07-04 2022-09-23 山东大学 Calibration method for spinal endoscopic surgical robot
WO2022206407A1 (en) * 2021-04-01 2022-10-06 上海复拓知达医疗科技有限公司 Augmented reality-based puncture surgery navigation apparatus and computer-readable storage medium
CN116725662A (en) * 2023-08-11 2023-09-12 北京维卓致远医疗科技发展有限责任公司 Fracture surgery planning method, device and storable medium based on two-dimensional images
CN117017487A (en) * 2023-10-09 2023-11-10 杭州键嘉医疗科技股份有限公司 Spinal column registration method, device, equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082988A (en) * 2007-06-19 2007-12-05 北京航空航天大学 Automatic deepness image registration method
CN101160104A (en) * 2005-02-22 2008-04-09 马科外科公司 Haptic guidance system and method
CN101388115A (en) * 2008-10-24 2009-03-18 北京航空航天大学 Depth image autoegistration method combined with texture information
CN101660904A (en) * 2009-09-22 2010-03-03 大连海事大学 Kinematics calibration method of measurement robot
CN102654387A (en) * 2012-05-25 2012-09-05 南京理工大学 Online industrial robot calibration device based on spatial curved surface restraint
EP2910187A1 (en) * 2014-02-24 2015-08-26 Université de Strasbourg (Etablissement Public National à Caractère Scientifique, Culturel et Professionnel) Automatic multimodal real-time tracking of moving instruments for image plane alignment inside a MRI scanner
CN106580473A (en) * 2016-12-29 2017-04-26 中国科学院合肥物质科学研究院 Operation appliance calibration method applied to operation navigation system
CN106934821A (en) * 2017-03-13 2017-07-07 中国科学院合肥物质科学研究院 A kind of conical beam CT and CT method for registering images based on ICP algorithm and B-spline
CN107536643A (en) * 2017-08-18 2018-01-05 北京航空航天大学 A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction
WO2018175737A1 (en) * 2017-03-22 2018-09-27 Intuitive Surgical Operations, Inc. Systems and methods for intelligently seeding registration

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101160104A (en) * 2005-02-22 2008-04-09 马科外科公司 Haptic guidance system and method
CN101082988A (en) * 2007-06-19 2007-12-05 北京航空航天大学 Automatic deepness image registration method
CN101388115A (en) * 2008-10-24 2009-03-18 北京航空航天大学 Depth image autoegistration method combined with texture information
CN101660904A (en) * 2009-09-22 2010-03-03 大连海事大学 Kinematics calibration method of measurement robot
CN102654387A (en) * 2012-05-25 2012-09-05 南京理工大学 Online industrial robot calibration device based on spatial curved surface restraint
EP2910187A1 (en) * 2014-02-24 2015-08-26 Université de Strasbourg (Etablissement Public National à Caractère Scientifique, Culturel et Professionnel) Automatic multimodal real-time tracking of moving instruments for image plane alignment inside a MRI scanner
CN106580473A (en) * 2016-12-29 2017-04-26 中国科学院合肥物质科学研究院 Operation appliance calibration method applied to operation navigation system
CN106934821A (en) * 2017-03-13 2017-07-07 中国科学院合肥物质科学研究院 A kind of conical beam CT and CT method for registering images based on ICP algorithm and B-spline
WO2018175737A1 (en) * 2017-03-22 2018-09-27 Intuitive Surgical Operations, Inc. Systems and methods for intelligently seeding registration
CN107536643A (en) * 2017-08-18 2018-01-05 北京航空航天大学 A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113554710A (en) * 2020-04-24 2021-10-26 西门子(深圳)磁共振有限公司 Calibration method, system and storage medium of 3D camera in medical image system
WO2021217713A1 (en) * 2020-04-26 2021-11-04 深圳市鑫君特智能医疗器械有限公司 Surgical navigation system, computer for performing surgical navigation method, and storage medium
CN111658065A (en) * 2020-05-12 2020-09-15 北京航空航天大学 Digital guidance system for mandibular osteotomy surgery
CN111588469A (en) * 2020-05-18 2020-08-28 四川大学华西医院 Ophthalmic robot end effector guidance and positioning system
CN111588476A (en) * 2020-05-18 2020-08-28 苏州立威新谱生物科技有限公司 Method and system for realizing multi-zone surgical operation control and readable storage medium
CN111588469B (en) * 2020-05-18 2021-02-02 四川大学华西医院 Ophthalmic robot end effector guidance and positioning system
CN111768497A (en) * 2020-06-29 2020-10-13 深圳大学 Three-dimensional reconstruction method, device and system of head dynamic virtual model
CN111887988A (en) * 2020-07-06 2020-11-06 罗雄彪 Positioning method and device of minimally invasive interventional operation navigation robot
CN114066947A (en) * 2020-07-30 2022-02-18 杭州三坛医疗科技有限公司 Image registration method and image registration device
CN114066947B (en) * 2020-07-30 2022-10-14 杭州三坛医疗科技有限公司 Image registration method and image registration device
CN111743618A (en) * 2020-08-05 2020-10-09 哈尔滨梓滨科技有限公司 Binocular optics-based bipolar electric coagulation forceps positioning device and method
CN113313754A (en) * 2020-12-23 2021-08-27 南京凌华微电子科技有限公司 Bone saw calibration method and system in surgical navigation
CN113040908A (en) * 2021-02-02 2021-06-29 武汉联影智融医疗科技有限公司 Registration method, device, computer equipment and storage medium for surgical navigation
CN112908455A (en) * 2021-03-04 2021-06-04 苏州迪凯尔医疗科技有限公司 Surgical instrument precision verification method
CN113057734A (en) * 2021-03-12 2021-07-02 上海微创医疗机器人(集团)股份有限公司 Surgical system
CN113143463A (en) * 2021-03-16 2021-07-23 上海交通大学 Surgical navigation device, system, calibration method, medium and electronic equipment
CN113143463B (en) * 2021-03-16 2022-08-26 上海交通大学 Surgical navigation device, system, calibration method, medium and electronic equipment
WO2022206407A1 (en) * 2021-04-01 2022-10-06 上海复拓知达医疗科技有限公司 Augmented reality-based puncture surgery navigation apparatus and computer-readable storage medium
CN113262048A (en) * 2021-04-25 2021-08-17 深影医疗科技(深圳)有限公司 Spatial registration method and device, terminal equipment and intraoperative navigation system
CN113262048B (en) * 2021-04-25 2022-06-24 深影医疗科技(深圳)有限公司 Spatial registration method and device, terminal equipment and intraoperative navigation system
CN113662662A (en) * 2021-07-30 2021-11-19 北京天智航医疗科技股份有限公司 Data precision detection method and device, storage medium and electronic equipment
CN113662662B (en) * 2021-07-30 2023-10-27 北京天智航医疗科技股份有限公司 Data precision detection method and device, storage medium and electronic equipment
CN113610826A (en) * 2021-08-13 2021-11-05 推想医疗科技股份有限公司 Puncture positioning method and device, electronic device and storage medium
CN113855236A (en) * 2021-09-03 2021-12-31 北京长木谷医疗科技有限公司 Method and system for tracking and moving surgical robot
CN113855236B (en) * 2021-09-03 2022-05-31 北京长木谷医疗科技有限公司 Method and system for tracking and moving surgical robot
CN114224485A (en) * 2021-11-01 2022-03-25 中国医学科学院北京协和医院 Navigation system and control method for open spinal laminectomy decompression surgery
CN113855288A (en) * 2021-11-01 2021-12-31 杭州柳叶刀机器人有限公司 Image generation method, image generation device, electronic equipment and storage medium
CN114224485B (en) * 2021-11-01 2024-03-26 中国医学科学院北京协和医院 Navigation system and control method for open spinal laminectomy decompression surgery
CN114052907A (en) * 2021-11-22 2022-02-18 南京普爱医疗设备股份有限公司 Surgical navigation positioning system and registration method thereof
CN114145846B (en) * 2021-12-06 2024-01-09 北京理工大学 Surgical navigation method and system based on augmented reality assistance
CN114145846A (en) * 2021-12-06 2022-03-08 北京理工大学 Surgical navigation method and system based on augmented reality assistance
CN114587584A (en) * 2022-03-04 2022-06-07 杭州湖西云百生科技有限公司 Navigation system visualization method and system for improving the safety of orthopedic nail implantation surgery
CN114587584B (en) * 2022-03-04 2023-10-03 杭州湖西云百生科技有限公司 Navigation system visualization method and system for improving the safety of orthopedic nail implantation surgery
CN115089293A (en) * 2022-07-04 2022-09-23 山东大学 Calibration method for spinal endoscopic surgical robot
CN116725662A (en) * 2023-08-11 2023-09-12 北京维卓致远医疗科技发展有限责任公司 Fracture surgery planning method, device and storage medium based on two-dimensional images
CN116725662B (en) * 2023-08-11 2023-11-03 北京维卓致远医疗科技发展有限责任公司 Fracture surgery planning method, device and storage medium based on two-dimensional images
CN117017487B (en) * 2023-10-09 2024-01-05 杭州键嘉医疗科技股份有限公司 Spinal column registration method, device, equipment and storage medium
CN117017487A (en) * 2023-10-09 2023-11-10 杭州键嘉医疗科技股份有限公司 Spinal column registration method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110946654B (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN110946654B (en) Bone surgery navigation system based on multimode image fusion
EP3254621B1 (en) 3D image special calibrator, surgical localizing system and method
EP4265214A1 (en) Navigation system and method for joint replacement surgery
EP4159149A1 (en) Surgical navigation system, computer for performing surgical navigation method, and storage medium
CN107456278B (en) Endoscopic surgery navigation method and system
EP2953569B1 (en) Tracking apparatus for tracking an object with respect to a body
CN112971982B (en) Surgical navigation system based on intrahepatic vascular registration
CN101474075B (en) Navigation system for minimally invasive surgery
CN110264504B (en) Three-dimensional registration method and system for augmented reality
CN102784003B (en) Vertebral pedicle internal fixation surgical navigation system based on structured light scanning
CN112220557B (en) Surgical navigation and robotic arm device for craniocerebral puncture, and positioning method
US20080242978A1 (en) Method and apparatus for registering a physical space to image space
CN202751447U (en) Vertebral pedicle internal fixation surgical navigation system based on structured light scanning
KR102105974B1 (en) Medical imaging system
CN107854177A (en) Ultrasound and CT/MR image fusion surgical navigation system and method based on optical positioning registration
CN109646089A (en) Intelligent positioning system and method for spine and spinal cord puncture entry points based on multimodal medical image fusion
CN103948432A (en) Intraoperative augmented reality algorithm for three-dimensional endoscopic video and ultrasound images
CN111093505B (en) Radiographic apparatus and image processing method
CN114727847A (en) System and method for computing coordinate system transformations
Shao et al. Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation
CN115358995A (en) Fully automatic spatial registration system based on multimodal information fusion
CN114983567A (en) Navigation system for minimally invasive femoral neck fracture surgery
JP2023520618A (en) Method and system for using multi-view pose estimation
Zhang et al. A hybrid feature-based patient-to-image registration method for robot-assisted long bone osteotomy
CN112381750A (en) Multimodal registration and fusion method for ultrasound and CT/MRI images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant