CN109464196B - Surgical navigation system adopting structured light image registration and registration signal acquisition method - Google Patents

Surgical navigation system adopting structured light image registration and registration signal acquisition method

Info

Publication number
CN109464196B
CN109464196B (application CN201910012362.5A)
Authority
CN
China
Prior art keywords
image
registration
structured light
dimensional
mri
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910012362.5A
Other languages
Chinese (zh)
Other versions
CN109464196A (en)
Inventor
李书纲 (Li Shugang)
陈鑫 (Chen Xin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an hehuaruibo Technology Co.,Ltd.
Original Assignee
Beijing Hehua Ruibo Technology Co ltd
Beijing And Huaruibo Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hehua Ruibo Technology Co ltd, Beijing And Huaruibo Medical Technology Co ltd filed Critical Beijing Hehua Ruibo Technology Co ltd
Priority to CN201910012362.5A priority Critical patent/CN109464196B/en
Publication of CN109464196A publication Critical patent/CN109464196A/en
Application granted granted Critical
Publication of CN109464196B publication Critical patent/CN109464196B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2055 Optical tracking systems
    • A61B2034/2065 Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application discloses a surgical navigation system adopting structured light image registration and a registration signal acquisition method. The system comprises a structured light navigator, a static reference, an end effector and a control computer; the structured light navigator is provided with two cameras, which scan the face, acquire a three-dimensional image and send it to the control computer; a visible light mark is arranged on the static reference; the control computer processes the image and registers the processing result with the original model, and is provided with a display screen for displaying the registration result and monitoring the status. The surgical navigation system acquires a three-dimensional point cloud image of the patient's face through structured light scanning and registers it with the original model. The point cloud carries a large amount of information, is acquired quickly and contains many feature points, so registration accuracy is higher when the face is registered against preoperative MRI three-dimensional reconstruction data; the problems of insufficient feature points and low registration accuracy in traditional registration are thereby solved.

Description

Surgical navigation system adopting structured light image registration and registration signal acquisition method
Technical Field
The application relates to the technical field of medical instruments, and in particular to a surgical navigation system adopting structured light image registration.
Background
Neurosurgery differs from other types of surgery in several respects: the nerves and blood vessels are intricate, the brain tissue and other structures are extremely fine and functionally critical, and the slightest carelessness can damage a vessel or nerve and cause severe disability or even threaten life. In addition, the structures of the nervous system have little freedom of movement and cannot be pulled or displaced freely during an operation. In conventional neurosurgery, therefore, more of the skull bone structure is often sacrificed in order to expose the lesion. The neuronavigation system is one of the most important auxiliary devices for minimally invasive neurosurgery and a sharp tool in the neurosurgeon's hand. Like car navigation, it informs the operator of the current operating position in real time, so that the operator can make more accurate predictions and judgments.
A surgical navigation system is built on medical image data such as magnetic resonance and CT and displays a three-dimensionally visualized 'virtual human brain' on a computer, showing where the probe in the doctor's hand is pointing, whether it has reached the edge of the tumor, and whether important tissue lies in front of it, so that a specified target can be reached along a specified path. The most important process executed by the navigation system is 'registration', the matching of the geometric coordinates of images of the same region obtained by different imaging means. Registration is the bridge between the real situation and the computer: through registration, entities in the real environment are linked to the coordinates used by the computer, so that the real scene and the virtual surgical plan are superimposed and the surgical plan can be executed accurately.
At present, the navigation technique used in neurosurgery mainly employs an infrared tracker. A probe fitted with reflective balls touches characteristic anatomical points of the patient's skull and face (such as the orbits, the nose and the midpoint of the frontal bone between the eyebrows); the infrared tracker obtains the spatial coordinates of these points, reconstructs them, and matches them with the preoperative CT or MRI three-dimensional reconstruction, thereby achieving registration. This method, however, limits registration accuracy for the following reasons: first, touching soft tissues such as the orbit and nose with the probe causes local deformation, so the spatial coordinate of the probe is displaced and deviates from the intended spatial position; second, the number of anatomical points that can be acquired with the probe is extremely limited, so the computer receives little information and registration accuracy suffers; increasing the amount of information inevitably means touching more acquisition sites with the probe, which takes more time and reduces efficiency.
Disclosure of Invention
The application provides a surgical navigation system adopting structured light image registration and a registration signal acquisition method, aiming to solve the problems of limited registration accuracy and low registration efficiency in the prior art.
The application provides an operation navigation system adopting structured light image registration, which comprises a structured light navigator, a static reference, an end effector and a control computer; the structured light navigator, the static reference and the end effector are electrically connected to the control computer;
the structured light navigator is provided with two cameras, and the cameras are used for scanning the face, acquiring a three-dimensional image and sending the three-dimensional image to the control computer;
a visible light mark is arranged on the static reference;
the control computer is used for processing the image and registering the processing result with the original model; the control computer is provided with a display screen for displaying the registration result and monitoring the status.
Optionally, the structured light navigator is configured to emit structured light to the face, so that the image of the face is distorted.
Optionally, the original model is an MRI three-dimensional digital model.
The application also provides a registration signal acquisition method of the surgical navigation system by adopting structured light image registration, which comprises the following steps:
establishing a structured light coordinate system, and determining the position of the visible light mark in the structured light coordinate system;
acquiring a structured light three-dimensional image;
reconstructing a face three-dimensional image according to the structured light three-dimensional image;
and carrying out local surface registration on the structured light three-dimensional image and the MRI three-dimensional digital image.
Optionally, the acquiring a structured light three-dimensional image includes:
scanning the face of a patient to distort the facial image;
shooting a distorted image;
the photographed image is transmitted to the control computer.
Optionally, the method further includes:
and acquiring the relative relation between the spatial posture of the static reference and the three-dimensional face image.
Optionally, the method further includes:
establishing an MRI coordinate system, and determining the position of the visible light marker in the MRI coordinate system;
acquiring an MRI three-dimensional digital image based on the MRI three-dimensional digital model.
optionally, the performing local surface registration on the structured light three-dimensional image and the MRI three-dimensional digital image includes:
associating the coordinate systems of the structured light three-dimensional image and the MRI three-dimensional digital image with a static reference to obtain a conversion relation between the coordinate systems;
tracking the visible light mark, and acquiring the space coordinates of the visible light mark in different images;
a registration signal is acquired that enables the end effector to achieve registration-based precision navigation.
Optionally, the method further includes:
positioning the face of the patient using a plurality of static references, and respectively acquiring the positions of the visible light marks in a coordinate system;
acquiring the position relation among a plurality of visible light marks;
detecting whether the position relation changes; if the change occurs, an alarm signal is sent out.
The surgical navigation system and the registration signal acquisition method provided by the application have the following beneficial effects:
1. The three-dimensional point cloud image of the patient's whole face is obtained by structured light scanning. The point cloud carries a large amount of information, is acquired quickly and contains many feature points, so registration accuracy is higher when the face is registered against preoperative MRI three-dimensional reconstruction data; the problems of insufficient feature points and low registration accuracy of single CT registration are solved.
2. During an operation, if the face has to be covered for sterility, the system uses the static reference as an intermediary: the coordinate system obtained after registering the structured light data with the MRI three-dimensional data is associated with the static reference, so that the MRI three-dimensional reconstruction data, the structured light facial three-dimensional data and the static reference are all located in the same coordinate system. Accurate navigation based on accurate registration can then be achieved simply by detecting the position of the static reference.
3. Because structured light scanning requires no direct contact with the patient, image acquisition is faster and the loss of registration accuracy caused by contact with the patient is avoided.
4. If a static reference is displaced by human factors during the operation and the navigation function is affected, the system and method immediately raise an alarm to remind the operator to correct it in time.
Drawings
In order to explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a surgical navigation system employing structured light image registration according to the present application;
FIG. 2 is a flowchart of a registration signal acquisition method of a surgical navigation system employing structured light image registration according to the present application;
FIG. 3 is an exploded view of one embodiment of step S20 of the method provided herein;
FIG. 4 is an exploded view of step S40 of the method of FIG. 2, under one embodiment;
FIG. 5 is a flow diagram of a method provided herein, under one embodiment;
FIG. 6 is a flow chart of a method provided herein under another embodiment;
FIG. 7 is a flow chart of a method provided herein under yet another embodiment;
in the figure, 1-structured light navigator, 11-camera, 2-static reference, 3-end effector, 4-control computer, 41-display.
Detailed Description
A surgical navigation system accurately maps the patient's preoperative or intraoperative image data onto the anatomy of the patient on the operating table, tracks the surgical instrument during the operation and continuously displays its position on the patient image as a virtual probe, so that the doctor can clearly see the position of the instrument relative to the patient's anatomy and the operation becomes faster, more accurate and safer. The most important process in a surgical navigation system is registration: through registration, real entities in the environment are linked to coordinates in the computer, so that the real scene and the virtual surgical plan are combined and the plan can be executed accurately. Feature extraction is the key to the registration technique, and accurate feature extraction is a prerequisite for successful feature matching. The embodiments of the application therefore provide a surgical navigation system based on structured light registration and a registration signal acquisition method.
The surgical navigation system and registration signal acquisition method provided by the application are intended for neurosurgery, which has the following particularities: the nerves and blood vessels are intricate, the brain tissue and other structures are extremely fine and functionally critical, the slightest carelessness can damage a vessel or nerve and cause severe disability or even threaten life, and the structures of the nervous system have little freedom of movement and cannot be pulled or displaced freely during an operation. Neurosurgery therefore requires a highly accurate surgical navigation system to help the operator make more accurate predictions and judgments.
Fig. 1 is a schematic structural diagram of an operation navigation system using structured light image registration according to the present application;
as can be seen from fig. 1, the present application provides a surgical navigation system using structured light image registration, the system includes a structured light navigator 1, a static reference 2, an end effector 3 and a control computer 4; the structured light navigator 1, the static reference 2 and the end effector 3 are all electrically connected to the control computer 4;
in the present embodiment, the structured light navigator 1 is configured to project structured light onto a surface of an object and acquire an image with distortion, where the image mainly refers to a facial image of a patient; the structured light navigator 1 is provided with two cameras 11, and the cameras 11 are used for scanning a face, acquiring a three-dimensional image and sending the three-dimensional image to the control computer 4;
further, in order to reduce the shielding of the cameras during the registration and tracking processes in the operation, in a preferred embodiment, the two cameras 11 are respectively disposed at two sides of the operation area and are located obliquely above the operation table and arranged diagonally, so as to ensure that real-time three-dimensional images can be obtained in the whole course of the operation.
Specifically, in this embodiment the three-dimensional image is essentially a three-dimensional image of the object reconstructed from a three-dimensional point cloud after surface patch encapsulation. The acquisition process is as follows: a structured light three-dimensional point cloud image is obtained by the charge-coupled device in the camera 11. A point cloud is a collection of a massive number of points sharing the characteristics of the target surface; a point cloud obtained by combining laser measurement and photogrammetry principles generally contains three-dimensional coordinates (XYZ), laser reflection intensity and color information. The three-dimensional point cloud image therefore provides more feature points and more information per feature point, which makes the subsequent registration more accurate.
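As an illustration of the kind of record such a point cloud can hold, the following minimal sketch uses a NumPy structured array; the field names, dtypes and sample values are assumptions for illustration only and are not taken from the patent.

```python
import numpy as np

# Illustrative record layout for one point of the structured light point cloud:
# three-dimensional coordinates (XYZ), laser reflection intensity and colour,
# as enumerated in the description above.
point_dtype = np.dtype([
    ("xyz", np.float32, (3,)),    # X, Y, Z in the navigator coordinate system (mm)
    ("intensity", np.float32),    # laser reflection intensity
    ("rgb", np.uint8, (3,)),      # colour information
])

def empty_cloud(n_points: int) -> np.ndarray:
    """Allocate an empty point cloud with n_points records."""
    return np.zeros(n_points, dtype=point_dtype)

cloud = empty_cloud(100_000)
cloud["xyz"][0] = (12.5, -3.2, 410.0)
cloud["intensity"][0] = 0.87
cloud["rgb"][0] = (182, 140, 121)
```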
Further, the two cameras 11 in this embodiment track the visible light marks on the end effector 3 and the static reference 2 using the "binocular stereo vision" technique. Binocular stereo vision, an important form of machine vision, acquires three-dimensional geometric information of an object on the basis of the parallax principle: two images of the object are acquired from different positions by imaging devices, and the positional deviation between corresponding points of the two images is calculated. With this method the cameras 11 can acquire and calibrate, during the operation, the spatial coordinates of the visible light marks in the coordinate system of the structured light navigator 1 (the coordinate expression appears only as a formula image in the original patent).
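The patent does not reproduce its triangulation formula in text, so the sketch below uses the standard rectified two-camera relation, depth = focal length x baseline / disparity; the function, the calibration values and the pixel coordinates are assumptions for illustration, not the patent's own computation.

```python
import numpy as np

def triangulate_rectified(u_left: float, u_right: float, v: float,
                          f_px: float, baseline_mm: float,
                          cx: float, cy: float) -> np.ndarray:
    """Standard rectified binocular triangulation (not the patent's own formula).

    u_left / u_right : horizontal pixel coordinate of the same visible light mark
                       in the left and right camera image
    v                : vertical pixel coordinate (equal in both images after rectification)
    f_px             : focal length in pixels
    baseline_mm      : distance between the two camera centres
    cx, cy           : principal point of the rectified images
    """
    disparity = u_left - u_right                # positional deviation between corresponding points
    z = f_px * baseline_mm / disparity          # depth along the optical axis
    x = (u_left - cx) * z / f_px
    y = (v - cy) * z / f_px
    return np.array([x, y, z])                  # coordinates in the navigator frame (mm)

# Example with assumed calibration values.
p = triangulate_rectified(u_left=812.0, u_right=776.0, v=604.0,
                          f_px=1400.0, baseline_mm=120.0, cx=640.0, cy=512.0)
```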
At this time, since the positions of the end effector 3 and the static reference 2 are controllable and highly accurate, that is, their physical forms (position and related information) are known, the relationship between the end effector 3 and the static reference 2, including the positional relationship of their spatial coordinates in the coordinate system, can be established by calculating their spatial postures. The relationship between the static reference 2 and the intracranial navigation path of the patient is obtained by registration, so the orientation of the end effector 3 in the navigation image of the surgical navigation system can be determined.
A visible light mark is arranged on the static reference 2. During the operation the static reference 2 is placed near the operating area, a static reference coordinate system is generated from it, and the position of the visible light mark in that coordinate system is obtained.
Since the structured light navigator 1 and the static reference 2 are both electrically connected to the control computer 4, the control computer 4 multiplies the three-dimensional coordinates (XYZ) scanned by the camera 11 and the static reference coordinates generated by the static reference 2 by the transformation matrix between the two coordinate systems (shown only as a formula image in the original patent), and thereby obtains the relative relationship of the two coordinate systems along the X, Y and Z axes together with the rotation angles about the three axes, i.e., the relative relationship between the spatial posture of the static reference and the three-dimensional face image, so that the two coordinate systems can be represented in one common coordinate system.
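Because the transformation matrix itself is only shown as a formula image in the original patent, the following sketch illustrates the general idea with a generic 4x4 homogeneous transform built from three rotation angles and a translation; all numeric values are assumed.

```python
import numpy as np

def rotation_from_angles(rx: float, ry: float, rz: float) -> np.ndarray:
    """Rotation matrix from rotation angles about the X, Y and Z axes (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def homogeneous(R: np.ndarray, t) -> np.ndarray:
    """4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed pose of the static reference coordinate system in the navigator frame.
T_nav_from_ref = homogeneous(rotation_from_angles(0.02, -0.01, 0.15),
                             t=[210.0, -45.0, 830.0])

# A visible light mark given in the static reference frame, expressed in the navigator frame.
mark_ref = np.array([10.0, 0.0, 5.0, 1.0])      # homogeneous coordinates
mark_nav = T_nav_from_ref @ mark_ref
```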
In this embodiment, the end effector 3 is the terminal device used for navigation, for example a tool whose surface carries a reference frame with reflective balls so that the structured light navigator 1 can acquire its spatial position. The end effector 3 is moved along the specified path by a driving mechanism, which may be operated manually or by an electrically controlled mechanical arm; this embodiment places no limitation on the driving mechanism.
The control computer 4 is used for processing the image and registering the processing result with the original model; the control computer is provided with a display screen 41 for displaying the registration result and monitoring the status.
The original model mentioned in this embodiment is a simulated image of the patient's head obtained before the operation using existing techniques; the model used may be an MRI reconstructed image. Taking an MRI three-dimensional digital model as an example: before the operation, based on continuous thin-slice three-dimensional cross-sectional MRI scan data of the head, the resulting MRI three-dimensional image clearly displays the soft tissue of the patient's head, such as the facial soft tissue, the intracranial brain tissue and the relative relationship between the two. From this head soft tissue information, the site to be operated on can be preliminarily calculated on the control computer, and a surgical channel and a simulated working path are designed on the three-dimensional digital model; the result serves as one of the target values for intraoperative registration. This is the surgeon's preoperative planning process.
In this embodiment, processing of the image by the control computer 4 includes: inputting the image captured by the camera 11 into the computer through an internal image acquisition card; performing surface patch encapsulation on the acquired image and reconstructing the structured light three-dimensional image; and registering the structured light three-dimensional image with the image obtained from the original model. It should be noted that local surface registration is used for the images to be registered: the local region to be registered is obtained during preoperative planning by segmenting the MRI image according to specific signal characteristics with a threshold segmentation algorithm, extracting the region of interest, and reconstructing that region in three dimensions with the marching cubes algorithm to obtain the registration image. When registration is executed, the conversion relationship between the coordinate systems of the structured light three-dimensional image and the MRI three-dimensional reconstruction image is first calculated; the corresponding registration region of the structured light three-dimensional image is then obtained from the registration region defined in the MRI three-dimensional reconstruction image; the two coordinate systems are then associated with the static reference coordinate system, so that the positional relationship of the two images can be displayed in the same coordinate system; finally, this positional relationship is displayed in real time on the display screen 41.
For example, registration may use a multi-region ICP registration algorithm. ICP, the iterative closest point algorithm, is mainly used to solve registration problems on free-form surfaces.
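The following is a minimal sketch of a plain single-region ICP iteration (closest-point correspondences plus an SVD rigid fit), assuming NumPy and SciPy; it does not reproduce the multi-region variant mentioned above, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (SVD / Kabsch) mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 50, tol: float = 1e-6):
    """Align structured light surface points (source, Nx3) to MRI-derived surface
    points (target, Mx3); returns the cumulative rotation and translation."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)                  # closest-point correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```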
To summarize, the application provides a surgical navigation system adopting structured light image registration, comprising a structured light navigator, a static reference, an end effector and a control computer; the structured light navigator, the static reference and the end effector are all electrically connected to the control computer. The structured light navigator is provided with two cameras, which scan the face, acquire a three-dimensional image and send it to the control computer; a visible light mark is arranged on the static reference; the control computer processes the image and registers the processing result with the original model, and is provided with a display screen for displaying the registration result and monitoring the status. The system acquires the three-dimensional point cloud image of the patient's face through structured light scanning and registers it with the original model. The point cloud carries a large amount of information, is acquired quickly and contains many feature points, so registration accuracy is higher when the face is registered against preoperative MRI three-dimensional reconstruction data; the problems of insufficient feature points and low registration accuracy in traditional registration are thereby solved.
Referring to fig. 2, it is a flowchart of a registration signal acquisition method of an operation navigation system using structured light image registration according to the present application;
as can be seen from fig. 2, the present application further provides a registration signal acquisition method of a surgical navigation system using structured light image registration, the method including:
S10: establishing a structured light coordinate system and determining the position of the visible light mark in the structured light coordinate system. In this embodiment, the coordinate system of the image and the position of the visible light mark in that coordinate system are determined before the image is acquired. When the structured light coordinate system is established, the transformation relationship between it and the static reference coordinate system must be obtained at the same time, so that the two coordinate systems can be represented in the same coordinate system.
S20: acquiring a structured light three-dimensional image. The structured light navigator projects structured light onto the patient's face, and an optical device capable of scanning and shooting (such as a camera or a structured light scanner) acquires a structured light three-dimensional image of the face. Specifically, referring to FIG. 3, in one possible embodiment step S20 can be broken down into the following steps:
S21: scanning the face of the patient so that the facial image is distorted; the distorted image can be captured by a camera device with a built-in charge-coupled device.
S22: shooting the distorted image. The quality of the captured image may vary with the imaging device selected, so a higher-quality imaging device is preferred as long as it remains economical.
S23: transmitting the photographed image to the control computer. The control computer inputs the captured image through an internal image acquisition card and processes it: the acquired image is surface-patch encapsulated, the structured light three-dimensional image is reconstructed, and registration with the image obtained from the original model is performed.
S30: reconstructing a three-dimensional image of the face from the structured light three-dimensional image. The specific process is as follows: a structured light three-dimensional point cloud image is obtained by the charge-coupled device in the camera 11. A point cloud is a collection of a massive number of points sharing the characteristics of the target surface; a point cloud obtained by combining laser measurement and photogrammetry principles generally contains three-dimensional coordinates (XYZ), laser reflection intensity and color information. The three-dimensional point cloud image therefore provides more feature points and more information per feature point, which makes the subsequent registration more accurate.
S40: performing local surface registration of the structured light three-dimensional image and the MRI three-dimensional digital image. It should be noted that the images to be registered are registered locally: during preoperative planning the local region to be registered is obtained by segmenting the MRI image according to specific signal characteristics with a threshold segmentation algorithm, extracting the region of interest, and reconstructing that region in three dimensions with the marching cubes algorithm to obtain the registration image.
the adopted threshold segmentation method is an image segmentation technology based on regions, and the principle is to divide image pixel points into a plurality of classes. The image thresholding segmentation is the most common traditional image segmentation method, and becomes the most basic and widely applied segmentation technology in image segmentation due to simple implementation, small calculation amount and stable performance. The marching cubes algorithm reconstructs the object surface by iso-surface extraction, which is essentially to extract an iso-surface from a three-dimensional data field and define the iso-surface as a collection of points in three-dimensional space having the same attributes.
Specifically, as can be seen from fig. 4, step S40 can be decomposed into:
S41: associating the coordinate systems of the structured light three-dimensional image and the MRI three-dimensional digital image with the static reference to obtain the conversion relationship between the coordinate systems. The corresponding registration region of the structured light three-dimensional image can then be obtained from the registration region defined in the MRI three-dimensional reconstruction image, and the two coordinate systems are associated with the static reference coordinate system so that the two images can be displayed in a positional relationship in the same coordinate system (a minimal sketch of this frame association is given after step S43 below).
S42: tracking the visible light marks and acquiring their spatial coordinates in the different images. Displaying the visible light marks from the different coordinate systems in the same coordinate system allows the position between the actual image and the simulated image to be calibrated in real time.
S43: acquiring a registration signal, where the registration signal enables the end effector to achieve accurate navigation based on registration. During the operation, the position of the end effector can be adjusted accurately in real time according to the registration result of the structured light and MRI images, so that surgical navigation is executed accurately.
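As announced in step S41, the following sketch shows the static reference used as the common intermediate frame; it assumes the 4x4 homogeneous transforms of the structured light and MRI coordinate systems relative to the static reference have already been estimated, and uses identity placeholders for them.

```python
import numpy as np

def light_to_mri(T_ref_from_light: np.ndarray, T_ref_from_mri: np.ndarray) -> np.ndarray:
    """Conversion relation mapping structured light coordinates into MRI coordinates,
    with the static reference coordinate system as the common intermediate frame."""
    return np.linalg.inv(T_ref_from_mri) @ T_ref_from_light

# Identity placeholders standing in for the transforms estimated during registration.
T_ref_from_light = np.eye(4)
T_ref_from_mri = np.eye(4)

# A facial surface point measured by the structured light scanner (homogeneous coordinates).
p_light = np.array([34.2, -12.7, 405.9, 1.0])
p_mri = light_to_mri(T_ref_from_light, T_ref_from_mri) @ p_light
```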
Referring to fig. 5, a flow chart of a method provided herein under one embodiment;
as can be seen from fig. 5, in a possible embodiment, the method further includes:
S50: acquiring the relative relationship between the spatial posture of the static reference and the three-dimensional face image. Specifically, once the control computer has the three-dimensional coordinates (XYZ) scanned by the camera and the static reference coordinates generated from the static reference, it multiplies them by the transformation matrix between the two coordinate systems (shown only as a formula image in the original patent) and obtains the relative relationship of the two coordinate systems along the XYZ axes together with the rotation angles about the three axes, i.e., the relative relationship between the spatial posture of the static reference and the three-dimensional face image, so that the two coordinate systems can be represented in one common coordinate system.
Referring to fig. 6, a flow chart of a method provided by the present application under another embodiment;
the signals used to perform registration, in addition to including real-time acquired images during surgery, typically require acquisition of a three-dimensional digital model of the patient's face and cranium by preoperative planning, e.g., reconstruction of images using MRI; taking MRI as an example, the operation position to be performed by the patient can be preliminarily calculated by using the obtained MRI three-dimensional digital model on a control computer by adopting a full digitalization method. As can be seen from fig. 5, in a possible embodiment, the method further includes:
S51: establishing an MRI coordinate system and determining the position of the visible light mark in the MRI coordinate system. While the MRI coordinate system is being established, the transformation relationship between it and the static reference coordinate system must be obtained at the same time, so that the association with the structured light coordinate system can be established quickly in the subsequent registration.
S52: acquiring an MRI three-dimensional digital image based on the MRI three-dimensional digital model. Specifically, before the operation, based on continuous thin-slice three-dimensional cross-sectional MRI scan data of the head, the resulting MRI three-dimensional image clearly displays the soft tissue of the patient's head, such as the facial soft tissue, the intracranial brain tissue and the relative relationship between the two. From this head soft tissue information, the site to be operated on can be preliminarily calculated on the control computer, a surgical channel and a simulated working path are designed on the three-dimensional digital model, and the result serves as one of the target values for intraoperative registration. This is the surgeon's preoperative planning process.
Referring to fig. 7, a flow chart of a method provided by the present application under yet another embodiment;
as shown in fig. 7, in a preferred embodiment, when performing step S10, the method further includes:
S11: positioning the face of the patient using a plurality of static references, and respectively acquiring the positions of the visible light marks in a coordinate system;
S12: acquiring the positional relationship among the plurality of visible light marks;
S13: detecting whether the positional relationship changes; if it changes, issuing an alarm signal.
Steps S11-S13 are essentially a correction of the spatial coordinate system. Because the plurality of static references are fixed to the same position of the same object, the relative relationship between static reference A and static reference B can be obtained through the structured light navigator, so whether the coordinate system established by the two references has shifted can be monitored in real time and whether the skull has moved can be judged. Furthermore, when a change in the positional relationship of the two static references is detected, an alarm signal can be issued to remind the medical staff to make the corresponding adjustment in time, and the doctor can adjust the trajectory of the end effector according to the magnitude of the positional change so that the operation is still executed accurately. A minimal sketch of such a check is given below.
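The sketch compares the pairwise distances between the visible light marks of the static references with their initial values and flags any drift; the 0.5 mm tolerance and the alarm mechanism are assumptions, not values taken from the patent.

```python
import numpy as np

TOLERANCE_MM = 0.5   # assumed threshold before an alarm is raised

def pairwise_distances(marks: np.ndarray) -> np.ndarray:
    """All pairwise distances between marker coordinates (Nx3, navigator frame)."""
    diff = marks[:, None, :] - marks[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def check_static_references(marks_initial: np.ndarray, marks_current: np.ndarray) -> float:
    """Largest change in the positional relationship among the visible light marks;
    prints an alarm message when the change exceeds the tolerance."""
    shift = np.abs(pairwise_distances(marks_current) - pairwise_distances(marks_initial)).max()
    if shift > TOLERANCE_MM:
        print(f"ALARM: static reference relationship changed by {shift:.2f} mm - check and correct")
    return float(shift)
```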
According to the above technical solution, the registration signal acquisition method of the surgical navigation system adopting structured light image registration comprises: establishing a structured light coordinate system and determining the position of the visible light mark in it; acquiring a structured light three-dimensional image; reconstructing a three-dimensional image of the face from the structured light three-dimensional image; and performing local surface registration of the structured light three-dimensional image and the MRI three-dimensional digital image. The method acquires the three-dimensional point cloud image of the patient's face through structured light scanning and registers it with the original model. The point cloud carries a large amount of information, is acquired quickly and contains many feature points, so registration accuracy is higher when the surface is registered against preoperative MRI three-dimensional reconstruction data; the problems of insufficient feature points and low registration accuracy in traditional registration are thereby solved.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. A neurosurgical navigation system adopting structured light image registration, characterized by comprising a structured light navigator, a plurality of static references, an end effector and a control computer; the structured light navigator, the plurality of static references and the end effector are all electrically connected to the control computer;
the structured light navigator is provided with two cameras, and the cameras are used for scanning the face, acquiring a three-dimensional image and sending the three-dimensional image to the control computer;
visible light marks are arranged on the plurality of static references;
the control computer is used for processing the image and registering the processing result with the original MRI three-dimensional digital model; the control computer is provided with a display screen for displaying the registration result and monitoring the status.
2. A neurosurgical navigation system using structured light image registration according to claim 1, wherein the structured light navigator is configured to emit structured light to the face to distort the image of the face.
3. A registration signal acquisition method of the neurosurgical navigation system adopting structured light image registration, characterized by comprising the following steps:
positioning the face of the patient using a plurality of static references, and respectively acquiring the positions of the visible light marks in a coordinate system;
establishing a structured light coordinate system, and determining the position of the visible light mark in the structured light coordinate system;
acquiring a structured light three-dimensional image;
reconstructing a face three-dimensional image according to the structured light three-dimensional image;
and carrying out local surface registration on the structured light three-dimensional image and the MRI three-dimensional digital image.
4. The method of claim 3, wherein acquiring the structured light three-dimensional image comprises:
scanning the face of a patient to distort the facial image;
shooting a distorted image;
the photographed image is transmitted to the control computer.
5. The method of claim 3, further comprising:
and acquiring the relative relation between the spatial posture of the static reference and the three-dimensional face image.
6. The method of claim 3, further comprising:
establishing an MRI coordinate system, and determining the position of the visible light marker in the MRI coordinate system;
and acquiring an MRI three-dimensional digital image based on the MRI three-dimensional digital model.
7. The method of claim 3, wherein locally registering the structured light three-dimensional image with the MRI three-dimensional digital image comprises:
associating the coordinate systems of the structured light three-dimensional image and the MRI three-dimensional digital image with a static reference to obtain a conversion relation between the coordinate systems;
tracking the visible light mark, and acquiring the space coordinates of the visible light mark in different images;
a registration signal is acquired that enables the end effector to achieve registration-based precision navigation.
8. The method of claim 3, further comprising:
acquiring the position relation among a plurality of visible light marks;
detecting whether the position relation changes; if the change occurs, an alarm signal is sent out.
CN201910012362.5A 2019-01-07 2019-01-07 Surgical navigation system adopting structured light image registration and registration signal acquisition method Active CN109464196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910012362.5A CN109464196B (en) 2019-01-07 2019-01-07 Surgical navigation system adopting structured light image registration and registration signal acquisition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910012362.5A CN109464196B (en) 2019-01-07 2019-01-07 Surgical navigation system adopting structured light image registration and registration signal acquisition method

Publications (2)

Publication Number Publication Date
CN109464196A CN109464196A (en) 2019-03-15
CN109464196B true CN109464196B (en) 2021-04-20

Family

ID=65677574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910012362.5A Active CN109464196B (en) 2019-01-07 2019-01-07 Surgical navigation system adopting structured light image registration and registration signal acquisition method

Country Status (1)

Country Link
CN (1) CN109464196B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3712900A1 (en) * 2019-03-20 2020-09-23 Stryker European Holdings I, LLC Technique for processing patient-specific image data for computer-assisted surgical navigation
CN110008380B (en) * 2019-04-11 2021-01-01 中国人民解放军国防科技大学 Sparse partition-region tree-based isosurface accelerated extraction method and system
CN110025891A (en) * 2019-04-22 2019-07-19 上海大学 Transcranial magnetic stimulation operation vision guided navigation device
CN110169820A (en) * 2019-04-24 2019-08-27 艾瑞迈迪科技石家庄有限公司 A kind of joint replacement surgery pose scaling method and device
CN110236674A (en) * 2019-05-09 2019-09-17 苏州大学 A kind of operation on liver navigation methods and systems based on structure light scan
CN110101452A (en) * 2019-05-10 2019-08-09 山东威高医疗科技有限公司 A kind of optomagnetic integrated positioning navigation method for surgical operation
FR3103097B1 (en) * 2019-11-19 2021-11-05 Quantum Surgical Navigation method for positioning a medical robot
CN111583314A (en) * 2020-04-13 2020-08-25 艾瑞迈迪医疗科技(北京)有限公司 Synchronous registration frame acquisition method and device applied to surgical navigation system
CN111815644A (en) * 2020-05-21 2020-10-23 艾瑞迈迪医疗科技(北京)有限公司 Method and device for extracting skin face data of patient in medical image
CN111568550A (en) * 2020-06-08 2020-08-25 北京爱康宜诚医疗器材有限公司 Operation navigation monitoring system, information monitoring method and device thereof, and storage medium
CN112258494B (en) * 2020-10-30 2021-10-22 北京柏惠维康科技有限公司 Focal position determination method and device and electronic equipment
CN112634336A (en) * 2020-12-31 2021-04-09 华科精准(北京)医疗科技有限公司 Registration method and system
CN113855240B (en) * 2021-09-30 2023-05-19 上海寻是科技有限公司 Medical image registration system and method based on magnetic navigation
CN114041876A (en) * 2021-11-30 2022-02-15 苏州大学 Augmented reality orthopedic perspective navigation method and system based on structured light
CN116725679A (en) * 2022-08-12 2023-09-12 北京和华瑞博医疗科技有限公司 Registration point determination and registration method, apparatus, device, medium and program product
CN116563297B (en) * 2023-07-12 2023-10-31 中国科学院自动化研究所 Craniocerebral target positioning method, device and storage medium
CN116747023B (en) * 2023-08-11 2023-11-28 北京维卓致远医疗科技发展有限责任公司 Fixing instrument for registration instrument of image system and navigation system

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9405299D0 (en) * 1994-03-17 1994-04-27 Roke Manor Research Improvements in or relating to video-based systems for computer assisted surgery and localisation
DE20316925U1 (en) * 2003-06-03 2004-08-19 Neuropolymed AG Surgical clip for neurosurgery, has fine surface structure to reduce light reflectance
CN1270672C (en) * 2004-06-01 2006-08-23 复旦大学 Method for correcting brain tissue deformation in navigation system of neurosurgery
CN100345525C (en) * 2005-12-07 2007-10-31 嘉兴市第一医院 Framed stereo directed neurosurgery system registration method
CN101564289A (en) * 2009-06-03 2009-10-28 南京航空航天大学 Method for real-time error correction of neurosurgery navigation puncture path based on near infrared spectrum
CN202751447U (en) * 2012-07-20 2013-02-27 北京先临华宁医疗科技有限公司 Vertebral pedicle internal fixation surgical navigation system based on structured light scanning
CN103908344A (en) * 2012-12-31 2014-07-09 复旦大学 Tractive straightening method based on surgical navigation system
US10433911B2 (en) * 2013-09-18 2019-10-08 iMIRGE Medical INC. Optical targeting and visualization of trajectories
CN103793915B (en) * 2014-02-18 2017-03-15 上海交通大学 Inexpensive unmarked registration arrangement and method for registering in neurosurgery navigation
CN104146767A (en) * 2014-04-24 2014-11-19 薛青 Intraoperative navigation method and system for assisting in surgery
CN104055520B (en) * 2014-06-11 2016-02-24 清华大学 Human organ motion monitoring method and operation guiding system
WO2016044934A1 (en) * 2014-09-24 2016-03-31 7D Surgical Inc. Tracking marker support structure and surface registration methods employing the same for performing navigated surgical procedures
US11432878B2 (en) * 2016-04-28 2022-09-06 Intellijoint Surgical Inc. Systems, methods and devices to scan 3D surfaces for intra-operative localization
CN107440797B (en) * 2017-08-21 2020-04-03 刘洋 Registration and registration system and method for surgical navigation

Also Published As

Publication number Publication date
CN109464196A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
CN109464196B (en) Surgical navigation system adopting structured light image registration and registration signal acquisition method
US11986256B2 (en) Automatic registration method and device for surgical robot
JP4822634B2 (en) A method for obtaining coordinate transformation for guidance of an object
US8463360B2 (en) Surgery support device, surgery support method, and computer readable recording medium storing surgery support program
CN107456278B (en) Endoscopic surgery navigation method and system
CN110033465B (en) Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image
JP5702861B2 (en) Assisted automatic data collection method for anatomical surfaces
US6511418B2 (en) Apparatus and method for calibrating and endoscope
US8388539B2 (en) Operation supporting device, method, and program
JP2950340B2 (en) Registration system and registration method for three-dimensional data set
US20010051761A1 (en) Apparatus and method for calibrating an endoscope
CN110215284A (en) A kind of visualization system and method
CN100418489C (en) Multimode medical figure registration system based on basic membrane used in surgical operation navigation
CN1326092C (en) Multimodel type medical image registration method based on standard mask in operation guiding
CN103948432A (en) Algorithm for augmented reality of three-dimensional endoscopic video and ultrasound image during operation
CN109498156A (en) A kind of head operation air navigation aid based on 3-D scanning
WO1999016352A1 (en) Interventional radiology guidance system
Maurer Jr et al. Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom
WO2001057805A2 (en) Image data processing method and apparatus
CN113100941B (en) Image registration method and system based on SS-OCT (scanning and optical coherence tomography) surgical navigation system
CN116883471B (en) Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
CN117323002A (en) Neural endoscopic surgery visualization system based on mixed reality technology
CN115177340B (en) Craniocerebral positioning puncture method based on three-dimensional coordinates
CN108143501B (en) Anatomical projection method based on body surface vein features
CN114283179A (en) Real-time fracture far-near end space pose acquisition and registration system based on ultrasonic images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210324

Address after: Room 1301, 13 / F, building 5, No. 18, Kechuang 13th Street, Daxing Economic and Technological Development Zone, Beijing 100176

Applicant after: Beijing and Huaruibo Medical Technology Co.,Ltd.

Applicant after: BEIJING HEHUA RUIBO TECHNOLOGY Co.,Ltd.

Address before: 100176 West 606, 6th floor, building 2, 28 Jinghai 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant before: BEIJING HEHUA RUIBO TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211222

Address after: 710075 room 801, office building, block a, Huihao international, No. 58, Keji Third Road, high tech Zone, Xi'an, Shaanxi Province

Patentee after: Xi'an hehuaruibo Technology Co.,Ltd.

Address before: Room 1301, 13 / F, building 5, No. 18, Kechuang 13th Street, Daxing Economic and Technological Development Zone, Beijing 100176

Patentee before: Beijing and Huaruibo Medical Technology Co.,Ltd.

Patentee before: Beijing hehuaruibo Technology Co., Ltd

TR01 Transfer of patent right