CN114681057A - Spatial registration method and device and neurosurgical navigation system

Spatial registration method and device and neurosurgical navigation system

Info

Publication number
CN114681057A
CN114681057A (application CN202011642180.5A)
Authority
CN
China
Prior art keywords
point cloud
spatial
patient
information data
coordinate system
Prior art date
Legal status
Pending
Application number
CN202011642180.5A
Other languages
Chinese (zh)
Inventor
旷雅唯
刘文博
李赞
Current Assignee
Sinovation Beijing Medical Technology Co ltd
Original Assignee
Sinovation Beijing Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sinovation Beijing Medical Technology Co ltd filed Critical Sinovation Beijing Medical Technology Co ltd
Priority to CN202011642180.5A priority Critical patent/CN114681057A/en
Publication of CN114681057A publication Critical patent/CN114681057A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 - Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 - Computer-aided simulation of surgical operations
    • A61B2034/105 - Modelling of the patient, e.g. for ligaments or bones
    • A61B2034/107 - Visualisation of planned trajectories or target regions
    • A61B2034/108 - Computer aided selection or customisation of medical implants or cutting guides
    • A61B34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 - Tracking techniques
    • A61B2034/2055 - Optical tracking systems
    • A61B2034/2065 - Tracking using image or pattern recognition

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention discloses a spatial registration method, a spatial registration device and a neurosurgical navigation system. The method comprises: receiving medical image data of a target part of a patient, establishing a three-dimensional model of the target part, and extracting a medical image point cloud of the three-dimensional model; obtaining, through a handheld scanning device, spatial information data at least containing the target part of the patient, and generating a fused point cloud of the target part according to the spatial information data; and registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation. Because the fused point cloud is generated from spatial information data captured by a handheld scanning device, registration can be carried out directly between the medical image point cloud and the fused point cloud, without driving screws into the patient's skull, attaching multiple markers to the face, or manually selecting markers. This greatly improves registration speed, saves operation time, and meets clinical requirements.

Description

Spatial registration method and device and neurosurgical navigation system
Technical Field
The invention relates to the technical field of medical equipment, in particular to a spatial registration method and device and a neurosurgical navigation system.
Background
A neuronavigation system helps the surgeon determine the position and boundary of a lesion. When the system is used, the patient's preoperative medical image data must first be modeled in three dimensions to obtain a virtual model, and the surgical site must then be spatially registered to the virtual model so that the exact position of the lesion can be determined.
In the prior art, spatial registration uses surface registration: a laser scanner acquires a partial point cloud of the patient's face in patient space, a surface point cloud representing partial facial features (such as the eyes and nose) is extracted in image space, and the two point clouds are matched iteratively. Clinically, however, the scanner is fixed on a tripod and must remain trackable, which limits its scanning range: it can capture only facial information, so registration accuracy is low for regions outside the face and the clinical requirement cannot be met.
Disclosure of Invention
The invention provides a spatial registration method, a spatial registration device and a neurosurgical navigation system, which enable rapid registration and meet clinical requirements. The specific technical scheme is as follows.
In a first aspect, the present invention provides a spatial registration method, including:
receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient, and extracting a medical image point cloud of the three-dimensional model;
obtaining spatial information data at least containing the target part of the patient through a handheld scanning device, and generating a fusion point cloud of the target part of the patient according to the spatial information data;
and registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation.
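As a concrete illustration of the final registration step, a minimal Python (NumPy) sketch is given below. It assumes, purely for illustration, that corresponding point pairs between the medical image point cloud and the fused point cloud are already known, and uses the standard SVD (Kabsch) solution for a rigid transform; the patent itself does not prescribe a particular registration algorithm, and all names and values here are illustrative.

```python
import numpy as np

def register(image_pts, fused_pts):
    """Least-squares rigid transform (4x4) mapping image-space points onto
    patient-space points (Kabsch / SVD method), given paired correspondences."""
    ci, cf = image_pts.mean(0), fused_pts.mean(0)
    H = (image_pts - ci).T @ (fused_pts - cf)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cf - R @ ci
    return T

# Synthetic check: a known rotation and translation are recovered exactly.
rng = np.random.default_rng(0)
image_cloud = rng.random((30, 3)) * 100          # "medical image point cloud"
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
fused_cloud = image_cloud @ Rz.T + np.array([10.0, -5.0, 2.0])

T = register(image_cloud, fused_cloud)
print(np.round(T, 3))                            # the coordinate transformation relation
```

The returned 4x4 matrix is one possible concrete form of the "coordinate transformation relation" named in the claim: applying it to any image-space point yields the corresponding patient-space point.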
Optionally, the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions.
Optionally, the handheld scanning device has a spatial positioning structure, and the step of generating the fused point cloud of the target portion of the patient according to the spatial information data includes:
and converting the spatial information data into the spatial coordinate system of the optical tracking equipment according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking equipment to obtain converted spatial information data, and generating a fusion point cloud of the target part of the patient according to the converted spatial information data.
Optionally, the step of converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device to obtain the converted spatial information data, and generating the fusion point cloud of the target portion of the patient according to the converted spatial information data includes:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into the space coordinate system of the optical tracking equipment according to the space position information of the space positioning structure in the space coordinate system of the optical tracking equipment to obtain a plurality of converted scanning point clouds, and generating a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
Optionally, the step of respectively converting the scanning point clouds obtained by the handheld scanning device at different positions into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device to obtain a plurality of converted scanning point clouds, and generating the fused point cloud of the target portion of the patient according to the plurality of converted scanning point clouds includes:
for each position at which the handheld scanning device obtains a scanning point cloud, determining a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative position relationship, at that position, between the spatial positioning structure connected to the handheld scanning device and the handheld scanning device;
determining a second conversion matrix of the space positioning structure to the optical tracking device according to the space position information of the space positioning structure in the space coordinate system of the optical tracking device at the position;
converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the first conversion matrix and the second conversion matrix to obtain the scanning point cloud after the position conversion;
and fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
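The four steps above amount to composing two homogeneous transforms and applying the product to each scan. A minimal NumPy sketch follows, with illustrative matrices T1 (handheld scanning device to spatial positioning structure) and T2 (spatial positioning structure to optical tracking device); the values are invented for the example and are not taken from the patent.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_tracker_frame(points, T1, T2):
    """Map an (N, 3) scan point cloud from the scanner frame into the
    optical tracker frame: p_tracker = T2 @ T1 @ p_scanner."""
    homo = np.c_[points, np.ones(len(points))]     # (N, 4) homogeneous coordinates
    return (T2 @ T1 @ homo.T).T[:, :3]

# Illustrative matrices for one scan position: scanner -> positioning
# structure (T1), positioning structure -> tracker (T2).
T1 = make_transform(np.eye(3), [0.0, 0.0, 50.0])   # fixed mounting offset
theta = np.pi / 2                                  # tracker sees structure rotated 90 deg
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
T2 = make_transform(Rz, [100.0, 0.0, 0.0])

scan = np.array([[1.0, 0.0, 0.0]])
print(to_tracker_frame(scan, T1, T2))              # the point in the tracker frame
```

Scans captured at different positions, each converted with its own (T1, T2) pair, can then be concatenated to form the fused point cloud of the target part.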
Optionally, the spatial information data further includes spatial position information of a position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at each position, and the step of generating a fused point cloud of the target portion of the patient according to the spatial information data includes:
for each position at which the handheld scanning device obtains spatial information data, determining a third conversion matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at that position;
determining a fourth conversion matrix of the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at the position;
converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the third conversion matrix and the fourth conversion matrix to obtain the scanning point cloud after the position conversion;
and fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Optionally, at least two scanning point clouds obtained at different positions have overlapping portions, and the step of generating the fused point cloud of the target part of the patient according to the spatial information data includes:
and splicing the scanning point clouds with the overlapped parts according to the overlapped parts to generate a fused point cloud of the target part of the patient.
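The patent leaves the splicing algorithm open; a common choice for stitching overlapping scans is iterative closest point (ICP). The sketch below is a brute-force NumPy ICP under that assumption, applied to two synthetic overlapping scans; it is an illustrative stand-in, not the patent's prescribed method.

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Align src to dst by iterating nearest-neighbour matching and Kabsch."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

# Two scans of the same overlapping surface patch, the second slightly displaced.
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
angle = 0.05
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
moved = grid @ Rz.T + np.array([0.1, -0.05, 0.02])

aligned = icp(moved, grid)
fused = np.vstack([grid, aligned])         # the stitched ("fused") point cloud
print(np.abs(aligned - grid).max())        # residual after alignment
```

Because the displacement between the two scans is small relative to the point spacing, the nearest-neighbour correspondences are correct from the first iteration and the alignment converges to machine precision.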
Optionally, the spatial information data further includes spatial position information of a position indicating device in a coordinate system of the handheld scanning device, and the step of generating the fused point cloud of the target portion of the patient according to the spatial information data includes:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system with the position indicating device as an origin according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, and generating a fused point cloud of the target part of the patient.
Optionally, the step of registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship includes:
and registering the medical image point cloud and the fused point cloud by means of point-to-point registration and point-to-surface registration to obtain a coordinate transformation relation.
In a second aspect, the present invention provides a spatial registration apparatus, comprising:
the medical image point cloud extraction module is used for receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient and extracting a medical image point cloud of the three-dimensional model;
the fusion point cloud generating module is used for acquiring spatial information data at least comprising the target part of the patient through a handheld scanning device and generating fusion point cloud of the target part of the patient according to the spatial information data;
and the registration module is used for registering the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relation.
Optionally, the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions.
Optionally, the handheld scanning device has a spatial positioning structure, and the fused point cloud generating module is specifically configured to:
and converting the spatial information data into the spatial coordinate system of the optical tracking equipment according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking equipment to obtain converted spatial information data, and generating a fusion point cloud of the target part of the patient according to the converted spatial information data.
Optionally, the fused point cloud generating module is specifically configured to:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into the space coordinate system of the optical tracking equipment according to the space position information of the space positioning structure in the space coordinate system of the optical tracking equipment to obtain a plurality of converted scanning point clouds, and generating a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
Optionally, the fused point cloud generating module includes:
the first conversion matrix determining unit is used for, for each position at which the handheld scanning device obtains a scanning point cloud, determining a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative position relationship, at that position, between the spatial positioning structure connected to the handheld scanning device and the handheld scanning device;
a second transformation matrix determining unit, configured to determine a second transformation matrix from the spatial localization structure to the optical tracking device according to spatial location information of the spatial localization structure in a spatial coordinate system of the optical tracking device at the location;
the first conversion unit is used for converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the first conversion matrix and the second conversion matrix to obtain the scanning point cloud after the position conversion;
and the first generation unit is used for fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Optionally, the spatial information data further includes spatial position information of a position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at each position, and the fused point cloud generating module includes:
a third conversion matrix determining unit, configured to determine, for each position where the handheld scanning device obtains spatial information data, a third conversion matrix from the handheld scanning device to the position indicating device according to spatial position information of the position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at the position;
a fourth conversion matrix determination unit, configured to determine a fourth conversion matrix from the position indication apparatus to the optical tracking apparatus according to the spatial position information of the position indication apparatus in the spatial coordinate system of the optical tracking apparatus at the position;
the second conversion unit is used for converting the scanning point cloud obtained by the handheld scanning device at the position into the space coordinate system of the optical tracking equipment according to the third conversion matrix and the fourth conversion matrix to obtain the scanning point cloud after the position conversion;
and the second generation unit is used for fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Optionally, at least two scanning point clouds obtained at different positions have an overlapping portion therebetween, and the fused point cloud generating module is specifically configured to:
and splicing the scanning point clouds with the overlapped parts according to the overlapped parts to generate a fused point cloud of the target part of the patient.
Optionally, the spatial information data further includes spatial position information of a position indicating device in a coordinate system of the handheld scanning device, and the fused point cloud generating module is specifically configured to:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system taking the position indicating device as an origin according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, and generating a fused point cloud of the target part of the patient.
Optionally, the registration module is specifically configured to:
and registering the medical image point cloud and the fused point cloud by means of point-to-point registration and point-to-surface registration to obtain a coordinate transformation relation.
In a third aspect, the invention provides a neurosurgical navigation system comprising a host and a handheld scanning device, the handheld scanning device being communicatively connected to the host;
the handheld scanning device collects spatial information data at least containing the target part of the patient and sends the spatial information data to the host;
the host computer performs the spatial registration method of the first aspect described above.
Optionally, the neurosurgical navigation system further comprises an optical tracking device, and the optical tracking device is in communication connection with the host;
the optical tracking device tracks the spatial location information of the spatial positioning structure or the position indicating device and sends the spatial location information to the host.
As can be seen from the above, the embodiment of the present invention receives medical image data of a target part of a patient, establishes a three-dimensional model of the target part, extracts a medical image point cloud of the three-dimensional model, obtains through a handheld scanning device spatial information data at least containing the target part, generates a fused point cloud of the target part according to the spatial information data, and registers the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation. Because the fused point cloud is generated from spatial information data captured by the handheld scanning device, registration can be performed directly between the two point clouds, without driving screws into the patient's skull, attaching multiple markers to the face, or selecting markers; registration speed is therefore improved, operation time is saved, and clinical requirements are met. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. The embodiment can receive medical image data of a target part of a patient, establish a three-dimensional model of the target part, extract a medical image point cloud of the three-dimensional model, obtain through a handheld scanning device spatial information data at least containing the target part, generate a fused point cloud of the target part according to the spatial information data, and register the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation. Because the fused point cloud is generated from spatial information data captured by the handheld scanning device, registration can be performed directly between the two point clouds, without driving screws into the patient's skull, attaching multiple markers to the face, or selecting markers; registration speed is therefore improved, operation time is saved, and clinical requirements are met.
2. In the two coordinate system conversions, only the relative position relationship between the position indicating device and the handheld scanning device, or between the spatial positioning structure and the handheld scanning device, is used; the position indicating device and the spatial positioning structure serve merely as a medium for scanning the target part of the patient to obtain the scanning point clouds. Whether identification points are provided on the position indicating device or the spatial positioning structure, and their number and positions, are not limited, which simplifies the calculation process and reduces the amount of computation.
3. Compared with the laser point cloud registration mode, which can acquire only point cloud information of the patient's face for registration, the embodiment of the invention can acquire point cloud information of the patient's face as well as of other parts of the skull, which improves registration accuracy. Moreover, the number of points acquired by the handheld scanning device is 10-200 times the number of points acquired by the laser pen in the laser point cloud registration mode, which further improves registration accuracy.
4. In the neurosurgical navigation system provided by the embodiment of the invention, the handheld scanning device collects spatial information data at least containing a target part of a patient and sends it to the host. The host receives medical image data of the target part, establishes a three-dimensional model of the target part, extracts a medical image point cloud of the three-dimensional model, receives the spatial information data, generates a fused point cloud of the target part according to the spatial information data, and registers the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation. The system can therefore perform registration directly from the medical image point cloud and the fused point cloud, without driving screws into the patient's skull, attaching multiple markers to the face, or selecting markers, which improves registration speed, saves operation time, and meets clinical requirements.
5. Because the plurality of different scanning positions in the embodiment of the invention cover at least the face and non-face parts of the patient, information from the non-face parts is used during registration, which improves registration accuracy for those parts and meets clinical requirements.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention; a person skilled in the art can obtain further figures from these figures without inventive effort.
Fig. 1 is a schematic flow chart of a spatial registration method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a spatial positioning structure attached to a handheld scanning device;
FIG. 3 is a schematic view of a first structure of a position indicating device used in combination with a handheld scanning device;
FIG. 4 is a schematic diagram of a second structure of the position indicating device used in combination with the handheld scanning device;
fig. 5 is a schematic structural diagram of a spatial registration apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a neurosurgical navigation system according to an embodiment of the present invention.
In fig. 1-6, 1 a spatial positioning structure, 2 a handheld scanning device, 3 an optical tracking apparatus, 4 a position indicating device, 10 a host computer.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. A process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements, but may also include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a spatial registration method, a spatial registration device and a neurosurgical navigation system, which can provide rapid and high-precision registration and meet clinical requirements. The following provides a detailed description of embodiments of the invention.
Fig. 1 is a schematic flowchart of a spatial registration method according to an embodiment of the present invention. The method is applied to the electronic equipment. The method specifically comprises the following steps.
S110: receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient, and extracting a medical image point cloud of the three-dimensional model.
To perform spatial registration, a medical imaging device first scans the target part of the patient's head to obtain medical image data of that part. The electronic device is communicatively connected to the medical imaging device, so the medical imaging device can transmit the medical image data of the target part to the electronic device; the electronic device thus receives the medical image data containing the target part of the patient and establishes a three-dimensional model of the target part from the received data.
For example, the medical imaging modality used by the medical imaging device may be MRI (magnetic resonance imaging), CT (computed tomography), X-ray imaging, emission tomography, or ultrasound scanning.
The medical image data received by the electronic device may be of one or more types; when the electronic device receives more than one type of medical image data, it may fuse the different types to obtain the three-dimensional model of the target part of the patient.
Illustratively, when the target site of the patient is the head, the three-dimensional model created may be an overall three-dimensional model of the head of the patient, including head surface structures as well as structures within the head.
In some embodiments, extracting the medical image point cloud of the three-dimensional model specifically includes: the skin part on the three-dimensional model is extracted through an image segmentation technology, and then the skin part is converted into a medical image point cloud.
In some embodiments, extracting the medical image point cloud of the three-dimensional model specifically includes: establishing a circumscribed sphere of the three-dimensional model, uniformly sampling the surface of the circumscribed sphere to obtain sphere points, calculating, for each sphere point, the first surface point at which the line connecting the sphere point and the sphere center intersects the three-dimensional model, and extracting these first surface points as the medical image point cloud.
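The circumscribed-sphere sampling above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a simple analytic sphere stands in for the segmented head surface, and `fibonacci_sphere` is an assumed helper (any approximately uniform spherical sampling would do); in practice the intersection test would run against the three-dimensional model's surface mesh.

```python
import numpy as np

def fibonacci_sphere(n):
    """Approximately uniform sample of n points on the unit sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0))           # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n                # uniform in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi * i), r * np.sin(phi * i), z], axis=1)

def extract_surface_points(center, radius, surface_hit, n=500):
    """Cast a ray from each circumscribed-sphere point toward the center
    and keep the first intersection with the model surface."""
    pts = center + radius * fibonacci_sphere(n)
    hits = []
    for p in pts:
        direction = (center - p) / np.linalg.norm(center - p)
        h = surface_hit(p, direction)
        if h is not None:
            hits.append(h)
    return np.array(hits)

# Stand-in "model": a sphere of radius 0.4 at the center; the first hit
# along the inward ray is the point where the ray enters it.
def sphere_model_hit(origin, direction, c=np.zeros(3), r=0.4):
    oc = origin - c
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - r * r)
    if disc < 0:
        return None                               # ray misses the surface
    t = -b - np.sqrt(disc)                        # nearest intersection
    return origin + t * direction

cloud = extract_surface_points(np.zeros(3), 1.0, sphere_model_hit, n=200)
```

Because every extracted point is the first surface hit seen from outside, occluded interior structure never enters the medical image point cloud.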
S120: spatial information data at least containing a target part of a patient are obtained through a handheld scanning device, and fusion point cloud of the target part of the patient is generated according to the spatial information data.
In order to perform the spatial registration, the handheld scanning device is required to scan the target part of the patient to obtain spatial information data at least containing the target part, and the obtained spatial information data is then sent to the electronic device; that is, the electronic device obtains, through the handheld scanning device, spatial information data at least containing the target part of the patient.
The spatial information data obtained by the handheld scanning device comprises scanning point clouds obtained by the handheld scanning device at different positions.
For example, when the target portion of the patient is the head, the spatial information data obtained by the handheld scanning device includes scanning point clouds of different portions of the head of the patient.
The electronic device receives the spatial information data at least containing the target part of the patient and then generates a fused point cloud of the target part according to the spatial information data. In addition to geometric positions, the point cloud information may also include color information, which is generally obtained by a camera: the color of the pixel at the corresponding position is assigned to the corresponding point in the point cloud.
For example, when the target part of the patient is the head, scanning the head at a plurality of different positions with the handheld scanning device may be performed as follows: an operator holds the handheld scanning device and scans different positions of the head of the patient. In order to solve the low registration accuracy at positions such as the back of the head in existing face-only registration, the different positions in the embodiment of the invention at least include non-facial positions of the patient.
In one implementation, a fused point cloud of a target site of a patient may be generated with a handheld scanning device, a spatial localization structure, and an optical tracking apparatus. The handheld scanning device has a spatial positioning structure, and the above-mentioned generating of the fusion point cloud of the target part of the patient according to the spatial information data may include:
and converting the spatial information data into the spatial coordinate system of the optical tracking equipment according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking equipment to obtain the converted spatial information data, and generating a fusion point cloud of the target part of the patient according to the converted spatial information data.
Since the optical tracking device is fixed and does not move during the whole spatial registration process, its spatial coordinate system is a fixed coordinate system. Therefore, in the embodiment of the present invention, the spatial information data may be converted into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in that coordinate system, so that all spatial information data are unified into the same fixed coordinate system; the fused point cloud of the target part of the patient can then be generated from the converted spatial information data in this common coordinate system.
Because the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different positions, the above-mentioned converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device to obtain the converted spatial information data, and generating a fused point cloud of the target portion of the patient according to the converted spatial information data may include:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into the space coordinate system of the optical tracking equipment according to the space position information of the space positioning structure in the space coordinate system of the optical tracking equipment to obtain a plurality of converted scanning point clouds, and generating a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
That is to say, according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, the scanning point clouds at the positions are all converted into the spatial coordinate system of the optical tracking device, so as to obtain a plurality of converted scanning point clouds, so that the scanning point clouds at the positions are unified to the same fixed coordinate system, and then the fused point clouds at the target part of the patient can be generated according to the plurality of converted scanning point clouds in the same coordinate system.
Fig. 2 is a schematic structural diagram of a spatial positioning structure attached to a handheld scanning device. Referring to fig. 2, the two straight lines emitted from the handheld scanning device 2 represent its scanning area, and the straight line between the optical tracking device 3 and the spatial positioning structure 1 represents that the optical tracking device 3 can track the spatial positioning structure 1. The spatial positioning structure 1 is clamped on the upper portion of the handheld scanning device 2; an operator holds the handheld scanning device 2 to scan different positions of the head of the patient while the optical tracking device 3 tracks the spatial positioning structure 1.
Specifically, the above-mentioned scanning point clouds obtained by the handheld scanning device at different positions are respectively converted into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, so as to obtain a plurality of converted scanning point clouds, and a fused point cloud of the target part of the patient is generated according to the plurality of converted scanning point clouds, which may include:
for each position at which the handheld scanning device obtains a scanning point cloud, determining a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative position relationship between the spatial positioning structure connected to the handheld scanning device and the handheld scanning device at that position;
determining a second conversion matrix from the space positioning structure to the optical tracking device according to the space position information of the space positioning structure in the space coordinate system of the optical tracking device at the position;
converting the scanning point cloud obtained by the handheld scanning device at the position into a space coordinate system of the optical tracking equipment according to the first conversion matrix and the second conversion matrix to obtain the scanning point cloud converted at the position;
and fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Because the spatial positioning structure is attached to the handheld scanning device, a fixed relative position relationship exists between the two. Therefore, for each position at which the handheld scanning device obtains a scanning point cloud, the first conversion matrix from the handheld scanning device to the spatial positioning structure can be determined according to this relative position relationship.
The handheld scanning device can instantly acquire a surface point cloud at a given position, and the optical tracking device can track the spatial positioning structure, so the second conversion matrix from the spatial positioning structure to the optical tracking device can be determined according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position. Because the position of the spatial positioning structure in this coordinate system changes from scanning position to scanning position, the second conversion matrix corresponding to each position is different.
After the first conversion matrix and the second conversion matrix are obtained, the scanning point cloud obtained by the handheld scanning device at the position can be converted into the space coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, and the scanning point cloud after the position conversion is obtained.
For example, converting the scanning point cloud obtained by the handheld scanning device at the position into the spatial coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, and obtaining the scanning point cloud after the position conversion may include:
and calculating a product between the first conversion matrix and the second conversion matrix as a target conversion matrix, and converting the scanning point cloud obtained by the handheld scanning device at the position into a space coordinate system of the optical tracking equipment according to the target conversion matrix to obtain the scanning point cloud converted at the position.
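The product of the two matrices above can be sketched with homogeneous 4x4 transforms. The rotations and translations below are illustrative values, not calibration data from the patent: the first matrix (scanner to locator) is fixed by the mount, while the second (locator to tracker) is re-measured at each scanning position.

```python
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T, points):
    """Apply a 4x4 transform to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# First conversion matrix: scanner frame -> locator frame (fixed mount).
T_scanner_to_locator = rigid_transform(np.eye(3), np.array([0.0, 0.1, 0.0]))

# Second conversion matrix: locator frame -> tracker frame (per-pose).
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
T_locator_to_tracker = rigid_transform(Rz, np.array([1.0, 0.0, 0.0]))

# Target conversion matrix: the product, mapping scan points into tracker space.
T_target = T_locator_to_tracker @ T_scanner_to_locator

scan = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
in_tracker = apply_transform(T_target, scan)
```

Note the order: the first matrix is applied to the points before the second, so the second matrix left-multiplies the first in the product.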
After the scanning point clouds of all the converted positions are obtained, the scanning point clouds of different converted positions can be fused in a space coordinate system of the optical tracking equipment to generate a fused point cloud of the target part of the patient.
Thus, when the handheld scanning device has a spatial positioning structure, for the scanning point clouds obtained at different positions, the point cloud at each position is converted into the spatial positioning structure coordinate system through the first conversion matrix and then into the spatial coordinate system of the optical tracking device through the second conversion matrix. After these two coordinate system conversions, the converted scanning point clouds at the different positions are fused in the spatial coordinate system of the optical tracking device to generate the fused point cloud of the target part of the patient.
In another implementation, a fused point cloud of a target site of a patient may be generated with a handheld scanning device, a position pointing device, and an optical tracking apparatus.
Fig. 3 is a schematic view of a first configuration of a position indicating device used in combination with a handheld scanning device. Referring to fig. 3, the two straight lines from the handheld scanning device 2 represent its scanning area, and the straight line between the optical tracking device 3 and the position indicating device 4 represents that the optical tracking device 3 can track the position indicating device 4. The operator can scan different positions of the head of the patient with the handheld scanning device 2, which simultaneously captures the position indicating device 4. The handheld scanning device 2 obtains one scanning point cloud coordinate system per scanning position, so the spatial information data may further include the spatial position information of the position indicating device 4 in the scanning point cloud coordinate system obtained by the handheld scanning device at each position.
The position indicating device is a device that can be tracked by the optical tracking equipment: a rigid structure with at least three optical identification points whose distinctive relative positions form an easily recognized array.
The position indicating device can be active or passive. An active position indicating device can be a positioning ball capable of emitting signals, while a passive one is a rigid structure with reflective balls or corner points as optical identification points, arranged asymmetrically so that it can be uniquely identified. Illustratively, the position indicating device may be a reflective-ball reference frame and the optical tracking device may be an infrared tracking device.
Specifically, the generating of the fused point cloud of the target portion of the patient according to the spatial information data, where the spatial information data further includes spatial position information of the position indicating device in a scanning point cloud coordinate system obtained by the handheld scanning device at each position, may include:
for each position at which the handheld scanning device obtains spatial information data, determining a third conversion matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at that position;
determining a fourth conversion matrix from the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at the position;
converting the scanning point cloud obtained by the handheld scanning device at the position into a space coordinate system of the optical tracking equipment according to the third conversion matrix and the fourth conversion matrix to obtain the scanning point cloud converted at the position;
and fusing the scanning point clouds converted from different positions to generate a fused point cloud of the target part of the patient.
Since the handheld scanning device 2 captures the position indicating device 4 in its images, the relationship between the position indicating device coordinate system and the scanning point cloud coordinate system obtained by the handheld scanning device can be determined. That is, for each position at which the handheld scanning device obtains spatial information data, the third conversion matrix from the handheld scanning device to the position indicating device can be determined according to the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained at that position.
Since the optical tracking device can track the position indicating device, the fourth conversion matrix from the position indicating device to the optical tracking device can be determined according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device. Because the position indicating device remains fixed in the spatial coordinate system of the optical tracking device while the scanner moves between positions, the fourth conversion matrix corresponding to each position is the same.
After the third conversion matrix and the fourth conversion matrix are obtained, the scanning point cloud obtained by the handheld scanning device at the position can be converted into the spatial coordinate system of the optical tracking device according to the third conversion matrix and the fourth conversion matrix, and the scanning point cloud converted at the position is obtained.
Converting the scanning point cloud obtained by the handheld scanning device at a position into the spatial coordinate system of the optical tracking device according to the third and fourth conversion matrices follows the same method as the conversion using the first and second conversion matrices described above, and is not repeated here.
After the scanning point clouds of all the converted positions are obtained, the scanning point clouds of different converted positions can be fused in a space coordinate system of the optical tracking equipment to generate a fused point cloud of the target part of the patient.
Thus, when the spatial information data also include the spatial position information of the position indicating device in the scanning point cloud coordinate system obtained by the handheld scanning device at each position, for the scanning point clouds obtained at different positions, the point cloud at each position is converted into the position indicating device coordinate system through the third conversion matrix and then into the spatial coordinate system of the optical tracking device through the fourth conversion matrix. After these two coordinate system conversions, the converted scanning point clouds at the different positions are fused in the spatial coordinate system of the optical tracking device to generate the fused point cloud of the target part of the patient.
In the above two coordinate system conversion processes, only the relative position relationship between the position indicating device (or the spatial positioning structure) and the handheld scanning device is used; the position indicating device and the spatial positioning structure serve merely as intermediaries for scanning the target part of the patient to obtain the scanning point cloud. Whether identification points are set on them, and the number and positions of such points, are not limited, which simplifies the calculation process and reduces the amount of calculation.
In one implementation, where the scanning point clouds obtained at different positions have at least pairwise overlapping portions, generating the fused point cloud of the target part of the patient according to the spatial information data may include:
and splicing the scanning point clouds with the overlapped parts according to the overlapped parts to generate a fused point cloud of the target part of the patient.
Because the scanning point clouds obtained at different positions have at least pairwise overlapping portions (for example, the scanning point clouds obtained at two adjacent positions overlap), the scanning point clouds can be spliced according to the overlapping portions to generate the fused point cloud of the target part of the patient.
Thus, the scan point clouds having the overlapping portions are spliced based on the overlapping portions, and a fused point cloud of the target part of the patient is generated.
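Once the overlapping clouds are expressed in a common frame (the alignment itself would be estimated from the overlap, for example via the ICP matching of step S130), the splice can be as simple as concatenating the clouds and collapsing duplicated points. The voxel-keyed deduplication below is an illustrative choice, not the patent's method.

```python
import numpy as np

def splice(cloud_a, cloud_b, voxel=0.01):
    """Merge two clouds already aligned in a common frame, collapsing points
    that fall in the same voxel (the overlapping region) to a single copy."""
    merged = np.vstack([cloud_a, cloud_b])
    keys = np.round(merged / voxel).astype(np.int64)   # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]                        # keep first occurrences

a = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
b = np.array([[0.2, 0.0, 0.0], [0.3, 0.0, 0.0]])  # first point overlaps cloud a
fused = splice(a, b)
```

The overlapping point at x = 0.2 appears once in the result, so the spliced cloud covers both scans without duplication.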
In another implementation, a fused point cloud of a target site of a patient may be generated with a handheld scanning device and a position indicating device.
Fig. 4 is a schematic diagram of a second configuration of the position indicating device used in combination with the handheld scanning device. Referring to fig. 4, the two straight lines represent the scanning area of the handheld scanning device 2. As can be seen from fig. 4, the handheld scanning device 2 captures the position indicating device 4 within its scanning area, so the spatial information data can further include the spatial position information of the position indicating device 4 in the coordinate system of the handheld scanning device.
In the case that the spatial information data further includes spatial position information of the position indicating device in the coordinate system of the handheld scanning device, the generating of the fused point cloud of the target portion of the patient according to the spatial information data may include:
and respectively converting scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system with the position indicating device as an origin according to the spatial position information of the position indicating device in the handheld scanning device coordinate system to generate a fused point cloud of the target part of the patient.
Because the position indicating device 4 is captured each time the handheld scanning device scans, the scanning point clouds collected at each position can be unified into a position indicating device coordinate system with the position indicating device as the origin, yielding the fused point cloud of the target part of the patient. That is, according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, the scanning point clouds obtained by the handheld scanning device at different positions are respectively converted into the position indicating device coordinate system with the position indicating device as the origin, generating the fused point cloud of the target part of the patient.
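This unification can be sketched as follows; the two indicator poses are hypothetical stand-ins for what the scanner would measure by detecting the indicator's marker array in each scan. Each observed pose maps indicator-frame coordinates into that scan's frame, so its inverse carries the scan's points back into the common indicator frame.

```python
import numpy as np

def rigid_transform(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T, points):
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical indicator poses seen by the scanner at two scan positions
# (each maps indicator frame -> that scan's point cloud frame).
poses = [rigid_transform(rot_z(0.3), np.array([0.2, -0.1, 0.0])),
         rigid_transform(rot_z(-0.8), np.array([-0.4, 0.3, 0.1]))]

# One fixed physical point, expressed in the indicator frame.
p_indicator = np.array([[0.5, 0.0, 0.0]])

unified_scans = []
for T_pose in poses:
    scan_points = apply_transform(T_pose, p_indicator)   # what the scan records
    back = apply_transform(np.linalg.inv(T_pose), scan_points)  # to indicator frame
    unified_scans.append(back)
```

Both scans of the same physical point land on identical indicator-frame coordinates, so the per-position clouds can be fused by simple accumulation in that frame.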
Therefore, the generation of the fused point cloud of the target part of the patient is realized by unifying the scanning point clouds acquired each time into a position indicating device coordinate system taking the position indicating device as an origin.
In still another implementation, with continued reference to fig. 4, this solution omits the position indicating device 4 in fig. 4 and scans the head using only the handheld scanning device 2. The point cloud of the second scan is matched against the first scan data by the ICP algorithm; if the two overlap, a fused point cloud is formed, the next scan is performed, and its point cloud is fused with the existing fused point cloud in turn. If the two do not overlap, the next scan is performed and the matching calculation is repeated until an overlapping portion appears.
The fused point cloud and the medical image point cloud are then registered to obtain a registration matrix. In addition, with the handheld structured-light point cloud collector, patterns such as the body-surface projection of the lesion and its path and the planned craniotomy range can be projected onto the target part, combining the virtual and the real.
S130: and registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relation.
After the medical image point cloud and the fusion point cloud are obtained, the medical image point cloud and the fusion point cloud can be registered to obtain a coordinate transformation relation.
Illustratively, registering the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship may include:
and registering the medical image point cloud and the fused point cloud by adopting an ICP (Iterative Closest Point) algorithm to obtain a coordinate transformation relationship.
The ICP (Iterative Closest Point) algorithm can merge point cloud data expressed in different coordinate systems into the same coordinate system, yielding the coordinate transformation relationship between the two coordinate systems. The coordinate transformation relationship can be a transformation matrix; the matrix obtained in the embodiment of the invention can be a transformation matrix that transforms the medical image point cloud into the fused point cloud, or one that transforms the fused point cloud into the medical image point cloud.
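A minimal textbook ICP (nearest-neighbour pairing plus a Kabsch rigid-transform solve) is sketched below on synthetic data; this is the generic algorithm, not the patent's specific implementation, and the helix cloud and misalignment values are illustrative.

```python
import numpy as np

def best_rigid(A, B):
    """Kabsch algorithm: the rigid (R, t) minimizing ||R a_i + t - b_i||
    over already-paired (N, 3) point sets A and B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # guard against reflections
    return R, cb - R @ ca

def icp(source, target, iters=20):
    """Minimal ICP: pair each source point with its nearest target point,
    solve the rigid transform, apply it, and repeat."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)
        matched = target[d2.argmin(axis=1)]       # brute-force nearest neighbours
        R, t = best_rigid(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic "medical image" cloud: well-separated points along a helix.
s = np.linspace(0.0, 4.0 * np.pi, 40)
model_cloud = np.stack([np.cos(s), np.sin(s), 0.1 * s], axis=1)

# Synthetic "fused" cloud: the same surface in another coordinate system.
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
t_true = np.array([0.02, -0.01, 0.03])
fused_cloud = model_cloud @ Rz.T + t_true

R_est, t_est = icp(model_cloud, fused_cloud)      # recovers Rz and t_true
```

Because ICP only refines an initial alignment, production systems typically seed it with a coarse estimate (or sufficient overlap) before iterating, exactly as in the overlap-matching loop described above.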
In one implementation, the registering the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship may include:
and registering the medical image point cloud and the fused point cloud in a point-to-point registration mode and a point-to-surface registration mode to obtain the coordinate transformation relationship.
That is, the two registration modes, point-to-point and point-to-surface, are combined to obtain the coordinate transformation relationship.
As can be seen from the above, in the present embodiment, medical image data of the target part of the patient can be received, a three-dimensional model of the target part established, and the medical image point cloud of the model extracted; spatial information data at least containing the target part is obtained by the handheld scanning device, the fused point cloud of the target part is generated from the spatial information data, and the medical image point cloud and the fused point cloud are registered to obtain the coordinate transformation relationship. According to the embodiment of the invention, the fused point cloud of the target part of the patient can be generated from the spatial information data obtained by the handheld scanning device, and registration can then be carried out between the medical image point cloud and the fused point cloud, without driving screws into the skull of the patient, pasting multiple markers on the face, or picking marker points. Scanning with the handheld scanning device takes only tens of seconds, so the registration speed is greatly improved, operation time is saved, and clinical requirements are met.
Compared with laser point cloud registration, which can only acquire point cloud information of the patient's face, the embodiment of the invention can acquire point cloud information of both the face and other parts of the patient's skull for registration. Moreover, the number of points acquired by the handheld scanning device is 10 to 200 times the number of points acquired by a laser pen in laser point cloud registration, so the registration accuracy is improved.
In addition, the different positions in the embodiment of the invention at least include facial and non-facial parts of the patient, so information from non-facial parts is used during registration, which improves the registration accuracy for non-facial parts and meets clinical requirements.
Fig. 5 is a schematic structural diagram of a spatial registration apparatus according to an embodiment of the present invention, and referring to fig. 5, the spatial registration apparatus according to the embodiment of the present invention includes:
a medical image point cloud extraction module 501, configured to receive medical image data including a target portion of a patient, establish a three-dimensional model of the target portion of the patient, and extract a medical image point cloud of the three-dimensional model;
a fused point cloud generating module 502, configured to obtain spatial information data at least including the target part of the patient through a handheld scanning device, and generate a fused point cloud of the target part of the patient according to the spatial information data;
a registration module 503, configured to register the medical image point cloud and the fusion point cloud to obtain a coordinate transformation relationship.
As can be seen, the spatial registration apparatus provided in this embodiment may receive medical image data of the target part of a patient, establish a three-dimensional model of the target part, extract the medical image point cloud of the model, obtain spatial information data at least containing the target part through the handheld scanning device, generate a fused point cloud of the target part from the spatial information data, and register the medical image point cloud and the fused point cloud to obtain the coordinate transformation relationship. The fused point cloud can thus be generated from spatial information data obtained by the handheld scanning device and registered against the medical image point cloud, without driving screws into the skull of the patient, pasting multiple markers on the face, or picking marker points.
In one implementation, the spatial information data obtained by the handheld scanning device includes scanning point clouds obtained by the handheld scanning device at different locations.
In one implementation, the handheld scanning device has a spatial positioning structure, and the fused point cloud generating module 502 may be specifically configured to:
and converting the spatial information data into the spatial coordinate system of the optical tracking equipment according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking equipment to obtain converted spatial information data, and generating a fusion point cloud of the target part of the patient according to the converted spatial information data.
In one implementation, the fused point cloud generating module 502 may be specifically configured to:
convert the scanning point clouds obtained by the handheld scanning device at different positions into the spatial coordinate system of the optical tracking device, respectively, according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, to obtain a plurality of converted scanning point clouds, and generate a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
In one implementation, the fused point cloud generating module 502 may include:
a first conversion matrix determining unit, configured to determine, for each position at which the handheld scanning device obtains a scanning point cloud, a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative positional relationship, at that position, between the handheld scanning device and the spatial positioning structure connected to it;
a second conversion matrix determining unit, configured to determine a second conversion matrix from the spatial positioning structure to the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position;
a first conversion unit, configured to convert the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, to obtain a converted scanning point cloud for that position;
and a first generating unit, configured to fuse the converted scanning point clouds from the different positions to generate a fused point cloud of the target part of the patient.
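The chain described above — handheld scanner frame to spatial positioning structure (first conversion matrix), then positioning structure to optical tracking device (second conversion matrix) — amounts to composing two rigid transforms in homogeneous coordinates. A minimal sketch follows; function names such as `scanner_to_tracker` are illustrative, not from the patent.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def scanner_to_tracker(points, T_scanner_to_structure, T_structure_to_tracker):
    """Map a scan point cloud (N, 3) from the handheld scanner frame into
    the optical tracking device frame by composing the two transforms."""
    T = T_structure_to_tracker @ T_scanner_to_structure   # second after first
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    return (pts_h @ T.T)[:, :3]
```

Scanning point clouds converted this way from each scanner position share a single frame and can then be fused, e.g. by concatenating them with `np.vstack`.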
In one implementation, the spatial information data further includes spatial position information of a position indicating device in the coordinate system of the scanning point cloud obtained by the handheld scanning device at each position, and the fused point cloud generating module 502 may include:
a third conversion matrix determining unit, configured to determine, for each position at which the handheld scanning device obtains spatial information data, a third conversion matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the coordinate system of the scanning point cloud obtained at that position;
a fourth conversion matrix determining unit, configured to determine a fourth conversion matrix from the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at that position;
a second conversion unit, configured to convert the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the third conversion matrix and the fourth conversion matrix, to obtain a converted scanning point cloud for that position;
and a second generating unit, configured to fuse the converted scanning point clouds from the different positions to generate a fused point cloud of the target part of the patient.
In one implementation, at least two of the scanning point clouds obtained at different positions have an overlapping portion therebetween, and the fused point cloud generating module 502 may be specifically configured to:
stitch the scanning point clouds having overlapping portions according to the overlapping portions, to generate a fused point cloud of the target part of the patient.
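One common way to stitch scans that share an overlap is iterative closest point (ICP). The patent does not prescribe a specific stitching algorithm, so the following is purely an illustrative sketch, using SciPy's k-d tree for nearest-neighbour correspondences.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, tgt):
    """SVD solution for the rigid transform aligning paired points."""
    sc, tc = src.mean(axis=0), tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (tgt - tc))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, tc - R @ sc

def icp_stitch(moving, fixed, iters=30):
    """Align `moving` onto `fixed` via their overlap, then merge them."""
    tree = cKDTree(fixed)
    cur = moving.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)            # nearest-neighbour matches
        R, t = best_rigid(cur, fixed[idx])
        cur = cur @ R.T + t                 # apply the incremental update
    return np.vstack([fixed, cur])          # fused point cloud
```

A production system would additionally reject outlier matches and deduplicate points in the overlap region before fusing.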
In one implementation, the spatial information data further includes spatial position information of the position indicating device in the coordinate system of the handheld scanning device, and the fused point cloud generating module 502 may be specifically configured to:
convert the scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system with the position indicating device as the origin, respectively, according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, to generate a fused point cloud of the target part of the patient.
In one implementation, the registration module 503 may be specifically configured to:
register the medical image point cloud and the fused point cloud using point-to-point registration and point-to-surface registration to obtain the coordinate transformation relationship.
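Point-to-surface (often called point-to-plane) registration minimizes distances along target surface normals; a standard linearized update step under a small-angle assumption is sketched below. This formulation is a common textbook choice, not necessarily the one used by the patented system, and the function name is hypothetical.

```python
import numpy as np

def point_to_plane_step(src, tgt, tgt_normals):
    """One linearized point-to-plane update.

    Solves for a small rotation w and translation t minimizing
    sum(((p + w x p + t) - q) . n)^2 over paired points p -> q with
    unit target normals n, then returns the updated source points.
    """
    A = np.hstack([np.cross(src, tgt_normals), tgt_normals])  # (N, 6) system
    b = np.einsum('ij,ij->i', tgt - src, tgt_normals)         # plane offsets
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    return src + np.cross(w, src) + t     # small-angle rotation plus shift
```

In a full pipeline, such update steps alternate with correspondence search, just as in point-to-point ICP; combining both error metrics is one way to realize registration in "point-to-point and point-to-surface" modes.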
Fig. 6 is a schematic structural diagram of a neurosurgical navigation system according to an embodiment of the present invention. Referring to fig. 6, the neurosurgical navigation system includes a host 10 and a handheld scanning device 2, and the handheld scanning device 2 is communicatively connected to the host 10.
The handheld scanning device 2 collects spatial information data containing at least a target part of a patient and transmits the data to the host 10, and the host 10 executes the spatial registration method of any of the foregoing embodiments.
Thus, in the neurosurgical navigation system provided by the embodiment of the present invention, the handheld scanning device 2 collects spatial information data containing at least the target part of the patient and sends it to the host 10. The host 10 can receive medical image data of the target part of the patient, establish a three-dimensional model of the target part, extract a medical image point cloud from the three-dimensional model, receive the spatial information data, generate a fused point cloud of the target part according to the spatial information data, and register the medical image point cloud with the fused point cloud to obtain a coordinate transformation relationship. Because registration is performed between the medical image point cloud and the fused point cloud, there is no need to drive screws into the patient's skull, to paste multiple markers on the face, or to select markers, which improves registration speed, saves operation time, and meets clinical requirements.
Because the different positions in the embodiment of the present invention cover at least facial and non-facial parts of the patient, the neurosurgical navigation system provided by the embodiment uses non-facial information during registration, which improves registration accuracy at non-facial parts and meets clinical requirements.
With reference to fig. 6, the neurosurgical navigation system provided in the embodiment of the present invention further includes an optical tracking device 3 communicatively connected to the host 10. The optical tracking device 3 tracks the spatial position information of the spatial positioning structure or the position indicating device and sends it to the host 10. The host 10 receives this spatial position information and performs spatial registration accordingly; for the specific registration process, reference may be made to the corresponding description of the spatial registration method, which is not repeated here.
For example, the handheld scanning device may be a handheld three-dimensional scanner, which may be of various suitable kinds to achieve different functions. It may be a structured-light point cloud collector including a digital projection component and a camera component; it may include a camera component and a laser generator that emits a line laser or a surface laser; or it may be a binocular camera.
In this way, the optical tracking device 3 assists the host 10 in performing spatial registration.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: the modules in the devices of the embodiments may be distributed in those devices as described in the embodiments, or may, with corresponding changes, be located in one or more devices different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A spatial registration method, comprising:
receiving medical image data containing a target part of a patient, establishing a three-dimensional model of the target part of the patient, and extracting a medical image point cloud of the three-dimensional model;
obtaining spatial information data containing at least the target part of the patient through a handheld scanning device, and generating a fused point cloud of the target part of the patient according to the spatial information data;
and registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship.
2. The method of claim 1, wherein the spatial information data obtained by the handheld scanning device comprises scanning point clouds obtained by the handheld scanning device at different locations.
3. The method of claim 2, wherein the handheld scanning device has a spatial positioning structure, and the step of generating a fused point cloud of the target part of the patient according to the spatial information data comprises:
converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, to obtain converted spatial information data, and generating a fused point cloud of the target part of the patient according to the converted spatial information data.
4. The method of claim 3, wherein the step of converting the spatial information data into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, to obtain converted spatial information data, and generating the fused point cloud of the target part of the patient according to the converted spatial information data comprises:
converting the scanning point clouds obtained by the handheld scanning device at different positions into the spatial coordinate system of the optical tracking device, respectively, according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, to obtain a plurality of converted scanning point clouds, and generating a fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds.
5. The method of claim 4, wherein the step of converting the scanning point clouds obtained by the handheld scanning device at different positions into the spatial coordinate system of the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device, to obtain a plurality of converted scanning point clouds, and generating the fused point cloud of the target part of the patient according to the plurality of converted scanning point clouds comprises:
determining, for each position at which the handheld scanning device obtains a scanning point cloud, a first conversion matrix from the handheld scanning device to the spatial positioning structure according to the relative positional relationship, at that position, between the handheld scanning device and the spatial positioning structure connected to it;
determining a second conversion matrix from the spatial positioning structure to the optical tracking device according to the spatial position information of the spatial positioning structure in the spatial coordinate system of the optical tracking device at that position;
converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the first conversion matrix and the second conversion matrix, to obtain a converted scanning point cloud for that position;
and fusing the converted scanning point clouds from the different positions to generate a fused point cloud of the target part of the patient.
6. The method of claim 2, wherein the spatial information data further comprises spatial position information of a position indicating device in the coordinate system of the scanning point cloud obtained by the handheld scanning device at each position, and the step of generating a fused point cloud of the target part of the patient according to the spatial information data comprises:
determining, for each position at which the handheld scanning device obtains spatial information data, a third conversion matrix from the handheld scanning device to the position indicating device according to the spatial position information of the position indicating device in the coordinate system of the scanning point cloud obtained by the handheld scanning device at that position;
determining a fourth conversion matrix from the position indicating device to the optical tracking device according to the spatial position information of the position indicating device in the spatial coordinate system of the optical tracking device at that position;
converting the scanning point cloud obtained by the handheld scanning device at that position into the spatial coordinate system of the optical tracking device according to the third conversion matrix and the fourth conversion matrix, to obtain a converted scanning point cloud for that position;
and fusing the converted scanning point clouds from the different positions to generate a fused point cloud of the target part of the patient.
7. The method of claim 2, wherein at least two of the scanning point clouds obtained at different positions have an overlapping portion therebetween, and the step of generating a fused point cloud of the target part of the patient according to the spatial information data comprises:
stitching the scanning point clouds having overlapping portions according to the overlapping portions, to generate a fused point cloud of the target part of the patient.
8. The method of claim 2, wherein the spatial information data further comprises spatial position information of a position indicating device in the handheld scanning device coordinate system, and the step of generating a fused point cloud of the target part of the patient according to the spatial information data comprises:
converting the scanning point clouds obtained by the handheld scanning device at different positions into a position indicating device coordinate system with the position indicating device as the origin, respectively, according to the spatial position information of the position indicating device in the handheld scanning device coordinate system, to generate a fused point cloud of the target part of the patient.
9. The method of claim 1, wherein the step of registering the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship comprises:
and registering the medical image point cloud and the fused point cloud using point-to-point registration and point-to-surface registration to obtain the coordinate transformation relationship.
10. A spatial registration apparatus, comprising:
a medical image point cloud extraction module, configured to receive medical image data containing a target part of a patient, establish a three-dimensional model of the target part of the patient, and extract a medical image point cloud of the three-dimensional model;
a fused point cloud generating module, configured to obtain spatial information data containing at least the target part of the patient through a handheld scanning device, and generate a fused point cloud of the target part of the patient according to the spatial information data;
and a registration module, configured to register the medical image point cloud and the fused point cloud to obtain a coordinate transformation relationship.
11. A neurosurgical navigation system, comprising a host and a handheld scanning device, wherein the handheld scanning device is communicatively connected with the host;
the handheld scanning device collects spatial information data at least containing the target part of the patient and sends the spatial information data to the host;
the host computer performs the spatial registration method of any of claims 1-9.
12. The system of claim 11, further comprising an optical tracking device communicatively coupled to the host;
the optical tracking device tracks the spatial location information of the spatial positioning structure or the position indicating device and sends the spatial location information to the host.
CN202011642180.5A 2020-12-31 2020-12-31 Spatial registration method and device and neurosurgical navigation system Pending CN114681057A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011642180.5A CN114681057A (en) 2020-12-31 2020-12-31 Spatial registration method and device and neurosurgical navigation system

Publications (1)

Publication Number Publication Date
CN114681057A true CN114681057A (en) 2022-07-01

Family

ID=82135968


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105078573A (en) * 2014-05-11 2015-11-25 复旦大学 Neuronavigation space aligning method based on handheld scanner
US20180228550A1 (en) * 2015-07-29 2018-08-16 Kyle Richard Dupont Handheld scanner for rapid registration in a medical navigation system
CN109549705A (en) * 2019-01-21 2019-04-02 华科精准(北京)医疗科技有限公司 A kind of surgical robot system and its application method
CN109692050A (en) * 2018-12-26 2019-04-30 雅客智慧(北京)科技有限公司 A kind of calibration, method for tracing and the device of dentistry plantation navigating surgery


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination