CN111462314B - Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system - Google Patents

Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system

Info

Publication number
CN111462314B
Authority
CN
China
Prior art keywords
dimensional image
module
dimensional
image data
organ
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010258116.0A
Other languages
Chinese (zh)
Other versions
CN111462314A (en)
Inventor
罗创新
黄从云
杜晓红
杨晓芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuze Technology Co ltd
Original Assignee
Shenzhen Shuze Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuze Technology Co ltd filed Critical Shenzhen Shuze Technology Co ltd
Priority to CN202010258116.0A priority Critical patent/CN111462314B/en
Publication of CN111462314A publication Critical patent/CN111462314A/en
Application granted granted Critical
Publication of CN111462314B publication Critical patent/CN111462314B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101 Computer-aided simulation of surgical operations
    • A61B2034/105 Modelling of the patient, e.g. for ligaments or bones
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Robotics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses an organ three-dimensional image reconstruction method, an operation navigation method and an operation auxiliary system. The organ three-dimensional image reconstruction method comprises the following steps: step 1, for a target reconstructed organ, scanning the target reconstructed organ with existing medical equipment to obtain CT or MRI two-dimensional image data; and step 2, importing the two-dimensional image data from step 1 into 3D-DOCTOR software, reconstructing a three-dimensional model of the target reconstructed organ with the 3D-DOCTOR software, and forming a three-dimensional image data file. Because the method only requires existing medical equipment to acquire the CT or MRI two-dimensional image data and 3D-DOCTOR software to reconstruct the three-dimensional image and form the three-dimensional image data file, the method is simple and the data are accurate.

Description

Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system
Technical field:
The invention relates to an organ three-dimensional image reconstruction method, an operation navigation method and an operation auxiliary system.
Background art:
In general, medical imaging devices produce only two-dimensional tomographic images of the human body, which makes it difficult to grasp the relative spatial positions of tissues and does not meet the requirement of analyzing the data accurately. Three-dimensional reconstruction extracts information from the image files to create a 3D model, can display the organs, tissues and lesions of the human body in three-dimensional form, and provides useful visual information for doctors' clinical diagnosis and treatment. Three-dimensional reconstruction technology is the process of rebuilding three-dimensional images of tissues or organs inside the human body from the two-dimensional image slices obtained by CT, MRI and the like, using computer graphics, image processing and related medical knowledge. Three-dimensional reconstruction can clearly and truly reflect the duct system inside an organ and its position relative to a tumor, define the extent of tumor invasion, and support formulating a treatment plan on the basis of the reconstruction result. The technology therefore has high practical value in preoperative diagnosis, preoperative evaluation, surgical treatment, prognosis evaluation and other aspects.
However, existing methods for reconstructing three-dimensional virtual images of organs are complex, and the reconstructed images are inaccurate.
In addition, after the three-dimensional virtual image of an organ has been reconstructed, a user without 3D browser software and 3D glasses can only view a two-dimensional rendering of the three-dimensional model; the image cannot be presented stereoscopically or used for operations such as surgical navigation, so the function is limited.
Summary of the invention:
One object of the invention is to provide an organ three-dimensional image reconstruction method, which solves the technical problems in the prior art that the method for reconstructing a three-dimensional virtual image of an organ is complex and the reconstructed image is inaccurate.
A further object of the invention is to provide a surgical navigation method, which solves the technical problems in the prior art that, after the three-dimensional virtual image of an organ has been reconstructed, only a two-dimensional rendering of the three-dimensional model can be viewed, the three-dimensional image cannot be presented stereoscopically or used for operations such as surgical navigation, and the function is limited.
Yet another object of the invention is to provide a surgical assistance system, which solves the problem that surgical assistance systems in the prior art cannot present stereoscopic three-dimensional images.
The objects of the invention are achieved through the following technical solutions:
a method for reconstructing a three-dimensional image of an organ, comprising the steps of: step 1, aiming at a target reconstructed organ, processing the target reconstructed organ by using the existing medical equipment to obtain CT or MRI two-dimensional image data; and 2, importing the two-dimensional image data in the step 1 into 3D-DOCTOR software, reconstructing a three-dimensional model of the target reconstructed organ by using the 3D-DOCTOR software, and forming a three-dimensional image data file.
In step 2, after the two-dimensional image data from step 1 are imported into the 3D-DOCTOR software, the name of the target reconstructed organ is set, the region of interest of the target reconstructed organ is outlined, the data format of the two-dimensional image data is set, the boundary of the target reconstructed organ is extracted interactively, and surface rendering is then performed to form the three-dimensional model of the target reconstructed organ.
The organ is a blood vessel, viscera, bone or muscle.
A surgical navigation method comprises the following steps: step A: obtaining a three-dimensional model of the target reconstructed organ with the above organ three-dimensional virtual image reconstruction method and forming a three-dimensional image data file; step B: opening the three-dimensional image data file with 3D browser software and performing the corresponding operations; step C: wearing 3D/AR glasses during the operation to view the preoperative three-dimensional image of the patient's target reconstructed organ and compare it clearly with the patient's anatomy during the operation.
In step B, the 3D browser software is used to open the three-dimensional image data file and to perform 2D/3D mode switching, annotation, dimension measurement, color setting and automatic playing.
The 3D browser software has 7 modules, which are: module 1: the model import module, used for importing a three-dimensional image data file; module 2: the list module, used for hiding and displaying the three-dimensional model, manually moving the three-dimensional model and returning the three-dimensional model to its initial position; module 3: the 2D/3D switching module, used for selecting whether to display the three-dimensional model in 3D mode or 2D mode; module 4: the annotating module, used for leaving notes/drawn lines at any point on the three-dimensional model and deleting them; module 5: the coloring module, used for coloring the model or adjusting the transparency of the model; module 6: the automatic playing module, used for automatically rotating the model; and module 7: the measuring module, used for measuring the distance between any two points on the model.
A surgical assistance system is characterized in that it comprises a computer host and 3D/AR glasses, wherein the computer host is provided with an organ three-dimensional image reconstruction module and a 3D display module; the organ three-dimensional image reconstruction module performs three-dimensional reconstruction on two-dimensional image data containing the target extract obtained from CT or MRI to form a three-dimensional model and a three-dimensional image data file, and sends the three-dimensional image data file to the 3D display module; the 3D display module displays the three-dimensional image data file and sends it to the 3D/AR glasses; and the 3D/AR glasses are used for receiving and displaying the three-dimensional image data file.
The 3D display module includes: an import sub-module, used for importing a three-dimensional image data file; a list sub-module, used for hiding and displaying the three-dimensional model, manually moving the three-dimensional model and returning the three-dimensional model to its initial position; an annotating sub-module, used for leaving notes/drawn lines at any point on the three-dimensional model and deleting them; a coloring sub-module, used for coloring the model or adjusting the transparency of the model; and a measuring sub-module, used for measuring the distance between any two points on the model.
The 3D display module further comprises an automatic playing sub-module, wherein the automatic playing sub-module is used for automatically rotating and displaying the three-dimensional model at a certain speed.
The 3D display module further comprises a 2D/3D switching sub-module, and the 2D/3D switching sub-module is used for selecting to display the three-dimensional model in a 3D mode or a 2D mode.
The 3D/AR glasses include a high-definition micro display screen.
Compared with the prior art, the invention has the following effects:
1) In the organ three-dimensional image reconstruction method of the invention, existing medical equipment is used to scan the target reconstructed organ and obtain CT or MRI two-dimensional image data; the two-dimensional image data are then imported into 3D-DOCTOR software, which reconstructs the three-dimensional image of the target reconstructed organ and forms a three-dimensional image data file. The method is simple and the resulting data are accurate.
2) In the surgical navigation method of the invention, three-dimensional image data of the target reconstructed organ are obtained with the organ three-dimensional virtual image reconstruction method; the three-dimensional image data are then opened with 3D browser software and the corresponding operations are performed; 3D/AR glasses are worn during the operation so that the preoperative three-dimensional image of the target reconstructed organ can be viewed and clearly compared with the patient's intraoperative anatomy. Based on this presentation and observation of the three-dimensional model information, the surgical path is clear to the doctor, the surgical wound can be reduced, and the accuracy of the surgeon's operation is greatly improved, thereby providing better medical service for the patient.
3) In the surgical assistance system of the invention, the two-dimensional image data are converted into a three-dimensional image that medical staff can view with the 3D/AR glasses. The three-dimensional image clearly and truly reflects the duct system inside the organ and its position relative to a tumor, making the extent of tumor invasion clear; medical staff can formulate a preoperative treatment plan from the three-dimensional image and use it for navigation assistance during the operation, reducing the difficulty and risk of surgery.
4) Other advantages of the present invention are described in detail in the examples section.
Description of the drawings:
FIG. 1 is a flow chart of the organ three-dimensional image reconstruction method disclosed in embodiment one;
FIG. 2 is a flow chart of the blood vessel three-dimensional image reconstruction method disclosed in embodiment one;
FIG. 3 is a flow chart of a viscera three-dimensional image reconstruction method disclosed in the second embodiment;
FIG. 4 is an effect map of a three-dimensional image of the liver reconstructed using a visceral three-dimensional image reconstruction method;
FIG. 5 is a flow chart of a bone three-dimensional image reconstruction method as disclosed in embodiment three;
FIG. 6 is a schematic representation of a knee MRI two-dimensional image used in a bone three-dimensional image reconstruction method;
FIG. 7 is a schematic representation of the effect of a knee joint reconstructed using a bone three-dimensional image reconstruction method;
FIG. 8 is a schematic representation of bone structure in a three-dimensional image of a knee joint;
FIG. 9 is a flow chart of a surgical navigation method disclosed in embodiment four;
FIG. 10 is a block diagram of 3D browser software in a surgical navigation method;
FIG. 11 is a schematic diagram of an operation interface of 3D browser software in a surgical navigation method;
FIG. 12 is a block diagram of a surgical assistance system disclosed in embodiment five;
FIG. 13 is a schematic diagram of a 3D display module in a surgical assistance system displaying a three-dimensional model in a 2D mode;
FIG. 14 is a schematic diagram of a 3D display module in a surgical assistance system displaying a three-dimensional model in a 3D mode;
FIG. 15 is a schematic view of an application measurement sub-module in a surgical assistance system;
FIG. 16 is a schematic diagram of 3D/AR glasses in the surgical assistance system;
fig. 17 is a schematic diagram of the structure of 3D/AR glasses in the surgical assistance system.
Detailed description of the embodiments:
the invention is described in further detail below by means of specific embodiments in connection with the accompanying drawings.
Embodiment one:
As shown in fig. 1, the present embodiment provides an organ three-dimensional image reconstruction method comprising the following steps:
Step 1: for a target reconstructed organ, scan the target reconstructed organ with existing medical equipment to obtain CT or MRI two-dimensional image data;
Step 2: import the two-dimensional image data from step 1 into 3D-DOCTOR software, reconstruct a three-dimensional model of the target reconstructed organ with the 3D-DOCTOR software, and form a three-dimensional image data file.
In step 2, after the two-dimensional image data from step 1 are imported into the 3D-DOCTOR software, the name of the target reconstructed organ is set, the region of interest of the target reconstructed organ is outlined, the data format of the two-dimensional image data is set, the boundary of the target reconstructed organ is extracted interactively, and surface rendering is then performed to form the three-dimensional model of the target reconstructed organ.
As shown in fig. 2, the portal vein is taken as an example to reconstruct the portal vein in three dimensions, and the specific working procedure is as follows:
step A1: scanning portal veins and hepatic veins of a patient by using CT equipment of a hospital to form CT angiography (CTA), acquiring CT data comprising the portal veins, storing the CT data in a Dicom format, wherein the thickness of the layer is 2.0mm, and the image resolution is 512 multiplied by 512 pixels;
step B1: importing CT data of the portal vein into 3D-DOCTOR software, and automatically setting a picture real object proportion (calization) by the software;
step C1: the target extract is set as portal vein (defined object);
step D1: drawing a portal vein region of interest (ROI);
step E1: setting a portal vein CT, and interactively extracting a portal vein boundary;
step F1: trimming the portal vein Boundary (BOUNDERIES);
step G1: portal vein surface rendering (Surface Rendering) forms a portal vein three-dimensional structure.
Embodiment two:
As shown in figs. 3 and 4, the present embodiment provides a viscera three-dimensional image reconstruction method, which is similar to the blood vessel three-dimensional image reconstruction method. Taking the liver as an example, the three-dimensional reconstruction of the liver comprises the following steps:
Step A2: scan the patient's liver with the hospital's CT equipment to form liver CT two-dimensional image data, stored in Dicom format with a slice thickness of 2.5 mm and an image resolution of 512 × 512 pixels;
Step B2: import the liver CT two-dimensional image data into the 3D-DOCTOR software; the software automatically sets the image-to-physical scale;
Step C2: set the target extract to the liver;
Step D2: reconstruct a 3D model comprising the liver and the intrahepatic blood vessels by automatic identification of CT values combined with manual correction (a sketch of this idea is given below).
The 3D model of the liver after three-dimensional image reconstruction is shown in fig. 4.
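Step D2's "automatic identification of CT values with manual correction" is described only at a high level in the patent. The sketch below illustrates one common reading of that idea — window the Hounsfield values, keep the largest connected component as the automatic first pass, and expose a hook for operator corrections; the HU window and the correction interface are illustrative assumptions, not the patent's own algorithm.

```python
# Sketch of CT-value automatic identification plus manual correction (assumed values).
import numpy as np
from scipy import ndimage

def auto_identify_liver(hu_volume, hu_low=40, hu_high=140):
    """Keep voxels inside an assumed liver HU window, then take the largest
    connected component as the automatic first-pass liver mask."""
    candidate = (hu_volume >= hu_low) & (hu_volume <= hu_high)
    labels, n = ndimage.label(candidate)
    if n == 0:
        return candidate
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

def apply_manual_correction(mask, add_voxels=(), remove_voxels=()):
    """Stand-in for interactive correction: voxels the operator adds or removes."""
    corrected = mask.copy()
    for z, y, x in add_voxels:
        corrected[z, y, x] = True
    for z, y, x in remove_voxels:
        corrected[z, y, x] = False
    return corrected
```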
Embodiment three:
As shown in figs. 5 to 8, the present embodiment provides a bone three-dimensional image reconstruction method, which is similar to the blood vessel three-dimensional image reconstruction method. Taking the knee joint as an example, the three-dimensional reconstruction of the knee joint comprises the following steps:
Step A3: scan the patient's knee joint with the hospital's magnetic resonance imaging (MRI) equipment to form knee joint MRI two-dimensional image data, stored in Dicom format with a slice thickness of 2.5 mm and an image resolution of 512 × 512 pixels;
Step B3: import the knee joint MRI two-dimensional image data into the 3D-DOCTOR software; the software automatically sets the image-to-physical scale;
Step C3: set the target extract to the knee joint;
Step D3: reconstruct a 3D model comprising the knee joint and the muscles surrounding it by automatic identification of MRI values combined with manual correction.
The 3D models of the knee joint and the surrounding muscles after three-dimensional image reconstruction are shown in figs. 7 and 8 (a sketch of assembling the MRI slices into a spatially calibrated volume is given below).
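The step list above relies on the MRI slices being assembled into a spatially calibrated volume (2.5 mm slice thickness, 512 × 512 in-plane); the sketch below shows how that assembly might look with pydicom. The directory name is an assumption; keeping the voxel spacing alongside the voxel data is what later allows true distances to be measured on the reconstructed model.

```python
# Sketch: assemble the knee MRI series into a spatially calibrated volume.
import glob
import numpy as np
import pydicom

files = sorted(glob.glob("knee_mri/*.dcm"))                  # assumed directory
slices = sorted((pydicom.dcmread(f) for f in files),
                key=lambda s: float(s.ImagePositionPatient[2]))

volume = np.stack([s.pixel_array for s in slices])           # (n_slices, 512, 512)
spacing_mm = (float(slices[0].SliceThickness),               # 2.5 mm per step A3
              float(slices[0].PixelSpacing[0]),
              float(slices[0].PixelSpacing[1]))

# Physical extent of the scanned region in millimeters.
extent_mm = tuple(n * s for n, s in zip(volume.shape, spacing_mm))
print(volume.shape, spacing_mm, extent_mm)
```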
Embodiment four:
As shown in fig. 9, the present embodiment provides a surgical navigation method, described below taking the portal vein and hepatic veins as an example; it comprises the following steps:
Step A4: obtain a three-dimensional model of the target reconstructed blood vessel with the blood vessel three-dimensional virtual image reconstruction method described above and form a three-dimensional image data file;
Step B4: open the three-dimensional image data file with 3D browser software and perform the corresponding operations;
Step C4: wear 3D/AR glasses during the operation to view the preoperative three-dimensional image of the patient's target reconstructed blood vessel and compare it clearly with the patient's anatomy during the operation.
In step B4, the 3D browser software is used to open the three-dimensional image data file and to perform 2D/3D mode switching, annotation, dimension measurement, color setting or automatic playing.
As shown in figs. 10 and 11, the 3D browser software has 7 modules; as shown in fig. 10, the 7 modules include:
module 1: the importing model module is used for importing a three-dimensional image data file;
module 2: the list module is used for hiding and displaying the three-dimensional model, manually moving the three-dimensional model and returning the three-dimensional model to the initial position;
module 3: the 2D/3D switching module is used for selecting to display the three-dimensional model in a 3D mode or a 2D mode;
module 4: the annotating module is used for leaving notes/drawing lines and deleting the notes/drawing lines at any point on the three-dimensional model;
module 5: the coloring module is used for coloring the model or adjusting the transparency of the model;
Module 6: the automatic playing module is used for automatically rotating the model;
Module 7: the measuring module is used for measuring the distance between any two points on the model (a sketch of this calculation is given after the list).
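The patent does not disclose how the 3D browser software implements module 7, but its core calculation is simply the straight-line distance between two picked points, expressed in the units of the exported model (millimeters if the mesh was written with millimeter coordinates). The sketch below shows that calculation; the point coordinates are made-up examples.

```python
# Core of the measuring module: distance between two points picked on the model.
import numpy as np

def measure_distance(point_a, point_b):
    """Euclidean distance between two 3D points picked on the model surface."""
    return float(np.linalg.norm(np.asarray(point_b) - np.asarray(point_a)))

# Example with two illustrative points on the portal vein mesh (coordinates in mm).
print(measure_distance((12.0, 30.5, 44.2), (18.4, 28.1, 51.0)))  # ~9.6 mm
```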
Embodiment five:
As shown in fig. 12, the present embodiment provides a surgical assistance system comprising a computer host 3 and 3D/AR glasses 5. The computer host 3 is provided with an organ three-dimensional image reconstruction module 31 and a 3D display module 32. The organ three-dimensional image reconstruction module 31 performs three-dimensional reconstruction on two-dimensional image data containing the target extract obtained from CT or MRI to form a three-dimensional model and a three-dimensional image data file, and sends the three-dimensional image data file to the 3D display module 32; the 3D display module 32 displays the three-dimensional image data file and sends it to the 3D/AR glasses 5; the 3D/AR glasses 5 are used for receiving and displaying the three-dimensional image data file. Medical staff can view the three-dimensional image with the 3D/AR glasses 5; the three-dimensional image clearly and truly reflects the duct system inside the organ and its position relative to a tumor, making the extent of tumor invasion clear, so that medical staff can formulate a preoperative treatment plan from the three-dimensional image and use it for navigation assistance during the operation, thereby reducing the difficulty and risk of surgery.
Specifically, the organ three-dimensional image reconstruction module 31 reconstructs the two-dimensional image data into a three-dimensional image data file using the organ three-dimensional image reconstruction method described in embodiment one, embodiment two or embodiment three.
The present embodiment is specifically described taking the portal vein as an example. The target extract is an internal blood vessel of the human body, such as the portal vein, a hepatic vein, the inferior vena cava or an artery.
The 3D display module 32 includes:
an import sub-module 321 for importing a three-dimensional image data file;
a list sub-module 322 for hiding and displaying the three-dimensional model, manually moving the three-dimensional model, and returning the three-dimensional model to the initial position;
an annotating sub-module 323 for leaving notes/drawn lines at any point on the three-dimensional model and deleting them; specifically, an annotation can be left at any point on the model by clicking that point, a line can be drawn from any point on the three-dimensional model by holding down the left mouse button and dragging, and the most recently drawn group of lines can be deleted by clicking a drawn line on the three-dimensional model once;
A coloring sub-module 324 for coloring the model or adjusting the transparency of the model;
the measurement submodule 325 is used for measuring the distance between any two points on the model.
Medical staff can directly add annotates, drawing lines, coloring and measuring distances in the three-dimensional images, so that the medical staff can perform preoperative preparation in more detail.
Specifically, the import sub-module 321 supports importing files in various formats such as OBJ and STL. In the annotating sub-module 323, clicking any point on the model leaves an annotation at that point; holding down the left mouse button and dragging draws a line from any point on the three-dimensional model; and clicking a drawn line on the three-dimensional model once deletes the most recently drawn group of lines. In the coloring sub-module 324, after any part of the three-dimensional model is selected, a color is chosen from the color disc to color the selected part, and dragging the slider in the menu adjusts the transparency of the three-dimensional model (a minimal sketch of this coloring/transparency behavior is given below). As shown in fig. 15, in the measurement sub-module 325, the distance between two points is measured by clicking any two points on the three-dimensional model.
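The coloring sub-module 324 is described only in terms of its user-facing behavior. As a rough sketch of what "color the selected part and adjust transparency" amounts to in a generic viewer, the example below loads the exported STL model with VTK and sets a color and opacity on it; the file name, color and opacity values are illustrative assumptions, and this is not the patent's own 3D display module.

```python
# Minimal VTK sketch of coloring and transparency on an imported model.
import vtk

reader = vtk.vtkSTLReader()
reader.SetFileName("portal_vein.stl")        # file produced by the reconstruction step

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(reader.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetColor(0.8, 0.2, 0.2)  # color chosen from the "color disc"
actor.GetProperty().SetOpacity(0.5)          # transparency slider position

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```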
The 3D display module 32 further includes an automatic playing sub-module 326, which is configured to automatically rotate and display the three-dimensional model at a certain speed.
The 3D display module 32 further includes a 2D/3D switching sub-module 327, as shown in fig. 13 and 14, where the 2D/3D switching sub-module 327 is used to select to display the three-dimensional model in the 3D mode or the 2D mode.
As shown in figs. 16 and 17, the 3D/AR glasses 5 include a micro-display imaging optical system comprising a micro-display 10 and an optical prism 20. The optical prism is provided with an incident surface 21, a total reflection surface 22 and an exit surface 23; the incident surface 21 is located at the top of the optical prism 20, the exit surface 23 and the total reflection surface 22 are located on the left and right sides of the optical prism 20, and the exit surface 23 is located on the side close to the human eyeball 40. Light generated by the video image on the micro-display enters the optical prism 20 through the incident surface 21, is reflected by the total reflection surface 22 towards the exit surface 23, and is refracted out of the exit surface 23 to reach the human eyeball for imaging; the video image on the micro-display is thus magnified by the optical prism 20 and forms an enlarged virtual image seen by the human eye 40.
The micro-display 10 in the 3D/AR glasses is a high-definition micro display screen. The optical prism 20 of the 3D/AR glasses does not obstruct the user's view: while viewing the virtual image of the three-dimensional video through the optical prism 20, the human eye can still see the real scene beyond it. The optical prism head-mounted display is see-through, lightweight, comfortable to wear, low in power consumption and usable in any posture (an illustrative magnification calculation is given below).
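As an illustrative magnification calculation (the patent gives no optical parameters for the prism, so the focal length and distances below are assumed values), the enlarged virtual image can be described with the thin-lens relation for a simple magnifier:

$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad m = -\frac{d_i}{d_o}.$$

With an assumed effective focal length $f = 25\ \text{mm}$ and the micro-display placed at $d_o = 20\ \text{mm}$, inside the focal length, $\frac{1}{d_i} = \frac{1}{25} - \frac{1}{20} = -\frac{1}{100}$, so $d_i = -100\ \text{mm}$ (a virtual image on the same side as the display) and $m = 5$: a 10 mm-wide video image on the micro-display 10 would appear as a 50 mm-wide virtual image seen through the optical prism 20.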
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto, and any other changes, modifications, substitutions, combinations, and simplifications made without departing from the spirit and principles of the present invention are included in the scope of the present invention.

Claims (2)

1. A surgical assistance system, characterized in that it comprises a computer host and 3D/AR glasses, wherein the computer host is provided with an organ three-dimensional image reconstruction module and a 3D display module; the organ three-dimensional image reconstruction module acquires CT or MRI two-dimensional image data of a target reconstructed organ using existing medical equipment, imports the CT or MRI two-dimensional image data into 3D-DOCTOR software, and reconstructs a three-dimensional model of the target reconstructed organ with the 3D-DOCTOR software to form a three-dimensional image data file; the 3D display module displays the three-dimensional image data file and sends it to the 3D/AR glasses; and the 3D/AR glasses are used for receiving and displaying the three-dimensional image data file;
the 3D display module includes:
an importing sub-module for importing a three-dimensional image data file;
the list sub-module is used for hiding and displaying the three-dimensional model, manually moving the three-dimensional model and returning the three-dimensional model to the initial position;
an annotating sub-module, which is used for leaving notes/drawing lines and deleting the notes/drawing lines at any point on the three-dimensional model;
a coloring sub-module for coloring the model or adjusting the transparency of the model;
the measuring submodule is used for measuring the distance between any two points on the model;
the automatic playing sub-module is used for automatically rotating and displaying the three-dimensional model at a certain speed;
and the 2D/3D switching sub-module is used for selecting to display the three-dimensional model in a 3D mode or a 2D mode.
2. The surgical assistance system according to claim 1, characterized in that the 3D/AR glasses include a high-definition micro display screen.
CN202010258116.0A 2020-04-03 2020-04-03 Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system Active CN111462314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010258116.0A CN111462314B (en) 2020-04-03 2020-04-03 Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010258116.0A CN111462314B (en) 2020-04-03 2020-04-03 Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system

Publications (2)

Publication Number Publication Date
CN111462314A CN111462314A (en) 2020-07-28
CN111462314B (en) 2023-07-21

Family

ID=71680276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010258116.0A Active CN111462314B (en) 2020-04-03 2020-04-03 Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system

Country Status (1)

Country Link
CN (1) CN111462314B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538665B (en) * 2021-07-21 2024-02-02 无锡艾米特智能医疗科技有限公司 Organ three-dimensional image reconstruction compensation method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9886797B2 (en) * 2013-08-13 2018-02-06 Boston Scientific Scimed, Inc. Comparative analysis of anatomical items
CN108847111B (en) * 2018-06-13 2020-11-20 广州迈普再生医学科技股份有限公司 Craniocerebral simulation model and preparation method thereof

Also Published As

Publication number Publication date
CN111462314A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
Bernhardt et al. The status of augmented reality in laparoscopic surgery as of 2016
RU2714665C2 (en) Guide system for positioning patient for medical imaging
RU2740259C2 (en) Ultrasonic imaging sensor positioning
JP6073971B2 (en) Medical image processing device
Bichlmeier et al. Contextual anatomic mimesis hybrid in-situ visualization method for improving multi-sensory depth perception in medical augmented reality
JP5551957B2 (en) Projection image generation apparatus, operation method thereof, and projection image generation program
JP4421016B2 (en) Medical image processing device
RU2627147C2 (en) Real-time display of vasculature views for optimal device navigation
EP2157905B1 (en) A method for tracking 3d anatomical and pathological changes in tubular-shaped anatomical structures
Giancardo et al. Textureless macula swelling detection with multiple retinal fundus images
Gsaxner et al. The HoloLens in medicine: A systematic review and taxonomy
US20090087046A1 (en) Digital blink comparator apparatus and software and methods for operation
CN112740285A (en) Overlay and manipulation of medical images in a virtual environment
Kutter et al. Real-time volume rendering for high quality visualization in augmented reality
Abou El-Seoud et al. An interactive mixed reality ray tracing rendering mobile application of medical data in minimally invasive surgeries
JP2014064721A (en) Virtual endoscopic image generation apparatus, virtual endoscopic image generation method, and virtual endoscopic image generation program
CN116188677A (en) Three-dimensional reconstruction method, system and device for vascular intervention operation area
CN111462314B (en) Organ three-dimensional image reconstruction method, operation navigation method and operation auxiliary system
KR101988531B1 (en) Navigation system for liver disease using augmented reality technology and method for organ image display
Robb Virtual endoscopy: evaluation using the visible human datasets and comparison with real endoscopy in patients
Karner et al. Single-shot deep volumetric regression for mobile medical augmented reality
CN110718284A (en) Three-dimensional medical image data interaction method and system
Klein et al. Visual computing for medical diagnosis and treatment
CN114283179A (en) Real-time fracture far-near end space pose acquisition and registration system based on ultrasonic images
CN112950774A (en) Three-dimensional modeling device, operation planning system and teaching system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant