WO2019047820A1 - Image display method, device and system for endoscopic minimally invasive surgery navigation - Google Patents
Image display method, device and system for endoscopic minimally invasive surgery navigation
- Publication number
- WO2019047820A1 (PCT/CN2018/103929)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- endoscope
- endoscopic
- minimally invasive
- invasive surgery
- Prior art date
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present invention relates to the field of medical technology, and in particular, to an image display method, device and system for endoscopic minimally invasive surgery navigation.
- Skull base tumors lie deep within the skull and are difficult to distinguish from adjacent structures.
- Their diagnosis and treatment involve multiple disciplines, including neurosurgery and otolaryngology-head and neck surgery, and complete tumor removal is difficult.
- Endoscopic minimally invasive techniques are simple to perform and allow quick postoperative recovery. Endoscopic image guidance can avoid the damage to facial skin structures caused by the surgical approach, thereby reducing the probability of various complications.
- the present invention provides an image display method, apparatus and system for endoscopic minimally invasive surgery navigation.
- the invention provides an image display method for endoscopic minimally invasive surgery navigation, comprising the following steps:
- In step S4, the distance between the endoscope and the surgical target is obtained according to the position and orientation of the endoscope tip and the registered CT image, and the relative position between the endoscope and the patient's body is also obtained.
- In step S6, the orthogonal section view is displayed, and a view of the relative position between the endoscope and the patient's body is also displayed.
- Before step S2, the method further includes: S7, performing 3D segmentation of predetermined key anatomical structures in the CT image using region growing and fast marching methods, and labeling the segmented structures.
- After step S7, the method further includes: S8, performing color mapping on the key anatomical structures obtained by 3D segmentation.
- After step S4, the method further includes: S9, acquiring an endoscopic image in real time; S10, performing real-time virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
- step S10 specifically includes:
- The cut cube data are differentially rendered using a distance-weighted ray-casting method to obtain the rendered cube data.
- S13: Perform virtual-real fusion of the rendered cube data and the endoscopic image to obtain a virtual-real fusion image, and display it.
- Before step S5, the method further includes the following steps:
- the method further includes:
- The invention provides an image display device for endoscopic minimally invasive surgery navigation, comprising a display screen, a processor and a data interface. The data interface is used to connect an endoscope and a CT device to obtain an endoscopic image and a preoperative CT image; the processor is configured to perform the image display method of any of the above embodiments of endoscopic minimally invasive surgery navigation to obtain the corresponding surgical navigation image; and the display screen is used to display the image obtained by the processor.
- the processor comprises a CPU processing unit and a GPU processing unit, wherein the CPU processing unit is used for computing and image configuration, and the GPU processing unit is used for image processing.
- The processor is further configured to acquire, according to the real-time position of the endoscope, the corresponding relative position view of the endoscope and the patient's body, the orthogonal section view, and the virtual-real fusion image, and to update them to the display screen for display.
- The invention provides an endoscopic minimally invasive surgery navigation system comprising a computer device and an optical tracking device. The optical tracking device is used to acquire the position of the endoscopic surgical tool and track the patient's posture in real time; the computer device is used to acquire the endoscopic image and the CT image and, combining the position information tracked by the optical tracking device, to acquire and display the corresponding surgical navigation image using the image display method of any of the above embodiments.
- the computer device comprises the image display device of any of the above embodiments.
- the endoscopic minimally invasive surgical navigation system is applied to nasal and sinus malignant tumor surgery and skull base tumor surgery navigation.
- the image display method of the above-mentioned endoscopic minimally invasive surgery navigation has the following advantages over the conventional endoscopic surgical navigation display method:
- The present invention orthogonally cuts the CT image in directions parallel and perpendicular to the endoscope, effectively avoiding the shortcomings of the conventional three orthogonal views in conveying distance, and also displays the relative position between the surgical instrument (such as the endoscope) and the patient's body, so as to accurately indicate the distance relationship between the instrument and the body. In addition, the orthogonal section view is differentially rendered by the distance-weighted rendering method, so that the distance between the endoscope and the target position is displayed more clearly.
- The present invention displays a relative position view between the endoscope and the patient's body, an orthogonal section view of the CT image referenced to the endoscope, and a virtual-real fusion view of the endoscopic image and the CT image, enabling the doctor to combine the various views to accurately understand the endoscope's position and the intraoperative situation and improving the safety of endoscopic minimally invasive surgery.
- The virtual-real fusion image of the present invention not only displays the image detected by the endoscope in real time, but also differentially renders the cut cube data using the distance-weighted rendering method, which reduces computational complexity and accelerates rendering while providing more accurate depth perception. It also conveys the relative relationships between anatomical structures more effectively, so that doctors can determine occlusion and front-to-back relationships of anatomical structures more clearly, providing more accurate assistance for diagnosis and treatment.
- FIG. 1 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
- FIG. 2 is a schematic view showing the positioning of an endoscopic surgical tool with an optical tracking device during the endoscopic surgical navigation process of the present invention.
- FIG. 3 is a schematic view of the orthogonal sectioning of CT images performed during endoscopic surgery of the present invention.
- FIG. 4 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to another embodiment of the present invention.
- FIG. 5 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to still another embodiment of the present invention.
- FIG. 6 is a schematic diagram of the detailed process of virtual-real fusion of an endoscopic image and a CT image according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram of cube cutting of a CT image according to the endoscope position during endoscopic surgical navigation according to an embodiment of the present invention.
- FIG. 8 is a schematic flow chart of processing the endoscopic image in an image display method for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
- FIG. 9 is a schematic diagram of edge Gaussian attenuation and transparency mapping of an endoscopic image according to an embodiment of the present invention.
- FIG. 10 is a schematic diagram of an image display interface for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
- FIG. 11 is a functional block diagram of an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
- FIG. 12 is a block diagram of the specific functions of the processor in an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
- FIG. 13 is a schematic structural view of an endoscopic minimally invasive surgery navigation system according to an embodiment of the present invention.
- The present invention provides an image display method for the endoscopic minimally invasive surgery navigation process, applicable for example, but not limited, to nasal and sinus malignant tumor surgery and skull base tumor surgery, and of course also to other procedures that use an endoscope.
- FIG. 1 illustrates an image display method during an endoscope minimally invasive surgery navigation process according to an embodiment of the present invention.
- the image display method includes the following steps:
- a preoperative scan of a predetermined portion of the patient is performed using a CT device to obtain a pre-operative CT image, which is a three-dimensional view.
- the predetermined part is, for example, a human head.
- S102 Perform registration between the CT image and the patient posture to obtain the registered CT image
- the position corresponding to the key anatomy in the CT image is determined according to a predetermined key anatomy and used as a reference point.
- The optical tracking device locates the marker points on the patient's body that correspond to these reference points, and the 3PCHM (3-Points Convex Hull Matching) rapid registration algorithm is then used to compute the rotation matrix and translation vector between the CT image and the patient posture and to obtain the registered CT image.
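The 3PCHM algorithm itself is not detailed in this text; as a rough illustration of point-based rigid registration in general, the sketch below estimates a rotation matrix and translation vector from corresponding CT reference points and tracked marker points using the standard SVD (Kabsch) method. Function and variable names are illustrative, and this is not the patent's 3PCHM procedure.

```python
import numpy as np

def rigid_registration(ct_points, patient_points):
    """Estimate rotation R and translation t so that R @ ct + t ~= patient.

    ct_points, patient_points: (N, 3) arrays of corresponding points
    (reference points in the CT image and tracked marker points on the patient).
    Illustrative SVD-based (Kabsch) solution, not the patent's 3PCHM algorithm.
    """
    ct_points = np.asarray(ct_points, dtype=float)
    patient_points = np.asarray(patient_points, dtype=float)

    # Center both point sets on their centroids.
    ct_centroid = ct_points.mean(axis=0)
    pat_centroid = patient_points.mean(axis=0)
    ct_centered = ct_points - ct_centroid
    pat_centered = patient_points - pat_centroid

    # Cross-covariance matrix and its SVD give the optimal rotation.
    H = ct_centered.T @ pat_centered
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    t = pat_centroid - R @ ct_centroid
    return R, t
```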
- The tip of the endoscope is the end that extends into the patient's body, that is, the endoscope's detection lens. Because the tip extends into the patient, its position and orientation are difficult to obtain directly, and they are therefore derived from the position of the endoscopic surgical tool located outside the patient's body. As shown in FIG. 2, the surgical tool 300 of the endoscope carries four marker points, which are tracked by the optical tracking device 200 to acquire their position information.
- The coordinate transformation relationship between the two sets of volume data can be solved by the DLT (Direct Linear Transform) method.
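The registration formula itself is not reproduced in this text; purely for orientation, a generic rigid mapping of the kind described above, written with the rotation matrix and translation vector mentioned earlier (an illustrative form, not necessarily the patent's exact formula), is:

$$\mathbf{p}_{\mathrm{patient}} = \mathbf{R}\,\mathbf{p}_{\mathrm{CT}} + \mathbf{t},
\qquad \mathbf{R} \in SO(3),\ \mathbf{t} \in \mathbb{R}^{3}$$

where $\mathbf{p}_{\mathrm{CT}}$ is a point in CT image space and $\mathbf{p}_{\mathrm{patient}}$ is the corresponding point in the tracked patient space.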
- The position and orientation of the endoscope tip are thus acquired in real time, so that changes in the endoscope's position can be tracked promptly for subsequent image updates.
- The distance between the endoscope and the surgical target (for example, the tumor to be resected) can be obtained from the real-time position of the endoscope provided by the optical tracking device, and the relative position between the surgical instrument and the surgical target can also be obtained.
- The registered CT image is orthogonally cut along directions parallel and perpendicular to the endoscope, so that the distance between the endoscope and the target position is shown on the cut planes and the pose of the endoscope and surgical tool is displayed more effectively and intuitively.
- A distance-weighted ray-casting method is used to differentially render the orthogonally cut data, so that the distance between the endoscope and the target position is displayed more clearly.
- Taking the pointing direction of the endoscope as the positive direction W, the parallel cutting plane is the UW plane; the perpendicular direction V is defined by the right-handed coordinate system, and the orthogonal cutting plane is the VW plane.
- The sampling factor of each sample point along each ray is assigned a transparency value corresponding to its distance d, and the cube data are then rendered differently according to these transparency values. Through this sampling weighting factor, voxels farther from the endoscope tip are absorbed more strongly in the ray-casting function, so that anatomical structures at different positions in the CT image data are rendered with visible differentiation.
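As a rough sketch of such distance-weighted front-to-back compositing along one ray (a linear attenuation with depth is assumed here; the patent's exact weighting and transparency mapping are not reproduced):

```python
import numpy as np

def distance_weighted_ray(samples_rgb, samples_alpha, depth_len):
    """Front-to-back compositing of one ray through the cut cube.

    samples_rgb:   (K, 3) colors of the K sample points along the ray
    samples_alpha: (K,)   base opacities of the sample points
    depth_len:     cube depth d; samples are assumed evenly spaced over [0, d]

    Sketch only: opacity is attenuated linearly with distance from the
    endoscope focal plane, so far voxels contribute less (an assumption,
    not necessarily the patent's weighting function).
    """
    K = len(samples_alpha)
    depths = np.linspace(0.0, depth_len, K)
    weights = 1.0 - depths / depth_len          # 1 at the tip, 0 at depth d
    color = np.zeros(3)
    remaining = 1.0                              # transmittance so far
    for rgb, a, w in zip(samples_rgb, samples_alpha, weights):
        a_eff = np.clip(a * w, 0.0, 1.0)         # distance-weighted opacity
        color += remaining * a_eff * np.asarray(rgb, dtype=float)
        remaining *= (1.0 - a_eff)
        if remaining < 1e-3:                     # early ray termination
            break
    return color
```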
- The CT image is orthogonally cut in directions parallel and perpendicular to the endoscope, effectively avoiding the shortcomings of the conventional three orthogonal views in conveying distance, and the relative position between the surgical instrument (such as the endoscope) and the surgical target is displayed, accurately indicating the distance relationship between the instrument and the human body. The orthogonal section view is differentially rendered by the distance-weighted rendering method, so that the distance between the endoscope and the target position is displayed more clearly.
- a relative positional view between the endoscope and the patient's body is also displayed.
- the method further includes:
- 3D segmentation of the predetermined key anatomical structures is performed using region growing and fast marching methods, and the 3D-segmented key anatomical structures are labeled.
- The predetermined key anatomical structures, such as blood vessels, tumors, and nerves, are determined according to the specific surgical site, and their specific locations in the CT image are identified by the physician.
- The 3D-segmented key anatomical structures undergo the rendering process of step S105, so that they are displayed with differentiation in the orthogonal section views, making it easy for doctors to observe them during surgery and quickly identify the surgical target, such as the tumor to be removed.
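A minimal sketch of the region-growing part of such a segmentation is shown below (6-connected neighborhood, intensity-threshold acceptance criterion; the fast marching stage and the patent's actual parameters are omitted, and all names are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, low, high):
    """Simple 6-connected seeded region growing on a CT volume.

    volume:    3D numpy array of intensities (e.g., Hounsfield units)
    seed:      (z, y, x) voxel index chosen by the physician inside the structure
    low, high: intensity range accepted into the region

    Illustrative only; the patent combines region growing with a fast
    marching method, which is not reproduced here.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([tuple(seed)])
    mask[tuple(seed)] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask  # boolean label mask of the segmented key anatomy
```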
- Color mapping of the key anatomical structures obtained by 3D segmentation makes the distinction between them in the image more obvious, speeds up the virtual-real fusion process, and helps guarantee accurate distance perception during that fusion.
- Data farther from the position of the endoscope tip are also attenuated in the color rendering, i.e. the farther away a structure is, the less visible it becomes.
- The relative relationships between key anatomical structures are thereby conveyed more effectively, so that doctors can determine the occlusion and front-to-back relationships between them more clearly, providing more accurate assistance for diagnosis and treatment.
- the method further includes:
- the endoscope's detection lens is used to extend into the patient to obtain an endoscopic image.
- The endoscopic image is acquired in real time as the endoscope moves during the operation.
- S110: Perform real-time virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and display it.
- This embodiment of the invention displays a relative position view between the endoscope and the patient's body, an orthogonal section view of the CT image referenced to the endoscope, and a virtual-real fusion view of the endoscopic image and the CT image, allowing the doctor to combine the various views to accurately understand the endoscope's position and the intraoperative situation and improving the safety of endoscopic minimally invasive surgery.
- step S110 includes:
- The cutting cube parameters are determined, and the registered CT image is cut by the cube constructed from these parameters to obtain the cube data.
- The cutting cube parameters are specifically: in the CT image space O_CT, starting from the focal plane O_V of the endoscope, the axial direction of the endoscope is taken as the depth direction with length d to form one side of the cube; the other two sides m and n of the cube are set according to the size of the endoscope display range.
- the cut cube data can be obtained by cutting the registered CT image according to the constructed cube.
- the CT image is cut according to the cube constructed by the cube parameter shown in Fig. 7.
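A minimal sketch of this kind of cube extraction, assuming the endoscope pose has already been expressed in CT voxel coordinates and using trilinear interpolation from SciPy (function and parameter names are illustrative, not the patent's implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cut_cube(ct_volume, origin, u, v, w, m, n, d):
    """Resample an m x n x d data cube from the registered CT volume.

    origin:  focal-plane point O_V of the endoscope, in CT voxel coordinates
    u, v, w: unit vectors of the endoscope frame (w = viewing/depth direction)
    m, n, d: cube side lengths in voxels (display width, height, depth)

    Sketch under simplifying assumptions (isotropic voxels, pose already
    in voxel space); trilinear interpolation via SciPy does the resampling.
    """
    origin, u, v, w = (np.asarray(a, dtype=float) for a in (origin, u, v, w))
    i, j, k = np.meshgrid(np.arange(m), np.arange(n), np.arange(d),
                          indexing='ij')
    # Position of every cube sample: origin + i*u + j*v + k*w.
    pts = (origin
           + i[..., None] * u
           + j[..., None] * v
           + k[..., None] * w)                  # shape (m, n, d, 3)
    coords = np.moveaxis(pts, -1, 0)            # shape (3, m, n, d)
    return map_coordinates(ct_volume, coords, order=1, mode='nearest')
```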
- The distance-weighted ray-casting method is used to differentially render the cut cube data. Specifically, the distance from the front surface of the data cube (i.e. the focal plane O_V of the endoscope in FIG. 7) to its rear surface is d; as this distance increases (i.e. the value of d increases), the sampling factor of each sample point along each ray is assigned a transparency value corresponding to the d value, and the cube data are rendered differently according to these transparency values.
- Here m, n, and d are the side lengths of the data cube, and (x, y, z) is the coordinate of the sampling position.
- S113: Perform real-time virtual-real fusion of the rendered cube data with the endoscopic image to obtain a virtual-real fusion image, and display it.
- After the cube data are obtained and rendered, they undergo virtual-real fusion with the endoscopic image obtained in step S109 to obtain a virtual-real fusion image.
- The virtual-real fusion image of this embodiment not only displays the image detected by the endoscope in real time, but also differentially renders the cut cube data using the distance-weighted rendering method, which reduces computational complexity and accelerates rendering while providing more accurate depth perception. It also conveys the relative relationships between anatomical structures more effectively, so that the doctor can determine occlusion and front-to-back relationships of anatomical structures more clearly, providing more accurate assistance for diagnosis and treatment.
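As a rough illustration of the final compositing step (a plain per-pixel alpha blend is assumed; the patent's exact fusion operator may differ), consider:

```python
import numpy as np

def fuse(rendered_ct, endo_img, endo_alpha):
    """Blend the endoscopic image over the rendered CT cube view.

    rendered_ct: (H, W, 3) float image produced by the distance-weighted
                 ray casting of the cut cube data
    endo_img:    (H, W, 3) float endoscopic image (distortion corrected)
    endo_alpha:  (H, W)    per-pixel opacity of the endoscopic image,
                 e.g. from the radial transparency mapping / edge attenuation

    Simple compositing sketch; not a definitive implementation.
    """
    a = endo_alpha[..., None]                 # (H, W, 1) for broadcasting
    return a * endo_img + (1.0 - a) * rendered_ct
```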
- Step S114: perform distortion correction on the endoscopic image.
- Correcting the distortion of the endoscopic image allows an image with severe radial distortion to be recovered quickly, eliminating the mismatch between the endoscopic image and the virtual objects in the virtual-real fusion display that would otherwise be caused by image distortion.
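A minimal sketch of such a correction using OpenCV's standard radial/tangential distortion model is shown below; the intrinsic matrix and distortion coefficients are placeholder values that would come from a prior endoscope calibration, which the patent does not specify:

```python
import cv2
import numpy as np

# Intrinsics and distortion coefficients from a prior endoscope calibration
# (placeholder values; in practice obtained with cv2.calibrateCamera).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

def undistort_frame(frame):
    """Correct radial distortion of one endoscopic video frame."""
    return cv2.undistort(frame, K, dist)
```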
- Step S115: perform transparency mapping on the endoscopic image based on the distance from the image center, and apply edge attenuation to the transparency-mapped endoscopic image.
- The distortion-corrected endoscopic image is transparency-mapped based on the distance from the image center, with the radius used as the mapping parameter: the farther a pixel is from the image center, the higher its transparency, i.e. the more transparent it becomes. In this way, the central region of the endoscopic image is preserved, and layered rendering is achieved as the edge of the endoscopic image is attenuated, which effectively improves the immersion of the fused display and makes the blending of foreground and background in the virtual-real fusion more realistic.
- FIG. 9 shows a schematic diagram of edge Gaussian attenuation and transparency mapping of a nasal endoscopic image.
- As shown in FIG. 9, let the distance between any point P(i, j) in the image and the image center be denoted r(i, j), where 0 ≤ i ≤ m-1 and 0 ≤ j ≤ n-1.
- The radius of the opaque region in the endoscopic image is set to t, and the maximum radius of the image is R, so the attenuation region is R - t.
- The transparency of the attenuation region is then defined based on this distance, using the Gaussian edge attenuation described below.
- The endoscopic image is processed with a Gaussian edge attenuation algorithm, realizing a seamless, visually smooth transition between the endoscopic image and the CT image. This provides a good match and transition between the structures visible in the endoscopic image and the reconstructed structures, displays more structural information than the peripheral expansion of a traditional endoscopic image, and can show lesion information beyond the endoscopic view in the same view, which significantly benefits the operation.
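One plausible realization of the radial transparency mapping with Gaussian edge attenuation described above is sketched below (the attenuation formula referenced earlier is not reproduced in this text, so the Gaussian profile and its width are assumptions):

```python
import numpy as np

def radial_alpha_mask(m, n, t, R=None, sigma_scale=3.0):
    """Per-pixel opacity for an m x n endoscopic image.

    Pixels within radius t of the image center stay fully opaque; outside,
    opacity falls off with a Gaussian profile towards the maximum radius R.
    Sketch only; the attenuation law in the patent may differ.
    """
    cy, cx = (m - 1) / 2.0, (n - 1) / 2.0
    j, i = np.meshgrid(np.arange(n), np.arange(m))      # i = row, j = column
    r = np.sqrt((i - cy) ** 2 + (j - cx) ** 2)           # distance to center
    if R is None:
        R = r.max()
    sigma = max(R - t, 1e-6) / sigma_scale               # width of falloff band
    alpha = np.ones((m, n))
    outside = r > t
    alpha[outside] = np.exp(-((r[outside] - t) ** 2) / (2.0 * sigma ** 2))
    return alpha                                          # 1 = opaque, ->0 = transparent
```

The mask produced here could serve as the `endo_alpha` argument of the fusion sketch given earlier.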
- FIG. 10 illustrates a virtual-real fusion display interface for nasal endoscopic surgical navigation according to an embodiment of the present invention.
- The display interface shown in FIG. 10 includes a relative position view of the endoscope and the patient's body, an axial positioning section view, a radial positioning section view, and a virtual-real fusion view combining the distance-weighted rendering of the cut cube data with the transparency-mapped endoscopic image.
- Each view in the display interface is updated as the position of the endoscope tip changes. From the display interface shown in FIG. 10, the distance and positional relationship between the endoscope and the target structure inside the patient's body can be observed clearly and intuitively.
- In the virtual-real fusion display view, the distance-weighted rendering of the cut cube data, the endoscopic image with Gaussian edge attenuation and transparency mapping, and the color-mapped key anatomical target information can be observed simultaneously in real time. From the endoscopic image, anatomical structures such as the nasal cavity extend naturally into the virtual scene, and the distance-weighted differentiated display provides effective cues for the anatomy in the virtual scene.
- the axial positioning cutaway view and the radial positioning cutaway view shown in FIG. 10 are orthogonal cross-sectional views obtained by orthogonally cutting and rendering the CT image in step S105.
- The virtual-real fusion display method of the above endoscopic minimally invasive surgery navigation has the following advantages over conventional endoscope navigation display methods:
- This embodiment orthogonally cuts the CT image in directions parallel and perpendicular to the endoscope, effectively avoiding the shortcomings of the conventional three orthogonal views in conveying distance, and also displays the relative position between the surgical instrument (such as the endoscope) and the surgical target, accurately indicating the distance relationship between the instrument and the target. The orthogonal section view is differentially rendered by the distance-weighted rendering method, so that the distance between the endoscope and the target position is displayed more clearly.
- This embodiment displays a relative position view between the endoscope and the patient's body, an orthogonal section view of the CT image referenced to the endoscope, and a virtual-real fusion view of the endoscopic image and the CT image, enabling the doctor to combine the various views to accurately understand the endoscope's position and the intraoperative situation and improving the safety of endoscopic minimally invasive surgery.
- The virtual-real fusion image of this embodiment not only displays the image detected by the endoscope in real time, but also differentially renders the cut cube data using the distance-weighted rendering method, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; this conveys the relative relationships between anatomical structures more effectively, so that doctors can determine occlusion and front-to-back relationships more clearly, providing more accurate assistance for diagnosis and treatment.
- FIG. 11 illustrates an image display device for endoscopic minimally invasive surgery according to an embodiment of the present invention.
- The image display device may include a display screen 10, a processor 20, and a data interface 30.
- The data interface 30 is used to connect the endoscope and the CT device to obtain an endoscopic image and a CT image; the processor 20 is configured to perform the image display method of any of the above embodiments to obtain the virtual-real fusion image; and the display screen 10 is used to display the virtual-real fusion image obtained by the processor 20.
- The processor 20 includes a CPU processing unit 21 and a GPU processing unit 22. The CPU processing unit 21 is mainly used for mathematical calculation and image configuration, such as the registration between the CT image and the patient posture and the 3D segmentation of key anatomical structures. The CPU processing unit is also used for other processing, such as reading endoscopic images and CT images from the data interface 30 and obtaining position information, such as the real-time position of the endoscope and the patient's posture, from the optical tracking device 200.
- The GPU processing unit 22 is configured to perform graphics-related processing, such as cube cutting of the CT image, distance-weighted rendering of the cube data, transparency mapping and edge attenuation of the endoscopic image, and acquisition of the relative position view between the endoscope and the patient's body.
- The processor 20 is further configured to obtain, according to the real-time position of the endoscope, the corresponding virtual-real fusion image, the relative position view of the endoscope and the patient's body, and the orthogonal section views of the CT image cut in directions parallel and perpendicular to the endoscope, and to update them to the display screen 10 for display.
- An embodiment of the present invention further provides an endoscopic minimally invasive surgery navigation system, applicable to, for example but not limited to, nasal and sinus malignant tumor surgery and skull base tumor surgery navigation.
- The surgical navigation system specifically includes a computer device 100 and an optical tracking device 200. The optical tracking device 200 acquires the position of the endoscopic surgical tool 300 and tracks the patient's posture in real time; the computer device 100 acquires the endoscopic image and the CT image, combines them with the position information tracked by the optical tracking device 200, and processes them with the image display method of any of the above embodiments to obtain and display a virtual-real fusion image of the endoscopic image and the CT image.
- The computer device comprises the image display device shown in FIG. 11.
- The computer device in the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, and may of course also be implemented by hardware, but in many cases the former is the better implementation.
- In essence, the technical solution of the embodiments of the present invention may be embodied in the form of a software product: a computer software product that executes the method of any of the above embodiments is stored in a computer storage medium (such as, but not limited to, a ROM/RAM, a magnetic disk, or an optical disk) and includes a number of instructions for causing a terminal device (which may be a computer, a medical device, a server, etc.) to perform the method of any of the embodiments of the present invention.
- the invention provides an image display method, device and system for endoscopic minimally invasive surgery navigation.
- The image display method includes: acquiring a CT image; performing registration between the CT image and the patient posture; acquiring the position and orientation of the endoscope tip in real time; obtaining, from the position and orientation of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body and the distance between the endoscope and the surgical target; orthogonally cutting the registered CT image along directions parallel and perpendicular to the endoscope according to the position and orientation of the endoscope tip and the distance between the endoscope and the surgical target, and differentially rendering the orthogonally cut data with a distance-weighted ray-casting method; and displaying the relative position between the endoscope and the patient together with the orthogonal section views.
- The image display device and the system both adopt this image display method to realize image display during the surgical navigation process.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Endoscopes (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Claims (14)
- An image display method for endoscopic minimally invasive surgery navigation, characterized by comprising the following steps: S1, acquiring a CT image; S2, performing registration between the CT image and the patient posture to obtain a registered CT image; S3, acquiring the position and orientation of the endoscope tip in real time; S4, obtaining the distance between the endoscope and the surgical target according to the position and orientation of the endoscope tip and the registered CT image; S5, orthogonally cutting the registered CT image along directions parallel and perpendicular to the endoscope according to the position and orientation of the endoscope tip and the distance between the endoscope and the surgical target, and performing differentiated rendering on the orthogonally cut data with a distance-weighted ray-casting method to obtain orthogonal section data; S6, displaying the orthogonal section views.
- The image display method for endoscopic minimally invasive surgery navigation according to claim 1, characterized in that in step S4, while the distance between the endoscope and the surgical target is obtained according to the position and orientation of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body is also obtained; and in step S6, while the orthogonal section views are displayed, a view of the relative position between the endoscope and the patient's body is also displayed.
- The image display method for endoscopic minimally invasive surgery navigation according to claim 1, characterized in that before step S2 the method further comprises: S7, performing 3D segmentation of predetermined key anatomical structures in the CT image by region-growing and fast-marching based methods, and labeling the 3D-segmented key anatomical structures.
- The image display method for endoscopic minimally invasive surgery navigation according to claim 3, characterized in that after step S7 the method further comprises: S8, performing color mapping on the key anatomical structures obtained by 3D segmentation.
- The image display method for endoscopic minimally invasive surgery navigation according to any one of claims 1-4, characterized in that after step S4 the method further comprises: S9, acquiring an endoscopic image in real time; S10, performing virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
- The image display method for endoscopic minimally invasive surgery navigation according to claim 5, characterized in that step S10 specifically comprises: S11, performing cube cutting on the CT image according to the position and orientation of the endoscope tip to obtain cut cube data; S12, performing differentiated rendering on the cut cube data with a distance-weighted ray-casting method to obtain rendered cube data; S13, performing virtual-real fusion of the rendered cube data and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
- The image display method for endoscopic minimally invasive surgery navigation according to claim 5, characterized in that before step S5 the method further comprises the step: S14, performing transparency mapping on the endoscopic image based on the distance from the image center, and performing edge attenuation processing on the transparency-mapped endoscopic image, so that the edge-attenuated endoscopic image undergoes virtual-real fusion with the cut cube data.
- The image display method for endoscopic minimally invasive surgery navigation according to claim 7, characterized in that before step S14 the method further comprises: S15, performing distortion correction on the endoscopic image.
- An image display device for endoscopic minimally invasive surgery navigation, characterized by comprising a display screen, a processor and a data interface; wherein the data interface is used to connect an endoscope and a CT device to acquire an endoscopic image and a preoperative CT image; the processor is configured to perform the image display method for endoscopic minimally invasive surgery navigation according to any one of claims 1-8 to obtain the corresponding surgical navigation image; and the display screen is used to display the image obtained by the processor.
- The image display device for endoscopic minimally invasive surgery navigation according to claim 9, characterized in that the processor comprises a CPU processing unit and a GPU processing unit, the CPU processing unit being used for calculation and image configuration and the GPU processing unit being used for image processing.
- The image display device for endoscopic minimally invasive surgery navigation according to claim 9, characterized in that the processor is further configured to acquire, according to the real-time position of the endoscope, the corresponding relative position view of the endoscope and the patient's body, the orthogonal section views, and the virtual-real fusion image, and to update them to the display screen for display.
- An endoscopic minimally invasive surgery navigation system, characterized by comprising a computer device and an optical tracking device, wherein the optical tracking device is used to acquire the position of the endoscopic surgical tool and track the patient posture in real time, and the computer device is used to acquire the endoscopic image and the CT image, combine them with the position information tracked by the optical tracking device, and acquire and display the corresponding surgical navigation image using the image display method according to any one of claims 1-8.
- The endoscopic minimally invasive surgery navigation system according to claim 12, characterized in that the computer device comprises the image display device according to any one of claims 9-11.
- The endoscopic minimally invasive surgery navigation system according to claim 13, characterized in that the endoscopic minimally invasive surgery navigation system is applied to nasal and sinus malignant tumor surgery and skull base tumor surgery navigation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710795485.1 | 2017-09-06 | ||
CN201710795485.1A CN107689045B (zh) | 2017-09-06 | 2017-09-06 | Image display method, device and system for endoscopic minimally invasive surgery navigation
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019047820A1 true WO2019047820A1 (zh) | 2019-03-14 |
Family
ID=61155170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/103929 WO2019047820A1 (zh) | 2018-09-04 | 2019-03-14 | Image display method, device and system for endoscopic minimally invasive surgery navigation
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107689045B (zh) |
WO (1) | WO2019047820A1 (zh) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN107689045B (zh) * | 2017-09-06 | 2021-06-29 | 艾瑞迈迪医疗科技(北京)有限公司 | Image display method, device and system for endoscopic minimally invasive surgery navigation |
- CN109223177A (zh) * | 2018-07-30 | 2019-01-18 | 艾瑞迈迪医疗科技(北京)有限公司 | Image display method and apparatus, computer device and storage medium |
- CN109246419A (zh) * | 2018-09-17 | 2019-01-18 | 广州狄卡视觉科技有限公司 | Surgical microscope dual-channel output micro-pattern stereoscopic imaging display system and method |
- CN109998684A (zh) * | 2019-05-07 | 2019-07-12 | 艾瑞迈迪科技石家庄有限公司 | Guidance and early-warning method and device based on dynamic distance quantization |
- CN110123447A (zh) * | 2019-05-07 | 2019-08-16 | 艾瑞迈迪科技石家庄有限公司 | Guide path planning method and device applied to image guidance |
- CN113243877A (zh) * | 2020-02-13 | 2021-08-13 | 宁波思康鑫电子科技有限公司 | System and method for endoscope positioning |
- CN113317874B (zh) * | 2021-04-30 | 2022-11-29 | 上海友脉科技有限责任公司 | Medical image processing device and medium |
- CN117481753B (zh) * | 2023-12-29 | 2024-04-05 | 北京智愈医疗科技有限公司 | Method and device for monitoring the motion trajectory of a water jet based on an endoscope |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN101049248A (zh) * | 2007-05-18 | 2007-10-10 | 西安工业大学 | Optical-magnetic-electric composite navigation surgical positioning device and method |
- CN102428496A (zh) * | 2009-05-18 | 2012-04-25 | 皇家飞利浦电子股份有限公司 | Marker-free tracking registration and calibration for an EM-tracked endoscope system |
- CN102946784A (zh) * | 2010-06-22 | 2013-02-27 | 皇家飞利浦电子股份有限公司 | System and method for real-time endoscope calibration |
WO2017013521A1 (en) * | 2015-07-23 | 2017-01-26 | Koninklijke Philips N.V. | Endoscope guidance from interactive planar slices of a volume image |
WO2017030913A2 (en) * | 2015-08-14 | 2017-02-23 | Intuitive Surgical Operations, Inc. | Systems and methods of registration for image-guided surgery |
- CN107689045A (zh) * | 2017-09-06 | 2018-02-13 | 艾瑞迈迪医疗科技(北京)有限公司 | Image display method, device and system for endoscopic minimally invasive surgery navigation |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6167296A (en) * | 1996-06-28 | 2000-12-26 | The Board Of Trustees Of The Leland Stanford Junior University | Method for volumetric image navigation |
- CN100586379C (zh) * | 2008-07-04 | 2010-02-03 | 浙江大学 | Computer-simulated calibration biopsy method and device |
EP2523621B1 (en) * | 2010-01-13 | 2016-09-28 | Koninklijke Philips N.V. | Image integration based registration and navigation for endoscopic surgery |
- CN102727309B (zh) * | 2011-04-11 | 2014-11-26 | 上海优益基医疗器械有限公司 | Surgical navigation system combining endoscopic images |
- CN102999902B (zh) * | 2012-11-13 | 2016-12-21 | 上海交通大学医学院附属瑞金医院 | Optical navigation positioning method based on CT registration results |
-
2017
- 2017-09-06 CN CN201710795485.1A patent/CN107689045B/zh active Active
-
2018
- 2018-09-04 WO PCT/CN2018/103929 patent/WO2019047820A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN107689045A (zh) | 2018-02-13 |
CN107689045B (zh) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2019047820A1 (zh) | Image display method, device and system for endoscopic minimally invasive surgery navigation | |
- CN107456278B (zh) | Endoscopic surgery navigation method and system | |
US11883118B2 (en) | Using augmented reality in surgical navigation | |
US9646423B1 (en) | Systems and methods for providing augmented reality in minimally invasive surgery | |
- CN110033465B (zh) | Real-time three-dimensional reconstruction method applied to binocular endoscopic medical images | |
AU2015284430B2 (en) | Dynamic 3D lung map view for tool navigation inside the lung | |
US9364294B2 (en) | Systems, methods, apparatuses, and computer-readable media for image management in image-guided medical procedures | |
US11961193B2 (en) | Method for controlling a display, computer program and mixed reality display device | |
Bernhardt et al. | Automatic localization of endoscope in intraoperative CT image: a simple approach to augmented reality guidance in laparoscopic surgery | |
- CN112641514B (zh) | Minimally invasive interventional navigation system and method | |
- CN107610109A (zh) | Image display method, device and system for endoscopic minimally invasive surgery navigation | |
- JP2011212301A (ja) | Projection image generation apparatus, method, and program | |
EP2901934B1 (en) | Method and device for generating virtual endoscope image, and program | |
- WO2023246521A1 (zh) | Mixed-reality-based lesion localization method, apparatus and electronic device | |
Wang et al. | Autostereoscopic augmented reality visualization for depth perception in endoscopic surgery | |
Zhu et al. | A neuroendoscopic navigation system based on dual-mode augmented reality for minimally invasive surgical treatment of hypertensive intracerebral hemorrhage | |
US10631948B2 (en) | Image alignment device, method, and program | |
- CN115105204A (zh) | Laparoscopic augmented reality fusion display method | |
- CN115375595A (zh) | Image fusion method, apparatus, system, computer device and storage medium | |
- CN111743628A (zh) | Computer-vision-based path planning method for an automatic puncture robotic arm | |
US20220392173A1 (en) | Virtual enhancement of a camera image | |
Fang et al. | An Ultrasound Image Fusion Method for Stereoscopic Laparoscopic Augmented Reality | |
- WO2023162657A1 (ja) | Medical support device, and operating method and operating program for medical support device | |
Kumar et al. | Stereoscopic augmented reality for single camera endoscope using optical tracker: a study on phantom | |
- CN116385513A (zh) | 2D/3D registration method for laparoscopic liver surgery navigation based on a combined PSO-SoftPOSIT algorithm | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18855080 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18855080 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.08.2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18855080 Country of ref document: EP Kind code of ref document: A1 |