WO2019047820A1 - Image display method, device and system for endoscopic minimally invasive surgery navigation - Google Patents

Image display method, device and system for endoscopic minimally invasive surgery navigation Download PDF

Info

Publication number
WO2019047820A1
WO2019047820A1 (PCT/CN2018/103929; CN2018103929W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscope
endoscopic
minimally invasive
invasive surgery
Prior art date
Application number
PCT/CN2018/103929
Other languages
English (en)
French (fr)
Inventor
杨峰
Original Assignee
艾瑞迈迪科技石家庄有限公司
艾瑞迈迪医疗科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 艾瑞迈迪科技石家庄有限公司 and 艾瑞迈迪医疗科技(北京)有限公司
Publication of WO2019047820A1 publication Critical patent/WO2019047820A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Definitions

  • The present invention relates to the field of medical technology, and in particular to an image display method, device and system for endoscopic minimally invasive surgery navigation.
  • Skull base tumors occur at deep locations, and their adjacent structures are complex and difficult to distinguish.
  • The diagnosis and treatment process involves multidisciplinary techniques such as neurosurgery, otolaryngology and head and neck surgery, which makes complete tumor resection difficult.
  • The endoscopic minimally invasive technique uses a simple operative approach and allows quick postoperative recovery. Endoscopic image guidance avoids the damage to facial skin structures caused by the surgical approach, reducing the probability of various complications.
  • The present invention provides an image display method, device and system for endoscopic minimally invasive surgery navigation.
  • The invention provides an image display method for endoscopic minimally invasive surgery navigation, comprising the following steps:
  • In step S4, according to the position and direction of the endoscope tip and the registered CT image, the distance between the endoscope and the surgical target is obtained, and the relative position between the endoscope and the patient's body is also obtained.
  • In step S6, the orthogonal section views are displayed together with a view of the relative position between the endoscope and the patient's body.
  • Before step S2, the method further includes: S7, performing 3D segmentation of predetermined key anatomical structures in the CT image based on region growing and fast marching methods, and labeling the segmented key anatomical structures.
  • After step S7, the method further includes: S8, performing color mapping on the key anatomical structures obtained by 3D segmentation.
  • After step S4, the method further includes: S9, acquiring the endoscopic image in real time; and S10, performing real-time virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
  • Step S10 specifically includes: S11, cutting a cube from the CT image according to the position and direction of the endoscope tip to obtain cut cube data; S12, performing differential rendering on the cut cube data with the distance-weighted ray casting method to obtain rendered cube data; and S13, performing virtual-real fusion of the rendered cube data and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
  • Before step S5, the method further includes: S14, performing transparency mapping on the endoscopic image based on distance from the image center and applying edge attenuation to the transparency-mapped image, so that the edge-attenuated endoscopic image is fused with the cut cube data.
  • Before step S14, the method further includes: S15, performing distortion correction on the endoscopic image.
  • The invention provides an image display device for endoscopic minimally invasive surgery navigation, comprising a display screen, a processor and a data interface, wherein the data interface is used to connect an endoscope and a CT device to acquire endoscopic images and preoperative CT images; the processor is configured to execute any of the above image display methods for endoscopic minimally invasive surgery navigation to obtain the corresponding surgical navigation images; and the display screen is used to display the images obtained by the processor.
  • The processor comprises a CPU processing unit and a GPU processing unit, wherein the CPU processing unit is used for computation and image configuration, and the GPU processing unit is used for image processing.
  • The processor is further configured to acquire, according to the real-time position of the endoscope, the corresponding view of the relative position between the endoscope and the patient's body, the orthogonal section views, and the virtual-real fusion image, and update them to the display screen for display.
  • The invention provides an endoscopic minimally invasive surgery navigation system comprising a computer device and an optical tracking device, wherein the optical tracking device is used to acquire the position of the endoscopic surgical tool in real time and track the patient posture, and the computer device is used to acquire the endoscopic images and CT images and, combining the position information tracked by the optical tracking device and using the image display method of any of the above embodiments, to acquire and display the corresponding surgical navigation images.
  • The computer device comprises the image display device of any of the above embodiments.
  • The endoscopic minimally invasive surgery navigation system is applied to navigation of nasal and sinus malignant tumor surgery and skull base tumor surgery.
  • Compared with conventional endoscopic surgical navigation display methods, the above image display method for endoscopic minimally invasive surgery navigation has the following advantages:
  • The present invention orthogonally sections the CT image along directions parallel or perpendicular to the endoscope, effectively avoiding the display shortcomings of the three views with respect to distance, and also displays the relative position between the surgical instrument (such as the endoscope) and the patient's body, accurately indicating the distance relationship between the instrument and the body; in addition, the orthogonal section views are differentially rendered by the distance-weighted rendering method, so that the distance between the endoscope and the target position is displayed more clearly.
  • The present invention realizes the display of a view of the relative position between the endoscope and the patient's body, orthogonal section views of the CT image referenced to the endoscope, and a virtual-real fusion display view of the endoscopic image and the CT image, enabling the doctor to combine the views to accurately understand the endoscope position and the intraoperative progress, improving the safety of endoscopic minimally invasive surgery.
  • The virtual-real fusion image of the present invention not only displays the image detected by the endoscope in real time, but also uses the distance-weighted rendering method to differentially render the cut cube data, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; it more effectively conveys the relative relationships between anatomical structures, so that the doctor can judge occlusion and front-back relationships of anatomical structures more clearly, giving the doctor more accurate auxiliary diagnosis and treatment capability.
  • FIG. 1 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;
  • FIG. 2 is a schematic view of tracking the position of an endoscopic surgical tool with an optical tracking device during endoscopic surgical navigation according to the present invention;
  • FIG. 3 is a schematic view of the orthogonal sectioning of CT images performed during endoscopic surgical navigation according to the present invention;
  • FIG. 4 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to another embodiment of the present invention;
  • FIG. 5 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to still another embodiment of the present invention;
  • FIG. 6 is a detailed flow chart of the virtual-real fusion processing of an endoscopic image and a CT image according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of cube cutting of a CT image according to the endoscope position during endoscopic surgical navigation according to an embodiment of the present invention;
  • FIG. 8 is a schematic flow chart of processing the endoscopic image in an image display method for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of edge Gaussian attenuation and transparency mapping of an endoscopic image according to an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of an image display interface for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;
  • FIG. 11 is a functional block diagram of an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;
  • FIG. 12 is a detailed functional block diagram of the processor in an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;
  • FIG. 13 is a schematic structural view of an endoscopic minimally invasive surgery navigation system according to an embodiment of the present invention.
  • The present invention provides an image display method for use during endoscopic minimally invasive surgery navigation; such surgery includes, for example but not limited to, nasal and sinus malignant tumor surgery and skull base tumor surgery, and may of course also include other surgery performed with an endoscope.
  • FIG. 1 illustrates an image display method during endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
  • The image display method includes the following steps:
  • A preoperative scan of a predetermined part of the patient is performed with a CT device to obtain a preoperative CT image, which is a three-dimensional view.
  • The predetermined part is, for example, the human head.
  • S102: performing registration between the CT image and the patient posture to obtain the registered CT image.
  • The position in the CT image corresponding to a predetermined key anatomical structure is determined and used as a reference point.
  • Based on this reference point, the optical tracking device locates the corresponding marker point on the patient's body, and the 3PCHM (3-Points Convex Hull Matching) rapid registration method is then used to compute the rotation matrix and translation vector between the CT image and the patient posture and to obtain the transformed CT image.
  • The endoscope tip is the end of the endoscope that extends into the patient's body, i.e., the endoscope's detection lens. Since the tip extends into the patient, its position and direction are difficult to obtain directly, so they are derived by conversion from the position of the endoscopic surgical tool located outside the patient's body. As shown in FIG. 2, the surgical tool 300 of the endoscope carries four marker points, which are tracked and monitored by the optical tracking device 200 to acquire their position information.
  • The coordinate transformation between the two volume data sets can be registered by the formula $p_{CT} = R \, p_{track} + t$, where $p_{CT}$ is a point in the CT data coordinate system, $p_{track}$ the corresponding point in the optical tracking device coordinate system, and $R$ and $t$ the rotation matrix and translation vector.
  • From the position information of the four marker points, $R$ and $t$ are computed with the DLT (Direct Linear Transform) algorithm.
  • The position and direction of the endoscope tip are acquired in real time, so that position changes of the endoscope are tracked promptly, facilitating subsequent image updates.
  • The distance between the endoscope and the surgical target (for example, the tumor to be resected) can be obtained from the real-time endoscope position provided by the optical tracking device.
  • The relative positions of the surgical instruments and the surgical target can also be obtained.
  • The registered CT image is orthogonally sectioned along directions parallel and perpendicular to the endoscope.
  • The distance between the endoscope and the target position is displayed on the section planes, showing the pose of the endoscope and the surgical tool more effectively and intuitively.
  • The distance-weighted ray casting method renders the orthogonally sectioned data differentially, so that the distance between the endoscope and the target position is displayed more clearly.
  • Starting from the endoscope tip, the parallel plane (plane UW) along the positive direction of the endoscope (direction W) is taken as its pointing direction, and the orthogonal plane (plane VW) along the vertical direction (direction V), defined under the right-handed coordinate system, is taken as the orthogonal direction.
  • As the distance d between the endoscope and the target increases, the sampling factor of each sampling point on each ray takes a transparency value corresponding to the d value.
  • The cube data are then rendered differentially according to the corresponding transparency value.
  • In the sampling weighting factor, the farther a voxel is from the endoscope tip, the smaller its absorption contribution in the ray casting function, so that anatomical structures at different positions in the CT image data are rendered with visible differences.
  • The CT image is orthogonally sectioned along directions parallel or perpendicular to the endoscope, effectively avoiding the display shortcomings of the three views with respect to distance, and the relative position between the surgical instrument (such as the endoscope) and the surgical target is displayed, accurately indicating the distance relationship between the instrument and the body.
  • The orthogonal section views are rendered differentially by the distance-based weighted rendering method, so that the distance between the endoscope and the target position is displayed more clearly.
  • A view of the relative position between the endoscope and the patient's body is also displayed.
  • Before step S2, the method further includes:
  • performing 3D segmentation of the predetermined key anatomical structures based on region growing and fast marching methods, and labeling the segmented key anatomical structures.
  • The predetermined key anatomical structures are determined by the specific surgical site, for example blood vessels, tumors and nerves; the doctor determines their specific positions in the CT image.
  • After the rendering of step S105, the 3D-segmented key anatomical structures are displayed differentially in the orthogonal section views, which makes intraoperative observation easier for the doctor and allows the surgical target, such as the tumor to be resected, to be determined quickly.
  • Color mapping of the key anatomical structures obtained by 3D segmentation makes them more clearly distinguishable in the image, speeds up the virtual-real fusion processing, and guarantees the accuracy of distance perception during the fusion.
  • Data farther from the endoscope tip also undergo color attenuation in rendering, i.e., the farther a structure is, the less easily it is observed.
  • This more effectively conveys the relative relationships between key anatomical structures, lets the doctor judge occlusion and front-back relationships between them more clearly, and gives the doctor more accurate auxiliary diagnosis and treatment capability.
  • After step S104, the method further includes:
  • The detection lens of the endoscope extends into the patient to obtain the endoscopic image.
  • The endoscopic image is acquired in real time, since the endoscope view changes during surgery.
  • S110: performing real-time virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
  • This embodiment of the invention realizes the display of a view of the relative position between the endoscope and the patient's body, orthogonal section views of the CT image referenced to the endoscope, and a virtual-real fusion display view of the endoscopic image and the CT image.
  • This display allows the doctor to combine the views to accurately understand the endoscope position and the intraoperative progress, improving the safety of endoscopic minimally invasive surgery.
  • Step S110 includes:
  • The cutting cube parameters are determined, and the registered CT image is cut by the cube constructed from those parameters to obtain the cube data.
  • The cutting cube parameters are specifically: in the space O_CT formed by the CT image, starting from the focal plane O_V of the endoscope, one edge of the cube is formed along the endoscope axis as the depth direction with length d; meanwhile, the other two edges m and n of the cube are set according to the size of the endoscope display range.
  • The cut cube data are obtained by cutting the registered CT image by the constructed cube.
  • The CT image is cut by the cube constructed from the cube parameters shown in FIG. 7.
  • The distance-weighted ray casting method is used to render the cut cube data differentially. Specifically, the distance from the front surface of the data cube (i.e., the focal plane O_V of the endoscope in FIG. 7) to its rear surface is d; as the distance increases (i.e., as the value of d grows), the sampling factor of each sampling point on each ray takes a transparency value corresponding to d, and the cube data are rendered differentially according to this transparency value. In the mapping, m, n and d are the edge lengths of the data cube, and (x, y, z) is the coordinate of the sampling position.
  • S113: performing real-time fusion of the rendered cube data with the endoscopic image to obtain a virtual-real fusion image, and displaying it.
  • After the cube data are obtained and rendered, they undergo virtual-real fusion with the endoscopic image obtained in step S109 to obtain the virtual-real fusion image.
  • The virtual-real fusion image of this embodiment not only displays the image detected by the endoscope in real time, but also uses the distance-weighted rendering method to render the cut cube data differentially, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; it more effectively conveys the relative relationships between anatomical structures, so that the doctor can judge occlusion and front-back relationships of anatomical structures more clearly, giving the doctor more accurate auxiliary diagnosis and treatment capability.
  • Step S114: performing distortion correction on the endoscopic image.
  • The distortion of the endoscopic image is corrected so that an endoscopic image with severe radial distortion can be recovered quickly, eliminating the mismatch with the real scene that image distortion would otherwise cause in the virtual-real fusion display.
  • Step S115: performing transparency mapping on the endoscopic image based on distance from the image center, and applying edge attenuation to the transparency-mapped endoscopic image.
  • The distortion-corrected endoscopic image is transparency-mapped based on distance from the image center.
  • Taking the image center as the origin and the radius as the transparency mapping parameter, the farther a pixel is from the image center, the higher its transparency, i.e., the more transparent it is. In this way the image of the endoscope's central region is preserved, so that layered rendering becomes possible when the edge of the endoscopic image is attenuated, which effectively improves the immersion of the fused display and makes the fusion of foreground and background in the virtual-real fusion more realistic.
  • FIG. 9 shows a schematic diagram of edge Gaussian attenuation and transparency mapping of a nasal endoscopic image.
  • As shown in FIG. 9, let the distance between any point P(i, j) in the picture and the image center be r(i, j), where 0 < i ≤ m-1 and 0 < j ≤ n-1.
  • The radius of the opaque region in the endoscopic image can be set to t, and the maximum image radius is R; the attenuation region is thus R - t.
  • The transparency of the attenuation region can then be defined accordingly.
  • The endoscopic image is processed with a Gaussian edge attenuation algorithm, achieving a seamless, visually smooth transition between the endoscopic image and the CT image, with good matching and transition between the structures visible in the endoscopic image and the reconstructed structures; compared with the peripheral expansion of a conventional endoscopic image, more structural information can be displayed, and lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect during surgery.
  • FIG. 10 illustrates a virtual-real fusion display interface for nasal endoscopic surgical navigation according to an embodiment of the present invention.
  • The display interface shown in FIG. 10 includes a view of the relative position between the endoscope and the patient's body, an axial positioning section view, a radial positioning section view, and a virtual-real fusion view combining the distance-weighted differentially rendered cut cube data with the transparency-mapped, edge-attenuated endoscopic image.
  • Each view in the display interface is updated as the position of the endoscope tip changes.
  • Based on the display interface shown in FIG. 10, the distance and positional relationship between the endoscope and the target structure in the patient's body can be observed clearly and intuitively.
  • In the virtual-real fusion display view, the real-time cut cube data rendered with distance weighting, the endoscopic image with edge Gaussian attenuation and transparency mapping, and the color-mapped key anatomical target information can be observed simultaneously.
  • Anatomical structures in the endoscopic image, such as the nasal cavity, extend naturally into the virtual scene, and the distance-weighted differential display provides effective prompts for the anatomy in the virtual scene.
  • The axial positioning section view and the radial positioning section view shown in FIG. 10 are the orthogonal section views obtained by orthogonally sectioning and rendering the CT image in step S105.
  • Compared with conventional endoscope navigation display methods, the above virtual-real fusion display method for endoscopic minimally invasive surgery navigation has the following advantages:
  • The embodiment of the present invention orthogonally sections the CT image along directions parallel or perpendicular to the endoscope, effectively avoiding the display shortcomings of the three views with respect to distance, and also displays the relative position between the surgical instrument (such as the endoscope) and the surgical target, accurately indicating the distance relationship between the instrument and the surgical target.
  • The orthogonal section views are differentially rendered by the distance-weighted rendering method, so that the distance between the endoscope and the target position is displayed more clearly.
  • The embodiment of the present invention realizes the display of a view of the relative position between the endoscope and the patient's body, orthogonal section views of the CT image referenced to the endoscope, and a virtual-real fusion display view of the endoscopic image and the CT image, enabling the doctor to combine the views to accurately understand the endoscope position and the intraoperative progress, improving the safety of endoscopic minimally invasive surgery.
  • The virtual-real fusion image of this embodiment not only displays the image detected by the endoscope in real time, but also uses the distance-weighted rendering method to render the cut cube data differentially, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; it more effectively conveys the relative relationships between anatomical structures, so that the doctor can judge occlusion and front-back relationships of anatomical structures more clearly, giving the doctor more accurate auxiliary diagnosis and treatment capability.
  • FIG. 11 illustrates an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention.
  • The virtual-real fusion device may include a display screen 10, a processor 20 and a data interface 30.
  • The data interface is used to connect the endoscope and the CT device to acquire the endoscopic image and the CT image; the processor 20 is configured to execute the minimally invasive surgery navigation virtual-real fusion display method of any of the above embodiments to obtain the virtual-real fusion image; and the display screen 10 is used to display the virtual-real fusion image obtained by the processor 20.
  • The processor 20 includes a CPU processing unit 21 and a GPU processing unit 22, wherein the CPU processing unit 21 is mainly used to perform functions such as mathematical computation and image configuration, for example the registration of the CT image with the patient posture and the 3D segmentation of key anatomical structures.
  • The CPU processing unit is also used for other processing, such as reading the endoscopic images and CT images from the data interface 30, and obtaining position information such as the real-time position of the endoscope and the patient posture from the optical tracking device 200.
  • The GPU processing unit 22 is configured to perform functions related to graphics processing, such as cube cutting of CT images, distance-weighted cube data rendering, transparency mapping and edge attenuation of endoscopic images, acquisition of the relative position between the endoscope and the patient's body, and orthogonal sectioning of CT images.
  • The processor 20 is further configured to: according to the real-time position of the endoscope, obtain the corresponding virtual-real fusion image, the view of the relative position between the endoscope and the patient's body, and the section views of the CT image orthogonally sectioned along directions parallel and perpendicular to the endoscope, and update them to the display screen 10 for display.
  • An embodiment of the present invention further provides an endoscopic minimally invasive surgery navigation system, applied for example, but not limited to, to navigation of nasal and sinus malignant tumor surgery and skull base tumor surgery.
  • The surgical navigation system specifically includes a computer device 100 and an optical tracking device 200; the optical tracking device 200 is used to acquire the position of the endoscopic surgical tool 300 in real time and to track the patient posture, and the computer device 100 is used to acquire the endoscopic image and the CT image and, combining the position information tracked by the optical tracking device 200, to process them with the image display method of any of the above embodiments to obtain and display a virtual-real fusion image of the endoscopic image and the CT image.
  • The computer device comprises the image display device shown in FIG. 12.
  • The computing device in the above embodiments may be implemented by software plus the necessary general hardware platform, and may of course also be implemented by hardware, but in many cases the former is the better implementation.
  • The technical solution of the embodiments of the present invention may essentially be embodied in the form of a software product, that is, the computing method of any of the above embodiments is executed through a series of program instructions: a computer software product executing the method is stored in a computer storage medium (such as, but not limited to, a ROM/RAM, a magnetic disk or an optical disk) and includes a number of instructions for causing a terminal device (which may be a computer, a medical device, a server, etc.) to perform the computing method of any embodiment of the present invention.
  • The invention provides an image display method, device and system for endoscopic minimally invasive surgery navigation.
  • The image display method includes: acquiring a CT image; performing registration between the CT image and the patient posture; acquiring the position and direction of the endoscope tip in real time; obtaining, according to the position and direction of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body and the distance between the endoscope and the surgical target; according to the position and direction of the endoscope tip and the distance between the endoscope and the surgical target, orthogonally sectioning the registered CT image along directions parallel and perpendicular to the endoscope, and differentially rendering the orthogonally sectioned data with the distance-weighted ray casting method; and displaying a view of the relative position between the endoscope and the patient's body as well as the orthogonal section views.
  • The image display device and system both adopt this image display method to realize image display during surgical navigation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Endoscopes (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

An image display method, device and system for endoscopic minimally invasive surgery navigation. The image display method includes: acquiring a CT image (S101); performing registration between the CT image and the patient posture (S102); acquiring the position and direction of the endoscope tip in real time (S103); obtaining, according to the position and direction of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body and the distance between the endoscope and the surgical target (S104); according to the position and direction of the endoscope tip and the distance between the endoscope and the surgical target, orthogonally sectioning the registered CT image along directions parallel and perpendicular to the endoscope and differentially rendering the orthogonally sectioned data with a distance-weighted ray casting method (S105); and displaying a view of the relative position between the endoscope and the patient's body as well as the orthogonal section views (S106). The image display device and system both adopt this image display method to realize image display during surgical navigation.

Description

Image display method, device and system for endoscopic minimally invasive surgery navigation

Technical Field

The present invention relates to the field of medical technology, and in particular to an image display method, device and system for endoscopic minimally invasive surgery navigation.

Background Art

Skull base tumors occur at deep locations, their adjacent structures are complex and hard to distinguish, and their diagnosis and treatment involve multidisciplinary techniques spanning neurosurgery, otolaryngology, and head and neck surgery, which makes complete tumor resection difficult. After more than a century of development, the diagnosis and treatment of skull base tumors has progressed from naked-eye craniotomy to endoscopic minimally invasive surgery. The endoscopic minimally invasive technique uses a simple operative approach and allows rapid postoperative recovery, and endoscopic image guidance avoids the damage that the surgical approach would otherwise cause to facial skin structures, reducing the probability of various complications.

At present, conventional surgery for nasal and sinus malignant tumors and for skull base tumors uses plain nasal endoscopic video navigation; surgical navigation systems guided by medical CT data can generally provide fairly accurate three-view information and display the endoscope section within those three views. However, the auxiliary information that the three views of CT data provide to the doctor during surgical navigation is still limited and needs improvement.
Summary of the Invention

To solve the above technical problems, the present invention provides an image display method, device and system for endoscopic minimally invasive surgery navigation.

The present invention provides an image display method for endoscopic minimally invasive surgery navigation, comprising the following steps:

S1, acquiring a CT image;

S2, performing registration between the CT image and the patient posture to obtain a registered CT image;

S3, acquiring the position and direction of the endoscope tip in real time;

S4, obtaining the distance between the endoscope and the surgical target according to the position and direction of the endoscope tip and the registered CT image;

S5, according to the position and direction of the endoscope tip and the distance between the endoscope and the surgical target, orthogonally sectioning the registered CT image along directions parallel and perpendicular to the endoscope, and performing differential rendering on the orthogonally sectioned data with a distance-weighted ray casting method to obtain orthogonal section data;

S6, displaying the orthogonal section views.
Optionally, in step S4, while the distance between the endoscope and the surgical target is obtained according to the position and direction of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body is also obtained;

and in step S6, a view of the relative position between the endoscope and the patient's body is displayed together with the orthogonal section views.

Optionally, before step S2 the method further includes:

S7, performing 3D segmentation of predetermined key anatomical structures in the CT image based on region growing and fast marching methods, and labeling the segmented key anatomical structures.

Optionally, after step S7 the method further includes:

S8, performing color mapping on the key anatomical structures obtained by 3D segmentation.

Optionally, after step S4 the method further includes:

S9, acquiring the endoscopic image in real time;

S10, performing virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.

Optionally, step S10 specifically includes:

S11, cutting a cube from the CT image according to the position and direction of the endoscope tip to obtain cut cube data;

S12, performing differential rendering on the cut cube data with a distance-weighted ray casting method to obtain rendered cube data;

S13, performing virtual-real fusion of the rendered cube data and the endoscopic image to obtain a virtual-real fusion image, and displaying it.

Optionally, before step S5 the method further includes the step of:

S14, performing transparency mapping on the endoscopic image based on distance from the image center, and applying edge attenuation to the transparency-mapped endoscopic image, so that the edge-attenuated endoscopic image is fused with the cut cube data.

Optionally, before step S10 the method further includes:

S15, performing distortion correction on the endoscopic image.
The present invention provides an image display device for endoscopic minimally invasive surgery navigation, comprising a display screen, a processor and a data interface, wherein the data interface is used to connect an endoscope and a CT device to acquire endoscopic images and preoperative CT images; the processor is configured to execute any of the above image display methods for endoscopic minimally invasive surgery navigation to obtain the corresponding surgical navigation images; and the display screen is used to display the images obtained by the processor.

Optionally, the processor includes a CPU processing unit and a GPU processing unit, wherein the CPU processing unit is used for computation and image configuration, and the GPU processing unit is used for image processing.

Optionally, the processor is further configured to acquire, according to the real-time position of the endoscope, the corresponding view of the relative position between the endoscope and the patient's body, the orthogonal section views, and the virtual-real fusion image, and update them to the display screen for display.

The present invention provides an endoscopic minimally invasive surgery navigation system, comprising a computer device and an optical tracking device, wherein the optical tracking device is used to acquire the position of the endoscopic surgical tool in real time and to track the patient posture, and the computer device is used to acquire endoscopic images and CT images and, combining the position information tracked by the optical tracking device and using the image display method of any of the above embodiments, to acquire and display the corresponding surgical navigation images.

Optionally, the computer device includes the image display device of any of the above embodiments.

Optionally, the endoscopic minimally invasive surgery navigation system is applied to navigation of nasal and sinus malignant tumor surgery and skull base tumor surgery.
In summary, compared with conventional endoscopic surgical navigation display methods, the above image display method for endoscopic minimally invasive surgery navigation has the following advantages:

1) The present invention orthogonally sections the CT image along directions parallel or perpendicular to the endoscope, effectively avoiding the display shortcomings of the three views with respect to distance; it displays the relative position between the surgical instrument (for example, the endoscope) and the patient's body, accurately indicating the distance relationship between instrument and body; in addition, the distance-weighted rendering method renders the orthogonal section views differentially, so that the distance between the endoscope and the target position is displayed more clearly;

2) The present invention realizes the display of a view of the relative position between the endoscope and the patient's body, orthogonal section views of the CT image referenced to the endoscope, and a virtual-real fusion display view of the endoscopic image and the CT image, enabling the doctor to combine the views to accurately understand the endoscope position and the intraoperative progress, improving the safety of endoscopic minimally invasive surgery;

3) The virtual-real fusion image of the present invention not only displays the image detected by the endoscope in real time, but also uses the distance-weighted rendering method to render the cut cube data differentially, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; it more effectively conveys the relative relationships between anatomical structures, so that the doctor can judge occlusion and front-back relationships of anatomical structures more clearly, giving the doctor more accurate auxiliary diagnosis and treatment capability;

4) The Gaussian edge attenuation algorithm processes the endoscopic image in real time, achieving a seamless, visually smooth transition in the virtual-real fusion of the endoscopic image and the CT image, with good matching and transition between the structures visible to the naked eye in the endoscopic image and the reconstructed structures; compared with the peripheral expansion of a conventional endoscopic image, more structural information can be displayed, and lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect of real-time images in surgical navigation;

5) Virtual-real fusion with layered rendering in the endoscope's field-of-view region achieves augmented reality guidance of the observed region; moreover, a positioning cube section is used for the display and rendering region, and its position changes with the position and direction of the endoscope, improving both distance perception and scene immersion.
Brief Description of the Drawings

FIG. 1 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;

FIG. 2 is a schematic view of tracking the position of an endoscopic surgical tool with an optical tracking device during endoscopic surgical navigation according to the present invention;

FIG. 3 is a schematic view of the orthogonal sectioning of a CT image during endoscopic surgical navigation according to the present invention;

FIG. 4 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to another embodiment of the present invention;

FIG. 5 is a schematic flow chart of an image display method for endoscopic minimally invasive surgery navigation according to still another embodiment of the present invention;

FIG. 6 is a detailed flow chart of the virtual-real fusion processing of an endoscopic image and a CT image according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of cube cutting of a CT image according to the endoscope position during endoscopic surgical navigation according to an embodiment of the present invention;

FIG. 8 is a schematic flow chart of the processing of the endoscopic image in an image display method for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;

FIG. 9 is a schematic diagram of edge Gaussian attenuation and transparency mapping of an endoscopic image according to an embodiment of the present invention;

FIG. 10 is a schematic diagram of an image display interface for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;

FIG. 11 is a functional block diagram of an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;

FIG. 12 is a detailed functional block diagram of the processor in an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention;

FIG. 13 is a schematic structural view of an endoscopic minimally invasive surgery navigation system according to an embodiment of the present invention.
Detailed Description

Embodiments of the present invention are described below with reference to the drawings. Elements and features described in one drawing or embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for the sake of clarity, representations and descriptions of components and processing known to those of ordinary skill in the art but unrelated to the present invention are omitted from the drawings and the description.

The present invention is further described below with reference to the drawings.

The present invention provides an image display method for use during endoscopic minimally invasive surgery navigation. Such surgery includes, for example but not limited to, nasal and sinus malignant tumor surgery and skull base tumor surgery, and may of course also include other surgery performed with an endoscope.

Specifically, referring to FIG. 1, FIG. 1 shows an image display method during endoscopic minimally invasive surgery navigation according to an embodiment of the present invention. The image display method includes the following steps:

S101, acquiring a CT image;

A CT device performs a preoperative scan of a predetermined part of the patient to acquire a preoperative CT image, which is a three-dimensional view. The predetermined part is, for example, the human head.
S102, performing registration between the CT image and the patient posture to obtain the registered CT image;

Specifically, the position in the CT image corresponding to a predetermined key anatomical structure is determined and used as a reference point. Based on this reference point, the optical tracking device locates the marker point on the patient's body corresponding to the reference point, and the 3PCHM (3-Points Convex Hull Matching) rapid registration method is then used to compute the rotation matrix and translation vector between the CT image and the patient posture and to obtain the transformed CT image.
S103, acquiring the position and direction of the endoscope tip;

The endoscope tip is the end of the endoscope that extends into the patient's body, i.e., the endoscope's detection lens. Since the tip extends into the patient, its position and direction are difficult to obtain directly, so they are derived by conversion from the position of the endoscopic surgical tool located outside the patient's body. As shown in FIG. 2, the surgical tool 300 of the endoscope carries four marker points, which are tracked and monitored by the optical tracking device 200 to acquire their position information. The coordinate transformation between the two volume data sets can be registered by the following formula:

$$p_{CT} = R \, p_{track} + t$$

where $p_{CT}$ denotes the coordinates of a point in the CT data coordinate system, $p_{track}$ the coordinates of the corresponding point in the optical tracking device coordinate system, and $R$ and $t$ the rotation matrix and the translation vector, respectively. From the position information of the four marker points, $R$ and $t$ can then be computed with the DLT (Direct Linear Transform) algorithm.

In addition, the position and direction of the endoscope tip are acquired in real time, so that position changes of the endoscope are tracked promptly, facilitating the subsequent image updates.
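The patent names 3PCHM and DLT but gives no implementation details; as a minimal sketch under that gap, the marker-based alignment can be illustrated with the standard SVD-based least-squares rigid fit (the function name and the use of an SVD solver are assumptions, not taken from the source):

    import numpy as np

    def rigid_fit(p_track: np.ndarray, p_ct: np.ndarray):
        """Least-squares rigid transform so that p_ct ≈ R @ p_track + t.

        p_track, p_ct: (N, 3) arrays of corresponding points (N >= 3),
        e.g. the four tracked marker points and their CT-space positions.
        """
        c_track = p_track.mean(axis=0)                 # centroids
        c_ct = p_ct.mean(axis=0)
        H = (p_track - c_track).T @ (p_ct - c_ct)      # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = c_ct - R @ c_track
        return R, t

    # Usage: map any tracked point (e.g. the converted endoscope tip) into CT space.
    # R, t = rigid_fit(markers_track, markers_ct)
    # tip_ct = R @ tip_track + t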
S104, obtaining the distance between the endoscope and the surgical target according to the registered CT image and the position and direction of the endoscope tip;

Since the patient posture and the CT image have been unified into the same coordinate space after registration, the distance between the endoscope and the surgical target (for example, the tumor to be resected) can be obtained from the real-time endoscope position provided by the optical tracking device, so that the relative position of the surgical instrument and the surgical target can be displayed. Further, once the distance between the endoscope and the surgical target is obtained, the relative position between the endoscope and the patient can also be obtained.
S105, according to the position and direction of the endoscope tip and the distance between the endoscope and the surgical target, orthogonally sectioning the registered CT image along directions parallel and perpendicular to the endoscope, and performing differential rendering on the orthogonally sectioned data with the distance-weighted ray casting method to obtain orthogonal section data;

By orthogonally sectioning the CT image with the endoscope as the reference, the distance between the endoscope and the target position is displayed on the section planes, showing the pose of the endoscope and the surgical tool more effectively and intuitively. In addition, the distance-weighted ray casting method renders the orthogonally sectioned data differentially, so that the distance between the endoscope and the target position is displayed more clearly.

Specifically, as shown in FIG. 3, starting from the position of the endoscope tip, the parallel plane (plane UW) along the positive direction of the endoscope (direction W) is taken as its pointing direction, and the orthogonal plane (plane VW) along the vertical direction (direction V), obtained under the right-handed coordinate system, is taken as the orthogonal direction. During orthogonal sectioning, the whole CT image is sectioned along the pointing direction and along the orthogonal direction to obtain the section image data. Let the distance from the endoscope front end to the surgical target be d; as this distance increases (i.e., as the value of d grows), the sampling factor of each sampling point on each ray takes a transparency value corresponding to d, and the volume data are rendered differentially according to this transparency value. In the ray-casting sampling weighting factor at any point of the CT image, the farther a voxel is from the endoscope tip, the smaller its absorption contribution in the ray casting function, so that anatomical structures at different positions in the CT image data are rendered with visible differences.
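As a minimal sketch of the sectioning geometry only (the helper-vector choice, plane size and sampling spacing are assumptions; the source defines just the UW and VW planes under a right-handed frame), the two orthogonal planes could be sampled from the registered CT volume like this:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def endoscope_frame(w_dir: np.ndarray):
        """Right-handed orthonormal frame (u, v, w) with w along the endoscope axis."""
        w = w_dir / np.linalg.norm(w_dir)
        helper = np.array([0.0, 0.0, 1.0])
        if abs(w @ helper) > 0.99:                 # w nearly parallel to the helper
            helper = np.array([0.0, 1.0, 0.0])
        u = np.cross(helper, w)
        u /= np.linalg.norm(u)
        v = np.cross(w, u)                         # completes the right-handed frame
        return u, v, w

    def oblique_slice(volume, origin, axis_a, axis_b, size=256, spacing=1.0):
        """Sample a size x size plane spanned by axis_a and axis_b through
        `origin` (all in voxel coordinates) from a 3-D CT volume."""
        s = (np.arange(size) - size / 2) * spacing
        a, b = np.meshgrid(s, s, indexing="ij")
        pts = origin + a[..., None] * axis_a + b[..., None] * axis_b
        return map_coordinates(volume, pts.transpose(2, 0, 1), order=1, mode="nearest")

    # u, v, w = endoscope_frame(tip_direction)
    # plane_uw = oblique_slice(ct, tip_position, u, w)   # pointing section (plane UW)
    # plane_vw = oblique_slice(ct, tip_position, v, w)   # orthogonal section (plane VW)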
S106, displaying the orthogonal section views.

In this embodiment of the invention, the CT image is orthogonally sectioned along directions parallel or perpendicular to the endoscope, effectively avoiding the display shortcomings of the three views with respect to distance; the relative position between the surgical instrument (for example, the endoscope) and the surgical target is displayed, accurately indicating the distance relationship between the instrument and the body. In addition, the distance-weighted rendering method renders the orthogonal section views differentially, so that the distance between the endoscope and the target position is displayed more clearly. Further, while the orthogonal section views are displayed, a view of the relative position between the endoscope and the patient's body is also displayed.
Further, as shown in FIG. 4, before the above step S102 the method further includes:

S107, performing 3D segmentation of predetermined key anatomical structures in the CT image based on region growing and fast marching methods, and labeling the segmented key anatomical structures;

Taking the preoperatively acquired CT image as the reference, the predetermined key anatomical structures are segmented in 3D using region growing and fast marching methods, and the segmented structures are labeled. The predetermined key anatomical structures are determined by the specific surgical site, for example blood vessels, tumors and nerves, and the doctor determines their specific positions in the CT image.

In addition, since the predetermined key anatomical structures have been 3D-segmented in the CT image, after the rendering of step S105 they are displayed differentially in the orthogonal section views, which makes intraoperative observation easier for the doctor and allows the surgical target, such as the tumor to be resected, to be determined quickly.
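The source names region growing and fast marching without further detail; a minimal sketch of the region-growing half alone, under an assumed fixed intensity tolerance around a doctor-picked seed (the function name and 6-connectivity are assumptions):

    import numpy as np
    from collections import deque

    def region_grow(volume: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
        """Segment the connected region around `seed` whose intensities stay
        within `tol` of the seed intensity (6-connected flood fill)."""
        mask = np.zeros(volume.shape, dtype=bool)
        ref = float(volume[seed])
        queue = deque([seed])
        mask[seed] = True
        offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and not mask[n] and abs(float(volume[n]) - ref) <= tol:
                    mask[n] = True
                    queue.append(n)
        return mask

    # tumor_mask = region_grow(ct, seed=(60, 128, 130), tol=80.0)  # seed chosen by the doctor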
S108, performing color mapping on the key anatomical structures obtained by 3D segmentation.

Color mapping is applied to the key anatomical structures obtained by 3D segmentation, for example red for blood vessels, green for tumors and yellow for nerves. This makes the key anatomical structures more clearly distinguishable in the image, speeds up the virtual-real fusion processing, and guarantees the accuracy of distance perception during the fusion.

During the distance-weighted differential rendering of step S105, color-mapped key anatomical structures farther from the endoscope tip also undergo color attenuation in rendering, i.e., the farther a structure is, the less easily it is observed. This more effectively conveys the relative relationships between the key anatomical structures, lets the doctor judge occlusion and front-back relationships between them more clearly, and gives the doctor more accurate auxiliary diagnosis and treatment capability.
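The color scheme (vessels red, tumor green, nerves yellow) is from the source, while the label ids and the linear attenuation law below are assumptions; a minimal sketch of distance-attenuated color mapping:

    import numpy as np

    LABEL_COLORS = {1: (1.0, 0.0, 0.0),   # blood vessels: red
                    2: (0.0, 1.0, 0.0),   # tumor: green
                    3: (1.0, 1.0, 0.0)}   # nerves: yellow

    def shade_voxel(label: int, dist: float, d_max: float):
        """RGBA of a labeled voxel, attenuated with distance from the
        endoscope tip (one plausible attenuation choice, not the patent's)."""
        r, g, b = LABEL_COLORS.get(label, (0.5, 0.5, 0.5))
        w = max(0.0, 1.0 - dist / d_max)   # farther means dimmer and more transparent
        return (r * w, g * w, b * w, w)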
Further, as shown in FIG. 5, after the above step S104 the method further includes:

S109, acquiring the endoscopic image in real time;

The detection lens of the endoscope extends into the patient's body to acquire the endoscopic image. Since the endoscope view changes during surgery, the endoscopic image is acquired in real time.

S110, performing virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.

This embodiment of the invention realizes the display of a view of the relative position between the endoscope and the patient's body, orthogonal section views of the CT image referenced to the endoscope, and a virtual-real fusion display view of the endoscopic image and the CT image, enabling the doctor to combine the views to accurately understand the endoscope position and the intraoperative progress, improving the safety of endoscopic minimally invasive surgery.
Further, as shown in FIG. 6, the above step S110 includes:

S111, cutting a cube from the registered CT image according to the position of the endoscope tip to obtain cut cube data;

The cutting cube parameters are determined from the position information of the endoscope tip, and the registered CT image is cut by the cube constructed from these parameters to obtain the cube data. In one embodiment, referring to FIG. 7, the cutting cube parameters are as follows: in the space O_CT formed by the CT image, starting from the focal plane O_V of the endoscope, one edge of the cube is formed along the endoscope axis as the depth direction, with length d; meanwhile, the other two edges m and n of the cube are set according to the size of the endoscope display range. A cube is thus constructed from the determined parameters (the focal plane O_V and the three edges), and cutting the registered CT image by this cube yields the cut cube data.
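A minimal sketch of this cube construction, reusing the (u, v, w) frame from the sectioning sketch above (centering the m x n face on the focal point is an assumption; the source fixes only the focal-plane start, the depth d and the edges m and n):

    import numpy as np

    def cutting_cube(focal_origin, u, v, w, m, n, d):
        """Eight corners of the cutting cube: an m x n face lies in the
        endoscope focal plane spanned by u and v; depth d runs along w."""
        corners = []
        for a in (0.0, 1.0):
            for b in (0.0, 1.0):
                for c in (0.0, 1.0):
                    p = (focal_origin
                         + (a - 0.5) * m * u     # centered m x n face
                         + (b - 0.5) * n * v
                         + c * d * w)            # depth from the focal plane
                    corners.append(p)
        return np.array(corners)                 # (8, 3)

    # cube_corners = cutting_cube(focal_point, u, v, w, m=160, n=160, d=120)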
S112, performing differential rendering on the cut cube data with the distance-weighted ray casting method to obtain rendered cube data;

After the CT image is cut by the cube constructed from the cube parameters shown in FIG. 7 and the cube data are obtained, the cut cube data are rendered differentially with the distance-weighted ray casting method. Specifically, the distance from the front surface of the data cube (i.e., the focal plane O_V of the endoscope in FIG. 7) to its rear surface is d; as the distance increases (i.e., as the value of d grows), the sampling factor of each sampling point on each ray takes a transparency value corresponding to d, and the cube data are rendered differentially according to this transparency value. Still referring to FIG. 7 and taking a surgical target at depth p as an example, in the ray-casting sampling weighting factor at any point inside the data cube, the farther a voxel is from the focal plane O_V, the smaller its absorption contribution in the ray casting function, so that anatomical structures at different positions inside the cube data are rendered with visible differences. The mapping between the sampling position and the sampling-factor transparency is given in the source only as an image; in it, m, n and d are the edge lengths of the data cube and (x, y, z) is the coordinate of the sampling position.

Distance-weighted rendering refines the rendering of structural texture while effectively conveying the relative positional relationships between the anatomical structures inside the cube, clearly presenting the anatomy and its position information.
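Since the exact mapping survives only as an image, a minimal sketch of one opacity law consistent with the surrounding text (transparency growing with the depth coordinate z across the cube; the linear form and the base opacity $\alpha_0$ are assumptions) is:

$$\alpha(x, y, z) = \alpha_0\left(1 - \frac{z}{d}\right), \qquad 0 \le x \le m,\; 0 \le y \le n,\; 0 \le z \le d,$$

so that a sample's opacity contribution in the ray-casting compositing decreases with its distance from the focal plane; the patent's actual mapping may differ.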
S113, performing virtual-real fusion of the rendered cube data and the endoscopic image to obtain a virtual-real fusion image, and displaying it.

After the cube data are obtained and rendered, they undergo virtual-real fusion with the endoscopic image obtained in step S109 to obtain the virtual-real fusion image.

The virtual-real fusion image of this embodiment not only displays the image detected by the endoscope in real time, but also uses the distance-weighted rendering method to render the cut cube data differentially, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; it more effectively conveys the relative relationships between anatomical structures, so that the doctor can judge occlusion and front-back relationships of anatomical structures more clearly, giving the doctor more accurate auxiliary diagnosis and treatment capability.

It should be noted that other virtual-real fusion approaches may also be used in the above step S110.
Further, as shown in FIG. 8, after the endoscopic image is acquired in the above step S109, it is processed as follows:

Step S114, performing distortion correction on the endoscopic image;

After the endoscopic image is acquired, its distortion is corrected so that an endoscopic image with severe radial distortion can be recovered quickly, eliminating the mismatch with the real scene that image distortion would otherwise cause in the virtual-real fusion display.
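The source does not specify the correction algorithm; a common choice, shown here as an assumption, is undistortion with a precalibrated pinhole-plus-radial-distortion model via OpenCV (the intrinsic matrix K and the distortion coefficients below are illustrative values from a hypothetical calibration):

    import cv2
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])                    # assumed endoscope intrinsics
    dist = np.array([-0.45, 0.20, 0.0, 0.0, 0.0])      # strong radial distortion (k1, k2, p1, p2, k3)

    def correct_frame(frame: np.ndarray) -> np.ndarray:
        """Undistort one endoscopic video frame with the calibrated model."""
        return cv2.undistort(frame, K, dist)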
Step S115, performing transparency mapping on the endoscopic image based on distance from the image center, and applying edge attenuation to the transparency-mapped endoscopic image.

The distortion-corrected endoscopic image is transparency-mapped based on distance from the image center. Specifically, taking the image center as the origin and the radius as the transparency mapping parameter, the farther a pixel is from the image center, the higher its transparency, i.e., the more transparent it is. In this way the image of the endoscope's central region is preserved, so that layered rendering becomes possible when the edge of the endoscopic image is attenuated; this effectively improves the immersion of the fused display and makes the fusion of the foreground and background scenes in the virtual-real fusion more realistic.

Further, the edge attenuation may, for example but not limited to, use a Gaussian function. As shown in FIG. 9, which illustrates edge Gaussian attenuation and transparency mapping of a nasal endoscopic image, for an m × n endoscopic image let the distance between any point P(i, j) in the picture and the image center be

$$r(i, j) = \sqrt{\left(i - \frac{m}{2}\right)^{2} + \left(j - \frac{n}{2}\right)^{2}},$$

where 0 < i ≤ m-1 and 0 < j ≤ n-1. According to this distance to the image center, the radius of the opaque region in the endoscopic image can be set to t, and the maximum image radius is R, so the attenuation region is R - t. The transparency over the attenuation region is then defined by a formula that survives in the source only as an image.
In this embodiment of the invention, the Gaussian edge attenuation algorithm processes the endoscopic image, achieving a seamless, visually smooth transition in the virtual-real fusion of the endoscopic image and the CT image, with good matching and transition between the structures visible to the naked eye in the endoscopic image and the reconstructed structures. Compared with the peripheral expansion of a conventional endoscopic image, more structural information can be displayed, and lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect of real-time images in surgical navigation.
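With the attenuation formula itself lost to an image, here is a minimal NumPy sketch under the assumption of a Gaussian falloff from the opaque radius t out to the image radius R (the width parameter sigma is an assumption, not from the source):

    import numpy as np

    def edge_alpha_mask(m: int, n: int, t: float, R: float, sigma: float) -> np.ndarray:
        """Per-pixel opacity: 1 inside radius t, assumed Gaussian falloff over
        the attenuation region t..R, and 0 beyond R."""
        i, j = np.meshgrid(np.arange(m), np.arange(n), indexing="ij")
        r = np.sqrt((i - m / 2.0) ** 2 + (j - n / 2.0) ** 2)
        alpha = np.exp(-((r - t) ** 2) / (2.0 * sigma ** 2))
        alpha[r <= t] = 1.0
        alpha[r > R] = 0.0
        return alpha

    # rgba = np.dstack([frame / 255.0,
    #                   edge_alpha_mask(*frame.shape[:2], t=180.0, R=240.0, sigma=25.0)])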
As shown in FIG. 10, FIG. 10 illustrates a virtual-real fusion display interface for nasal endoscopic surgical navigation according to an embodiment of the present invention. The display interface shown in FIG. 10 includes a view of the relative position between the endoscope and the patient's body, an axial positioning section view, a radial positioning section view, and a virtual-real fusion display view that combines the cut cube data after distance-weighted differential rendering with the endoscopic image after transparency mapping and edge attenuation. Each view in the display interface is updated as the position of the endoscope tip changes. Based on the display interface shown in FIG. 10, the distance and positional relationship between the endoscope and the target structure in the patient's body can be observed clearly and intuitively from the relative-position view and the axial and radial positioning section views. In the virtual-real fusion display view, the real-time cut cube data with distance-weighted differential rendering, the endoscopic image with edge Gaussian attenuation and transparency mapping, and the color-mapped key anatomical target information can be observed simultaneously; anatomical structures in the endoscopic image, such as the nasal cavity, extend naturally into the virtual scene, and the distance-weighted differential rendering provides effective prompts for the anatomy in the virtual scene.

It should be noted that since colors cannot be shown in the drawings, different line styles are used instead; in the actual display, the different anatomical structures are shown in different colors. The axial positioning section view and the radial positioning section view shown in FIG. 10 are the orthogonal section views obtained by orthogonally sectioning and rendering the CT image in step S105.
In summary, compared with conventional endoscope navigation display methods, the above virtual-real fusion display method for endoscopic minimally invasive surgery navigation has the following advantages:

1) The embodiment of the present invention orthogonally sections the CT image along directions parallel or perpendicular to the endoscope, effectively avoiding the display shortcomings of the three views with respect to distance; it displays the relative position between the surgical instrument (for example, the endoscope) and the surgical target, accurately indicating their distance relationship; in addition, the distance-weighted rendering method renders the orthogonal section views differentially, so that the distance between the endoscope and the target position is displayed more clearly;

2) The embodiment of the present invention realizes the display of a view of the relative position between the endoscope and the patient's body, orthogonal section views of the CT image referenced to the endoscope, and a virtual-real fusion display view of the endoscopic image and the CT image, enabling the doctor to combine the views to accurately understand the endoscope position and the intraoperative progress, improving the safety of endoscopic minimally invasive surgery;

3) The virtual-real fusion image of the embodiment not only displays the image detected by the endoscope in real time, but also uses the distance-weighted rendering method to render the cut cube data differentially, which reduces computational complexity and accelerates rendering while providing more accurate depth perception; it more effectively conveys the relative relationships between anatomical structures, so that the doctor can judge occlusion and front-back relationships of anatomical structures more clearly, giving the doctor more accurate auxiliary diagnosis and treatment capability;

4) The Gaussian edge attenuation algorithm processes the endoscopic image in real time, achieving a seamless, visually smooth transition in the virtual-real fusion of the endoscopic image and the CT image, with good matching and transition between the structures visible to the naked eye in the endoscopic image and the reconstructed structures; compared with the peripheral expansion of a conventional endoscopic image, more structural information can be displayed, and lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect of real-time images in surgical navigation;

5) Virtual-real fusion with layered rendering in the endoscope's field-of-view region achieves augmented reality guidance of the observed region; moreover, a positioning cube section is used for the display and rendering region, and its position changes with the position and direction of the endoscope, improving both distance perception and scene immersion.
Correspondingly, the above virtual-real fusion display method for endoscopic minimally invasive surgery navigation may be implemented by hardware combined with software, or purely as software code running in a computer. Specifically, as shown in FIG. 11, FIG. 11 shows an image display device for endoscopic minimally invasive surgery navigation according to an embodiment of the present invention. The virtual-real fusion device may include a display screen 10, a processor 20 and a data interface 30, wherein the data interface is used to connect the endoscope and the CT device to acquire the endoscopic image and the CT image; the processor 20 is configured to execute the minimally invasive surgery navigation virtual-real fusion display method of any of the above embodiments to obtain the virtual-real fusion image; and the display screen 10 is used to display the virtual-real fusion image obtained by the processor 20.

Further, as shown in FIG. 12, the above processor 20 includes a CPU processing unit 21 and a GPU processing unit 22. The CPU processing unit 21 is mainly used to perform functions such as mathematical computation and image configuration, for example the registration of the CT image with the patient posture and the 3D segmentation of key anatomical structures. The CPU processing unit is of course also used for other processing, such as reading the endoscopic images and CT images from the data interface 30 and obtaining position information, such as the real-time position of the endoscope and the patient posture, from the optical tracking device 200.

The GPU processing unit 22 is used to perform functions related to graphics processing, such as cube cutting of CT images, distance-weighted cube data rendering, transparency mapping and edge attenuation of endoscopic images, acquisition of the relative position between the endoscope and the patient's body and of the distance between the endoscope and the surgical target, orthogonal sectioning of CT images, and so on.

Further, the processor 20 is further configured to: according to the real-time position of the endoscope, acquire the corresponding virtual-real fusion image, the view of the relative position between the endoscope and the patient's body, and the section views of the CT image orthogonally sectioned along directions parallel and perpendicular to the endoscope, and update them to the display screen 10 for display.
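As a closing sketch of how the pieces above might be orchestrated per frame (every API here is hypothetical and illustrative; in particular fuse, tracker and display are stand-ins, not interfaces from the patent):

    def navigation_loop(tracker, ct, display):
        # One-time patient-to-CT registration from the tracked markers.
        R, t = rigid_fit(tracker.marker_positions(), ct.marker_positions())
        while display.is_open():
            tip_pos, tip_dir = tracker.endoscope_tip()        # tracked in real time
            tip_ct = R @ tip_pos + t                          # into CT space
            u, v, w = endoscope_frame(R @ tip_dir)
            display.show("axial", oblique_slice(ct.volume, tip_ct, u, w))
            display.show("radial", oblique_slice(ct.volume, tip_ct, v, w))
            frame = correct_frame(tracker.endoscope_frame())  # undistorted video
            display.show("fusion", fuse(ct.volume, frame, tip_ct, u, v, w))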
As shown in FIG. 13, an embodiment of the present invention further provides an endoscopic minimally invasive surgery navigation system, applied for example, but not limited to, to navigation of nasal and sinus malignant tumor surgery and skull base tumor surgery. The surgical navigation system specifically includes a computer device 100 and an optical tracking device 200. The optical tracking device 200 is used to acquire the position of the endoscopic surgical tool 300 in real time and to track the patient posture; the computer device 100 is used to acquire the endoscopic image and the CT image and, combining the position information tracked by the optical tracking device 200 and using the image display method of any of the above embodiments, to process the endoscopic image and the CT image, obtaining and displaying a virtual-real fusion image of the endoscopic image and the CT image.

Optionally, the computer device includes the image display device shown in FIG. 12.

It should be noted that the computing device in the above embodiments may be implemented by software plus the necessary general hardware platform, and may of course also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the embodiments of the present invention, or the part of it that contributes to the prior art, may essentially be embodied in the form of a software product, that is, the computing method of any of the above embodiments is executed through a series of program instructions: a computer software product executing the method is stored in a computer storage medium (such as, but not limited to, a ROM/RAM, a magnetic disk or an optical disk) and includes a number of instructions for causing a terminal device (which may be a computer, a medical device, a server, etc.) to perform the computing method of any embodiment of the present invention.

The above are only preferred embodiments of the present invention and do not therefore limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.
Industrial Applicability

The present invention provides an image display method, device and system for endoscopic minimally invasive surgery navigation. The image display method includes: acquiring a CT image; performing registration between the CT image and the patient posture; acquiring the position and direction of the endoscope tip in real time; obtaining, according to the position and direction of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body and the distance between the endoscope and the surgical target; according to the position and direction of the endoscope tip and the distance between the endoscope and the surgical target, orthogonally sectioning the registered CT image along directions parallel and perpendicular to the endoscope, and differentially rendering the orthogonally sectioned data with the distance-weighted ray casting method; and displaying a view of the relative position between the endoscope and the patient's body as well as the orthogonal section views. The image display device and system both adopt this image display method to realize image display during surgical navigation. The present invention has industrial applicability.

Claims (14)

  1. An image display method for endoscopic minimally invasive surgery navigation, characterized by comprising the following steps:
    S1, acquiring a CT image;
    S2, performing registration between the CT image and the patient posture to obtain a registered CT image;
    S3, acquiring the position and direction of the endoscope tip in real time;
    S4, obtaining the distance between the endoscope and the surgical target according to the position and direction of the endoscope tip and the registered CT image;
    S5, according to the position and direction of the endoscope tip and the distance between the endoscope and the surgical target, orthogonally sectioning the registered CT image along directions parallel and perpendicular to the endoscope, and performing differential rendering on the orthogonally sectioned data with a distance-weighted ray casting method to obtain orthogonal section data;
    S6, displaying the orthogonal section views.
  2. The image display method for endoscopic minimally invasive surgery navigation according to claim 1, wherein in step S4, while the distance between the endoscope and the surgical target is obtained according to the position and direction of the endoscope tip and the registered CT image, the relative position between the endoscope and the patient's body is also obtained;
    and in step S6, a view of the relative position between the endoscope and the patient's body is displayed together with the orthogonal section views.
  3. The image display method for endoscopic minimally invasive surgery navigation according to claim 1, wherein before step S2 the method further includes:
    S7, performing 3D segmentation of predetermined key anatomical structures in the CT image based on region growing and fast marching methods, and labeling the segmented key anatomical structures.
  4. The image display method for endoscopic minimally invasive surgery navigation according to claim 3, wherein after step S7 the method further includes:
    S8, performing color mapping on the key anatomical structures obtained by 3D segmentation.
  5. The image display method for endoscopic minimally invasive surgery navigation according to any one of claims 1 to 4, wherein after step S4 the method further includes:
    S9, acquiring the endoscopic image in real time;
    S10, performing virtual-real fusion of the registered CT image and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
  6. The image display method for endoscopic minimally invasive surgery navigation according to claim 5, wherein step S10 specifically includes:
    S11, cutting a cube from the CT image according to the position and direction of the endoscope tip to obtain cut cube data;
    S12, performing differential rendering on the cut cube data with a distance-weighted ray casting method to obtain rendered cube data;
    S13, performing virtual-real fusion of the rendered cube data and the endoscopic image to obtain a virtual-real fusion image, and displaying it.
  7. The image display method for endoscopic minimally invasive surgery navigation according to claim 5, wherein before step S5 the method further includes the step of:
    S14, performing transparency mapping on the endoscopic image based on distance from the image center, and applying edge attenuation to the transparency-mapped endoscopic image, so that the edge-attenuated endoscopic image is fused with the cut cube data.
  8. The image display method for endoscopic minimally invasive surgery navigation according to claim 7, wherein before step S14 the method further includes:
    S15, performing distortion correction on the endoscopic image.
  9. An image display device for endoscopic minimally invasive surgery navigation, characterized by comprising a display screen, a processor and a data interface, wherein the data interface is used to connect an endoscope and a CT device to acquire endoscopic images and preoperative CT images; the processor is configured to execute the image display method for endoscopic minimally invasive surgery navigation according to any one of claims 1 to 8 to obtain the corresponding surgical navigation images; and the display screen is used to display the images obtained by the processor.
  10. The image display device for endoscopic minimally invasive surgery navigation according to claim 9, wherein the processor includes a CPU processing unit and a GPU processing unit, the CPU processing unit being used for computation and image configuration and the GPU processing unit for image processing.
  11. The image display device for endoscopic minimally invasive surgery navigation according to claim 9, wherein the processor is further configured to acquire, according to the real-time position of the endoscope, the corresponding view of the relative position between the endoscope and the patient's body, the orthogonal section views, and the virtual-real fusion image, and update them to the display screen for display.
  12. An endoscopic minimally invasive surgery navigation system, characterized by comprising a computer device and an optical tracking device, wherein the optical tracking device is used to acquire the position of the endoscopic surgical tool in real time and to track the patient posture, and the computer device is used to acquire endoscopic images and CT images and, combining the position information tracked by the optical tracking device and using the image display method according to any one of claims 1 to 8, to acquire and display the corresponding surgical navigation images.
  13. The endoscopic minimally invasive surgery navigation system according to claim 12, wherein the computer device includes the image display device according to any one of claims 9 to 11.
  14. The endoscopic minimally invasive surgery navigation system according to claim 13, wherein the endoscopic minimally invasive surgery navigation system is applied to navigation of nasal and sinus malignant tumor surgery and skull base tumor surgery.
PCT/CN2018/103929 2017-09-06 2018-09-04 Image display method, device and system for endoscopic minimally invasive surgery navigation WO2019047820A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710795485.1 2017-09-06
CN201710795485.1A CN107689045B (zh) 2017-09-06 2017-09-06 Image display method, device and system for endoscopic minimally invasive surgery navigation

Publications (1)

Publication Number Publication Date
WO2019047820A1 true WO2019047820A1 (zh) 2019-03-14

Family

ID=61155170

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/103929 WO2019047820A1 (zh) 2017-09-06 2018-09-04 Image display method, device and system for endoscopic minimally invasive surgery navigation

Country Status (2)

Country Link
CN (1) CN107689045B (zh)
WO (1) WO2019047820A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107689045B (zh) 2017-09-06 2021-06-29 艾瑞迈迪医疗科技(北京)有限公司 Image display method, device and system for endoscopic minimally invasive surgery navigation
CN109223177A (zh) 2018-07-30 2019-01-18 艾瑞迈迪医疗科技(北京)有限公司 Image display method and device, computer equipment, and storage medium
CN109246419A (zh) 2018-09-17 2019-01-18 广州狄卡视觉科技有限公司 Dual-channel-output micro-pattern stereoscopic imaging display system and method for a surgical microscope
CN109998684A (zh) 2019-05-07 2019-07-12 艾瑞迈迪科技石家庄有限公司 Guidance and early-warning method and device based on dynamic distance quantization
CN110123447A (zh) 2019-05-07 2019-08-16 艾瑞迈迪科技石家庄有限公司 Guide path planning method and device for image-guided procedures
CN113243877A (zh) 2020-02-13 2021-08-13 宁波思康鑫电子科技有限公司 System and method for endoscope positioning
CN113317874B (zh) 2021-04-30 2022-11-29 上海友脉科技有限责任公司 Medical image processing device and medium
CN117481753B (zh) 2023-12-29 2024-04-05 北京智愈医疗科技有限公司 Endoscope-based method and device for monitoring the motion trajectory of a water jet

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101049248A (zh) * 2007-05-18 2007-10-10 西安工业大学 Optical-magnetic-electric composite navigation surgical positioning device and method
CN102428496A (zh) * 2009-05-18 2012-04-25 皇家飞利浦电子股份有限公司 Registration and calibration for marker-free tracking of an EM-tracked endoscope system
CN102946784A (zh) * 2010-06-22 2013-02-27 皇家飞利浦电子股份有限公司 System and method for real-time endoscope calibration
WO2017013521A1 (en) * 2015-07-23 2017-01-26 Koninklijke Philips N.V. Endoscope guidance from interactive planar slices of a volume image
WO2017030913A2 (en) * 2015-08-14 2017-02-23 Intuitive Surgical Operations, Inc. Systems and methods of registration for image-guided surgery
CN107689045A (zh) 2017-09-06 2018-02-13 艾瑞迈迪医疗科技(北京)有限公司 Image display method, device and system for endoscopic minimally invasive surgery navigation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6167296A (en) * 1996-06-28 2000-12-26 The Board Of Trustees Of The Leland Stanford Junior University Method for volumetric image navigation
CN100586379C (zh) * 2008-07-04 2010-02-03 浙江大学 Computer-simulated calibrated biopsy method and device
EP2523621B1 (en) * 2010-01-13 2016-09-28 Koninklijke Philips N.V. Image integration based registration and navigation for endoscopic surgery
CN102727309B (zh) * 2011-04-11 2014-11-26 上海优益基医疗器械有限公司 Surgical navigation system incorporating endoscopic images
CN102999902B (zh) * 2012-11-13 2016-12-21 上海交通大学医学院附属瑞金医院 Optical navigation and positioning method based on CT registration results


Also Published As

Publication number Publication date
CN107689045A (zh) 2018-02-13
CN107689045B (zh) 2021-06-29

Similar Documents

Publication Publication Date Title
WO2019047820A1 (zh) Image display method, device and system for endoscopic minimally invasive surgery navigation
CN107456278B (zh) Endoscopic surgical navigation method and system
US11883118B2 (en) Using augmented reality in surgical navigation
US9646423B1 (en) Systems and methods for providing augmented reality in minimally invasive surgery
CN110033465B (zh) Real-time three-dimensional reconstruction method applied to binocular endoscopic medical images
AU2015284430B2 (en) Dynamic 3D lung map view for tool navigation inside the lung
US9364294B2 (en) Systems, methods, apparatuses, and computer-readable media for image management in image-guided medical procedures
US11961193B2 (en) Method for controlling a display, computer program and mixed reality display device
Bernhardt et al. Automatic localization of endoscope in intraoperative CT image: a simple approach to augmented reality guidance in laparoscopic surgery
CN112641514B (zh) Minimally invasive interventional navigation system and method
CN107610109A (zh) Image display method, device and system for endoscopic minimally invasive surgery navigation
JP2011212301A (ja) Projection image generation device, method, and program
EP2901934B1 (en) Method and device for generating virtual endoscope image, and program
WO2023246521A1 (zh) Mixed-reality-based lesion localization method, device and electronic equipment
Wang et al. Autostereoscopic augmented reality visualization for depth perception in endoscopic surgery
Zhu et al. A neuroendoscopic navigation system based on dual-mode augmented reality for minimally invasive surgical treatment of hypertensive intracerebral hemorrhage
US10631948B2 (en) Image alignment device, method, and program
CN115105204A (zh) Laparoscopic augmented reality fusion display method
CN115375595A (zh) Image fusion method, device and system, computer equipment and storage medium
CN111743628A (zh) Computer-vision-based path planning method for an automatic puncture robotic arm
US20220392173A1 (en) Virtual enhancement of a camera image
Fang et al. An Ultrasound Image Fusion Method for Stereoscopic Laparoscopic Augmented Reality
WO2023162657A1 (ja) Medical support device, and operating method and operating program for the medical support device
Kumar et al. Stereoscopic augmented reality for single camera endoscope using optical tracker: a study on phantom
CN116385513A (zh) 2D/3D registration method for laparoscopic liver surgery navigation based on the joint PSO-SoftPOSIT algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18855080

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18855080

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.08.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18855080

Country of ref document: EP

Kind code of ref document: A1