WO2017036023A1 - Positioning system for use during a surgical operation

Positioning system for use during a surgical operation

Info

Publication number: WO2017036023A1 (application PCT/CN2015/099144)
Authority: WIPO (PCT)
Prior art keywords: module, image, processing module, data, central processing
Priority date: 2015-09-06
Filing date: 2015-12-28
Publication date: 2017-03-09
Other languages: English (en), Chinese (zh)
Inventor: 樊昊
Original Assignee: 北京医千创科技有限公司
Application filed by 北京医千创科技有限公司
Publication of WO2017036023A1

Description

  • The present invention relates to the field of medical device technology and, in particular, to a surgical positioning system.
  • Laparoscopes, endoscopes (e.g., gastroscopes, colonoscopes, bronchoscopes) and surgical robots are all representative minimally invasive techniques.
  • In these techniques, various cameras are the main observation tools. They replace the human eye and mainly perform two tasks: 1. identifying the lesion and its location in the human body; 2. identifying the surgical instrument and its position in the human body.
  • Surgical instruments are relatively large, so the camera has no difficulty recognizing them. Identifying lesions, however, especially early lesions, is difficult for the camera.
  • The reasons are as follows: 1. Camera imaging uses visible light. Visible light can show the surface of a lesion or the structure of an organ, but cannot reveal lesions or tissue structures hidden in deeper layers. For example, in laparoscopic surgery the camera can see a large tumor, but cannot see the deep blood vessels supplying it.
  • 2. The techniques used to find lesions before surgery are not necessarily camera-based; they may be other imaging examinations such as ultrasound, MRI, and CT.
  • The raw-signal acquisition methods of these technologies differ from those of cameras, and so do the types of lesions each is good at discovering.
  • Some early lesions can be detected early with these other techniques, whereas camera technology would have to wait much longer to discover them. For example, some early breast cancers found by MRI or molybdenum-target mammography look little different from normal tissue under the camera and are difficult to distinguish.
  • Patent CN200680020112 describes a technique that equips a surgical robot with a laparoscopic ultrasound probe designed specifically for intraoperative use. The probe produces 2D images that a processor can assemble into at least part of a 3D anatomical image. This image and the camera image are then transmitted to the processor, which after processing shows the camera image as the main image on the display screen and the ultrasound image as an auxiliary image. The design can also compare the 3D camera view with 2D ultrasound image slices.
  • Patent CN201310298142 describes another technique: it converts a preoperative 3D image into a virtual ultrasound image, registers it with intraoperative ultrasound, fuses the resulting image with the endoscopic image during the operation, and finally performs postoperative evaluation on a cloud platform.
  • The CN201310298142 patent has other problems: 1) its cloud-server function is tacked on at the end, placed in the final stage of the workflow, the postoperative evaluation phase; 2) the cloud-server function runs in parallel with the registration functions, reducing the user's dependence on the cloud server: a doctor can complete preoperative 3D image acquisition, intraoperative ultrasound image fusion, and fusion of the resulting image with the camera image without using the cloud server at all. These two points force a great deal of computation onto the local processor, which places significant demands on its configuration. Mobile and wearable devices are inherently limited in size, and compared with desktops, let alone workstations, they cannot easily meet such requirements.
  • What is needed, therefore, is a system that requires no special intraoperative ultrasound device yet can compare preoperative non-real-time image data with the intraoperative visible-light image; that has modest software requirements, so that a mobile or wearable device can conveniently read the 3D image and even display the fusion results; and that theoretically guarantees that as long as early examination can locate a lesion, the positioning system can find that lesion during surgery. This has important medical implications for popularizing early detection and early treatment.
  • An object of the present invention is to provide a surgical positioning system that remedies the deficiencies of existing laparoscopic, medical-endoscope, and robotic surgical positioning systems: data acquired by methods whose signal acquisition differs from that of the optical camera is given 3D visualization processing on the cloud server side and then fused with the video or image data of the optical camera, improving the rate at which lesions are discovered during the operation.
  • The invention provides a surgical positioning system that performs positioning by direct comparison between real-time visible-light images and non-real-time imaging images;
  • The system includes a DICOM data input module, a data visualization processing module, a visible light image input module, a central processing module, and an image display output module;
  • The data visualization processing module is located in the cloud, is connected to the central processing module, receives the data from the DICOM data input module, visualizes the received data, and transmits the patient's 3D model data to the central processing module and/or the image display output module;
  • The DICOM data input module is connected to the data visualization processing module and is used to upload the examination data in DICOM file format;
  • The visible light image input module is connected to the central processing module and transmits intraoperative real-time image data to the central processing module;
  • The central processing module is configured to receive the image data transmitted by the visible light image input module and the 3D model data transmitted by the data visualization processing module;
  • The image display output module is divided into a pre-central-processing display output module and a post-central-processing display output module; the two display output modules exist independently and run separately. The pre-central-processing display output module is connected to the cloud data visualization processing module and displays the 3D model; the post-central-processing display output module is connected to the central processing module and displays the optical image together with the 3D model. A sketch of how these modules fit together follows.
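As an illustration only, the following Python sketch wires the five modules into the data flow just described: DICOM data uploaded to a cloud-side visualization module, the resulting 3D model and the live camera stream meeting in the central processing module, and two independent display outputs. All class and method names are hypothetical; the patent does not prescribe any implementation.

```python
# Hypothetical sketch of the five-module pipeline; names are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Model3D:
    """Placeholder for the patient 3D model produced in the cloud."""
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)

class DicomDataInputModule:                      # module 100
    def upload(self, dicom_files: List[bytes]) -> List[bytes]:
        # Uploads examination data as DICOM files to the cloud side.
        return dicom_files

class DataVisualizationModule:                   # module 200 (cloud)
    def visualize(self, dicom_files: List[bytes]) -> Model3D:
        # Turns the uploaded DICOM data into a patient 3D model.
        return Model3D()

class VisibleLightInputModule:                   # module 300
    def next_frame(self) -> bytes:
        # Streams one intraoperative real-time camera frame.
        return b"camera-frame"

class CentralProcessingModule:                   # module 400
    def register_and_fuse(self, frame: bytes, model: Model3D) -> bytes:
        # Registers the camera frame against the 3D model and fuses them.
        return frame

def run_pipeline() -> Tuple[Model3D, bytes]:
    model = DataVisualizationModule().visualize(
        DicomDataInputModule().upload([b"ct-slice"]))
    fused = CentralProcessingModule().register_and_fuse(
        VisibleLightInputModule().next_frame(), model)
    # Module 501 would display `model` alone; module 502 displays `fused`.
    return model, fused
```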
  • Figure 1 is a schematic view showing the structure of a surgical positioning system.
  • FIG. 1 is a structural diagram of the surgical positioning system, which performs positioning by direct comparison between a real-time visible-light image and a non-real-time imaging image; the system includes a DICOM data input module 100, a data visualization processing module 200, a visible light image input module 300, a central processing module 400, and an image display output module;
  • The data visualization processing module is located in the cloud, is connected to the central processing module, receives the data from the DICOM data input module, visualizes the received data, and transmits the patient's 3D model data to the central processing module and/or the image display output module;
  • The DICOM data input module is connected to the data visualization processing module and is used to upload the examination data in DICOM file format;
  • The visible light image input module is connected to the central processing module and transmits intraoperative real-time image data to the central processing module;
  • The central processing module is configured to receive the image data transmitted by the visible light image input module and the 3D model data transmitted by the data visualization processing module;
  • The image display output module is divided into a pre-central-processing display output module 501 and a post-central-processing display output module 502; the two display output modules exist independently and run separately;
  • The pre-central-processing display output module 501 is connected to the cloud data visualization processing module and displays the 3D model;
  • The post-central-processing display output module 502 is connected to the central processing module and displays the optical image together with the 3D model.
  • In a first embodiment, the doctor uploads the patient's CT data to the cloud server in the form of a DICOM file.
  • After data visualization, the 3D model data of the patient's kidney and the location data of the stones within the kidney are transmitted to the central processing module; a sketch of this visualization step follows.
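A minimal sketch of what the cloud-side visualization step might do, assuming the CT series sits in a directory of .dcm files and that a Hounsfield-unit threshold roughly isolates the stones; the function name, threshold, and file layout are assumptions for illustration, not taken from the patent.

```python
# Stack a CT DICOM series into a volume and extract a surface mesh
# (the "3D model data" handed to the central processing module).
import glob
import numpy as np
import pydicom
from skimage import measure

def dicom_series_to_mesh(series_dir: str, hu_threshold: float = 300.0):
    # Read every slice and sort along the patient's z axis.
    slices = [pydicom.dcmread(p) for p in glob.glob(f"{series_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Build the volume and convert raw values to Hounsfield units.
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    volume = volume * slope + intercept

    # Marching cubes yields the iso-surface at the chosen density,
    # e.g. a dense stone inside the kidney model.
    verts, faces, normals, _ = measure.marching_cubes(volume, level=hu_threshold)
    return verts, faces, normals
```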
  • The central processing module receives the image data transmitted from the ureteroscope camera and the 3D data transmitted from the data visualization processing module. Through registration and fusion, it determines where the lens lies relative to the patient's 3D model and the path that must be traveled to reach the stones buried in the diverticulum. By following the path indicated by the central processing module, the diverticular stones buried in the tissue can be found. One possible realization of the registration step is sketched after this paragraph.
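The patent names registration and fusion but does not specify an algorithm. One plausible realization, sketched here under the assumption that a few landmark correspondences between the camera frame and the 3D model are already known, is perspective-n-point pose estimation with OpenCV, followed by projecting the hidden target into the camera view.

```python
# Assumed approach: estimate the scope camera's pose in model coordinates
# from landmark correspondences, then overlay a buried target on the frame.
import numpy as np
import cv2

def locate_lens_in_model(model_pts_3d, image_pts_2d, camera_matrix):
    """Camera pose (rotation, translation) relative to the patient 3D model."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_pts_3d, dtype=np.float64),  # landmarks on the model
        np.asarray(image_pts_2d, dtype=np.float64),  # same landmarks in frame
        camera_matrix, np.zeros(4))                  # assume no lens distortion
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec

def overlay_target(target_3d, rvec, tvec, camera_matrix):
    """Pixel location of a hidden target (e.g., a diverticular stone)."""
    pts, _ = cv2.projectPoints(
        np.asarray([target_3d], dtype=np.float64),
        rvec, tvec, camera_matrix, np.zeros(4))
    return tuple(pts[0, 0])  # (u, v) to highlight for the surgeon
```

At least four correspondences are needed for solvePnP; in practice they might come from anatomical landmarks visible in both the camera frame and the model.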
  • In another embodiment, a kidney tumor patient needs a laparoscopic partial nephrectomy.
  • The doctor uploads the patient's kidney CT data to the cloud server in the form of a DICOM file.
  • After data visualization, the 3D model data of the patient's kidney, the location of the tumor within the kidney, and the locations of the renal blood vessels are transmitted to the central processing module.
  • In this embodiment the central processing module is also located in the cloud; it receives the image data transmitted from the laparoscopic camera and the 3D data transmitted from the data visualization processing module.
  • Through registration and fusion, the course of the blood vessels supplying the renal tumor beneath the renal capsule is determined, and the result is shown on a wearable device (glasses).
  • The doctor can thus selectively block only the blood vessels supplying the kidney tumor and complete the surgery. Conventional methods require blocking larger arteries and veins, resulting in more extensive renal tissue ischemia and greater impairment of renal function.
  • In another embodiment, a peripheral lung cancer patient needs a bronchoscopic biopsy.
  • The doctor uploads the patient's DICOM file to the cloud server.
  • After data visualization, the 3D model data of the patient's lungs and bronchi, the location of the tumor within the lungs, and the locations of the blood vessels around the tumor are transmitted to the central processing module.
  • The central processing module receives the image data transmitted from the bronchoscope camera and the 3D data transmitted from the data visualization processing module. Through registration and fusion, it determines where the bronchoscope lens lies relative to the patient's 3D model, which bronchi must be traversed to reach the tumor, and which blood vessels around the tumor the biopsy must avoid. It can even help select a biopsy site at the margin of the tumor, because the cancer-cell detection rate is higher at the tumor's edge than at its center (the proportion of necrotic cells at the center of the tumor is too high). A sketch of the bronchial path search follows.
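Determining which bronchi must be traversed is naturally a path search over the airway tree extracted from the 3D model. The sketch below uses a toy bronchial graph; the node names and topology are invented for the example.

```python
# Hypothetical airway tree as a graph of branch points.
import networkx as nx

airway = nx.Graph()
airway.add_edges_from([
    ("trachea", "right_main"), ("trachea", "left_main"),
    ("right_main", "RB1"), ("right_main", "RB2"),
    ("RB2", "RB2a"), ("RB2", "RB2b"),  # assume RB2b abuts the tumor
])

# Route the bronchoscope from the trachea to the branch nearest the lesion.
route = nx.shortest_path(airway, source="trachea", target="RB2b")
print(" -> ".join(route))  # trachea -> right_main -> RB2 -> RB2b
```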
  • In another embodiment, a breast cancer patient needs a total endoscopic mastectomy. Before surgery, tiny early breast cancer lesions were found by MRI; such cancerous lesions are difficult to identify with the camera alone.
  • The doctor sends the patient's MRI DICOM file to the medical data visualization processing module. After data visualization, the 3D model data of the patient's mammary gland together with the tumor is passed to the central processing module.
  • The central processing module receives the image data transmitted from the endoscope camera and the 3D data transmitted from the data visualization processing module. Through registration and fusion, it determines where the lens lies relative to the patient's 3D model and the direction in which it must move to reach the tumor. In the end, the tumor that the camera could not easily recognize is located and removed.
  • In another embodiment, a patient requires a robotic partial liver resection.
  • The patient's color Doppler ultrasound shows the location of the liver cancer and of the abnormally proliferating tumor blood vessels.
  • The doctor sends the patient's preoperative color ultrasound DICOM file to the medical data visualization processing module.
  • After data visualization, the 3D model data of the patient's liver, tumor, and blood vessels are transmitted to the central processing module and to the doctor's mobile phone.
  • Before surgery, the doctor gains a general understanding of the blood vessel distribution at the surgical site from the mobile phone.
  • During the operation, the central processing module receives the image data transmitted from the robot camera and the 3D data transmitted from the data visualization processing module. Through registration and fusion, it determines where the surgical instrument lies relative to the patient's 3D model, the direction in which it must move to reach the tumor, and where the abnormally proliferating blood vessels are buried, helping the doctor find a surgical path that avoids those vessels and finally remove the tumor with ease.

Landscapes

  • Endoscopes (AREA)

Abstract

The present invention relates to a positioning system for use during a surgical operation, the positioning being performed on the basis of a direct comparison between a real-time visible-light image and a non-real-time image obtained by imaging. The system comprises: a DICOM data input module (100), a data visualization processing module (200), a visible light image input module (300), a central processing module (400), and an image display output module. The system is designed to generate a non-real-time 3D imaging model from the imaging data and then combine that model with a real-time camera image taken during an operation, which reduces the equipment requirements of the operation: for example, no laparoscopic or professional endoscopic ultrasound equipment is needed, only a standard pre-surgical examination result. Theoretically, the system can reliably display the location of a pathological change in the 3D model, so a camera can be operated based solely on the relative position of the surgical instrument and the main anatomical structures in the 3D map.
PCT/CN2015/099144 2015-09-06 2015-12-28 Positioning system for use during a surgical operation WO2017036023A1

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN201510559675.4A (CN105213032B) | 2015-09-06 | 2015-09-06 | 手术定位系统 (Surgical positioning system)
CN201510559675.4 | 2015-09-06 | |

Publications (1)

Publication Number Publication Date
WO2017036023A1 2017-03-09

Family

ID=54982566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099144 WO2017036023A1 2015-09-06 2015-12-28 Positioning system for use during a surgical operation

Country Status (2)

Country Link
CN (1) CN105213032B
WO (1) WO2017036023A1

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326856A (zh) * 2016-08-18 2017-01-11 厚凯(天津)医疗科技有限公司 Surgical image processing method and device
CN112237477B (zh) * 2019-07-17 2021-11-16 杭州三坛医疗科技有限公司 Positioning and navigation device for closed fracture reduction surgery

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002102249A (ja) * 2000-09-29 2002-04-09 Olympus Optical Co Ltd Surgical navigation device and surgical navigation method
WO2005055008A2 (fr) * 2003-11-26 2005-06-16 Viatronix Incorporated Systems and methods for automated segmentation, visualization and analysis of medical images
CN1874734A (zh) * 2003-09-01 2006-12-06 西门子公司 Method and device for visually supporting an electrophysiological catheter application in the heart
CN102811655A (zh) * 2010-03-17 2012-12-05 富士胶片株式会社 Endoscopic observation support system, method, device and program
US8348831B2 (en) * 2009-12-15 2013-01-08 Zhejiang University Device and method for computer simulated marking targeting biopsy
CN103793915A (zh) * 2014-02-18 2014-05-14 上海交通大学 Low-cost markerless registration system and registration method for neurosurgical navigation
CN104757951A (zh) * 2014-04-11 2015-07-08 京东方科技集团股份有限公司 Display system and data processing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080147086A1 (en) * 2006-10-05 2008-06-19 Marcus Pfister Integrating 3D images into interventional procedures
US8235530B2 (en) * 2009-12-07 2012-08-07 C-Rad Positioning Ab Object positioning with visual feedback
US10561861B2 (en) * 2012-05-02 2020-02-18 Viewray Technologies, Inc. Videographic display of real-time medical treatment
CN203195768U (zh) * 2013-03-15 2013-09-18 应瑛 Surgical guidance system
CN103371870B (zh) * 2013-07-16 2015-07-29 深圳先进技术研究院 Multimodal-image-based surgical navigation system

Also Published As

Publication number Publication date
CN105213032A (zh) 2016-01-06
CN105213032B (zh) 2017-12-15

Similar Documents

Publication Publication Date Title
Zhang et al. Real-time navigation for laparoscopic hepatectomy using image fusion of preoperative 3D surgical plan and intraoperative indocyanine green fluorescence imaging
JP7133474B2 (ja) 内視鏡画像及び超音波画像の画像ベースの融合
US20220192611A1 (en) Medical device approaches
Okamoto et al. Clinical application of navigation surgery using augmented reality in the abdominal field
Fu et al. The future of endoscopic navigation: a review of advanced endoscopic vision technology
CN106236006B (zh) 3D optical molecular imaging laparoscopic imaging system
RU2556593C2 (ru) Registration and navigation for endoscopic surgery based on image integration
Reynisson et al. Navigated bronchoscopy: a technical review
JP5486432B2 (ja) Image processing device, operating method therefor, and program
KR20130108320A (ko) Visualization of a registered subsurface anatomical reference for related applications
JP2013517909A (ja) Image-based global registration applied to bronchoscopy guidance
Onda et al. Short rigid scope and stereo-scope designed specifically for open abdominal navigation surgery: clinical application for hepatobiliary and pancreatic surgery
Kriegmair et al. Digital mapping of the urinary bladder: potential for standardized cystoscopy reports
Bertrand et al. A case series study of augmented reality in laparoscopic liver resection with a deformable preoperative model
Amir-Khalili et al. Automatic segmentation of occluded vasculature via pulsatile motion analysis in endoscopic robot-assisted partial nephrectomy video
WO2019047820A1 Image display method, device and system for minimally invasive endoscopic surgical navigation
Ma et al. Surgical navigation system for laparoscopic lateral pelvic lymph node dissection in rectal cancer surgery using laparoscopic-vision-tracked ultrasonic imaging
Konishi et al. Augmented reality navigation system for endoscopic surgery based on three-dimensional ultrasound and computed tomography: Application to 20 clinical cases
Nagelhus Hernes et al. Computer‐assisted 3D ultrasound‐guided neurosurgery: technological contributions, including multimodal registration and advanced display, demonstrating future perspectives
WO2017036023A1 Positioning system for use during a surgical operation
Langø et al. Navigation in laparoscopy–prototype research platform for improved image‐guided surgery
Galloway et al. Image‐Guided Abdominal Surgery and Therapy Delivery
Geurten et al. Endoscopic laser surface scanner for minimally invasive abdominal surgeries
Bartholomew et al. Surgical navigation in the anterior skull base using 3-dimensional endoscopy and surface reconstruction
Ong et al. A novel method for texture-mapping conoscopic surfaces for minimally invasive image-guided kidney surgery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15902828

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15902828

Country of ref document: EP

Kind code of ref document: A1