CN111887988A - Positioning method and device for a minimally invasive interventional surgery navigation robot

Info

Publication number
CN111887988A
CN111887988A (application CN202010642175.8A); granted publication CN111887988B
Authority
CN
China
Prior art keywords
real
time
positioning
image
human body
Prior art date
Legal status
Granted
Application number
CN202010642175.8A
Other languages
Chinese (zh)
Other versions
CN111887988B (en)
Inventor
罗雄彪
万英
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202010642175.8A
Publication of CN111887988A
Application granted
Publication of CN111887988B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES; A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 17/34: Trocars; Puncturing needles
    • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/30: Surgical robots
    • A61B 2034/101: Computer-aided simulation of surgical operations
    • A61B 2034/105: Modelling of the patient, e.g. for ligaments or bones
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • A61B 2034/2046: Tracking techniques
    • A61B 2034/2055: Optical tracking systems
    • A61B 2034/2065: Tracking using image or pattern recognition
    • A61B 2034/2068: Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B 2034/2072: Reference field transducer attached to an instrument or patient
    • A61B 2034/302: Surgical robots specifically adapted for manipulations within body cavities, e.g. within abdominal or thoracic cavities

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Pathology (AREA)
  • Endoscopes (AREA)

Abstract

The invention relates to a positioning method and device for a minimally invasive interventional surgery navigation robot. The positioning method comprises the following steps: registering the preoperative image to the live patient body and segmenting the organs within it; establishing a three-dimensional rectangular coordinate system based on the segmentation result; acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system; converting the real-time coordinates of the extracorporeal sensor into real-time positioning coordinates inside the body; judging whether the positioning coordinates lie within the preoperative image and, if so, generating updated positioning coordinates; and optimizing the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image. By mutually binding the preoperative image, the extracorporeal sensor and the airway endoscope camera, the invention makes the preoperative image correspond accurately to the live patient body.

Description

Positioning method and device for a minimally invasive interventional surgery navigation robot
Technical Field
The invention belongs to the technical field of surgical navigation, and particularly relates to a positioning method and a positioning device for a minimally invasive interventional surgery navigation robot.
Background
Minimally invasive surgery uses medical electronic endoscopes, such as laparoscopes, gastrointestinal endoscopes, thoracoscopes and airway endoscopes, together with related modern medical equipment to examine, diagnose, resect and treat clinical tumors. Its advantages are a small wound, little pain and quick recovery.
Lung cancer has the highest incidence and mortality of all cancers worldwide. Although novel targeted drugs, chemotherapeutic drugs and various minimally invasive procedures have been applied to tumor diagnosis and treatment in recent years, the mean survival time and five-year survival rate of malignant tumors overall have still not improved markedly, chiefly because about 75% of cancer patients in China are diagnosed at an advanced stage (stage III to IV). Early detection and early treatment of malignant tumors are therefore crucial. Minimally invasive interventional puncture biopsy and ablation therapy are important clinical methods for small lobar tumors, lung nodules and lung cancer: the incision (puncture point) is only the size of a grain of rice, diagnosis or treatment is achieved without cutting open body tissue, and the approach is non-surgical, low in trauma, quick in recovery and good in effect.
Clinicians generally perform needle biopsy or ablation therapy under the guidance of an airway endoscope (flexible bronchoscope). Although bronchoscopic surgical navigation systems are already in clinical use, this interventional approach still cannot reach the whole lung region; in particular, it cannot diagnose or treat lesion targets in the lung periphery far from the bronchi. The other clinical route to diagnosing and treating lung cancer is percutaneous lung puncture performed by a doctor under real-time CT imaging guidance. That method involves exposure to CT radiation, and the result of the puncture depends on the doctor's clinical experience and technique: the puncture position and needle-insertion angle must be adjusted repeatedly, the CT scanner must be run repeatedly to confirm the needle position, and neither the puncture tool nor the lobar target area can be tracked and localized automatically. Under the respiratory deformation of the body, localizing the lobar target area and manipulating the surgical tool become harder still. One disadvantage of this method is the heavy radiation: the doctor must complete the operation wearing a lead apron, an extra burden. Another is that the anatomical location of the target area is not provided, so the doctor must try repeatedly during the operation to find the nearest bronchial branch, prolonging the procedure.
The difficulties of localizing the lobar target area and of manipulating the surgical tool in minimally invasive interventional surgery have therefore become increasingly urgent technical problems to be solved.
Disclosure of Invention
In view of these problems, the invention relates to a positioning method for a minimally invasive interventional surgery navigation robot, comprising the following steps:
registering the preoperative image to the live patient body and segmenting the organs within it;
establishing a three-dimensional rectangular coordinate system based on the segmentation result;
acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system;
converting the real-time coordinates of the extracorporeal sensor into real-time positioning coordinates inside the body;
judging whether the positioning coordinates lie within the preoperative image and, if so, generating updated positioning coordinates;
and optimizing the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image.
Preferably, registering the preoperative image to the live patient body and segmenting the organs within it comprises:
mapping the preoperative image onto the organs of the live body to obtain an image map of the preoperative image within those organs;
and accurately segmenting the organs in the image map to obtain a real-time image map of the organs.
Preferably, establishing the three-dimensional rectangular coordinate system comprises:
restoring the real-time image map of the organ to a three-dimensional model;
obtaining three-dimensional data of the organ from the three-dimensional model;
sectioning the three-dimensional model repeatedly to obtain multiple section images of the organ;
and establishing the three-dimensional rectangular coordinate system from the three-dimensional data of the organ and the multiple section images.
Preferably, acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor in the three-dimensional rectangular coordinate system comprises:
attaching a first extracorporeal sensor to the front end of a first airway endoscope camera;
the mechanical arm performing, through a first puncture needle, a first percutaneous lung puncture with the first airway endoscope camera carrying the first extracorporeal sensor, and embedding the first extracorporeal sensor near the surgical site to form a respiratory gate;
acquiring the real-time coordinates of the first extracorporeal sensor during the percutaneous lung puncture to form a first path;
and simultaneously obtaining the three-dimensional spatial transformation of the mechanical arm during the first puncture, together with the first image captured by the first airway endoscope camera during the puncture.
Preferably, acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor in the three-dimensional rectangular coordinate system further comprises:
attaching a second extracorporeal sensor to the front end of a second airway endoscope camera;
performing, through a second puncture needle, a second percutaneous lung puncture with the second airway endoscope camera carrying the second extracorporeal sensor, and positioning the second extracorporeal sensor precisely at the surgical site on the basis of the first percutaneous puncture, the positioning of the first sensor and the respiratory gating;
acquiring the real-time coordinates of the second extracorporeal sensor during the percutaneous lung puncture to form a second path;
and simultaneously obtaining the three-dimensional spatial transformation of the mechanical arm during the second puncture, together with the second image captured by the second airway endoscope camera during the puncture.
Preferably, converting the real-time coordinates of the extracorporeal sensor into positioning coordinates inside the live body comprises:
mapping the first path, formed by the real-time coordinates of the first extracorporeal sensor, onto the live body to obtain a first real-time path within it;
and mapping the second path, formed by the real-time coordinates of the second extracorporeal sensor, onto the live body to obtain a second real-time path within it.
Preferably, judging whether the positioning coordinates lie within the preoperative image comprises:
judging whether the coordinate points of the first and second real-time paths lie within the surgical site of the preoperative image;
if so, acquiring the updated positioning coordinates within the surgical site;
and if not, discarding all real-time coordinates of the first and second real-time paths.
Preferably, optimizing the surgical navigation robot's positioning of the surgical site comprises:
optimizing the positioning based on the three-dimensional spatial transformations of the mechanical arm over the two percutaneous lung punctures, the updated positioning coordinates and the preoperative image.
The invention also relates to a positioning device for a minimally invasive interventional surgery navigation robot, comprising:
a segmentation module: for registering the preoperative image to the live patient body, segmenting the organs within it and establishing a three-dimensional rectangular coordinate system based on the segmentation result;
an acquisition module: for acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system;
and a positioning module: for converting the real-time coordinates of the extracorporeal sensor into positioning coordinates inside the live body, judging whether the positioning coordinates lie within the preoperative image, generating updated positioning coordinates if so, and optimizing the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image.
Preferably, the segmentation module comprises:
a registration unit: for mapping the preoperative image onto the organs of the live body to obtain an image map of the preoperative image within the body;
and a segmentation unit: for accurately segmenting the organs in the image map to obtain a real-time image map of the organs.
The technical effects of the invention are as follows. By mutually binding the preoperative image, the extracorporeal sensor and the airway endoscope camera, the preoperative image is made to correspond accurately to the live patient body. By sectioning the preoperative image in three directions and building a three-dimensional reconstructed visualization model from it, every part of the lung lobes is fully displayed. Through the two percutaneous lung punctures, the position of the surgical site can be determined accurately, facilitating treatment.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To illustrate the embodiments of the invention or the prior-art technical solutions more clearly, the drawings needed in their description are introduced briefly below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows the working technical roadmap of a minimally invasive interventional surgery navigation robot system according to an embodiment of the invention;
Fig. 2 shows a schematic view of a minimally invasive interventional surgery navigation robot system according to an embodiment of the invention;
Fig. 3 shows a schematic view of the three-dimensional visualization information and software interface of the minimally invasive interventional procedure according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention is illustrated with a lobar lung tumor as the example, but is not limited to positioning the surgical navigation robot in the lung lobes. Fig. 1 shows the working technical roadmap of a minimally invasive interventional surgery navigation robot system according to an embodiment of the invention. As shown in Fig. 1, the positioning method of the minimally invasive interventional surgery navigation robot comprises the following steps.
the method comprises the following steps: and corresponding the preoperative image to a real-time human body, and dividing organs in the real-time human body.
The preoperative image is obtained by scanning the patient's body with CT or magnetic resonance imaging equipment, so it may be a CT image or a magnetic resonance image. The embodiment of the invention processes preoperative images of the lung lobes: the regions of the various lobar organs are segmented from the image automatically and accurately, including tissue regions such as the lung parenchyma, lung nodules, bronchi, lobar blood vessels and tumor target areas. Illustratively, the CT or magnetic resonance equipment scans the lobar structure of the live patient and captures every part of the lobes fully, yielding a detailed structural map of each lobar part; three-dimensional preoperative image sections are formed, and the three directions (i.e. three views) display every part of the lobes fully, so that a three-dimensional reconstructed visualization model can be established for tissue regions such as the lesion target area, lung nodules, blood vessels and lobes.
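For illustration only, the following minimal Python sketch shows how the three section views mentioned above can be cut from a CT volume held as a NumPy array. The (z, y, x) array layout, the dummy volume and the slice indices are assumptions of this sketch, not part of the embodiment; a real CT series would also need the spacing and orientation from its DICOM headers.

```python
import numpy as np

def orthogonal_slices(volume: np.ndarray, z: int, y: int, x: int):
    """Return the top-view, front-view and left-view sections through voxel (z, y, x)."""
    axial    = volume[z, :, :]   # top-view section
    coronal  = volume[:, y, :]   # front-view section
    sagittal = volume[:, :, x]   # left-view section
    return axial, coronal, sagittal

# Sections through the centre of a dummy 256^3 volume.
vol = np.zeros((256, 256, 256), dtype=np.int16)
axial, coronal, sagittal = orthogonal_slices(vol, 128, 128, 128)
print(axial.shape, coronal.shape, sagittal.shape)   # (256, 256) each
```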
When the preoperative image is registered to the lobar organs of the live body, an image map of the preoperative image within the organs is obtained; the organs in the image map are then accurately segmented to obtain a real-time image map of the organs. Here the organ is the lung lobe, so the lobar structure is segmented accurately, yielding real-time image maps of tissue regions such as the lesion target area, lung nodules, blood vessels and lobes.
After the preoperative lobar image is obtained, it is registered to the live lobar structure of the patient, so that the three-dimensional reconstructed visualization model corresponds one-to-one with every component of the lobes, reducing the error of the preoperative image from which the model is built.
Step two: establish a three-dimensional rectangular coordinate system based on the segmentation result.
After the three-dimensional reconstructed visualization model of the lobar tissue region is established, a three-dimensional rectangular coordinate system is set up on that basis. Illustratively, the coordinate system is established with the midpoint of the line connecting the patient's two lung lobes as its origin, giving the coordinate data of every part of the lobes. Taking that midpoint as the origin is only one example; the system could, for instance, equally be established with the golden-section point of the whole live body as the origin.
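As a minimal sketch of this step, assuming the centroids of the two lobes are already known in scanner coordinates, the origin is simply the midpoint of the line joining them; the axis directions chosen below are illustrative stand-ins, not prescribed by the embodiment.

```python
import numpy as np

def build_frame(left_lobe: np.ndarray, right_lobe: np.ndarray):
    """Rectangular frame with origin at the midpoint of the lobe-connecting line."""
    origin = (left_lobe + right_lobe) / 2.0
    x_axis = right_lobe - left_lobe
    x_axis /= np.linalg.norm(x_axis)              # left-right direction
    z_hint = np.array([0.0, 0.0, 1.0])            # assumed head-foot direction
    y_axis = np.cross(z_hint, x_axis)
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)             # completes a right-handed frame
    return origin, np.stack([x_axis, y_axis, z_axis])

def to_frame(point, origin, axes):
    """Express a scanner-space point in the lobe-centred frame."""
    return axes @ (np.asarray(point, dtype=float) - origin)

origin, axes = build_frame(np.array([-80.0, 0.0, 0.0]), np.array([80.0, 0.0, 0.0]))
print(to_frame([0.0, 10.0, 5.0], origin, axes))   # [ 0. 10.  5.]
```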
The three-dimensional reconstructed visualization model is built by volume-rendering and surface-rendering methods. Volume rendering is a technique for generating a two-dimensional on-screen image from three-dimensional data: the CT or magnetic resonance equipment sections the lobar structure into multiple planes, and the sectioned lobar data are assembled regularly according to their position and angle information.
The real-time lobar image is restored to a three-dimensional model, from which three-dimensional lobar data such as shape, size and curvature are obtained. The model is sectioned repeatedly to obtain multiple section views of the lobes. The section planes comprise a front-view section, a left-view section and a top-view section; with sections in these three directions the shadow image of the lobes can be interpreted comprehensively.
Surface rendering extracts the surface information of the lobes and applies hidden-surface removal and rendering under illumination and shading models, yielding a three-dimensional display of the lobes embodied chiefly in boundary contours and curved surfaces.
From the three-dimensional lobar data and the multiple section views, that is, by the volume-rendering and surface-rendering methods, detailed information about the surface and interior of the lobes is obtained and the three-dimensional rectangular coordinate system is established. With the midpoint of the line connecting the two lobes as the origin, the three-dimensional lobar data determine the exact position and orientation of every lobar part in the coordinate system, and the multiple section views fix the coordinate system precisely.
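For the surface-rendering step, a minimal sketch follows under the assumption that a binary lobe mask is already available; it uses the marching-cubes routine from scikit-image, which the patent itself does not name, purely to show how a boundary surface can be extracted from segmented data.

```python
import numpy as np
from skimage import measure

# Dummy segmented lobe: a binary mask standing in for a real segmentation.
mask = np.zeros((64, 64, 64), dtype=np.float32)
mask[16:48, 16:48, 16:48] = 1.0

# Extract the boundary surface; 1 mm isotropic spacing is an assumption.
verts, faces, normals, values = measure.marching_cubes(
    mask, level=0.5, spacing=(1.0, 1.0, 1.0))
print(f"{len(verts)} surface vertices, {len(faces)} triangles")
```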
Step three: acquire the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system.
The first extracorporeal sensor is attached to the front end of the first airway endoscope camera; the camera sits inside the first puncture needle, and the needle is mounted on the mechanical arm. The arm performs the first percutaneous lung puncture with the first camera carrying the first sensor through the first needle, embedding the first sensor near the surgical site to form a respiratory gate. The surgical site is a lesion target area or a lung nodule.
The real-time coordinates of the first extracorporeal sensor during the percutaneous lung puncture are acquired and form a first path. At the same time, the three-dimensional spatial transformation of the mechanical arm during the first puncture is obtained, together with the first image captured by the first camera during the puncture.
The coordinates of the first image are bound to the coordinates of the mechanical arm by a camera-calibration method, that is, the coordinates of the centre point of the first image are converted into coordinates compatible with those of the arm.
The first image lies in the three-dimensional rectangular coordinate system, and since that system is established from existing standard data, the orientation coordinates of the first image can be determined directly.
The second extracorporeal sensor is attached to the front end of the second airway endoscope camera.
The second percutaneous lung puncture is performed with the second camera carrying the second sensor through the second puncture needle; on the basis of the first percutaneous puncture, the positioning of the first sensor and the respiratory gating, the second sensor is positioned precisely at the surgical site.
The real-time coordinates of the second extracorporeal sensor during the percutaneous lung puncture are acquired and form a second path.
At the same time, the three-dimensional spatial transformation of the mechanical arm during the second puncture is obtained, together with the second image captured by the second camera during the puncture.
Likewise, the second image lies in the three-dimensional rectangular coordinate system established from existing standard data, so its orientation coordinates can be determined directly. Illustratively, the first and second images may be medical electronic endoscope video images, ultrasound images, cone-beam CT images and the like. The extracorporeal sensor may be an electromagnetic, optical or laser sensor with motion-tracking and positioning capability. The sensor and the puncture needle (or surgical tool) may be built as an integrated sensor-positioning puncture-needle tool, that is, a tool that is at once a positioning sensor and a puncture needle.
The actual coordinates and travel of the first and second images in the three-dimensional rectangular coordinate system are bound by the camera-calibration method. The actual coordinates and motion of the mechanical arm in the coordinate system, that is, the three-dimensional transformation between the arm and the tip of the puncture needle, are bound by a hand-eye calibration method. When the airway endoscope camera captures the first image (or the second image), the coordinates of the image's centre point are obtained and converted into arm coordinates through this relationship; since the extracorporeal sensor sits at the centre of the image captured by the camera, the arm can reach the designated position from these coordinates and place the sensor accurately.
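A minimal sketch of this binding follows, assuming the hand-eye transform between the arm flange and the camera is already calibrated; the point seen at the image centre, where the sensor sits, is carried through the transform chain into the arm's base frame. All matrices and the assumed depth are illustrative placeholders, not calibrated values.

```python
import numpy as np

def homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Assemble a 4x4 rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T_base_ee = homogeneous(np.eye(3), np.array([100.0, 0.0, 300.0]))  # arm pose (from kinematics)
T_ee_cam  = homogeneous(np.eye(3), np.array([0.0, 0.0, 50.0]))     # hand-eye calibration result

p_cam  = np.array([0.0, 0.0, 120.0, 1.0])    # image-centre point in the camera frame (assumed depth)
p_base = T_base_ee @ T_ee_cam @ p_cam        # the same point in the robot base frame
print(p_base[:3])                            # [100.   0. 470.]
```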
Step four: convert the real-time coordinates of the extracorporeal sensor into real-time positioning coordinates inside the body.
From step three, the first and second paths formed by the extracorporeal sensors during the two percutaneous lung punctures are available. The first path, formed by the real-time coordinates of the first sensor, is mapped onto the live body to obtain a first real-time path within it; the second path, formed by the real-time coordinates of the second sensor, is mapped likewise to obtain a second real-time path within it.
The essence of the first real-time path is that the first extracorporeal sensor is embedded near the surgical site, and the first puncture needle is inserted into place in a single percutaneous puncture without repeated adjustment of its position. During the first puncture, every real-time coordinate traversed by the first sensor is collected, and the three-dimensional spatial transformation of the mechanical arm is determined from these coordinates, so that the arm's motion is adjusted and the arm carries the first sensor, via the first needle, to the vicinity of the surgical site.
With the first extracorporeal sensor embedded in the lung lobe, real-time positioning is implemented and the respiratory motion of the lobe is tracked continuously, achieving respiratory gating and preparing for the second puncture. Respiratory gating localizes the surgical site: breathing displaces the vital organs, so the exact position of the site must be delineated accurately.
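A minimal sketch of such gating with the single embedded sensor follows: readings are accepted only while the sensor lies within a tolerance of a reference position captured at a chosen breathing phase. The 2 mm tolerance and the end-exhale reference are assumptions for illustration.

```python
import numpy as np

TOLERANCE_MM = 2.0   # assumed gating tolerance

def gate_open(sensor_pos: np.ndarray, reference_pos: np.ndarray) -> bool:
    """True when breathing has returned the lobe to the reference phase."""
    return np.linalg.norm(sensor_pos - reference_pos) <= TOLERANCE_MM

reference = np.array([12.0, -35.0, 80.0])   # sensor position captured at end-exhale
for sample in ([12.5, -35.0, 80.2], [15.0, -30.0, 84.0]):
    state = "accept" if gate_open(np.array(sample), reference) else "reject"
    print(sample, "->", state)
```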
The essence of the second real-time path is that the second extracorporeal sensor is placed inside the second puncture needle, and the second percutaneous lung puncture is performed within the three-dimensional reconstructed visualization model under the guidance of the preoperative image and of the needle carrying the first sensor embedded in the lobe, so that the second sensor is placed at the surgical site (the lesion target area). The second puncture drives the second needle directly into the surgical site, achieving real-time precise positioning under respiratory gating and hence precise diagnosis and treatment.
Step five: judge whether the positioning coordinates lie within the preoperative image and, if so, generate updated positioning coordinates.
The registration of the preoperative image to the live body is implemented by a fiducial-free spatial registration method, as follows. Every real-time coordinate traversed by the first and second extracorporeal sensors on entering the lobar region of the live body is collected to form a first coordinate set. A second coordinate set, that of the surgical site, is determined from the accurately delineated position of the site, with real-time precise positioning performed under respiratory gating; a third coordinate set, that of the whole lobe, is determined as well, the whole lobe containing the surgical site.
Any part of the second coordinate set that does not appear in the third coordinate set is discarded. The filtered second set is then compared with the first set to obtain the coordinate set of their overlap.
It is judged whether the coordinate points of the first and second real-time paths lie within the surgical site of the preoperative image, that is, whether any point of the first coordinate set lies in the second. If so, the updated positioning coordinates within the surgical site, namely the overlap coordinate set, are acquired; if not, all real-time coordinates of the first and second real-time paths, that is, all points of the first coordinate set, are discarded.
Concretely, the radius D of the second coordinate set (after comparison with the third set) is computed, and the distance X between a point of the first set and the centre of the second set is computed. If X is larger than D, the point lies outside the surgical site; if X is smaller than D, it lies inside, and the updated positioning coordinates within the site are thereby obtained.
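The following minimal sketch puts this check together: the target (second) set is first restricted to points that also appear in the whole-lobe (third) set, then a path point from the first set is tested with the radius D of the target set and its distance X to the target centre. All coordinates are illustrative, and taking the target centre as the set's mean is an assumption of the sketch.

```python
import numpy as np

def filter_target(target_set: np.ndarray, lobe_set: set) -> np.ndarray:
    """Discard target points that do not appear in the whole-lobe coordinate set."""
    return np.array([p for p in target_set if tuple(p) in lobe_set])

def within_target(path_point, target_set: np.ndarray) -> bool:
    """True when the distance X from the point to the target centre is below radius D."""
    center = target_set.mean(axis=0)
    D = np.linalg.norm(target_set - center, axis=1).max()
    X = np.linalg.norm(np.asarray(path_point, dtype=float) - center)
    return X < D

lobe = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
target = filter_target(np.array([[4, 4, 4], [5, 5, 5], [6, 6, 6], [20, 20, 20]]), lobe)
print(within_target([5, 4, 5], target))   # True: this path point lies in the target
```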
The exact position of the surgical site is thus confirmed from the two percutaneous lung punctures. When the extracorporeal sensor moves within the surgical-site region of the preoperative image, its updated positioning coordinates at the site are obtained. The first and second images captured by the airway endoscope camera while the sensor moves are real-time images of the organ, and the centre point of each real-time image is an updated positioning coordinate; the preoperative image is thereby matched to the real-time images captured by the camera, so that it corresponds completely to the organ.
Step six: optimize the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image, that is, based on the three-dimensional spatial transformations of the mechanical arm over the two percutaneous lung punctures, the updated positioning coordinates and the preoperative image.
The procedure automatically regulated by the surgical navigation robot is as follows:
a surgical tool (containing an embedded in-vitro sensor puncture needle) is placed and fixed in a mechanical arm of the robot, and the motion of the mechanical arm is tracked in real time by a three-dimensional space transformation process between the mechanical arm and the tail end of the surgical tool through a robot hand-eye calibration algorithm and by means of obtaining a space transformation matrix. The transformation relation matrix of the real-time dynamic position of the mechanical arm, the preoperative image and the real-time human body space of the patient is obtained by using the transformation relation matrix obtained by three space registration fusion technologies of the preoperative image, the extracorporeal sensor and the respiratory tract endoscope camera (real-time human body space), so that the percutaneous lung puncture by an operation tool held by the mechanical arm is automatically guided and regulated.
Because the surgical navigation robot localizes the surgical site precisely, the angle and position of the surgical tool need not be adjusted under repeated CT scanning and imaging, sparing both doctor and patient the radiation. A fiducial-free spatial registration method is introduced, so the live body and the preoperative image are registered without attaching a positioning sensor to the chest surface as a fiducial point. Respiratory motion can be measured without chest-surface sensor markers, reducing the cost of sensors; and the respiratory-gating technique based on a single extracorporeal sensor embedded in the lobe tracks the respiration-deformed lesion target area, greatly improving the positioning accuracy of the puncture needle.
On the basis of the above method, Fig. 2 shows a schematic view of the minimally invasive interventional surgery navigation robot system according to an embodiment of the invention. As shown in Fig. 2, the system comprises a robot mechanical-arm automatic control system, a patient operating table and a surgical console.
The robot mechanical-arm automatic control system comprises the surgical tools and the robot system.
The robot system comprises the mechanical arm holding the surgical tool and its automatic control system.
The patient operating table comprises the operating bed and the airway endoscope system equipment.
The surgical console comprises the surgical navigation system (the extracorporeal sensor and its control unit), surgical three-dimensional visualization software, several medical display screens, a high-performance image and graphics workstation, navigation three-dimensional visualization software, a surgical push handle and a trackball handle for operation control, a multifunctional digital video capture card and a converter.
The extracorporeal sensor positioning equipment comprises the extracorporeal sensor and a control unit, the control unit comprising a signal transmitter and a signal receiver.
The embodiment of the invention provides the working procedure of the minimally invasive interventional surgery navigation robot: the mechanical arm is operated to guide the navigation robot in localizing and treating the lobar lesion target area, for example the robot automatically regulating the puncture performed by a surgical tool. During surgical guidance, the living body lies on the operating bed, and the position of the surgical tool is obtained in real time as the interventional tool moves through the internal space of the body. The surgical tool may be a medical endoscope (an airway endoscope camera), a puncture needle, an ablation needle and the like. Combined with the extracorporeal sensor positioning equipment, the tool acquires its position information within the body's organs. The extracorporeal sensor positioning equipment comprises the extracorporeal sensor, the control unit and so on, and is combined with the surgical tool by attaching its sensor to the front end of the airway endoscope camera. Once the position information is acquired, the tool's position is converted by the navigation three-dimensional visualization software into the three-dimensional image of the target organ and displayed on the medical display screens to guide surgical positioning. The screens display images prepared by the high-performance image and graphics workstation, which are captured from the airway endoscope camera through the multifunctional digital video capture card and passed through the converter. In essence, under the control of the robot system, a surgical tool is driven to puncture the lobar lesion target area of the patient on the operating table, and the target area is localized through the surgical console and the extracorporeal sensor positioning equipment.
Fig. 3 shows a schematic view of the three-dimensional visualization information and software interface of the minimally invasive interventional procedure according to an embodiment of the invention. As shown in Fig. 3, after the preoperative image processing and the three-dimensional reconstructed visualization model are completed, the embodiment provides medical software that reconstructs and visualizes each segmented lobar organ in three dimensions, for example software built around localizing the surgical site (the lesion target area); its main menu carries the various functional modules, distributed over five windows.
Window 1: the three-dimensional preoperative image sections (three directions); every component of the lobes is fully revealed by the three orientations (i.e. the three views).
Window 2: the three-dimensional visual positional relations, in the space of the three-dimensional reconstructed visualization model, among the shape of the three-dimensional puncture-needle surgical tool, the dynamic position of its front end and the lesion target area.
Window 3: the three-dimensional reconstructed visualization model (e.g. of the lesion target area, lung nodules, blood vessels and lung lobes).
Window 4: in the extracorporeal sensor positioning equipment, the dynamic position of the needle carrying the embedded first extracorporeal sensor, the three-dimensional lesion target area, and the dynamic position of the needle carrying the embedded second extracorporeal sensor.
Window 5: the parameters of the software's functional modules, displayed and adjustable on the interface.
Through these different windows, the precise localization of the surgical site by the extracorporeal sensor is represented: the preoperative image, the three-dimensional reconstructed visualization model, the puncture needles, the surgical tools and the embedding of the sensor at the surgical site are all displayed in detail, the relevant parameters of the positioning process are itemized one by one, all physical data parameters can be shown, and the positioning process can be adjusted according to them.
The invention also relates to a positioning device for the minimally invasive interventional surgery navigation robot, which comprises a segmentation module: for registering the preoperative image to the live patient body, segmenting the organs within it and establishing a three-dimensional rectangular coordinate system based on the segmentation result.
An acquisition module: for acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system.
A positioning module: for converting the real-time coordinates of the extracorporeal sensor into positioning coordinates inside the live body, judging whether the positioning coordinates lie within the preoperative image, generating updated positioning coordinates if so, and optimizing the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image.
The segmentation module comprises:
a registration unit: for mapping the preoperative image onto the organs of the live body to obtain an image map of the preoperative image within the body.
A segmentation unit: for accurately segmenting the organs in the image map to obtain a real-time image map of the organs.
By mutually binding the preoperative image, the extracorporeal sensor and the airway endoscope camera, the invention makes the preoperative image correspond accurately to the live patient body. By sectioning the preoperative image in three directions and building a three-dimensional reconstructed visualization model from it, every part of the lung lobes is fully displayed. Through the two percutaneous lung punctures, the position of the surgical site can be determined accurately, facilitating treatment.
Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in those embodiments may still be modified, or some of their technical features equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the invention.

Claims (10)

1. A positioning method for a minimally invasive interventional surgery navigation robot, characterized in that the positioning method comprises the following steps:
registering the preoperative image to the live patient body and segmenting the organs within it;
establishing a three-dimensional rectangular coordinate system based on the segmentation result;
acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system;
converting the real-time coordinates of the extracorporeal sensor into real-time positioning coordinates inside the body;
judging whether the positioning coordinates lie within the preoperative image and, if so, generating updated positioning coordinates;
and optimizing the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image.
2. The positioning method of claim 1, wherein registering the preoperative image to the live patient body and segmenting the organs within it comprises:
mapping the preoperative image onto the organs of the live body to obtain an image map of the preoperative image within those organs;
and accurately segmenting the organs in the image map to obtain a real-time image map of the organs.
3. The positioning method of claim 2, wherein establishing the three-dimensional rectangular coordinate system comprises:
restoring the real-time image map of the organ to a three-dimensional model;
obtaining three-dimensional data of the organ from the three-dimensional model;
sectioning the three-dimensional model repeatedly to obtain multiple section images of the organ;
and establishing the three-dimensional rectangular coordinate system from the three-dimensional data of the organ and the multiple section images.
4. The positioning method of claim 3, wherein acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor in the three-dimensional rectangular coordinate system comprises:
attaching a first extracorporeal sensor to the front end of a first airway endoscope camera;
the mechanical arm performing, through a first puncture needle, a first percutaneous lung puncture with the first airway endoscope camera carrying the first extracorporeal sensor, and embedding the first extracorporeal sensor near the surgical site to form a respiratory gate;
acquiring the real-time coordinates of the first extracorporeal sensor during the percutaneous lung puncture to form a first path;
and simultaneously obtaining the three-dimensional spatial transformation of the mechanical arm during the first puncture, together with the first image captured by the first airway endoscope camera during the puncture.
5. The positioning method of claim 4, wherein acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor in the three-dimensional rectangular coordinate system further comprises:
attaching a second extracorporeal sensor to the front end of a second airway endoscope camera;
performing, through a second puncture needle, a second percutaneous lung puncture with the second airway endoscope camera carrying the second extracorporeal sensor, and positioning the second extracorporeal sensor precisely at the surgical site on the basis of the first percutaneous puncture, the positioning of the first sensor and the respiratory gating;
acquiring the real-time coordinates of the second extracorporeal sensor during the percutaneous lung puncture to form a second path;
and simultaneously obtaining the three-dimensional spatial transformation of the mechanical arm during the second puncture, together with the second image captured by the second airway endoscope camera during the puncture.
6. The positioning method of claim 5, wherein converting the real-time coordinates of the extracorporeal sensor into positioning coordinates inside the live body comprises:
mapping the first path, formed by the real-time coordinates of the first extracorporeal sensor, onto the live body to obtain a first real-time path within it;
and mapping the second path, formed by the real-time coordinates of the second extracorporeal sensor, onto the live body to obtain a second real-time path within it.
7. The positioning method of claim 6, wherein judging whether the positioning coordinates lie within the preoperative image comprises:
judging whether the coordinate points of the first and second real-time paths lie within the surgical site of the preoperative image;
if so, acquiring the updated positioning coordinates within the surgical site;
and if not, discarding all real-time coordinates of the first and second real-time paths.
8. The positioning method of claim 7, wherein optimizing the surgical navigation robot's positioning of the surgical site comprises:
optimizing the positioning based on the three-dimensional spatial transformations of the mechanical arm over the two percutaneous lung punctures, the updated positioning coordinates and the preoperative image.
9. A positioning device for a minimally invasive interventional surgery navigation robot, characterized in that the positioning device comprises:
a segmentation module: for registering the preoperative image to the live patient body, segmenting the organs within it and establishing a three-dimensional rectangular coordinate system based on the segmentation result;
an acquisition module: for acquiring the real-time coordinates of the airway endoscope camera and of the extracorporeal sensor as they move in the three-dimensional rectangular coordinate system;
and a positioning module: for converting the real-time coordinates of the extracorporeal sensor into positioning coordinates inside the live body, judging whether the positioning coordinates lie within the preoperative image, generating updated positioning coordinates if so, and optimizing the surgical navigation robot's positioning of the surgical site based on the updated positioning coordinates and the preoperative image.
10. The positioning device of claim 9, wherein the segmentation module comprises:
a registration unit: for mapping the preoperative image onto the organs of the live body to obtain an image map of the preoperative image within the body;
and a segmentation unit: for accurately segmenting the organs in the image map to obtain a real-time image map of the organs.
CN202010642175.8A (priority and filing date 2020-07-06) Positioning method and device for a minimally invasive interventional surgery navigation robot. Status: Active. Granted as CN111887988B (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010642175.8A 2020-07-06 2020-07-06 Positioning method and device for a minimally invasive interventional surgery navigation robot (granted as CN111887988B)


Publications (2)

Publication Number Publication Date
CN111887988A 2020-11-06
CN111887988B 2022-06-10

Family

ID=73192992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010642175.8A Active CN111887988B (en) Positioning method and device for a minimally invasive interventional surgery navigation robot

Country Status (1)

Country Link
CN (1) CN111887988B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103479430A (en) * 2013-09-22 2014-01-01 江苏美伦影像***有限公司 Image guiding intervention operation navigation system
CN105286988A (en) * 2015-10-12 2016-02-03 北京工业大学 CT image-guided liver tumor thermal ablation needle location and navigation system
CN106063726A (en) * 2016-05-24 2016-11-02 中国科学院苏州生物医学工程技术研究所 Puncture navigation system and air navigation aid thereof in real time
CN108464862A (en) * 2018-03-08 2018-08-31 艾瑞迈迪医疗科技(北京)有限公司 A kind of endoscope guiding operation navigation display method and device
US10536686B1 (en) * 2018-08-02 2020-01-14 Synaptive Medical (Barbados) Inc. Exoscope with enhanced depth of field imaging
CN110464459A (en) * 2019-07-10 2019-11-19 丽水市中心医院 Intervention plan navigation system and its air navigation aid based on CT-MRI fusion
CN110742691A (en) * 2019-10-21 2020-02-04 南开大学 Motion control method for flexible endoscope operation robot
CN110946654A (en) * 2019-12-23 2020-04-03 中国科学院合肥物质科学研究院 Bone surgery navigation system based on multimode image fusion

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112641514A (en) * 2020-12-17 2021-04-13 罗雄彪 Minimally invasive interventional navigation system and method
CN113057665A (en) * 2021-03-18 2021-07-02 上海卓昕医疗科技有限公司 Lung image three-dimensional imaging method and system
CN113057665B (en) * 2021-03-18 2022-03-18 上海卓昕医疗科技有限公司 Lung image three-dimensional imaging method and system
CN115153842A (en) * 2022-06-30 2022-10-11 常州朗合医疗器械有限公司 Navigation control method, device and system for double-arm robot and storage medium
CN115153842B (en) * 2022-06-30 2023-12-19 常州朗合医疗器械有限公司 Double-arm robot navigation control method, device, system and storage medium

Also Published As

Publication number Publication date
CN111887988B (en) 2022-06-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant