CN112641514A - Minimally invasive interventional navigation system and method - Google Patents
- Publication number: CN112641514A (application CN202011494377.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- A—HUMAN NECESSITIES; A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
Abstract
The invention relates to a minimally invasive interventional navigation system and method. The method comprises the following steps: S1: performing preoperative medical image processing and dividing a target region; S2: performing fully automatic online navigation initialization based on the divided target region to determine the direction of the two-dimensional virtual endoscope camera; S3: performing relative motion prediction based on the direction of the two-dimensional virtual endoscope camera; S4: determining the position and direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on a discriminant structure similarity function. The system comprises a planning module, a motion prediction module, and a positioning module, and is used together with a medical electronic endoscope system and an interventional imaging system to implement an accurate, integrated minimally invasive interventional diagnosis-and-treatment workflow and achieve accurate minimally invasive interventional positioning.
Description
Technical Field
The invention belongs to the technical field of surgical navigation, and particularly relates to a minimally invasive interventional navigation system and method.
Background
Minimally invasive intervention is a surgical technique in which, under the joint guidance of preoperative medical images (such as CT) and intraoperative imaging (such as medical electronic endoscopy, interventional magnetic resonance, and ultrasound imaging), instruments or drugs are introduced into pathological tissue with minimal trauma (only a puncture-needle hole, with no skin incision) for diagnosis or treatment. It is characterized by the absence of open surgery, small trauma, quick recovery, and good outcomes.
Existing minimally invasive interventional techniques are widely applied in many surgical procedures: minimally invasive surgical instruments, such as imaging tools like medical electronic endoscopes, enter a specific region of an internal organ through a natural orifice (e.g., the oral or nasal cavity) or a small incision at a certain site to perform diagnosis or treatment. Currently, such procedures rely mainly on single-modality intraoperative image guidance (e.g., interventional CT, ultrasound, or magnetic resonance imaging), combined with the surgeon's experience, to position the surgical instrument relative to the patient's target region. The main drawbacks of this guidance technique are: (1) surgical instruments are difficult to operate and hard to control and position precisely; (2) puncture of the tumor region is effectively blind, and localization of the tumor region is inaccurate; (3) single-modality intraoperative imaging provides limited information about the human anatomy, image quality is low, and artifacts may arise, increasing the difficulty of instrument positioning; (4) preoperative medical images and intraoperative multimodal information cannot be fused automatically; (5) stereoscopic virtual-reality visualization for navigation is unavailable. These drawbacks make minimally invasive procedures imprecise; in particular, small or tiny tumors (5-10 mm) in the patient's body cannot be accurately reached and localized. They increase the risk of minimally invasive interventional procedures and reduce the surgical success rate.
Therefore, during minimally invasive interventional diagnosis or treatment, the inability to automatically fuse preoperative medical images with intraoperative multimodal information and to provide three-dimensional digital virtual-reality visualization leads to inaccurate positioning and difficult control of surgical tools; solving these problems has become increasingly urgent.
Disclosure of Invention
In order to solve the above problems, the invention provides a minimally invasive interventional navigation system and method.
a method of minimally invasive interventional navigation, the method comprising the steps of:
S1: performing preoperative medical image processing and dividing a target region;
S2: performing fully automatic online navigation initialization based on the divided target region to determine the direction of the two-dimensional virtual endoscope camera;
S3: performing relative motion prediction based on the direction of the two-dimensional virtual endoscope camera;
S4: determining the position and direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on a discriminant structure similarity function.
Further,
the preoperative medical image processing method for dividing a target area comprises the following steps:
registering the preoperative medical image to the corresponding organ in the patient's body to obtain an image map of the preoperative medical image within the organ;
accurately extracting and segmenting the human organ from the image map to obtain a two-dimensional virtual endoscope image of the organ;
performing three-dimensional reconstruction and digital visualization on the two-dimensional virtual endoscope image to extract and segment the target region, and calibrating the target region preoperatively.
Further,
the full-automatic online navigation initialization comprises the following steps:
(1) obtaining organ structure bifurcation center line information through the preoperative medical image processing, and determining direction information of a two-dimensional virtual endoscope camera;
the first bifurcation point, namely the intersection point of three center lines of a main center line, a right main center line and a left main center line, defines the sight line direction of the two-dimensional virtual endoscope camera, and is set as the Z-axis direction which is the main center line direction; the vector product of the right main central line and the left main central line is the Y-axis direction of the two-dimensional virtual endoscopic camera, and the cross direction of the Z-axis direction and the Y-axis direction is the X-axis direction of the two-dimensional virtual endoscopic camera;
(2) starting from the starting point of the main centerline, 5 three-dimensional position points are generated along the main-centerline direction:
the two-dimensional virtual endoscope camera is rotated about the Z axis; every 30° of rotation generates a new direction, giving 12 camera directions in total, which combined with the 5 generated positions yields 60 poses in total;
(3) using the 60 pose parameters, 60 two-dimensional virtual endoscope camera images are generated by the volume rendering technique;
(4) the similarity between the two-dimensional real endoscope camera image and each of the 60 two-dimensional virtual endoscope camera images is calculated, and the virtual image with the maximum similarity is determined; the pose corresponding to that virtual image is taken as the pose of the two-dimensional real endoscope camera.
Further,
the method for predicting the relative motion comprises the following steps:
the method comprises a camera epipolar line geometry method, an analysis method, a motion recovery structure, a filtering method and a deep learning method.
Further,
the method of S4 includes:
1) extracting an area containing bifurcation and fold discriminant structure information on a two-dimensional real endoscope camera image by using an HSV color model;
2) defining a discriminant structure similarity function, i.e., the cost function of the optimization process; it is defined over the differences of pixel gray values, within the discriminant structure regions extracted in step 1), between the two-dimensional real endoscope camera image and the two-dimensional virtual endoscope image;
wherein, the two-dimensional real endoscope camera image of the ith frame image is RiAt RiThe M discriminant structure regions are extracted, and the discriminant structure information similarity function is defined as S (R)i,V(Pi)):
Wherein, V (Δ P)iPi-1) Is a two-dimensional virtual endoscope image based on a two-dimensional real endoscope camera pose delta PiPi-1Automatically generated by volume rendering techniques, Pi-1Is the camera pose, delta P, corresponding to the i-1 th frame imageiQ (x, y) represents a discriminant structure area with a pixel (x, y) as a center, wherein the pixel (x, y) is a position and pose variable; mu.s1And mu2Respectively a two-dimensional real endoscopic camera image RiAnd a two-dimensional virtual endoscopic image V (Δ P)iPi-1) Average pixel value, σ, of the upper discriminating structural region Q (x, y)12Is the correlation, σ, between the two-dimensional real endoscopic camera image area Q (x, y) and the two-dimensional virtual endoscopic image area Q (x, y)1、σ2Pixel value variances of a two-dimensional real endoscopic camera image area Q (x, y) and a two-dimensional virtual endoscopic image area Q (x, y), respectively; constant C16.5, constant C2=58.5;
3) establishing the optimization equation: the surgical-navigation pose-prediction optimization equation for the two-dimensional real endoscope camera is defined as maximizing the discriminant structure similarity function S(R_i, V(ΔP_i · P_{i-1})) with respect to ΔP_i.
4) pose initialization: the registration optimization iteration requires an initial value of ΔP_i; the relative motion parameter prediction result obtained by the relative motion prediction is used to initialize ΔP_i;
5) iterating until the equation converges: the Powell method or the Levenberg-Marquardt algorithm is introduced and iterated until convergence to obtain the optimal ΔP_i, thereby obtaining the current-frame camera pose prediction P_i = ΔP_i · P_{i-1};
6) proceeding to the next frame's camera pose prediction: steps 1)-5) are repeated until the surgical instruments reach the target region, while the surgical instruments, i.e., the electronic endoscope and the surgical catheter, are positioned in real time.
Further,
the method of S4 further includes a three-state respiratory motion navigation error compensation mechanism, the error compensation mechanism including:
collecting preoperative medical image data for three times in a maximum expiration state, a normal breath holding state and a maximum inspiration state of a patient;
interpolating the three preoperative medical image data sets to divide the human respiratory cycle into 12 states, which is equivalent to having 12 preoperative medical image data sets for the same patient;
generating 11 two-dimensional virtual endoscope camera images in the other 11 preoperative medical images through the camera pose information predicted in the normal breath holding state, and then calculating the similarity between the two-dimensional real endoscope camera image and the 11 two-dimensional virtual endoscope camera images;
finding out the preoperative medical image state corresponding to the maximum similarity, and calculating a transformation relation matrix between the state and the normal breath holding state;
determining an optimal camera pose:
the optimal camera pose is the predicted camera pose at the normal breath hold state.
The invention also provides a minimally invasive interventional navigation system, the system comprising:
a planning module: the method is used for preoperative medical image processing and marking out a target area;
a navigation initialization module: used to perform fully automatic online navigation initialization based on the divided target region to determine the direction of the two-dimensional virtual endoscope camera;
a motion prediction module: for relative motion prediction based on the direction of the two-dimensional virtual endoscopic camera;
a positioning module: the system is used for determining the position and the direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function.
Further,
the system further comprises:
a display device: used to display all preoperative and intraoperative navigation information on the surgical navigation software interface and to carry out surgical navigation;
a digital video adapter: used to connect the minimally invasive interventional surgical navigation system with other medical equipment systems;
a surgical cart: integrating the high-performance computer, the digital video adapter, and the navigator to facilitate the surgeon's operation;
a surgical catheter: used to carry the navigator's miniature positioning sensor, the optical biopsy probe of the confocal laser microendoscopy system, and the ablation needle of the target-region ablation treatment system.
Further,
the surgical catheter is a pre-bent catheter or an adjustable catheter.
Further,
the target-region ablation treatment system includes laser ablation, microwave ablation, radiofrequency ablation, and argon-helium cryoablation.
Further,
the system further comprises:
a medical electronic endoscope system: the medical electronic endoscope is used to examine whether tissue inside the patient's body is abnormal, and its working channel can carry the surgical catheter or other micro surgical instruments;
an interventional imaging system: used to confirm the relative position between the target region and the two-dimensional virtual visual image, while providing human-tissue motion and deformation information acquired in real time from the interventional images.
Further,
the medical electronic endoscope comprises a soft endoscope and a hard endoscope;
the interventional imaging system includes fluoroscopy, B-mode ultrasound, Cone-Beam CT and interventional magnetic resonance.
The invention has the following beneficial effects:
(1) the minimally invasive interventional hybrid surgical navigation system uses surgical planning software and surgical navigation software to realize three-dimensional visualization of preoperative image information and real-time intraoperative navigation visualization;
(2) a novel minimally invasive interventional hybrid navigation scheme is proposed, fusing the advantages of visual motion prediction and multimodal image registration to implement an accurate hybrid navigation strategy for minimally invasive intervention;
(3) in the surgical navigation technique based on registering the two-dimensional visual image with the three-dimensional preoperative image, a fully automatic online navigation initialization method is adopted for the endoscope camera pose corresponding to the first video frame;
(4) a brand-new endoscope camera pose prediction method based on discriminant-structure-similarity-enhanced registration directly registers the three-dimensional preoperative image with the two-dimensional endoscope video image, without any external positioning sensor, thereby determining the position and direction of the two-dimensional endoscope camera in the three-dimensional preoperative image space;
(5) a respiratory-motion navigation error compensation mechanism based on three-state preoperative image interpolation is integrated into the surgical navigation software, providing the surgeon with three-dimensional virtual-reality visualization, real-time intraoperative navigation, and related functions, and guiding the surgeon to operate surgical instruments intuitively, conveniently, and efficiently;
(6) the accurate minimally invasive interventional surgical navigation system is used together with a medical electronic endoscope system and an interventional imaging system to implement an accurate, integrated minimally invasive interventional diagnosis-and-treatment workflow; it reduces the influence of patient deformation such as respiratory motion on the navigation process, accurately tracks the relative positions of the surgical instruments and the tumor lesion throughout the workflow, reduces operating time, surgeon burden, and surgical risk, and improves surgical efficiency and success rate;
(7) using the video image information from the endoscope system and introducing the constraint of the human-lumen-structure centerline, the robustness of the differential-evolution particle-filter stochastic optimization algorithm is evaluated, and the navigation error caused by the patient's body motion and deformation is compensated.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 shows a flow chart of a minimally invasive interventional navigation method according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention is explained by way of example, taking a tumor as the target. The invention provides a minimally invasive interventional navigation method; fig. 1 shows a flowchart of the minimally invasive interventional navigation method according to an embodiment of the invention. As shown in fig. 1, the method comprises the following steps:
S1: performing preoperative medical image processing and dividing a target region;
S2: performing fully automatic online navigation initialization based on the divided target region to determine the direction of the two-dimensional virtual endoscope camera;
S3: performing relative motion prediction based on the direction of the two-dimensional virtual endoscope camera;
S4: determining the position and direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on a discriminant structure similarity function.
Further,
the preoperative medical image processing method for dividing a target area comprises the following steps:
registering the preoperative medical image to the corresponding organ in the patient's body to obtain an image map of the preoperative medical image within the organ;
accurately extracting and segmenting the human organ from the image map to obtain a two-dimensional virtual endoscope image of the organ;
performing three-dimensional reconstruction and digital visualization on the two-dimensional virtual endoscope image to extract and segment the target region, and calibrating the target region preoperatively.
Specifically,
preoperative medical image processing mainly extracts and segments the centerlines of all branches of the tubular organs in the image, the human anatomical organs, and the target region; the target region is extracted and segmented through three-dimensional reconstruction and digital visualization, and is calibrated preoperatively;
the target-region extraction and segmentation method is a fully automatic preoperative image segmentation method based on a deep-learning convolutional neural network; extraction and calibration of the target region, i.e., the tumor lesion and the deformation of its surrounding blood vessels, is accurate, and the centerline information of the human lumen structure is obtained at the same time.
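The patent specifies a deep-learning CNN for this extraction step. As a minimal illustrative stand-in (not the patented method), classical thresholded region growing sketches how a lumen region can be grown from a seed; the image layout, threshold bounds, and 4-connectivity below are assumptions for the sketch:

```python
from collections import deque

def region_grow(image, seed, low, high):
    """Grow a region from a seed over 4-connected pixels whose
    intensity lies in [low, high]; a classical stand-in for the
    lumen-extraction step (the patent itself uses a CNN)."""
    h, w = len(image), len(image[0])
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen
                    and low <= image[nr][nc] <= high):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

On a real CT volume the same idea runs in 3-D with 6-connected voxels, and the grown lumen mask is what centerline extraction would then operate on.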
Further,
based on the surgical navigation technique of registering the two-dimensional virtual endoscope image with the three-dimensional preoperative image, the endoscope camera pose (6-degree-of-freedom position and direction information) corresponding to the first video frame is initialized;
the full-automatic online navigation initialization comprises the following steps:
(1) obtaining organ structure bifurcation center line information through the preoperative medical image processing, and determining direction information of a two-dimensional virtual endoscope camera;
the first bifurcation point, namely the intersection point of three center lines of a main center line, a right main center line and a left main center line, defines the sight line direction of the two-dimensional virtual endoscope camera, and is set as the Z-axis direction which is the main center line direction; the vector product of the right main central line and the left main central line is the Y-axis direction of the two-dimensional virtual endoscopic camera, and the cross direction of the Z-axis direction and the Y-axis direction is the X-axis direction of the two-dimensional virtual endoscopic camera;
(2) starting from the starting point of the main centerline, 5 three-dimensional position points are generated along the main-centerline direction:
where position point = starting point + coefficient × (end point of the main centerline − starting point of the main centerline), along the main-centerline direction; the coefficients are 0.5, 0.6, 0.7, 0.8, and 0.9;
the two-dimensional virtual endoscope camera is rotated about the Z axis; every 30° of rotation generates a new direction, giving 12 camera directions in total, which combined with the 5 generated positions yields 60 poses in total;
(3) using the 60 pose parameters, 60 two-dimensional virtual endoscope camera images are generated by the volume rendering technique;
(4) the similarity between the two-dimensional real endoscope camera image and each of the 60 two-dimensional virtual endoscope camera images is calculated, and the virtual image with the maximum similarity is determined; the pose corresponding to that virtual image is taken as the pose of the two-dimensional real endoscope camera.
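The initialization steps (2)-(4) can be sketched as follows; `render` and `similarity` are placeholders for the patent's volume-rendering and image-similarity steps and are supplied by the caller in this sketch:

```python
import math

def init_poses(start, end):
    """Enumerate the 5 x 12 = 60 candidate initial poses:
    5 positions at start + k * (end - start) along the main centerline,
    for k in 0.5 .. 0.9, and 12 camera directions, one every 30 degrees
    of rotation about the Z axis (the main-centerline direction)."""
    positions = [tuple(s + k * (e - s) for s, e in zip(start, end))
                 for k in (0.5, 0.6, 0.7, 0.8, 0.9)]
    angles = [math.radians(30 * j) for j in range(12)]
    return [(p, a) for p in positions for a in angles]

def best_pose(real_image, poses, render, similarity):
    """Step (4): pick the pose whose rendered virtual image is most
    similar to the real endoscope frame."""
    return max(poses, key=lambda pose: similarity(real_image, render(pose)))
```

For example, with a dummy renderer that just reports the pose angle, `best_pose` recovers the candidate pose closest to the observed frame.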
Further,
there is relative motion (6-degree-of-freedom position and direction) between consecutive video frames (the previous frame and the current frame) of the electronic endoscope camera; predicting this inter-frame relative motion improves the performance of the subsequent optimization;
the method for predicting the relative motion comprises the following steps:
the camera epipolar geometry method is also called a epipolar geometry method, an analysis method, a motion recovery structure, a filtering method (such as Kalman filtering and particle filtering), and a deep learning method. The following steps of calculating the relative motion parameters between frames by using the epipolar line geometry method as an example are as follows:
feature points are detected on the two frames (previous and current) using a feature point extraction algorithm (such as SIFT, Affine-SIFT, or RootSIFT), yielding two point sets; matching points between the two sets are found with a nearest-neighbor matching method;
using the matched feature points, an optimization algorithm (such as least squares) is introduced to solve the epipolar constraint equation, yielding the camera fundamental matrix;
the camera essential matrix is then computed from the camera intrinsic parameter matrix and the fundamental matrix; the parameters contained in the essential matrix are the inter-frame relative motion parameters (i.e., 6-degree-of-freedom relative position and direction information).
Further,
the essence of surgical navigation is to determine the pose parameters of the electronic endoscope camera in the spatial coordinates of the three-dimensional preoperative image, i.e., the 6-degree-of-freedom position and direction information of the camera;
the method for determining the position and the direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function comprises the following steps:
1) extracting an area containing bifurcation and fold discriminant structure information on a two-dimensional real endoscope camera image by using an HSV color model;
2) defining a discriminant structure similarity function, i.e., the cost function of the optimization process; it is defined over the differences of pixel gray values, within the discriminant structure regions extracted in step 1), between the two-dimensional real endoscope camera image and the two-dimensional virtual endoscope image;
wherein, the two-dimensional real endoscope camera image of the ith frame image is RiAt RiThe M discriminant structure regions are extracted, and the discriminant structure information similarity function is defined as S (R)i,V(Pi)):
Wherein, V (Δ P)iPi-1) Is a two-dimensional virtual endoscopeThe mirror image is based on the two-dimensional real endoscope camera pose delta PiPi-1Automatically generated by volume rendering techniques, Pi-1Is the camera pose, delta P, corresponding to the i-1 th frame imageiQ (x, y) represents a discriminant structure area with a pixel (x, y) as a center, wherein the pixel (x, y) is a position and pose variable; mu.s1And mu2Respectively a two-dimensional real endoscopic camera image RiAnd a two-dimensional virtual endoscopic image V (Δ P)iPi-1) Average pixel value, σ, of the upper discriminating structural region Q (x, y)12Is the correlation, σ, between the two-dimensional real endoscopic camera image area Q (x, y) and the two-dimensional virtual endoscopic image area Q (x, y)1、σ2Pixel value variances of a two-dimensional real endoscopic camera image area Q (x, y) and a two-dimensional virtual endoscopic image area Q (x, y), respectively; constant C16.5, constant C2=58.5;
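The quantities listed here (region means μ1, μ2, variances σ1², σ2², correlation σ12, and constants C1 = 6.5, C2 = 58.5) match the standard structural-similarity (SSIM) form; the equation itself did not survive extraction, so the following is a plausible reconstruction under that assumption, averaged over the M discriminant regions:

```latex
S\bigl(R_i, V(\Delta P_i \cdot P_{i-1})\bigr)
  \;=\; \frac{1}{M}\sum_{m=1}^{M}
  \frac{\bigl(2\mu_1\mu_2 + C_1\bigr)\,\bigl(2\sigma_{12} + C_2\bigr)}
       {\bigl(\mu_1^{2} + \mu_2^{2} + C_1\bigr)\,\bigl(\sigma_1^{2} + \sigma_2^{2} + C_2\bigr)},
  \qquad C_1 = 6.5,\quad C_2 = 58.5,
\\[6pt]
\Delta P_i^{*} \;=\; \arg\max_{\Delta P_i}
  S\bigl(R_i, V(\Delta P_i \cdot P_{i-1})\bigr),
  \qquad P_i \;=\; \Delta P_i^{*} \cdot P_{i-1},
```

where the statistics in the first line are computed on the m-th discriminant region Q(x, y), and the second line states the pose-prediction optimization described in the following steps.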
3) establishing the optimization equation: the surgical-navigation pose-prediction optimization equation for the two-dimensional real endoscope camera is defined as maximizing the discriminant structure similarity function S(R_i, V(ΔP_i · P_{i-1})) with respect to ΔP_i.
4) pose initialization: the registration optimization iteration requires an initial value of ΔP_i; the relative motion parameter prediction result obtained by the relative motion prediction is used to initialize ΔP_i;
5) iterating until the equation converges: the Powell method or the Levenberg-Marquardt algorithm is introduced and iterated until convergence to obtain the optimal ΔP_i, thereby obtaining the current-frame camera pose prediction P_i = ΔP_i · P_{i-1};
6) proceeding to the next frame's camera pose prediction: steps 1)-5) are repeated until the surgical instruments reach the target region, while the surgical instruments, i.e., the electronic endoscope and the surgical catheter, are positioned in real time;
this brand-new method for determining the position and direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function directly registers the three-dimensional preoperative medical image with the two-dimensional endoscope video image, without any external positioning sensor, thereby determining the position and direction of the endoscope camera in the three-dimensional preoperative image space.
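The region-based similarity of step 2) can be sketched with numpy only. This assumes the SSIM form implied by the constants C_1 = 6.5 and C_2 = 58.5 (the patent's own equation is not reproduced in the text); region extraction and volume rendering are out of scope here, so the discriminant regions are passed in as pixel arrays, and all function names are illustrative.

```python
import numpy as np

C1, C2 = 6.5, 58.5  # SSIM constants for 8-bit images, as given in the patent

def region_ssim(q_real, q_virtual):
    """SSIM-style similarity of one discriminant structure region Q(x, y)."""
    a = np.asarray(q_real, dtype=np.float64).ravel()
    b = np.asarray(q_virtual, dtype=np.float64).ravel()
    mu1, mu2 = a.mean(), b.mean()            # mean pixel values
    s1, s2 = a.var(), b.var()                # variances sigma_1^2, sigma_2^2
    s12 = np.mean((a - mu1) * (b - mu2))     # covariance sigma_12
    return ((2 * mu1 * mu2 + C1) * (2 * s12 + C2)) / \
           ((mu1 ** 2 + mu2 ** 2 + C1) * (s1 + s2 + C2))

def discriminant_similarity(real_regions, virtual_regions):
    """Average the region similarity over the M extracted discriminant regions."""
    return float(np.mean([region_ssim(r, v)
                          for r, v in zip(real_regions, virtual_regions)]))
```

The pose increment ΔP_i that maximizes this score over the rendered virtual views would then be sought with a derivative-free optimizer such as Powell's method, as the patent describes.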
Further,
in order to reduce respiratory motion error, the method for determining the position and the direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function further comprises a three-state respiratory motion navigation error compensation mechanism, and the error compensation mechanism comprises:
collecting preoperative medical image data for three times in a maximum expiration state, a normal breath holding state and a maximum inspiration state of a patient;
interpolating the three sets of preoperative medical image data to divide the human respiratory state into 12 states, which is equivalent to having 12 sets of preoperative medical image data for the same patient;
generating 11 two-dimensional virtual endoscope camera images in the other 11 preoperative medical images through the camera pose information predicted in the normal breath holding state, and then calculating the similarity between the two-dimensional real endoscope camera image and the 11 two-dimensional virtual endoscope images;
finding out the preoperative medical image state corresponding to the maximum similarity, and calculating a transformation relation matrix between the state and the normal breath holding state;
determining an optimal camera pose:
the optimal camera pose is the predicted camera pose at the normal breath hold state.
The invention also provides a minimally invasive interventional navigation system, comprising:
a planning module: the method is used for preoperative medical image processing and marking out a target area;
the navigation initialization module: the direction of the two-dimensional virtual endoscope camera is determined by carrying out full-automatic online navigation initialization based on the divided target area;
a motion prediction module: for relative motion prediction based on the direction of the two-dimensional virtual endoscopic camera;
a positioning module: the system is used for determining the position and the direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function.
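The four modules above form a per-frame pipeline: planning and initialization run once, then motion prediction seeds the positioning module for every frame. A hypothetical skeleton of that wiring (the patent defines the modules, not a code interface; all names are illustrative):

```python
class MinimallyInvasiveNavigator:
    """Illustrative wiring of the four navigation modules."""
    def __init__(self, planning, init, motion, positioning):
        self.planning = planning        # preoperative processing, target marking
        self.init = init                # fully automatic online initialization
        self.motion = motion            # relative motion prediction
        self.positioning = positioning  # discriminant-structure registration

    def navigate_frame(self, real_frame, prev_pose):
        delta = self.motion(real_frame, prev_pose)             # initial pose increment
        return self.positioning(real_frame, delta, prev_pose)  # refined frame pose
```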
In particular,
the surgical navigation is a method for guiding a surgeon to operate surgical instruments or tools in the minimally invasive interventional operation process so as to quickly, efficiently and accurately reach a target region of a tumor focus; it can provide the surgeon with an intuitive, real-time online, three-dimensional digital visualization and virtual reality surgical environment and field of view.
The operation navigation of the invention is to introduce a steady and efficient algorithm by utilizing multimode information such as preoperative images and segmentation results thereof, intraoperative medical electronic endoscope video images and the like, synchronously and integrally visualize the multimode information in the same coordinate space, and guide an electronic endoscope and an operation catheter to reach a tumor focus target area. The operation navigation technology integrates the advantages of a visual motion prediction technology and a multi-mode image registration technology, and implements an accurate hybrid navigation technology strategy.
Further,
the system further comprises:
a display device: the navigation system is used for displaying all preoperative and intraoperative navigation information of a surgical navigation software interface and implementing surgical navigation;
digital video adapter: the system is used for connecting the minimally invasive interventional operation navigation system with other medical equipment systems;
operation cart: integrating the high-performance computer, the digital video adapter and the navigator, to facilitate the doctor's operation;
a surgical catheter: for carrying the miniature positioning sensor of the navigator, the optical biopsy probe of the confocal laser micro-endoscope system, and the ablation needle of the tumor ablation treatment system.
Further,
the surgical catheter is a pre-bent catheter or an adjustable catheter.
Further,
the tumor ablation treatment system comprises laser ablation, microwave ablation, radio frequency ablation and argon-helium knife cryoablation.
Further,
the system further comprises:
medical electronic endoscope system: the medical electronic endoscope is used to check whether tissues inside the patient's body are abnormal, and the working channel of the electronic endoscope can carry a surgical catheter or other micro-surgical instruments;
the medical electronic endoscope system comprises an electronic endoscope and auxiliary surgical instruments. During minimally invasive interventional surgery, the surgeon first guides the electronic endoscope (the front end of which carries a video camera and a working channel) into the human body, and checks whether tissues inside the patient's body are abnormal through the real-time image information transmitted from the front-end camera to a display; in addition to this examination function, the working channel of the electronic endoscope can carry a surgical catheter or other micro-surgical instruments; using the endoscope video image information and introducing the center line constraint of the human lumen structure, the robustness of the differential evolution particle filter stochastic optimization algorithm is evaluated, and the navigation error caused by the patient's body motion and deformation is compensated.
an interventional imaging system: for confirming the relative positional relationship between the tumor tissue biopsy needle or tumor ablation needle and the tumor focus, while utilizing the human tissue motion and deformation information acquired in real time by the interventional images.
Further,
the medical electronic endoscope comprises a soft endoscope and a hard endoscope;
the interventional imaging system includes fluoroscopy, B-mode ultrasound, Cone-Beam CT and interventional magnetic resonance.
The invention provides a minimally invasive interventional navigation system and method, which can be widely applied to various minimally invasive interventional operation processes. It effectively solves the problem of inaccurate positioning in the prior art, thereby achieving a more accurate, more efficient and safer minimally invasive interventional operation system and method.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (12)
1. A minimally invasive interventional navigation method, characterized in that the method comprises the steps of:
s1: performing preoperative medical image processing, and dividing a target area;
s2: performing full-automatic online navigation initialization on the basis of the divided target area to determine the direction of the two-dimensional virtual endoscope camera;
s3: performing a relative motion prediction based on a direction of the two-dimensional virtual endoscopic camera;
s4: and determining the position and the direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function.
2. The minimally invasive interventional navigation method according to claim 1, wherein the preoperative medical image processing and the marking out of the target area comprises:
corresponding the preoperative medical image to an organ in a real-time human body to obtain an image map of the preoperative medical image in the organ of the human body;
accurately extracting and segmenting the human organ of the image map to obtain a two-dimensional virtual endoscope image of the human organ;
and performing three-dimensional reconstruction and digital visualization on the two-dimensional virtual endoscope image to extract and segment a target area and performing preoperative calibration on the target area.
3. The minimally invasive interventional navigation method according to claim 1, characterized in that the fully automatic online navigation initialization comprises the steps of:
(1) obtaining organ structure bifurcation center line information through the preoperative medical image processing, and determining direction information of a two-dimensional virtual endoscope camera;
the first bifurcation point, namely the intersection point of the three center lines (the main center line, the right main center line and the left main center line), defines the sight line direction of the two-dimensional virtual endoscope camera: the main center line direction is set as the Z-axis direction; the vector (cross) product of the right main center line and the left main center line gives the Y-axis direction of the two-dimensional virtual endoscope camera; and the cross product of the Z-axis direction and the Y-axis direction gives the X-axis direction of the two-dimensional virtual endoscope camera;
(2) starting from the start point of the main center line, 5 three-dimensional position points are generated along the main center line direction:
the two-dimensional virtual endoscope camera rotates along the Z-axis direction, a new direction is generated every time the two-dimensional virtual endoscope camera rotates 30 degrees, 12 camera directions are generated in total, and 60 poses are generated in total by combining with the generated 5 positions;
(3) generating 60 two-dimensional virtual endoscope camera images from the 60 pieces of pose information by the volume rendering technique;
(4) calculating the similarity between the two-dimensional real endoscope camera image and 60 two-dimensional virtual endoscope camera images, and determining the two-dimensional virtual endoscope camera image corresponding to the maximum similarity, wherein the position and posture information corresponding to the two-dimensional virtual endoscope camera image is the position and posture information of the two-dimensional real endoscope camera image.
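The initialization of claim 3 can be sketched as follows: build the camera frame at the first bifurcation from the three center-line directions, then enumerate 5 positions × 12 rotations (30-degree steps) = 60 candidate poses. The spacing between the 5 positions is an assumed parameter (the patent does not give it), and rendering plus similarity scoring are assumed to exist elsewhere.

```python
import numpy as np

def camera_frame(main_dir, right_dir, left_dir):
    """Camera axes from the three center-line directions at the bifurcation."""
    z = main_dir / np.linalg.norm(main_dir)     # line of sight = main center line
    y = np.cross(right_dir, left_dir)           # vector product of the two branches
    y /= np.linalg.norm(y)
    x = np.cross(z, y)                          # cross product of Z and Y
    x /= np.linalg.norm(x)
    return x, y, z

def candidate_poses(start, main_dir, n_pos=5, step=2.0, n_rot=12):
    """5 positions along the main center line x 12 rotations about Z = 60 poses.
    `step` (position spacing) is an assumption, not specified by the patent."""
    z = main_dir / np.linalg.norm(main_dir)
    positions = [start + k * step * z for k in range(n_pos)]
    angles = [np.deg2rad(30 * k) for k in range(n_rot)]
    return [(p, a) for p in positions for a in angles]
```

Each of the 60 poses would then be rendered by the volume rendering technique and scored against the real endoscope frame, with the maximum-similarity pose taken as the initialization.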
4. The minimally invasive interventional navigation method according to claim 1, wherein the method of relative motion prediction comprises:
the method comprises camera epipolar geometry methods, analytical methods, structure from motion, filtering methods and deep learning methods.
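One of the listed techniques, camera epipolar geometry, can be sketched with numpy only: given an essential matrix E = [t]ₓR between two endoscope frames, recover the relative rotation R and unit translation t. Feature matching and the estimation of E (e.g. by a 5-point algorithm) are assumed done elsewhere; this standard decomposition yields four candidate poses, from which the physical one is chosen by a cheirality (points-in-front) check not shown here.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Standard SVD decomposition of an essential matrix into four (R, t) candidates."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1 = U @ W @ Vt
    if np.linalg.det(R1) < 0:   # fix the sign so R1 is a proper rotation
        R1 = -R1
    R2 = U @ W.T @ Vt
    if np.linalg.det(R2) < 0:
        R2 = -R2
    t = U[:, 2]                  # translation direction, up to sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```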
5. The minimally invasive interventional navigation method according to claim 1, wherein the determining the position and the orientation of the two-dimensional virtual endoscopic camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function in S4 specifically comprises:
1) extracting an area containing bifurcation and fold discriminant structure information on a two-dimensional real endoscope camera image by using an HSV color model;
2) defining a discriminant structure similarity function, wherein the discriminant structure similarity function is defined over the discriminant structure regions extracted in step 1) as the difference of gray values of pixel points between the two-dimensional real endoscope camera image and the two-dimensional virtual endoscope image;
wherein the two-dimensional real endoscope camera image of the i-th frame is R_i, and M discriminant structure regions are extracted on R_i; the discriminant structure information similarity function is defined as S(R_i, V(ΔP_i P_(i-1))):
wherein V(ΔP_i P_(i-1)) is the two-dimensional virtual endoscope image automatically generated by the volume rendering technique from the two-dimensional real endoscope camera pose ΔP_i P_(i-1); P_(i-1) is the camera pose corresponding to the (i-1)-th frame image, and ΔP_i is the pose variable; Q(x, y) represents a discriminant structure region centered on pixel (x, y); μ_1 and μ_2 are respectively the average pixel values of the discriminant structure region Q(x, y) on the two-dimensional real endoscope camera image R_i and on the two-dimensional virtual endoscope image V(ΔP_i P_(i-1)); σ_12 is the correlation (covariance) between the two-dimensional real endoscope camera image region Q(x, y) and the two-dimensional virtual endoscope image region Q(x, y); σ_1² and σ_2² are respectively the pixel value variances of the two-dimensional real endoscope camera image region Q(x, y) and the two-dimensional virtual endoscope image region Q(x, y); C_1 and C_2 are predetermined values;
3) establishing an optimization equation: the surgical navigation two-dimensional real endoscope camera pose prediction optimization equation is defined as follows:
4) pose initialization: the registration optimization iterative process requires ΔP_i to be initialized; the relative motion parameter prediction result obtained from the relative motion prediction is used to initialize ΔP_i;
5) iterating until the equation converges: the Powell method or the Levenberg-Marquardt algorithm is introduced as the optimization algorithm and iterated until convergence to obtain the optimal ΔP_i, thereby obtaining the current frame camera pose prediction P_i = ΔP_i P_(i-1);
6) entering the next frame of camera pose prediction: repeating steps 1)-5) until the surgical instrument reaches the target area, while positioning the surgical instruments, namely the electronic endoscope and the surgical catheter, in real time.
6. The minimally invasive interventional navigation method of claim 5, wherein the method of S4 further comprises a three-state respiratory motion navigation error compensation mechanism, the error compensation mechanism comprising:
collecting preoperative medical image data for three times in a maximum expiration state, a normal breath holding state and a maximum inspiration state of a patient;
interpolating the three sets of preoperative medical image data to divide the human respiratory state into 12 states, which is equivalent to having 12 sets of preoperative medical image data for the same patient;
generating 11 two-dimensional virtual endoscope camera images in the other 11 preoperative medical images through the camera pose information predicted in the normal breath holding state, and then calculating the similarity between the two-dimensional real endoscope camera image and the 11 two-dimensional virtual endoscope camera images;
finding out the preoperative medical image state corresponding to the maximum similarity, and calculating a transformation relation matrix between the state and the normal breath holding state;
determining an optimal camera pose:
the optimal camera pose is the predicted camera pose at the normal breath hold state.
7. A minimally invasive interventional navigation system, the system comprising:
a planning module: the method is used for preoperative medical image processing and marking out a target area;
a navigation initialization module: for determining the direction of the two-dimensional virtual endoscope camera through fully automatic online navigation initialization based on the divided target area;
a motion prediction module: for relative motion prediction based on the direction of the two-dimensional virtual endoscopic camera;
a positioning module: the system is used for determining the position and the direction of the two-dimensional virtual endoscope camera in the three-dimensional preoperative medical image space based on the discriminant structure similarity function.
8. The minimally invasive interventional navigation system of claim 7, further comprising:
a display device: the navigation system is used for displaying all preoperative and intraoperative navigation information of a surgical navigation software interface and implementing surgical navigation;
digital video adapter: the system is used for connecting the minimally invasive interventional operation navigation system with other medical equipment systems;
operation cart: integrating the high-performance computer, the digital video adapter and the navigator, to facilitate the doctor's operation;
a surgical catheter: for carrying the miniature positioning sensor of the navigator, the optical biopsy probe of the confocal laser micro-endoscope system, and the ablation needle of the target area ablation treatment system.
9. The minimally invasive interventional navigation system of claim 8,
the surgical catheter is a pre-bent catheter or an adjustable catheter.
10. The minimally invasive interventional navigation system of claim 8,
the target area ablation treatment system comprises laser ablation, microwave ablation, radio frequency ablation and argon-helium knife cryoablation.
11. The minimally invasive interventional navigation system of claim 7, further comprising:
medical electronic endoscope system: the medical electronic endoscope is used to check whether tissues inside the patient's body are abnormal, and the working channel of the electronic endoscope can carry a surgical catheter or other micro-surgical instruments;
an interventional imaging system: for confirming the relative positional relationship between the target area and the two-dimensional virtual visual image, while utilizing the human tissue motion and deformation information acquired in real time by the interventional images.
12. The minimally invasive interventional navigation system of claim 11,
the medical electronic endoscope comprises a soft endoscope and a hard endoscope;
the interventional imaging system includes fluoroscopy, B-mode ultrasound, Cone-Beam CT and interventional magnetic resonance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011494377.9A CN112641514B (en) | 2020-12-17 | 2020-12-17 | Minimally invasive interventional navigation system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011494377.9A CN112641514B (en) | 2020-12-17 | 2020-12-17 | Minimally invasive interventional navigation system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112641514A true CN112641514A (en) | 2021-04-13 |
CN112641514B CN112641514B (en) | 2022-10-18 |
Family
ID=75354617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011494377.9A Active CN112641514B (en) | 2020-12-17 | 2020-12-17 | Minimally invasive interventional navigation system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112641514B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113940756A (en) * | 2021-11-09 | 2022-01-18 | 广州柏视医疗科技有限公司 | Operation navigation system based on mobile DR image |
CN114191078A (en) * | 2021-12-29 | 2022-03-18 | 上海复旦数字医疗科技股份有限公司 | Endoscope operation navigation robot system based on mixed reality |
CN115908121A (en) * | 2023-02-23 | 2023-04-04 | 深圳市精锋医疗科技股份有限公司 | Endoscope registration method and device and calibration system |
CN116433874A (en) * | 2021-12-31 | 2023-07-14 | 杭州堃博生物科技有限公司 | Bronchoscope navigation method, device, equipment and storage medium |
CN116807361A (en) * | 2023-08-28 | 2023-09-29 | 青岛美迪康数字工程有限公司 | CT image display method, electronic equipment and device |
CN117274506A (en) * | 2023-11-20 | 2023-12-22 | 华中科技大学同济医学院附属协和医院 | Three-dimensional reconstruction method and system for interventional target scene under catheter |
TWI836491B (en) * | 2021-11-18 | 2024-03-21 | 瑞鈦醫療器材股份有限公司 | Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest |
CN117838311A (en) * | 2024-03-07 | 2024-04-09 | 杭州海沛仪器有限公司 | Target spot ablation respiration gating method and system based on optical positioning |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003265408A (en) * | 2002-03-19 | 2003-09-24 | Mitsubishi Electric Corp | Endoscope guide device and method |
US20070052724A1 (en) * | 2005-09-02 | 2007-03-08 | Alan Graham | Method for navigating a virtual camera along a biological object with a lumen |
US20090161927A1 (en) * | 2006-05-02 | 2009-06-25 | National University Corporation Nagoya University | Medical Image Observation Assisting System |
US20120302878A1 (en) * | 2010-02-18 | 2012-11-29 | Koninklijke Philips Electronics N.V. | System and method for tumor motion simulation and motion compensation using tracked bronchoscopy |
CN103169445A (en) * | 2013-04-16 | 2013-06-26 | 苏州朗开医疗技术有限公司 | Navigation method and system for endoscope |
US20170071504A1 (en) * | 2015-09-16 | 2017-03-16 | Fujifilm Corporation | Endoscope position identifying apparatus, endoscope position identifying method, and recording medium having an endoscope position identifying program recorded therein |
CN108778113A (en) * | 2015-09-18 | 2018-11-09 | 奥瑞斯健康公司 | The navigation of tubulose network |
CN108990412A (en) * | 2017-03-31 | 2018-12-11 | 奥瑞斯健康公司 | Compensate the robot system for chamber network navigation of physiological noise |
CN109124766A (en) * | 2017-06-21 | 2019-01-04 | 韦伯斯特生物官能(以色列)有限公司 | It is sensed using trace information with shape and is registrated to improve |
JP2020010735A (en) * | 2018-07-13 | 2020-01-23 | 富士フイルム株式会社 | Inspection support device, method, and program |
CN111588464A (en) * | 2019-02-20 | 2020-08-28 | 忞惪医疗机器人(苏州)有限公司 | Operation navigation method and system |
CN111887988A (en) * | 2020-07-06 | 2020-11-06 | 罗雄彪 | Positioning method and device of minimally invasive interventional operation navigation robot |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003265408A (en) * | 2002-03-19 | 2003-09-24 | Mitsubishi Electric Corp | Endoscope guide device and method |
US20070052724A1 (en) * | 2005-09-02 | 2007-03-08 | Alan Graham | Method for navigating a virtual camera along a biological object with a lumen |
US20090161927A1 (en) * | 2006-05-02 | 2009-06-25 | National University Corporation Nagoya University | Medical Image Observation Assisting System |
US20120302878A1 (en) * | 2010-02-18 | 2012-11-29 | Koninklijke Philips Electronics N.V. | System and method for tumor motion simulation and motion compensation using tracked bronchoscopy |
CN103169445A (en) * | 2013-04-16 | 2013-06-26 | 苏州朗开医疗技术有限公司 | Navigation method and system for endoscope |
US20170071504A1 (en) * | 2015-09-16 | 2017-03-16 | Fujifilm Corporation | Endoscope position identifying apparatus, endoscope position identifying method, and recording medium having an endoscope position identifying program recorded therein |
CN108778113A (en) * | 2015-09-18 | 2018-11-09 | 奥瑞斯健康公司 | The navigation of tubulose network |
CN108990412A (en) * | 2017-03-31 | 2018-12-11 | 奥瑞斯健康公司 | Compensate the robot system for chamber network navigation of physiological noise |
CN109124766A (en) * | 2017-06-21 | 2019-01-04 | 韦伯斯特生物官能(以色列)有限公司 | It is sensed using trace information with shape and is registrated to improve |
JP2020010735A (en) * | 2018-07-13 | 2020-01-23 | 富士フイルム株式会社 | Inspection support device, method, and program |
CN111588464A (en) * | 2019-02-20 | 2020-08-28 | 忞惪医疗机器人(苏州)有限公司 | Operation navigation method and system |
CN111887988A (en) * | 2020-07-06 | 2020-11-06 | 罗雄彪 | Positioning method and device of minimally invasive interventional operation navigation robot |
Non-Patent Citations (1)
Title |
---|
XIONGBIAO LUO, IEEE Transactions on Medical Imaging, 30 June 2014
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113940756B (en) * | 2021-11-09 | 2022-06-07 | 广州柏视医疗科技有限公司 | Operation navigation system based on mobile DR image |
CN113940756A (en) * | 2021-11-09 | 2022-01-18 | 广州柏视医疗科技有限公司 | Operation navigation system based on mobile DR image |
TWI836491B (en) * | 2021-11-18 | 2024-03-21 | 瑞鈦醫療器材股份有限公司 | Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest |
CN114191078A (en) * | 2021-12-29 | 2022-03-18 | 上海复旦数字医疗科技股份有限公司 | Endoscope operation navigation robot system based on mixed reality |
CN114191078B (en) * | 2021-12-29 | 2024-04-26 | 上海复旦数字医疗科技股份有限公司 | Endoscope operation navigation robot system based on mixed reality |
CN116433874A (en) * | 2021-12-31 | 2023-07-14 | 杭州堃博生物科技有限公司 | Bronchoscope navigation method, device, equipment and storage medium |
CN115908121A (en) * | 2023-02-23 | 2023-04-04 | 深圳市精锋医疗科技股份有限公司 | Endoscope registration method and device and calibration system |
CN116807361B (en) * | 2023-08-28 | 2023-12-08 | 青岛美迪康数字工程有限公司 | CT image display method, electronic equipment and device |
CN116807361A (en) * | 2023-08-28 | 2023-09-29 | 青岛美迪康数字工程有限公司 | CT image display method, electronic equipment and device |
CN117274506A (en) * | 2023-11-20 | 2023-12-22 | 华中科技大学同济医学院附属协和医院 | Three-dimensional reconstruction method and system for interventional target scene under catheter |
CN117274506B (en) * | 2023-11-20 | 2024-02-02 | 华中科技大学同济医学院附属协和医院 | Three-dimensional reconstruction method and system for interventional target scene under catheter |
CN117838311A (en) * | 2024-03-07 | 2024-04-09 | 杭州海沛仪器有限公司 | Target spot ablation respiration gating method and system based on optical positioning |
CN117838311B (en) * | 2024-03-07 | 2024-05-31 | 杭州海沛仪器有限公司 | Target spot ablation respiratory gating system based on optical positioning |
Also Published As
Publication number | Publication date |
---|---|
CN112641514B (en) | 2022-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112641514B (en) | Minimally invasive interventional navigation system and method | |
US20220015727A1 (en) | Surgical devices and methods of use thereof | |
CN107456278B (en) | Endoscopic surgery navigation method and system | |
US20150313503A1 (en) | Electromagnetic sensor integration with ultrathin scanning fiber endoscope | |
Soper et al. | In vivo validation of a hybrid tracking system for navigation of an ultrathin bronchoscope within peripheral airways | |
US11026747B2 (en) | Endoscopic view of invasive procedures in narrow passages | |
CN111588464B (en) | Operation navigation method and system | |
JP6049202B2 (en) | Image processing apparatus, method, and program | |
Wu et al. | Three-dimensional modeling from endoscopic video using geometric constraints via feature positioning | |
US20230039532A1 (en) | 2d pathfinder visualization | |
US20230113035A1 (en) | 3d pathfinder visualization | |
Luo et al. | Robust endoscope motion estimation via an animated particle filter for electromagnetically navigated endoscopy | |
JP5554028B2 (en) | Medical image processing apparatus, medical image processing program, and X-ray CT apparatus | |
CN115668281A (en) | Method and system for using multi-view pose estimation | |
CN112315582A (en) | Positioning method, system and device of surgical instrument | |
CN114283179A (en) | Real-time fracture far-near end space pose acquisition and registration system based on ultrasonic images | |
US20240099776A1 (en) | Systems and methods for integrating intraoperative image data with minimally invasive medical techniques | |
CN115919462A (en) | Image data processing system, method and operation navigation system | |
CN117355257A (en) | Volume filter for fluoroscopic video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20211215
Address after: Room 801, 8th Floor, Building 8, No. 88 Jiangling Road, Xixing Street, Binjiang District, Hangzhou, Zhejiang, 310051
Applicant after: HANGZHOU KUNBO BIOTECHNOLOGY Co.,Ltd.
Address before: No. 28 Haibin Road, Siming District, Xiamen City, Fujian Province (opposite Hulishan Fort)
Applicant before: Luo Xiongbiao
Applicant before: Wan Ying
|
TA01 | Transfer of patent application right | ||
GR01 | Patent grant | ||
GR01 | Patent grant |