CN114948209A - Surgical field tracking and adjusting method and surgical robot system

Surgical field tracking and adjusting method and surgical robot system

Info

Publication number: CN114948209A
Application number: CN202210602492.6A
Authority: CN (China)
Prior art keywords: tissue, surgical, surgical field, circle, instrument
Legal status: Pending
Other languages: Chinese (zh)
Inventor: not disclosed
Current assignee: Shanghai Microport Medbot Group Co Ltd
Original assignee: Shanghai Microport Medbot Group Co Ltd
Application filed by Shanghai Microport Medbot Group Co Ltd
Priority to CN202210602492.6A
Publication of CN114948209A

Classifications

    • A: HUMAN NECESSITIES
      • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
            • A61B 34/20: Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
              • A61B 2034/2046: Tracking techniques
              • A61B 2034/2065: Tracking using image or pattern recognition
            • A61B 34/30: Surgical robots
              • A61B 34/35: Surgical robots for telesurgery
              • A61B 2034/301: Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
            • A61B 34/70: Manipulators specially adapted for use in surgery

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a surgical field tracking and adjusting method and a surgical robot system. The surgical field tracking and adjusting method comprises the following steps: acquiring preparation information based on a CT image of the operation region; identifying tissue in the surgical field of the endoscope based on machine vision; acquiring the coordinates of the lesion tissue and of the tip of the surgical instrument in a reference coordinate system; determining a motion vector; performing inverse kinematics calculation based on the motion vector; and outputting a control signal to drive the first mechanical arm to move, thereby driving a preset object to appear at a preset position in the surgical field, wherein the preset object includes the lesion tissue and/or the tip of the surgical instrument. With this configuration, the surgical field is adjusted automatically by the algorithm, which lowers the demands on operating personnel and improves surgical efficiency, thereby solving the prior-art problems of a high skill threshold, low efficiency and long operation time caused by the need for manual intervention in surgical field adjustment.

Description

Surgical field tracking and adjusting method and surgical robot system
Technical Field
The invention relates to the technical field of medical instruments, in particular to a surgical field tracking and adjusting method and a surgical robot system.
Background
During an operation, the endoscope must be adjusted continuously to maintain a good surgical field and thus facilitate the procedure.
In the prior art, the surgical field is adjusted in the following ways.
Before the operation, medical staff manually adjust the preoperative visual field of the endoscope according to experience, and guide the surgical instruments into the visual field.
During the operation, medical staff or doctors manually teleoperate the master control arm at the doctor end of the surgical robot to control the motion of the endoscope-holding mechanical arm at the surgical end, so as to adjust the pose of the endoscope and change its visual field.
After the operation, medical staff manually adjust the postoperative visual field of the endoscope according to experience to obtain an evacuation visual field for the surgical instruments; or they manually teleoperate the master control arm at the doctor end of the surgical robot to control the motion of the endoscope-holding mechanical arm at the surgical end, adjusting the pose of the endoscope to obtain the postoperative visual field and assist the evacuation of the surgical instruments.
The above method has the following drawbacks.
Before the operation, medical staff must manually adjust the visual field of the endoscope so that the surgical instruments can enter it; the optimal preoperative visual field cannot be adjusted automatically, the demands on the staff's experience are high, and preparing the preoperative visual field takes a long time.
During the operation, medical staff must frequently adjust the visual field of the endoscope by hand; the intraoperative visual field cannot be adjusted automatically, which reduces surgical efficiency and lengthens the operation.
After the operation, medical staff must manually adjust the visual field of the endoscope for the evacuation of the surgical instruments; the postoperative evacuation visual field cannot be adjusted automatically, the demands on the staff's experience are high, and postoperative evacuation takes a long time.
In short, in the prior art the adjustment of the surgical field requires manual intervention, which on the one hand places high demands on the experience of the person adjusting, and on the other hand adds adjustment time and prolongs the operation.
Disclosure of Invention
The invention aims to provide a surgical field tracking and adjusting method and a surgical robot system, so as to solve the prior-art problems that surgical field adjustment requires manual intervention, which entails a high skill threshold, low efficiency and long operation time.
In order to solve the above technical problems, the present invention provides a surgical field tracking and adjusting method, comprising the following steps. S10: acquiring preparation information based on a CT image of the operation region, the preparation information including model information and position information of a characteristic tissue in the operation region and model information and position information of a lesion tissue. S20: based on the application scenario, identifying a preset object and/or obtaining position information of the preset object, wherein the preset object includes at least one of the lesion tissue, the characteristic tissue and a tip of a surgical instrument. S60: determining a motion vector of the shooting point based on the execution result of step S20 and the application scenario; the shooting point lies on the line of sight of the camera and coincides with the photographed target, the camera is part of the endoscope, and the camera is arranged at the front end of the endoscope. S70: performing inverse kinematics calculation based on the motion vector to obtain a base signal. S80: outputting a control signal corresponding to the application scenario to drive the first mechanical arm to move, thereby driving the lesion tissue and/or the tip of the surgical instrument to appear at a preset position in the surgical field; the endoscope is fixed on the first mechanical arm and moves under its drive, and the control signal includes the base signal.
Optionally, step S10 includes: performing tissue recognition based on the CT image to obtain a recognition result, the recognition result including model information and position information of tissues to be distinguished, the tissues to be distinguished including the characteristic tissue, the lesion tissue and tissue to be removed; displaying the recognition result overlaid on the CT image; acquiring identification information, the identification information including information for distinguishing the characteristic tissue, the lesion tissue and the tissue to be removed, or including that distinguishing information together with information for correcting the characteristic tissue and information for correcting the lesion tissue; and determining the preparation information based on the identification information.
Optionally, step S20 specifically includes: performing at least part of steps S30, S40 and S50 based on the application scenario. Step S30 includes: identifying tissue in the surgical field of the endoscope based on machine vision and matching it with the characteristic tissue or the lesion tissue. Step S40 includes: acquiring the coordinates of the lesion tissue in a reference coordinate system based on machine vision and kinematic calculation. Step S50 includes: acquiring the coordinates of the tip of the surgical instrument in the reference coordinate system based on kinematic calculation; the surgical instrument is fixed on the second mechanical arm and driven by the second mechanical arm to move.
Optionally, the method further comprises the following steps: establishing base coordinate systems, including a surgical robot base coordinate system fixed to the robot base, a first mechanical arm proximal base coordinate system fixed to the proximal end of the first mechanical arm, a first mechanical arm distal base coordinate system fixed to the distal end of the first mechanical arm, a second mechanical arm proximal base coordinate system fixed to the proximal end of the second mechanical arm, a second mechanical arm distal base coordinate system fixed to the distal end of the second mechanical arm, an endoscope base coordinate system fixed to the endoscope, and a camera base coordinate system fixed to the camera; setting the surgical robot base coordinate system as the reference coordinate system; acquiring sensor information, the sensor information including joint position information of the first mechanical arm and joint position information of the second mechanical arm; and establishing conversion relationships from coordinates in one of the base coordinate systems to coordinates in another, the conversion relationships being functions of the sensor information. Step S40 then includes: acquiring the coordinates of the lesion tissue in the camera base coordinate system based on machine vision, and converting them into coordinates in the reference coordinate system based on the conversion relationships. Step S50 includes: acquiring the coordinates of the tip of the surgical instrument in the second mechanical arm distal base coordinate system, and converting them into coordinates in the reference coordinate system based on the conversion relationships.
Optionally, the application scenario includes at least one of the following scenarios: preoperative positioning, positioning after preoperative instrument entry, intraoperative tracking and positioning, and postoperative instrument evacuation positioning.
Optionally, the application scenario includes the preoperative positioning; the preparation information further includes a direction vector of the characteristic tissue pointing to the lesion tissue, and the distance of the characteristic tissue from the lesion tissue.
Optionally, step S20 includes: when the application scenario is the preoperative positioning, executing step S30. Step S60 includes: when the application scenario is the preoperative positioning, if the matching result is the lesion tissue, the motion vector is 0; if the matching result is a characteristic tissue, constructing the motion vector based on the direction vector of the characteristic tissue pointing to the lesion tissue and the distance between them. Step S80 includes: when the application scenario is the preoperative positioning, outputting the base signal to drive the first mechanical arm to move.
Optionally, the application scenario includes positioning after preoperative instrument entry. Step S20 includes: when the application scenario is positioning after preoperative instrument entry, executing step S30. Step S60 includes: when the application scenario is positioning after preoperative instrument entry, calculating the distance between the center of the surgical field and the lesion tissue; if the distance is greater than a first distance threshold, calculating the motion vector by which the surgical field moves into alignment with the lesion tissue, otherwise the motion vector is 0. Step S80 includes: when the application scenario is positioning after preoperative instrument entry, outputting the base signal to drive the first mechanical arm to move, and then outputting a control instruction to drive the endoscope to move to an extreme position away from the lesion tissue.
Optionally, the application scenario includes the intraoperative tracking and positioning. Step S20 includes: when the application scenario is the intraoperative tracking and positioning, executing steps S30, S40 and S50. Step S60 includes: when the application scenario is the intraoperative tracking and positioning, calculating an enveloping circle such that the lesion tissue and the tips of the surgical instruments are located inside or on the circumference of the enveloping circle; calculating a surgical field circle based on the enveloping circle, the surgical field circle being concentric with the enveloping circle and having a diameter that exceeds the diameter of the enveloping circle by a surgical field threshold; and calculating the motion vector by which the surgical field moves into coincidence with the surgical field circle. Step S80 includes: when the application scenario is the intraoperative tracking and positioning, outputting the base signal to drive the first mechanical arm to move.
Optionally, the enveloping circle is calculated in one of the following ways: a circumscribed circle of the lesion tissue and at least part of the tips of the surgical instruments is set as the enveloping circle, which also envelops the remaining instrument tips; or the lesion tissue is taken as the center of the circle, and the smallest circle enveloping the tips of all the surgical instruments is set as the enveloping circle; or the center of gravity of at least part of the tips of the surgical instruments is taken as the center of the circle, and the smallest circle enveloping all the instrument tips and the lesion tissue is set as the enveloping circle.
Optionally, the application scenario includes the postoperative instrument evacuation positioning. Step S20 includes: when the application scenario is the postoperative instrument evacuation positioning, executing steps S30, S40 and S50. Step S60 includes: when the application scenario is the postoperative instrument evacuation positioning, calculating an enveloping circle such that the lesion tissue and the tips of the surgical instruments are located inside or on the circumference of the enveloping circle; calculating the distance between the center of the surgical field and the center of the enveloping circle; and if the distance is greater than a second distance threshold, calculating the motion vector by which the surgical field moves into alignment with the enveloping circle, otherwise the motion vector is 0. Step S80 includes: when the application scenario is the postoperative instrument evacuation positioning, outputting the base signal to drive the first mechanical arm to move, and then outputting a control instruction to drive the endoscope to move to an extreme position away from the enveloping circle.
Optionally, the method further comprises the following steps: indicating the current application scenario through a status light and/or a buzzer, and/or indicating that the application scenario is switching.
Optionally, the method further comprises the following steps: an envelope circle is calculated, the focal tissue and the end of the surgical instrument are both located inside or at the circumference of the envelope circle. And displaying a surgical field appropriateness rate, wherein the surgical field appropriateness rate is a ratio of a current diameter of the surgical field divided by an expected diameter, and the expected diameter is the diameter of the enveloping circle plus a surgical field threshold.
In order to solve the technical problem, the invention further provides a surgical robot system, which is characterized by comprising a robot module, an endoscope module and a self-adaptive adjusting module; the robot module comprises a first mechanical arm for clamping an endoscope and a second mechanical arm for clamping a surgical instrument; the endoscope module comprises the endoscope; the adaptive adjusting module is used for outputting a control instruction to drive the first mechanical arm to move based on the surgical field tracking and adjusting method of any one of claims 1 to 12.
Optionally, the surgical robot system further comprises at least one of the following features. The robot module further comprises a master control arm for collecting the control actions of medical staff and converting them into control instructions for the first mechanical arm or the second mechanical arm. The endoscope module further comprises a cold light source, an image processor and a display; the cold light source provides illumination for the surgical field environment, and the image processor processes the surgical field image information and sends it to the display for display. The adaptive adjustment module comprises a CT reconstruction and diagnosis unit, a tissue identification unit, a calculation unit and a storage unit; the tissue identification unit is used for identifying and marking patient tissues; the calculation unit is used for optimizing and calculating the surgical field; the storage unit is used for storing threshold-type information.
Compared with the prior art, in the surgical field tracking and adjusting method and the surgical robot system provided by the present application, the surgical field tracking and adjusting method comprises the following steps: acquiring preparation information based on a CT image of the operation region; identifying tissue in the surgical field of the endoscope based on machine vision and matching it with the characteristic tissue or the lesion tissue; acquiring the coordinates of the lesion tissue and of the tip of the surgical instrument in a reference coordinate system; determining a motion vector; performing inverse kinematics calculation based on the motion vector; and outputting a control signal corresponding to the application scenario to drive the first mechanical arm to move, thereby driving a preset object to appear at a preset position in the surgical field, wherein the preset object includes the lesion tissue and/or the tip of the surgical instrument. With this configuration, the surgical field is adjusted automatically by the algorithm, which lowers the demands on operating personnel and improves surgical efficiency, thereby solving the prior-art problems of a high skill threshold, low efficiency and long operation time caused by the need for manual intervention in surgical field adjustment.
Drawings
It will be appreciated by those skilled in the art that the drawings are provided for a better understanding of the invention and do not constitute any limitation to the scope of the invention. Wherein:
FIG. 1 is a schematic view of an application scenario of a surgical robotic system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a surgical field tracking and adjusting method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a shooting point according to an embodiment of the invention;
FIG. 4 is a schematic structural diagram of a surgical robotic system in accordance with an embodiment of the present invention;
FIG. 5 is a flowchart illustrating step S10 according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a base coordinate system in accordance with an embodiment of the present invention;
FIG. 7 is a schematic view of a direction vector of a feature tissue pointing to a lesion tissue according to an embodiment of the present invention;
FIG. 8 is a schematic view of an application scenario of preoperative positioning according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of an embodiment of the present invention in an application scenario of preoperative positioning;
FIG. 10 is a schematic flow chart of an embodiment of the present invention in an application scenario of positioning after preoperative instrument entry;
FIG. 11 is a diagram illustrating a first calculation method for the envelope circle and the surgical field circle in accordance with one embodiment of the present invention;
FIG. 12 is a diagram illustrating a second calculation method for the envelope circle and the surgical field circle in accordance with one embodiment of the present invention;
FIG. 13 is a schematic diagram of a third calculation method of the envelope circle and the surgical field circle in accordance with one embodiment of the present invention;
FIG. 14 is a schematic flow diagram of an embodiment of the present invention in an application scenario of post-operative instrument evacuation positioning;
FIG. 15 is a diagram illustrating panel contents of a display according to an embodiment of the present invention.
In the drawings:
1 - robot module; 2 - endoscope module; 3 - adaptive adjustment module.
11 - first mechanical arm; 12 - second mechanical arm; 13 - robot doctor end; 14 - robot patient end; 15 - master control arm; 16 - robot controller; 17 - robot base. 21 - endoscope; 22 - display; 23 - image processor; 24 - cold light source; 25 - camera; 26 - line of sight of the camera; 27 - shooting point; 28 - photographed target. 41 - lesion tissue; 42 - characteristic tissue; 43 - direction vector; 44 - tip of the surgical instrument. 51 - initial insertion state; 52 - adjusted state. 61 - enveloping circle; 62 - surgical field circle; 63 - surgical field. 71 - status light; 72 - surgical field appropriateness rate.
E1 - surgical robot base coordinate system; E2 - first mechanical arm proximal base coordinate system; E3 - first mechanical arm distal base coordinate system; E4 - second mechanical arm proximal base coordinate system; E5 - second mechanical arm distal base coordinate system; E6 - endoscope base coordinate system; E7 - camera base coordinate system.
Detailed Description
To further clarify the objects, advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is to be noted that the drawings are in greatly simplified form and are not to scale, but are merely intended to facilitate and clarify the explanation of the embodiments of the present invention. Further, the structures illustrated in the drawings are often part of actual structures. In particular, the drawings may have different emphasis points and may sometimes be scaled differently.
As used in this application, the singular forms "a", "an" and "the" include plural referents, and the term "or" is generally employed in a sense including "and/or". The terms "a" and "an" are generally employed in a sense including "at least one", and "at least two" in a sense including "two or more". The terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; thus, a feature defined as "first", "second" or "third" may explicitly or implicitly include one or at least two of such features. The term "proximal" typically refers to the end near the operator and "distal" to the end near the patient; "one end" and "another end", like "proximal" and "distal", typically denote two corresponding parts that include not only the end points. The terms "mounted", "connected" and "coupled" are to be understood broadly: a connection may, for example, be fixed, detachable or integral; mechanical or electrical; direct, or indirect through intervening media or through the interior of two elements. Furthermore, the disposition of one element with another generally only means that a connection, coupling, fit or driving relationship exists between the two elements, which may be direct or indirect through intermediate elements, and it should not be understood as indicating or implying any spatial positional relationship between them; that is, an element may be inside, outside, above, below or to one side of another element, unless the content clearly indicates otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The core idea of the invention is to provide a surgical field tracking and adjusting method and a surgical robot system, so as to solve the prior-art problems that surgical field adjustment requires manual intervention, which entails a high skill threshold, low efficiency and long operation time.
The following description refers to the accompanying drawings.
As shown in fig. 1, the present embodiment provides a surgical robot system including a robot module 1, an endoscope module 2, and an adaptive adjustment module 3; the robotic module 1 comprises a robotic patient end 14, the robotic patient end 14 comprising a first robotic arm 11 for gripping an endoscope 21 and a second robotic arm 12 for gripping a surgical instrument (it may also be understood that the robotic module 1 comprises the first robotic arm 11 and the second robotic arm 12); the endoscope module 2 includes the endoscope 21; the adaptive adjusting module 3 is used for outputting a control instruction to drive the first mechanical arm 11 to move based on a surgical field tracking and adjusting method. The surgical field tracking and adjustment method is described in the following.
Further, the robot module 1 further includes a robot doctor end 13, the robot doctor end includes a master control arm 15 (it can also be understood that the robot module 1 further includes the master control arm 15), and the master control arm 15 is configured to collect a control action of a medical worker and convert the control action into a control instruction of the first mechanical arm 11 or the second mechanical arm 12.
The endoscope module 2 further comprises a cold light source 24, an image processor 23 and a display 22; the cold light source is used for providing illumination for the surgical field environment, and the image processor 23 is used for processing the surgical field image information and sending the surgical field image information to the display 22 for display.
The adaptive adjustment module 3 includes a CT (Computed Tomography) reconstruction and diagnosis unit 31, a tissue identification unit 32, a calculation unit 33 and a storage unit 34; the tissue identification unit 32 is used for identifying and marking patient tissues; the calculation unit 33 is used for optimizing and calculating the surgical field; the storage unit 34 is used for storing threshold-type information.
The adaptive adjustment module 3 is configured to execute a surgical field tracking and adjusting method, please refer to fig. 2, the surgical field tracking and adjusting method includes the following steps:
S10: acquiring preparation information based on a CT image of the operation region, the preparation information including model information and position information of a characteristic tissue in the operation region and model information and position information of a lesion tissue.
S20: based on the application scenario, identifying a preset object and/or obtaining position information of the preset object, wherein the preset object includes at least one of the lesion tissue, the characteristic tissue and a tip of a surgical instrument.
S60: determining a motion vector of the shooting point based on the execution result of step S20 and the application scenario; the shooting point lies on the line of sight of the camera and coincides with the photographed target, the camera is part of the endoscope, and the camera is arranged at the front end of the endoscope.
S70: performing inverse kinematics calculation based on the motion vector to obtain a base signal.
S80: outputting a control signal corresponding to the application scenario to drive the first mechanical arm to move, thereby driving the lesion tissue and/or the tip of the surgical instrument to appear at a preset position in the surgical field; the endoscope is fixed on the first mechanical arm and moves under its drive, and the control signal includes the base signal.
In step S10, the model information refers to information describing the corresponding tissue, such as tissue color, tissue texture, tissue shape, size and contour line; the model information also includes feature point information used by machine vision for matching, and can be configured according to actual requirements. After the endoscope is inserted into the patient's body, tissue is identified and matched on the visual image based on these features; the tissues obtained by visual recognition are matched against the tissues modeled before the operation. The position information should be understood as the position information of a preset point of the tissue, and the preset rule may vary from case to case: for example, the center of gravity of the tissue, the center of an enveloping sphere of the tissue, or a point on the tissue's outer contour may each be taken as the position of the tissue. Any convention may serve as the position information, as long as the combination of model information and position information reflects the space occupied by the tissue without ambiguity. The CT image is obtained by fluoroscopic imaging techniques, such as x-ray beam, gamma ray or ultrasound.
In step S20, the steps actually performed differ with the application scenario, because the information needed subsequently differs. In one embodiment, step S20 specifically includes performing at least part of steps S30, S40 and S50 based on the application scenario. Step S30 includes: identifying tissue in the surgical field of the endoscope based on machine vision and matching it with the characteristic tissue or the lesion tissue. Step S40 includes: acquiring the coordinates of the lesion tissue in a reference coordinate system based on machine vision and kinematic calculation. Step S50 includes: acquiring the coordinates of the tip of the surgical instrument in the reference coordinate system based on kinematic calculation; the surgical instrument is fixed on the second mechanical arm and driven by it. In a single application scenario, step S20 may perform only some of steps S30, S40 and S50, but over all application scenarios, steps S30, S40 and S50 are all performed. In step S30, which tissue can be matched depends on what is captured in the surgical field; for example, if the surgical field does not initially include the lesion tissue, only a characteristic tissue can be matched. The machine vision may be implemented as needed, for example with binocular vision or infrared vision techniques. In steps S40 and S50, coordinates based on kinematics are more accurate than those obtained in other ways, and they allow the method to acquire the coordinates of a surgical instrument that is not currently in the surgical field.
Throughout this description, "surgical instrument" should be understood as "a surgical instrument in need of attention". For example, in the surgical preparation phase, surgical instruments A, B, C and D may be fixed on four arms of the robot; if D is not used during part of the procedure, the surgical instruments then refer only to A, B and C, and if A and B are not used, they refer only to C and D. Generally, the surgical instruments are the instrument the surgeon is manipulating with the left hand and the instrument manipulated with the right hand; other instruments that are not being manipulated but still need attention are not excluded, and scenarios where one doctor operates three or more instruments simultaneously, or two doctors do so, are also possible.
In step S60, the shooting point 27 can be understood from fig. 3: it is an imaginary point located on the line of sight 26 of the camera and coinciding with the photographed target 28. The photographed target 28 may be the lesion tissue or a characteristic tissue. The surgical field can be adjusted by moving the shooting point 27, but the motion of the robot is limited by the mechanical structure of the mechanical arm and the range of motion of each joint, so the motion track of each joint must be calculated and planned through inverse kinematics to obtain a control signal. The line of sight 26 of the camera should be understood as a line pointing at and passing through the center point of the surgical field.
In step S80, the control signal may contain only the base signal, or further control signals may be added to the base signal to further ensure the final surgical field range. "Driving" should be understood to mean that the focus of the method is outputting the control signal; whether the final control purpose is achieved may also depend on other control algorithms or the specific situation at hand.
With this configuration, on the one hand, automatic tracking and adjustment avoids the series of problems caused by manually intervened adjustment; on the other hand, obtaining positions through the kinematic relationships yields a more accurate and reliable tracking effect.
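For orientation, one iteration of the method can be pictured as the following minimal sketch. It is not the patent's implementation; every name in it (`identify_preset_object`, `compute_motion_vector`, `inverse_kinematics`, `drive_first_arm`) is an illustrative assumption.

```python
def track_and_adjust(scenario, prep_info, robot):
    """One hypothetical iteration of steps S20-S80 (all names assumed)."""
    # S20: identify the preset object and/or obtain its position; internally
    # this runs some subset of S30/S40/S50 depending on the scenario.
    observation = robot.identify_preset_object(scenario, prep_info)

    # S60: motion vector of the shooting point, the imaginary point on the
    # camera's line of sight that coincides with the photographed target.
    motion_vector = robot.compute_motion_vector(scenario, observation)

    # S70: inverse kinematics turns the Cartesian motion vector into
    # joint-space targets (the base signal).
    base_signal = robot.inverse_kinematics(motion_vector)

    # S80: output the scenario-specific control signal to drive the first
    # (endoscope-holding) mechanical arm.
    robot.drive_first_arm(base_signal, scenario)
```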
The overall structure of the surgical robot system can also refer to fig. 4.
The surgical robot system comprises the robot module 1, the endoscope module 2 and the adaptive adjustment module 3. The robot module 1 comprises the robot doctor end 13, the robot patient end 14 and the robot controller 16. The robot doctor end 13 includes the master control arm 15, and the robot patient end 14 includes the first mechanical arm 11 (labeled "endoscope-holding mechanical arm" in fig. 4 to indicate its function) and the second mechanical arm 12 (labeled "instrument-holding mechanical arm" in fig. 4 to indicate its function). The first mechanical arm 11 and the second mechanical arm 12 act under the robot controller 16, which responds both to the doctor's active control instructions and to the surgical field adjusting and tracking instructions output by the adaptive adjustment module. The endoscope module 2 comprises the endoscope 21, the display 22, the image processor 23 and the cold light source 24; the image processor 23 sends the processed data to the display 22 and to the adaptive adjustment module 3. The adaptive adjustment module 3 includes the CT reconstruction and diagnosis unit 31, the tissue identification unit 32, the calculation unit 33 and the storage unit 34; the CT reconstruction and diagnosis unit 31 processes CT images, which come from an external CT acquisition process that is not limited here.
Further, step S10 includes: S11, performing tissue recognition based on the CT image to obtain a recognition result, the recognition result including model information and position information of tissues to be distinguished, the tissues to be distinguished including the characteristic tissue, the lesion tissue and tissue to be removed.
S12, displaying the recognition result overlaid on the CT image.
S13, acquiring identification information, the identification information including information for distinguishing the characteristic tissue, the lesion tissue and the tissue to be removed, or including that distinguishing information together with information for correcting the characteristic tissue and information for correcting the lesion tissue.
S14, determining the preparation information based on the identification information.
In step S11, observing the recognition result alone only reveals that several tissues to be distinguished have been recognized; which tissue is the lesion tissue, which is a characteristic tissue and which is to be removed is not yet known at that point, and is determined subsequently through the identification information. In step S12, the recognition result is displayed overlaid on the CT image so that medical staff can observe and judge intuitively in order to determine the identification information. In step S13, the identification information is input by medical staff, or by other means such as an upper computer or an intelligent recognition algorithm. The identification information at least includes the information for distinguishing the characteristic tissue, the lesion tissue and the tissue to be removed; in some embodiments, if the recognition result of step S11 is not ideal, the identification information further includes the information for correcting the characteristic tissue and the lesion tissue, so as to ensure the accuracy of the preparation information.
Step S10 can also be understood in light of the disclosure of fig. 5.
S101, acquiring lesion image information of the patient through lesion CT acquisition.
S102, modeling the image information to obtain a lesion region model of the patient, including a tissue model, a lesion model and their locations (corresponding to step S11).
S103, the display means reproduces the modeling information (corresponding to step S12), and the medical staff make a diagnosis (corresponding to step S13).
S104, the medical staff select the lesion region of the patient and mark it.
S105, the medical staff identify the characteristic tissues of the model, including but not limited to identifying and selecting tissues and organs. (S104 and S105 correspond to step S14.)
Thus, the modeling of the patient tissue and the identification of the characteristic tissue are completed.
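The preparation information assembled in steps S11 to S14 can be pictured as a small record structure; the sketch below is only an assumed layout for illustration, not the patent's data format.

```python
from dataclasses import dataclass, field


@dataclass
class TissueRecord:
    """One tissue to be distinguished (step S11): model plus position info."""
    model: dict               # e.g. color, texture, shape, size, contour line,
                              # and feature points for machine-vision matching
    position: tuple           # preset reference point, e.g. center of gravity
    label: str = "unlabeled"  # filled in from the identification info (S13):
                              # "characteristic" / "lesion" / "to_be_removed"


@dataclass
class PrepInfo:
    """Preparation information as determined in step S14 (assumed layout)."""
    tissues: list = field(default_factory=list)  # list of TissueRecord

    def lesion(self):
        return next(t for t in self.tissues if t.label == "lesion")

    def characteristic_tissues(self):
        return [t for t in self.tissues if t.label == "characteristic"]
```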
For performing the kinematic calculation and the inverse kinematic calculation, the method further comprises the steps of:
S91, establishing base coordinate systems. Referring to fig. 6, the base coordinate systems include a surgical robot base coordinate system E1 fixed to the robot base 17, a first mechanical arm proximal base coordinate system E2 fixed to the proximal end of the first mechanical arm 11, a first mechanical arm distal base coordinate system E3 fixed to the distal end of the first mechanical arm 11, a second mechanical arm proximal base coordinate system E4 fixed to the proximal end of the second mechanical arm 12, a second mechanical arm distal base coordinate system E5 fixed to the distal end of the second mechanical arm 12, an endoscope base coordinate system E6 fixed to the endoscope 21, and a camera base coordinate system E7 fixed to the camera 25. Based on these base coordinate systems, the coordinate data of an object can be conveniently converted between different coordinate systems, assisting the kinematic and inverse kinematic calculations.
S92 the surgical robot base coordinate system E1 is set as the reference coordinate system. In other embodiments, other coordinate systems may be set as the reference coordinate system.
S93 acquires sensor information including joint position information of the first robot arm 11 and joint position information of the second robot arm 12.
S94, establishing conversion relationships from coordinates in one of the base coordinate systems to coordinates in another, the conversion relationships being functions of the sensor information.
Based on the above setting, step S40 and step S50 may be refined as follows:
step S40 includes: coordinates of the lesion tissue in the camera-based coordinate system E7 are acquired based on machine vision, and converted into coordinates in the reference coordinate system based on the conversion relation.
Step S50 includes: coordinates of the distal end of the surgical instrument in the second robot arm distal end coordinate system E5 are acquired, and converted into coordinates in the reference coordinate system based on the conversion relationship.
It should be understood that fig. 6 mainly illustrates the fixing manner of each base coordinate system, and the orientation of the origin and the axes thereof can be set according to actual needs, and is not limited to the case illustrated in fig. 6. In addition, the relative position and direction between the base coordinate systems may also change with the movement of the joints, and is not limited to the case shown in fig. 6.
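In practice such a conversion amounts to chaining homogeneous transforms whose entries are functions of the joint sensor readings; the sketch below assumes the conversion relationship is represented as an ordered list of 4x4 matrices, which is an illustrative choice rather than the patent's representation.

```python
import numpy as np


def to_reference(point_xyz, transforms):
    """Carry a point (e.g. from the camera base coordinate system E7) into
    the reference coordinate system E1 by chaining 4x4 homogeneous
    transforms, ordered from the point's own frame toward E1 (assumed)."""
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)  # homogeneous form
    for t in transforms:
        p = t @ p                                           # one frame closer to E1
    return p[:3]
```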
In this embodiment, the application scenario includes: preoperative positioning, positioning after preoperative instrument entry, intraoperative tracking and positioning, and postoperative instrument evacuation positioning. In other embodiments, at least a portion of the above application scenarios may be included.
To facilitate the preoperative positioning, some data can be computed in advance, which also makes the subsequent logic easy to extend. In an embodiment, the application scenario includes the preoperative positioning; the preparation information further includes a direction vector of each characteristic tissue pointing to the lesion tissue, and the distance of each characteristic tissue from the lesion tissue.
Referring to fig. 7, within the acquisition and modeling region, the lesion tissue 41 is identified as number 0 and the characteristic tissues 42 are identified as numbers 1, 2, 3, …, n (where n is the total number of characteristic tissues 42). Of course, other identification manners may be adopted; the above is only an example and does not limit the identification manner of the characteristic tissues, the tissue features, or the identifier or number assigned to each characteristic tissue obtained by the model-based intelligent characteristic tissue recognition technique.
Next, the coordinates of each tissue are acquired; the coordinate system may be chosen freely, for example the default coordinate system used during data acquisition. The coordinates of the characteristic tissue 42 with number i are (xi, yi, zi), where i ranges from 1 to n, and the coordinates of the lesion tissue 41 are (x0, y0, z0). Neither the method of establishing the coordinate system nor the method of choosing the preset point representing each tissue is limited here; for example, the point may be the center of the tissue's minimum enveloping sphere or the center of its enveloping surface.
Based on the above coordinates, the direction vector between the characteristic tissue 42 with number i and the lesion tissue 41 is $\vec{n}_i = (x_i - x_0, y_i - y_0, z_i - z_0)$, and the distance $L_i$ between them can be calculated as

$$L_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2 + (z_i - z_0)^2}.$$
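This precomputation is a few lines of vector arithmetic; a minimal sketch with assumed names:

```python
import numpy as np


def precompute_guidance(lesion_xyz, feature_xyz_by_id):
    """For each characteristic tissue i, precompute the direction vector
    n_i, with components (xi - x0, yi - y0, zi - z0) as in the text, and
    the distance L_i to the lesion tissue. Illustrative helper only."""
    x0 = np.asarray(lesion_xyz, dtype=float)
    guidance = {}
    for i, xyz in feature_xyz_by_id.items():
        n_i = np.asarray(xyz, dtype=float) - x0  # (xi - x0, yi - y0, zi - z0)
        guidance[i] = (n_i, float(np.linalg.norm(n_i)))  # (vector, L_i)
    return guidance
```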
Preferably, step S20 includes: when the application scenario is the preoperative positioning, executing step S30. Step S60 includes: when the application scenario is the preoperative positioning, if the matching result is the lesion tissue 41, the motion vector is 0 (the base signal then corresponds to no motion); if the matching result is a characteristic tissue 42, the motion vector 43 is constructed based on the direction vector of the characteristic tissue 42 pointing to the lesion tissue 41 and the distance between them. The motion vector 43 can be understood with reference to fig. 8, in which the endoscope 21 on the right is in the initial insertion state 51 and the endoscope 21 on the left is in the adjusted state 52. Step S80 includes: when the application scenario is the preoperative positioning, outputting the base signal to drive the first mechanical arm to move. So configured, the motion vector 43 can be constructed directly from the direction vector and the distance without further coordinate transformation; the logic is clear and intuitive, which eases design and later maintenance.
The above process can also be understood according to fig. 9, and when the application scenario is the preoperative positioning, the flow of the method is as follows.
S201, medical staff guide the endoscope-holding mechanical arm so that the lens end of the endoscope enters the patient's body. This step precedes the method itself.
S202, identifying the current surgical field image tissue of the endoscope and acquiring the coordinate information of the corresponding tissue position.
S203, matching the surgical field image tissue with the CT modeling tissue. Steps S202, S203 correspond to step S30.
S204, acquiring the matched tissue and its direction vector; based on the direction vector, coordinate transformation yields the motion vector of the end of the endoscope-holding mechanical arm (and of the shooting point 27). Step S204 corresponds to step S60.
S205, the surgical robot obtains the target position of each joint of the mechanical arm through inverse kinematics calculation, and the mechanical arm moves to the target position to obtain the current surgical field. Step S205 corresponds to step S70 and step S80.
S206, identifying and matching the current operation field image tissue with the target focus until the target focus tissue is positioned. Steps S202 and S203 are repeated to ensure validity of the motion result.
So configured, based on preoperative CT modeling information and tissue recognition, the direction vector guides the motion and the target lesion is positioned adaptively; accurate positioning is achieved by visual means, and self-searching and automatic positioning of the target lesion tissue are realized.
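The scenario's S60 branch then reduces to a short decision; the sketch below reuses the `precompute_guidance` helper above, and the sign convention for stepping toward the lesion is an assumption.

```python
import numpy as np


def preoperative_motion_vector(matched_id, guidance):
    """S60 under preoperative positioning (sketch): zero motion when the
    matched tissue is the lesion (number 0); otherwise a step built from
    the stored direction vector and distance."""
    if matched_id == 0:            # matching result is the lesion tissue
        return np.zeros(3)
    n_i, l_i = guidance[matched_id]
    # Step from the matched characteristic tissue toward the lesion; the
    # sign convention here is an assumption made for illustration.
    return -(n_i / np.linalg.norm(n_i)) * l_i
```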
To facilitate the scenario of positioning after preoperative instrument entry, the method is configured as follows.
Step S20 includes: when the application scenario is positioning after preoperative instrument entry, executing step S30.
Step S60 includes: when the application scenario is positioning after preoperative instrument entry, calculating the distance between the center of the surgical field and the lesion tissue; if the distance is greater than the first distance threshold, calculating the motion vector by which the surgical field moves into alignment with the lesion tissue, otherwise the motion vector is 0. Here "alignment" is to be understood in the engineering sense, i.e. an error within a preset range between the two is allowed.
Step S80 includes: when the application scenario is positioning after preoperative instrument entry, outputting the base signal to drive the first mechanical arm to move, and then outputting a control instruction to drive the endoscope to move to an extreme position away from the lesion tissue. The extreme position is the farthest position the mechanical structure can reach, or the farthest position it is allowed to reach under a preset rule, and may be set according to the actual situation in different embodiments. For example, in one embodiment, the extreme position is the trocar tip position, which is calculated from the structural configuration.
The above process can also be understood from fig. 10; when the application scenario is positioning after preoperative instrument entry, the flow of the method is as follows.
S301, an instrument-entry command is obtained, including but not limited to a software command or a hardware button command, and the robot enters the preoperative instrument-entry preparation phase. That is, whether the application scenario is positioning after preoperative instrument entry may be determined by whether the instrument-entry command has been acquired.
S302, extracting the center position of the current endoscope visual field image; extracting the coordinate origin of the lesion tissue in the current surgical field image; and calculating the distance vector between the image center and the coordinate origin of the target lesion tissue. Step S302 corresponds to step S30.
S303, comparing the modulus Ld of the distance vector with the first distance threshold la; if Ld is larger than la, meaning the lesion tissue is not centered in the current surgical field, performing inverse kinematics calculation and motion adjustment of the endoscope mechanical arm and moving the image center to the tissue center along the distance vector, whereby the endoscope completes the centering adjustment. Step S303 corresponds to steps S60 and S70 and the first half of step S80.
S304, the endoscope-holding arm is lifted and adjusted to move to the trocar tip position, which is calculated from the structural configuration, so as to obtain the maximized visual field. Step S304 corresponds to the latter half of step S80.
S305, the instrument-entry visual field preparation ends.
S306, the medical staff insert the surgical instrument. S305 and S306 are steps subsequent to the method.
This adjustment process is based on endoscope vision technology and mechanical arm technology: the lesion tissue is placed at the center of the surgical field and the surgical field is maximized, so that the surgical instruments can conveniently enter the patient's body; meanwhile, the adjustment is carried out automatically and efficiently.
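The centering decision of S302 and S303 can be condensed into a few lines; `la` follows the text's name for the first distance threshold, everything else is assumed.

```python
import numpy as np


def centering_motion_vector(field_center, lesion_origin, la):
    """Sketch of S302/S303: move the surgical field onto the lesion tissue
    only when the offset exceeds the first distance threshold la."""
    d = np.asarray(lesion_origin, dtype=float) - np.asarray(field_center, dtype=float)
    ld = np.linalg.norm(d)   # modulus Ld of the distance vector
    if ld > la:              # lesion not centered in the current field
        return d             # move the image center onto the tissue center
    return np.zeros_like(d)  # already centered: no motion needed
```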
To facilitate intraoperative tracking and positioning, the method is configured as follows.
Step S20 includes: when the application scenario is the intraoperative tracking and positioning, executing steps S30, S40 and S50.
Step S60 includes: when the application scenario is the intraoperative tracking and positioning, calculating an enveloping circle such that the lesion tissue and the tips of the surgical instruments are located inside or on the circumference of the enveloping circle; calculating a surgical field circle based on the enveloping circle, the surgical field circle being concentric with the enveloping circle and its diameter exceeding that of the enveloping circle by a surgical field threshold (for example, if the diameter of the enveloping circle is d1 and the surgical field threshold is H, the diameter d2 of the surgical field circle satisfies d2 = d1 + H); and calculating the motion vector by which the surgical field moves into coincidence with the surgical field circle.
Step S80 includes: when the application scenario is the intraoperative tracking and positioning, outputting the base signal to drive the first mechanical arm to move.
Specifically, in an embodiment, the step of calculating the enveloping circle is: a circumscribed circle of the lesion tissue and at least part of the surgical instrument tips is set as the enveloping circle, which also envelops the remaining instrument tips. When there are two surgical instrument tips, the circumscribed circle of the lesion tissue and the two tips is set as the enveloping circle. Referring to fig. 11, the enveloping circle 61 circumscribes the lesion tissue 41 and the tips 44 of the two surgical instruments, and the surgical field circle 62 is enlarged relative to the enveloping circle 61 to diameter d1 + H, i.e., its radius grows by 0.5H. When the number of instrument tips is not two, any logic may be used to select some of the instruments to generate the circumscribed circle, with the remaining instruments enclosed within the enveloping circle. In this embodiment, the instrument positions are calculated from the mechanical arm configuration; during adaptive adjustment no instrument is lost from the visual field, automatic intraoperative adjustment and tracking are accomplished at the same time, the lesion and the left and right instruments remain in an optimal visual field area, and the threshold H is settable: the larger H is, the larger the visual field range.
In another embodiment, the lesion tissue is taken as the center of the circle, and the smallest circle enveloping the tips of all the surgical instruments is set as the enveloping circle. When there are two instrument tips, referring to fig. 12, the tip 44 of one instrument lies within the enveloping circle 61 and the tip 44 of the other lies exactly on the enveloping circle 61, so that the enveloping circle is minimal. In this embodiment, the instrument positions are calculated from the mechanical arm configuration; during adaptive adjustment no instrument is lost from the visual field, the lesion is always at the center of the surgical field while the left and right instruments are enveloped, the lesion tissue is well positioned and tracked, and instrument 1 and instrument 2 remain inside the surgical field.
In yet another embodiment, the center of gravity of at least part of the instrument tips is taken as the circle center, where the x-coordinate of the center of gravity is the average of the x-coordinates of the instrument tips and its y-coordinate is the average of their y-coordinates; the smallest circle enveloping all the instrument tips and the focal tissue is set as the enveloping circle. When there are two instrument tips, the midpoint between the two tips is the circle center, and the smallest circle enveloping both tips and the focal tissue is the enveloping circle. Referring to fig. 13, the tips 44 of the two surgical instruments sit at opposite ends of a diameter of the enveloping circle 61. In this embodiment the instrument positions are computed from the configuration of the mechanical arms, so no instrument is lost from the field of view during adaptive adjustment; taking the midpoint of the left and right instruments as the center of the surgical field gives better tracking of instrument 1 and instrument 2, while the focal tissue remains within the surgical field.
It will be appreciated that other ways of calculating the enveloping circle may also be provided; the three embodiments above are sketched in the code below.
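As a non-limiting geometric sketch of the three embodiments (2-D image coordinates assumed; all function names are ours): a circumscribed circle of three points for the first, a lesion-centered minimal circle for the second, and a tip-centroid-centered minimal circle for the third.

```python
import numpy as np

def circumcircle(p1, p2, p3):
    """Circumscribed circle of three 2-D points (assumed non-collinear),
    e.g. the focal tissue and two instrument tips (first embodiment)."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(p1, dtype=float))

def lesion_centered_envelope(lesion, tips):
    """Second embodiment: smallest circle centered on the focal tissue
    that envelops every instrument tip."""
    c = np.asarray(lesion, dtype=float)
    r = max(np.linalg.norm(np.asarray(t, dtype=float) - c) for t in tips)
    return c, r

def centroid_centered_envelope(lesion, tips):
    """Third embodiment: circle centered on the average of the tip
    coordinates, enveloping all tips and the focal tissue."""
    pts = [np.asarray(t, dtype=float) for t in tips]
    c = np.mean(pts, axis=0)  # per-axis averages of the tip coordinates
    r = max(np.linalg.norm(p - c) for p in pts + [np.asarray(lesion, dtype=float)])
    return c, r
```

For the two-instrument case of fig. 11, circumcircle(lesion, tip1, tip2) gives the enveloping circle directly; in the cases of figs. 12 and 13 the farthest enclosed point lies exactly on the returned circle.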
To achieve a good result in the post-operative instrument evacuation positioning scenario, the method is configured as follows.
Step S20 includes: when the application scenario is the post-operative instrument evacuation positioning, executing steps S30, S40 and S50.
Step S60 includes: when the application scenario is the post-operative instrument evacuation positioning, calculating an enveloping circle such that the focal tissue and the instrument tips all lie inside or on its circumference; calculating the distance between the center of the surgical field and the center of the enveloping circle; and, if that distance is greater than a second distance threshold, calculating the motion vector that moves the surgical field into alignment with the enveloping circle, otherwise setting the motion vector to 0. The second distance threshold may equal the first distance threshold or may be set independently.
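A minimal sketch of this decision, under the same 2-D image-coordinate assumption as above (the function name is ours; the threshold argument lb is borrowed from step S404 below):

```python
import numpy as np

def evacuation_motion(field_center, envelope_center, lb):
    """Return the re-centering motion vector if the surgical field center
    is farther than the second distance threshold lb from the enveloping
    circle's center; otherwise return the zero vector."""
    v = np.asarray(envelope_center, dtype=float) - np.asarray(field_center, dtype=float)
    return v if np.linalg.norm(v) > lb else np.zeros_like(v)
```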
Step S80 includes: when the application scenario is the post-operative instrument evacuation positioning, outputting the basic signal to drive the first mechanical arm to move, and then outputting a control instruction to drive the endoscope to its limit position away from the enveloping circle.
The above process can also be understood with reference to fig. 14; when the application scenario is the post-operative instrument evacuation positioning, the method proceeds as follows.
S401, the robot receives an instrument evacuation instruction, including but not limited to a software instruction or a hardware-button instruction, and enters the post-operative instrument evacuation preparation stage. That is, whether the application scenario is the post-operative instrument evacuation positioning may be determined by whether an instrument evacuation instruction has been received.
S402, obtaining the center position of the current surgical field image of the endoscope; obtaining the positions of the instrument tips; and calculating the center position of the enveloping circle. Any of the enveloping circle calculations described above may be used here.
S403, calculating the distance between the center of the enveloping circle and the center of the image. Steps S402 and S403 correspond to step S30.
S404, comparing the magnitude Ld of the distance vector with the second distance threshold lb. If Ld > lb, the current surgical field center is not at the evacuation center; inverse kinematics is then computed for the endoscope's mechanical arm and the arm is moved so that the image center travels along the distance vector to the evacuation center, completing the centering adjustment of the endoscope. Step S404 corresponds to steps S60 and S70 and to the first half of step S80.
S405, the endoscope-holding arm is lifted so that it moves to the top position of the trocar, obtaining a maximized field of view. Step S405 corresponds to the latter half of step S80.
S406, preparation of the instrument evacuation field of view is complete.
S407, the medical staff withdraw the surgical instruments from the patient. S406 and S407 are the subsequent steps of the method.
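The flow S401 to S407 can be summarized in the following sketch; the robot interface (get_field_center, compute_envelope_circle, move_endoscope_by, retract_to_trocar_top) is a hypothetical placeholder, not an actual robot API.

```python
import numpy as np

def prepare_instrument_evacuation(robot, lb):
    # S402: current surgical field image center and enveloping circle center
    field_center = robot.get_field_center()
    envelope_center, _radius = robot.compute_envelope_circle()

    # S403/S404: if the offset magnitude Ld exceeds lb, re-center the
    # endoscope (inverse kinematics is performed inside move_endoscope_by)
    v = np.asarray(envelope_center, dtype=float) - np.asarray(field_center, dtype=float)
    if np.linalg.norm(v) > lb:
        robot.move_endoscope_by(v)

    # S405: lift the endoscope-holding arm to the trocar top to maximize
    # the field of view; S406/S407 (readiness and withdrawal) follow.
    robot.retract_to_trocar_top()
```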
This adjustment process, based on endoscopic vision and mechanical arm technology, places the focal tissue and the instrument tips at the evacuation center of the surgical field and maximizes the surgical field, making it convenient to withdraw the surgical instruments from the patient's body; moreover, the adjustment is performed automatically and efficiently.
In one embodiment, the method further comprises the step of indicating the current application scenario, and/or indicating that the application scenario is switching, through a status light and/or a buzzer. The specific manner of indication can be set according to actual needs.
For example, the status light and the buzzer may be disposed on the image platform, including but not limited to the image host, the image cart, or the display; they may also be disposed on the doctor's cart or the patient cart of the surgical robot; and they may be external units connected by wire, wirelessly, or otherwise.
The application scenario may be indicated by changes in the status lights, including but not limited to color changes and changes in blinking frequency, as well as combinations of multiple status lights, changes in the order of lit positions, and changes in color combinations, so as to characterize and distinguish the application scenarios of the surgical robot, such as preoperative positioning, post-instrument-entry positioning, intraoperative tracking positioning, and post-operative instrument evacuation positioning.
The application scenario may likewise be indicated by changes in the buzzer, including but not limited to changes in loudness, pitch, and beep frequency, so as to characterize and distinguish state changes such as preoperative positioning, post-instrument-entry positioning, intraoperative tracking positioning, and post-operative instrument evacuation positioning.
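One possible mapping of scenarios to indications is sketched below; the concrete colors, blink rates, and tones are our assumptions, since the patent leaves them open.

```python
from enum import Enum

class Scenario(Enum):
    PREOP_POSITIONING = "preoperative positioning"
    POST_INSTRUMENT_ENTRY = "post-instrument-entry positioning"
    INTRAOP_TRACKING = "intraoperative tracking positioning"
    POSTOP_EVACUATION = "post-operative instrument evacuation positioning"

# (light color, blink frequency in Hz, buzzer pattern); all values are
# illustrative assumptions, not specified by the patent
INDICATIONS = {
    Scenario.PREOP_POSITIONING: ("green", 1.0, "single short beep"),
    Scenario.POST_INSTRUMENT_ENTRY: ("blue", 1.0, "double beep"),
    Scenario.INTRAOP_TRACKING: ("white", 0.5, "silent"),
    Scenario.POSTOP_EVACUATION: ("yellow", 2.0, "long beep"),
}
```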
To help the surgeon visually judge, during the operation, how suitable the current surgical field is, the method further comprises the steps of: calculating an enveloping circle such that the focal tissue and the instrument tips all lie inside or on its circumference; and displaying a surgical field appropriateness rate, where the appropriateness rate is the ratio of the current diameter of the surgical field to an expected diameter, the expected diameter being the diameter of the enveloping circle plus the surgical field threshold.
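A one-line sketch of the displayed quantity (illustrative names, consistent with the threshold H used earlier):

```python
def surgical_field_appropriateness(current_diameter, envelope_diameter, H):
    """Appropriateness rate: current surgical field diameter divided by
    the expected diameter (enveloping circle diameter plus threshold H)."""
    return current_diameter / (envelope_diameter + H)

# e.g. a 55 mm field against a 40 mm enveloping circle with H = 10 mm
# gives 55 / 50 = 1.1, slightly larger than expected
```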
By displaying the appropriateness rate, the surgeon can judge in real time whether the current surgical field is suitable for continuing the operation.
In one embodiment, the panel content of the display is as shown in fig. 15: on the left side of the panel, the enveloping circle 61, the surgical field circle 62 (whose diameter is the expected diameter) and the extent of the surgical field 63 are displayed graphically, while on the right side the surgical field appropriateness rate 72 and the status light 71 are shown. This panel content helps medical staff grasp the current surgical field condition and thus perform the operation better.
In summary, the present embodiments provide a surgical field tracking and adjusting method and a surgical robot system. The surgical field tracking and adjusting method comprises the following steps: acquiring preparation information based on a CT image of the operation area; identifying tissue in the endoscope's surgical field based on machine vision and matching it against the feature tissue or the focal tissue; acquiring the coordinates of the focal tissue and of the instrument tips in a reference coordinate system; determining a motion vector; performing inverse kinematics calculation based on the motion vector; and outputting a control signal corresponding to the application scenario to drive the first mechanical arm, thereby causing a preset object to appear at a preset position in the surgical field, where the preset object includes the focal tissue and/or the instrument tips. Configured in this way, the surgical field is adjusted automatically by algorithm, lowering the demands on the operating personnel, improving surgical efficiency, and solving the prior-art problems that manual surgical field adjustment entails: a high skill threshold, low efficiency, and slowness.
The above description covers only preferred embodiments of the present invention and is not intended to limit its scope; any variations and modifications made by those skilled in the art in light of the above disclosure fall within the protection scope of the present invention.

Claims (15)

1. A method for tracking and adjusting a surgical field, the method comprising the steps of:
s10, acquiring preparation information based on the CT image of the operation area, wherein the preparation information comprises model information and position information of characteristic tissues in the operation area and model information and position information of lesion tissues;
s20, identifying and/or obtaining position information of a preset object based on an application scenario, where the preset object includes at least one of the focal tissue, the feature tissue, and a distal end of a surgical instrument;
S60, determining a motion vector of a shooting point based on the execution result of step S20 and the application scenario; the shooting point lies on the line of sight of a camera and coincides with the photographed target, the camera belongs to an endoscope, and the camera is disposed at the front end of the endoscope;
s70, performing inverse kinematics calculation based on the motion vector to obtain a basic signal; and,
s80, outputting a control signal corresponding to the application scene to drive the first mechanical arm to move, and further driving the focal tissue and/or the tail end of the surgical instrument to appear at a preset position in the operation field; the endoscope is fixed on the first mechanical arm and moves under the driving of the first mechanical arm, and the control signal comprises the basic signal.
2. The surgical field tracking and adjusting method according to claim 1, wherein step S10 includes:
performing tissue identification based on the CT image to obtain an identification result, wherein the identification result comprises model information and position information of tissues to be distinguished, and the tissues to be distinguished comprise the characteristic tissues, the focus tissues and tissues to be removed;
displaying the identification result on the CT image in an overlapping manner;
acquiring identification information, wherein the identification information comprises information for distinguishing the characteristic tissue, the lesion tissue and the tissue to be removed, or the identification information comprises information for distinguishing the characteristic tissue, the lesion tissue and the tissue to be removed, information for correcting the characteristic tissue and information for correcting the lesion tissue; and,
determining the provisioning information based on the identification information.
3. The surgical field tracking and adjusting method according to claim 1, wherein the step S20 specifically comprises: performing at least a part of the steps S30, S40, and S50 based on the application scenario; wherein,
step S30 includes: identifying tissue in a surgical field of the endoscope based on machine vision and matching with the feature tissue or the lesion tissue;
step S40 includes: acquiring coordinates of the lesion tissues in a reference coordinate system based on machine vision and kinematic calculation;
step S50 includes: acquiring coordinates of the tip of the surgical instrument in the reference coordinate system based on kinematic calculations; the surgical instrument is fixed on the second mechanical arm and is driven by the second mechanical arm to move.
4. A surgical field tracking and adjusting method according to claim 3, characterized in that it further comprises the steps of:
establishing coordinate systems, comprising a surgical robot base coordinate system fixed to the robot base, a first mechanical arm proximal coordinate system fixed to the proximal end of the first mechanical arm, a first mechanical arm distal coordinate system fixed to the distal end of the first mechanical arm, a second mechanical arm proximal coordinate system fixed to the proximal end of the second mechanical arm, a second mechanical arm distal coordinate system fixed to the distal end of the second mechanical arm, an endoscope coordinate system fixed to the endoscope, and a camera coordinate system fixed to the camera;
setting the surgical robot base coordinate system as the reference coordinate system;
acquiring sensor information, wherein the sensor information comprises joint position information of the first mechanical arm and joint position information of the second mechanical arm; and,
establishing a conversion relationship from coordinates in one of the aforementioned coordinate systems to coordinates in another, the conversion relationship being a function of the sensor information;
step S40 includes: acquiring coordinates of the lesion tissue in the camera coordinate system based on machine vision, and converting them into coordinates in the reference coordinate system based on the conversion relationship;
step S50 includes: acquiring coordinates of the distal end of the surgical instrument in the second mechanical arm distal coordinate system, and converting them into coordinates in the reference coordinate system based on the conversion relationship.
5. The surgical field tracking and adjusting method according to claim 3, characterized in that the application scenario includes at least one of the following: preoperative positioning, post-instrument-entry positioning, intraoperative tracking positioning, and post-operative instrument evacuation positioning.
6. The surgical field tracking and adjusting method according to claim 5, characterized in that the application scenario includes the preoperative positioning; the preparation information further includes a direction vector from the feature tissue to the lesion tissue, and the distance between the feature tissue and the lesion tissue.
7. The surgical field tracking and adjusting method according to claim 6, wherein step S20 includes: when the application scenario is the preoperative positioning, executing step S30;
step S60 includes: when the application scenario is the preoperative positioning, if the matching result is the lesion tissue, the motion vector is 0; if the matching result is the feature tissue, constructing the motion vector based on the direction vector from the feature tissue to the lesion tissue and the distance between them;
step S80 includes: when the application scenario is the preoperative positioning, outputting the basic signal to drive the first mechanical arm to move.
8. The surgical field tracking and adjusting method according to claim 5, characterized in that the application scenario includes the post-instrument-entry positioning;
step S20 includes: when the application scenario is the post-instrument-entry positioning, executing step S30;
step S60 includes: when the application scenario is the post-instrument-entry positioning, calculating the distance between the center of the surgical field and the lesion tissue; if the distance is greater than a first distance threshold, calculating the motion vector that moves the surgical field into alignment with the lesion tissue, otherwise the motion vector is 0;
step S80 includes: when the application scenario is the post-instrument-entry positioning, outputting the basic signal to drive the first mechanical arm to move, and then outputting a control command to drive the endoscope to its limit position away from the lesion tissue.
9. The surgical field tracking and adjusting method according to claim 5, characterized in that the application scenario includes the intraoperative tracking positioning;
step S20 includes: when the application scenario is the intraoperative tracking positioning, executing step S30, step S40 and step S50;
the step S60 includes: when the application scenario is the intraoperative tracking positioning, calculating an enveloping circle, the lesion tissue and the distal ends of the surgical instruments all lying inside or on the circumference of the enveloping circle;
calculating a surgical field circle based on the enveloping circle, wherein the surgical field circle is concentric with the enveloping circle, and the diameter of the surgical field circle exceeds the diameter of the enveloping circle by a surgical field threshold; and,
calculating the motion vector of the surgical field moving to coincide with the surgical field circle;
step S80 includes: when the application scenario is the intraoperative tracking positioning, outputting the basic signal to drive the first mechanical arm to move.
10. The surgical field tracking and adjusting method according to claim 9, wherein the step of calculating the envelope circle is one of the following:
setting a circle circumscribing the lesion tissue and at least part of the distal ends of the surgical instruments as the enveloping circle, the enveloping circle also enveloping the remaining distal ends of the surgical instruments; or,
taking the lesion tissue as the circle center, and setting the smallest circle enveloping the distal ends of all the surgical instruments as the enveloping circle; or,
taking the center of gravity of at least part of the distal ends of the surgical instruments as the circle center, and setting the smallest circle enveloping all the distal ends of the surgical instruments and the lesion tissue as the enveloping circle.
11. The surgical field tracking and adjusting method according to claim 5, wherein the application scenario includes the post-operative instrument evacuation positioning;
step S20 includes: when the application scenario is the post-operative instrument evacuation positioning, executing steps S30, S40 and S50;
step S60 includes: when the application scenario is the post-operative instrument evacuation positioning, calculating an enveloping circle, the lesion tissue and the distal ends of the surgical instruments all lying inside or on the circumference of the enveloping circle;
calculating the distance between the center of the surgical field and the center of the enveloping circle; and,
if the distance is greater than a second distance threshold, calculating the motion vector of the surgical field moving to be aligned with the envelope circle, otherwise, the motion vector is 0;
step S80 includes: when the application scenario is the post-operative instrument evacuation positioning, outputting the basic signal to drive the first mechanical arm to move, and then outputting a control instruction to drive the endoscope to its limit position away from the enveloping circle.
12. The surgical field tracking and adjusting method according to claim 3, further comprising the steps of:
indicating the current application scenario through a status light and/or a buzzer, and/or indicating that the application scenario is switching.
13. A surgical field tracking and adjusting method according to claim 3, characterized in that it further comprises the steps of:
calculating an envelope circle, wherein the lesion tissue and the distal ends of the surgical instruments lie inside or on the circumference of the envelope circle; and,
displaying a surgical field appropriateness rate, wherein the surgical field appropriateness rate is the ratio of the current diameter of the surgical field to an expected diameter, and the expected diameter is the diameter of the enveloping circle plus a surgical field threshold.
14. A surgical robotic system comprising a robot module, an endoscope module, and an adaptive adjustment module; the robot module comprises a first mechanical arm for clamping an endoscope and a second mechanical arm for clamping a surgical instrument; the endoscope module comprises the endoscope; the adaptive adjusting module is used for outputting a control instruction to drive the first mechanical arm to move based on the surgical field tracking and adjusting method of any one of claims 1 to 13.
15. The surgical robotic system as claimed in claim 14, further comprising at least one of the following features:
the robot module further comprises a main control arm, and the main control arm is used for collecting control actions of medical personnel and converting the control actions into control instructions of the first mechanical arm or the second mechanical arm;
the endoscope module further comprises a cold light source, an image processor and a display; the cold light source is used for illuminating the surgical field environment, and the image processor is used for processing the surgical field image information and sending it to the display for display; and,
the adaptive adjustment module comprises a CT reconstruction and diagnosis unit, a tissue identification unit, a computing unit and a storage unit; the tissue identification unit is used for identifying and labeling patient tissue; the computing unit is used for optimizing and computing the surgical field; and the storage unit is used for storing threshold-type information.
CN202210602492.6A 2022-05-30 2022-05-30 Surgical field tracking and adjusting method and surgical robot system Pending CN114948209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210602492.6A CN114948209A (en) 2022-05-30 2022-05-30 Surgical field tracking and adjusting method and surgical robot system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210602492.6A CN114948209A (en) 2022-05-30 2022-05-30 Surgical field tracking and adjusting method and surgical robot system

Publications (1)

Publication Number Publication Date
CN114948209A true CN114948209A (en) 2022-08-30

Family

ID=82958490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210602492.6A Pending CN114948209A (en) 2022-05-30 2022-05-30 Surgical field tracking and adjusting method and surgical robot system

Country Status (1)

Country Link
CN (1) CN114948209A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116966416A (en) * 2023-07-04 2023-10-31 上海交通大学医学院附属第九人民医院 Artificial cochlea microelectrode implantation system and method for telecentric point control
CN117506965A (en) * 2024-01-08 2024-02-06 武汉联影智融医疗科技有限公司 Positioning system, method, computer device and storage medium of surgical robot
CN117506965B (en) * 2024-01-08 2024-04-12 武汉联影智融医疗科技有限公司 Positioning system, method, computer device and storage medium of surgical robot
CN118236166A (en) * 2024-05-27 2024-06-25 华南师范大学 Automatic tracking system and method for surgical instrument

Similar Documents

Publication Publication Date Title
US11514576B2 (en) Surgical system with combination of sensor-based navigation and endoscopy
CN114948209A (en) Surgical field tracking and adjusting method and surgical robot system
CN110215284B (en) Visualization system and method
CN113940755B (en) Surgical planning and navigation method integrating surgical operation and image
AU2015202805B2 (en) Augmented surgical reality environment system
US11416995B2 (en) Systems, devices, and methods for contactless patient registration for a medical procedure
JP4822634B2 (en) A method for obtaining coordinate transformation for guidance of an object
CN106572887B (en) Image integration and robotic endoscope control in an X-ray suite
CN108348295A (en) Motor-driven full visual field adaptability microscope
CN115005981A (en) Surgical path planning method, system, equipment, medium and surgical operation system
CN111839727A (en) Prostate particle implantation path visualization method and system based on augmented reality
CN113920187A (en) Catheter positioning method, interventional operation system, electronic device, and storage medium
CN114452508B (en) Catheter motion control method, interventional operation system, electronic device, and storage medium
CN112704566B (en) Surgical consumable checking method and surgical robot system
CN117122414A (en) Active tracking type operation navigation system
CN112869856B (en) Two-dimensional image guided intramedullary needle distal locking robot system and locking method thereof
CN115227349A (en) Lung puncture robot based on optical tracking technology
CN212281375U (en) C-shaped arm X-ray machine with operation positioning and navigation functions
CN115908436A (en) Human-computer interaction preoperative fractured bone segmentation and splicing and intraoperative display method and system
CN112450995B (en) Situation simulation endoscope system
CN116492064A (en) Master-slave motion control method based on pose identification and surgical robot system
CN113925611A (en) Matching method, device, equipment and medium for object three-dimensional model and object entity
CN114689041B (en) Magnetic navigation positioning system, method and related equipment based on two-dimensional image
US20230248467A1 (en) Method of medical navigation
CN116728394A (en) Control method of robot system based on positioning image and robot system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination