CN116230196A - Automatic surgical system, automatic surgical visual guidance method and medical equipment - Google Patents

Automatic surgical system, automatic surgical visual guidance method and medical equipment

Info

Publication number
CN116230196A
CN116230196A (application CN202310193964.1A)
Authority
CN
China
Prior art keywords
decision
visual
data
target
surgical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310193964.1A
Other languages
Chinese (zh)
Inventor
Name withheld at inventor's request
王家寅 (Wang Jiayin)
Current Assignee
Shanghai Microport Medbot Group Co Ltd
Original Assignee
Shanghai Microport Medbot Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Microport Medbot Group Co Ltd filed Critical Shanghai Microport Medbot Group Co Ltd
Priority to CN202310193964.1A priority Critical patent/CN116230196A/en
Publication of CN116230196A publication Critical patent/CN116230196A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/108 Computer aided selection or customisation of medical implants or cutting guides
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Robotics (AREA)
  • Veterinary Medicine (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Manipulator (AREA)

Abstract

This specification relates to the technical field of automated surgical robots and specifically discloses an automated surgical system, an automated surgical visual guidance method, and a medical device. The system comprises a scene acquisition system, a visual guidance system, an interactive decision system, and a motion control system. The scene acquisition system acquires target surgical scene data and sends it to the visual guidance system; the visual guidance system generates visual decision data based on the target surgical scene data and the target surgical information and sends it to the interactive decision system; the interactive decision system displays the visual decision data and receives a scheme adjustment instruction to generate adjusted visual decision data; the visual guidance system generates target decision data based on the adjusted visual decision data and sends it to the motion control system; and the motion control system controls the surgical instrument to perform the surgical operation according to the target decision data. This scheme can improve the accuracy of surgical visual decisions, reduce surgical risk, and improve surgical quality.

Description

Automatic surgical system, automatic surgical visual guidance method and medical equipment
Technical Field
The present disclosure relates to the field of automated surgical robots, and more particularly to an automated surgical system, an automated surgical visual guidance method, and a medical device.
Background
Currently, in robotic automated surgical procedures, the vision guidance system typically provides visual support for robot motion, including the positions of moving objects, instruments, and the like. The criteria by which the vision algorithm system processes data are set by experimenters, with no standard library available for reference. The visual processing result is produced entirely by the vision algorithm system and is output directly to the surgical robot to perform the surgical operation.
However, because there is no uniform evaluation criterion to normalize the visual processing results, it is difficult to quantitatively analyze their quality, and the quality of the surgical outcome depends heavily on the quality of the vision algorithm's results. Situations in which the vision algorithm performs poorly, or even erroneously, are difficult to avoid, which increases potential surgical risk and reduces surgical quality.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the present specification provide an automated surgical system, an automated surgical visual guidance method, and a medical device, so as to solve the prior-art problem that poor vision-algorithm processing results increase surgical risk.
Embodiments of the present specification provide an automated surgical system comprising a scene acquisition system, a vision guidance system, an interactive decision system, and a motion control system, wherein,
the scene acquisition system is used for acquiring target operation scene data and sending the target operation scene data to the vision guide system; the target surgical scene data includes intra-operative image data;
the vision guidance system is used for acquiring target operation information; receiving the target surgical scene data; generating visual decision data based on the target surgical scene data and the target surgical information; transmitting the visual decision data to the interactive decision system;
the interactive decision system is used for receiving the visual decision data and performing graphical display; receiving a scheme adjustment instruction input by a doctor, and generating adjusted visual decision data based on the scheme adjustment instruction and the visual decision data; the vision guidance system is also used for receiving the adjusted vision decision data and generating target decision data; transmitting the target decision data to the motion control system;
the motion control system is used for controlling the surgical instrument to execute the surgical operation according to the target decision data.
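The data flow among the four subsystems described above can be sketched as follows. This is an illustrative, non-limiting sketch only; all class, function, and data names below are assumptions of the illustration, not part of the disclosed system:

```python
# Illustrative sketch of the four-subsystem data flow: scene acquisition ->
# visual guidance -> interactive decision -> motion control.
# All names and data here are hypothetical, not part of the disclosure.

def scene_acquisition():
    """Scene acquisition system: returns target surgical scene data (intra-operative image)."""
    return {"image": "intra-operative frame", "timestamp": 0}

def visual_guidance(scene_data, surgical_info):
    """Visual guidance system: generates visual decision data from scene and surgery info."""
    return {"targets": [(10, 20), (30, 40)], "path": [0, 1], "info": surgical_info}

def interactive_decision(decision_data, adjustment=None):
    """Interactive decision system: displays the data and applies the doctor's
    scheme adjustment instruction, yielding adjusted visual decision data."""
    adjusted = dict(decision_data)
    if adjustment:
        adjusted.update(adjustment)
    return adjusted

def motion_control(target_decision):
    """Motion control system: drives the surgical instrument per target decision data."""
    return f"executing path {target_decision['path']} over {len(target_decision['targets'])} targets"

scene = scene_acquisition()
decision = visual_guidance(scene, {"type": "suturing", "stage": 1})
adjusted = interactive_decision(decision, adjustment={"path": [1, 0]})  # doctor reorders the path
print(motion_control(adjusted))  # executing path [1, 0] over 2 targets
```

The key design point the sketch mirrors is that the physician's adjustment sits between visual decision generation and motion execution, so the robot never acts on unreviewed algorithm output.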
In one embodiment, the interactive decision system is used to present at least one decision adjustment control to a physician; the interactive decision system is further used for responding to clicking operation of at least one decision adjustment control by a doctor to display a corresponding decision adjustment user interface so as to generate adjusted visual decision data based on adjustment operation input by the doctor; the at least one decision adjustment control comprises at least: target point adjustment control, target point adding and deleting control and path optimization control.
In one embodiment, the visual guidance system is specifically configured to query, in a pre-constructed visual decision criterion library, target visual decision criterion data corresponding to the target surgical information; the visual guidance system is further specifically configured to calculate, under the constraint of the target visual decision standard data, the target surgical information and the target surgical scene data by using a visual decision algorithm, so as to obtain visual decision data; the visual decision standard library comprises visual decision standard data corresponding to each operation type in a plurality of operation types;
the vision guidance system is further configured to send the target vision decision criteria data to the interactive decision system, which is further configured to display the target vision decision criteria data to a physician.
In one embodiment, the vision guidance system is further configured to generate real-time vision information based on the target surgical scene data, and send the real-time vision information to the interactive decision system, the real-time vision information including operation object information and vision operation-related information;
the interactive decision system is used for displaying the operation object information and the visual operation related information to a doctor; the interactive decision system is also used for acquiring and displaying the target operation information.
In one embodiment, the interactive decision system is further configured to present decision-making controls to a physician; in response to a selected operation of the decision-making control by the physician, the interactive decision-making system is further configured to generate decision-making data from the operation data entered by the physician, and to send the decision-making data to the visual guidance system.
The embodiment of the specification also provides an automatic operation vision guiding method, which comprises the following steps:
acquiring target operation information and target operation scene data; the target operation scene data comprise intra-operation image data acquired by a scene acquisition system;
generating visual decision data based on the target surgical scene data and the target surgical information;
The visual decision data are sent to an interactive decision system for graphical display, and the adjusted visual decision data fed back by the interactive decision system based on the visual decision data are received to obtain target decision data;
and sending the target decision data to a motion control system, so that the motion control system controls the surgical instrument to execute the surgical operation according to the target decision data.
In one embodiment, the automated surgical visual guidance method further comprises:
generating real-time visual information based on the target surgical scene data;
transmitting the real-time visual information to the interactive decision system for display; and sending the real-time visual information to a motion control system, so that the motion control system controls the surgical instrument to execute the surgical operation according to the real-time visual information and the target decision data.
In one embodiment, generating visual decision data based on the target surgical scene data and the target surgical information includes:
inquiring corresponding target visual decision standard data in a pre-constructed visual decision standard library; the visual decision standard library comprises visual decision standard data corresponding to each operation type in a plurality of operation types;
And under the constraint of the target visual decision standard data, calculating the target operation information and the target operation scene data by using a visual decision algorithm to obtain visual decision data.
In one embodiment, querying the pre-built visual decision criteria library for corresponding target visual decision criteria data includes:
acquiring target object information and operation scene information;
and inquiring corresponding target visual decision standard data in a pre-constructed visual decision standard library based on the target object information, the surgical scene information and the target surgical information.
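The query step above can be sketched minimally, assuming the criteria library is a mapping keyed by surgery type, target object, and surgical scene. The keys, field names, and values below are illustrative assumptions only, not the actual library contents:

```python
# Hypothetical visual decision criteria library keyed by
# (surgery type, target object info, surgical scene info); contents illustrative.
CRITERIA_LIBRARY = {
    ("suturing", "porcine tissue", "laparoscopic"): {"max_target_spacing_mm": 5.0},
    ("tissue_retrieval", "porcine tissue", "laparoscopic"): {"max_path_length_mm": 120.0},
}

def query_criteria(surgery_type, target_object, scene):
    """Look up the target visual decision criteria data; return None if absent,
    in which case (as the text describes) the physician could decide directly."""
    return CRITERIA_LIBRARY.get((surgery_type, target_object, scene))

print(query_criteria("suturing", "porcine tissue", "laparoscopic"))
```

The returned criteria would then act as constraints on the visual decision algorithm when computing visual decision data.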
In one embodiment, the automated surgical visual guidance method further comprises:
and sending the target visual decision standard data to an interactive decision system for display.
In one embodiment, after obtaining the target decision data, further comprising:
and updating the visual decision algorithm based on the target decision data to obtain an updated visual decision algorithm.
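One simple reading of this update step, purely an assumption of this sketch, is that each physician-confirmed target decision is logged as a supervised example for later refinement of the visual decision algorithm:

```python
# Hypothetical update step: accumulate (scene, confirmed target decision) pairs;
# a real system would retrain or fine-tune its vision model on these examples.
training_examples = []

def update_visual_decision_algorithm(scene_data, target_decision):
    """Record the confirmed decision as a supervised example; return the count."""
    training_examples.append({"scene": scene_data, "label": target_decision})
    return len(training_examples)

n = update_visual_decision_algorithm({"image": "frame_001"}, {"targets": [(12, 34)]})
print(n)  # 1
```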
In one embodiment, receiving adjusted visual decision data fed back by the interactive decision system based on the visual decision data, obtaining target decision data includes:
Receiving adjusted visual decision data fed back by the interactive decision system based on the visual decision data;
and matching the adjusted visual decision data to an endoscope image coordinate system to obtain target decision data.
The embodiments of the present specification also provide an automatic surgical vision guidance device, comprising:
the acquisition module is used for acquiring target operation information and target operation scene data; the target operation scene data comprise intra-operation image data acquired by a scene acquisition system;
the generation module is used for generating visual decision data based on the target operation scene data and the target operation information;
the adjustment module is used for sending the visual decision data to an interactive decision system for graphical display, receiving adjusted visual decision data fed back by the interactive decision system based on the visual decision data, and obtaining target decision data;
and the sending module is used for sending the target decision data to a motion control system so that the motion control system controls the surgical instrument to execute the surgical operation according to the target decision data.
The embodiments of the present specification also provide a medical device comprising a processor and a memory for storing processor-executable instructions, which when executed by the processor implement the steps of the automated surgical vision guidance method described in any of the embodiments above.
The present description embodiments also provide a computer-readable storage medium having stored thereon computer instructions that, when executed, implement the steps of the automated surgical vision guidance method described in any of the embodiments above.
In an embodiment of the present specification, an automated surgical system is provided, comprising a scene acquisition system, a visual guidance system, an interactive decision system, and a motion control system. The visual guidance system can acquire target surgical information and target surgical scene data, where the target surgical scene data can include intraoperative image data acquired by the scene acquisition system. Visual decision data can be generated based on the target surgical scene data and the target surgical information and sent to the interactive decision system for graphical display. The interactive decision system can feed back adjusted visual decision data based on the visual decision data and send it to the visual guidance system; the visual guidance system can receive the adjusted visual decision data to obtain target decision data and send it to the motion control system, so that the motion control system controls a surgical instrument to perform the surgical operation according to the target decision data. In the above scheme, after the visual guidance system generates the visual decision data, the data can be sent to the interactive decision system for adjustment, so as to obtain target decision data that better conforms to the surgical standard and surgical requirements; performing the automated surgery based on the adjusted target decision data can reduce surgical risk and improve surgical quality.
Drawings
The accompanying drawings are included to provide a further understanding of the specification, and are incorporated in and constitute a part of this specification. In the drawings:
FIG. 1 is a schematic diagram showing an application scenario of an automated surgical system according to an embodiment of the present disclosure;
FIG. 2 shows a scene diagram of the visual guidance system's result presentation in an embodiment of the present description;
FIG. 3 shows a scene diagram of visual decision presentation and adjustment at the physician's end in an embodiment of the present description;
FIG. 4 shows a schematic structural diagram of an automated surgical system in an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of a user interface of an interactive decision making system in an embodiment of the present description;
FIG. 6 shows a schematic diagram of visual decision data adjustment in an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of visual decision data adjustment in an embodiment of the present disclosure;
FIG. 8 illustrates a schematic diagram of a physician making visual decision data directly in an embodiment of the specification;
FIG. 9 shows a schematic diagram of a physician making visual decision data directly in an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of a visual decision criteria library in an embodiment of the present disclosure;
FIG. 11 illustrates a schematic diagram of generating real-time visual information based on the target surgical scene data in an embodiment of the present disclosure;
FIG. 12 illustrates an effect diagram of a graphical display of visual decision data in an embodiment of the specification;
FIG. 13 illustrates a flow chart of an automated surgical system performing a procedure in one embodiment of the present disclosure;
FIG. 14 shows a block diagram of an automated surgical system in an embodiment of the present disclosure;
FIG. 15 is a block diagram showing the structure of a data storage unit in one embodiment of the present specification;
FIG. 16 illustrates a flow chart of a process of making interactive decisions between a visual guidance system and an interactive decision system in one embodiment of the present description;
FIG. 17 shows a flow chart of an automated surgical visual guidance method in an embodiment of the present description;
FIG. 18 shows a schematic view of an automated surgical visual guidance device in an embodiment of the present disclosure;
fig. 19 shows a schematic view of a medical device in an embodiment of the present description.
Detailed Description
The principles and spirit of the present specification will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present description, and are not intended to limit the scope of the present description in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that the embodiments of the present description may be implemented as a system, apparatus, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: complete hardware, complete software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
Embodiments of the present specification provide an automated surgical system. Fig. 1 shows a schematic diagram of an application scenario of an automatic surgical system in an embodiment of the present specification. As shown in fig. 1, the automated surgical system may include two parts, a surgical robotic system 10 and a visual guidance system 20.
The surgical robotic system 10 may include three parts, a doctor trolley 101, an image trolley 102, and a surgical trolley 103. Wherein the image trolley 102 may be used to acquire intraoperative real-time surgical images and communicate the surgical images to the visual guidance system 20 for processing. The doctor acquires visual decision data processed by the visual guidance system on the display screen of the doctor trolley 101, and adjusts or confirms the visual decision data. The surgical trolley 103 is used to perform an automated surgical operation.
The visual guidance system 20 may comprise two parts, an image processing host 201 and a display 202. The image processing host 201 may be used to process the endoscopic surgery image transmitted from the image trolley 102, and to transmit the visual decision data obtained after processing to the doctor trolley 101 for the doctor to make a decision, so as to obtain adjusted visual decision data. Visual decision data may include target location, path planning, operation object location data, and the like. The adjusted visual decision data is returned to the visual guidance system 20. The vision guidance system 20 matches the adjusted visual decision data to the endoscope image coordinate system to obtain the target decision data. Specifically, after the doctor adjusts the visual decision data at the display end, the adjusted positions are converted from the display image coordinate system into the endoscope image coordinate system through a transformation matrix, so that the target decision data can be obtained. The image processing host 201 of the vision guidance system 20 can also perform recognition processing on the endoscopic surgery image to obtain real-time visual information. The real-time visual information may include operation object information and visual operation-related information. The display 202 may present visual decision data and display real-time visual information in real time.
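The display-to-endoscope coordinate conversion mentioned above can be sketched with a homogeneous transformation matrix. The matrix values and function below are illustrative assumptions, not the system's actual calibration:

```python
# Map an adjusted target point from display image coordinates to endoscope
# image coordinates via a 3x3 homogeneous transform (values illustrative only).
T = [
    [0.5, 0.0, 100.0],  # scale x by 0.5, translate x by 100
    [0.0, 0.5,  50.0],  # scale y by 0.5, translate y by 50
    [0.0, 0.0,   1.0],
]

def to_endoscope(point_display, transform=T):
    """Apply the homogeneous transform and de-homogenize the result."""
    x, y = point_display
    u = transform[0][0] * x + transform[0][1] * y + transform[0][2]
    v = transform[1][0] * x + transform[1][1] * y + transform[1][2]
    w = transform[2][0] * x + transform[2][1] * y + transform[2][2]
    return (u / w, v / w)

print(to_endoscope((200, 300)))  # (200.0, 200.0)
```

Using homogeneous coordinates keeps the sketch general: the same function covers pure scale/translation as well as a full perspective mapping if the bottom row of the matrix is non-trivial.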
The vision guidance system 20 can send real-time vision information to the doctor trolley 101 for display, so that the doctor can grasp more operation information, and the vision decision data can be better adjusted. The vision guidance system 20 may also transmit real-time visual information and target decision data to the surgical trolley 103 for automated surgical procedures.
Referring to fig. 2, a scene diagram of the visual guidance system's result presentation is shown. As shown in fig. 2, the display 202 of the visual guidance system 20 may be used to present all of the processing results of the visual guidance system 20, including visual decision data and real-time visual information. The display 202 may also display a surgical scene, where the surgical scene may be endoscopic image data acquired from the image trolley.
Referring to fig. 3, a scene diagram of visual decision presentation and adjustment at the physician's end is shown. As shown in fig. 3, a doctor may view the visual decision data and choose whether to adjust it at the doctor trolley 101. Likewise, the doctor trolley 101 may also display the surgical scene and real-time visual information.
Fig. 4 shows a schematic structural diagram of an automatic surgical system in an embodiment of the present specification. As shown in fig. 4, the automated surgical system in this embodiment may include a scene acquisition system 401, a visual guidance system 402, an interactive decision system 403, and a motion control system 404.
The scene acquisition system 401 may be used to acquire target surgical scene data. In one embodiment, the scene acquisition system may be an endoscopic system. In other embodiments, the scene acquisition system may be an in vitro camera system. The target surgical scene data may include intra-operative image data acquired by a scene acquisition system. The scene acquisition system 401 may send the target surgical scene data to the visual guidance system 402. In one embodiment, the scene acquisition system 401 may send the intraoperative image data to the vision guidance system 402 via the image trolley 102.
The visual guidance system 402 may be used to obtain targeted surgical information. The target surgical information may include information of a surgical type, a surgical stage, and the like. The visual guidance system 402 may receive the target surgical scene data and generate visual decision data based on the target surgical scene data and the target surgical information. Visual decision data may include decision-making scheme data for performing a current surgical task or surgical subtask, may include target location, path planning, object-of-operation location data, and the like. The visual guidance system 402 may send the visual decision data to the interactive decision system 403.
The interactive decision system 403 may be configured to receive the visual decision data and graphically display the visual decision data. In one embodiment, the interactive decision system 403 may be provided in the physician's trolley 101. In the event that the physician decides to adjust the visual decision data, the interactive decision system 403 may receive a protocol adjustment instruction entered by the physician. Illustratively, the scheme adjustment instruction may include at least one of: target deletion instructions, target addition instructions, path sequence adjustment instructions, decision making instructions and the like. The interactive decision system 403 may generate adjusted visual decision data based on the regimen adjustment instructions and the visual decision data and send the adjusted visual decision data to the visual guidance system 402. The visual guidance system 402 may generate target decision data based on the received adjusted visual decision data. The visual guidance system 402 may then send the target decision data to the motion control system 404.
The motion control system 404 may be used to control a surgical instrument to perform a surgical operation in accordance with the target decision data. In one embodiment, the motion control system 404 may be disposed in the surgical trolley 103. The motion control system 404 may be used to control surgical instrument or endoscope motion on a robotic arm on a surgical trolley to perform a surgical procedure.
In the above embodiment, after the visual guiding system generates the visual decision data, the visual decision data may be sent to the interactive decision system for adjustment, so as to obtain target decision data more in line with the operation standard and the operation requirement, and automatic operation is performed based on the adjusted target decision data, so that the operation risk may be reduced, and the operation quality may be improved.
In some embodiments of the present description, the interactive decision system 403 may be used to present at least one decision adjustment control to a physician. The interactive decision system 403 may be further configured to display a corresponding decision adjustment user interface in response to a clicking operation of at least one decision adjustment control by the doctor to generate adjusted visual decision data based on the adjustment operation entered by the doctor. The at least one decision adjustment control comprises at least: target point adjustment control, target point adding and deleting control and path optimization control.
Referring to fig. 5, a schematic diagram of a user interface of the interactive decision system in an embodiment of the present description is shown. As shown in fig. 5, visual decision data may be displayed in the user interface of the interactive decision system 403, along with a plurality of adjustment controls: adjustment 1, adjustment 2, …, adjustment n. The plurality of decision adjustment controls comprises at least: a target point adjustment control, a target point addition/deletion control, and a path optimization control. The target point adjustment control is used to adjust the position of a target point; for example, the position data of the target point can be modified. The target addition/deletion control can be used to delete or add targets. The path optimization control can adjust the surgical execution path. After the doctor triggers a given adjustment control, the interactive decision system 403 may display the corresponding visual decision data adjustment interface on the user interface and generate adjusted visual decision data based on the adjustment operation entered by the doctor. After the physician has completed the adjustment, the confirmation control of fig. 5 may be clicked to send the adjusted visual decision data to the visual guidance system 402.
Referring to fig. 6, a schematic diagram of visual decision data adjustment in an embodiment of the present disclosure is shown. FIG. 6 illustrates target point adjustment and target point addition/deletion during a suturing procedure. The left side of fig. 6 is a schematic diagram of the visual decision data sent by the visual guidance system to the interactive decision system, where solid circles are suturing target points with good visual decision effect and dotted circles are suturing target points with poor effect. The middle diagram of fig. 6 shows the interactive decision system adjusting or adding/deleting targets according to the doctor's operation: targets with poor effect can be adjusted by dragging, adding, or deleting. The right side of fig. 6 shows the adjusted visual decision data, in which the target point selection better meets the requirements of the suturing operation.
Referring to fig. 7, a schematic diagram of visual decision data adjustment in an embodiment of the present disclosure is shown. FIG. 7 illustrates target retrieval path optimization in tissue retrieval surgery. The left side of fig. 7 is a schematic diagram of the visual decision data sent by the visual guidance system to the interactive decision system; here, the total path length of the visual decision is not the shortest, and there is a risk of collision with the robot arm, so the physician can adjust it. The right side of fig. 7 shows the adjusted visual decision data: the doctor optimizes the result by modifying the tissue retrieval sequence and deleting targets that may cause the mechanical arm to collide. Specifically, the recovery order of targets No. 1 and No. 2 is exchanged, and target No. 6 is deleted and not retrieved. The optimized result improves surgical efficiency and ensures safety; the doctor thus plays a supervisory and optimizing role in the surgical task.
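The adjustments described for fig. 7, exchanging the recovery order of targets No. 1 and No. 2 and deleting target No. 6, amount to simple edits of an ordered target list. A sketch (names and data hypothetical, not from the disclosure):

```python
# Hypothetical adjusted visual decision data: an ordered list of retrieval targets.
path = [1, 2, 3, 4, 5, 6]  # target numbers in planned recovery order

def swap_targets(path, a, b):
    """Exchange the recovery order of targets a and b (non-destructive)."""
    i, j = path.index(a), path.index(b)
    path = list(path)
    path[i], path[j] = path[j], path[i]
    return path

def delete_target(path, t):
    """Remove a target from the plan (e.g. one that risks arm collision)."""
    return [x for x in path if x != t]

adjusted = delete_target(swap_targets(path, 1, 2), 6)
print(adjusted)  # [2, 1, 3, 4, 5]
```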
In some embodiments of the present description, the interactive decision system is further configured to present decision-making controls to a physician; in response to a selected operation of the decision-making control by the physician, the interactive decision-making system is further configured to generate decision-making data from the operation data entered by the physician, and to send the decision-making data to the visual guidance system.
For some simpler surgical tasks, or when the visual decision performs poorly, the decision scheme can be made directly by the physician in the interactive decision system. This decision mode relies mainly on the physician's personal surgical experience. Its advantages are that, on the one hand, it reduces dependence on the vision algorithm system and the computational load on the server; on the other hand, it complements the visual guidance system: when the vision algorithm performs poorly, the physician's decision ensures that the operation can still be executed normally.
Referring to fig. 8, a schematic diagram of a physician directly producing visual decision data in an embodiment of the present specification is shown. Fig. 8 shows a cholecystectomy task. Because the task only requires specifying 5 target points, and the target positions are easy to identify, the physician can specify the target positions directly. The visual guidance system then only needs to continuously track the target positions during the subsequent operation and update them in real time.
Referring to fig. 9, a schematic diagram of a physician directly producing visual decision data in an embodiment of the present specification is shown. Fig. 9 also shows a cholecystectomy task. A common pig has only one blood vessel alongside the cystic duct, which the visual guidance system can identify normally. However, some pigs have two blood vessels, and for lack of related training data the visual guidance system detects these poorly. In that case, the physician can directly designate the blood vessel positions on the display with an external device, ensuring that the surgical task is executed smoothly.
In some embodiments of the present disclosure, the visual guidance system 402 may be specifically configured to query a pre-constructed visual decision criteria library for target visual decision criteria data corresponding to the target surgical information. The vision guidance system 402 may be further specifically configured to calculate the target surgical information and the target surgical scene data by using a vision decision algorithm under the constraint of the target vision decision standard data, so as to obtain vision decision data. The visual decision standard library comprises visual decision standard data corresponding to each operation type in a plurality of operation types. The visual guidance system 402 may also be used to send the target visual decision criteria data to the interactive decision system 403. The interactive decision system 403 may also be used to display the target visual decision criteria data to a physician.
Referring to FIG. 10, a schematic diagram of a visual decision standard library in an embodiment of the present disclosure is shown. A visual decision standard library for the automated surgical system may be constructed, establishing standards in accordance with international medical practice for all types of automated surgery the automated surgical system currently supports. The visual decision standard library serves, on the one hand, as the execution standard of the visual guidance system, so that the visual results it outputs conform to actual medical practice; on the other hand, it can serve as an auxiliary reference for judging the quality of the physician's decisions, and can be called up at the physician's end for the physician to consult, as shown in fig. 5. The visual decision standard library helps improve the consistency and effectiveness of automated surgery, making the results of automated surgery dependable.
In some embodiments of the present disclosure, to further improve the accuracy of the visual decision standard library, different visual decision standard data may be set for different patient individuals, different surgical scenes, different surgical tissues, and different surgical modes. For example, patient individuals may be distinguished by sex, age group, body weight, and similar data. Different surgical scenes, that is, the different body parts a surgery acts on (for example the heart, cystic duct, ureter, or liver), may have different standards for the same surgery type. The standards may also differ for different surgical tissues, e.g. for different tissue locations in the heart. Different standard data are likewise set for different surgical modes, i.e. different surgery types. By setting up the visual decision standard library in this way, the evaluation standards for automated surgical visual decisions become specialized: different standards are provided according to the patient individual, surgical scene, surgical site, surgical mode, and other information. Such individualized and specific standards better match the patient's actual surgical conditions and help improve surgical quality.
As shown in fig. 10, the automated tissue recovery standard library, for example, includes: 1) tissue recovery execution condition criteria: the conditions under which an automated tissue recovery operation may be performed, e.g. the number n of tissues in the field of view that meet the recovery criteria, no occlusion around the tissues to be recovered, and so on; 2) recovery tissue size criteria: the size of tissue the robotic arm may autonomously retrieve, e.g. tissue with a diameter within [a, b] millimeters may be retrieved; 3) tissue recovery order criteria: for multiple tissues to be recovered within the field of view, recovery is executed according to rules such as closest distance to the robotic arm, shortest total movement path, no instrument collision, and no tissue collision.
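The size and ordering criteria above (diameter within [a, b] millimeters, closest to the robotic arm first, short total path) can be sketched as a greedy nearest-neighbour plan. The data model, example thresholds, and function names are illustrative assumptions; a real planner would also enforce the occlusion and collision criteria.

```python
import math

def plan_recovery_order(arm_xy, tissues, size_range=(2.0, 10.0)):
    """Sketch of the tissue recovery order criterion: filter by retrievable
    diameter, then repeatedly pick the nearest remaining tissue, which keeps
    the total movement path short. Thresholds here are assumed values."""
    lo, hi = size_range
    # Recovery tissue size criterion: keep only retrievable diameters.
    pending = [t for t in tissues if lo <= t["diameter_mm"] <= hi]
    order, pos = [], arm_xy
    # Recovery order criterion: nearest remaining tissue first.
    while pending:
        nxt = min(pending, key=lambda t: math.dist(pos, t["xy"]))
        order.append(nxt["id"])
        pos = nxt["xy"]
        pending.remove(nxt)
    return order

tissues = [
    {"id": 1, "xy": (50, 10), "diameter_mm": 5.0},
    {"id": 2, "xy": (10, 10), "diameter_mm": 4.0},
    {"id": 3, "xy": (90, 10), "diameter_mm": 25.0},  # too large to retrieve
]
print(plan_recovery_order((0, 0), tissues))  # -> [2, 1]
```

As in fig. 7, the physician could still override this order or delete a target in the interactive decision system.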
In some embodiments of the present description, the visual guidance system may be further configured to generate real-time visual information based on the target surgical scene data, and send the real-time visual information to the interactive decision system, the real-time visual information including operation object information and visual operation association information; the interactive decision system is used for displaying the operation object information and the visual operation related information to a doctor; the interactive decision system is also used for acquiring and displaying the target operation information.
The vision guidance system may generate real-time visual information based on the target surgical scene data. The real-time visual information herein may include operation object information and visual operation-related information. The operation object information here may include position information, shape information, and the like of an operation object in surgery. The visual operation-related information may be information related to an operation in a surgical procedure, such as whether a surgical instrument is successfully held during a suture operation. The real-time visual information may also include location data of key points during the surgical procedure. The key points here may include operation object position data, operation instrument position data, key tissue position data, and the like in the target decision data.
In one embodiment, a target tracking algorithm may be employed to generate the real-time visual information based on target surgical scene data received in real time. The target tracking algorithm may be implemented by a tracking model generated using a deep learning algorithm. Referring to fig. 11, a schematic diagram of generating real-time visual information based on the target surgical scene data in an embodiment of the present disclosure is shown. As shown in fig. 11, a large volume of automated-surgery scene data can be collected and manually annotated, a convolutional neural network can be trained on it, and a tracking model for the automated surgical scene can be output. After the adjusted visual decision data is input to the visual guidance system, the real-time visual information can be determined using the tracking model and sent to the motion control system.
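The frame-to-frame association step such a tracker performs can be sketched with a simple intersection-over-union match. This is a minimal illustration and not the patent's algorithm: the deep-learning model described above would supply the per-frame detections, and the threshold value is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def track(prev_box, detections, min_iou=0.3):
    """Associate the last frame's target box with the best-overlapping
    detection; fall back to the previous box when nothing matches."""
    best = max(detections, key=lambda d: iou(prev_box, d), default=None)
    return best if best and iou(prev_box, best) >= min_iou else prev_box
```

The tracked box for each key point would then be reported as part of the real-time visual information sent to the motion control system.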
Referring to fig. 12, an effect diagram of a graphical display of visual decision data in an embodiment of the present disclosure is shown. As shown in fig. 12, the black portions are the endoscopic image itself; all text and graphics are annotations added by the visual guidance system to help the physician better understand the current operation and the visual results. The upper left corner of fig. 12 shows text-prompt messages, typically containing 1) the name of the surgical task currently being executed plus the name of the current subtask; 2) visual-operation-related cues, such as, in this example, whether the surgical instrument has successfully clamped the blood vessel. An operation-object prompt gives a text indication of the object of the current automated operation, such as the cystic duct, ureter, or liver. A visual-processing-result prompt presents the information obtained by visual processing graphically at the physician's end, including 1) target information: the actual position of each target, the target's name, the distance between targets, the target diameter, etc.; 2) operation-order information: for example, when recovering tissue in an area, after a recovery path has been determined visually, the recovery order is displayed numerically beside each recovery target. The specific content presented depends on the specific surgical task.
Referring to fig. 13, a flow chart of an automated surgical system performing a procedure in this embodiment is shown. As shown in fig. 13, the following steps may be included:
s1: the endoscope captures intra-operative images, which are then transferred to the vision guidance system;
s2: the vision guiding system processes the image transmitted by the endoscope and generates corresponding vision decision data according to the current operation type and operation stage; the generated visual decision data is then transferred to an interactive decision system for a doctor to make a final decision;
s3: the doctor makes a final decision on the visual decision data in the interactive decision system, and then the adjusted visual decision data is transmitted to the visual guidance system;
s4: the vision guidance system updates the vision decision data;
s5: the vision guiding system transmits the real-time vision processing result to the movement control system, and the movement control system plans real-time movement according to the vision information;
s6: the surgical instrument performs a robotic surgical operation.
Referring to fig. 14, a block diagram of an automated surgical system in an embodiment of the present description is shown. As shown in fig. 14, an automated surgical system may include a data acquisition unit, a data processing unit, a data storage unit, a human-machine interaction unit, and an execution unit.
The data acquisition unit is used for acquiring real-time endoscopic images in operation.
The data processing unit is used for processing the endoscope image acquired by the data acquisition unit and generating visual decision data and real-time visual information required by the automatic operation.
Referring to fig. 15, a block diagram of the data storage unit in this embodiment is shown. As shown in fig. 15, the data storage unit is configured to store the visual decision standard library, the visual decision data generated by the data processing unit, and the adjusted visual decision data returned by the human-machine interaction unit.
The human-machine interaction unit is used to implement interaction between the user and the visual guidance system. The interaction unit can present the visual decision data generated by the visual guidance system; at the same time, the physician's end can modify the visual decision data in the interaction unit and control the start and end of the automated operation.
The execution unit is used to perform the automated surgical operations and consists of the surgical platform of the surgical robot system.
Referring to fig. 16, a flow chart of the interactive decision process between the visual guidance system and the interactive decision system is shown. As shown in fig. 16, after the visual guidance system produces preliminary visual decision data, the data is transmitted to the interactive decision system at the physician's end for review, and the physician chooses whether to intervene and modify it. If not, the result is output directly to the motion control system for planning and executing the automated surgery. If so, the physician first selects the decision content to modify, such as adding or deleting targets, optimizing target selection, or optimizing the visual decision path. The physician then modifies the visual decision content on the display screen with an external device such as a mouse or capacitive stylus. The modified decision content is returned to the visual guidance system, which matches the modification result from the interactive decision system onto the endoscope image plane. After matching is complete, the target positions are tracked in real time throughout the automated operation, and the real-time target positions are fed back to the motion control system for planning and execution.
Embodiments of the present specification provide an automated surgical visual guidance method. Fig. 17 shows a flowchart of an automated surgical visual guidance method in an embodiment of the present description. Although the present specification provides method steps and apparatus structures as shown in the following embodiments or figures, more or fewer steps or module units may be included in the method or apparatus based on conventional or non-inventive labor. For steps or structures that have no logically necessary causal relationship, the execution order and the module structure are not limited to those shown in the figures and described in the embodiments of the present specification. When implemented in a practical device or end-product application, the described methods or module structures may be executed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment, or even in a distributed processing environment), in accordance with the embodiments or the connections illustrated in the figures.
Specifically, as shown in fig. 17, the automatic surgical visual guidance method provided in one embodiment of the present specification may include the steps of:
step S171, acquiring target operation information and target operation scene data; the target surgical scene data comprises intraoperative image data acquired by a scene acquisition system.
The automatic surgical visual guidance method in the present embodiment may be applied to a visual guidance system. The visual guidance system may be used to obtain target surgical information and target surgical scene data. The target surgical scene data may include intra-operative image data acquired by a scene acquisition system. In one embodiment, the scene acquisition system may be an endoscopic system. In other embodiments, the scene acquisition system may be an in vitro camera system. The target surgical information may include surgical task information, surgical subtask information, surgical type, surgical stage, and the like.
Step S172, generating visual decision data based on the target surgical scene data and the target surgical information.
Step S173, sending the visual decision data to an interactive decision system for graphical display, and receiving the adjusted visual decision data fed back by the interactive decision system based on the visual decision data, so as to obtain target decision data.
After obtaining the target surgical scene data and the target surgical information, the visual guidance system may make a visual decision based on the target surgical scene data and the target surgical information, generating visual decision data.
The visual guidance system may send the resulting visual decision data to the interactive decision system. The interactive decision system may graphically display the visual decision data. The doctor can adjust the displayed visual decision data on the user interface of the interactive decision system to obtain adjusted visual decision data. The interactive decision system may send the adjusted visual decision data to the visual guidance system. The visual guidance system may generate target decision data based on the adjusted visual decision data.
Step S174, transmitting the target decision data to a motion control system, so that the motion control system controls the surgical instrument to perform the surgical operation according to the target decision data.
The visual guidance system may send the generated target decision data to the motion control system, so that the motion control system controls the surgical instrument to perform the surgical operation in accordance with the target decision data.
In the above embodiment, after the visual guidance system generates the visual decision data, the data may be sent to the interactive decision system for adjustment, yielding target decision data that better meets surgical standards and surgical requirements. Performing the automated operation based on the adjusted target decision data can reduce surgical risk and improve surgical quality.
In some embodiments of the present description, the automated surgical visual guidance method may further comprise: generating real-time visual information based on the target surgical scene data; transmitting the real-time visual information to the interactive decision system for display; and sending the real-time visual information to a motion control system, so that the motion control system controls the surgical instrument to execute the surgical operation according to the real-time visual information and the target decision data.
Specifically, the visual guidance system may also process the surgical scene data received in real time to generate real-time visual information. The real-time visual information here may include operation object information and visual-operation-related information. The operation object information may include the position, shape, and other information of the operation object in the surgery. The visual-operation-related information may be information related to an operation in the surgical procedure, such as whether a surgical instrument has clamped successfully during a suturing operation. The real-time visual information may also include position data of key points during the surgical procedure, such as the operation object positions, surgical instrument positions, and key tissue positions in the target decision data. The visual guidance system may send the real-time visual information to the interactive decision system for display. The visual guidance system may also send the real-time visual information to the motion control system, so that the motion control system controls the surgical instrument to perform the surgical operation in accordance with the real-time visual information and the target decision data. In this way, automated operation control and decision adjustment can be performed based on real-time visual information, improving surgical quality.
In some embodiments of the present description, generating visual decision data based on the target surgical scene data and the target surgical information may include: inquiring corresponding target visual decision standard data in a pre-constructed visual decision standard library; the visual decision standard library comprises visual decision standard data corresponding to each operation type in a plurality of operation types; and under the constraint of the target visual decision standard data, calculating the target operation information and the target operation scene data by using a visual decision algorithm to obtain visual decision data.
In some embodiments of the present disclosure, querying the pre-constructed visual decision standard library for corresponding target visual decision standard data may include: acquiring target object information and surgical scene information; and querying the corresponding target visual decision standard data in the pre-constructed visual decision standard library based on the target object information, the surgical scene information, and the target surgical information. The target object information here includes the patient's basic personal information, such as sex, age, height, weight, and medical history. The surgical scene information may be information about the specific surgical site or tissue. After the target object information and the surgical scene information are obtained, the corresponding target visual decision standard data can be queried in the pre-constructed visual decision standard library based on the target object information, the surgical scene information, and the target surgical information.
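The query described above can be sketched as a keyed lookup. The keying scheme (surgery type, surgical site, patient group) and the example standard values are illustrative assumptions only; the actual library structure is not fixed by the embodiments.

```python
VISUAL_DECISION_STANDARDS = {
    # (surgery type, surgical site, patient group) -> standard data
    ("tissue_recovery", "liver", "adult"): {"diameter_mm": (2.0, 10.0)},
    ("tissue_recovery", "liver", "child"): {"diameter_mm": (1.5, 6.0)},
    ("suturing", "cholecyst_duct", "adult"): {"stitch_spacing_mm": 3.0},
}

def query_standard(surgery_type, site, age):
    """Look up target visual decision standard data from the target surgical
    information (surgery type), the surgical scene information (site), and the
    target object information (here reduced to an assumed age grouping)."""
    group = "child" if age < 18 else "adult"
    return VISUAL_DECISION_STANDARDS.get((surgery_type, site, group))
```

The returned standard data would then constrain the visual decision algorithm, as described above.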
In some embodiments of the present description, the automated surgical visual guidance method may further comprise: and sending the target visual decision standard data to the interactive decision system for display.
Specifically, to further improve the accuracy of the visual decision standard library, different visual decision standard data may be set for different patient individuals, different surgical scenes, different surgical tissues, and different surgical modes. For example, patient individuals may be distinguished by sex, age group, body weight, and similar data. Different surgical scenes, that is, the different body parts a surgery acts on (for example the heart, cystic duct, ureter, or liver), may have different standards for the same surgery type. The standards may also differ for different surgical tissues, e.g. for different tissue locations in the heart. Different visual decision standard data are likewise set for different surgical modes, i.e. different surgery types. By setting up the visual decision standard library in this way, the evaluation standards for automated surgical visual decisions become specialized: different standards are provided according to the patient individual, surgical scene, surgical site, surgical mode, and other information. Such individualized and specific standards better match the patient's actual surgical conditions and help improve surgical quality.
A visual decision standard library for the automated surgical system may be constructed, establishing standards in accordance with international medical practice for all types of automated surgery the automated surgical system currently supports. The target visual decision standard data corresponding to the target surgical information is queried in the pre-constructed visual decision standard library, which contains visual decision standard data corresponding to each of a plurality of surgery types. Under the constraint of the target visual decision standard data, the target surgical information and target surgical scene data are processed by the visual decision algorithm to obtain the visual decision data. The visual decision standard library serves, on the one hand, as the execution standard of the visual guidance system, so that the visual results it outputs conform to actual medical practice; on the other hand, it can serve as an auxiliary reference for judging the quality of the physician's decisions, and can be called up for the physician to consult in the interactive decision system at the physician's end. The visual decision standard library helps improve the consistency and effectiveness of automated surgery, making the results of automated surgery dependable.
In some embodiments of the present specification, after obtaining the target decision data, it may further include: and updating the visual decision algorithm based on the target decision data to obtain an updated visual decision algorithm.
When the physician adjusts the visual decision data, the generated target decision data can be used to update the visual decision algorithm. The updated visual decision algorithm can then produce more accurate visual decision data, improving surgical efficiency and surgical quality.
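One way to realize this update can be sketched under the assumption that the physician's corrections are accumulated as supervision for a later retraining round; the embodiments do not fix the update mechanism, so the class and method names here are purely illustrative.

```python
class DecisionUpdater:
    """Collect (algorithm proposal, physician-adjusted result) pairs as
    training samples for updating the visual decision algorithm. This is an
    assumed workflow, not the patent's specified mechanism."""

    def __init__(self):
        self.training_buffer = []

    def record(self, proposal, adjusted):
        # Only physician corrections carry new supervision signal;
        # unchanged decisions confirm the algorithm and are skipped here.
        if proposal != adjusted:
            self.training_buffer.append({"input": proposal, "label": adjusted})

    def ready_to_retrain(self, min_samples=100):
        """Trigger retraining once enough corrections have accumulated."""
        return len(self.training_buffer) >= min_samples
```

After retraining on the buffered pairs, the updated model would replace the deployed visual decision algorithm.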
In some embodiments of the present disclosure, receiving the adjusted visual decision data fed back by the interactive decision system based on the visual decision data to obtain target decision data may include: receiving the adjusted visual decision data fed back by the interactive decision system based on the visual decision data; and matching the adjusted visual decision data to the endoscope image coordinate system to obtain the target decision data. By matching visual decision data expressed in the image coordinate system of the interactive decision system to the endoscope image coordinate system, target decision data can be obtained and sent directly to the motion control system for the surgical operation.
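The matching step can be sketched, under the simplifying assumption that the endoscope image is shown on the display axis-aligned with uniform scaling, as a coordinate rescaling; a real system would use a calibrated mapping between the two coordinate systems.

```python
def display_to_image(point, display_size, image_size, offset=(0, 0)):
    """Map a physician's click in display coordinates back to endoscope
    image pixels. `offset` is the assumed position of the image's top-left
    corner on the display (for letterboxed layouts)."""
    sx = image_size[0] / display_size[0]
    sy = image_size[1] / display_size[1]
    return ((point[0] - offset[0]) * sx, (point[1] - offset[1]) * sy)
```

Each adjusted target point would be converted this way before being handed to the tracking and motion control stages.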
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. For details of the related processes, refer to the descriptions of the related embodiments above, which are not repeated here.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same inventive concept, an automated surgical visual guidance device is also provided in the embodiments of the present specification, as described in the following embodiments. Because the principle by which the automated surgical visual guidance device solves the problem is similar to that of the automated surgical visual guidance method, the implementation of the device may refer to the implementation of the method, and repeated description is omitted. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements the intended function. While the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated. Fig. 18 is a block diagram of the structure of an automated surgical visual guidance device according to an embodiment of the present specification; as shown in fig. 18, it includes an acquisition module 181, a generation module 182, an adjustment module 183, and a sending module 184, which are described below.
The acquisition module 181 is used for acquiring target surgery information and target surgery scene data; the target surgical scene data comprises intraoperative image data acquired by a scene acquisition system.
The generating module 182 is configured to generate visual decision data based on the target surgical scene data and the target surgical information.
The adjustment module 183 is configured to send the visual decision data to an interactive decision system for graphical display, and receive adjusted visual decision data fed back by the interactive decision system based on the visual decision data, so as to obtain target decision data.
The sending module 184 is configured to send the target decision data to a motion control system, so that the motion control system controls a surgical instrument to perform a surgical operation according to the target decision data.
The embodiment of the present disclosure further provides a medical device based on the automated surgical visual guidance method provided by the embodiments of the present disclosure; a schematic diagram of its composition is shown in fig. 19. The medical device may specifically include an input device 191, a processor 192, and a memory 193, wherein the memory 193 is used to store processor-executable instructions. When executing the instructions, the processor 192 performs the steps of the automated surgical visual guidance method described in any of the embodiments above.
In this embodiment, the input device may specifically be one of the main means of exchanging information between the user and the computer system. The input device may include a keyboard, mouse, camera, scanner, light pen, handwriting tablet, voice input device, and so on; it is used to input raw data, and the programs that process those data, into the computer. The input device may also acquire and receive data transmitted from other modules, units, and devices. The processor may be implemented in any suitable manner. For example, the processor may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so on. The memory may specifically be a memory device used in modern information technology to store information. The memory may comprise multiple levels: in a digital system, anything that can store binary data may be memory; in an integrated circuit, a circuit with a storage function but no physical form is also called a memory, such as a RAM or FIFO; in a system, a storage device in physical form is also called a memory, such as a memory bank or a TF card.
In this embodiment, the specific functions and effects of the medical device may be understood by reference to the other embodiments and are not repeated here.
Embodiments of the present specification also provide a computer storage medium storing computer program instructions that, when executed, implement the steps of the automatic surgical visual guidance method described in any of the embodiments above.
In the present embodiment, the storage medium includes, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), a cache (Cache), a hard disk (Hard Disk Drive, HDD), or a memory card (Memory Card). The memory may be used to store computer program instructions. The network communication unit may be an interface for performing network connection communication, set up in accordance with a standard prescribed by a communication protocol.
In this embodiment, the functions and effects of the program instructions stored in the computer storage medium may be understood by reference to the other embodiments and are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the embodiments described above may be implemented with a general-purpose computing device. They may be concentrated on a single computing device or distributed across a network of computing devices, and may optionally be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices. In some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated as individual integrated-circuit modules, or multiple modules or steps among them may be fabricated as a single integrated-circuit module. Thus, embodiments of the present specification are not limited to any specific combination of hardware and software.
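By way of illustration only, the module decomposition described above (a vision guidance stage proposing a decision, an interactive stage merging the doctor's adjustment) could be composed in software roughly as follows; every function and field name in the sketch is hypothetical and is not taken from the disclosure:

```python
# Hypothetical sketch of the guidance pipeline; names are illustrative,
# not part of the disclosure.

def generate_visual_decision(scene_data: dict, surgical_info: dict) -> dict:
    """Combine intra-operative scene data with surgical information
    into a proposed visual decision (target points plus a path)."""
    targets = scene_data.get("detected_targets", [])
    return {"targets": targets, "path": list(targets),
            "surgery_type": surgical_info.get("type")}

def apply_scheme_adjustment(decision: dict, adjustment: dict) -> dict:
    """Merge a doctor's scheme-adjustment instruction into the proposed
    decision, yielding the adjusted visual decision data."""
    adjusted = dict(decision)
    adjusted.update(adjustment)
    return adjusted

# Example flow: propose, let the doctor adjust, hand off for motion control.
scene = {"detected_targets": [(10, 20), (30, 40)]}
info = {"type": "puncture"}
proposed = generate_visual_decision(scene, info)
target_decision = apply_scheme_adjustment(
    proposed, {"path": [(10, 20), (25, 35), (30, 40)]})
print(len(target_decision["path"]))  # → 3
```

The point of the sketch is only the data flow: the proposal and the doctor's adjustment are combined before anything is sent on for motion control.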
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many embodiments and many applications other than the examples provided will be apparent to those of skill in the art upon reading the above description. The scope of the disclosure should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations to the embodiments. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present specification shall fall within its scope of protection.

Claims (13)

1. An automatic surgical system, characterized by comprising a scene acquisition system, a vision guidance system, an interactive decision system, and a motion control system, wherein
the scene acquisition system is used for acquiring target surgical scene data and sending the target surgical scene data to the vision guidance system, the target surgical scene data comprising intra-operative image data;
the vision guidance system is used for acquiring target surgical information, receiving the target surgical scene data, generating visual decision data based on the target surgical scene data and the target surgical information, and sending the visual decision data to the interactive decision system;
the interactive decision system is used for receiving and graphically displaying the visual decision data, receiving a scheme adjustment instruction input by a doctor, and generating adjusted visual decision data based on the scheme adjustment instruction and the visual decision data; the vision guidance system is further used for receiving the adjusted visual decision data, generating target decision data, and sending the target decision data to the motion control system;
the motion control system is used for controlling a surgical instrument to perform a surgical operation according to the target decision data.
2. The automatic surgical system of claim 1, wherein the interactive decision system is configured to present at least one decision adjustment control to a doctor; the interactive decision system is further configured to display a corresponding decision adjustment user interface in response to the doctor clicking on the at least one decision adjustment control, so as to generate the adjusted visual decision data based on an adjustment operation input by the doctor; the at least one decision adjustment control comprises at least: a target point adjustment control, a target point addition and deletion control, and a path optimization control.
3. The automatic surgical system of claim 1, wherein the vision guidance system is specifically configured to query a pre-constructed visual decision standard library for target visual decision standard data corresponding to the target surgical information; the vision guidance system is further specifically configured to calculate, under the constraint of the target visual decision standard data, the target surgical information and the target surgical scene data by using a visual decision algorithm to obtain the visual decision data; the visual decision standard library comprises visual decision standard data corresponding to each of a plurality of surgery types;
the vision guidance system is further configured to send the target visual decision standard data to the interactive decision system, and the interactive decision system is further configured to display the target visual decision standard data to the doctor.
4. The automatic surgical system of claim 1, wherein the vision guidance system is further configured to generate real-time visual information based on the target surgical scene data, the real-time visual information comprising operation object information and visual-operation-related information, and to send the real-time visual information to the interactive decision system;
the interactive decision system is configured to display the operation object information and the visual-operation-related information to a doctor; the interactive decision system is further configured to acquire and display the target surgical information.
5. The automatic surgical system of claim 1, wherein the interactive decision system is further configured to present a decision-making control to a doctor; in response to the doctor selecting the decision-making control, the interactive decision system is further configured to generate decision-making data from operation data entered by the doctor and to send the decision-making data to the vision guidance system.
6. An automated surgical vision guidance method, comprising:
acquiring target surgical information and target surgical scene data, the target surgical scene data comprising intra-operative image data acquired by a scene acquisition system;
generating visual decision data based on the target surgical scene data and the target surgical information;
sending the visual decision data to an interactive decision system for graphical display, and receiving adjusted visual decision data fed back by the interactive decision system based on the visual decision data, to obtain target decision data; and
sending the target decision data to a motion control system, so that the motion control system controls a surgical instrument to perform a surgical operation according to the target decision data.
7. The automated surgical visual guidance method of claim 6, further comprising:
generating real-time visual information based on the target surgical scene data;
transmitting the real-time visual information to the interactive decision system for display;
and sending the real-time visual information to the motion control system, so that the motion control system controls the surgical instrument to perform the surgical operation according to the real-time visual information and the target decision data.
8. The automated surgical visual guidance method of claim 6, wherein generating visual decision data based on the target surgical scene data and the target surgical information comprises:
querying a pre-constructed visual decision standard library for corresponding target visual decision standard data, wherein the visual decision standard library comprises visual decision standard data corresponding to each of a plurality of surgery types; and
calculating, under the constraint of the target visual decision standard data, the target surgical information and the target surgical scene data by using a visual decision algorithm to obtain the visual decision data.
9. The automated surgical visual guidance method of claim 8, wherein querying the pre-constructed visual decision standard library for the corresponding target visual decision standard data comprises:
acquiring target object information and surgical scene information; and
querying the pre-constructed visual decision standard library for the corresponding target visual decision standard data based on the target object information, the surgical scene information, and the target surgical information.
10. The automated surgical visual guidance method of claim 8, further comprising:
and sending the target visual decision standard data to the interactive decision system for display.
11. The automated surgical visual guidance method of claim 8, further comprising, after obtaining the target decision data:
and updating the visual decision algorithm based on the target decision data to obtain an updated visual decision algorithm.
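One way the update step of claim 11 could work — treating the doctor's confirmed target decision as feedback to the planner — is sketched below. The bias parameter and learning rate are assumptions made for illustration; the disclosure does not specify the update mechanism:

```python
# Illustrative sketch only: nudge a planner bias toward the doctor's
# confirmed decision. The bias/learning-rate scheme is an assumption.

class PathPlanner:
    def __init__(self, lr: float = 0.1):
        self.lr = lr                 # fraction of each correction to absorb
        self.bias = (0.0, 0.0)       # systematic offset learned from feedback

    def propose(self, target: tuple) -> tuple:
        """Propose a target point, shifted by the learned bias."""
        return (target[0] + self.bias[0], target[1] + self.bias[1])

    def update(self, proposed: tuple, confirmed: tuple) -> None:
        """Move the bias a fraction of the way toward the correction the
        doctor applied when turning the proposal into the target decision."""
        dx = confirmed[0] - proposed[0]
        dy = confirmed[1] - proposed[1]
        self.bias = (self.bias[0] + self.lr * dx, self.bias[1] + self.lr * dy)

planner = PathPlanner()
p = planner.propose((10.0, 10.0))
planner.update(p, (12.0, 10.0))   # doctor shifted the target by +2 in x
print(planner.bias)               # → (0.2, 0.0)
```

Repeated updates of this kind would gradually shift later proposals toward the corrections doctors actually make, which is the spirit of the claimed update step.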
12. The automated surgical visual guidance method of claim 6, wherein receiving the adjusted visual decision data fed back by the interactive decision system based on the visual decision data to obtain the target decision data comprises:
Receiving adjusted visual decision data fed back by the interactive decision system based on the visual decision data;
and matching the adjusted visual decision data to an endoscope image coordinate system to obtain target decision data.
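The coordinate matching of claim 12 can be pictured as applying a calibrated transform to each adjusted decision point. The sketch below uses a 2D homogeneous transform with made-up rotation and translation values; the disclosure does not specify the transform, so all numbers here are illustrative:

```python
import math

# Illustrative only: map an adjusted decision point into the endoscope
# image coordinate system with a 2D homogeneous transform. The rotation
# and translation values stand in for a real calibration result.

def make_transform(theta: float, tx: float, ty: float):
    """3x3 homogeneous matrix: rotation by theta, then translation (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx],
            [s,  c, ty],
            [0.0, 0.0, 1.0]]

def to_image_frame(point, T):
    """Apply homogeneous transform T to a 2D point (x, y)."""
    x, y = point
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

T = make_transform(math.pi / 2, 100.0, 50.0)   # assumed calibration result
xi, yi = to_image_frame((10.0, 0.0), T)
print(round(xi, 6), round(yi, 6))              # → 100.0 60.0
```

In practice the transform would come from camera calibration and registration rather than fixed constants, but the matching step itself is just this change of coordinate frame applied to every point of the adjusted decision data.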
13. A medical device comprising a processor and a memory for storing processor-executable instructions, which when executed by the processor implement the steps of the method of any one of claims 6 to 12.
CN202310193964.1A 2023-03-02 2023-03-02 Automatic surgical system, automatic surgical visual guidance method and medical equipment Pending CN116230196A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310193964.1A CN116230196A (en) 2023-03-02 2023-03-02 Automatic surgical system, automatic surgical visual guidance method and medical equipment


Publications (1)

Publication Number Publication Date
CN116230196A true CN116230196A (en) 2023-06-06

Family

ID=86574678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310193964.1A Pending CN116230196A (en) 2023-03-02 2023-03-02 Automatic surgical system, automatic surgical visual guidance method and medical equipment

Country Status (1)

Country Link
CN (1) CN116230196A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination