WO2021012142A1 - Surgical robot system and control method thereof - Google Patents

Surgical robot system and control method thereof

Info

Publication number
WO2021012142A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate
coordinate system
surgical
target part
surgical robot
Prior art date
Application number
PCT/CN2019/097032
Other languages
English (en)
French (fr)
Inventor
朱红文
于占泉
刘会超
廖平平
刘贵臻
张相雷
田春雷
曾祥丹
唐建
陶然
韩立通
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Priority to CN201980001114.9A (CN112543623A)
Priority to PCT/CN2019/097032 (WO2021012142A1)
Priority to CN201921446786.4U (CN211094674U)
Publication of WO2021012142A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/32 Surgical robots operating autonomously

Definitions

  • the present disclosure relates to the technical field of medical devices, in particular to a surgical robot system and a control method thereof.
  • a variety of positioning surgical instruments are known. However, when a surgical instrument is positioned manually, the user has to determine the position of the surgical tool and move it by hand, so errors easily arise between the position of the affected part and the surgical tool.
  • the embodiments of the present disclosure provide a surgical robot system and a control method thereof, which can be used to perform a variety of precise operations on diseased organs that cannot be performed manually, with the advantages of minimal trauma, low risk of infection, and flexible surgical approaches.
  • a surgical robot system including:
  • an imaging device for scanning a target part to be operated on to obtain imaging data of the target part;
  • a robotic arm, the front end of which carries a surgical device;
  • a positioning device for positioning the target part;
  • a coordinate system processor for generating and storing a first coordinate transformation relationship that transforms coordinates from a first coordinate system of the imaging data to a second coordinate system of the robotic arm, and for generating and storing a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm to a third coordinate system of the positioning device;
  • a controller configured to generate a three-dimensional image of the target part in the first coordinate system from the imaging data, display the three-dimensional image through a user interface, receive a surgical path input by the user, and control the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
  • the controller is specifically configured to transform the surgical path input by the user through the user interface into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm to perform surgery on the target part according to the second coordinate trajectory.
  • the imaging data includes multiple two-dimensional cross-sectional images of the target part;
  • the controller is specifically configured to acquire a region of interest in one of the two-dimensional cross-sectional images and determine the boundary of the region of interest; to repeat this until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired; and to perform geometric construction according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain a three-dimensional image of the target part.
  • the controller is specifically configured to transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and to generate the three-dimensional image of the target part from the point group.
  • the coordinate system processor is specifically configured to select at least four first reference points from the first coordinate system, determine first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values in the second coordinate system, and calculate from the first coordinate values and the second coordinate values a first conversion function characterizing the first coordinate transformation relationship;
  • and to select at least four second reference points from the second coordinate system, determine third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values in the third coordinate system, and calculate from the third coordinate values and the fourth coordinate values a second conversion function characterizing the second coordinate transformation relationship.
  • the imaging device includes:
  • a base; a C-shaped arm arranged on the base, both ends of which are respectively provided with a ray emitting part and a ray receiving part; when the target part is scanned, the target part is located between the ray emitting part and the ray receiving part.
  • the positioning device includes:
  • a positioning mark attached to or arranged close to the target part;
  • at least two optical emitting devices at different positions for emitting specific light;
  • the positioning part is configured to receive the specific light reflected by the positioning mark, and determine the spatial position of the positioning mark in the third coordinate system according to the received specific light.
  • the robotic arm includes:
  • a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts;
  • a moving part that moves the rotating part along at least one of three axis directions.
  • the moving part includes:
  • a first-direction driving part that moves along a first axis direction;
  • a second-direction driving part connected to the first-direction driving part and moving along a second axis direction;
  • a third-direction driving part connected to the second-direction driving part and moving along a third axis direction;
  • the rotating part includes:
  • a first rotation driving part, one end of which is connected to the third-direction driving part, rotating about a first rotation shaft; and
  • a second rotation driving part, one end of which is connected to the first rotation driving part, rotating about a second rotation shaft, the surgical device being attached to the second rotation driving part.
  • the embodiment of the present disclosure also provides a control method of a surgical robot system, which includes:
  • the imaging device scans a target part to be operated on to obtain imaging data of the target part;
  • the positioning device positions the target part;
  • the coordinate system processor generates and stores a first coordinate transformation relationship that transforms coordinates from the first coordinate system of the imaging data to the second coordinate system of the robotic arm, and generates and stores a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm to the third coordinate system of the positioning device;
  • the controller generates a three-dimensional image of the target part in the first coordinate system from the imaging data, displays the three-dimensional image through a user interface, receives a surgical path input by the user, and controls the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
  • controlling the robotic arm to perform surgery on the target part includes:
  • the controller transforms the surgical path input by the user through the user interface into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transforms the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and controls the robotic arm to perform surgery on the target part according to the second coordinate trajectory.
  • the imaging data includes multiple two-dimensional cross-sectional images of the target part, and generating the three-dimensional image includes:
  • the controller acquires a region of interest in one of the two-dimensional cross-sectional images and determines the boundary of the region of interest; the above steps are repeated until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired;
  • geometric construction is performed according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain a three-dimensional image of the target part.
  • generating the three-dimensional image of the target part in the first coordinate system includes:
  • the controller transforms the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and generates the three-dimensional image of the target part from the point group.
  • generating the first coordinate transformation relationship and the second coordinate transformation relationship includes:
  • the coordinate system processor selects at least four first reference points from the first coordinate system, determines first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values in the second coordinate system, and calculates from the first coordinate values and the second coordinate values a first conversion function characterizing the first coordinate transformation relationship; it selects at least four second reference points from the second coordinate system, determines third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values in the third coordinate system, and calculates from the third coordinate values and the fourth coordinate values a second conversion function characterizing the second coordinate transformation relationship.
  • the imaging device includes: a base; a C-shaped arm arranged on the base, both ends of which are respectively provided with a ray emitting part and a ray receiving part, the target part being located between the ray emitting part and the ray receiving part when it is scanned; obtaining the imaging data of the target part includes:
  • the ray emitting part emits rays;
  • the ray receiving part receives the rays passing through the target part and generates the imaging data according to the received ray information.
  • the positioning device includes: a positioning mark attached to or arranged close to the target part; at least two optical emitting devices at different positions for emitting specific light; and positioning the target part includes:
  • the positioning part receives the specific light reflected by the positioning mark and determines the spatial position of the positioning mark in the third coordinate system from the received specific light.
  • the robotic arm includes: a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts; and a moving part that moves the rotating part along at least one of three axis directions;
  • performing surgery on the target part includes:
  • the controller moves the rotating part along at least one of the three axis directions by means of the moving part;
  • the controller rotates the surgical device about at least one of the two rotation shafts by means of the rotating part.
  • the embodiment of the present disclosure also provides a control device of a surgical robot system, including a memory, a processor, and a computer program stored in the memory and executable on the processor;
  • when the computer program is executed by the processor, the steps of the control method of the surgical robot system described above are implemented.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the control method of the surgical robot system described above are implemented.
  • Figure 1 shows a schematic structural diagram of a surgical robot system according to an embodiment of the present disclosure
  • FIGS. 2 and 3 show schematic diagrams of the structure of an imaging device according to an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of the composition of a surgical robot according to an embodiment of the present disclosure
  • FIGS. 5 and 6 show schematic diagrams of the process of generating a three-dimensional image according to an embodiment of the present disclosure;
  • FIG. 7 shows a schematic flowchart of a control method of a surgical robot system according to an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a surgical robot system and a control method thereof, which can be used to perform a variety of precise operations on diseased organs that cannot be performed manually, with the advantages of minimal trauma, low risk of infection, and flexible surgical approaches.
  • the embodiment of the present disclosure provides a surgical robot system, as shown in FIG. 1, including:
  • an imaging device 11 for scanning a target part to be operated on to obtain imaging data of the target part;
  • a robotic arm 15, the front end of which carries a surgical device;
  • a positioning device 12 for positioning the target part;
  • a coordinate system processor 13 for generating and storing a first coordinate transformation relationship that transforms coordinates from a first coordinate system of the imaging data to a second coordinate system of the robotic arm 15, and for generating and storing a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm 15 to a third coordinate system of the positioning device 12;
  • a controller 14 configured to generate a three-dimensional image of the target part in the first coordinate system from the imaging data, display the three-dimensional image through a user interface, receive a surgical path input by the user, and control the robotic arm 15 to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
  • in this embodiment, the imaging device 11 is used to obtain the imaging data of the target part, from which a three-dimensional image of the target part can be generated and displayed. A user such as a doctor can input a surgical path on the basis of the displayed three-dimensional image, so that the surgical robot performs surgery on the target part along the surgical path indicated by the user.
  • with this technical solution, the doctor can control the surgical robot through the user interface, which improves surgical precision, simplifies the control method, and reduces operating errors.
  • the target part is an object that can be treated with the surgical device, such as a bone that needs to be set or a lesion that needs to be removed.
  • the target part can be located inside the patient's body or on its outer surface, and includes but is not limited to bones, joints, and internal organs.
  • the imaging device 11 may be used to scan the target part to obtain imaging data of the target part.
  • the imaging data may include multiple two-dimensional cross-sectional images of the target part.
  • the imaging device 11 can be, but is not limited to, X-ray equipment, ultrasound equipment, computed tomography (CT) equipment, positron emission tomography (PET) equipment, and the like.
  • the imaging device 11 is used to scan the target part continuously to obtain multiple two-dimensional cross-sectional images and their spatial coordinates.
  • the controller 14 performs spatial reconstruction of the target part in the first coordinate system according to the imaging data and establishes the geometric configuration of the target part in the first coordinate system, so that the doctor can learn the detailed information of the target part and plan the surgical path accordingly.
  • the imaging device 11 may include:
  • a base;
  • a C-shaped arm 100 arranged on the base, both ends of which are respectively provided with a ray emitting part 103 and a ray receiving part 104; as shown in FIG. 3, when the target part 302 is scanned, it is located between the ray emitting part 103 and the ray receiving part 104.
  • the base includes a supporting table 205 and a pillar 101 arranged perpendicular to the supporting table 205; the middle of the C-shaped arm 100 can be mounted on the pillar 101 through a movable structure, so that the C-shaped arm 100 can move.
  • the supporting table 205 may be a T-shaped structure composed of a first supporting portion 2051 and a second supporting portion 2052 that are perpendicular to each other.
  • the bottoms of the first supporting portion 2051 and the second supporting portion 2052 are respectively provided with universal wheels 206.
  • during spatial reconstruction, the controller 14 acquires a region of interest in one of the multiple two-dimensional cross-sectional images and determines its boundary; this is repeated until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired; geometric construction is then performed according to these boundaries to obtain the three-dimensional image of the target part.
  • as shown in FIG. 5, the process of generating a three-dimensional image specifically includes the following steps:
  • S11. The controller receives the imaging data and acquires a region of interest in one of the two-dimensional cross-sectional images.
  • besides the target part, a two-dimensional cross-sectional image may also contain image information of other parts, while the operation only requires information about the target part. The region of interest requiring surgery can therefore be distinguished first, so that the doctor can understand the target part more intuitively without being misled by images of other parts, and the amount of image data to be processed is also reduced.
  • specifically, the region of interest requiring surgery can be determined manually by the doctor, or selected from the two-dimensional cross-sectional image with the aid of a neural network.
  • S12. Determine the boundary of the region of interest: after the region of interest requiring surgery has been determined manually or with a neural network, its boundary must be determined; specifically, the boundary of the region of interest in every two-dimensional cross-sectional image must be determined.
  • S13. Judge whether all two-dimensional cross-sectional images have been processed; if yes, go to S14; if no, go to S15.
  • S14. Perform geometric construction according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part. Specifically, the controller 14 may transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and generate the three-dimensional image of the target part from the point group.
  • S15. Process the next two-dimensional cross-sectional image and go to step S11.
  • as shown in FIG. 6, generating the three-dimensional image of the target part from the point group specifically includes the following steps:
  • S21. Acquire the region of interest in a two-dimensional cross-sectional image.
  • S22. Find a mark: after the region of interest requiring surgery has been determined manually or with a neural network, the Otsu algorithm can be used to find a mark of the region of interest, on the basis of which boundary detection of the region of interest can be performed.
  • the technical solution of the present disclosure is not limited to using the Otsu algorithm to find the mark; other algorithms can also be used to find the mark of the region of interest.
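  • as a brief illustration of this step only (the disclosure does not prescribe an implementation, so the function name and the assumption of an 8-bit grayscale slice are hypothetical), an Otsu threshold can be computed from the image histogram roughly as follows:

    import numpy as np

    def otsu_threshold(image: np.ndarray) -> int:
        # Histogram of an 8-bit grayscale slice.
        hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
        total = hist.sum()
        cum_count = np.cumsum(hist)                   # pixels with value <= t
        cum_mean = np.cumsum(hist * np.arange(256))   # weighted cumulative sum
        best_t, best_var = 0, 0.0
        for t in range(255):
            w0 = cum_count[t] / total                 # background weight
            w1 = 1.0 - w0                             # foreground weight
            if w0 == 0.0 or w1 == 0.0:
                continue
            mu0 = cum_mean[t] / cum_count[t]
            mu1 = (cum_mean[-1] - cum_mean[t]) / (cum_count[-1] - cum_count[t])
            var_between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # The mark of the region of interest can then be taken as, e.g.,
    # mark = slice_img > otsu_threshold(slice_img)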
  • S23. Perform boundary detection on the region of interest: the boundary of the region of interest in each two-dimensional cross-sectional image needs to be determined, which can be done with the Watershed algorithm.
  • the Watershed algorithm, also known as watershed segmentation, is a morphological segmentation algorithm that imitates the immersion of a topographic map; in essence it uses the regional characteristics of the image to segment it, combining the advantages of edge detection and region growing to obtain a single-pixel-wide, connected, closed, and accurately located contour.
  • the technical solution of the present disclosure is not limited to using the Watershed algorithm to determine the boundary of the region of interest; other boundary detection algorithms can also be used.
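  • a minimal sketch of marker-based watershed boundary detection, assuming scikit-image and SciPy are available (the seeding scheme and function names below are illustrative assumptions, not part of the disclosure):

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def roi_boundary(slice_img: np.ndarray, mark: np.ndarray) -> np.ndarray:
        # The gradient magnitude serves as the topographic surface to flood.
        elevation = sobel(slice_img.astype(float))
        # Markers: 1 = background seed along the image border, 2 = ROI seed
        # taken from the mark found in step S22.
        markers = np.zeros(slice_img.shape, dtype=np.int32)
        markers[0, :] = markers[-1, :] = markers[:, 0] = markers[:, -1] = 1
        markers[mark] = 2
        labels = watershed(elevation, markers)
        roi = labels == 2
        # The boundary is the ROI minus its erosion: a one-pixel-wide,
        # connected, closed contour.
        return roi & ~ndi.binary_erosion(roi)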
  • S24. Store the boundary information of the region of interest of each two-dimensional cross-sectional image.
  • S25. Obtain the boundary centroid as the basis for processing the next two-dimensional cross-sectional image: since all cross-sectional images are of the same target part, adjacent images differ little, so using the boundary centroid in this way reduces the amount of data processing.
  • S26. Judge whether there is a next two-dimensional cross-sectional image; if yes, go to S21; if no, go to S27.
  • S27. Transform the boundary information into a point group in the first coordinate system.
  • S28. Generate the three-dimensional image of the target part from the point group.
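  • to make steps S25 and S27 concrete, a sketch of computing the boundary centroid and stacking the per-slice boundaries into a three-dimensional point group is given below; the names, the single pixel-spacing value, and the per-slice z coordinates are assumptions for illustration:

    import numpy as np

    def boundary_centroid(boundary: np.ndarray) -> tuple[float, float]:
        # Centroid (row, col) of a boundary mask, used to seed the next slice.
        rows, cols = np.nonzero(boundary)
        return rows.mean(), cols.mean()

    def boundaries_to_point_group(boundaries, z_coords, pixel_spacing):
        # Stack per-slice boundary pixels into an (N, 3) point group in the
        # first (imaging) coordinate system, using each slice's z coordinate.
        points = []
        for boundary, z in zip(boundaries, z_coords):
            rows, cols = np.nonzero(boundary)
            x = cols * pixel_spacing
            y = rows * pixel_spacing
            points.append(np.column_stack([x, y, np.full(rows.size, z)]))
        return np.vstack(points)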
  • the positioning device 12 of this embodiment may include:
  • a positioning mark attached to or arranged close to the target part;
  • at least two optical emitting devices at different positions for emitting specific light;
  • the positioning part is configured to receive the specific light reflected by the positioning mark, and determine the spatial position of the positioning mark in the third coordinate system according to the received specific light.
  • the target part can be positioned through the spatial position of the positioning mark and the positional relationship between the positioning mark and the target part, yielding the spatial position of the target part in the third coordinate system.
  • since the target part may be inside the patient's body, the specific light is preferably light that can penetrate human skin, such as infrared light.
  • specifically, the optical emitting devices may be at least two infrared probes located obliquely above the target part and capable of emitting infrared light; the positioning mark may be an infrared-reflective positioning ball, but is not limited thereto; and the positioning part may be two optical cameras located obliquely above the target part.
  • the optical emitting devices emit infrared light, the positioning mark reflects it, and the positioning part receives the infrared light reflected by the positioning mark; the position of the positioning mark can then be obtained accurately by triangulation.
  • the positioning mark needs to be attached to or arranged close to the target part in advance, for example installed on the bone of the patient that requires surgery.
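  • as an illustration of the triangulation step (the least-squares formulation and all names below are assumptions; the disclosure only states that triangulation is used), the marker position can be recovered as the point closest to the rays observed by two or more cameras:

    import numpy as np

    def triangulate(origins: np.ndarray, directions: np.ndarray) -> np.ndarray:
        # origins:    (N, 3) camera centers in the third coordinate system.
        # directions: (N, 3) unit vectors from each camera toward the marker.
        # Returns the point minimizing the summed squared distance to the rays.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
            A += P
            b += P @ o
        return np.linalg.solve(A, b)

    # Example with two cameras obliquely above the target part:
    origins = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
    dirs = np.array([[0.3, 0.1, -1.0], [-0.4, 0.1, -1.0]])
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    marker_pos = triangulate(origins, dirs)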
  • the robotic arm 15 can move with five or six degrees of freedom.
  • the robotic arm 15 may include: a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts; and a moving part that moves the rotating part along at least one of three axis directions.
  • the robotic arm 15 can be attached to an operating table for use.
  • the moving part may include:
  • a first-direction driving part that moves along a first axis direction;
  • a second-direction driving part connected to the first-direction driving part and moving along a second axis direction;
  • a third-direction driving part connected to the second-direction driving part and moving along a third axis direction;
  • wherein the first axis direction is perpendicular to the second axis direction, the second axis direction is perpendicular to the third axis direction, and the first axis direction is perpendicular to the third axis direction.
  • the rotating part includes:
  • a first rotation driving part, one end of which is connected to the third-direction driving part, rotating about a first rotation shaft; and
  • a second rotation driving part, one end of which is connected to the first rotation driving part, rotating about a second rotation shaft, the surgical device being attached to the second rotation driving part.
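  • the resulting five-degree-of-freedom structure (three mutually perpendicular translations plus two rotations) can be summarized by a forward-kinematics sketch; the choice of rotation axes and all names here are assumptions for illustration only:

    import numpy as np

    def rot_x(a: float) -> np.ndarray:
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(b: float) -> np.ndarray:
        c, s = np.cos(b), np.sin(b)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def tool_pose(dx, dy, dz, a, b, tool_offset):
        # Moving part: translations along the three perpendicular axes.
        # Rotating part: rotations about the two rotation shafts
        # (assumed here to be the x and y axes).
        R = rot_x(a) @ rot_y(b)
        p = np.array([dx, dy, dz]) + R @ np.asarray(tool_offset)
        return R, p  # orientation and position of the surgical device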
  • the surgical robot system may include one robotic arm 15 or multiple robotic arms 15.
  • in a specific embodiment, the surgical robot system can perform bone-setting surgery; it can include two robotic arms, and the surgical device is a gripper arranged at the front end of each robotic arm.
  • the surgical robot system of this embodiment includes:
  • a first robotic arm 201 and a second robotic arm 202 symmetrically arranged on opposite sides of the pillar 101 along a first direction, the ends of the first robotic arm 201 and the second robotic arm 202 each being provided with a gripper 2012 for grasping.
  • through the cooperation of the moving part and the rotating part, the gripper 2012 of the first robotic arm 201 and/or the second robotic arm 202 can be controlled to grasp the bone to be set, and the first robotic arm 201 and/or the second robotic arm 202 can be controlled to move so as to set the bone.
  • when a bone is dislocated or broken and needs to be set in place, two robotic arms must cooperate to perform the corresponding operations. In this embodiment, the first robotic arm 201 and the second robotic arm 202 are symmetrically arranged on opposite sides of the pillar 101, and the target part is located between the two ends of the C-shaped arm 100, that is, at the position facing the pillar 101, which facilitates setting the bone.
  • the movement direction and movement mode of the first robotic arm 201 and the second robotic arm 202 can be set according to actual needs, and the movements of the first robotic arm 201 and the second robotic arm 202 can be controlled jointly or independently.
  • the surgical robot system can also perform other types of surgery, such as removing lesions, and the robotic arm assembly can further include:
  • a third robotic arm 203 and a fourth robotic arm 204 arranged on opposite sides of the pillar 101 along the first direction, the ends of the third robotic arm 203 and/or the fourth robotic arm 204 being provided with a holding part 2032 for holding a scalpel.
  • through the cooperation of the moving part and the rotating part, the third robotic arm 203 and/or the fourth robotic arm 204 can be controlled to perform the corresponding type of surgical operation.
  • the surgical robot system of this embodiment may include only the first robotic arm 201 and the second robotic arm 202, only the third robotic arm 203 and the fourth robotic arm 204, or all of the first robotic arm 201, the second robotic arm 202, the third robotic arm 203, and the fourth robotic arm 204, so as to increase the functions of the surgical robot and expand its scope of application.
  • during surgery, the controller 14 can display the three-dimensional image of the target part in the user interface, and the doctor can study the displayed three-dimensional image and determine the surgical path by drawing a line on it.
  • the surgical robot system of this embodiment should also include at least one display connected to the controller 14.
  • the display can receive the three-dimensional image from the controller 14 and present it in the user interface.
  • one of the displays can be used to display the user interface, and the other displays can be used to display the scene.
  • after the doctor determines the surgical path, the controller 14 can control the robotic arm 15 to perform surgery along it. However, three different coordinate systems exist in the surgical robot system of this embodiment: the first coordinate system M of the imaging data, the second coordinate system R of the robotic arm 15, and the third coordinate system O of the positioning device 12. The surgical path determined by the doctor is based on the first coordinate system, so the coordinate transformation relationships between the different coordinate systems must be obtained and the three coordinate systems aligned; only then can the surgical path planned by the doctor on the basis of the three-dimensional image be transmitted to the robotic arm 15 so that the robotic arm 15 operates at the corresponding position of the target part.
  • when establishing the coordinate transformation relationships, the vector space directions of the three coordinate systems can first be unified. The positioning part of the positioning device 12 is located above the entire system, so its XYZ directions obviously differ from those of the imaging device 11, which sits at the same height as the rest of the system; likewise, the XYZ directions of the positioning device 12 differ from those of the robotic arm 15. The three can be unified by coordinate matrix transposition (the corresponding formula image is omitted here).
  • the coordinate system processor 13 may select at least four first reference points from the first coordinate system, determine the first coordinate values of the at least four first reference points in the first coordinate system and the second coordinate values in the second coordinate system, and calculate from the first and second coordinate values a first conversion function characterizing the first coordinate transformation relationship; it may select at least four second reference points from the second coordinate system, determine the third coordinate values of the at least four second reference points in the second coordinate system and the fourth coordinate values in the third coordinate system, and calculate from the third and fourth coordinate values a second conversion function characterizing the second coordinate transformation relationship.
  • in a specific example, reference point coordinates are given for the second, first, and third coordinate systems (the formula images are omitted here). The conversion between coordinate system O and coordinate system M can be realized first, and then the conversion between coordinate system M and coordinate system R; two conversion functions are thereby obtained, one between coordinate system R and coordinate system M and one between coordinate system O and coordinate system M.
  • in this way, after the user inputs a surgical path based on the first coordinate system, the controller 14 can transform the surgical path into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm 15 to perform surgery on the target part according to the second coordinate trajectory.
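  • a sketch of this chained mapping is shown below, assuming the two transformation relationships have already been obtained as 4x4 homogeneous matrices (the matrix representation and the names T_M_to_R and T_R_to_O are assumptions for illustration):

    import numpy as np

    def to_homogeneous(points: np.ndarray) -> np.ndarray:
        return np.hstack([points, np.ones((len(points), 1))])

    def transform_path(path_M: np.ndarray, T_M_to_R: np.ndarray,
                       T_R_to_O: np.ndarray):
        # Surgical path in imaging system M -> first trajectory in arm
        # system R -> second trajectory in positioning system O.
        first_traj = (T_M_to_R @ to_homogeneous(path_M).T).T[:, :3]
        second_traj = (T_R_to_O @ to_homogeneous(first_traj).T).T[:, :3]
        return first_traj, second_traj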
  • with the technical solution of this embodiment, the doctor can control the surgical robot system to accurately perform a variety of precise operations on diseased organs that cannot be performed manually, with the advantages of minimal trauma, low risk of infection, and flexible surgical approaches.
  • the embodiment of the present disclosure also provides a control method of a surgical robot system, as shown in FIG. 7, including:
  • the imaging device scans a target part to be operated on to obtain imaging data of the target part;
  • the positioning device positions the target part;
  • the coordinate system processor generates and stores a first coordinate transformation relationship that transforms coordinates from the first coordinate system of the imaging data to the second coordinate system of the robotic arm, and generates and stores a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm to the third coordinate system of the positioning device;
  • the controller generates a three-dimensional image of the target part in the first coordinate system from the imaging data, displays the three-dimensional image through a user interface, receives a surgical path input by the user, and controls the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
  • in this embodiment, the imaging device is used to obtain the imaging data of the target part, from which a three-dimensional image of the target part can be generated and displayed; a user such as a doctor can input a surgical path on the basis of the displayed three-dimensional image, so that the surgical robot performs surgery on the target part along the surgical path indicated by the user.
  • the doctor can control the surgical robot through the user interface, which can improve the precision of the operation, simplify the control method, and reduce the operation error.
  • the target part is an object that can be treated with the surgical device, such as a bone that needs to be set or a lesion that needs to be removed.
  • the target part can be located inside the patient's body or on its outer surface, and includes but is not limited to bones, joints, and internal organs.
  • the imaging device 11 can be used to scan the target part to obtain imaging data of the target part.
  • the imaging data may include multiple two-dimensional cross-sectional images of the target part.
  • the imaging device 11 can be, but is not limited to, X-ray equipment, ultrasound equipment, computed tomography (CT) equipment, positron emission tomography (PET) equipment, and the like.
  • the imaging device 11 is used to scan the target part continuously to obtain multiple two-dimensional cross-sectional images and their spatial coordinates.
  • the controller 14 performs spatial reconstruction of the target part in the first coordinate system according to the imaging data and establishes the geometric configuration of the target part in the first coordinate system, so that the doctor can learn the detailed information of the target part and plan the surgical path accordingly.
  • the imaging device 11 may include:
  • a base;
  • a C-shaped arm 100 arranged on the base, both ends of which are respectively provided with a ray emitting part 103 and a ray receiving part 104; as shown in FIG. 3, when the target part 302 is scanned, it is located between the ray emitting part 103 and the ray receiving part 104.
  • the steps of obtaining the imaging data of the target part include:
  • the ray emitting part 103 emits rays;
  • the ray receiving part 104 receives the rays passing through the target part and generates the imaging data according to the received ray information.
  • during spatial reconstruction, the controller 14 acquires a region of interest in one of the multiple two-dimensional cross-sectional images and determines its boundary; this is repeated until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired; geometric construction is then performed according to these boundaries to obtain the three-dimensional image of the target part.
  • as shown in FIG. 5, the process of generating a three-dimensional image specifically includes the following steps:
  • S11. The controller receives the imaging data and acquires a region of interest in one of the two-dimensional cross-sectional images.
  • besides the target part, a two-dimensional cross-sectional image may also contain image information of other parts, while the operation only requires information about the target part. The region of interest requiring surgery can therefore be distinguished first, so that the doctor can understand the target part more intuitively without being misled by images of other parts, and the amount of image data to be processed is also reduced.
  • specifically, the region of interest requiring surgery can be determined manually by the doctor, or selected from the two-dimensional cross-sectional image with the aid of a neural network.
  • S12. Determine the boundary of the region of interest: the boundary of the region of interest in every two-dimensional cross-sectional image must be determined.
  • S13. Judge whether all two-dimensional cross-sectional images have been processed; if yes, go to S14; if no, go to S15.
  • S14. Perform geometric construction according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part. Specifically, the controller 14 may transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and generate the three-dimensional image of the target part from the point group.
  • S15. Process the next two-dimensional cross-sectional image and go to step S11.
  • as shown in FIG. 6, generating the three-dimensional image of the target part from the point group specifically includes the following steps:
  • S21. Acquire the region of interest in a two-dimensional cross-sectional image.
  • S22. Find a mark: after the region of interest requiring surgery has been determined manually or with a neural network, the Otsu algorithm can be used to find a mark of the region of interest, on the basis of which boundary detection can be performed; other algorithms can also be used to find the mark.
  • S23. Perform boundary detection on the region of interest: the boundary of the region of interest in each two-dimensional cross-sectional image needs to be determined, which can be done with the Watershed algorithm described above; other boundary detection algorithms can also be used.
  • S24. Store the boundary information of the region of interest of each two-dimensional cross-sectional image.
  • S25. Obtain the boundary centroid as the basis for processing the next two-dimensional cross-sectional image: since adjacent cross-sectional images of the same target part differ little, this reduces the amount of data processing.
  • S26. Judge whether there is a next two-dimensional cross-sectional image; if yes, go to S21; if no, go to S27.
  • S27. Transform the boundary information into a point group in the first coordinate system.
  • S28. Generate the three-dimensional image of the target part from the point group.
  • the positioning device 12 of this embodiment may include:
  • a positioning mark attached to or arranged close to the target part;
  • at least two optical emitting devices at different positions for emitting specific light;
  • Positioning the target part includes:
  • the positioning part receives the specific light reflected by the positioning mark and determines the spatial position of the positioning mark in the third coordinate system from the received specific light.
  • the target part can be positioned through the spatial position of the positioning mark and the positional relationship between the positioning mark and the target part, yielding the spatial position of the target part in the third coordinate system.
  • since the target part may be inside the patient's body, the specific light is preferably light that can penetrate human skin, such as infrared light.
  • specifically, the optical emitting devices may be at least two infrared probes located obliquely above the target part and capable of emitting infrared light; the positioning mark may be an infrared-reflective positioning ball, but is not limited thereto; and the positioning part may be two optical cameras located obliquely above the target part.
  • the optical emitting devices emit infrared light, the positioning mark reflects it, and the positioning part receives the infrared light reflected by the positioning mark; the position of the positioning mark can then be obtained accurately by triangulation.
  • the positioning mark needs to be attached to or arranged close to the target part in advance, for example installed on the bone of the patient that requires surgery.
  • the robotic arm 15 can move with five or six degrees of freedom.
  • the robotic arm 15 may include: a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts; and a moving part that moves the rotating part along at least one of three axis directions.
  • the robotic arm 15 can be attached to an operating table for use.
  • the moving part may include:
  • a first-direction driving part that moves along a first axis direction;
  • a second-direction driving part connected to the first-direction driving part and moving along a second axis direction;
  • a third-direction driving part connected to the second-direction driving part and moving along a third axis direction;
  • wherein the first axis direction is perpendicular to the second axis direction, the second axis direction is perpendicular to the third axis direction, and the first axis direction is perpendicular to the third axis direction.
  • the rotating part includes:
  • a first rotation driving part, one end of which is connected to the third-direction driving part, rotating about a first rotation shaft; and
  • a second rotation driving part, one end of which is connected to the first rotation driving part, rotating about a second rotation shaft, the surgical device being attached to the second rotation driving part.
  • the steps of performing surgery on the target part include:
  • the controller moves the rotating part along at least one of the three axis directions by means of the moving part;
  • the controller rotates the surgical device about at least one of the two rotation shafts by means of the rotating part.
  • during surgery, the controller 14 can display the three-dimensional image of the target part in the user interface, and the doctor can study the displayed three-dimensional image and determine the surgical path by drawing a line on it.
  • the surgical robot system of this embodiment should also include at least one display connected to the controller 14.
  • the display can receive the three-dimensional image from the controller 14 and present it in the user interface.
  • one of the displays can be used to display the user interface, and the other displays can be used to display the scene.
  • after the doctor determines the surgical path, the controller 14 can control the robotic arm 15 to perform surgery along it. However, three different coordinate systems exist in the surgical robot system of this embodiment: the first coordinate system M of the imaging data, the second coordinate system R of the robotic arm 15, and the third coordinate system O of the positioning device 12. The surgical path determined by the doctor is based on the first coordinate system, so the coordinate transformation relationships between the different coordinate systems must be obtained and the three coordinate systems aligned; only then can the surgical path planned by the doctor on the basis of the three-dimensional image be transmitted to the robotic arm 15 so that the robotic arm 15 operates at the corresponding position of the target part.
  • when establishing the coordinate transformation relationships, the vector space directions of the three coordinate systems can first be unified. The positioning part of the positioning device 12 is located above the entire system, so its XYZ directions obviously differ from those of the imaging device 11, which sits at the same height as the rest of the system; likewise, the XYZ directions of the positioning device 12 differ from those of the robotic arm 15. The three can be unified by coordinate matrix transposition (the corresponding formula image is omitted here).
  • the coordinate system processor 13 may select at least four first reference points from the first coordinate system, determine the first coordinate values of the at least four first reference points in the first coordinate system and the second coordinate values in the second coordinate system, and calculate from the first and second coordinate values a first conversion function characterizing the first coordinate transformation relationship; it may select at least four second reference points from the second coordinate system, determine the third coordinate values of the at least four second reference points in the second coordinate system and the fourth coordinate values in the third coordinate system, and calculate from the third and fourth coordinate values a second conversion function characterizing the second coordinate transformation relationship.
  • in a specific example, reference point coordinates are given for the second, first, and third coordinate systems (the formula images are omitted here). The conversion between coordinate system O and coordinate system M can be realized first, and then the conversion between coordinate system M and coordinate system R; two conversion functions are thereby obtained, one between coordinate system R and coordinate system M and one between coordinate system O and coordinate system M.
  • in this way, after the user inputs a surgical path based on the first coordinate system, the controller 14 can transform the surgical path into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm 15 to perform surgery on the target part according to the second coordinate trajectory.
  • with the technical solution of this embodiment, the doctor can control the surgical robot system to accurately perform a variety of precise operations on diseased organs that cannot be performed manually, with the advantages of minimal trauma, low risk of infection, and flexible surgical approaches.
  • the embodiment of the present disclosure also provides a control device of a surgical robot system, including a memory, a processor, and a computer program stored in the memory and executable on the processor;
  • when the computer program is executed by the processor, the steps of the control method of the surgical robot system described above are implemented.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the control method of the surgical robot system described above are implemented.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)

Abstract

The present disclosure provides a surgical robot system and a control method thereof, belonging to the technical field of medical devices. The surgical robot system includes: an imaging device that obtains imaging data of a target part; a robotic arm whose front end carries a surgical device; a positioning device for positioning the target part; a coordinate system processor for generating and storing a first coordinate transformation relationship that transforms coordinates from the first coordinate system of the imaging data to the second coordinate system of the robotic arm, and a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm to the third coordinate system of the positioning device; and a controller for generating a three-dimensional image of the target part in the first coordinate system from the imaging data, displaying the three-dimensional image through a user interface, receiving a surgical path input by the user, and controlling the robotic arm to perform surgery on the target part according to the surgical path and the coordinate transformation relationships. The surgical robot of the present disclosure can perform precise surgery.

Description

SURGICAL ROBOT SYSTEM AND CONTROL METHOD THEREOF
Technical Field
The present disclosure relates to the technical field of medical devices, and in particular to a surgical robot system and a control method thereof.
Background
A variety of positioning surgical instruments are known. However, when a surgical instrument is positioned manually, the user has to determine the position of the surgical tool and move it by hand, so errors easily arise between the position of the affected part and the surgical tool.
Summary
The embodiments of the present disclosure provide a surgical robot system and a control method thereof, which can be used to perform a variety of precise operations on diseased organs that cannot be performed manually, with the advantages of minimal trauma, low risk of infection, and flexible surgical approaches.
To solve the above technical problem, the embodiments of the present disclosure provide the following technical solutions:
In one aspect, a surgical robot system is provided, including:
an imaging device for scanning a target part to be operated on to obtain imaging data of the target part;
a robotic arm, the front end of which carries a surgical device;
a positioning device for positioning the target part;
a coordinate system processor for generating and storing a first coordinate transformation relationship that transforms coordinates from a first coordinate system of the imaging data to a second coordinate system of the robotic arm, and for generating and storing a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm to a third coordinate system of the positioning device;
a controller for generating a three-dimensional image of the target part in the first coordinate system from the imaging data, displaying the three-dimensional image through a user interface, receiving a surgical path input by a user, and controlling the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
Optionally, the controller is specifically configured to transform the surgical path input by the user through the user interface into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm to perform surgery on the target part according to the second coordinate trajectory.
Optionally, the imaging data includes multiple two-dimensional cross-sectional images of the target part, and the controller is specifically configured to acquire a region of interest in one of the two-dimensional cross-sectional images and determine the boundary of the region of interest; to repeat the above steps until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired; and to perform geometric construction according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain a three-dimensional image of the target part.
Optionally, the controller is specifically configured to transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and generate the three-dimensional image of the target part from the point group.
Optionally, the coordinate system processor is specifically configured to select at least four first reference points from the first coordinate system, determine first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values in the second coordinate system, and calculate from the first and second coordinate values a first conversion function characterizing the first coordinate transformation relationship; and to select at least four second reference points from the second coordinate system, determine third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values in the third coordinate system, and calculate from the third and fourth coordinate values a second conversion function characterizing the second coordinate transformation relationship.
Optionally, the imaging device includes:
a base;
a C-shaped arm arranged on the base, both ends of which are respectively provided with a ray emitting part and a ray receiving part; when the target part is scanned, the target part is located between the ray emitting part and the ray receiving part.
Optionally, the positioning device includes:
a positioning mark attached to or arranged close to the target part;
at least two optical emitting devices at different positions for emitting specific light;
a positioning part for receiving the specific light reflected by the positioning mark and determining the spatial position of the positioning mark in the third coordinate system from the received specific light.
Optionally, the robotic arm includes:
a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts;
a moving part that moves the rotating part along at least one of three axis directions.
Optionally, the moving part includes:
a first-direction driving part that moves along a first axis direction;
a second-direction driving part connected to the first-direction driving part and moving along a second axis direction;
a third-direction driving part connected to the second-direction driving part and moving along a third axis direction;
and the rotating part includes:
a first rotation driving part, one end of which is connected to the third-direction driving part, rotating about a first rotation shaft; and
a second rotation driving part, one end of which is connected to the first rotation driving part, rotating about a second rotation shaft, the surgical device being attached to the second rotation driving part.
An embodiment of the present disclosure further provides a control method of a surgical robot system, including:
the imaging device scans a target part to be operated on to obtain imaging data of the target part;
the positioning device positions the target part;
the coordinate system processor generates and stores a first coordinate transformation relationship that transforms coordinates from the first coordinate system of the imaging data to the second coordinate system of the robotic arm, and generates and stores a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm to the third coordinate system of the positioning device;
the controller generates a three-dimensional image of the target part in the first coordinate system from the imaging data, displays the three-dimensional image through a user interface, receives a surgical path input by the user, and controls the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
Optionally, controlling the robotic arm to perform surgery on the target part includes:
the controller transforms the surgical path input by the user through the user interface into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transforms the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and controls the robotic arm to perform surgery on the target part according to the second coordinate trajectory.
Optionally, the imaging data includes multiple two-dimensional cross-sectional images of the target part, and generating the three-dimensional image includes:
the controller acquires a region of interest in one of the two-dimensional cross-sectional images and determines the boundary of the region of interest; the above steps are repeated until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired; geometric construction is performed according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain a three-dimensional image of the target part.
Optionally, generating the three-dimensional image of the target part in the first coordinate system includes:
the controller transforms the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and generates the three-dimensional image of the target part from the point group.
Optionally, generating the first coordinate transformation relationship and the second coordinate transformation relationship includes:
the coordinate system processor selects at least four first reference points from the first coordinate system, determines first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values in the second coordinate system, and calculates from the first and second coordinate values a first conversion function characterizing the first coordinate transformation relationship; it selects at least four second reference points from the second coordinate system, determines third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values in the third coordinate system, and calculates from the third and fourth coordinate values a second conversion function characterizing the second coordinate transformation relationship.
Optionally, the imaging device includes: a base; a C-shaped arm arranged on the base, both ends of which are respectively provided with a ray emitting part and a ray receiving part, the target part being located between the ray emitting part and the ray receiving part when it is scanned; obtaining the imaging data of the target part includes:
the ray emitting part emits rays;
the ray receiving part receives the rays passing through the target part and generates the imaging data according to the received ray information.
Optionally, the positioning device includes: a positioning mark attached to or arranged close to the target part; at least two optical emitting devices at different positions for emitting specific light; and positioning the target part includes:
the positioning part receives the specific light reflected by the positioning mark and determines the spatial position of the positioning mark in the third coordinate system from the received specific light.
Optionally, the robotic arm includes: a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts; and a moving part that moves the rotating part along at least one of three axis directions;
and performing surgery on the target part includes:
the controller moves the rotating part along at least one of the three axis directions by means of the moving part;
the controller rotates the surgical device about at least one of the two rotation shafts by means of the rotating part.
An embodiment of the present disclosure further provides a control device of a surgical robot system, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the computer program is executed by the processor, the steps of the control method of the surgical robot system described above are implemented.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the control method of the surgical robot system described above are implemented.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a surgical robot system according to an embodiment of the present disclosure;
FIGS. 2 and 3 are schematic structural diagrams of an imaging device according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the composition of a surgical robot according to an embodiment of the present disclosure;
FIGS. 5 and 6 are schematic flowcharts of generating a three-dimensional image according to an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of a control method of a surgical robot system according to an embodiment of the present disclosure.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the advantages of the embodiments of the present disclosure clearer, a detailed description is given below in conjunction with the accompanying drawings and specific embodiments.
The embodiments of the present disclosure provide a surgical robot system and a control method thereof, which can be used to perform a variety of precise operations on diseased organs that cannot be performed manually, with the advantages of minimal trauma, low risk of infection, and flexible surgical approaches.
An embodiment of the present disclosure provides a surgical robot system, as shown in FIG. 1, including:
an imaging device 11 for scanning a target part to be operated on to obtain imaging data of the target part;
a robotic arm 15, the front end of which carries a surgical device;
a positioning device 12 for positioning the target part;
a coordinate system processor 13 for generating and storing a first coordinate transformation relationship that transforms coordinates from a first coordinate system of the imaging data to a second coordinate system of the robotic arm 15, and for generating and storing a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm 15 to a third coordinate system of the positioning device 12;
a controller 14 for generating a three-dimensional image of the target part in the first coordinate system from the imaging data, displaying the three-dimensional image through a user interface, receiving a surgical path input by the user, and controlling the robotic arm 15 to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship, and the second coordinate transformation relationship.
In this embodiment, the imaging device 11 is used to obtain the imaging data of the target part, from which a three-dimensional image of the target part can be generated and displayed. A user such as a doctor can input a surgical path on the basis of the displayed three-dimensional image, so that the surgical robot performs surgery on the target part along the surgical path indicated by the user. With the technical solution of this embodiment, the doctor can control the surgical robot through the user interface, which improves surgical precision, simplifies the control method, and reduces operating errors.
The target part is an object that can be treated with the surgical device, such as a bone that needs to be set or a lesion that needs to be removed; it can be located inside the patient's body or on its outer surface, and includes but is not limited to bones, joints, and internal organs. Before surgery, detailed information about the target part is needed, and the imaging device 11 can be used to scan the target part to obtain its imaging data; specifically, the imaging data can include multiple two-dimensional cross-sectional images of the target part. The imaging device 11 can be, but is not limited to, X-ray equipment, ultrasound equipment, computed tomography (CT) equipment, positron emission tomography (PET) equipment, and the like. The imaging device 11 scans the target part continuously to obtain multiple two-dimensional cross-sectional images and their spatial coordinates; the controller 14 performs spatial reconstruction of the target part in the first coordinate system according to the imaging data and establishes the geometric configuration of the target part in the first coordinate system, so that the doctor can learn the detailed information of the target part and plan the surgical path accordingly.
In a specific embodiment, as shown in FIG. 2, the imaging device 11 may include:
a base;
a C-shaped arm 100 arranged on the base, both ends of which are respectively provided with a ray emitting part 103 and a ray receiving part 104; as shown in FIG. 3, when the target part 302 is scanned, it is located between the ray emitting part 103 and the ray receiving part 104.
As shown in FIG. 4, in this embodiment the base includes a supporting table 205 and a pillar 101 arranged perpendicular to the supporting table 205; the middle of the C-shaped arm 100 can be mounted on the pillar 101 through a movable structure, so that the C-shaped arm 100 can move.
The supporting table 205 may be a T-shaped structure composed of a first supporting portion 2051 and a second supporting portion 2052 that are perpendicular to each other, and the bottoms of the first supporting portion 2051 and the second supporting portion 2052 are respectively provided with universal wheels 206.
During spatial reconstruction, the controller 14 acquires a region of interest in one of the multiple two-dimensional cross-sectional images and determines its boundary; this is repeated until the boundaries of the regions of interest of all two-dimensional cross-sectional images have been acquired; geometric construction is then performed according to these boundaries to obtain the three-dimensional image of the target part.
As shown in FIG. 5, the process of generating the three-dimensional image specifically includes the following steps:
S11. The controller receives the imaging data and acquires a region of interest in one of the two-dimensional cross-sectional images.
Besides the target part, a two-dimensional cross-sectional image may also contain image information of other parts, while the operation only requires information about the target part. The region of interest requiring surgery can therefore be distinguished first, so that the doctor can understand the target part more intuitively without being misled by images of other parts, and the amount of image data to be processed is also reduced. Specifically, the region of interest requiring surgery can be determined manually by the doctor, or selected from the two-dimensional cross-sectional image with the aid of a neural network.
S12. Determine the boundary of the region of interest.
After the region of interest requiring surgery has been determined manually or with a neural network, its boundary must be determined; specifically, the boundary of the region of interest in every two-dimensional cross-sectional image must be determined.
S13. Judge whether all two-dimensional cross-sectional images have been processed; if yes, go to S14; if no, go to S15.
S14. Perform geometric construction according to the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part.
Specifically, the controller 14 may transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point group in the first coordinate system and generate the three-dimensional image of the target part from the point group.
S15. Process the next two-dimensional cross-sectional image and go to step S11.
As shown in FIG. 6, generating the three-dimensional image of the target part from the point group specifically includes the following steps:
S21. Acquire the region of interest in a two-dimensional cross-sectional image.
S22. Find a mark.
Specifically, after the region of interest requiring surgery has been determined manually by the doctor or selected with a neural network, the Otsu algorithm can be used to find a mark of the region of interest, on the basis of which boundary detection of the region of interest can be performed. Of course, the technical solution of the present disclosure is not limited to the Otsu algorithm; other algorithms can also be used to find the mark of the region of interest.
S23. Perform boundary detection on the region of interest.
Specifically, the boundary of the region of interest in every two-dimensional cross-sectional image must be determined, which can be done with the Watershed algorithm. The Watershed algorithm, also known as watershed segmentation, is a morphological segmentation algorithm that imitates the immersion of a topographic map; in essence it uses the regional characteristics of the image to segment it, combining the advantages of edge detection and region growing to obtain single-pixel-wide, connected, closed, and accurately located contours. Of course, the technical solution of the present disclosure is not limited to the Watershed algorithm; other boundary detection algorithms can also be used to determine the boundary of the region of interest.
S24. Store the boundary information.
The boundary information of the region of interest in every two-dimensional cross-sectional image is stored.
S25. Obtain the boundary centroid as the basis for processing the next two-dimensional cross-sectional image.
Since all two-dimensional cross-sectional images are of the same target part, adjacent images differ little; to reduce the amount of data processing, the boundary centroid can be used as the basis for processing the next two-dimensional cross-sectional image.
S26. Judge whether there is a next two-dimensional cross-sectional image; if yes, go to S21; if no, go to S27.
S27. Transform the boundary information into a point group in the first coordinate system.
S28. Generate the three-dimensional image of the target part from the point group.
To position the target part, the positioning device 12 of this embodiment may include:
a positioning mark attached to or arranged close to the target part;
at least two optical emitting devices at different positions for emitting specific light;
a positioning part for receiving the specific light reflected by the positioning mark and determining the spatial position of the positioning mark in the third coordinate system from the received specific light.
The target part can be positioned through the spatial position of the positioning mark and the positional relationship between the positioning mark and the target part, yielding the spatial position of the target part in the third coordinate system.
Since the target part may be inside the patient's body, the specific light is preferably light that can penetrate human skin, such as infrared light. Specifically, the optical emitting devices may be at least two infrared probes located obliquely above the target part and capable of emitting infrared light; the positioning mark may be an infrared-reflective positioning ball, but is not limited thereto; and the positioning part may be two optical cameras located obliquely above the target part. The optical emitting devices emit infrared light, the positioning mark reflects it, and the positioning part receives the infrared light reflected by the positioning mark; the position of the positioning mark can then be obtained accurately by triangulation. The positioning mark needs to be attached to or arranged close to the target part in advance, for example installed on the bone of the patient that requires surgery.
In this embodiment, the robotic arm 15 can move with five or six degrees of freedom. The robotic arm 15 may include: a rotating part that carries the surgical device and rotates it about at least one of two rotation shafts; and a moving part that moves the rotating part along at least one of three axis directions. The robotic arm 15 can be attached to an operating table for use.
Specifically, the moving part may include:
a first-direction driving part that moves along a first axis direction;
a second-direction driving part connected to the first-direction driving part and moving along a second axis direction;
a third-direction driving part connected to the second-direction driving part and moving along a third axis direction;
wherein the first axis direction is perpendicular to the second axis direction, the second axis direction is perpendicular to the third axis direction, and the first axis direction is perpendicular to the third axis direction.
The rotating part includes:
a first rotation driving part, one end of which is connected to the third-direction driving part, rotating about a first rotation shaft; and
a second rotation driving part, one end of which is connected to the first rotation driving part, rotating about a second rotation shaft, the surgical device being attached to the second rotation driving part.
In this embodiment, the surgical robot system may include one robotic arm 15 or multiple robotic arms 15.
In a specific embodiment, the surgical robot system can perform bone-setting surgery; it can include two robotic arms, and the surgical device is a gripper arranged at the front end of each robotic arm. Specifically, as shown in FIG. 4, the surgical robot system of this embodiment includes:
a first robotic arm 201 and a second robotic arm 202 symmetrically arranged on opposite sides of the pillar 101 along a first direction, the ends of the first robotic arm 201 and the second robotic arm 202 each being provided with a gripper 2012 for grasping.
Through the cooperation of the moving part and the rotating part, the gripper 2012 of the first robotic arm 201 and/or the second robotic arm 202 can be controlled to grasp the bone to be set, and the first robotic arm 201 and/or the second robotic arm 202 can be controlled to move so as to set the bone.
With the technical solution of this embodiment, automatic bone setting can be realized: the moving part and the rotating part can control the movement of the first robotic arm 201 and/or the second robotic arm 202 to adjust the bone position for precise alignment.
When a bone is dislocated or broken and needs to be set in place, two robotic arms must cooperate to perform the corresponding operations. In this embodiment, the first robotic arm 201 and the second robotic arm 202 are symmetrically arranged on opposite sides of the pillar 101, and the target part is located between the two ends of the C-shaped arm 100, that is, at the position facing the pillar 101, which facilitates setting the bone.
In this embodiment, the movement direction and movement mode of the first robotic arm 201 and the second robotic arm 202 can be set according to actual needs, and their movements can be controlled jointly or independently.
In this embodiment, the surgical robot system can also perform other types of surgery, such as removing lesions, and the robotic arm assembly may further include:
a third robotic arm 203 and a fourth robotic arm 204 arranged on opposite sides of the pillar 101 along the first direction, the ends of the third robotic arm 203 and/or the fourth robotic arm 204 being provided with a holding part 2032 for holding a scalpel.
Through the cooperation of the moving part and the rotating part, the third robotic arm 203 and/or the fourth robotic arm 204 can be controlled to perform the corresponding type of surgical operation.
It should be noted that the surgical robot system of this embodiment may include only the first robotic arm 201 and the second robotic arm 202, only the third robotic arm 203 and the fourth robotic arm 204, or all of the first robotic arm 201, the second robotic arm 202, the third robotic arm 203, and the fourth robotic arm 204, so as to increase the functions of the surgical robot and expand its scope of application.
During surgery, the controller 14 can display the three-dimensional image of the target part in the user interface, and the doctor can study the displayed three-dimensional image and determine the surgical path by drawing a line on it. The surgical robot system of this embodiment should also include at least one display connected to the controller 14; the display can receive the three-dimensional image from the controller 14 and present it in the user interface. When the surgical robot system includes multiple displays, one of them can be used to display the user interface and the others can be used to display the scene.
After the doctor determines the surgical path, the controller 14 can control the robotic arm 15 to perform surgery along it. However, three different coordinate systems exist in the surgical robot system of this embodiment: the first coordinate system M of the imaging data, the second coordinate system R of the robotic arm 15, and the third coordinate system O of the positioning device 12. The surgical path determined by the doctor is based on the first coordinate system, so the coordinate transformation relationships between the different coordinate systems must be obtained and the three coordinate systems aligned; only then can the surgical path planned by the doctor on the basis of the three-dimensional image be transmitted to the robotic arm 15 so that the robotic arm 15 operates at the corresponding position of the target part.
When establishing the coordinate transformation relationships between the different coordinate systems, the vector space directions of the three coordinate systems can first be unified. This is because the positioning unit of the positioning device 12 is located above the whole system, so its XYZ directions obviously differ from those of the imaging device 11 located at the same height on the side of the system; likewise, the XYZ directions of the positioning device 12 differ from those of the robotic arm 15. As shown below, the vector space directions of the three coordinate systems can be unified by transposing the coordinate matrices, for example by an axis realignment of the form

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = A \begin{bmatrix} x \\ y \\ z \end{bmatrix},$$

where $A$ is a permutation/sign matrix mapping the axes of each device's frame onto a common orientation.
A three-dimensional configuration can be determined by multiple coordinate points, and coordinate system conversion can be achieved on this principle. The coordinate system processor 13 may select at least four first reference points from the first coordinate system, determine first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values thereof in the second coordinate system, and compute, from the first coordinate values and the second coordinate values, a first conversion function characterizing the first coordinate transformation relationship; and select at least four second reference points from the second coordinate system, determine third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values thereof in the third coordinate system, and compute, from the third coordinate values and the fourth coordinate values, a second conversion function characterizing the second coordinate transformation relationship.
In a specific example, the reference point coordinates in the second coordinate system are

$$R = \begin{bmatrix} x_1^R & x_2^R & x_3^R & x_4^R \\ y_1^R & y_2^R & y_3^R & y_4^R \\ z_1^R & z_2^R & z_3^R & z_4^R \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

the reference point coordinates in the first coordinate system are

$$M = \begin{bmatrix} x_1^M & x_2^M & x_3^M & x_4^M \\ y_1^M & y_2^M & y_3^M & y_4^M \\ z_1^M & z_2^M & z_3^M & z_4^M \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

and the reference point coordinates in the third coordinate system are

$$O = \begin{bmatrix} x_1^O & x_2^O & x_3^O & x_4^O \\ y_1^O & y_2^O & y_3^O & y_4^O \\ z_1^O & z_2^O & z_3^O & z_4^O \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

each column being one reference point in homogeneous coordinates. The conversion between coordinate system O and coordinate system M can be realized first, and then the conversion between coordinate system M and coordinate system R, so that two conversion functions are obtained:

$$M = T_R^M R, \qquad M = T_O^M O,$$

where $T_R^M$ is the conversion function between coordinate system R and coordinate system M, and $T_O^M$ is the conversion function between coordinate system O and coordinate system M, each a $4 \times 4$ homogeneous transformation matrix. The parameters in $T_R^M$ and $T_O^M$ in the above conversion functions can be solved by Gaussian elimination; after solving, the conversion function $T_R^M$ between coordinate system R and coordinate system M and the conversion function $T_O^M$ between coordinate system O and coordinate system M are obtained. From these two conversion functions, the conversion function between coordinate system R and coordinate system O can also be obtained:

$$T_R^O = \left( T_O^M \right)^{-1} T_R^M,$$

so that the coordinate transformation relationship between any two coordinate systems can be obtained.
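A minimal sketch of this registration step: with the same four (non-coplanar) reference points known in two frames, the 4x4 transform is obtained by Gaussian elimination (numpy's linear solver); the function and variable names are illustrative:

    import numpy as np

    def solve_transform(src, dst):
        # src, dst: 4x3 arrays of the same four reference points expressed in
        # two coordinate systems; the points must not be coplanar. Returns the
        # 4x4 homogeneous T such that dst_h = T @ src_h.
        src_h = np.vstack([np.asarray(src).T, np.ones(4)])  # columns = homogeneous points
        dst_h = np.vstack([np.asarray(dst).T, np.ones(4)])
        # T @ src_h = dst_h  <=>  src_h.T @ T.T = dst_h.T, solved by LU
        # factorization (Gaussian elimination) column by column.
        return np.linalg.solve(src_h.T, dst_h.T).T

    # Chaining the two solved transforms as in the text:
    # T_RM = solve_transform(pts_R, pts_M)   # coordinate system R -> M
    # T_OM = solve_transform(pts_O, pts_M)   # coordinate system O -> M
    # T_RO = np.linalg.inv(T_OM) @ T_RM      # coordinate system R -> O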
In this way, after the user inputs the surgical path based on the first coordinate system through the user interface, the controller 14 can transform the surgical path into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm 15 to perform surgery on the target part according to the second coordinate trajectory.
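A minimal sketch of this hand-off, reusing T_RM and T_OM from the registration sketch above; path_M is an Nx3 array of waypoints drawn on the three-dimensional image (first coordinate system), and all names are illustrative assumptions:

    import numpy as np

    def transform_path(path_M, T_RM, T_OM):
        # Returns the first coordinate trajectory (second coordinate system R)
        # and the second coordinate trajectory (third coordinate system O).
        H = np.hstack([path_M, np.ones((len(path_M), 1))]).T  # 4xN homogeneous
        path_R = (np.linalg.inv(T_RM) @ H)[:3].T              # M -> R
        path_O = (np.linalg.inv(T_OM) @ H)[:3].T              # M -> O
        return path_R, path_O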
Through the technical solution of this embodiment, the doctor can control the surgical robot system to accurately perform a variety of precise operations on focal organs that cannot be performed manually, with the advantages of small trauma, low infectivity and flexible surgical methods.
An embodiment of the present disclosure further provides a control method of a surgical robot system, as shown in FIG. 7, including:
S1. The imaging device scans the target part to be operated on to obtain imaging data of the target part.
S2. The positioning device positions the target part.
S3. The coordinate system processor generates and stores a first coordinate transformation relationship that transforms coordinates from the first coordinate system of the imaging data into the second coordinate system of the robotic arm, and generates and stores a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm into the third coordinate system of the positioning device.
S4. The controller generates a three-dimensional image of the target part in the first coordinate system from the imaging data, displays the three-dimensional image through the user interface, receives the surgical path input by the user, and controls the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship and the second coordinate transformation relationship.
In this embodiment, the imaging device obtains the imaging data of the target part, from which a three-dimensional image of the target part can be generated and displayed; a user such as a doctor can input a surgical path based on the displayed three-dimensional image, so that the surgical robot performs surgery on the target part along the path indicated by the user. Through the technical solution of this embodiment, the doctor can control the surgical robot through the user interface, which improves surgical precision, simplifies the control mode, and reduces operating errors.
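The flow of steps S1 to S4 can be wired together from the sketches given earlier; every component object and method name in this end-to-end sketch is an illustrative assumption, not an interface defined by the present disclosure:

    def run_procedure(imager, positioner, arm, ui, pts_M, pts_R, pts_O):
        slices, z_coords = imager.scan()                    # S1: imaging data
        positioner.locate()                                 # S2: track target in frame O
        cloud_M = slices_to_point_cloud(slices, z_coords)   # 3D image in frame M
        T_RM = solve_transform(pts_R, pts_M)                # S3: register R -> M
        T_OM = solve_transform(pts_O, pts_M)                #     register O -> M
        path_M = ui.show_and_get_path(cloud_M)              # S4: surgeon draws the path
        path_R, path_O = transform_path(path_M, T_RM, T_OM)
        arm.follow(path_O)                                  # operate at the target part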
The target part is an object that can be treated by the surgical device, such as a bone that needs setting or a lesion that needs removal; it may lie inside the patient's body or on the outer surface, including but not limited to bones, joints and internal organs. Before surgery, detailed information about the target part is required: the imaging device 11 can scan the target part to obtain its imaging data, which specifically may include multiple two-dimensional cross-sectional images of the target part. The imaging device 11 may be, without limitation, an X-ray device, an ultrasound device, a computed tomography device or a positron emission tomography device. The imaging device 11 continuously scans the target part to acquire multiple two-dimensional cross-sectional images and their spatial coordinates, and the controller 14 spatially reconstructs the target part in the first coordinate system from the imaging data, establishing the geometric configuration of the target part in the first coordinate system, so that the doctor can learn the details of the target part and plan the surgical path accordingly.
In a specific embodiment, as shown in FIG. 2, the imaging device 11 may include:
a base; and
a C-shaped arm 100 arranged on the base, the two ends of the C-shaped arm 100 being respectively provided with a ray emitting part 103 and a ray receiving part 104; as shown in FIG. 3, when the target part 302 is scanned, the target part 302 lies between the ray emitting part 103 and the ray receiving part 104.
The step of obtaining the imaging data of the target part includes:
the ray emitting part 103 emits rays; and
the ray receiving part 104 receives the rays passing through the target part and generates the imaging data from the received ray information.
During the spatial reconstruction, the controller 14 obtains the region of interest in one of the multiple two-dimensional cross-sectional images and determines its boundary; repeats the above steps until the boundaries of the regions of interest of all two-dimensional cross-sectional images are obtained; and performs geometric construction from the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part.
As shown in FIG. 5, the process of generating the three-dimensional image specifically includes the following steps:
S11. The controller receives the imaging data and obtains the region of interest in one of the two-dimensional cross-sectional images.
Besides the target part, a two-dimensional cross-sectional image may also contain image information of other parts, whereas the surgery only requires information about the target part. Therefore, the region of interest requiring surgery can first be separated out of the two-dimensional cross-sectional image, which lets the doctor understand the condition of the target part more intuitively without being misled by images of other parts, and also reduces the amount of image data to be processed.
Specifically, the region of interest requiring surgery in the two-dimensional cross-sectional image can be determined manually by the doctor, or selected with the aid of a neural network.
S12. Determine the boundary of the region of interest.
After the region of interest requiring surgery is determined manually by the doctor or selected with the aid of a neural network, its boundary needs to be determined; specifically, the boundary must be determined for the region of interest in every two-dimensional cross-sectional image.
S13. Judge whether all two-dimensional cross-sectional images have been processed; if yes, go to S14; if no, go to S15.
S14. Perform geometric construction from the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part.
Specifically, the controller 14 may transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point cloud in the first coordinate system, and generate the three-dimensional image of the target part from the point cloud.
S15. Process the next two-dimensional cross-sectional image and return to step S11.
As shown in FIG. 6, generating the three-dimensional image of the target part from the point cloud specifically includes the following steps:
S21. Obtain the region of interest in a two-dimensional cross-sectional image.
S22. Find a marker.
Specifically, after the region of interest requiring surgery is determined manually by the doctor or selected with the aid of a neural network, the Otsu algorithm can be used to find a marker of the region of interest, from which boundary detection of the region of interest can be carried out. Of course, the technical solution of the present disclosure is not limited to using the Otsu algorithm to find the marker; other algorithms may also be used to find the marker of the region of interest.
S23. Perform boundary detection of the region of interest.
Specifically, the boundary must be determined for the region of interest in every two-dimensional cross-sectional image, and this can be done with the Watershed algorithm. The Watershed algorithm, also known as watershed segmentation, is a morphological segmentation algorithm that imitates the immersion of a topographic map; in essence it segments an image by its regional characteristics, combining the advantages of edge detection and region growing, and yields contours that are single-pixel wide, connected, closed and accurately positioned. Of course, the technical solution of the present disclosure is not limited to using the Watershed algorithm to determine the boundary of the region of interest; other boundary detection algorithms may also be used.
S24. Store the boundary information.
The boundary information of the region of interest in every two-dimensional cross-sectional image is stored.
S25. Obtain the centroid of the boundary as the basis for processing the next two-dimensional cross-sectional image.
Since the two-dimensional cross-sectional images are all of the same target part and adjacent images differ little, the centroid of the boundary can serve as the basis for processing the next two-dimensional cross-sectional image, reducing the amount of data processing.
S26. Judge whether there is a next two-dimensional cross-sectional image; if yes, go to S21; if no, go to S27.
S27. Transform the boundary information into a point cloud in the first coordinate system.
S28. Generate the three-dimensional image of the target part from the point cloud.
To position the target part, the positioning device 12 of this embodiment may include:
a positioning marker attached to or arranged close to the target part; and
at least two optical emitting devices at different positions, configured to emit specific light.
Positioning the target part includes:
the positioning unit receives the specific light reflected by the positioning marker and determines, from the received specific light, the spatial position of the positioning marker in the third coordinate system.
The target part can be positioned through the spatial position of the positioning marker and the positional relationship between the positioning marker and the target part, so as to obtain the spatial position of the target part in the third coordinate system.
Since the target part may lie inside the patient's body, the specific light is preferably light that can penetrate human skin, such as infrared light. Specifically, the optical emitting devices may be at least two infrared probes located obliquely above the target part and capable of emitting infrared light; the positioning marker may be an infrared-reflective positioning ball, although it is not limited thereto; and the positioning unit may be two optical cameras located obliquely above the target part. The optical emitting devices emit infrared light, the positioning marker reflects it, the positioning unit receives the reflected infrared light, and the position of the positioning marker can be obtained accurately by triangulation. The positioning marker needs to be attached to or arranged close to the target part in advance, for example mounted on the patient's bone that requires surgery.
In this embodiment, the robotic arm 15 can move with five or six degrees of freedom. The robotic arm 15 may include: a rotating part, carrying the surgical device and rotating the surgical device about at least one of two rotation axes; and a moving part, which moves the rotating part along at least one of three axes. The robotic arm 15 can be attached to the operating table for use.
Specifically, the moving part may include:
a first-direction driving part, moving along a first axis direction;
a second-direction driving part, connected to the first-direction driving part and moving along a second axis direction; and
a third-direction driving part, connected to the second-direction driving part and moving along a third axis direction;
wherein the first axis direction is perpendicular to the second axis direction, the second axis direction is perpendicular to the third axis direction, and the first axis direction is perpendicular to the third axis direction.
The rotating part includes:
a first rotation driving part, having one end connected to the third-direction driving part and rotating about a first rotation axis; and
a second rotation driving part, having one end connected to the first rotation driving part and rotating about a second rotation axis, the surgical device being attached to the second rotation driving part.
The step of performing surgery on the target part includes:
the controller, through the moving part, moves the rotating part along at least one of the three axes; and
the controller, through the rotating part, rotates the surgical device about at least one of the two rotation axes.
During surgery, the controller 14 can display the three-dimensional image of the target part in the user interface; the doctor can study the displayed three-dimensional image and determine the surgical path by drawing lines on it. The surgical robot system of this embodiment should further include at least one display connected to the controller 14; the display receives the three-dimensional image from the controller 14 and presents it in the user interface. When the surgical robot system includes multiple displays, one of them can be used to show the user interface while the others show the scene.
After the doctor determines the surgical path, the controller 14 can control the robotic arm 15 to perform surgery along it. However, three different coordinate systems exist in the surgical robot system of this embodiment: the first coordinate system M of the imaging data, the second coordinate system R of the robotic arm 15, and the third coordinate system O of the positioning device 12. The surgical path determined by the doctor is based on the first coordinate system, so the coordinate transformation relationships between the different coordinate systems must still be obtained and the three coordinate systems aligned; only then can the surgical path planned by the doctor on the three-dimensional image be passed to the robotic arm 15, and the robotic arm 15 operate at the corresponding position of the target part.
When establishing the coordinate transformation relationships between the different coordinate systems, the vector space directions of the three coordinate systems can first be unified. This is because the positioning unit of the positioning device 12 is located above the whole system, so its XYZ directions obviously differ from those of the imaging device 11 located at the same height on the side of the system; likewise, the XYZ directions of the positioning device 12 differ from those of the robotic arm 15. As shown below, the vector space directions of the three coordinate systems can be unified by transposing the coordinate matrices, for example by an axis realignment of the form

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = A \begin{bmatrix} x \\ y \\ z \end{bmatrix},$$

where $A$ is a permutation/sign matrix mapping the axes of each device's frame onto a common orientation.
A three-dimensional configuration can be determined by multiple coordinate points, and coordinate system conversion can be achieved on this principle. The coordinate system processor 13 may select at least four first reference points from the first coordinate system, determine first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values thereof in the second coordinate system, and compute, from the first coordinate values and the second coordinate values, a first conversion function characterizing the first coordinate transformation relationship; and select at least four second reference points from the second coordinate system, determine third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values thereof in the third coordinate system, and compute, from the third coordinate values and the fourth coordinate values, a second conversion function characterizing the second coordinate transformation relationship.
In a specific example, the reference point coordinates in the second coordinate system are

$$R = \begin{bmatrix} x_1^R & x_2^R & x_3^R & x_4^R \\ y_1^R & y_2^R & y_3^R & y_4^R \\ z_1^R & z_2^R & z_3^R & z_4^R \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

the reference point coordinates in the first coordinate system are

$$M = \begin{bmatrix} x_1^M & x_2^M & x_3^M & x_4^M \\ y_1^M & y_2^M & y_3^M & y_4^M \\ z_1^M & z_2^M & z_3^M & z_4^M \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

and the reference point coordinates in the third coordinate system are

$$O = \begin{bmatrix} x_1^O & x_2^O & x_3^O & x_4^O \\ y_1^O & y_2^O & y_3^O & y_4^O \\ z_1^O & z_2^O & z_3^O & z_4^O \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

each column being one reference point in homogeneous coordinates. The conversion between coordinate system O and coordinate system M can be realized first, and then the conversion between coordinate system M and coordinate system R, so that two conversion functions are obtained:

$$M = T_R^M R, \qquad M = T_O^M O,$$

where $T_R^M$ is the conversion function between coordinate system R and coordinate system M, and $T_O^M$ is the conversion function between coordinate system O and coordinate system M, each a $4 \times 4$ homogeneous transformation matrix. The parameters in $T_R^M$ and $T_O^M$ in the above conversion functions can be solved by Gaussian elimination; after solving, the conversion function $T_R^M$ between coordinate system R and coordinate system M and the conversion function $T_O^M$ between coordinate system O and coordinate system M are obtained. From these two conversion functions, the conversion function between coordinate system R and coordinate system O can also be obtained:

$$T_R^O = \left( T_O^M \right)^{-1} T_R^M,$$

so that the coordinate transformation relationship between any two coordinate systems can be obtained.
In this way, after the user inputs the surgical path based on the first coordinate system through the user interface, the controller 14 can transform the surgical path into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm 15 to perform surgery on the target part according to the second coordinate trajectory.
Through the technical solution of this embodiment, the doctor can control the surgical robot system to accurately perform a variety of precise operations on focal organs that cannot be performed manually, with the advantages of small trauma, low infectivity and flexible surgical methods.
An embodiment of the present disclosure further provides a control device of a surgical robot system, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the control method of the surgical robot system described above.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the control method of the surgical robot system described above.
In the method embodiments of the present disclosure, the sequence numbers of the steps do not limit their order; for those of ordinary skill in the art, changes to the order of the steps made without creative effort also fall within the protection scope of the present disclosure.
It should be noted that the embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments can be referred to mutually, and each embodiment focuses on its differences from the other embodiments. In particular, the method embodiments are described relatively simply because they are basically similar to the product embodiments, and relevant details can be found in the description of the product embodiments.
Unless otherwise defined, technical or scientific terms used in the present disclosure shall have the ordinary meaning understood by a person of ordinary skill in the art to which the present disclosure belongs. "First", "second" and similar words used in the present disclosure do not denote any order, quantity or importance, but are only used to distinguish different components. "Comprise", "include" and similar words mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. "Connect", "connected" and similar words are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", etc. are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may also change accordingly.
It can be understood that when an element such as a layer, film, region or substrate is referred to as being "on" or "under" another element, the element can be "directly" on or under the other element, or intermediate elements may be present.
In the description of the above implementations, specific features, structures, materials or characteristics may be combined in a suitable manner in any one or more embodiments or examples.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto; any changes or substitutions readily conceivable by any person skilled in the art within the technical scope disclosed by the present disclosure shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (19)

  1. A surgical robot system, comprising:
    an imaging device, configured to scan a target part to be operated on to obtain imaging data of the target part;
    a robotic arm, a front end of which carries a surgical device;
    a positioning device, configured to position the target part;
    a coordinate system processor, configured to generate and store a first coordinate transformation relationship that transforms coordinates from a first coordinate system of the imaging data into a second coordinate system of the robotic arm, and to generate and store a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm into a third coordinate system of the positioning device; and
    a controller, configured to generate a three-dimensional image of the target part in the first coordinate system from the imaging data, display the three-dimensional image through a user interface, receive a surgical path input by a user, and control the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship and the second coordinate transformation relationship.
  2. The surgical robot system according to claim 1, wherein the controller is specifically configured to transform the surgical path input by the user through the user interface into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transform the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and control the robotic arm to perform surgery on the target part according to the second coordinate trajectory.
  3. The surgical robot system according to claim 1, wherein the imaging data comprises multiple two-dimensional cross-sectional images of the target part, and the controller is specifically configured to obtain a region of interest in one of the two-dimensional cross-sectional images and determine a boundary of the region of interest; repeat the above steps until the boundaries of the regions of interest of all two-dimensional cross-sectional images are obtained; and perform geometric construction from the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part.
  4. The surgical robot system according to claim 3, wherein the controller is specifically configured to transform the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point cloud in the first coordinate system, and generate the three-dimensional image of the target part from the point cloud.
  5. The surgical robot system according to claim 1, wherein the coordinate system processor is specifically configured to select at least four first reference points from the first coordinate system, determine first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values thereof in the second coordinate system, and compute, from the first coordinate values and the second coordinate values, a first conversion function characterizing the first coordinate transformation relationship; and to select at least four second reference points from the second coordinate system, determine third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values thereof in the third coordinate system, and compute, from the third coordinate values and the fourth coordinate values, a second conversion function characterizing the second coordinate transformation relationship.
  6. The surgical robot system according to claim 1, wherein the imaging device comprises:
    a base; and
    a C-shaped arm arranged on the base, two ends of the C-shaped arm being respectively provided with a ray emitting part and a ray receiving part, the target part lying between the ray emitting part and the ray receiving part when the target part is scanned.
  7. The surgical robot system according to claim 1, wherein the positioning device comprises:
    a positioning marker attached to or arranged close to the target part;
    at least two optical emitting devices at different positions, configured to emit specific light; and
    a positioning unit, configured to receive the specific light reflected by the positioning marker and determine, from the received specific light, a spatial position of the positioning marker in the third coordinate system.
  8. The surgical robot system according to claim 1, wherein the robotic arm comprises:
    a rotating part, carrying the surgical device and rotating the surgical device about at least one of two rotation axes; and
    a moving part, which moves the rotating part along at least one of three axes.
  9. The surgical robot system according to claim 8, wherein the moving part comprises:
    a first-direction driving part, moving along a first axis direction;
    a second-direction driving part, connected to the first-direction driving part and moving along a second axis direction; and
    a third-direction driving part, connected to the second-direction driving part and moving along a third axis direction;
    and the rotating part comprises:
    a first rotation driving part, having one end connected to the third-direction driving part and rotating about a first rotation axis; and
    a second rotation driving part, having one end connected to the first rotation driving part and rotating about a second rotation axis, the surgical device being attached to the second rotation driving part.
  10. A control method of a surgical robot system, comprising:
    scanning, by an imaging device, a target part to be operated on to obtain imaging data of the target part;
    positioning, by a positioning device, the target part;
    generating and storing, by a coordinate system processor, a first coordinate transformation relationship that transforms coordinates from a first coordinate system of the imaging data into a second coordinate system of a robotic arm, and generating and storing a second coordinate transformation relationship that transforms coordinates from the second coordinate system of the robotic arm into a third coordinate system of the positioning device; and
    generating, by a controller, a three-dimensional image of the target part in the first coordinate system from the imaging data, displaying the three-dimensional image through a user interface, receiving a surgical path input by a user, and controlling the robotic arm to perform surgery on the target part according to the surgical path, the first coordinate transformation relationship and the second coordinate transformation relationship.
  11. The control method of a surgical robot system according to claim 10, wherein controlling the robotic arm to perform surgery on the target part comprises:
    transforming, by the controller, the surgical path input by the user through the user interface into a first coordinate trajectory in the second coordinate system according to the first coordinate transformation relationship, transforming the first coordinate trajectory into a second coordinate trajectory in the third coordinate system according to the second coordinate transformation relationship, and controlling the robotic arm to perform surgery on the target part according to the second coordinate trajectory.
  12. The control method of a surgical robot system according to claim 10, wherein the imaging data comprises multiple two-dimensional cross-sectional images of the target part, and generating the three-dimensional image comprises:
    obtaining, by the controller, a region of interest in one of the two-dimensional cross-sectional images and determining a boundary of the region of interest; repeating the above steps until the boundaries of the regions of interest of all two-dimensional cross-sectional images are obtained; and performing geometric construction from the boundaries of the regions of interest of all two-dimensional cross-sectional images to obtain the three-dimensional image of the target part.
  13. The control method of a surgical robot system according to claim 12, wherein generating the three-dimensional image of the target part in the first coordinate system comprises:
    transforming, by the controller, the boundary information of the regions of interest of all two-dimensional cross-sectional images into a point cloud in the first coordinate system, and generating the three-dimensional image of the target part from the point cloud.
  14. The control method of a surgical robot system according to claim 10, wherein generating the first coordinate transformation relationship and the second coordinate transformation relationship comprises:
    selecting, by the coordinate system processor, at least four first reference points from the first coordinate system, determining first coordinate values of the at least four first reference points in the first coordinate system and second coordinate values thereof in the second coordinate system, and computing, from the first coordinate values and the second coordinate values, a first conversion function characterizing the first coordinate transformation relationship; and selecting at least four second reference points from the second coordinate system, determining third coordinate values of the at least four second reference points in the second coordinate system and fourth coordinate values thereof in the third coordinate system, and computing, from the third coordinate values and the fourth coordinate values, a second conversion function characterizing the second coordinate transformation relationship.
  15. The control method of a surgical robot system according to claim 10, wherein the imaging device comprises: a base; and a C-shaped arm arranged on the base, two ends of the C-shaped arm being respectively provided with a ray emitting part and a ray receiving part, the target part lying between the ray emitting part and the ray receiving part when the target part is scanned; and obtaining the imaging data of the target part comprises:
    emitting rays by the ray emitting part; and
    receiving, by the ray receiving part, the rays passing through the target part, and generating the imaging data from the received ray information.
  16. The control method of a surgical robot system according to claim 10, wherein the positioning device comprises: a positioning marker attached to or arranged close to the target part; and at least two optical emitting devices at different positions, configured to emit specific light; and positioning the target part comprises:
    receiving, by a positioning unit, the specific light reflected by the positioning marker, and determining, from the received specific light, a spatial position of the positioning marker in the third coordinate system.
  17. The control method of a surgical robot system according to claim 10, wherein the robotic arm comprises: a rotating part, carrying a surgical device and rotating the surgical device about at least one of two rotation axes; and a moving part, which moves the rotating part along at least one of three axes;
    and performing surgery on the target part comprises:
    moving, by the controller through the moving part, the rotating part along at least one of the three axes; and
    rotating, by the controller through the rotating part, the surgical device about at least one of the two rotation axes.
  18. A control device of a surgical robot system, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the control method of a surgical robot system according to any one of claims 10 to 17.
  19. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the control method of a surgical robot system according to any one of claims 10 to 17.
PCT/CN2019/097032 2019-07-22 2019-07-22 Surgical robot system and control method therefor WO2021012142A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201980001114.9A 2019-07-22 2019-07-22 Surgical robot system and control method therefor
PCT/CN2019/097032 2019-07-22 2019-07-22 Surgical robot system and control method therefor
CN201921446786.4U 2019-07-22 2019-09-02 Surgical robot and surgical robot system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/097032 2019-07-22 2019-07-22 Surgical robot system and control method therefor

Publications (1)

Publication Number Publication Date
WO2021012142A1 true WO2021012142A1 (zh) 2021-01-28

Family

ID=74192418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/097032 2019-07-22 2019-07-22 Surgical robot system and control method therefor

Country Status (2)

Country Link
CN (1) CN112543623A (zh)
WO (1) WO2021012142A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023141800A1 (en) * 2022-01-26 2023-08-03 Warsaw Orthopedic, Inc. Mobile x-ray positioning system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6351659B1 (en) * 1995-09-28 2002-02-26 Brainlab Med. Computersysteme Gmbh Neuro-navigation system
CN104083217A (zh) * 2014-07-03 2014-10-08 北京天智航医疗科技股份有限公司 Surgical positioning device and method, and robotic surgical system
CN104799933A (zh) * 2015-03-18 2015-07-29 清华大学 Motion compensation method of a surgical robot for positioning and guidance in orthopedic surgery
CN107468351A (zh) * 2016-06-08 2017-12-15 北京天智航医疗科技股份有限公司 Surgical positioning device, positioning system and positioning method
CN108272502A (zh) * 2017-12-29 2018-07-13 战跃福 Ablation needle guiding operation method and system guided by CT three-dimensional imaging

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10004764A1 (de) * 2000-02-03 2001-08-09 Philips Corp Intellectual Pty Method for determining the position of a medical instrument
US20170258535A1 (en) * 2012-06-21 2017-09-14 Globus Medical, Inc. Surgical robotic automation with tracking markers
KR102668586B1 (ko) * 2012-08-03 2024-05-28 스트리커 코포레이션 Systems and methods for robotic surgery
CN107645924B (zh) * 2015-04-15 2021-04-20 莫比乌斯成像公司 Integrated medical imaging and surgical robotic system
KR101848027B1 (ko) * 2016-08-16 2018-04-12 주식회사 고영테크놀러지 Surgical robot system for stereotactic surgery and method for controlling a stereotactic surgical robot
CN107028659B (zh) * 2017-01-23 2023-11-28 新博医疗技术有限公司 Surgical navigation system and navigation method under the guidance of CT images
CN107970060A (zh) * 2018-01-11 2018-05-01 上海联影医疗科技有限公司 Surgical robot system and control method thereof

Also Published As

Publication number Publication date
CN112543623A (zh) 2021-03-23

Similar Documents

Publication Publication Date Title
US20240050156A1 (en) Surgical Systems And Methods For Providing Surgical Guidance With A Head-Mounted Device
CN110051436B (zh) 自动化协同工作组件及其在手术器械中的应用
EP2811889B1 (en) Invisible bifurcation detection within vessel tree images
US8073528B2 (en) Tool tracking systems, methods and computer products for image guided surgery
US20210059762A1 (en) Motion compensation platform for image guided percutaneous access to bodily organs and structures
US8108072B2 (en) Methods and systems for robotic instrument tool tracking with adaptive fusion of kinematics information and image information
Falk et al. Cardio navigation: planning, simulation, and augmented reality in robotic assisted endoscopic bypass grafting
US20220378526A1 (en) Robotic positioning of a device
US20200261155A1 (en) Image based robot guidance
US20090088773A1 (en) Methods of locating and tracking robotic instruments in robotic surgical systems
JP7399982B2 (ja) 手術中の3次元可視化
Zhan et al. Autonomous tissue scanning under free-form motion for intraoperative tissue characterisation
EP3328308B1 (en) Efficient positioning of a mechatronic arm
WO2021012142A1 (zh) 手术机器人***及其控制方法
Chen et al. Video-guided calibration of an augmented reality mobile C-arm
Patlan-Rosales et al. Strain estimation of moving tissue based on automatic motion compensation by ultrasound visual servoing
WO2023141800A1 (en) Mobile x-ray positioning system
Sun et al. Development of a Novel Hand-eye Coordination Algorithm for Robot Assisted Minimally Invasive Surgery
US12004821B2 (en) Systems, methods, and devices for generating a hybrid image
US20230240659A1 (en) Systems, methods, and devices for tracking one or more objects
US20230240790A1 (en) Systems, methods, and devices for providing an augmented display
US20230281869A1 (en) Systems, methods, and devices for reconstructing a three-dimensional representation
Wang et al. Navigational Augmented Reality for Robotic Drilling
Chen et al. Surgical Navigation System Design for Surgery Aided Diagnosis Platform
WO2023286048A2 (en) Systems, devices, and methods for identifying and locating a region of interest

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938451

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19938451

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 06.02.2023)
