CN114305695A - Movement guiding method and system, readable storage medium and surgical robot system

Info

Publication number: CN114305695A (publication date 2022-04-12)
Granted publication: CN114305695B (grant date 2023-12-26)
Application number: CN202111481113.4A
Authority: CN (China)
Prior art keywords: target, target area, movement guidance method, area
Other languages: Chinese (zh)
Inventor: 何超 (He Chao); the other inventors have requested that their names not be disclosed
Current assignee: Shanghai Microport Medbot Group Co Ltd
Original assignee: Shanghai Microport Medbot Group Co Ltd
Priority and filing date: 2021-12-06; application filed by Shanghai Microport Medbot Group Co Ltd
Related application: PCT/CN2022/137021, published as WO2023104055A1
Legal status: Granted; Active

Landscapes

  • Manipulator (AREA)

Abstract

The invention provides a movement guidance method and system, a readable storage medium, and a surgical robot system. The movement guidance method comprises the following steps: acquiring a virtual model of a first target object; and dividing a first target area related to the first target object according to the virtual model of the first target object and a preset first area division criterion, and displaying the first target area as an overlay in the real scene, so as to guide a moving object to move outside the boundary of the first target area. The movement guidance method can be applied to operations performed by the surgical robot system to guide a moving object through the operating room, preventing the moving object from straying into the first target area and interfering with the operation.

Description

Movement guiding method and system, readable storage medium and surgical robot system
Technical Field
The invention relates to the technical field of medical equipment, in particular to a movement guiding method and system, a readable storage medium and a surgical robot system.
Background
Surgical robots offer accurate positioning, stable operation, high dexterity, a large working range, and resistance to radiation and infection, and are therefore widely applied in many kinds of surgery. A surgical robot can help the surgeon position the surgical site accurately, minimize surgical trauma, improve the precision and quality of diagnosis and surgical treatment, improve surgical safety, shorten treatment time, and reduce medical cost; surgical robotics has accordingly become a new field of robot applications in recent years.
When a surgical robot is used to perform an operation, at least one medical staff member is needed to control or assist the robot, and at least one more is needed for other auxiliary work, such as handing over surgical instruments. In the prior art, a medical staff member performing such auxiliary work often strays close to the operation area because that area cannot be clearly identified, which interferes with the operation.
Disclosure of Invention
The invention aims to provide a movement guidance method and system, a readable storage medium, and a surgical robot system that help medical staff accurately identify the operation area, so that staff performing other auxiliary work keep outside that area during robot-assisted surgery and interference with the operation is reduced or even avoided.
In order to achieve the above object, the present invention provides a movement guidance method, comprising the steps of:
acquiring a virtual model of a first target object; and
dividing a first target area related to the first target object according to the virtual model of the first target object and a preset first area division criterion, and displaying the first target area as an overlay in the real scene, so as to guide a moving object to move outside the boundary of the first target area.
Optionally, the movement guidance method further comprises: planning a moving path of the moving object according to the first target area, so as to guide the moving object to move along the moving path; the moving path is located outside the boundary of the first target area.
Optionally, the movement guidance method further comprises: judging whether the moving object is at a safe position, and if not, prompting adjustment of the position of the moving object and/or the first target object.
Optionally, the step of judging whether the moving object is at a safe position comprises:
acquiring a virtual model of the moving object;
dividing a second target area related to the moving object according to the virtual model of the moving object and a preset second area division criterion; and
judging whether the second target area communicates with the first target area; if so, the moving object is not at a safe position, and if not, the moving object is at a safe position.
Optionally, the movement guidance method further comprises:
acquiring a virtual model of a second target object; and
dividing a third target area related to the second target object according to the virtual model of the second target object and a preset third area division criterion, and displaying the third target area as an overlay in the real scene, so as to guide the moving object to move outside the boundary of the third target area.
Optionally, the movement guidance method further comprises:
planning a moving path of the moving object according to the first target area and the third target area, so as to guide the moving object to move along the moving path; the moving path is located outside the boundaries of both the first target area and the third target area.
Optionally, the movement guidance method further comprises: displaying the moving path as an overlay in the real scene.
Optionally, the movement guidance method further comprises: judging whether the moving object is in a safe state, and if not, prompting adjustment of the position of at least one of the moving object, the first target object, and the second target object.
Optionally, the step of judging whether the moving object is in a safe state comprises:
acquiring a virtual model of the moving object;
dividing a second target area related to the moving object according to the virtual model of the moving object and a preset second area division criterion; and
judging whether the second target area communicates with at least one of the first target area and the third target area; if so, the moving object is in an unsafe state, and if not, the moving object is in a safe state.
Optionally, the movement guidance method further comprises:
acquiring position information of the first target object in real time;
updating the first target area according to the real-time position information of the first target object, and displaying the updated first target area as an overlay in the real scene; and
updating the moving path according to the updated first target area, and displaying the updated moving path as an overlay in the real scene.
Optionally, the movement guidance method further comprises: judging whether the moving object deviates from the moving path, and if so, updating the moving path and displaying the updated moving path as an overlay in the real scene.
Optionally, there are a plurality of first target objects; the step of dividing a first target area related to the first target object according to the virtual model of the first target object and a preset first area division criterion comprises:
setting the first area division criterion;
dividing a sub-region for each first target object according to the first area division criterion; and
judging whether all the sub-regions satisfy a preset condition; if so, taking the region occupied by all the sub-regions together as the first target area, and if not, generating prompt information to prompt an intervention operation.
Optionally, the prompt information includes a prompt to reset the first area division criterion and/or a prompt to adjust the position of at least part of the first target objects.
Optionally, the preset condition is that all the sub-regions form one communicating region.
Optionally, a mapping relationship between the coordinate system of the virtual model of the first target object and the coordinate system of an augmented reality device is established, so that the first target area is displayed as an overlay in the real scene by the augmented reality device.
To achieve the above object, the present invention also provides a computer-readable storage medium having a program stored thereon; when the program is executed, it performs the movement guidance method described in any of the preceding paragraphs.
To achieve the above object, the present invention also provides an electronic device comprising a processor and the computer-readable storage medium described above, the processor being configured to execute the program stored on the computer-readable storage medium.
To achieve the above object, the present invention further provides a movement guidance system, comprising a control unit, an augmented reality device, and a positioning device; the augmented reality device and the positioning device are each communicatively connected with the control unit, and the control unit is configured to perform the movement guidance method described in any of the preceding paragraphs.
To achieve the above object, the present invention further provides a surgical robot system, comprising a surgical operation device and the movement guidance system described above, the movement guidance system being configured to acquire a first target area in which the surgical operation device performs a surgical operation and to guide at least the moving object to move outside the boundary of the first target area.
Compared with the prior art, the movement guidance method and system, readable storage medium, and surgical robot system of the invention have the following advantages:
The movement guidance method comprises acquiring a virtual model of a first target object; dividing a first target area related to the first target object according to the virtual model and a preset first area division criterion; and displaying the first target area as an overlay in the real scene, so as to guide a moving object to move outside the boundary of the first target area. The movement guidance method can be applied to surgical operations performed by a surgical robot system comprising a surgical robot: when the method is executed, the first target area in which the surgical robot performs the operation can be obtained and displayed, helping medical staff accurately identify the first target area and guiding designated medical staff to move outside its boundary, so that entry into the first target area is avoided as far as possible and the influence on the surgical operation is reduced or even eliminated.
The movement guidance method also comprises planning a moving path of the moving object according to the first target area, the moving path being located outside the boundary of the first target area; that is, by guiding the moving object to move along the moving path, the movement guidance method reduces the possibility of the moving object entering the first target area, further improving the stability and safety of the operation.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic view of an application scenario of a surgical robotic system;
FIG. 2 is a schematic view of an application scenario of the surgical robotic system, wherein a first target area and a third target area are identified;
FIG. 3 is a flowchart of a method for applying the movement guidance system to a surgical robotic system and performing movement guidance according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a virtual model of a first target object based on a positioning device and a mapping relationship between a coordinate system of the first target object and a coordinate system of an augmented reality device in a movement guidance method according to an embodiment of the present invention;
fig. 5 is a schematic functional relationship diagram of a control device, a positioning device and an augmented reality device in an implementation process of a movement guidance method according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a principle that a binocular vision device is used to acquire image information of a first target object in the movement guidance method according to an embodiment of the present invention to establish a virtual model of the first target object;
FIG. 7 is a flowchart illustrating a first target area being divided in a movement guidance method performed by the movement guidance system according to an embodiment of the present invention;
FIG. 8 is a schematic view of an application scenario of the surgical robotic system, illustrating a first target area identified;
FIG. 9 is a schematic structural diagram of a positioning device and an augmented reality device of a mobile guidance system according to an embodiment of the invention;
FIG. 10 is a flowchart illustrating a method for determining whether a moving object is in a safe position according to an embodiment of the present invention;
FIG. 11 is a schematic illustration of an application scenario of the surgical robotic system showing a second target area, but without the surgical robot;
FIG. 12 is a flowchart illustrating a movement directing system directing a moving object to move along a moving path according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating a mobile object being guided by the mobile guidance system according to an embodiment of the present invention.
[Reference numerals]:
10-doctor end control device, 20-surgical operation device, 21-base, 22-mechanical arm, 30-image display device, 40-surgical platform, 50-instrument placement platform, 60-surgical instrument, 70-target, 110-first medical staff, 120-second medical staff, 130-second target region, 201-first sub-region, 202-second sub-region, 203-third sub-region, 204-fourth sub-region, 300-control unit, 400-positioning device, 500-augmented reality device, 601-fifth sub-region, 602-sixth sub-region.
Detailed Description
The embodiments of the present invention are described below with reference to specific examples, and other advantages and effects of the invention will be readily apparent to those skilled in the art from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details of this specification may be modified or changed in various respects without departing from the spirit and scope of the invention. It should be noted that the drawings provided in these embodiments only illustrate the basic idea of the invention; they show only the components related to the invention rather than the number, shape, and size of components in an actual implementation, where the type, quantity, and proportion of each component may be changed freely and the component layout may be more complicated.
Furthermore, each of the embodiments described below has one or more technical features, which does not mean that all the technical features of any one embodiment must be implemented together, or that the features of different embodiments cannot be implemented separately. In other words, those skilled in the art may, according to the disclosure of the present invention and according to design specifications or implementation requirements, selectively implement some or all of the features of any embodiment, or combine some or all of the features of multiple embodiments, thereby increasing flexibility in implementing the invention.
As used in this specification, the singular forms "a", "an", and "the" include plural referents, and "a plurality" includes two or more referents, unless the content clearly dictates otherwise. The term "or" is generally employed in its sense including "and/or" unless the content clearly dictates otherwise. The terms "mounted", "connected", and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intervening medium; or internal between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific situation.
To further clarify the objects, advantages and features of the present invention, a more particular description of the invention will be rendered by reference to the appended drawings. It is to be noted that the drawings are in a very simplified form and are not to precise scale, which is merely for the purpose of facilitating and distinctly claiming the embodiments of the present invention. The same or similar reference numbers in the drawings identify the same or similar elements.
Fig. 1 shows a schematic view of an application scenario of a surgical robot system. As shown in fig. 1, the surgical robot system may include a control end and an execution end, the control end including a surgeon console and a doctor-end control device 10 provided on the console. The execution end comprises a patient-end control device (not labeled in the figure), a surgical operation device 20, an image display device 30, a surgical platform 40, an instrument placement platform 50, and the like. The surgical platform 40 is adapted to carry the patient to be operated on, and the instrument placement platform 50 can be positioned adjacent to the surgical platform and is adapted to carry surgical instruments 60. The surgical operation device 20 is disposed on one side of the surgical platform 40 and comprises a base 21 and a plurality of (two or more) mechanical arms 22 disposed on the base 21. At least one of the mechanical arms 22 is used for mounting an image acquisition device (not shown in the figure), which is communicatively connected with the image display device 30; the image acquisition device enters the patient's body to acquire image information inside the body and sends it to the image display device 30 for display. At least one of the mechanical arms 22 is used for mounting a surgical instrument 60, which enters the patient's body to perform the surgical operation. A main manipulator (not shown) is disposed on the doctor-end control device 10, and a predetermined master-slave mapping relationship exists between the main manipulator and the mechanical arm 22 and surgical instrument 60 (that is, the surgical robot system is a master-slave mapping surgical robot system), so that the mechanical arm 22 and the surgical instrument 60 can follow the motion of the main manipulator to perform the surgical operation.
When a surgical operation is performed using the surgical robot system, as shown in fig. 2, a first medical staff member 110 is usually required to manipulate the main manipulator at the doctor-end control device 10 to control the mechanical arm 22 and the surgical instrument 60 on the surgical operation device 20 to perform the operation. At least one second medical staff member 120 is also required to be positioned adjacent to the surgical platform 40 and the instrument placement platform 50 to assist the surgical operation device 20. In addition, at least one third medical staff member (not shown) is needed for other auxiliary operations during the whole procedure and often has to move about the operating room. In operation, the surgical operation device 20 performs the surgical operation within a predetermined surgical field, so that the surgical platform 40, the instrument placement platform 50, and the second medical staff member 120 are located within the surgical field, while the third medical staff member is expected to move outside the surgical field to reduce interference with the operation.
Based on this, the core idea of the embodiment of the present invention is to provide a movement guidance method to provide guidance for the movement of the third medical staff. As shown in fig. 3, the movement guidance method includes the steps of:
step S1: a virtual model of the first target object is obtained to create a virtual operating room scene.
Step S2: dividing a first target area related to the first target object according to the virtual model of the first target object and a preset first area division criterion, and displaying the first target area as an overlay in the real scene.
Here, there are a plurality of first target objects, namely the surgical operation device 20, the surgical platform 40, the instrument placement platform 50, and the second medical staff member 120. The first target area is the aforementioned surgical field, and the moving object includes the aforementioned third medical staff member. In this way, the third medical staff member can visually perceive the boundary of the first target area and move outside it, which avoids the interference with the operation that would be caused by straying into the first target area and improves the safety and stability of the operation. The first area division criterion may be set by the medical staff according to actual conditions, and the first target area obtained by dividing according to this criterion may be configured to cover all the first target objects in the horizontal plane, forming the surgical field.
Preferably, with continued reference to fig. 3, the movement guidance method further includes step S3: planning a moving path of the moving object according to the first target area, so as to guide the moving object to move along the moving path; the moving path is located outside the boundary of the first target area. Further preferably, the movement guidance method further includes step S4: displaying the moving path as an overlay in the real scene, which provides more intuitive guidance for the moving object.
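To make step S3 concrete, the sketch below plans a piecewise-linear moving path that stays outside the first target area. It is a minimal sketch under simplifying assumptions not taken from the patent: the first target area is approximated as a union of circles in the horizontal plane, segment clearance is tested by sampling, and all names (`outside`, `segment_clear`, `plan_path`) are illustrative.

```python
import math

def outside(p, circles, margin=0.0):
    """True if point p lies outside every circular sub-region (plus margin)."""
    return all(math.dist(p, (cx, cy)) > r + margin for cx, cy, r in circles)

def segment_clear(a, b, circles, step=0.05):
    """Sample the segment a->b and require every sample to be outside the area."""
    n = max(1, int(math.dist(a, b) / step))
    return all(outside((a[0] + (b[0] - a[0]) * i / n,
                        a[1] + (b[1] - a[1]) * i / n), circles)
               for i in range(n + 1))

def plan_path(start, goal, circles, margin=0.3):
    """Straight line if clear; otherwise detour via a waypoint pushed out
    past a blocking circle, perpendicular to the start-goal direction."""
    if segment_clear(start, goal, circles):
        return [start, goal]
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    L = math.hypot(dx, dy)
    if L == 0:
        return [start]
    nx, ny = -dy / L, dx / L          # unit normal to the travel direction
    for cx, cy, r in circles:
        for side in (1, -1):
            w = (cx + side * nx * (r + margin), cy + side * ny * (r + margin))
            if outside(w, circles) and segment_clear(start, w, circles) \
               and segment_clear(w, goal, circles):
                return [start, w, goal]
    return None  # no single-waypoint detour found; a real planner would search further
```

For instance, `plan_path((0.0, 0.0), (10.0, 0.0), [(5.0, 0.0, 2.0)])` routes around the circle via a single waypoint; a real planner would fall back to a graph or sampling search when one waypoint is not enough.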
The movement guidance method can be implemented by a control unit 300 (as labeled in fig. 5). How the control unit is deployed is not limited, as long as it can realize the corresponding functions. Optionally, the control unit 300 is disposed entirely at the doctor-end control device 10 of the surgical robot system, or entirely at the patient-end control device, or partly at the doctor-end control device 10 and partly at the patient-end control device; or the control unit 300 is at least partly independent of both the doctor-end control device 10 and the patient-end control device. When the control unit 300 is at least partly independent of the doctor-end control device 10 and the patient-end control device, its independent portion is communicatively connected with both.
Next, the movement guidance method will be described in more detail.
In the embodiment of the present invention, the manner of acquiring the virtual model of the first target object in step S1 is not particularly limited. In some implementations, the control unit 300 may build the virtual model of the first target object from image information of the first target object acquired before surgery by a positioning device 400, for example a binocular vision device (as shown in fig. 4) that is communicatively connected with the control unit 300 as shown in fig. 5.
A binocular vision device generally captures two digital images of the first target object from different angles with two cameras at the same time, recovers the three-dimensional geometric information of the first target object based on the parallax principle, and thereby obtains the position of the first target object. Fig. 6 schematically shows the principle of three-dimensional measurement with the binocular vision device. Referring to fig. 6, a point P(x, y, z) is a feature point on the first target object, O_l is the optical center of the left camera, and O_r is the optical center of the right camera. If the left camera alone observes point P, its image point on the left camera is located at P_l, but the three-dimensional position of P cannot be recovered from P_l alone; in fact, any point on the line O_lP_l has the image point P_l on the left camera, so from P_l it is only known that the spatial point P lies on the line O_lP_l. Similarly, from the right camera it is only known that the spatial point P lies on the line O_rP_r. Thus, when the two cameras capture the same feature point P(x, y, z) of the first target object at the same time, the intersection of the line O_lP_l and the line O_rP_r, i.e., the position of the spatial point P, i.e., the three-dimensional coordinates of the spatial point P, is uniquely determined.
Further, let the distance between the optical centers of the two cameras (the baseline) be b, and let the focal lengths of both cameras be f. When the two cameras capture the same feature point P(x, y, z) of the first target object at the same time, with P_l = (x_l, y_l) and P_r = (x_r, y_r) the corresponding image points, the similar-triangle principle gives the following relations:

$$\frac{x_l}{f}=\frac{x}{z},\qquad \frac{x_r}{f}=\frac{x-b}{z},\qquad \frac{y_l}{f}=\frac{y_r}{f}=\frac{y}{z}$$

and, further:

$$x=\frac{b\,x_l}{x_l-x_r},\qquad y=\frac{b\,y_l}{x_l-x_r},\qquad z=\frac{b\,f}{x_l-x_r}$$
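The relations above translate directly into code. Below is a minimal sketch of rectified-stereo triangulation, assuming image coordinates are expressed in the same units as the focal length f; the function name and error handling are illustrative, not from the patent:

```python
def triangulate(xl, yl, xr, f, b):
    """Recover P = (x, y, z) in the camera frame F1 from a rectified stereo pair.

    xl, yl: image coordinates of P in the left camera
    xr:     x image coordinate of P in the right camera (yr == yl after rectification)
    f:      focal length; b: baseline between the two optical centers
    """
    d = xl - xr  # disparity; positive for a point in front of the rig
    if d <= 0:
        raise ValueError("non-positive disparity: point cannot be triangulated")
    return b * xl / d, b * yl / d, f * b / d
```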
thereby, the coordinate system F of the characteristic point P on the first target object in the binocular vision device can be obtained1The three-dimensional coordinate information is obtained, and then the coordinate system F of the binocular vision device is obtained1With the world coordinate system F0The mapping relation between the characteristic points P and the world coordinate system F is obtained0And (4) the following three-dimensional coordinate information. Based on the above, the world coordinate system F of other feature points on the first target object is obtained0Such that the control unit 300 is in a world coordinate system F according to all feature points of the first target object0And performing model reconstruction on the first target object by using the three-dimensional coordinate information to obtain a virtual model of the first target object.
The mapping relationship between the coordinate system F1 of the binocular vision device and the world coordinate system F0 can be established by a rotation matrix R and a translation vector t:

$$\begin{bmatrix} X_0 \\ 1 \end{bmatrix} = M_1 \begin{bmatrix} X_1 \\ 1 \end{bmatrix}, \qquad M_1=\begin{bmatrix} R & t \\ 0_r & 1 \end{bmatrix}$$

where R is a 3 × 3 matrix, t is a 3 × 1 vector, 0_r is the row vector (0, 0, 0), and M_1 is a 4 × 4 matrix, also called the camera extrinsic parameter matrix. The extrinsic parameter matrix can be obtained by existing camera calibration methods, which are not described in detail here. Furthermore, the first target object is provided with a target 70 (as labeled in fig. 5) that can be recognized by the binocular vision device.
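In code, M_1 is simply a 4 × 4 homogeneous transform. Below is a minimal numpy sketch, assuming R and t have already been obtained by camera calibration; the helper names are illustrative:

```python
import numpy as np

def extrinsic(R, t):
    """Assemble M1 = [[R, t], [0_r, 1]] from a 3x3 rotation and a 3-vector translation."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def f1_to_world(M1, p):
    """Map a point from the binocular-device frame F1 to the world frame F0."""
    return (M1 @ np.append(p, 1.0))[:3]
```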
In other implementations, the control unit 300 stores a virtual model of the first target object in advance, so that the virtual model is directly called when the control unit 300 executes the movement guidance method.
As described above, in the surgical operation performed by the surgical robot system, the number of the first target objects is plural, and in this case, the first target area may be divided in the step S2 by the method shown in fig. 7:
step S21: setting the first area division standard.
Step S22: sub-regions with respect to the respective first target objects are respectively divided according to the first region division criterion.
Step S23: and judging whether all the sub-regions meet preset conditions, if so, executing the step S24, and if not, executing the step S25.
Step S24: and judging the areas where all the sub-areas are located as the first target area.
Step S25: first prompt information is generated to prompt the medical staff to execute intervention operation.
Wherein, in the step S21, the first region dividing criterion may include a plurality of sub-criteria, each of the sub-criteria corresponds to one of the first target objects and divides one sub-region with respect to the corresponding first target object. In an embodiment of the present invention, the sub-standard corresponding to the surgical operation device 20 is a first sub-standard, the corresponding sub-region is a first sub-region 201 (as shown in fig. 2, 8 and 13), the sub-standard corresponding to the surgical platform 40 is a second sub-standard, the corresponding sub-region is a second sub-region 202 (as shown in fig. 2, 8, 11 and 13), the sub-standard corresponding to the instrument placement platform 50 is a third sub-standard, the corresponding sub-region is a third sub-region 203, the sub-standard corresponding to the second healthcare worker 120 is a fourth sub-standard, and the corresponding sub-region is a fourth sub-region 204. Wherein, the first sub-area 201 is, for example, a first circular area on a horizontal plane, a center of the first circular area is a designated point on the surgical operation device 20, the designated point may be designated by a medical staff, and coordinates of the designated point may be obtained according to a virtual model of the surgical operation device 20, then the first sub-criterion may be a radius of the first circular area, and the first circular area should cover a maximum movement range of the base 21 and the mechanical arm 22 on the horizontal plane. The second sub-area 202 may be a first rectangular area covering the surgical platform 40 in a horizontal plane, and the second sub-criterion is a minimum distance from a boundary of the first rectangular area to the surgical platform 40. The third sub-region 203 may be a second rectangular region covering the instrument placement platform 50 in a horizontal plane, the third sub-criterion being a minimum distance of a boundary of the second rectangular region to the instrument placement platform 50. The fourth sub-area 204 may be a second circular area covering the second healthcare worker 120 in a horizontal plane, the center of the second circular area may be the center of the second healthcare worker 120, and the fourth sub-criterion is the radius of the second circular area. It is to be understood that the sub-standards and sub-regions are described herein by way of example only and are not to be considered as limiting in practice.
In step S23, the preset condition is that all the sub-regions form a communicating region. Here, "communicating" means that the boundaries of adjacent regions at least partly coincide, or that adjacent regions at least partly overlap. That is, when all the sub-regions communicate with one another, the communicating region they form is taken as the first target area, i.e., the region enclosed by the solid lines in figs. 2, 8, 11, and 13. In those figures the solid lines and the dashed lines are drawn slightly apart so that the first target area is shown clearly; in practice they coincide.
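Below is a minimal sketch of the check in step S23, modelling the sub-regions as the circles and axis-aligned rectangles described above and taking "communicating" to mean touching or overlapping, per the definition above. The shape classes and the union-find test are illustrative assumptions, not an implementation from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Circle:
    cx: float
    cy: float
    r: float

@dataclass
class Rect:  # axis-aligned, in the horizontal plane
    x0: float
    y0: float
    x1: float
    y1: float

def intersects(a, b):
    """True if two sub-regions overlap or their boundaries touch."""
    if isinstance(a, Rect) and isinstance(b, Circle):
        a, b = b, a
    if isinstance(a, Circle) and isinstance(b, Circle):
        return math.hypot(a.cx - b.cx, a.cy - b.cy) <= a.r + b.r
    if isinstance(a, Circle) and isinstance(b, Rect):
        px = min(max(a.cx, b.x0), b.x1)  # closest rectangle point to the center
        py = min(max(a.cy, b.y0), b.y1)
        return math.hypot(a.cx - px, a.cy - py) <= a.r
    return a.x0 <= b.x1 and b.x0 <= a.x1 and a.y0 <= b.y1 and b.y0 <= a.y1

def all_connected(regions):
    """Step S23: do the sub-regions form one communicating region? (union-find)"""
    parent = list(range(len(regions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            if intersects(regions[i], regions[j]):
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(regions))}) == 1
```

If `all_connected` returns False, the method would generate the first prompt information of step S25 instead of accepting the region.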
Referring back to fig. 7, in step S25, the first prompt message includes a first sub-prompt message and/or a second sub-prompt message. The first sub-hint information may, for example, prompt the re-setting of the first region division criterion, where the re-setting of the division criterion of the first region may be the re-setting of the shape of each sub-region or the adjustment of the corresponding value of the sub-criterion. The second sub-hint information is, for example, a hint to adjust the position of at least part of the first target object. When the medical staff selects to execute the first sub-prompt message, the control unit 300 returns to and loops from the step S21 to the step S25 until all the sub-areas satisfy the preset condition. When the medical staff selects to execute the second sub-prompt message, the control unit 300 returns to and loops from step S22 to step S25 until all the sub-areas satisfy the preset condition.
Referring back to fig. 4 and 5, the control unit 300 is further communicatively connected to an augmented reality device 500, so that in step S2, the control unit 300 sends the first target area to the augmented reality device 500, and can superimpose and display the first target area and the moving path in a real scene through the augmented reality device 500. The augmented reality device 500 is, for example, AR glasses.
In step S2, the control unit 300 establishes a mapping relationship between the coordinate system F2 of the augmented reality device 500 and the coordinate system of the virtual model of the first target object, thereby unifying the first target area and the moving path into the coordinate system F2 of the augmented reality device 500, so that the first target area and the moving path can be displayed as overlays in the real scene under the coordinate system F2 of the augmented reality device 500.
Specifically, the control unit 300 first establishes the mapping relationship between the coordinate system F2 of the augmented reality device 500 and the coordinate system of the first target object, and obtains the mapping relationship between the coordinate system of the first target object and the coordinate system of its virtual model; it then establishes the mapping relationship between the coordinate system F2 of the augmented reality device 500 and the coordinate system of the virtual model of the first target object from these two mappings.
In the embodiment of the present invention, establishing the mapping relationship between the coordinate system F2 of the augmented reality device 500 and the coordinate system of the first target object actually means separately establishing the mapping relationships between the coordinate system F2 of the augmented reality device 500 and the coordinate system F3 of the surgical operation device 20, the coordinate system F4 of the surgical platform 40, the coordinate system F5 of the instrument placement platform 50, and the coordinate system F6 of the second medical staff member 120. Since the establishing method is the same in each case, these are not distinguished here, and the mapping relationship between the coordinate system F2 of the augmented reality device 500 and "the coordinate system of the first target object" is referred to directly.
With continued reference to figs. 4 and 5 in conjunction with fig. 9, when the positioning device 400 (i.e., the binocular vision device) is integrated on the augmented reality device 500, the mechanical positions of the positioning device 400 and the augmented reality device 500 are fixed relative to each other. The control unit 300 then establishes the mapping relationship between the coordinate system F1 of the positioning device 400 and the coordinate system F2 of the augmented reality device 500 according to their mechanical positions. Alternatively, when the positioning device 400 and the augmented reality device 500 are separate from each other (not shown in the figures), the augmented reality device 500 is provided with a target 70 (as shown in fig. 5), and the control unit 300 acquires image information of the target 70 on the augmented reality device 500 through the positioning device 400 to obtain the three-dimensional coordinate information of the augmented reality device 500 in the world coordinate system F0, and then establishes the mapping relationship between the coordinate system F1 of the positioning device 400 and the coordinate system F2 of the augmented reality device 500 from that coordinate information.
When the positioning device 400 is used to acquire the image information of the first target object and the three-dimensional coordinate information of the first target object in the world coordinate system F0 is obtained, the mapping relationship between the coordinate system F1 of the positioning device 400 and the coordinate system of the first target object can also be obtained. The control unit 300 then establishes the mapping relationship between the coordinate system F2 of the augmented reality device 500 and the coordinate system of the first target object from the mapping relationship between the coordinate system F1 of the positioning device 400 and the coordinate system F2 of the augmented reality device 500, together with the mapping relationship between the coordinate system F1 of the positioning device 400 and the coordinate system of the first target object.
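Chaining the two mappings is a single matrix product once each mapping is written as a 4 × 4 homogeneous matrix. Below is a minimal sketch; the `T_a_b` naming, read as "frame b expressed in frame a", is an illustrative convention, not notation from the patent:

```python
import numpy as np

def invert(T):
    """Invert a rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def compose(T_a_b, T_b_c):
    """Chain homogeneous transforms: (a <- b) @ (b <- c) gives (a <- c)."""
    return T_a_b @ T_b_c

# F2 <- object = (F2 <- F1) @ (F1 <- object); a model point can then be
# mapped into the augmented reality device's frame for overlay display:
# T_f2_obj = compose(T_f2_f1, T_f1_obj)
# p_f2 = (T_f2_obj @ np.append(p_obj, 1.0))[:3]
```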
When step S1 is executed, if the control unit 300 establishes the virtual model of the first target object from the image information of the first target object collected by the positioning device 400, the coordinate system of the virtual model of the first target object is the coordinate system of the first target object itself. If the control unit 300 directly calls a pre-stored virtual model of the first target object, the control unit 300 performs a registration operation between the virtual model of the first target object and the real-time image information of the first target object acquired by the positioning device 400, and thus establishes the mapping relationship between the coordinate system of the first target object and the coordinate system of its virtual model.
In step S3, the control unit 300 plans the moving path in any suitable manner, as long as the moving path is outside the boundary of the first target area. In step S4, the augmented reality device 500 receives the moving path and displays it as an overlay in the real scene based on the mapping relationship between the coordinate system of the virtual model of the first target object and the coordinate system F2 of the augmented reality device 500.
Note that the assignment of operations to the above steps is not fixed; for example, step S2 may only divide the first target area, with the overlay display of the first target area in the real scene performed in step S4.
The moving object is expected to be at a safe position before the operation begins. Optionally, as shown in fig. 10, in some embodiments the movement guidance method further includes step S5: judging whether the moving object is at a safe position. When the moving object is determined to be at a safe position, it can be confirmed that the first target area divided in step S2 is reasonable and that the third medical staff member is standing in a proper place, and steps S3 and S4 can then be performed. When the moving object is determined not to be at a safe position, step S6 is performed: generating second prompt information to prompt adjustment of the position of the moving object and/or at least part of the first target objects. After the position of the moving object and/or the first target object is adjusted, the movement guidance method returns to perform at least part of step S2 (e.g., steps S22 to S25, or steps S21 to S25 as shown in fig. 7) to re-divide the first target area; that is, step S5 may be performed after step S2 and before step S3.
Referring to fig. 10, step S5 may specifically include:
Step S51: acquiring a virtual model of the moving object.
Step S52: dividing a second target area 130 (shown in fig. 11) related to the moving object according to the virtual model of the moving object and a preset second area division criterion.
Step S53: judging whether the second target area 130 communicates with the first target area; if so, the moving object is not at a safe position, and if not, the moving object is at a safe position.
In step S51, the virtual model of the moving object may be obtained in any suitable manner, for example in a manner similar to that used for the virtual model of the first target object, which is not repeated here.
In step S52, the second target area 130 may be a third circular area covering the moving object in the horizontal plane, with its center at the center of the moving object; the second area division criterion may then be the radius of the third circular area. Preferably, the second target area 130 is also displayed as an overlay in the real scene, which may be implemented in the same way as described in the foregoing steps.
In step S53, the second target area 130 and the first target area are considered to communicate when they intersect, i.e., when the second target area 130 at least partly coincides with the first target area, or the boundary of the second target area 130 at least partly coincides with the boundary of the first target area.
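Steps S51 to S53 thus reduce to one geometric predicate: the circular second target area must not touch any sub-region of the first target area. Below is a minimal sketch under the same shape assumptions as before ("communicating" meaning touching or overlapping); all names are illustrative:

```python
import math

def circle_touches_circle(c1, c2):
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    return math.hypot(x1 - x2, y1 - y2) <= r1 + r2

def circle_touches_rect(c, rect):
    (cx, cy, r), (x0, y0, x1, y1) = c, rect
    px, py = min(max(cx, x0), x1), min(max(cy, y0), y1)
    return math.hypot(cx - px, cy - py) <= r

def is_safe_position(second_area, circles, rects):
    """Step S53: safe iff the second target area communicates with no sub-region."""
    return (not any(circle_touches_circle(second_area, c) for c in circles)
            and not any(circle_touches_rect(second_area, rc) for rc in rects))
```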
The moving object is also expected to move along the movement path to be continuously located outside the boundary of the first target area, but in practice the moving object may deviate from the movement path for various reasons. Optionally, as shown in fig. 12, in some embodiments, the movement guidance method further performs step S7 during the movement of the moving object: determining whether the moving object deviates from the moving path, if not, enabling the augmented reality device 500 to continuously display the moving path and guide the moving object to move, and if so, executing the steps S8 and S9. The step S8 is: and updating the moving path. The step S9 is: and displaying the updated moving path in a real scene in an overlapping manner so as to guide the moving object to move along the updated moving path. In this embodiment of the present invention, it may be determined whether the moving object deviates from the moving path according to whether the second target region 130 is communicated with the first target region during the moving process of the moving object, in other words, when the second target region 130 is not communicated with the first target region (as shown in fig. 13, an arrow in the figure indicates a moving path), the moving object is considered not to deviate from the moving path. When the second target area 130 is connected to the first target area (not shown), the moving object is considered to deviate from the moving path.
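Step S7 can then run as a periodic check during movement, re-planning whenever the moving object is no longer clear of the first target area. In this minimal sketch, `locate`, `is_safe`, `replan`, and `show` are assumed callables standing in for the positioning device, the predicate above, the planner of step S3, and the overlay display of the augmented reality device:

```python
import time

def guide(locate, is_safe, replan, show, path, period_s=0.1, timeout_s=60.0):
    """Monitor loop for step S7: while the moving object follows `path`,
    re-plan and re-display whenever it is no longer at a safe position."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        pos = locate()        # real-time position from the positioning device
        if not is_safe(pos):  # deviation: second area communicates with the first
            path = replan(pos)  # step S8: update the moving path
            show(path)          # step S9: overlay the updated path in the real scene
        time.sleep(period_s)
    return path
```

The loop implements steps S7 to S9: detecting a deviation triggers a path update and a refreshed overlay.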
Furthermore, during the operation, the position of the first target object may change; for example, the second medical staff member 120 may move, so that the first target area as initially divided is no longer suitable. In view of this, in some embodiments the movement guidance method further includes step S10, step S11, and step S12 (not shown). Step S10 is: acquiring the position information of the first target object in real time. Step S11 is: updating the first target area and the moving path according to the real-time position information of the first target object. Step S12 is: displaying the updated first target area and the updated moving path as overlays in the real scene. Step S10 can be performed throughout the operation, while steps S11 and S12 are performed after the position of at least one first target object has changed.
Further, for the master-slave mapping surgical robot system, the movement guidance method may also take into account the positional relationship between the moving object and the image display device 30, the doctor-end control device 10, and the first medical staff member 110, so as to minimize interference with the image display device 30 and the first medical staff member 110. To this end, step S1 may further include: acquiring a virtual model of a second target object, where the second target object comprises the image display device 30, the doctor-end control device 10, and the first medical staff member 110. Correspondingly, step S2 may further include dividing a third target area related to the second target object according to the virtual model of the second target object and a preset third area division criterion, and displaying the third target area as an overlay in the real scene (not shown in figs. 3, 7, 10, and 12). As shown in fig. 2, the third target area includes a fifth sub-area 601 and a sixth sub-area 602; the fifth sub-area 601 may be a third rectangular area covering the image display device 30 in the horizontal plane, and the sixth sub-area 602 may be a fourth rectangular area covering the control end and the first medical staff member 110 in the horizontal plane. In this case, the third area division criterion includes the minimum distance from the boundary of the third rectangular area to the image display device 30 and the minimum distance from the boundary of the fourth rectangular area to the control end and the first medical staff member 110. Step S3 then plans the moving path according to both the first target area and the third target area, so that the moving path is located outside the boundaries of both. And in step S53, when the second target area 130 communicates with at least one of the first target area and the third target area, the moving object is judged not to be at a safe position; when the second target area 130 communicates with neither the first target area nor the third target area, the moving object is judged to be at a safe position.
Further, an embodiment of the present invention also provides a computer-readable storage medium, on which a program is stored, and when the program is executed, the method for moving guidance is executed.
Further, an electronic device is provided in an embodiment of the present invention, where the electronic device includes a processor and the computer-readable storage medium, and the processor is configured to execute the program stored in the computer-readable storage medium.
Still further, referring back to fig. 5, an embodiment of the present invention also provides a movement guidance system, which includes the control unit 300 and the augmented reality device 500, and preferably further includes the positioning device 400. The positioning device 400 and the targets 70 disposed on the first target object and the second target object form a positioning and tracking system; when the positioning device 400 acquires image information of the moving object, the moving object is also provided with a target 70 and forms part of the positioning and tracking system. The control unit is communicatively connected with the positioning device 400 and the augmented reality device 500 and is configured to perform the various operations described above, such as obtaining a virtual model of the first target object (whether by establishing it from image information of the first target object or by calling a pre-stored model), dividing the first target area, planning the moving path, and sending the first target area and the moving path to the augmented reality device 500. The moving object, such as the third medical staff member, may wear the augmented reality device 500 and move outside the boundary of the first target area as directed by the movement guidance method; it is understood that when a second target object is considered, the third medical staff member moves outside the boundaries of both the first target area and the third target area.
In this embodiment, the positioning device 400 is specifically a binocular vision device as described above, and each target 70 is an optical target that can be recognized by the binocular vision device. The binocular vision device acquires the position and contour information of the corresponding object by recognizing its target 70: it recognizes the target 70 on the first target object to acquire the position and contour information of the first target object, recognizes the target 70 on the second target object to acquire the position and contour information of the second target object, and recognizes the target 70 on the moving object to acquire the position and contour information of the moving object. The control unit 300 then receives the position and contour information of the corresponding objects, performs data processing, establishes the virtual operating room environment, and accordingly directs the moving object to move outside the boundary of the first target area, or outside the boundaries of the first target area and the third target area.
Furthermore, an embodiment of the present invention further provides a surgical robot system, where the surgical robot system includes a surgical operation device 20 and the movement guidance system, and the control unit of the movement guidance system is configured to acquire a first target area where the surgical operation device 20 performs a surgical operation, and guide the moving object to move at least outside a boundary of the first target area.
Although the present invention is disclosed above, it is not limited thereto. Various modifications and alterations of this invention may be made by those skilled in the art without departing from the spirit and scope of this invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (19)

1. A movement guidance method, characterized by comprising the steps of:
acquiring a virtual model of a first target object;
and dividing a first target area related to the first target object according to the virtual model of the first target object and a preset first area division standard, and enabling the first target area to be displayed in a superposition mode in a real scene so as to guide a moving object to move outside the boundary of the first target area.
2. The movement guidance method according to claim 1, characterized in that the movement guidance method further comprises: planning a moving path of the moving object according to the first target area so as to guide the moving object to move along the moving path; the movement path is located outside the boundary of the first target area.
3. The movement guidance method according to claim 2, characterized in that the movement guidance method further comprises: and judging whether the mobile object is at a safe position, if not, prompting to adjust the position of the mobile object and/or the first target object.
4. The movement guidance method according to claim 3, wherein the step of determining whether the moving object is in a safe position includes:
acquiring a virtual model of the mobile object;
dividing a second target area related to the moving object according to the virtual model of the moving object and a preset second area division standard;
and judging whether the second target area is communicated with the first target area, if so, judging that the mobile object is not at a safe position, and if not, judging that the mobile object is at the safe position.
5. The movement guidance method according to claim 1, characterized in that the movement guidance method further comprises:
acquiring a virtual model of a second target object;
and dividing a third target area related to the second target object according to the virtual model of the second target object and a preset third area division standard, and enabling the third target area to be displayed in a real scene in an overlapped mode so as to guide the moving object to move outside the boundary of the third target area.
6. The movement guidance method according to claim 5, characterized in that the movement guidance method further comprises:
planning a moving path of the moving object according to the first target area and the third target area so as to guide the moving object to move along the moving path; the movement path is located outside the boundary of the first target area and outside the boundary of the third target area.
7. The movement guidance method according to claim 2 or 6, characterized in that the movement guidance method further comprises: and displaying the moving path in a superposition manner in a real scene.
8. The movement guidance method according to claim 6, characterized in that the movement guidance method further comprises: and judging whether the mobile object is in a safe state, if not, prompting to adjust the position of at least one of the mobile object, the first target object and the second target object.
9. The movement guidance method according to claim 8, wherein the step of determining whether the moving object is in a safe state includes:
acquiring a virtual model of the mobile object;
dividing a second target area related to the moving object according to the virtual model of the moving object and a preset second area division standard;
and judging whether the second target area is communicated with at least one of the first target area and the third target area, if so, judging that the mobile object is in a non-safety state, and if not, judging that the mobile object is in a safety state.
10. The movement guidance method according to claim 2 or 6, characterized in that the movement guidance method further comprises:
acquiring the position information of the first target object in real time;
updating the first target area according to the real-time position information of the first target object, and displaying the updated first target area in a real scene in an overlapping manner;
and updating the moving path according to the updated first target area, and displaying the updated moving path in a real scene in an overlapping manner.
11. The movement guidance method according to claim 2 or 6, characterized in that the movement guidance method further comprises: and judging whether the moving object deviates from the moving path, if so, updating the moving path, and displaying the updated moving path in a real scene in an overlapping manner.
12. The movement guidance method according to claim 1, characterized in that there are a plurality of the first target objects, and the step of dividing a first target area with respect to the first target object according to the virtual model of the first target object and a preset first area division standard comprises:
setting the first area division standard;
dividing a sub-region with respect to each of the first target objects according to the first area division standard;
and determining whether all the sub-regions satisfy a preset condition: if so, taking the region formed by all the sub-regions as the first target area; if not, generating prompt information to prompt an intervention operation.
13. The movement guidance method according to claim 12, wherein the prompt information includes a prompt to reset the first area division standard and/or a prompt to adjust the position of at least some of the first target objects.
14. The movement guidance method according to claim 12 or 13, characterized in that the preset condition includes that all the sub-regions together form a connected region.
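For claims 12 to 14, the preset condition that all sub-regions form one connected region can be checked with a union-find pass over pairwise touching sub-regions. The sketch reuses the illustrative Box type from the claim-4 sketch and is, again, only one possible realization.

```python
# Connectivity test for claims 12-14 over assumed Box sub-regions.
def all_subregions_connected(subregions):
    """True when every sub-region reaches every other via touching boxes."""
    n = len(subregions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if subregions[i].connected_with(subregions[j]):
                parent[find(i)] = find(j)  # union touching sub-regions
    return len({find(i) for i in range(n)}) == 1

# If this returns False, claim 12 calls for prompt information: reset the
# division standard or reposition some first target objects (claim 13).
```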
15. The movement guidance method according to claim 1, wherein a mapping relationship is established between a coordinate system of the virtual model of the first target object and a coordinate system of an augmented reality device, so that the first target area is displayed by the augmented reality device overlaid on the real scene.
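The mapping relationship of claim 15 is conventionally a rigid transform between the virtual-model frame and the AR-device frame obtained by registration. A minimal numpy sketch, assuming the 4×4 homogeneous transform `T_ar_model` is already known from such a registration:

```python
import numpy as np

def map_to_ar_frame(points_model: np.ndarray, T_ar_model: np.ndarray) -> np.ndarray:
    """Map an Nx3 array of points from the virtual-model frame to the AR frame."""
    homo = np.hstack([points_model, np.ones((len(points_model), 1))])
    return (T_ar_model @ homo.T).T[:, :3]

# Example: a registration that offsets the model frame by 0.5 m along x.
T = np.eye(4)
T[0, 3] = 0.5
boundary = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(map_to_ar_frame(boundary, T))  # each point shifted by 0.5 in x
```

With the boundary points expressed in the AR-device frame, the device can render the first target area registered onto the real scene.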
16. A computer-readable storage medium on which a program is stored, wherein, when the program is executed, the movement guidance method according to any one of claims 1 to 15 is performed.
17. An electronic device comprising a processor and the computer-readable storage medium of claim 16, the processor being configured to execute the program stored on the computer-readable storage medium.
18. A movement guidance system, characterized by comprising a control unit, an augmented reality device and a positioning device, wherein the augmented reality device and the positioning device are each in communication connection with the control unit, and the control unit is configured to execute the movement guidance method according to any one of claims 1 to 15.
19. A surgical robot system comprising a surgical manipulation device and the movement guidance system of claim 18, the movement guidance system being configured to acquire a first target area for a surgical manipulation performed by the surgical manipulation device and to guide at least the moving object to move outside the boundary of the first target area.
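As a purely structural illustration of claims 18 and 19, the system can be read as a control unit wired to the augmented reality device and the positioning device; all class and method names below are assumptions, not interfaces specified by the disclosure.

```python
# Hypothetical composition for claims 18-19.
class MovementGuidanceSystem:
    """Claim 18: control unit in communication with an AR device and a
    positioning device, executing the movement guidance method."""

    def __init__(self, ar_device, positioning_device):
        self.ar = ar_device                # overlays areas and paths
        self.tracker = positioning_device  # supplies real-time poses

    def run(self, divide_area, planner, goal):
        # Reuses the hypothetical loop from the claims 10-11 sketch.
        guidance_loop(self.tracker, self.ar, divide_area, planner, goal)

class SurgicalRobotSystem:
    """Claim 19: a surgical manipulation device plus the guidance system."""

    def __init__(self, manipulation_device, guidance_system):
        self.manipulator = manipulation_device
        self.guidance = guidance_system
```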
CN202111481113.4A 2021-12-06 2021-12-06 Mobile guidance method and system, readable storage medium, and surgical robot system Active CN114305695B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111481113.4A CN114305695B (en) 2021-12-06 2021-12-06 Mobile guidance method and system, readable storage medium, and surgical robot system
PCT/CN2022/137021 WO2023104055A1 (en) 2021-12-06 2022-12-06 Safety protection method and system, readable storage medium, and surgical robot system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111481113.4A CN114305695B (en) 2021-12-06 2021-12-06 Mobile guidance method and system, readable storage medium, and surgical robot system

Publications (2)

Publication Number Publication Date
CN114305695A (en) 2022-04-12
CN114305695B CN114305695B (en) 2023-12-26

Family

ID=81048921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111481113.4A Active CN114305695B (en) 2021-12-06 2021-12-06 Mobile guidance method and system, readable storage medium, and surgical robot system

Country Status (1)

Country Link
CN (1) CN114305695B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023104055A1 (en) * 2021-12-06 2023-06-15 上海微创医疗机器人(集团)股份有限公司 Safety protection method and system, readable storage medium, and surgical robot system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106255471A (en) * 2014-02-05 2016-12-21 直观外科手术操作公司 System and method for dynamic virtual collision object
CN108368974A (en) * 2015-12-10 2018-08-03 史赛克公司 Tracking device for use in a navigation system and method of manufacturing the same
CN108472095A (en) * 2015-12-29 2018-08-31 皇家飞利浦有限公司 The system of virtual reality device, controller and method are used for robotic surgical
CN108917758A (en) * 2018-02-24 2018-11-30 石化盈科信息技术有限责任公司 A kind of navigation methods and systems based on AR
US20190105776A1 (en) * 2017-10-05 2019-04-11 Auris Health, Inc. Robotic system with indication of boundary for robotic arm
CN113456221A (en) * 2021-06-30 2021-10-01 上海微创医疗机器人(集团)股份有限公司 Positioning guide method and system of movable equipment and surgical robot system

Similar Documents

Publication Publication Date Title
US20230310092A1 (en) Systems and methods for surgical navigation
JP7086150B2 (en) Systems and methods for rendering on-screen identification of instruments in remote-controlled medical systems
JP6898030B2 (en) How to control a surgical robot for stereotactic surgery and a surgical robot for stereotactic surgery
CN107753105B (en) Surgical robot system for positioning operation and control method thereof
CN105025835B (en) System for arranging objects in an operating room in preparation for a surgical procedure
WO2019139931A1 (en) Guidance for placement of surgical ports
US11896441B2 (en) Systems and methods for measuring a distance using a stereoscopic endoscope
WO2022083372A1 (en) Surgical robot adjustment system and method, medium, and computer device
CN112672709A (en) System and method for tracking the position of a robotically-manipulated surgical instrument
CN113456221B (en) Positioning guiding method and system of movable equipment and surgical robot system
US20220401178A1 (en) Robotic surgical navigation using a proprioceptive digital surgical stereoscopic camera system
CN115363762A (en) Positioning method and device of surgical robot and computer equipment
CN114305695B (en) Mobile guidance method and system, readable storage medium, and surgical robot system
Megali et al. EndoCAS navigator platform: a common platform for computer and robotic assistance in minimally invasive surgery
CN117122414A (en) Active tracking type operation navigation system
JP4187830B2 (en) Medical image synthesizer
JP2003079616A (en) Detecting method of three-dimensional location of examination tool which is inserted in body region
JP2022526540A (en) Orthopedic fixation control and visualization
US20230248467A1 (en) Method of medical navigation
US20220031502A1 (en) Medical device for eye surgery
CN115798689A (en) Slice fusion method and device, electronic equipment and storage medium
EP4384985A1 (en) Systems and methods for depth-based measurement in a three-dimensional view
CN116350359A (en) Method for guiding position adjustment of image equipment, storage medium and medical system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant