CN116687564A - Surgical robot self-sensing navigation method, system, and device based on virtual reality

Info

Publication number
CN116687564A
CN116687564A (application CN202310582746.7A)
Authority
CN
China
Prior art keywords
surgical robot
target
dimensional model
navigation
surgical
Prior art date
Legal status
Granted
Application number
CN202310582746.7A
Other languages
Chinese (zh)
Other versions
CN116687564B (en)
Inventor
张逸凌
刘星宇
Current Assignee
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd filed Critical Longwood Valley Medtech Co Ltd
Priority to CN202310582746.7A priority Critical patent/CN116687564B/en
Priority claimed from CN202310582746.7A external-priority patent/CN116687564B/en
Publication of CN116687564A publication Critical patent/CN116687564A/en
Application granted granted Critical
Publication of CN116687564B publication Critical patent/CN116687564B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Manipulator (AREA)

Abstract

The application provides a surgical robot self-sensing navigation method, system, and device based on virtual reality, wherein the method comprises the following steps: acquiring image data captured by a depth camera on a surgical robot, and establishing a three-dimensional model of the operating room; generating a VR map scene based on the three-dimensional model of the operating room; identifying and positioning scene targets in the three-dimensional model of the operating room; planning an optimal path for the surgical robot to reach a preset end position according to the identified scene targets; and navigating the surgical robot and updating the VR map scene in real time based on the current position information of the surgical robot in the three-dimensional model. According to the application, specific information about the operating room is acquired through the depth camera mounted on the surgical robot, a three-dimensional model is generated from the acquired information, scene targets are identified, and navigation within the operating room is performed, which solves the problem of interference between the surgical robot and other equipment during navigation.

Description

Surgical robot self-sensing navigation method, system, and device based on virtual reality
Technical Field
The application relates to the technical field of medical instruments, and in particular to a surgical robot self-sensing navigation method, system, and device based on virtual reality.
Background
Surgical robots integrate many modern high-tech disciplines into a single system and are already used extensively in clinical practice. With the rapid development of medical robot technology, robotic surgical assistants have become widespread, which makes reliable navigation of surgical robots within the operating room an increasingly urgent requirement.
However, unlike conventional navigation scenarios, the ways in which a surgical robot can acquire information are restricted by the privacy of the operating room: specific information about the operating room cannot be collected in advance, which makes it difficult to avoid interference between the surgical robot and other equipment during navigation.
Disclosure of Invention
The application addresses the problem that interference between the surgical robot and other equipment during navigation is difficult to avoid.
To solve this problem, a first aspect of the present application provides a surgical robot self-sensing navigation method based on virtual reality, comprising:
acquiring image data captured by a depth camera on the surgical robot, and establishing a three-dimensional model of the operating room;
generating a VR map scene based on the three-dimensional model of the operating room;
identifying and positioning scene targets in the three-dimensional model of the operating room;
planning an optimal path for the surgical robot to reach a preset end position according to the identified scene targets;
based on the current position information of the surgical robot in the three-dimensional model, navigating the surgical robot and updating the VR map scene in real time.
The second aspect of the present application provides a surgical robot self-sensing navigation system based on virtual reality, comprising:
the three-dimensional modeling module is used for acquiring image data captured by a depth camera on the surgical robot and establishing a three-dimensional model of the operating room;
the map generation module is used for generating a VR map scene based on the three-dimensional model of the operating room;
the target identification module is used for identifying and positioning a scene target in the three-dimensional model of the operating room;
the path planning module is used for planning an optimal path of the surgical robot to a preset end position according to the identified scene target;
and the indoor navigation module is used for navigating the surgical robot and updating the VR map scene in real time based on the current position information of the surgical robot in the three-dimensional model.
A third aspect of the present application provides an electronic device comprising: a memory and a processor;
The memory is used for storing programs;
the processor, coupled to the memory, is configured to execute the program for:
acquiring image data captured by a depth camera on the surgical robot, and establishing a three-dimensional model of the operating room;
generating a VR map scene based on the three-dimensional model of the operating room;
identifying and positioning a scene target in the three-dimensional model of the operating room;
planning an optimal path of the surgical robot to a preset end position according to the identified scene target;
based on current position information of the surgical robot in the three-dimensional model, navigating the surgical robot and updating the VR map scene in real time.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program for execution by a processor to implement the virtual reality based surgical robot self-sensing navigation method described above.
According to the application, specific information about the operating room is acquired through the depth camera mounted on the surgical robot, a three-dimensional model is generated from the acquired information, scene targets are identified, and navigation within the operating room is performed, which solves the problem of interference between the surgical robot and other equipment during navigation.
According to the application, image information acquired by the depth camera is used to generate the three-dimensional model and the VR map, which solves the problem that the surgical robot cannot navigate from a pre-existing map of the operating room.
Drawings
FIG. 1 is a flow chart of a surgical robot self-sensing navigation method according to an embodiment of the present application;
FIG. 2 is a flow chart of three-dimensional modeling in a surgical robot self-sensing navigation method according to an embodiment of the present application;
FIG. 3 is a flow chart of a path planning in a self-aware navigation method of a surgical robot according to an embodiment of the present application;
FIG. 4 is a flow chart of an optimal planning in a self-sensing navigation method of a surgical robot according to an embodiment of the present application;
FIG. 5 is a flow chart of indoor navigation in a surgical robot self-aware navigation method according to an embodiment of the present application;
FIG. 6 is a flow chart of a navigation interaction in a surgical robot self-sensing navigation method according to an embodiment of the present application;
FIG. 7 is a flow chart of an interaction process in a surgical robot self-sensing navigation method according to an embodiment of the present application;
FIG. 8 is a flow chart of a surgical robot self-sensing navigation in combination with secondary navigation according to an embodiment of the present application;
FIG. 9 is a flowchart of secondary navigation of the surgical robot according to an embodiment of the present application;
FIG. 10 is a block diagram of a surgical robot self-sensing navigation system according to an embodiment of the present application;
FIG. 11 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order that the above objects, features and advantages of the application will be readily understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs.
Surgical robots integrate many modern high-tech disciplines into a single system and are already used extensively in clinical practice. With the rapid development of medical robot technology, robotic surgical assistants have become widespread, which makes reliable navigation of surgical robots within the operating room an increasingly urgent requirement.
However, the space inside an operating room is small and filled with precision surgical equipment, so a surgical robot moving through it can easily collide or interfere with that equipment during navigation.
In addition, to protect patient privacy, fixed cameras and similar observation equipment are not allowed in the operating room, so the surgical robot cannot rely on a pre-existing map when navigating, which greatly complicates its navigation.
To address these problems, the application provides a new virtual reality based self-sensing navigation scheme for surgical robots, in which the robot perceives three-dimensional information about the operating room through its own depth camera and navigates accordingly, solving the problem of interference between the surgical robot and other equipment during navigation.
For ease of understanding, the following terms that may be used are explained herein:
Depth camera: an RGB-D camera (also called a 3D camera, where D stands for depth) that acquires the distance from each object point to the camera; combined with the X and Y coordinates of the 2D image plane, the three-dimensional coordinates of each point can be calculated.
FPS (frames per second): an imaging term for the number of image frames transmitted per second, which can be understood as a refresh rate.
F-PointNet (Frustum PointNet): a multi-stage 3D object detection algorithm that first localizes objects with a 2D image detector and then regresses a 3D bounding box on the point cloud inside the corresponding viewing frustum. The algorithm mainly comprises three stages: frustum proposal, 3D instance segmentation, and 3D bounding box estimation.
NDI device: an optical tracking and positioning device; reflective marker balls are attached to the device to be tracked, and the NDI device locates it by optically capturing the real-time positions of the marker balls.
The embodiment of the application provides a surgical robot self-sensing navigation method based on virtual reality; the specific scheme of the method is shown in fig. 1-9. The method can be executed by a virtual reality based surgical robot self-sensing navigation system, which can be integrated in electronic equipment such as a computer, a server, a surgical trolley, a server cluster, or a data center. Referring to fig. 1, which shows a flowchart of a surgical robot self-sensing navigation method based on virtual reality according to one embodiment of the application, the method comprises the following steps:
S100, acquiring image data captured by a depth camera on the surgical robot, and establishing a three-dimensional model of the operating room;
In the application, the depth camera is mounted on the surgical robot. It can be mounted on the mechanical arm of the surgical robot, so that its pose and orientation can be adjusted; it can be mounted at the highest point of the surgical robot to obtain a better field of view; or it can be mounted at any other suitable position on the surgical robot, the specific position being determined by actual requirements.
The image data captured by the depth camera consists of a plurality of images or a multi-frame image data stream, from which a three-dimensional model of the operating room can be determined by stitching.
In the application, the surgical robot may circle the operating room in an obstacle-avoidance mode, acquiring multi-frame image data through the depth camera along the way, and the three-dimensional model of the operating room is then determined by stitching. Alternatively, the robot may remain at its current position and acquire multi-frame image data of the surroundings by changing the pose of the depth camera, and a three-dimensional model of the visible range from that position is determined by stitching. The robot may also circle the operating room in the obstacle-avoidance mode, pausing several times along the way and changing the pose of the depth camera at each pause to acquire multi-frame image data of the surroundings, which improves the accuracy of stitching the multi-frame image data.
S200, generating a VR map scene based on the three-dimensional model of the operating room;
Once the three-dimensional model of the operating room has been acquired, an origin and the viewing directions that take the origin as the viewpoint are set, and a VR map scene corresponding to the origin and its viewing directions is generated.
Specifically, this may be done as follows: with the origin as the viewing base point, divide the space around it into a plurality of viewing directions, and group mutually related viewing directions into a viewing sequence; for each viewing direction of the sequence, extract or render at least one original view corresponding to that direction from the three-dimensional model, obtaining a set of views; and stitch the original views in the set according to their viewing directions to obtain the corresponding VR map scene.
Other VR map scene generation methods may also be used, and are not described in detail in the present application.
There may be multiple origins, each with its own viewing directions; the specific number is set according to actual requirements and is not limited in the present application.
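As one concrete way to realize the view-direction splitting described above, the sketch below builds a world-to-camera extrinsic matrix for each of N viewing directions distributed evenly around a single origin; rendering the three-dimensional model with these extrinsics (using whatever renderer the system provides) and stitching the resulting views would yield one panoramic VR scene. The function names, the number of views, and the eye height are illustrative assumptions, not part of the patent.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """World-to-camera extrinsic that looks from `eye` towards `target` (z-up world)."""
    forward = np.asarray(target, float) - np.asarray(eye, float)
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    rotation = np.stack([right, true_up, -forward])   # camera looks along its -Z axis
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = rotation
    extrinsic[:3, 3] = -rotation @ np.asarray(eye, float)
    return extrinsic

def panorama_view_set(origin_xy, n_views=8, eye_height=1.6):
    """One group of viewing directions around a single origin (the viewing base point)."""
    eye = np.array([origin_xy[0], origin_xy[1], eye_height])
    views = []
    for k in range(n_views):
        yaw = 2.0 * np.pi * k / n_views
        target = eye + np.array([np.cos(yaw), np.sin(yaw), 0.0])
        views.append(look_at(eye, target))   # one extrinsic per original view to render
    return views
```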
S300, identifying and positioning scene targets in the three-dimensional model of the operating room;
The scene targets are all targets in the operating room that may interfere with the navigation or movement of the surgical robot; identifying and positioning them leads to a better navigation result.
In the application, scene targets can be identified by a pre-trained scene target recognition model.
S400, planning an optimal path for the surgical robot to reach a preset end position according to the identified scene targets;
S500, based on the current position information of the surgical robot in the three-dimensional model, navigating the surgical robot and updating the VR map scene in real time.
The VR map scene may be updated by regenerating it with the current position of the surgical robot as the origin and the forward direction of the surgical robot as the viewing direction; by regenerating it with the current position of the depth camera on the surgical robot as the origin and the forward direction of the depth camera as the viewing direction; or by regenerating it from a set position as the origin and a set direction as the viewing direction, in which case the generated VR map scene includes the surgical robot (if visible).
According to the application, specific information about the operating room is acquired through the depth camera mounted on the surgical robot, a three-dimensional model is generated from the acquired information, scene targets are identified, and navigation within the operating room is performed, which solves the problem of interference between the surgical robot and other equipment during navigation.
According to the application, image information acquired by the depth camera is used to generate the three-dimensional model and the VR map, which solves the problem that the surgical robot cannot navigate from a pre-existing map of the operating room.
Referring to fig. 2, in one embodiment, S100, acquiring image data captured by a depth camera on the surgical robot and building a three-dimensional model of the operating room, includes:
S110, acquiring multi-frame image data from the depth camera on the surgical robot, each frame of image data containing a color image and a depth image;
The depth camera may be a structured-light depth camera, a time-of-flight depth camera, or a binocular stereo depth camera, selected according to actual requirements; the application is not limited in this respect.
In the application, the depth images and the color images correspond one to one: each frame of depth image has a corresponding frame of color image, and together the depth image and the color image form one frame of image data.
In the present application, the depth image data stream format may be: resolution 640 x 480, frame rate 30 FPS; the color image data stream format may be: resolution 640 x 480, frame rate 30 FPS.
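As an illustration only, the snippet below requests exactly these stream formats and registers depth to color using the Intel RealSense SDK (pyrealsense2). The patent does not name a specific sensor or SDK, so the camera, library, and API calls here are assumptions; only the resolution and frame rate come from the text above.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 @ 30 FPS for both streams, matching the formats mentioned above
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)   # register the depth image to the color image (see S120)
try:
    frames = align.process(pipeline.wait_for_frames())
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
finally:
    pipeline.stop()
```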
S120, converting the color image and the depth image of each frame of image data into corresponding point cloud data, wherein each spatial point in the point cloud data carries position information and color information;
In the application, when converting the color image and the depth image into point cloud data, the color image and the depth image are first registered, and the registered depth image is then converted into point cloud data.
Registering the color image and the depth image of the depth camera means unifying their image coordinates to obtain a registered color image and a registered depth image. The coordinates may be unified by converting the image coordinate system of the depth image into that of the color image, or by converting the image coordinate system of the color image into that of the depth image; the details are not repeated in this disclosure.
After registration, each pixel of the registered depth image carries not only three-dimensional coordinates but also color information (RGB), i.e. the color image has been registered onto the depth image.
Converting the registered depth image into point cloud data is a coordinate transformation from the image coordinate system to the world coordinate system, constrained by the camera intrinsic parameters.
Because the pixels of the registered depth image contain position information and color information, the spatial points of the resulting point cloud likewise carry (transformed) position information and color information.
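A minimal sketch of this back-projection step is shown below: every valid pixel of a registered depth image is lifted to a 3-D point using the pinhole intrinsics and paired with its RGB value. The intrinsic values in the example are placeholders; a real system would read them from the camera calibration.

```python
import numpy as np

def depth_to_colored_cloud(depth_m, color_rgb, fx, fy, cx, cy):
    """Back-project a registered depth image (in metres) into a colored point cloud.

    depth_m   : (H, W) float array, 0 where depth is missing
    color_rgb : (H, W, 3) uint8 array registered to the depth image
    Returns an (N, 6) array of x, y, z, r, g, b values.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    z = depth_m
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                                    # drop pixels without a depth measurement
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = color_rgb[valid].astype(np.float64)
    return np.hstack([points, colors])

# Synthetic usage example: a flat surface 1.5 m in front of the camera.
depth = np.full((480, 640), 1.5)
color = np.zeros((480, 640, 3), dtype=np.uint8)
cloud = depth_to_colored_cloud(depth, color, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)   # (307200, 6)
```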
In one embodiment, the image data is preprocessed to remove extraneous information, noise, and distortion prior to converting the color image and depth image of the image data into corresponding point cloud data.
S130, registering and stitching the point cloud data of the plurality of image data to obtain stitched first point cloud data;
The point clouds of the plurality of image data share certain overlapping areas, and they are registered and stitched based on these overlaps to obtain the first point cloud data.
The registration and stitching of the point clouds may proceed as follows: coarsely align the point clouds pairwise, remove duplicated points from the coarsely aligned clouds, and then register and stitch the coarsely aligned clouds with duplicates removed.
The coarsely aligned point clouds may be registered with the ICP (Iterative Closest Point) algorithm (other registration algorithms may be chosen according to actual requirements), and the registered point clouds are then stitched together to form the first point cloud data.
In the application, registration is performed pairwise; after registration, one point cloud can be selected as the reference, and the remaining point clouds are transformed, directly or indirectly, into the coordinate system of the reference to obtain the first point cloud data.
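The sketch below illustrates this pairwise scheme with Open3D: each cloud is registered to its predecessor with point-to-point ICP and the transforms are chained so that every cloud ends up in the frame of the first (reference) cloud. The voxel size, correspondence distance, and the assumption that consecutive clouds are already roughly aligned are illustrative assumptions.

```python
import copy
import numpy as np
import open3d as o3d

def stitch_point_clouds(clouds, voxel=0.02, max_corr_dist=0.05):
    """Pairwise ICP registration and stitching into the frame of clouds[0]."""
    merged = copy.deepcopy(clouds[0])
    to_reference = np.eye(4)                       # maps the previous cloud into clouds[0]
    for prev, curr in zip(clouds[:-1], clouds[1:]):
        src = curr.voxel_down_sample(voxel)        # downsample to speed up ICP
        tgt = prev.voxel_down_sample(voxel)
        icp = o3d.pipelines.registration.registration_icp(
            src, tgt, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # icp.transformation maps curr into prev's frame; chain it into the reference frame
        to_reference = to_reference @ icp.transformation
        moved = copy.deepcopy(curr)
        moved.transform(to_reference)
        merged += moved
    return merged.voxel_down_sample(voxel)         # thin out points duplicated in the overlaps
```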
S140, modeling the stitched first point cloud data and determining the three-dimensional model of the operating room.
In the application, the stitched first point cloud data contains many duplicated points; before modeling, the first point cloud data is therefore reduced and the data of the overlapping areas is removed, which improves the speed and quality of the subsequent modeling.
In one embodiment, prior to modeling, the point cloud data is segmented, the segmented point cloud data is modeled separately, and the modeled segmented data is combined to obtain a three-dimensional model of the operating room. Thus, the complexity of mathematical expression of the three-dimensional model can be reduced, and the modeling effect can be improved.
In one embodiment, the scene targets include surgical personnel targets, a patient target, surgical robot targets, and medical equipment targets; and in S300, identifying and positioning the scene targets in the three-dimensional model of the operating room, the surgical personnel targets, the patient target, the surgical robot targets, and the medical equipment targets are identified and positioned with the F-PointNet algorithm.
In the application, the scene targets in an operating room are generally small: surgical personnel, the patient, surgical robots, and medical equipment are all relatively small in volume, which makes their identification complex and error-prone.
The surgical personnel targets are the doctors, nurses, and other staff directly or indirectly involved in the surgery, and may include the chief surgeon, assistant surgeons, the anesthesiologist, circulating nurses, scrub nurses, and so on. The patient target is the person to be operated on, who is usually already placed in the surgical position for convenience of the operation. The surgical robot targets are the surgical robots themselves; there may be several of them with different functions, so surgical robot targets can appear in the generated three-dimensional model. The medical equipment targets are the other devices in the operating room apart from the surgical robots; they are numerous, differ greatly from one another, and are comparatively difficult to identify.
Moreover, the medical equipment targets are varied, targets of the same class can differ substantially in appearance, and the scene inside the operating room is small in scale with short distances between objects.
In the application, the image data is converted into point cloud data and the identification is performed on the point cloud, exploiting the fact that three-dimensional features are more prominent in point cloud data and thereby increasing the accuracy of three-dimensional identification.
Scene targets are identified and positioned with the F-PointNet algorithm. On the one hand, F-PointNet accurately identifies small targets such as the scene targets in an operating room, addressing the complexity and low accuracy of scene target identification there; on the other hand, F-PointNet detects scene targets directly from the point cloud data rather than from the three-dimensional model, and reusing the first point cloud data further improves the identification accuracy.
Because the F-PointNet algorithm represents depth information with point cloud data and processes the point cloud directly, the prominence of three-dimensional features in point clouds increases the accuracy of three-dimensional identification; and because the first point cloud data can be reused, the amount of data and the resources occupied by identifying scene targets in the three-dimensional model are greatly reduced.
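For orientation only, the sketch below shows the overall structure of a frustum-based detector of this kind: a 2-D detector proposes boxes on the color image, the points that project into each box form a frustum, a segmentation step keeps the points belonging to the object, and a box is estimated from them. The three component functions are hypothetical stand-ins; in a real F-PointNet implementation they would be the trained 2-D detector, the 3-D instance segmentation network, and the amodal box estimation network.

```python
import numpy as np

def detect_2d(color_image):
    """Hypothetical 2-D detector: yields (class_name, (u0, v0, u1, v1)) per object."""
    return []                                   # placeholder

def segment_instance(frustum_points, class_name):
    """Hypothetical 3-D instance segmentation: mask of points on the object."""
    return np.ones(len(frustum_points), dtype=bool)

def estimate_box(object_points):
    """Hypothetical amodal box estimation: axis-aligned box from the object points."""
    return {"center": object_points.mean(axis=0),
            "size": np.ptp(object_points, axis=0),
            "heading": 0.0}

def detect_scene_targets(color_image, cloud_xyz, project_to_image):
    """Frustum pipeline: 2-D detection -> frustum cropping -> segmentation -> box estimation."""
    uv = project_to_image(cloud_xyz)            # (N, 2) pixel coordinates of the cloud points
    results = []
    for class_name, (u0, v0, u1, v1) in detect_2d(color_image):
        in_box = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
                  (uv[:, 1] >= v0) & (uv[:, 1] <= v1))   # points inside the viewing frustum
        frustum = cloud_xyz[in_box]
        if len(frustum) == 0:
            continue
        mask = segment_instance(frustum, class_name)
        results.append((class_name, estimate_box(frustum[mask])))
    return results
```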
Referring to fig. 3, in one embodiment, S400, planning an optimal path for the surgical robot to reach a preset end position according to the identified scene targets, includes:
S410, acquiring the position information of the patient target and the position constraint information of the surgical robot, and determining the preset end position of the surgical robot;
The position information of the patient target may be determined from the identification and positioning result of the patient target. Since the identification result of the patient target usually coincides closely with the operating table, the identification result of the operating table can be used in place of the patient target to determine this position information when the patient target itself is not identified.
In the application, to facilitate surgery or surgical assistance, the distance between the surgical robot and the patient/operating table must not exceed a preset threshold, and this threshold can form part of the position constraint information of the surgical robot.
If the surgical robot has an inoperable region by design, its parking position must be chosen so that this inoperable region does not overlap the patient/operating table; this requirement can also form part of the position constraint information of the surgical robot.
In the present application, the position constraint information of the surgical robot may include the above threshold and requirement as well as other constraint conditions.
Any region that simultaneously satisfies the position information of the patient target (given the preset threshold) and the position constraint information of the surgical robot can serve as a preset end position of the surgical robot. Where several preset end positions exist, one of them may be selected according to a preset strategy or at random.
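A small sketch of this candidate selection is given below: candidate parking positions are kept only if they lie within the distance threshold of the patient/operating table and their safety circle does not touch any occupied point. The 2-D representation and the variable names are assumptions made for illustration.

```python
import numpy as np

def candidate_end_positions(free_xy, patient_xy, max_dist, robot_radius, occupied_xy):
    """Filter candidate parking positions for the robot (a sketch of step S410).

    free_xy      : (N, 2) candidate positions on the floor that are themselves unoccupied
    patient_xy   : (2,) position of the patient / operating-table target
    max_dist     : preset threshold on the robot-to-patient distance
    robot_radius : radius of the robot's safety range
    occupied_xy  : (M, 2) floor positions belonging to other scene targets
    """
    close_enough = np.linalg.norm(free_xy - patient_xy, axis=1) <= max_dist
    if len(occupied_xy) == 0:
        clear = np.ones(len(free_xy), dtype=bool)
    else:
        d = np.linalg.norm(free_xy[:, None, :] - occupied_xy[None, :, :], axis=2)
        clear = d.min(axis=1) >= robot_radius      # safety circle free of obstacles
    return free_xy[close_enough & clear]
```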
S420, acquiring the current position of the surgical robot and the safety ranges of the surgical personnel targets, the patient target, the surgical robot targets, and the medical equipment targets;
The surgical personnel targets, the patient target, the surgical robot targets, and the various medical equipment targets each have their own safety range, which can be set in advance and read directly when needed.
It should be noted that the safety ranges of the surgical robot targets include not only the safety ranges of the parked surgical robots but also the safety range of the surgical robot to be navigated, so that the navigation is safer.
S430, calculating an optimal path for the surgical robot, with the current position of the surgical robot as the starting point and the preset end position as the end point, according to the safety ranges of the surgical personnel targets, the patient target, the surgical robot targets, and the medical equipment targets.
In the application, the surgical personnel targets, the patient target, the surgical robot targets, and the medical equipment targets have safety ranges, and so does the surgical robot to be navigated. When the optimal path is calculated and the safety range of a scene target would partially overlap the safety range of the robot to be navigated, the larger of the two overlapping safety ranges may be used as the basis of the calculation, a strategy forbidding any overlap between the two may be used, or another overlap-handling strategy may be set as the basis of the calculation.
In the application, path planning is performed based on the self-perceived three-dimensional model and the safety-range constraints, which avoids interference between the surgical robot and other equipment during navigation.
In the application, taking the case where the depth camera is mounted on the mechanical arm of the surgical robot as an example, the surgical robot can determine the relative position and relative angle (orientation) of the depth camera with respect to itself from the joint angles and motion information of the mechanical arm, and, combined with its own position and orientation, determine the exact position and angle of the depth camera. The surgical robot can therefore self-sense the position and angle information of the depth camera as well as its own position and angle information.
In addition, the surgical robot can self-sense or determine its own height: from the height of the point where the mechanical arm is mounted together with the joint angles and motion information of the arm, it can determine the current posture of the arm and hence its overall height in that posture. The height information may also be self-sensed in other ways, which are not detailed in this application.
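The self-sensing of the camera pose described above amounts to composing a few rigid-body transforms: the robot base pose in the room model, the arm's forward kinematics, and the fixed hand-eye calibration of the camera on the arm. The short sketch below shows that composition; the matrix names and the assumption that the camera looks along its +Z axis are illustrative, not taken from the patent.

```python
import numpy as np

def camera_pose_in_room(T_room_base, T_base_flange, T_flange_camera):
    """Compose 4x4 homogeneous transforms to self-sense the depth camera pose.

    T_room_base     : pose of the robot base in the operating-room (model) frame
    T_base_flange   : pose of the arm end (forward kinematics) in the base frame
    T_flange_camera : fixed hand-eye calibration of the camera on the arm
    """
    return T_room_base @ T_base_flange @ T_flange_camera

def position_and_view_direction(T):
    """Split a camera pose into its position and viewing direction (assumed +Z axis)."""
    position = T[:3, 3]
    view_dir = T[:3, :3] @ np.array([0.0, 0.0, 1.0])
    return position, view_dir
```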
It should be noted that the technical difficulty of three-dimensional panoramic navigation lies in the high complexity of the model and the large volume of data, which require far more CPU and GPU resources than a 2D navigation map. In the application, if the optimal path from the current position of the surgical robot to the preset end position were calculated directly on the three-dimensional model of the operating room, the surgical robot would need substantially more and better CPU and/or GPU resources. However, a surgical robot is designed primarily for precise surgical assistance, its CPU and/or GPU resources are comparatively limited, and it would therefore be difficult to complete the calculation of the optimal path on the three-dimensional model within the preset time.
Referring to fig. 4, in one embodiment, with the start point and end point known, the optimal path for the three-dimensional model of the operating room is calculated as follows:
S401, acquiring the first point cloud data, the identification and positioning results of the scene targets, and the height information of the surgical robot;
S402, determining, according to the height information of the surgical robot, the second point cloud data within the height range of the surgical robot in the first point cloud data;
The spatial points of the point cloud carry three-dimensional coordinates, whose height coordinate is the height of the point; the spatial points whose height coordinate is smaller than the height of the surgical robot are selected as the second point cloud data.
S403, determining the vertical projection of the second point cloud data according to its three-dimensional coordinates, wherein the vertical projection of the second point cloud data includes the vertical projections of the identified scene targets;
The height coordinate of a spatial point is its coordinate in the vertical direction; simply deleting the height coordinate from the three-dimensional coordinates of each spatial point converts the three-dimensional coordinates into the coordinates of the vertical projection.
In the application, both the height selection and the vertical projection operate on the point cloud data, making full use of the three-dimensional coordinates carried by the point cloud: only the height coordinate needs to be compared or deleted, which greatly simplifies the processing and greatly reduces the CPU and GPU resources occupied.
S404, using the vertical projection of the second point cloud data as the navigation map, calculating the optimal path from the start point to the end point according to the vertical projections and safety ranges of the scene targets; this path is the optimal path for the three-dimensional model.
In the application, truncating the point cloud at the height of the surgical robot greatly reduces the model complexity and data volume involved in calculating the optimal path and significantly speeds up path planning.
Converting the three-dimensional model map into a two-dimensional map by projection simplifies three-dimensional navigation into two-dimensional navigation, greatly reducing the model complexity and data volume required for path planning during navigation and improving its real-time performance.
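The sketch below puts steps S401-S404 together under simplifying assumptions: the point cloud is cut at the robot height, projected vertically onto a grid, the occupied cells are inflated by a single safety margin, and a breadth-first search (a stand-in for whichever planner is actually used, valid here because all free cells have equal cost) returns a shortest collision-free path. Grid resolution, the single safety margin, and the assumption that points belonging to the robot being navigated have already been removed from the input cloud are all illustrative assumptions.

```python
import numpy as np
from collections import deque

def plan_2d_path(points_xyz, robot_height, start_xy, goal_xy, cell=0.05, margin=0.3):
    """Project the scene cloud to a 2-D grid and search a collision-free path."""
    # S402: keep only the points that can actually collide with the robot
    low = points_xyz[points_xyz[:, 2] <= robot_height]

    # S403: vertical projection = drop the height coordinate, then rasterise
    xy = low[:, :2]
    xy_min = xy.min(axis=0)
    idx = np.floor((xy - xy_min) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    occupied = np.zeros(shape, dtype=bool)
    occupied[idx[:, 0], idx[:, 1]] = True

    # Inflate obstacles by the safety margin (square approximation of the safety range).
    r = int(np.ceil(margin / cell))
    inflated = occupied.copy()
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            inflated |= np.roll(np.roll(occupied, dx, axis=0), dy, axis=1)
    # np.roll wraps around the grid edges; acceptable for a sketch, a real planner would pad.

    # S404: breadth-first search on free cells (uniform cost, so BFS gives a shortest path).
    to_cell = lambda p: tuple(np.floor((np.asarray(p) - xy_min) / cell).astype(int))
    start, goal = to_cell(start_xy), to_cell(goal_xy)
    parent = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            break
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < shape[0] and 0 <= nxt[1] < shape[1]
                    and not inflated[nxt] and nxt not in parent):
                parent[nxt] = cur
                queue.append(nxt)
    if goal not in parent:
        return None                                  # no collision-free path found
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return [tuple(np.asarray(c) * cell + xy_min) for c in reversed(path)]
```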
Referring to fig. 5, in one embodiment, the step S500 of navigating the surgical robot and updating the VR map scene in real time based on the current position information of the surgical robot in the three-dimensional model includes:
S510, acquiring the planned optimal path, and detecting a set path;
in the present application, the set route is a route input by a doctor or nurse in an operating room through a control panel or the like.
S520, determining a navigation path according to the detection result of the set path and the optimal path;
in the application, if a set path is detected, the set path is taken as the navigation path; and if the set path is not detected, taking the optimal path as a navigation path.
S530, navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot;
the surgical robot is controlled to continuously run according to the navigation path.
In the application, in the navigation process, the current image data is acquired through a depth camera on the surgical robot, and the image data is the real-time image data.
S540, determining the current position and the visual-field axis of the surgical robot according to the image data, and updating the displayed VR map scene;
The visual-field axis of the surgical robot is the axis reflecting the direction in which the robot is looking. Since the surgical robot can only move forward, move backward, and steer, without lateral movement, the visual-field axis is set to the axis pointing straight ahead from the center of the surgical robot (the height of the axis may be a set height).
In the application, VR navigation of the surgical robot requires determining the visual-field axis of the robot and then determining the VR scene corresponding to that axis.
By comparing the real-time image data of the surgical robot with the three-dimensional model, the position and angle from which the depth camera captured the image data can be determined in reverse, and the current position of the surgical robot is determined accordingly.
In the application, the current position of the surgical robot can also be determined by depth cameras on other surgical robots or by an NDI device, or by other positioning means.
The surgical robot can further determine the relative position and relative angle (orientation) of the depth camera with respect to itself from the joint angles and motion information of the mechanical arm, and, combined with the position and angle information of the depth camera, determine its own current position and orientation.
The updated VR map scene may be a VR map scene regenerated with the current position of the surgical robot as the origin and the visual-field axis of the surgical robot as the viewing direction.
S550, after detecting that the surgical robot reaches the end position of the navigation path, the navigation is ended.
According to the application, the surgical robot is navigated based on the three-dimensional model, and the VR map scene is updated from the real-time image data during navigation, so that the surgical robot is navigated in real time and the updated VR map scene makes it convenient for medical staff to observe the route and intervene in it.
Referring to fig. 6, in an embodiment, S500, navigating the surgical robot and updating the VR map scene in real time based on the current position information of the surgical robot in the three-dimensional model, further includes:
S560, stopping the motion of the surgical robot after detecting that the VR map scene has been triggered;
In this step, the VR map scene is shown on a display interface, and the VR map scene is triggered when medical staff operate on it through an external input. The external input may be touching the display screen, moving a mouse over the VR map scene, clicking the VR map scene, or keyboard input, or it may be information collected by the surgical robot itself, such as voice input, face recognition input, or gesture input; any input capable of triggering the VR map scene may be used.
The specific interaction between the surgical robot and the external operator, referring to fig. 7, is as follows:
S501, the surgical robot displays the real-time VR map scene on the display interface, detects that the VR map scene has been triggered, and asks the operator whether to open the route-setting option;
S502, upon detecting that the route-setting option has been opened, the navigation route is cleared from the VR map scene and the motion of the surgical robot is stopped;
in this step, the top view of the three-dimensional model in the operating room can also be displayed simultaneously, so that the operator can set the route.
S503, receiving a set route input from outside, and judging whether the set route meets the constraint conditions;
The constraint conditions at least comprise: the safety ranges of the scene targets; the safety range of the surgical robot; and whether the end of the set route is consistent with the preset end position.
S504, if the set route does not meet the constraint conditions, determining and displaying the road sections of the set route that do not meet them, receiving an updated set route input from outside, and re-judging whether the set route meets the constraint conditions, until the set route meets the constraint conditions.
In this step, after an updated set route is received from outside, step S503 is executed again until the set route meets the constraint conditions.
In one embodiment, when the road sections that do not meet the constraint conditions are displayed, an automatic compensation option is simultaneously offered to the operator; if the automatic compensation option is selected, the road sections of the set route that meet the constraint conditions are kept, and the paths of the road sections that do not are recalculated to form a new set route.
In this step, the set route may be divided into several road sections according to whether they meet the constraint conditions, so that compliant and non-compliant sections alternate.
Recalculating the path of a non-compliant road section means keeping its start point and end point unchanged and recalculating the optimal path between them.
Preferably, the optimal path between that start point and end point is recalculated according to steps S401 to S404.
Through this automatic compensation by the surgical robot, the restrictions on the operator's set route are relaxed and the operator's freedom is increased, which improves both the operating experience and the navigation result.
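A minimal sketch of this compensation step is given below: the operator's waypoints are scanned in order, compliant runs are kept, and each non-compliant run is replaced by a replanned sub-path with the same endpoints. The two callables are assumptions standing in for the constraint checks and the planner (e.g. steps S401-S404) described above.

```python
def compensate_route(waypoints, satisfies_constraints, replan):
    """Keep compliant sub-segments of the operator's route, replan the rest.

    waypoints             : ordered list of route points entered by the operator
    satisfies_constraints : callable point -> bool (safety ranges, end-position check, ...)
    replan                : callable (start, end) -> list of points between the same endpoints
    """
    compensated = []
    i = 0
    while i < len(waypoints):
        if satisfies_constraints(waypoints[i]):
            compensated.append(waypoints[i])       # compliant waypoint: keep as-is
            i += 1
            continue
        j = i                                       # find the maximal non-compliant run
        while j < len(waypoints) and not satisfies_constraints(waypoints[j]):
            j += 1
        start = compensated[-1] if compensated else waypoints[i]
        end = waypoints[j] if j < len(waypoints) else waypoints[-1]
        compensated.extend(replan(start, end))      # same endpoints, new compliant sub-path
        i = j
    return compensated
```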
S570, after receiving the new set path, determining the set path as a navigation path, and re-executing the steps of navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot.
In this step, when it is determined that the set route satisfies the constraint condition, the set route is determined as a navigation route.
In this step, the steps of navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot are re-executed, that is, steps S530-S550 are re-executed, and the navigation of the surgical robot is completed according to the new navigation path.
According to the application, the navigation path is adjusted according to the requirement of the operator through interaction between the display interface and the operator, so that a better navigation effect is achieved.
Referring to fig. 8, in an embodiment, the surgical robot self-sensing navigation method based on virtual reality further includes:
S600, acquiring the surgical position of the patient target, and performing secondary navigation of the surgical robot based on the surgical position.
The surgical position of the patient target is the position on the patient where surgery is needed. Because the operating range and operating angle of the mechanical arm of the surgical robot are limited, the surgical position of the patient may fall outside the preset range, so that the surgical robot cannot directly perform or assist the surgical operation.
According to the application, the surgical robot is navigated to the surgical position through the secondary navigation, so that inconvenience caused by the stop position of the surgical robot is avoided.
Referring to fig. 9, in one embodiment, S600, acquiring the surgical position of the patient target and performing secondary navigation of the surgical robot based on the surgical position, includes:
S610, adjusting the depth camera of the surgical robot and capturing multiple frames of images of the patient target;
The multiple frames of images of the patient target are captured from different parts of the patient target, and together they cover all parts of the patient target so that nothing is missed.
The depth camera of the surgical robot can be adjusted according to a preset adjustment strategy to achieve a better capturing result.
S620, determining the surgical position of the patient target from the captured multi-frame images;
In the application, the surgical position of the patient target differs clearly in color from its surroundings because of the dark iodophor coating, while the non-surgical areas of the patient target are covered with green surgical drapes, which form a distinct color contrast with the surgical position.
Based on these color characteristics, the surgical position of the patient target can be located quickly in the multi-frame images; the specific determination method is not repeated in the present application.
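One plausible way to exploit these color cues is a simple HSV segmentation, sketched below with OpenCV: pixels matching the green drape are excluded, dark iodophor-like pixels are kept, and the largest remaining region is taken as the candidate surgical site. The HSV thresholds and morphological parameters are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def locate_surgical_site(bgr_image):
    """Rough surgical-site localisation from the color cues described above."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    green_drape = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))    # greenish drape -> exclude
    dark_stain = cv2.inRange(hsv, (0, 50, 20), (30, 255, 140))      # dark brownish iodophor
    candidate = cv2.bitwise_and(dark_stain, cv2.bitwise_not(green_drape))
    candidate = cv2.morphologyEx(candidate, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)    # (x, y, w, h) of the likely surgical site
```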
S630, acquiring the pose constraint information of the mechanical arm of the surgical robot;
In the application, the mechanical arm performs differently at different positions because of constraints such as its maximum rotation angle and torque angle: for example, its stability differs from position to position, and so does the maximum torque it can exert. To achieve the best stability or the largest operating torque, the operating position of the mechanical arm is constrained, and this constraint is the pose constraint of the mechanical arm. The specific constraint information is determined by the actual situation and is not limited by the application.
S640, determining the working position and the working pose of the surgical robot according to the surgical position of the patient target, the current position of the surgical robot and the pose constraint information of the mechanical arm;
in the step, the working position of the surgical robot is the parking position of the surgical robot during the operation/auxiliary operation; the working pose of the surgical robot is an initial pose of the surgical robot during operation/auxiliary operation, and the initial pose can also comprise the orientation of the surgical robot.
S650, navigating the surgical robot at the current position to a working position, and adjusting the pose of the surgical robot to the working pose.
According to the application, the position and the pose of the surgical robot are adjusted to the most suitable working position and working pose by the secondary navigation, so that the convenience of surgery is increased.
According to the application, secondary navigation achieves accurate navigation on top of the primary navigation; and because accuracy is obtained this way, the resolution requirements on the image data are relaxed, the range of applications of the surgical robot is widened, and cost is reduced.
The embodiment of the application provides a surgical robot self-sensing navigation system based on virtual reality, which is used for executing the surgical robot self-sensing navigation method based on virtual reality, and the surgical robot self-sensing navigation system based on virtual reality is described in detail below.
As shown in fig. 10, the surgical robot self-sensing navigation system based on virtual reality includes:
the three-dimensional modeling module 101 is used for acquiring image data shot by a depth camera on the surgical robot and establishing a three-dimensional model of an operating room;
a map generation module 102 for generating a VR map scene based on the three-dimensional model of the operating room;
a target recognition module 103 for recognizing and locating a scene target in the three-dimensional model of an operating room;
A path planning module 104, configured to plan an optimal path of the surgical robot to reach a preset end position according to the identified scene target;
an indoor navigation module 105 for navigating the surgical robot and updating the VR map scene in real time based on current position information of the surgical robot in the three-dimensional model.
In one embodiment, the three-dimensional modeling module 101 is further configured to:
acquiring multi-frame image data of a depth camera on a surgical robot; each frame of image data contains a color image and a depth image; converting a color image and a depth image of the image data into corresponding point cloud data, wherein each spatial point in the point cloud data has position information and color information; registering and splicing the point cloud data of the plurality of image data to obtain spliced first point cloud data; modeling the spliced first point cloud data, and determining a three-dimensional model of an operating room.
In one embodiment, the scene targets include a surgical personnel target, a patient target, a surgical robot target, and a medical device target; the object recognition module 103 is further configured to: identifying and locating the surgical personnel target, the patient target, the surgical robot target, and the medical device target by an F-PointNet algorithm.
In one embodiment, the path planning module 104 is further configured to:
acquiring the position information of the patient target and the position constraint information of the surgical robot, and determining the preset end position of the surgical robot; acquiring the current position of a surgical robot, and the safety ranges of the surgical personnel target, the patient target, the surgical robot target and the medical equipment target; and calculating an optimal path of the surgical robot by taking the current position of the surgical robot as a starting point and the preset end point as an end point according to the safety ranges of the surgical personnel target, the patient target, the surgical robot target and the medical equipment target.
In one embodiment, the indoor navigation module 105 is further configured to:
acquiring the planned optimal path, and detecting a set path; determining a navigation path according to the detection result of the set path and the optimal path; navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot; determining the current position and the visual field axis of the surgical robot according to the image data, and updating the displayed VR map scene; and after detecting that the surgical robot reaches the end position of the navigation path, ending the navigation.
In one embodiment, the indoor navigation module 105 is further configured to:
stopping the motion of the surgical robot after detecting that the VR map scene is triggered; and after receiving the new set path, determining the set path as a navigation path, and re-executing the steps of navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot.
In one embodiment, the virtual reality-based surgical robot self-sensing navigation system further comprises:
and the secondary navigation module is used for acquiring the operation position of the patient target and performing secondary navigation on the operation robot based on the operation position.
In one embodiment, the secondary navigation module is further configured to:
adjusting a depth camera of the surgical robot, and shooting multiple frames of images of the patient target; determining the operation position of the patient target according to the photographed multi-frame images; acquiring pose constraint information of a mechanical arm of the surgical robot; determining the working position and the working pose of the surgical robot according to the surgical position of the patient target, the current position of the surgical robot and the pose constraint information of the mechanical arm; and navigating the surgical robot at the current position to a working position, and adjusting the pose of the surgical robot to the working pose.
The virtual reality based surgical robot self-sensing navigation system provided by the embodiment of the application shares the same inventive concept as the virtual reality based surgical robot self-sensing navigation method provided by the embodiment of the application, and therefore has the same beneficial effects as the method adopted, run, or realized by the application program it stores.
The internal functions and structures of the virtual reality-based surgical robot self-sensing navigation system are described above, and as shown in fig. 11, in practice, the virtual reality-based surgical robot self-sensing navigation system may be implemented as an electronic device, including: memory 301 and processor 303.
The memory 301 may be configured to store a program.
In addition, the memory 301 may also be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 301 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
A processor 303 coupled to the memory 301 for executing programs in the memory 301 for:
acquiring image data captured by a depth camera on the surgical robot, and establishing a three-dimensional model of the operating room;
generating a VR map scene based on the three-dimensional model of the operating room;
identifying and positioning a scene target in the three-dimensional model of the operating room;
planning an optimal path of the surgical robot to a preset end position according to the identified scene target;
based on current position information of the surgical robot in the three-dimensional model, navigating the surgical robot and updating the VR map scene in real time.
In one embodiment, the processor 303 is specifically configured to:
acquiring multi-frame image data of a depth camera on a surgical robot; each frame of image data contains a color image and a depth image; converting a color image and a depth image of the image data into corresponding point cloud data, wherein each spatial point in the point cloud data has position information and color information; registering and splicing the point cloud data of the plurality of image data to obtain spliced first point cloud data; modeling the spliced first point cloud data, and determining a three-dimensional model of an operating room.
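For illustration, a minimal Open3D sketch of this pipeline is given below: each RGB-D frame is converted to a point cloud, consecutive clouds are registered by ICP, the spliced first point cloud is accumulated, and a surface model is reconstructed from it. The intrinsics, voxel size, ICP threshold and Poisson depth are illustrative assumptions rather than parameters of the claimed method.

```python
# Illustrative Open3D sketch; intrinsics and thresholds are assumptions, not the claimed parameters.
import copy
import numpy as np
import open3d as o3d

def build_room_model(rgbd_frames, intrinsic, voxel=0.02, icp_dist=0.05):
    """Fuse (color, depth) Open3D image pairs into a registered cloud and a surface model."""
    combined, prev_cloud, pose = None, None, np.eye(4)

    for color, depth in rgbd_frames:
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, convert_rgb_to_intensity=False)      # keep colour information
        cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
        cloud = cloud.voxel_down_sample(voxel)                 # keep registration tractable

        if prev_cloud is not None:
            # Register the new frame against the previous one and accumulate the camera pose.
            reg = o3d.pipelines.registration.registration_icp(
                cloud, prev_cloud, icp_dist, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            pose = pose @ reg.transformation
        prev_cloud = cloud

        world_cloud = copy.deepcopy(cloud)
        world_cloud.transform(pose)                            # splice into the common frame
        combined = world_cloud if combined is None else combined + world_cloud

    if combined is None:
        raise ValueError("no RGB-D frames provided")

    # Model the spliced first point cloud as a surface mesh (normals are required).
    combined.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(combined, depth=9)
    return combined, mesh
```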
In one embodiment, the scene targets include a surgical personnel target, a patient target, a surgical robot target, and a medical device target; the processor 303 is specifically configured to:
identifying and locating the surgical personnel target, the patient target, the surgical robot target, and the medical device target by an F-PointNet algorithm.
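F-PointNet combines a 2D detector with PointNet-based 3D segmentation and box estimation; those networks are omitted here, but the simplified sketch below illustrates the intermediate frustum-cropping step, in which the points whose projection falls inside a 2D detection box are extracted for 3D estimation. The intrinsics and interface are assumptions for illustration.

```python
# Simplified illustration of F-PointNet's frustum-cropping step; the 2D detector and the
# PointNet segmentation/box-estimation networks are omitted, and the intrinsics are assumed.
import numpy as np

def crop_frustum(points_xyz, bbox_2d, fx, fy, cx, cy):
    """Keep the 3D points whose projection falls inside a 2D detection box.

    points_xyz: (N, 3) points in the depth-camera frame (z forward).
    bbox_2d:    (u_min, v_min, u_max, v_max) pixel box from the 2D detector.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    valid = z > 1e-6                                  # only points in front of the camera
    u = fx * x / np.where(valid, z, 1.0) + cx         # pinhole projection to pixel coordinates
    v = fy * y / np.where(valid, z, 1.0) + cy

    u_min, v_min, u_max, v_max = bbox_2d
    in_box = valid & (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_xyz[in_box]                         # frustum points passed to 3D box estimation
```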
In one embodiment, the processor 303 is specifically configured to:
acquiring the position information of the patient target and the position constraint information of the surgical robot, and determining the preset end position of the surgical robot; acquiring the current position of the surgical robot, and the safety ranges of the surgical personnel target, the patient target, the surgical robot target and the medical device target; and calculating an optimal path of the surgical robot by taking the current position of the surgical robot as a starting point and the preset end position as the end point, according to the safety ranges of the surgical personnel target, the patient target, the surgical robot target and the medical device target.
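A hedged sketch of this step is given below: each identified target is inflated by its safety range on a 2D occupancy grid, and A* searches for a shortest collision-free path from the current position to the preset end position. The grid resolution, 4-connected moves and unit step cost are illustrative assumptions; the embodiment text does not name a particular search algorithm, so A* is used only as an example.

```python
# Illustrative A* sketch; grid resolution, connectivity and cost model are assumptions.
import heapq
import numpy as np

def plan_optimal_path(grid_shape, targets, start, goal):
    """targets: list of ((row, col) centre, safety_radius_in_cells)."""
    occupancy = np.zeros(grid_shape, dtype=bool)
    rows, cols = np.indices(grid_shape)
    for (r, c), radius in targets:                       # inflate each target by its safety range
        occupancy |= (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2

    def h(cell):                                         # admissible Euclidean heuristic
        return float(np.hypot(cell[0] - goal[0], cell[1] - goal[1]))

    open_set = [(h(start), start)]
    came_from, g = {}, {start: 0.0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:                      # reconstruct the path back to the start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < grid_shape[0] and 0 <= nxt[1] < grid_shape[1]):
                continue
            if occupancy[nxt]:                           # inside a safety range: not traversable
                continue
            cand = g[cur] + 1.0
            if cand < g.get(nxt, float("inf")):
                g[nxt], came_from[nxt] = cand, cur
                heapq.heappush(open_set, (cand + h(nxt), nxt))
    return None                                          # no collision-free path found
```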
In one embodiment, the processor 303 is specifically configured to:
acquiring the planned optimal path, and detecting a set path; determining a navigation path according to the detection result of the set path and the optimal path; navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot; determining the current position and the visual field axis of the surgical robot according to the image data, and updating the displayed VR map scene; and after detecting that the surgical robot reaches the end position of the navigation path, ending the navigation.
In one embodiment, the processor 303 is specifically configured to:
stopping the motion of the surgical robot after detecting that the VR map scene is triggered; and after receiving the new set path, determining the set path as a navigation path, and re-executing the steps of navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot.
In one embodiment, the method further comprises:
and acquiring the operation position of the patient target, and performing secondary navigation on the operation robot based on the operation position.
In one embodiment, the processor 303 is specifically configured to:
adjusting a depth camera of the surgical robot, and shooting multiple frames of images of the patient target; determining the operation position of the patient target according to the photographed multi-frame images; acquiring pose constraint information of a mechanical arm of the surgical robot; determining the working position and the working pose of the surgical robot according to the surgical position of the patient target, the current position of the surgical robot and the pose constraint information of the mechanical arm; and navigating the surgical robot at the current position to a working position, and adjusting the pose of the surgical robot to the working pose.
In the present application, only some components are schematically shown in fig. 11, which does not mean that the electronic device includes only the components shown in fig. 11.
Because the electronic device provided by the embodiment of the application and the virtual reality-based surgical robot self-sensing navigation method provided by the embodiment of the application share the same inventive concept, the electronic device has the same beneficial effects as the method adopted, run or implemented by the application program it stores.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Corresponding to the virtual reality-based surgical robot self-sensing navigation method provided in the foregoing embodiments, the present application also provides a computer-readable storage medium on which a computer program (i.e., a program product) is stored; when executed by a processor, the program performs the virtual reality-based surgical robot self-sensing navigation method provided in any of the foregoing embodiments.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
Because the computer-readable storage medium provided by the above embodiment and the virtual reality-based surgical robot self-sensing navigation method provided by the embodiment of the present application share the same inventive concept, the computer-readable storage medium has the same beneficial effects as the method adopted, run or implemented by the application program it stores.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A surgical robot self-sensing navigation method based on virtual reality, characterized by comprising the following steps:
acquiring image data shot by a depth camera on an operation robot, and establishing a three-dimensional model of an operating room;
generating a VR map scene based on the three-dimensional model of the operating room;
identifying and positioning a scene target in the three-dimensional model of the operating room;
planning an optimal path of the surgical robot to a preset end position according to the identified scene target;
based on current position information of the surgical robot in the three-dimensional model, navigating the surgical robot and updating the VR map scene in real time.
2. The surgical robot self-sensing navigation method according to claim 1, wherein the acquiring image data shot by a depth camera on the surgical robot and establishing a three-dimensional model of an operating room comprises:
acquiring multi-frame image data of a depth camera on a surgical robot; each frame of image data contains a color image and a depth image;
converting a color image and a depth image of the image data into corresponding point cloud data, wherein each spatial point in the point cloud data has position information and color information;
registering and splicing the point cloud data of the plurality of image data to obtain spliced first point cloud data;
modeling the spliced first point cloud data, and determining a three-dimensional model of an operating room.
3. The surgical robot self-sensing navigation method of claim 1, wherein the scene targets include a surgical personnel target, a patient target, a surgical robot target, and a medical device target; in the identification and positioning of the scene target in the three-dimensional model of the operating room, the surgical personnel target, the patient target, the surgical robot target and the medical device target are identified and positioned through an F-PointNet algorithm.
4. The surgical robot self-sensing navigation method of claim 3, wherein the navigating the surgical robot and updating the VR map scene in real time based on current positional information of the surgical robot in the three-dimensional model comprises:
acquiring the planned optimal path, and detecting a set path;
determining a navigation path according to the detection result of the set path and the optimal path;
navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot;
determining the current position and the visual field axis of the surgical robot according to the image data, and updating the displayed VR map scene;
and after detecting that the surgical robot reaches the end position of the navigation path, ending the navigation.
5. The surgical robot self-sensing navigation method of claim 4, wherein the navigating the surgical robot and updating the VR map scene in real time based on current positional information of the surgical robot in the three-dimensional model, further comprises:
stopping the motion of the surgical robot after detecting that the VR map scene is triggered;
and after receiving the new set path, determining the set path as a navigation path, and re-executing the steps of navigating the surgical robot according to the navigation path and acquiring real-time image data of the surgical robot.
6. The surgical robot self-sensing navigation method according to claim 3, further comprising:
and acquiring the operation position of the patient target, and performing secondary navigation on the operation robot based on the operation position.
7. The surgical robot self-sensing navigation method of claim 6, wherein the acquiring the surgical position of the patient target and performing a secondary navigation of the surgical robot based on the surgical position comprises:
adjusting a depth camera of the surgical robot, and shooting multiple frames of images of the patient target;
determining the operation position of the patient target according to the photographed multi-frame images;
acquiring pose constraint information of a mechanical arm of the surgical robot;
determining the working position and the working pose of the surgical robot according to the surgical position of the patient target, the current position of the surgical robot and the pose constraint information of the mechanical arm;
and navigating the surgical robot at the current position to a working position, and adjusting the pose of the surgical robot to the working pose.
8. A surgical robot self-sensing navigation system based on virtual reality, comprising:
the three-dimensional modeling module is used for acquiring image data shot by a depth camera on the surgical robot and establishing a three-dimensional model of an operating room;
the map generation module is used for generating a VR map scene based on the three-dimensional model of the operating room;
the target identification module is used for identifying and positioning a scene target in the three-dimensional model of the operating room;
the path planning module is used for planning an optimal path of the surgical robot to a preset end position according to the identified scene target;
and the indoor navigation module is used for navigating the surgical robot and updating the VR map scene in real time based on the current position information of the surgical robot in the three-dimensional model.
9. An electronic device, comprising: a memory and a processor;
the memory is used for storing programs;
the processor, coupled to the memory, is configured to execute the program for:
acquiring image data shot by a depth camera on an operation robot, and establishing a three-dimensional model of an operating room;
generating a VR map scene based on the three-dimensional model of the operating room;
identifying and positioning a scene target in the three-dimensional model of the operating room;
planning an optimal path of the surgical robot to a preset end position according to the identified scene target;
based on current position information of the surgical robot in the three-dimensional model, navigating the surgical robot and updating the VR map scene in real time.
10. A computer-readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement the virtual reality based surgical robot self-sensing navigation method of any of claims 1-7.
CN202310582746.7A 2023-05-22 Surgical robot self-sensing navigation method system and device based on virtual reality Active CN116687564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310582746.7A CN116687564B (en) 2023-05-22 Surgical robot self-sensing navigation method system and device based on virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310582746.7A CN116687564B (en) 2023-05-22 Surgical robot self-sensing navigation method system and device based on virtual reality

Publications (2)

Publication Number Publication Date
CN116687564A true CN116687564A (en) 2023-09-05
CN116687564B CN116687564B (en) 2024-06-25


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170312031A1 (en) * 2016-04-27 2017-11-02 Arthrology Consulting, Llc Method for augmenting a surgical field with virtual guidance content
CN108303089A (en) * 2017-12-08 2018-07-20 浙江国自机器人技术有限公司 Based on three-dimensional laser around barrier method
CN110457407A (en) * 2018-05-02 2019-11-15 北京京东尚科信息技术有限公司 Method and apparatus for handling point cloud data
US20210304423A1 (en) * 2018-07-31 2021-09-30 Gmeditec Corp. Device for providing 3d image registration and method therefor
CN113613582A (en) * 2019-05-31 2021-11-05 直观外科手术操作公司 System and method for bifurcated navigation control of a manipulator cart included within a computer-assisted medical system
WO2022138495A1 (en) * 2020-12-25 2022-06-30 川崎重工業株式会社 Surgery assistance robot, surgery assistance system, and method for controlling surgery assistance robot
CN115542889A (en) * 2021-06-30 2022-12-30 上海微觅医疗器械有限公司 Preoperative navigation method and system for robot, storage medium and computer equipment

Similar Documents

Publication Publication Date Title
CN110047104B (en) Object detection and tracking method, head-mounted display device, and storage medium
US11501527B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
US20210192774A1 (en) Mapping Optimization in Autonomous and Non-Autonomous Platforms
JP6896077B2 (en) Vehicle automatic parking system and method
US11948369B2 (en) Visual-inertial positional awareness for autonomous and non-autonomous mapping
WO2019242262A1 (en) Augmented reality-based remote guidance method and device, terminal, and storage medium
TWI574223B (en) Navigation system using augmented reality technology
CA2888943C (en) Augmented reality system and method for positioning and mapping
CN109298629B (en) System and method for guiding mobile platform in non-mapped region
JP4278979B2 (en) Single camera system for gesture-based input and target indication
EP3321889A1 (en) Device and method for generating and displaying 3d map
CN110362193B (en) Target tracking method and system assisted by hand or eye tracking
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
US20030012410A1 (en) Tracking and pose estimation for augmented reality using real features
CN109300143B (en) Method, device and equipment for determining motion vector field, storage medium and vehicle
Stricker et al. A fast and robust line-based optical tracker for augmented reality applications
WO2023056544A1 (en) Object and camera localization system and localization method for mapping of the real world
CA3105356A1 (en) Synthesizing an image from a virtual perspective using pixels from a physical imager array
US11727637B2 (en) Method for generating 3D skeleton using joint-based calibration acquired from multi-view camera
JP7003594B2 (en) 3D point cloud display device, 3D point cloud display system, 3D point cloud display method, 3D point cloud display program, recording medium
CN113748445A (en) Boundary estimation from posed monocular video
US11443719B2 (en) Information processing apparatus and information processing method
CN116687564B (en) Surgical robot self-sensing navigation method system and device based on virtual reality
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
CN116687564A (en) Surgical robot self-sensing navigation method system and device based on virtual reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant