CN117338436A - Manipulator and control method thereof - Google Patents

Manipulator and control method thereof

Info

Publication number
CN117338436A
CN117338436A (application CN202311657703.7A)
Authority
CN
China
Prior art keywords
manipulator
intention
operator
data
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311657703.7A
Other languages
Chinese (zh)
Other versions
CN117338436B (en)
Inventor
邵博文
夏天
夏楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jixi Jikuang Hospital Co ltd
Original Assignee
Jixi Jikuang Hospital Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jixi Jikuang Hospital Co ltd filed Critical Jixi Jikuang Hospital Co ltd
Priority to CN202311657703.7A
Publication of CN117338436A
Application granted
Publication of CN117338436B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70Manipulators specially adapted for use in surgery
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of surgical robots, and particularly relates to a manipulator and a control method thereof. The invention discloses a manipulator, comprising: an operation parameter acquisition unit for acquiring the states of the handle, the manipulator joints, and the connecting wire; and an intention acquisition unit for acquiring the operator's parameters and the states of the manipulator joints and connecting wire, determining the operator's intention from those inputs, and executing the predicted operation when the confidence in the intention exceeds a threshold. The intention is obtained from a Bayesian classifier trained on virtual training data. The invention thus acquires the operator's intention and, based on that intention, provides assistance through a preset program.

Description

Manipulator and control method thereof
Technical Field
The invention belongs to the field of surgical robots, and particularly relates to a manipulator and a control method thereof.
Background
The endoscope is a detection instrument integrating traditional optics, ergonomics, precision machinery, modern electronics, mathematics, and software. It can enter the stomach through the oral cavity or enter the body through other natural orifices, and can reveal lesions that X-rays cannot display, making it very helpful for clinical treatment.
The manipulator operation of a surgical robot depends on both the operator's experience and the equipment, and an operator's experience cannot be quickly transferred to new equipment. In addition, if the manipulator can give appropriate prompts based on the user's operation, the operator's efficiency can be effectively improved.
Disclosure of Invention
At least one aspect and advantage of the present invention will be set forth in part in the description that follows, or may be obvious from the description, or may be learned by practice of the presently disclosed subject matter.
It is an object of the present invention to overcome the drawbacks of the prior art and to provide a manipulator that can acquire a user's intention from the user's operations. The invention also provides a control method for the manipulator that can acquire the user's operation intention.
According to a first aspect of the invention, a manipulator for a surgical robot comprises:
the operation parameter acquisition unit is used for acquiring the states of the handle, the manipulator joint and the connecting wire;
an intention acquisition unit for acquiring the operator's parameters and the states of the manipulator joints and connecting wire, determining the operator's intention from those inputs, and executing the predicted operation when the confidence in the intention exceeds a threshold;
wherein the intention is obtained from a Bayesian classifier trained on virtual training data.
According to one embodiment of the invention, the information collected by the operation parameter acquisition unit comprises handle displacement, handle velocity, handle acceleration, master manipulator torque, manipulator joint displacement, manipulator joint velocity, manipulator joint acceleration, guide wire tension, and/or guide wire feed length.
According to one embodiment of the invention, the virtual training data is generated from an adversarial neural network and random noise.
According to one embodiment of the invention, the virtual training data is pruned based on the actual operation parameters acquired by the operation parameter acquisition unit: virtual training data whose adjacency degree to the actual operation parameters is below a threshold is removed, where the adjacency degree is the time required to transition from the actual operation parameters to the virtual training data.
According to one embodiment of the invention, the virtual training data is generated by an adversarial neural network.
According to one embodiment of the invention, the training process of the Bayesian classifier comprises:
an operation parameter sequence is obtained from the operation parameter acquisition unit or from the virtual training data using a sliding data window; operation features are generated from the sequence; the features and the operator intentions associated with them are tallied; and the prior probability that each operation feature corresponds to an operator intention is determined.
According to one embodiment of the present invention, the virtual training data is pruned based on the actual operation parameters acquired by the operation parameter acquisition unit: virtual training data whose adjacency degree to the actual operation parameters is below a threshold is removed, where the adjacency degree is the time required to transition from the actual operation parameters to the virtual training data.
According to one embodiment of the invention, the step of acquiring intent by the Bayesian classifier comprises:
acquiring the operation parameter sequence collected by the operation parameter acquisition unit and generating operation features from the most recent operation parameters; and obtaining the operator's intention using the Bayesian classifier.
According to a second aspect of the present invention, a control method of a manipulator for a surgical robot includes:
collecting the states of a handle, a manipulator joint and a connecting wire;
acquiring the operator's parameters and the states of the manipulator joints and connecting wire, determining the operator's intention from those inputs, and executing the predicted operation when the confidence in the intention exceeds a threshold;
wherein the intention is obtained from a Bayesian classifier trained on virtual training data.
The invention thus acquires the operator's intention and, based on that intention, provides assistance through a preset program.
Drawings
FIG. 1 is a flow chart of the manipulator control method of the present invention;
FIG. 2 is a schematic diagram of the manipulator control system of the present invention.
Detailed Description
The present disclosure will now be discussed with reference to several exemplary embodiments. It should be understood that these embodiments are discussed only to enable those of ordinary skill in the art to better understand and thus practice the teachings of the present invention, and are not meant to imply any limitation on the scope of the invention.
As used herein, the term "comprising" and its variants are to be interpreted as open-ended terms meaning "including but not limited to". The term "based on" is to be interpreted as "based at least in part on". The terms "one embodiment" and "an embodiment" are to be interpreted as "at least one embodiment". The term "another embodiment" is to be interpreted as "at least one other embodiment".
Fig. 1 is a flow chart of the manipulator control method of the present invention, and fig. 2 is a schematic diagram of the manipulator control system of the present invention. As shown in figs. 1 and 2, according to an embodiment of the present invention, a manipulator for a surgical robot comprises:
the operation parameter acquisition unit is used for acquiring the states of the handle, the manipulator joint and the connecting wire;
an intention acquisition unit for acquiring the operator's parameters and the states of the manipulator joints and connecting wire, determining the operator's intention from those inputs, and executing the predicted operation when the confidence in the intention exceeds a threshold;
wherein the intention is obtained from a Bayesian classifier trained on virtual training data.
The manipulator comprises at least manipulator joints and a handle, and is used to receive instructions and convert them into action commands for the manipulator.
The manipulator provided by the invention is suitable for interventional surgery, a minimally invasive clinical therapy in which special precision instruments such as catheters or guide wires are introduced into the human body under the guidance of medical imaging equipment in order to diagnose and locally treat in-vivo pathological conditions.
Specifically, the manipulator suitable for the invention comprises an output rod and manipulator joints; common joint types include translational joints, rotational joints, turning joints, and gripper fingers. The rotational joint rotates the catheter or guide wire, the translational joint advances it, the gripping fingers clamp interventional devices such as the catheter or guide wire, and the turning joint reorients the rotational joint and gripper fingers.
By means of sensors, the user's operations can be acquired, and the user's operation intention can be inferred from the operation sequence performed over a period of time together with the operation the user actually performs after that sequence.
The invention therefore has good universality: the improvement to the manipulator mainly consists of adding sensors, acquiring sensor parameters, and analyzing the acquired parameters, on the basis of which the user's intention is acquired and auxiliary operations are executed.
During operation, the operation parameters can be acquired by sensors arranged on the manipulator or obtained through image recognition.
For example, a thin-film pressure sensor can be used to monitor the pressure of a component driven by the connecting wire. When installed, the sensor can be attached to the side of the driven component facing away from the operator to collect the pressure signal when the connecting wire is loaded, so that the force on the guide wire can be accurately obtained.
Alternatively, surgical robots for vascular interventions rely on the delivery mechanism tracking instructions from the master manipulator to reproduce the physician's operating motions and complete delivery of the catheter or guide wire. This process can track the physician's operating instructions via the acceleration and angle of the handle's movement.
Alternatively, the sensor may monitor the handle's instructions as determined by the electrical signals from the handle's buttons; with this, the manipulator's control commands, such as forward, reverse, forward-rotation, and reverse-rotation commands, can be mapped into a command array and used for subsequent intent determination.
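The mapping from button signals to a command array can be sketched as follows. This is a minimal illustration, not the patented implementation; the button names, the `Command` encoding, and the `encode_commands` helper are all hypothetical:

```python
from enum import IntEnum

class Command(IntEnum):
    # Hypothetical encoding of the commands named in the text.
    FORWARD = 0
    REVERSE = 1
    ROTATE_FORWARD = 2
    ROTATE_REVERSE = 3

# Hypothetical mapping from handle button signals to commands.
BUTTON_MAP = {
    "btn_advance": Command.FORWARD,
    "btn_retract": Command.REVERSE,
    "btn_cw": Command.ROTATE_FORWARD,
    "btn_ccw": Command.ROTATE_REVERSE,
}

def encode_commands(button_events):
    """Map a sequence of button events to a command array usable for
    subsequent intent determination; unknown signals are ignored."""
    return [int(BUTTON_MAP[b]) for b in button_events if b in BUTTON_MAP]
```

The resulting integer array is what a downstream classifier would consume as part of the operation sequence.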
Further, the acquired operator parameters and the states of the manipulator joints and connecting wire are associated with candidate operator intentions, and the degree of association is judged or predicted. When an intention fitting the operating environment matches the operator, the operator's intention is obtained; when its confidence exceeds the threshold, that intention is taken as the operator's, and the predicted operation may be executed.
Intention is judged in the present application using a Bayesian classifier: a trained operator typically cycles through a regular pattern to realize a given intention, and on a given device the possibilities within a certain period are limited. The correlation between intention and the user's operations can therefore be obtained by analyzing historical usage data, and the user's intention can be inferred from the actual operation sequence.
The invention augments the data from the user's actual operation records, constructs a Bayesian classifier on the augmented data to obtain a set of correlations between operator intentions and operation sequences, determines the operation intention from the collected operation sequence, and executes the predicted operation based on that determination.
Through this embodiment, the operator's intention can be acquired, and assistance can be provided by a preset program based on that intention.
In one embodiment of the present invention, a prompt method is preset for each intention, and when the user's intention is determined, the prompt corresponding to that intention is presented. For example, when the user's intention to operate the interface is determined, a corresponding auxiliary program is provided to assist the user's operation of the endoscope.
In another embodiment of the invention, the prompt corresponding to the intention is a parameter setting of the manipulator and connecting wire: when the user's intention is determined, the manipulator joint parameters and the tension state of the connecting wire are set accordingly.
According to one embodiment of the invention, the information collected by the operation parameter acquisition unit comprises handle displacement, handle velocity, handle acceleration, master manipulator torque, manipulator joint displacement, manipulator joint velocity, manipulator joint acceleration, guide wire tension, and/or guide wire feed length.
For example, common manipulator-related parameters include displacement, velocity, and acceleration, while guide-wire-related parameters include tension, feed length, and so on. In some cases the guide wire parameters can be obtained directly from its control unit, for example by reading the output torque and output force of the drive that feeds the guide wire to determine the actual tension of the guide wire.
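Deriving tension from the drive's output torque can be sketched as below. This is a simplified model under the assumption that the wire is wound on a drum or capstan of known radius, ignoring friction and inertia; `guidewire_tension` is a hypothetical helper, not part of the patent:

```python
def guidewire_tension(output_torque_nm, drum_radius_m):
    """Estimate guide wire tension (N) from the feed drive's output torque,
    assuming a friction-free drum of known radius: tension = torque / radius."""
    if drum_radius_m <= 0:
        raise ValueError("drum radius must be positive")
    return output_torque_nm / drum_radius_m
```

For instance, 0.05 N·m of torque on a 10 mm radius drum corresponds to 5 N of tension.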
Further, parameters related to complex mechanical structures can be obtained with parameter acquisition devices other than sensors; common approaches also include reference image acquisition devices and auxiliary algorithms.
For example, when the manipulator is a seven-degree-of-freedom serial master manipulator, the parameters that can further be obtained include the coordinate system of the operator's arm, the D-H parameter table of the physician's arm, and the gravity moment Gi of each joint of the master manipulator, where Gi is the gravity moment of the i-th joint and i is a natural number from 1 to 7. In this way, the operator's state and the equipment parameters during operation can be obtained, and the operation sequences corresponding to the user's subsequent operations can be derived from the time-ordered states and parameters, obtained by indexing the data set or by batch classification with a tool, for example by classifying according to the operation records.
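The shape of the Gi table can be illustrated with a deliberately simplified per-joint model. This is not the full D-H computation (which would propagate the joint transforms along the chain); it treats each joint independently in a vertical plane, and `gravity_moments` with its arguments is an illustrative assumption:

```python
import math

G_ACCEL = 9.81  # gravitational acceleration, m/s^2

def gravity_moments(masses, com_dists, angles):
    """Simplified per-joint gravity moment Gi = m_i * g * l_ci * cos(theta_i),
    where m_i is the link mass (kg), l_ci the distance to the link's center
    of mass (m), and theta_i the joint angle from horizontal (rad).
    Returns one Gi per joint (i = 1..n); a real 7-DOF master manipulator
    would require the full D-H kinematic chain."""
    return [m * G_ACCEL * l * math.cos(t)
            for m, l, t in zip(masses, com_dists, angles)]
```

For a single horizontal link of 2 kg with its center of mass 0.1 m out, this gives Gi = 2 × 9.81 × 0.1 = 1.962 N·m.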
According to one embodiment of the invention, the virtual training data is generated from an adversarial neural network and random noise.
In actual operation, the intention and the operation sequence are closely tied to the interface of the surgical robot being operated, the operator, and the equipment. When the robot's interaction interface changes, a batch of data usable for training cannot be obtained, and the operator's operation sequences and certain equipment characteristics cannot be transplanted and can only be bounded within a range. It is therefore necessary to generate virtual training data from the limited amount of data actually collected on the equipment and to determine the intention based on that virtual data. Because the data volume is small, training with an unsupervised neural network or other networks easily leads to overfitting and result drift, which affects the accuracy of the intention judgment.
Therefore, the invention processes the actual data to obtain a series of virtual training data generated by the adversarial neural network and random noise, and judges and acquires the intention based on this virtual training data.
In the invention, a blank adversarial neural network is first constructed and trained on real operation data and user intentions to obtain a trained adversarial network, and some or all of the steps in the method are then executed using the trained network.
The adversarial neural network is consistent with a standard adversarial network and comprises a generator and a discriminator. For a typical adversarial network, the input layer corresponds to a historical operation sequence, and the input data passes through a down-sampling module, an up-sampling module, and a feature fusion module before the output label is obtained.
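The generator/discriminator pair can be sketched structurally as follows. This is a minimal sketch only: both parts are single linear/logistic units standing in for the down-sampling, up-sampling, and feature-fusion modules the text names, and the `TinyGAN` class with its methods is a hypothetical illustration (the adversarial training loop is omitted for brevity):

```python
import math
import random

random.seed(0)  # deterministic for demonstration

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyGAN:
    """Structural sketch of the adversarial pair: the generator maps a noise
    vector to a synthetic operation-parameter sequence through one linear
    layer; the discriminator scores a sequence with one logistic unit."""

    def __init__(self, noise_dim, seq_dim):
        self.noise_dim = noise_dim
        # Generator weights: seq_dim x noise_dim linear map.
        self.G = [[random.gauss(0, 0.1) for _ in range(noise_dim)]
                  for _ in range(seq_dim)]
        # Discriminator weights: one per sequence element.
        self.D = [random.gauss(0, 0.1) for _ in range(seq_dim)]

    def generate(self):
        """Produce one synthetic operation-parameter sequence from noise."""
        z = [random.gauss(0, 1) for _ in range(self.noise_dim)]
        return [sum(w * zi for w, zi in zip(row, z)) for row in self.G]

    def discriminate(self, seq):
        """Probability that `seq` came from real operation data."""
        return sigmoid(sum(w * x for w, x in zip(self.D, seq)))
```

In training, the discriminator's score on generated sequences would drive gradient updates for both parts, as described in the surrounding text.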
It can be understood that when training the adversarial neural network, the input is the context of a series of operations and parameter states, and the output is the user's intention. When the trained network is used, the user's current states and parameters form the input sequence, and the output label value is the user intention corresponding to the operator.
It will be appreciated that the generator and discriminator oppose each other during training, so that both are optimized.
However, the data generated in this way may be redundant. Therefore, when constructing the virtual training data, a judger can be used to match the degree of approximation between the virtual training data and the actual data, and virtual samples whose approximation degree exceeds a threshold are eliminated; that is, a defined approximation measure (optionally the Euclidean distance) serves as the basis for judging the data. Before judging the approximation, the input source data can be normalized with a Softmax function, the Euclidean distance then calculated, and a suitable distance threshold chosen to reject data whose approximation is too high.
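The normalize-then-reject step can be sketched as below. A minimal sketch under the assumption that "approximation too high" means "Euclidean distance after Softmax normalization below a minimum"; the helper names are hypothetical:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]  # shift by max for numerical stability
    s = sum(es)
    return [e / s for e in es]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def reject_too_similar(virtual, real, min_dist):
    """Softmax-normalize every sequence, then drop virtual samples whose
    Euclidean distance to ANY real sample falls below `min_dist`, i.e.
    samples whose approximation degree to the real data is too high."""
    real_n = [softmax(r) for r in real]
    kept = []
    for v in virtual:
        v_n = softmax(v)
        if all(euclidean(v_n, r) >= min_dist for r in real_n):
            kept.append(v)
    return kept
```

A virtual sample identical to a real one has distance 0 and is rejected, while clearly distinct samples survive.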
According to one embodiment of the present invention, the generated virtual training data is pruned based on the actual operation parameters acquired by the operation parameter acquisition unit: virtual training data whose adjacency degree to the actual operation parameters is below a threshold is removed, where the adjacency degree is the time required to transition from the actual operation parameters to the virtual training data.
The training data obtained when training the adversarial neural network is virtual. This data correlates with intention to some extent, but its distribution is highly correlated with the actually collected data and follows an approximately identical distribution law, so labeling and training on it cannot reflect all predictable situations. The data is therefore screened and pruned: samples close in time and space are removed according to the temporal rule, making the generated data closer to the actual distribution.
To screen and prune the data, the Euclidean distance is first calculated, for example between each generated virtual sample and each element of the original data set; before the calculation, the data is normalized with a Softmax function. All virtual training data whose Euclidean distance is below a threshold is collected, and then a joint, velocity, or acceleration is selected to compute the transition time. For example, if the actually collected data has a velocity of 1.00 mm/s and an acceleration of 0.05 mm/s², and a virtual sample has a velocity of 1.001 mm/s, the time difference, i.e. the adjacency degree, is 0.001/0.05 = 0.02 s. With the threshold set to 1 second, the adjacency of these two samples is far below the threshold, and the virtual sample should be removed.
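The adjacency computation and pruning rule above can be sketched directly; `adjacency_seconds` and `prune_virtual` are illustrative helper names, and the sketch covers only the single-velocity case from the worked example:

```python
def adjacency_seconds(real_speed, virtual_speed, accel):
    """Adjacency degree: time needed to transition from the real operating
    point to the virtual one at the given acceleration.
    Units: speeds in mm/s, acceleration in mm/s^2.
    Reproduces the worked example: |1.001 - 1.00| / 0.05 = 0.02 s."""
    return abs(virtual_speed - real_speed) / accel

def prune_virtual(virtual_speeds, real_speed, accel, min_adjacency):
    """Remove virtual samples whose adjacency to the real data is below the
    threshold, keeping only samples far enough from the actual distribution."""
    return [v for v in virtual_speeds
            if adjacency_seconds(real_speed, v, accel) >= min_adjacency]
```

With a 1-second threshold, the 1.001 mm/s sample (adjacency 0.02 s) is removed while a 1.2 mm/s sample (adjacency 4 s) is kept.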
According to one embodiment of the invention, the virtual training data is generated by an adversarial neural network.
In one embodiment of the invention, the virtual training data is generated by:
A generative adversarial neural network is constructed, comprising a generator and a discriminator. The discriminator comprises a gesture discriminator and a parameter discriminator. The gesture discriminator judges whether the gesture corresponding to a generated parameter is associated with the other parameters and whether unreasonable path points exist; the parameter discriminator judges whether the generated parameters lie within a reasonable range, e.g. whether acceleration, movement range, and real-time tension are within the reference range and within what an operator could apply. For example, if a generated acceleration clearly exceeds a manually applicable value, the operating parameters are considered unreasonable. In addition, when indexing some data, reasonableness is determined by checking the data sequence for unreasonable path points: for example, if sequence A contains 10 operations A1-A10 but the parameters contained in A5 clearly correspond to an unreasonable gravity moment, the path point A5 is considered wrong and must be regenerated.
Further, the generator adjusts existing parameters according to Euclidean-distance proximity and generates new operation parameter sequences to expand the originally acquired operation parameters. For example, when the acquired data contains 10 parameters, parameters with indices 2-10 are first held fixed while the 1st parameter is adjusted, and the upper and lower limits of new data whose Euclidean distance to the current actual data stays below the threshold are computed. This operation is repeated to determine the upper and lower limits of the value range for parameters 2-10, after which virtual training data whose Euclidean distance to the actual data is below the threshold is obtained by randomly perturbing the data.
The generated data is labeled using the Euclidean distances between operation parameters: among the operation sequences corresponding to the same operation intention (call them A), if a generated sequence lies within the threshold distance of any sequence in A, it is associated with that operation intention. If an operation sequence is related to two or more intentions, it is considered related to both, and the nearest intention is then determined by the magnitude of the Euclidean distances. For example, if intention 1 and intention 2 both match and the Euclidean distance calculation shows the current operation is closer to intention 1, intention 1 is selected as the target intention.
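The nearest-intention labeling rule can be sketched as follows; `label_sequence` and the dictionary layout are illustrative assumptions, not the patented procedure:

```python
import math

def label_sequence(seq, intent_examples, threshold):
    """Label a generated operation sequence with the intent whose example
    sequences lie within `threshold` Euclidean distance; when several intents
    match, the nearest one wins. `intent_examples` maps intent name to a list
    of example sequences. Returns None when no intent is close enough."""
    candidates = {}
    for intent, examples in intent_examples.items():
        d = min(math.dist(seq, ex) for ex in examples)
        if d <= threshold:
            candidates[intent] = d
    if not candidates:
        return None
    return min(candidates, key=candidates.get)  # nearest matching intent
```

With examples at (0, 0) for intention 1 and (3, 0) for intention 2, the point (1, 0) matches both but is labeled intention 1, the closer of the two.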
The virtual training data can be generated in batches: the training set of real samples is fed to the discriminator for training while new noise is fed to the generator to produce virtual samples; the virtual samples are then fed to the discriminator, realizing the adversarial training of generator and discriminator. After training, the model is verified with the test set of real samples, saved, and used to generate virtual training data.
In some embodiments, the adversarial neural network comprises a first generator, a second generator, a first discriminator, a second discriminator, and a third classification discriminator. The first generator generates operator parameters, such as handle-related parameters, and the second generator generates equipment-related state parameters, such as the states of the manipulator joints and connecting wire. The first discriminator judges whether the operator parameters generated by the first generator are reasonable, and the second discriminator judges whether the parameters generated by the second generator are reasonable. When both are reasonable, the third discriminator judges whether the Euclidean distance between the generated parameters and the context parameters lies within the threshold range; if so, the parameters are considered reasonable, otherwise unreasonable.
According to one embodiment of the invention, the training process of the Bayesian classifier comprises:
an operation parameter sequence is obtained from the operation parameter acquisition unit or from the virtual training data using a sliding data window; operation features are generated from the sequence; the features and the operator intentions associated with them are tallied; and the prior probability that each operation feature corresponds to an operator intention is determined.
In the present invention, the data for training the Bayesian classifier is generated by the adversarial neural network and, after the processing described above, reflects a distribution similar to the historical data of the actual operator's habits.
Further, a single variable is taken as an illustrative example.
For the Bayesian classifier, suppose only the tension at one position of the connecting wire is considered, and the collected operation parameters are F = {A1, A2, A3, ..., An}, where Ai (1 <= i <= n) characterizes the tension of the connecting wire. When the window size is set to 4, each feature formed by a combination of four consecutive values corresponds to one intention, and the prior probability distribution is obtained by tallying the co-occurrence of features and intentions.
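The sliding-window tally can be sketched as below. One assumption is made that the text leaves open: each window is labeled with the intent at the window's last step. The helper names are illustrative:

```python
from collections import Counter

def windows(params, size):
    """Slide a window of `size` over the operation-parameter sequence."""
    return [tuple(params[i:i + size]) for i in range(len(params) - size + 1)]

def prior_probabilities(param_seq, intent_seq, size=4):
    """Count how often each windowed feature co-occurs with each intent and
    turn the counts into probabilities P(intent | feature). The intent label
    for a window is taken as the intent at the window's last step (an
    assumption; the text does not fix the alignment)."""
    counts = Counter()
    totals = Counter()
    for i, feat in enumerate(windows(param_seq, size)):
        intent = intent_seq[i + size - 1]
        counts[(feat, intent)] += 1
        totals[feat] += 1
    return {k: v / totals[k[0]] for k, v in counts.items()}
```

For instance, if the window (1, 2) appears twice, once under intent "a" and once under intent "b", each association gets probability 0.5.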
Further, training the Bayesian classifier based on the virtual training data is divided into three phases. In the preparation phase, the virtual training data are selected. In the classifier training phase, features are formed from the last m operation parameters, the occurrence frequency of each category in the training samples is counted, and the conditional probability estimate of each feature attribute is computed for each category and recorded. In the application phase, the Bayesian classifier is used to classify the user operation sequence to be classified.
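The three phases can be sketched as a plain naive Bayes routine over sliding-window features. The helper names, the add-one smoothing, and the toy intent labels below are illustrative assumptions; the embodiment does not prescribe them:

```python
from collections import Counter, defaultdict

def make_feature(params, m):
    # Preparation phase: the feature is the last m operation parameters.
    return tuple(params[-m:])

def train(samples, m):
    # Training phase: count class frequencies and per-attribute
    # conditional frequencies for each intent category.
    priors = Counter()
    cond = defaultdict(Counter)  # keyed by (attribute position, intent)
    for params, intent in samples:
        feat = make_feature(params, m)
        priors[intent] += 1
        for i, v in enumerate(feat):
            cond[(i, intent)][v] += 1
    return priors, cond

def classify(params, m, priors, cond):
    # Application phase: score every intent and return the most probable.
    feat = make_feature(params, m)
    total = sum(priors.values())
    best, best_p = None, -1.0
    for intent, n in priors.items():
        p = n / total
        for i, v in enumerate(feat):
            c = cond[(i, intent)]
            # Simple add-one smoothing so unseen values do not zero out p.
            p *= (c[v] + 1) / (sum(c.values()) + len(c) + 1)
        if p > best_p:
            best, best_p = intent, p
    return best
```

With a window of m = 4, two repeated "advance" sequences and one "retract" sequence are enough to recover the corresponding intents.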
When the Bayesian classifier is constructed, a preferred approach is to set the sliding data window size to 6, corresponding to the most recent 6 seconds of operation parameters. The operation parameters cover the states of the handle, the manipulator joints, and the connecting wire, as reported by the sensors arranged on the handle, the keys, and the connecting wire. To reduce the amount of virtual training data and obtain more informative user operation features, the number of states acquired by the operation parameter acquisition unit can be selected in a targeted manner.
According to one embodiment of the present invention, the generated virtual training data is clipped based on the actual operation parameters acquired by the operation parameter acquisition unit: virtual training data whose adjacency degree with respect to the actual operation parameters is lower than a threshold value is removed, the adjacency degree being the time required to transition from the actual operation parameters to the virtual training data.
When the adversarial neural network is trained, the resulting training data are virtual. This data is associated with intention to a certain extent, but its distribution is highly correlated with the actually collected data and follows an approximately identical distribution law, so labeling and training on it alone cannot reflect all predictable situations. The data are therefore screened and clipped: samples that are close in time and space are removed according to the temporal rule, so that the generated data are closer to the actual distribution.
To perform the screening and clipping, the Euclidean distance is first calculated, for example between each generated virtual sample and each element of the original data set; before the Euclidean distance calculation, the data are normalized with a Softmax function. All virtual training data whose Euclidean distance is below a threshold are collected, and a joint position, velocity, or acceleration is then selected to compute the transition time. For example, if the actually collected data has a velocity of 1.00 mm/s and an acceleration of 0.05 mm/s², and a virtual sample has a velocity of 1.001 mm/s, the time difference, i.e. the adjacency degree, is 0.001/0.05 = 0.02 s. With the threshold set to 1 second, the adjacency degree of the two samples is far below the threshold, and the virtual sample should be removed.
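The adjacency computation in the velocity/acceleration example above can be reproduced as follows; the function names are assumptions, while the 1-second cutoff is the one stated in the text:

```python
def adjacency_seconds(v_real, v_virtual, accel):
    """Adjacency degree: time needed to move from the real velocity to
    the virtual one at the recorded acceleration."""
    return abs(v_virtual - v_real) / abs(accel)

def prune_by_adjacency(v_real, virtual_velocities, accel, min_seconds=1.0):
    # Remove virtual samples reachable from the real one in under min_seconds.
    return [v for v in virtual_velocities
            if adjacency_seconds(v_real, v, accel) >= min_seconds]
```

With v_real = 1.00 mm/s and accel = 0.05 mm/s², the virtual velocity 1.001 mm/s yields an adjacency of 0.02 s and is pruned, while a sample at 1.2 mm/s (4 s away) is kept.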
According to one embodiment of the invention, the step of acquiring intent by the Bayesian classifier comprises:
acquiring the operation parameter sequence collected by the operation parameter acquisition unit, and generating operation features based on the most recent operation parameters of the executed operation; the operator's intention is then obtained using the Bayesian classifier.
The specific classification process using the Bayesian classifier is as follows: acquire the operation parameter sequence collected by the operation parameter acquisition unit and generate the operation features based on the most recent operation parameters of the executed operation, where the selected operation parameters should have the same source and feature extraction steps as those used when the Bayesian classifier was constructed;
then use the Bayesian classifier to obtain, for the current operation features, the probability that the operation belongs to each category via the prior probabilities, sort these probabilities, and take the operation with the maximum probability as the operator's intention.
In this process, the candidate user intentions P = {p1, p2, p3, ..., pm} correspond to probabilities W = {w1, w2, w3, ..., wm}. Assuming W is already sorted in descending order, p1 is the most probable user intention; if w1 and w2 are close, e.g. within a 5% margin, the user's intention may be considered unclear and the primary operational intention is not determined.
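The descending-order selection with the 5% ambiguity margin might look like the sketch below; returning `None` for an unclear intention is an assumed convention, not one stated in the embodiment:

```python
def pick_intent(intent_probs, margin=0.05):
    """Return the most probable intent, or None when the top two
    probabilities are within `margin` of each other (intent unclear)."""
    ranked = sorted(intent_probs.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        return None  # intention not obvious; do not predict
    return ranked[0][0]
```

A clear winner is returned directly; when the two leading probabilities differ by less than the margin, no primary intention is reported.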
In this way, personalized adjustment of the instrument can be provided based on the user's actions and parameters. When the equipment changes, the same function can be achieved simply by adjusting the historical training data associated with the relevant parameters.
According to an embodiment of the present invention, a control method of a manipulator for a surgical robot includes:
collecting the states of a handle, a manipulator joint and a connecting wire;
acquiring parameters of an operator and states of a manipulator joint and a connecting wire, determining the intention of the operator based on the parameters of the operator and the states of the manipulator, and executing the predicted operation when the probability of the operator's intention is greater than a threshold value;
wherein the intention is obtained by using a Bayesian classifier trained on virtual training data.
According to an embodiment of the present invention, there is provided a manipulator comprising:
the operation parameter acquisition unit is used for acquiring handle displacement, handle position speed, handle position acceleration, displacement of a manipulator joint, speed of the manipulator joint and guide wire tension;
an intention acquisition unit for acquiring parameters of an operator and states of a manipulator joint and a connecting wire, determining an intention of the operator based on the parameters of the operator and the states of the manipulator, and performing a predicted operation when the probability of the operator's intention is greater than a threshold;
wherein the intention is obtained by using a Bayesian classifier trained on virtual training data;
the virtual training data is obtained by data expansion based on data generated by historical operation, and the data expansion mode comprises the following steps:
constructing a generative adversarial neural network comprising a generator and a discriminator, the discriminator comprising a gesture discriminator and a parameter discriminator. The gesture discriminator judges whether the gesture corresponding to the generated parameters is consistent with the other parameters and whether unreasonable path points exist; the parameter discriminator judges whether the generated parameters, such as acceleration, movement range, and real-time tension, lie within a reasonable reference range, i.e. a range an operator could actually apply. For example, if a generated acceleration significantly exceeds a value a human could apply, the operation parameters are considered unreasonable.
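In its simplest form, a parameter discriminator of this kind reduces to range checks on each generated value. The limit values below are invented for illustration; real bounds would come from the reference ranges an operator can physically apply:

```python
def parameters_plausible(sample, limits):
    """Parameter-discriminator sketch: every generated value must stay
    inside its reference range. `sample` maps parameter names to values;
    `limits` maps the same names to assumed (low, high) bounds."""
    return all(limits[k][0] <= v <= limits[k][1] for k, v in sample.items())
```

A sample whose acceleration far exceeds the manually applicable bound is rejected as unreasonable, exactly as the text describes.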
Further, random noise is introduced to generate new data, which is verified by the discriminator. Labeling of the generated data is performed using the Euclidean distances between operation parameters: among the operation sequences corresponding to the same operation intention (denoted A), if the Euclidean distance from a generated sequence to any sequence in A is within the threshold range, the generated sequence is associated with that operation intention. If a sequence is related to two or more intentions, it is considered related to all of them, and the closest operation intention can then be determined according to the Euclidean distance differences. For example, if intention 1 and intention 2 coexist and the current operation intention obtained by Euclidean distance calculation is closer to intention 1, intention 1 is selected as the target intention.
Training the Bayesian classifier based on the virtual training data is divided into three phases. In the preparation phase, the virtual training data are selected. In the classifier training phase, features are formed from the last m operation parameters, the occurrence frequency of each category in the training samples is counted, and the conditional probability estimate of each feature attribute is computed for each category and recorded. In the application phase, the Bayesian classifier is used to classify the user operation sequence to be classified.
When the Bayesian classifier is constructed, the size of the sliding data window is set to 6, which corresponds to the latest 6 seconds of operation parameters, and the operation characteristics are calculated based on the operation parameters.
The specific classification process using the Bayesian classifier in this embodiment is as follows: acquire the operation parameter sequence collected by the operation parameter acquisition unit and generate the operation features based on the most recent operation parameters of the executed operation, where the selected operation parameters should have the same source and feature extraction steps as those used when the Bayesian classifier was constructed; then use the Bayesian classifier to obtain the probability that the operation belongs to each category via the prior probabilities, sort these probabilities, and take the operation with the maximum probability as the operator's intention when that probability is greater than a threshold.
In this way, personalized adjustment of the instrument can be provided based on the user's actions and parameters. When the equipment changes, the same function can be achieved simply by adjusting the historical training data associated with the relevant parameters.
According to an embodiment of the present invention, there is provided a manipulator comprising:
the operation parameter acquisition unit is used for acquiring handle displacement, handle position speed, handle position acceleration, displacement of a manipulator joint, speed of the manipulator joint, guide wire feeding length and guide wire tension;
an intention acquisition unit for acquiring parameters of an operator and states of a manipulator joint and a connecting wire, determining an intention of the operator based on the parameters of the operator and the states of the manipulator, and performing a predicted operation when the probability of the operator's intention is greater than a threshold;
wherein the intention is obtained by using a Bayesian classifier trained on virtual training data;
the virtual training data is obtained by data expansion based on data generated by historical operation, and the data expansion mode comprises the following steps:
constructing a generative adversarial neural network comprising a generator and a discriminator, the discriminator comprising a gesture discriminator and a parameter discriminator. The gesture discriminator judges whether the gesture corresponding to the generated parameters is consistent with the other parameters and whether unreasonable path points exist; the parameter discriminator judges whether the generated parameters, such as acceleration, movement range, and real-time tension, lie within a reasonable reference range, i.e. a range an operator could actually apply. For example, if a generated acceleration significantly exceeds a value a human could apply, the operation parameters are considered unreasonable.
Further, the generator adjusts the existing parameters according to Euclidean distance proximity and generates new operation parameter sequences to expand the originally acquired operation parameters. For example, keeping the parameters after index 1 unchanged, the 1st parameter is adjusted and the upper and lower limits within which the Euclidean distance to the current actual data stays below the threshold are calculated; repeating this operation determines the upper and lower limits of the value range for every parameter, after which virtual training data whose Euclidean distance to the actual data is below the threshold are obtained by randomly varying the data within those ranges. The new virtual training data can in turn be expanded by the same approximation method to obtain further expansion data.
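One way to read this expansion step is: derive per-coordinate bounds from the distance threshold (holding the other coordinates fixed), randomly perturb within those bounds, and keep only candidates that pass a final overall distance check. The sketch below assumes a single shared threshold for all coordinates:

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def coordinate_bounds(actual, threshold):
    """Holding all other entries fixed, a single coordinate may move at
    most `threshold` before the Euclidean distance to `actual` exceeds
    the threshold, giving per-coordinate upper and lower limits."""
    return [(x - threshold, x + threshold) for x in actual]

def expand(actual, threshold, n, seed=0):
    # Randomly vary the data inside the per-coordinate limits, keeping
    # only samples whose overall distance stays below the threshold.
    rng = random.Random(seed)
    bounds = coordinate_bounds(actual, threshold)
    out = []
    while len(out) < n:
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        if euclidean(cand, actual) < threshold:
            out.append(cand)
    return out
```

Every expanded sample is guaranteed to lie within the Euclidean threshold of the actual sequence, matching the constraint in the text.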
Labeling of the generated data types is performed using Euclidean distances between the operating parameters. For example, if intention 1 and intention 2 coexist and the current operation intention obtained by calculation of euclidean distance is closer to intention 1, intention 1 is selected as the target intention.
The virtual training data can be generated in batches: the training set of real samples is input to train the discriminator, while noise is fed to the generator to produce virtual samples; the virtual samples are then passed to the discriminator, so that the adversarial interplay of the generator and discriminator trains the generator. After training, the model is verified with the test set of real samples and saved, and virtual training data are generated from the saved model.
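The adversarial workflow can be illustrated with a deliberately tiny 1-D GAN: an affine generator, a logistic discriminator, and alternating gradient steps on the standard logistic losses. This is a stand-in for the network in the embodiment, whose architecture is not specified; all model shapes and hyperparameters here are assumptions:

```python
import math
import random

def sigmoid(x):
    # Clamp the argument so exp never overflows.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

def train_gan(real, steps=2000, lr=0.05, seed=0):
    """Minimal 1-D adversarial loop: generator g(z) = a*z + b,
    discriminator d(x) = sigmoid(w*x + c), alternating SGD updates."""
    rng = random.Random(seed)
    a, b = 1.0, 0.0  # generator parameters
    w, c = 0.1, 0.0  # discriminator parameters
    for _ in range(steps):
        x = rng.choice(real)        # one real sample
        z = rng.gauss(0.0, 1.0)     # noise input
        f = a * z + b               # fake (virtual) sample
        # Discriminator step: push d(real) toward 1, d(fake) toward 0.
        dr, df = sigmoid(w * x + c), sigmoid(w * f + c)
        w += lr * ((1 - dr) * x - df * f)
        c += lr * ((1 - dr) - df)
        # Generator step: push d(fake) toward 1.
        df = sigmoid(w * (a * z + b) + c)
        a += lr * (1 - df) * w * z
        b += lr * (1 - df) * w
    return a, b

def generate(a, b, n, seed=1):
    # Batch-generate virtual samples from the saved generator parameters.
    rng = random.Random(seed)
    return [a * rng.gauss(0.0, 1.0) + b for _ in range(n)]
```

After training, `generate` plays the role of producing virtual training data in batches from the saved model; verification against a held-out set of real samples would follow the same pattern.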
Training the Bayesian classifier based on the virtual training data is divided into three phases. In the preparation phase, the virtual training data are selected. In the classifier training phase, features are formed from the last m operation parameters, the occurrence frequency of each category in the training samples is counted, and the conditional probability estimate of each feature attribute is computed for each category and recorded. In the application phase, the Bayesian classifier is used to classify the user operation sequence to be classified. When the Bayesian classifier is constructed, the sliding data window size is set to 5, corresponding to the most recent 5 seconds of operation parameters, and the operation features are calculated from these operation parameters.
The specific classification process using the Bayesian classifier in this embodiment is as follows: acquire the operation parameter sequence collected by the operation parameter acquisition unit and generate the operation features based on the most recent operation parameters of the executed operation, where the selected operation parameters should have the same source and feature extraction steps as those used when the Bayesian classifier was constructed; then use the Bayesian classifier to obtain the probability that the operation belongs to each category via the prior probabilities, sort these probabilities, and take the operation with the maximum probability as the operator's intention when that probability is greater than a threshold.
In this way, personalized adjustment of the instrument can be provided based on the user's actions and parameters. When the equipment changes, the same function can be achieved simply by adjusting the historical training data associated with the relevant parameters.
According to an embodiment of the present invention, a control method of a manipulator for a surgical robot includes:
collecting the states of a handle, a manipulator joint and a connecting wire;
acquiring parameters of an operator and states of a manipulator joint and a connecting wire, determining the intention of the operator based on the parameters of the operator and the states of the manipulator, and executing the predicted operation when the probability of the operator's intention is greater than a threshold value;
wherein the intention is obtained by using a Bayesian classifier trained on virtual training data.
According to an embodiment of the present invention, there is provided a manipulator comprising:
the operation parameter acquisition unit is used for acquiring handle displacement, handle position speed, handle position acceleration, displacement of a manipulator joint, speed of the manipulator joint and guide wire tension;
an intention acquisition unit for acquiring parameters of an operator and states of a manipulator joint and a connecting wire, determining an intention of the operator based on the parameters of the operator and the states of the manipulator, and performing a predicted operation when the probability of the operator's intention is greater than a threshold;
wherein the intention is obtained by using a Bayesian classifier trained on virtual training data;
the virtual training data is obtained by data expansion based on data generated by historical operation, and the data expansion mode comprises the following steps:
constructing a generative adversarial neural network comprising a generator and a discriminator.
Further, random noise is introduced to generate new data, which is verified by the discriminator. Labeling of the generated data is performed using the Euclidean distances between operation parameters: when intention 1 and intention 2 exist simultaneously and the current operation intention obtained by Euclidean distance calculation is closer to intention 1, intention 1 is selected as the target intention.
The virtual training data can be generated in batches: the training set of real samples is input to train the discriminator, while noise is fed to the generator to produce virtual samples; the virtual samples are then passed to the discriminator, so that the adversarial interplay of the generator and discriminator trains the generator. After training, the model is verified with the test set of real samples and saved, and virtual training data are generated from the saved model.
The virtual training data are normalized with a Softmax function; part or all of the actual data are selected, and for each selected sample all virtual training data whose Euclidean distance to it is below the threshold are acquired; the manipulator joint position, velocity, or acceleration is then selected to compute the transition time, and data whose transition time is below 1 second are removed.
Training the Bayesian classifier based on the virtual training data is divided into three phases. In the preparation phase, the virtual training data are selected. In the classifier training phase, features are formed from the last m operation parameters, the occurrence frequency of each category in the training samples is counted, and the conditional probability estimate of each feature attribute is computed for each category and recorded. In the application phase, the Bayesian classifier is used to classify the user operation sequence to be classified. When the Bayesian classifier is constructed, the sliding data window size is set to 6, corresponding to the most recent 6 seconds of operation parameters, and the operation features are calculated from these operation parameters.
The specific classification process using the Bayesian classifier in this embodiment is as follows: acquire the operation parameter sequence collected by the operation parameter acquisition unit and generate the operation features based on the most recent operation parameters of the executed operation, where the selected operation parameters should have the same source and feature extraction steps as those used when the Bayesian classifier was constructed; then use the Bayesian classifier to obtain the probability that the operation belongs to each category via the prior probabilities, sort these probabilities, and take the operation with the maximum probability as the operator's intention when that probability is greater than a threshold. In this way, personalized adjustment of the instrument can be provided based on the user's actions and parameters. When the equipment changes, the same function can be achieved simply by adjusting the historical training data associated with the relevant parameters.
According to an embodiment of the present invention, there is provided a manipulator comprising:
the operation parameter acquisition unit is used for acquiring handle displacement, handle position speed, handle position acceleration, displacement of a manipulator joint, speed of the manipulator joint, guide wire feeding length and guide wire tension;
an intention acquisition unit for acquiring parameters of an operator and states of a manipulator joint and a connecting wire, determining an intention of the operator based on the parameters of the operator and the states of the manipulator, and performing a predicted operation when the probability of the operator's intention is greater than a threshold;
wherein the intention is obtained by using a Bayesian classifier trained on virtual training data;
the virtual training data is obtained by data expansion based on data generated by historical operation, and the data expansion mode comprises the following steps:
constructing a generative adversarial neural network comprising a generator and a discriminator, the discriminator comprising a gesture discriminator and a parameter discriminator. The gesture discriminator judges whether the gesture corresponding to the generated parameters is consistent with the other parameters and whether unreasonable path points exist; the parameter discriminator judges whether the generated parameters, such as acceleration, movement range, and real-time tension, lie within a reasonable reference range, i.e. a range an operator could actually apply. For example, if a generated acceleration significantly exceeds a value a human could apply, the operation parameters are considered unreasonable.
Further, the generator adjusts the existing parameters according to Euclidean distance proximity and generates new operation parameter sequences to expand the originally acquired operation parameters. For example, keeping the parameters after index 1 unchanged, the 1st parameter is adjusted and the upper and lower limits within which the Euclidean distance to the current actual data stays below the threshold are calculated; repeating this operation determines the upper and lower limits of the value range for every parameter, after which virtual training data whose Euclidean distance to the actual data is below the threshold are obtained by randomly varying the data within those ranges. The new virtual training data can in turn be expanded by the same approximation method to obtain further expansion data.
Labeling of the generated data types is performed using Euclidean distances between the operating parameters. For example, if intention 1 and intention 2 coexist and the current operation intention obtained by calculation of euclidean distance is closer to intention 1, intention 1 is selected as the target intention.
The virtual training data can be generated in batches: the training set of real samples is input to train the discriminator, while noise is fed to the generator to produce virtual samples; the virtual samples are then passed to the discriminator, so that the adversarial interplay of the generator and discriminator trains the generator. After training, the model is verified with the test set of real samples and saved, and virtual training data are generated from the saved model.
The virtual training data are normalized with a Softmax function; part or all of the actual data are selected, and for each selected sample all virtual training data whose Euclidean distance to it is below the threshold are acquired; the positions, velocities, or accelerations related to the manipulator joints and the guide wire are then selected to compute the transition time, and data whose manipulator-joint and guide-wire transition time is below 1 second are removed.
Training the Bayesian classifier based on the virtual training data is divided into three phases. In the preparation phase, the virtual training data are selected. In the classifier training phase, features are formed from the last m operation parameters, the occurrence frequency of each category in the training samples is counted, and the conditional probability estimate of each feature attribute is computed for each category and recorded. In the application phase, the Bayesian classifier is used to classify the user operation sequence to be classified. When the Bayesian classifier is constructed, the sliding data window size is set to 10, corresponding to the most recent 10 seconds of operation parameters, and the operation features are calculated from these operation parameters.
The specific classification process using the Bayesian classifier in this embodiment is as follows: acquire the operation parameter sequence collected by the operation parameter acquisition unit and generate the operation features based on the most recent operation parameters of the executed operation, where the selected operation parameters should have the same source and feature extraction steps as those used when the Bayesian classifier was constructed; then use the Bayesian classifier to obtain the probability that the operation belongs to each category via the prior probabilities, sort these probabilities, and take the operation with the maximum probability as the operator's intention when that probability is greater than a threshold.
In this way, personalized adjustment of the instrument can be provided based on the user's actions and parameters. When the equipment changes, the same function can be achieved simply by adjusting the historical training data associated with the relevant parameters.
Those of ordinary skill in the art will appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiment of the invention.
In addition, each functional module in the embodiment of the present invention may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes: a USB disk, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing description covers only the preferred embodiments of the present application and explains the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention referred to in this application is not limited to the specific combinations of features described above; other embodiments formed by any combination of the above features or their equivalents, without departing from the spirit of the invention, are also intended to be covered, for example technical solutions in which the above features are replaced with technical features of similar function disclosed in this application (but not limited thereto).
It should be understood that, the sequence numbers of the steps in the summary and the embodiments of the present invention do not necessarily mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention. The foregoing description of implementations of the present disclosure has been presented for purposes of illustration and description. The foregoing description is not intended to be exhaustive or to limit the disclosure to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure. The embodiments were chosen and described in order to explain the principles of the present disclosure and its practical application to enable one skilled in the art to utilize the present disclosure in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (9)

1. A manipulator for a surgical robot, comprising:
an operation parameter acquisition unit configured to acquire states of a handle, a manipulator joint, and a connecting wire;
an intention acquisition unit configured to acquire parameters of an operator and the states of the manipulator joint and the connecting wire, determine an intention of the operator based on the parameters of the operator and the states of the manipulator, and execute a predicted operation when the intention of the operator is greater than a threshold value;
wherein the intention is obtained using a Bayesian classifier trained on virtual training data.
2. The manipulator according to claim 1, wherein the information collected by the operation parameter acquisition unit comprises handle displacement, handle velocity, handle acceleration, master manipulator torque, manipulator joint displacement, manipulator joint velocity, manipulator joint acceleration, guide wire tension and/or guide wire feed length.
3. The manipulator according to claim 1, wherein the virtual training data is generated based on an adversarial neural network and random noise.
4. The manipulator according to claim 3, wherein the virtual training data is clipped based on the actual operating parameters acquired by the operation parameter acquisition unit, and virtual training data whose adjacency degree with respect to the actual operating parameters is lower than a threshold value is removed, the adjacency degree being the time required to transition from the actual operating parameters to the virtual training data.
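The generation step of claim 3 and the adjacency-based clipping of claim 4 can be sketched as follows. This is a minimal illustration, not the patented implementation: the generator weights, the `tanh` output layer, and the approximation of transition time as the largest per-channel change divided by an allowed rate (`max_rate`) are all assumptions, since the claims do not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_virtual_data(weights, bias, n_samples, noise_dim):
    """Sketch of the generator half of an adversarial network: map random
    noise to synthetic operating-parameter vectors (handle displacement,
    joint velocity, wire tension, ...)."""
    noise = rng.normal(size=(n_samples, noise_dim))
    return np.tanh(noise @ weights + bias)

def adjacency(actual, virtual, max_rate=1.0):
    """Adjacency degree per claim 4: the time needed to transition from the
    actual operating parameters to a virtual sample, approximated here as
    the largest per-channel change divided by an allowed rate of change."""
    return np.max(np.abs(virtual - actual), axis=-1) / max_rate

def clip_virtual_data(actual, virtual, threshold, max_rate=1.0):
    """Remove virtual samples whose adjacency falls below the threshold."""
    keep = adjacency(actual, virtual, max_rate) >= threshold
    return virtual[keep]
```

In practice the weights would come from adversarial training against the real operating data; here they are placeholders.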
5. The manipulator according to claim 1, wherein the virtual training data is generated by an adversarial neural network.
6. The manipulator according to claim 5, wherein the training process of the Bayesian classifier comprises:
obtaining an operation parameter sequence from the operation parameter acquisition unit or from the virtual adversarial data using a sliding data window, generating operation features based on the operation parameter sequence, collecting statistics on the operation features and the operator intentions associated with them, and determining the prior probability that an operation feature corresponds to an operator intention.
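The training step of claim 6 can be sketched as counting (feature, intention) co-occurrences over a sliding window. The feature function below (net direction of motion over the window) and the intention labels are illustrative assumptions; the patent does not define the feature mapping.

```python
from collections import Counter, defaultdict

def sliding_windows(seq, size):
    """Slide a fixed-size data window over an operation parameter sequence."""
    return [tuple(seq[i:i + size]) for i in range(len(seq) - size + 1)]

def feature_of(window):
    """Hypothetical operation feature: the net direction of motion."""
    delta = window[-1] - window[0]
    return "advance" if delta > 0 else ("retract" if delta < 0 else "hold")

def train_priors(param_seq, intents, size=3):
    """Count each (feature, intention) pair and normalise the counts to the
    prior probability that a feature corresponds to each intention."""
    counts = defaultdict(Counter)
    for window, intent in zip(sliding_windows(param_seq, size), intents):
        counts[feature_of(window)][intent] += 1
    return {f: {i: n / sum(c.values()) for i, n in c.items()}
            for f, c in counts.items()}
```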
7. The manipulator according to claim 5, wherein the virtual training data is clipped based on the actual operating parameters acquired by the operation parameter acquisition unit, and virtual training data whose adjacency degree with respect to the actual operating parameters is higher than a threshold value is retained, the adjacency degree being the time required to transition from the actual operating parameters to the virtual training data.
8. The manipulator according to claim 1, wherein the step of obtaining the intention by the Bayesian classifier comprises:
acquiring an operation parameter sequence collected by the operation parameter acquisition unit, generating an operation feature based on a plurality of the most recent operation parameters of the executed operation, and obtaining the intention of the operator using the Bayesian classifier.
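The inference step of claim 8, combined with the threshold condition of claim 1, might look like the sketch below. The `priors` table (feature → intention probabilities) and the default threshold of 0.8 are assumed for illustration.

```python
def predict_intent(priors, feature, threshold=0.8):
    """Return the most probable intention for the latest operation feature,
    but only when its probability clears the threshold; otherwise return
    None, leaving the manipulator under direct operator control."""
    dist = priors.get(feature)
    if not dist:
        return None
    intent, prob = max(dist.items(), key=lambda kv: kv[1])
    return intent if prob > threshold else None
```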
9. A method for controlling a manipulator for a surgical robot, comprising:
collecting states of a handle, a manipulator joint, and a connecting wire;
acquiring parameters of an operator and the states of the manipulator joint and the connecting wire, determining an intention of the operator based on the parameters of the operator and the states of the manipulator, and executing a predicted operation when the intention of the operator is greater than a threshold value;
wherein the intention is obtained using a Bayesian classifier trained on virtual training data.
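One cycle of the control method of claim 9 can be sketched end to end as below. The scalar parameter stream, the direction-based feature, and the threshold are all stand-ins; real operating parameters would be the multichannel states of claim 2.

```python
def control_step(sample, history, priors, window=3, threshold=0.8):
    """One control cycle: collect the latest state sample, form an operation
    feature from the most recent samples, and return the predicted operation
    to execute when its probability clears the threshold (else None)."""
    history.append(sample)
    if len(history) < window:
        return None  # not enough data for a feature yet
    recent = history[-window:]
    delta = recent[-1] - recent[0]
    feature = "advance" if delta > 0 else ("retract" if delta < 0 else "hold")
    dist = priors.get(feature, {})
    if dist:
        intent, prob = max(dist.items(), key=lambda kv: kv[1])
        if prob > threshold:
            return intent  # execute the predicted operation
    return None  # fall back to direct teleoperation
```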
CN202311657703.7A 2023-12-06 2023-12-06 Manipulator and control method thereof Active CN117338436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311657703.7A CN117338436B (en) 2023-12-06 2023-12-06 Manipulator and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311657703.7A CN117338436B (en) 2023-12-06 2023-12-06 Manipulator and control method thereof

Publications (2)

Publication Number Publication Date
CN117338436A true CN117338436A (en) 2024-01-05
CN117338436B CN117338436B (en) 2024-02-27

Family

ID=89357999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311657703.7A Active CN117338436B (en) 2023-12-06 2023-12-06 Manipulator and control method thereof

Country Status (1)

Country Link
CN (1) CN117338436B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140046128A1 (en) * 2012-08-07 2014-02-13 Samsung Electronics Co., Ltd. Surgical robot system and control method thereof
CN107097227A (en) * 2017-04-17 2017-08-29 北京航空航天大学 A kind of man-machine collaboration robot system
US20190219972A1 (en) * 2018-01-12 2019-07-18 General Electric Company System and method for context-driven predictive simulation selection and use
US20220296315A1 (en) * 2019-07-15 2022-09-22 Corindus, Inc. Data capture and adaptive guidance for robotic procedures with an elongated medical device
CN111589156A (en) * 2020-05-20 2020-08-28 北京字节跳动网络技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN115715173A (en) * 2020-06-12 2023-02-24 皇家飞利浦有限公司 Automatic selection of collaborative robot control parameters based on tool and user interaction forces
US20230339109A1 (en) * 2020-06-12 2023-10-26 Koninklijke Philips N.V. Automatic selection of collaborative robot control parameters based on tool and user interaction force
CN112276944A (en) * 2020-10-19 2021-01-29 哈尔滨理工大学 Man-machine cooperation system control method based on intention recognition
CN112766348A (en) * 2021-01-12 2021-05-07 云南电网有限责任公司电力科学研究院 Method and device for generating sample data based on antagonistic neural network
US20230226698A1 (en) * 2022-01-19 2023-07-20 Honda Motor Co., Ltd. Robot teleoperation control device, robot teleoperation control method, and storage medium
CN115157236A (en) * 2022-05-30 2022-10-11 中国航发南方工业有限公司 Robot stiffness model precision modeling method, system, medium, equipment and terminal
CN116392260A (en) * 2023-03-03 2023-07-07 中国科学院自动化研究所 Control device and method for vascular intervention operation
CN116597943A (en) * 2023-04-27 2023-08-15 华中科技大学 Forward track prediction method and equipment for instrument operation in minimally invasive surgery
CN116214522A (en) * 2023-05-05 2023-06-06 中建科技集团有限公司 Mechanical arm control method, system and related equipment based on intention recognition
CN116619369A (en) * 2023-05-29 2023-08-22 同济大学 Sharing control method based on teleoperation flexible mechanical arm and application thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Junqiang; QI Hengjia; ZHANG Gaiping; ZHAO Haiwen; GUO Shijie: "Human-robot coordinated motion control method based on force information", Computer Integrated Manufacturing Systems, no. 08, 15 August 2018 (2018-08-15), pages 123-129 *
ZHAO Haiwen; QI Hengjia; WANG Xuzhi; LI Junqiang: "Research on intention perception and control of human-robot coordinated operation based on machine learning", Machine Tool & Hydraulics, no. 10, 28 May 2019 (2019-05-28), pages 156-159 *

Also Published As

Publication number Publication date
CN117338436B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
US11642179B2 (en) Artificial intelligence guidance system for robotic surgery
CN111083922B (en) Dental image analysis method for correction diagnosis and apparatus using the same
CN110111880A (en) The Artificial Potential Field paths planning method and device based on obstacle classification of flexible needle
US20200046439A1 (en) Detection of unintentional movement of a user interface device
WO2022073342A1 (en) Surgical robot and motion error detection method and detection device therefor
Guo et al. Eye-tracking for performance evaluation and workload estimation in space telerobotic training
Qin et al. davincinet: Joint prediction of motion and surgical state in robot-assisted surgery
WO2021090870A1 (en) Instrument-to-be-used estimation device and method, and surgery assistance robot
CN114299604A (en) Two-dimensional image-based hand skeleton capturing and gesture distinguishing method
CN117338436B (en) Manipulator and control method thereof
Dai et al. Human-inspired haptic perception and control in robot-assisted milling surgery
CN113813053A (en) Operation process analysis method based on laparoscope endoscopic image
EP4191606A1 (en) Medical assistance system, medical assistance method, and computer program
JP7395125B2 (en) Determining the tip and orientation of surgical tools
US20230316545A1 (en) Surgical task data derivation from surgical video data
Lotfi et al. Surgical instrument tracking for vitreo-retinal eye surgical procedures using aras-eye dataset
WO2022014447A1 (en) Surgical assistance system and method
US20230256618A1 (en) Robotic hand system and method for controlling robotic hand
EP3868303A1 (en) Ultrasound guidance method and system
KR20180100831A (en) Method for controlling view point of surgical robot camera and apparatus using the same
CN115715173A (en) Automatic selection of collaborative robot control parameters based on tool and user interaction forces
CN116392247B (en) Operation positioning navigation method based on mixed reality technology
US20230286159A1 (en) Remote control system
US20240087715A1 (en) Surgical instrument operation monitoring using artificial intelligence
CN117297769A (en) Bone layer identification method in hard bone tissue operation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant