CN113510701B - Robot control method, robot, and computer-readable storage medium - Google Patents

Robot control method, robot, and computer-readable storage medium Download PDF

Info

Publication number
CN113510701B
CN113510701B CN202110553828.XA CN202110553828A CN113510701B CN 113510701 B CN113510701 B CN 113510701B CN 202110553828 A CN202110553828 A CN 202110553828A CN 113510701 B CN113510701 B CN 113510701B
Authority
CN
China
Prior art keywords
speed
robot
dynamic object
target dynamic
mechanical arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110553828.XA
Other languages
Chinese (zh)
Other versions
CN113510701A (en
Inventor
徐升
陈凯
欧勇盛
王志扬
江国来
熊荣
刘超
赛高乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202110553828.XA priority Critical patent/CN113510701B/en
Publication of CN113510701A publication Critical patent/CN113510701A/en
Application granted granted Critical
Publication of CN113510701B publication Critical patent/CN113510701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the technical field of robot control, and discloses a robot control method, a robot and a computer readable storage medium. The control method comprises the following steps: acquiring a first speed of a target dynamic object at the current moment; inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples; and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment. By the mode, the precision of tracking the dynamic object by the mechanical arm and the smoothness of the mechanical arm in moving can be improved.

Description

Robot control method, robot, and computer-readable storage medium
Technical Field
The present application relates to the field of robot control technologies, and in particular, to a robot control method, a robot, and a computer-readable storage medium.
Background
When the mechanical arm of the robot grabs an object, the position of the object to be grabbed needs to be known in advance, then the tail end of the mechanical arm reaches the position of the object through the change of joint angles of the connecting rods, and then a grabbing command is executed.
For a moving object, since the position information of the object obtained each time comes from the previous time, the object cannot be grasped through a set of processes of obtaining the position of the object, moving the mechanical arm and grasping. And it often will take a period of time to solve the motion of mechanical arm under cartesian space system, and the object can move to farther distance, is difficult to implement and snatchs.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a control method of a robot, the robot and a computer readable storage medium, which can improve the precision of a mechanical arm tracking a dynamic object and the smoothness of the mechanical arm in moving.
In order to solve the above problem, one technical solution adopted by the present application is to provide a control method of a robot, including: acquiring a first speed of a target dynamic object at the current moment; inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples; and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
Wherein, the method also comprises: acquiring a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process; initializing a second speed and a first speed generated in the historical movement process to generate a training sample; and inputting the training samples into a pre-established control model so as to train the control model.
The method for acquiring the first speed of the target dynamic object at the current moment comprises the following steps: acquiring a first position of a target dynamic object at a previous moment and a second position of the target dynamic object at a current moment; and obtaining a first speed of the target dynamic object according to the first position and the second position.
Wherein, input first speed to the control model that the training in advance obtained in to obtain the second speed of the robot of control model output, include: inputting the first speed into a control model obtained by pre-training so that the control model obtains a second speed by using the first speed and an error value in the control model; wherein, the error value is obtained when the control model is trained.
Wherein, the computational formula of the control model is as follows:
Figure BDA0003076410600000021
wherein,
Figure BDA0003076410600000022
represents a second velocity of the robot arm, u represents a mean value, e represents an error value,
Figure BDA0003076410600000023
representing a first speed, P, of the target dynamic object j Representing the prior of the jth gaussian model in the control model,
Figure BDA0003076410600000024
represents the covariance, Σ, of the velocity and position of the end of the robot arm in the jth gaussian model in the control model e,j And J represents a Jacobian matrix.
Wherein, according to the mechanical arm of second speed control robot to make the mechanical arm follow tracks in step to the dynamic object of target at the next moment, include: obtaining a third speed corresponding to each joint of the mechanical arm by using the second speed; and controlling the mechanical arm according to the third speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
Wherein, utilize the second speed to obtain the third speed that each joint of arm corresponds, include: the following formula is used for calculation:
Figure BDA0003076410600000025
wherein,
Figure BDA0003076410600000026
a third velocity corresponding to each joint is indicated,
Figure BDA0003076410600000027
indicating a second speed of the arm, i.e. as described above
Figure BDA0003076410600000028
J represents a Jacobian matrix, J' is X J, X is a basic matrix, and T represents a transition matrix.
Wherein, the method also comprises: and when the preset conditions are met, the mechanical arm of the robot is controlled to grab the target dynamic object.
In order to solve the above problem, another technical solution adopted by the present application is to provide a robot, including: a robot main body; a robot arm provided to the robot main body; the camera assembly is arranged on the robot main body and used for acquiring image data of a target dynamic object; a memory provided in the robot main body for storing program data; and the processor is arranged in the robot main body, is connected with the mechanical arm, the camera assembly and the memory, and is used for executing program data so as to realize the method provided by the technical scheme.
In order to solve the above problem, another technical solution adopted by the present application is to provide a computer-readable storage medium for storing program data, which when executed by a processor, is used for implementing the method provided by the above technical solution.
The beneficial effect of this application is: the robot control method, the robot, and the computer-readable storage medium according to the present application are different from the related art. The control method comprises the following steps: acquiring a first speed of a target dynamic object at the current moment; inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples; and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment. In this way, on the one hand, the control model is used for outputting the second speed corresponding to the next moment of the target dynamic object, the precision of tracking the dynamic object by the mechanical arm and the smoothness of the mechanical arm when moving can be improved, the success rate of grabbing the target dynamic object is further improved, on the other hand, the second speed of the mechanical arm is obtained by the trained control model, the complex manual parameter adjustment of the controller of the robot can be reduced, and the universality of the robot is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
fig. 1 is a schematic flowchart of an embodiment of a control method for a robot provided in the present application;
FIG. 2 is a schematic diagram of a target dynamic object provided in the present application in a previous image frame;
FIG. 3 is a schematic diagram of a target dynamic object provided by the present application at a current image frame;
FIG. 4 is a schematic flow chart of step 13 provided herein;
fig. 5 is a schematic flowchart of another embodiment of a control method of a robot provided by the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a robot provided herein;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a control method of a robot according to an embodiment of the present disclosure. The method comprises the following steps:
step 11: and acquiring a first speed of the target dynamic object at the current moment.
In some embodiments, the robot includes a camera, the target dynamic object can be photographed by the camera, and the first speed of the target dynamic object at the current time can be calculated by using the photographed image. The speed of the target dynamic object may be a uniform speed or a non-uniform speed.
In some embodiments, referring to fig. 2, step 11 may be the following flow:
step 111: and acquiring a first position of the target dynamic object at the previous moment and a second position of the target dynamic object at the current moment.
In some embodiments, a camera may be used to capture a target dynamic object, resulting in successive image frames. By extracting adjacent image frames and identifying the target dynamic object in each image frame, the distance that the target dynamic object moves can be determined.
Step 112: and obtaining a first speed of the target dynamic object according to the first position and the second position.
The description is made in conjunction with fig. 2-3:
fig. 2 is a schematic diagram of the target dynamic object in the previous image frame, and fig. 3 is a schematic diagram of the target dynamic object in the current image frame. As shown in fig. 2, the position of the target dynamic object a is position 1, and as shown in fig. 3, the position of the target dynamic object a is position 3. The first velocity of the target dynamic object a can be found from the distance between position 1 and position 3.
Step 12: inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples to train the control model.
In some embodiments, the historical motion process may be a teaching process of the robot, or may be an operation process of the robot at other times. The teaching process can control the mechanical arm of the robot by dragging or teleoperation, so that the mechanical arm and a target dynamic object in motion are kept synchronous all the time. For example, the mechanical arm and the target dynamic object are moved in the same plane and in the same direction. If expressed in terms of cartesian axes, it can be said that synchronization is maintained in the xy plane. A first velocity of the target dynamic object and a second velocity of the robot are collected during the teaching. Thus, a plurality of sets of data consisting of the first speed and the second speed are obtained. The control model is trained using these data as training samples, and the trained control model is used in step 12.
It can be understood that the second speed and the first speed in the historical movement process are collected at the same moment, which means that the second speed and the first speed in the historical movement process are generated at the same time.
In some embodiments, the control model may be established by using a gaussian mixture model, a hidden markov model, K-nearest neighbor, linear regression, a neural network, a support vector machine, or the like.
Step 13: and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
In some embodiments, referring to fig. 4, step 13 may be the following process:
step 131: and obtaining a third speed corresponding to each joint of the mechanical arm by using the second speed.
In some embodiments, the third velocity for each joint may be calculated using the jacobian matrix, the forward kinematics matrix, the transition matrix, and the second velocity for each joint.
Step 132: and controlling the mechanical arm according to the third speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
According to step 132, each joint of the robot arm reaches its corresponding third speed, and at the next time, the robot arm and the target dynamic object are at the same speed and the same position, so as to achieve synchronous tracking of the robot arm on the target dynamic object.
In the embodiment, a first speed of a target dynamic object at the current moment is obtained; inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples; and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment. In this way, on the one hand, the control model is used for outputting the second speed corresponding to the next moment of the target dynamic object, the precision of tracking the dynamic object by the mechanical arm and the smoothness of the mechanical arm when moving can be improved, the success rate of grabbing the target dynamic object is further improved, on the other hand, the second speed of the mechanical arm is obtained by the trained control model, the complex manual parameter adjustment of the controller of the robot can be reduced, and the universality of the robot is improved.
Referring to fig. 5, fig. 5 is a schematic flowchart of another embodiment of a control method of a robot provided by the present application. The method comprises the following steps:
step 51: and acquiring a first speed of the target dynamic object at the current moment.
In some embodiments, the robot includes a first camera that can capture a depth image. The distance from the camera to the target dynamic object and the coordinate point of the target dynamic object in the depth image can be identified through the depth image of the first camera. And combining the internal reference of the first camera to obtain the position of the target dynamic object under the camera coordinate system of the first camera corresponding to the first camera.
For example, the following formula can be used for calculation:
Figure BDA0003076410600000071
wherein, x' k Representing the X-axis position, y 'of the target dynamic object under the camera coordinate system of the first camera' k Representing the Y-axis position u of the target dynamic object in the camera coordinate system of the first camera k Representing the X-axis position, v, of a dynamic object of interest in an image k Representing the Y-axis position of the target dynamic object in the image, c x Representing the origin translation dimension, c, on the X-axis of the first camera y Representing the translation size of the origin on the Y axis of the first camera, d representing the distance from the first camera to the target dynamic object, f x To representFocal length on X-axis of first camera, f y Indicating the focal length on the Y-axis of the first camera.
And then, the position of the target dynamic object in the coordinate system of the robot can be obtained through the position and the distance under the camera coordinate system corresponding to the first camera.
Then, the robot further comprises a second camera, wherein the second camera is arranged at the tail end of the mechanical arm of the robot, and the position of the target dynamic object under the coordinate system of the robot is obtained. And controlling the tail end of the mechanical arm to enable the second camera to shoot the target dynamic object again.
Then, the position u of the target dynamic object is identified again through the second camera h And a central point v h The internal parameter of the second camera is c hx 、c hy 、f hx And f hy Converting the coordinates of the target dynamic object in the image into the coordinate system of the robot to obtain x' h And y' h . And the height of the second camera from the target dynamic object can be represented by the symbol h by subtracting the height between the second camera and the first camera from the height measured by the first camera.
Figure BDA0003076410600000081
Wherein, x' h Representing the X-axis position, y 'of the target dynamic object under the camera coordinate system of the second camera' h Represents the Y-axis position, Y 'of the target dynamic object in the camera coordinate system of the second camera' h Representing the X-axis position, v, of a dynamic object of interest in an image h Representing the Y-axis position of the target dynamic object in the image, c hx Representing the origin translation dimension, c, on the X-axis of the second camera hy Representing the translation size of the origin on the Y axis of the second camera, h representing the distance from the second camera to the target dynamic object, f hx Denotes the focal length in the X-axis of the second camera, f hy Indicating the focal length on the Y-axis of the second camera.
By means of the two cameras, the mechanical arm can accurately position the target dynamic object.
And then, calculating the first speed of the target dynamic object at the current moment by using the position of the target dynamic object in the current frame image acquired by the second camera and the position of the target dynamic object in the historical frame image acquired by the second camera.
For example, the following formula can be used for calculation:
Figure BDA0003076410600000082
wherein,
Figure BDA0003076410600000083
representing the velocity, Pose, of the target dynamic object relative to the robotic arm t Indicating the position of the target dynamic object in the current frame image, Pose t-1 Indicating the position of the target dynamic object in the historical frame image, Δ t c Representing the time at which the corresponding algorithm processes the data, and f representing the frequency of the second camera.
Figure BDA0003076410600000084
Wherein,
Figure BDA0003076410600000085
representing the velocity of the target dynamic object,
Figure BDA0003076410600000086
indicating the velocity of the robot arm.
It can be understood that, since the mechanical arm and the target dynamic object move synchronously, the relative speed of the target dynamic object corresponding to the mechanical arm is calculated by using the adjacent frame images, so the actual speed of the target dynamic object should be added to the speed of the mechanical arm.
Step 52: inputting the first speed into a control model obtained by pre-training so that the control model obtains a second speed by using the first speed and an error value in the control model; wherein, the error value is obtained when the control model is trained.
The control model can be obtained by utilizing a Gaussian mixture model for training, and the number of the Gaussian models in the Gaussian mixture model can be set according to actual conditions, such as 5, 3 and the like.
Wherein, the computational formula of the control model is as follows:
Figure BDA0003076410600000091
wherein,
Figure BDA0003076410600000092
represents a second velocity of the robot arm, u represents a mean value, e represents an error value,
Figure BDA0003076410600000093
representing the velocity, P, of the target dynamic object j Representing the prior of the jth gaussian model in the control model,
Figure BDA0003076410600000094
represents the covariance, Σ, of the velocity and position of the end of the robot arm in the jth gaussian model in the control model e,j And representing the covariance of the tail end of the mechanical arm in the jth Gaussian model in the control model, and J represents a Jacobian matrix.
Step 53: and obtaining a third speed corresponding to each joint of the mechanical arm by using the second speed.
In some embodiments, the following formula may be used for the calculation:
Figure BDA0003076410600000095
wherein,
Figure BDA0003076410600000096
a third velocity corresponding to each joint is indicated,
Figure BDA0003076410600000097
indicating a second speed of the arm, i.e. as described above
Figure BDA0003076410600000098
J represents a Jacobian matrix, J' is X J, X is a basic matrix, and T represents a transition matrix.
Wherein,
Figure BDA0003076410600000099
r is a 3x3 rotation matrix.
Figure BDA00030764106000000910
The elements with Z being 3x3 are all 0 matrices.
It can be understood that when the robot is a six-axis robot, the mechanical arm corresponds to 6 joint speeds.
Step 54: and controlling the mechanical arm according to the third speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
Step 55: and when the preset conditions are met, the mechanical arm of the robot is controlled to grab the target dynamic object.
In some embodiments, the preset condition may be a condition of time, a condition of position, and a condition of a joint angle of the mechanical arm.
Taking the case that the mechanical arm is above the target dynamic object, how to control the mechanical arm of the robot to grab the target dynamic object is described. Firstly, the height difference delta h between the tail end of the mechanical arm and a target dynamic object can be obtained, and a nonlinear function is used when the mechanical arm is controlled to descend:
Figure BDA0003076410600000101
the function can control the descending process of the mechanical arm from a rapid process to a slow process, so that the instability of the mechanical arm caused by descending can be prevented, the mechanical arm is buffered for a certain time before grabbing, and it can be understood that the setting of parameters is different according to different mechanical arms), and the mechanical arm can be converged smoothly.
By the mode, the time delay caused by vision can be compensated. The control model is trained to avoid complex manual parameter adjustment of the controller by collecting data of a dragging mechanical arm tracking object, so that people without control related theoretical knowledge can use the control model easily in a teaching mode.
The method is based on a Gaussian mixture model, namely a controller is added in the original position control, the control rate generated by the controller can improve the precision of tracking the dynamic object by the mechanical arm and the smoothness of the mechanical arm in moving, and further improve the success rate of grabbing the target dynamic object.
Referring to fig. 6, the robot 60 includes a robot main body 61, a robot arm 62, a camera assembly 63, a memory 64, and a processor 65. Wherein the robot arm 62 is provided to the robot main body 61; the camera assembly 63 is arranged on the robot main body 61 and used for acquiring image data of a target dynamic object; a memory 64 provided in the robot main body 61 for storing program data; the processor 65 is disposed on the robot main body 61, and is connected to the robot arm 62, the camera assembly 63 and the memory 64, for executing program data to implement the following method:
acquiring a first speed of a target dynamic object at the current moment; inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples; and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
It can be understood that the processor 65 in this embodiment may also implement the method of any of the above embodiments, and the specific implementation steps thereof may refer to the above embodiments, which are not described herein again.
The camera assembly 63 includes a first camera (not shown) and a second camera (not shown), the first camera is disposed on the robot main body 61, and is configured to collect image data of the target dynamic object and can collect a global image; the second camera is disposed at the end of the mechanical arm 62 and is used for acquiring image data of the target dynamic object. It can be understood that the accuracy of the target dynamic object collected by the second camera is higher than that collected by the first camera.
Referring to fig. 7, the computer readable storage medium 70 is used for storing program data 71, and the program data 71, when executed by the processor, is used for implementing the method provided by the above technical solution.
Acquiring a first speed of a target dynamic object at the current moment; inputting the first speed into a control model obtained by pre-training, and obtaining a second speed of the robot output by the control model, wherein the second speed corresponds to the speed of the target dynamic object at the next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in the historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples; and controlling the mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
It is understood that the computer-readable storage medium 70 in this embodiment is applied to the robot 60, and specific implementation steps thereof may refer to the above embodiments, which are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units in the other embodiments described above may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (8)

1. A control method of a robot, characterized by comprising:
acquiring a first speed of a target dynamic object at the current moment;
inputting the first speed into a control model obtained by pre-training so that the control model obtains a second speed by using the first speed and an error value in the control model; wherein the error value is obtained when the control model is trained, and the second speed corresponds to a speed of the target dynamic object at a next moment; the control model is obtained by training the control model by taking a second speed generated by the robot in a historical movement process and a first speed generated by the target dynamic object in the historical movement process as training samples;
controlling a mechanical arm of the robot according to the second speed so that the mechanical arm can synchronously track the target dynamic object at the next moment;
wherein, the calculation formula of the control model is as follows:
Figure FDA0003708816180000011
wherein,
Figure FDA0003708816180000012
representing a second velocity of the robot arm, u representing a mean value, e representing the error value,
Figure FDA0003708816180000013
representing a first velocity, P, of said target dynamic object j Representing the prior of the jth gaussian model in the control model,
Figure FDA0003708816180000014
represents the covariance, Σ, of the velocity and position of the end of the robot arm in the jth gaussian model in the control model e,j Representing the covariance of the tail end of the mechanical arm in the jth Gaussian model in the control model, and J representing a Jacobian matrix.
2. The control method according to claim 1,
the method further comprises the following steps:
acquiring a second speed generated by the robot in a historical movement process and a first speed generated by the target dynamic object in the historical movement process;
initializing the second speed and the first speed generated in the historical movement process to generate a training sample;
inputting the training samples into a pre-established control model to train the control model.
3. The control method according to claim 1,
the acquiring of the first speed of the target dynamic object at the current moment includes:
acquiring a first position of the target dynamic object at a previous moment and a second position of the target dynamic object at a current moment;
and obtaining a first speed of the target dynamic object according to the first position and the second position.
4. The control method according to claim 1,
the controlling a mechanical arm of the robot according to the second speed so that the mechanical arm synchronously tracks the target dynamic object at the next moment comprises:
obtaining a third speed corresponding to each joint of the mechanical arm by using the second speed;
and controlling the mechanical arm according to the third speed so that the mechanical arm can synchronously track the target dynamic object at the next moment.
5. The control method according to claim 4,
the obtaining of the third speed corresponding to each joint of the mechanical arm by using the second speed includes:
the following formula is used for calculation:
Figure FDA0003708816180000021
wherein,
Figure FDA0003708816180000022
a third velocity corresponding to each joint is indicated,
Figure FDA0003708816180000023
indicating a second speed of the arm, i.e. as described above
Figure FDA0003708816180000024
J represents a Jacobian matrix, J' is X J, X is a basic matrix, and T represents a transition matrix.
6. The control method according to claim 1,
the method further comprises the following steps:
and when a preset condition is met, controlling a mechanical arm of the robot to grab the target dynamic object.
7. A robot, characterized in that the robot comprises:
a robot main body;
a robot arm provided to the robot main body;
the camera assembly is arranged on the robot main body and used for acquiring image data of a target dynamic object;
a memory provided in the robot main body for storing program data;
a processor disposed in the robot body and coupled to the robotic arm, camera assembly and the memory for executing the program data to implement the method of any of claims 1-6.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium is used for storing program data, which, when being executed by a processor, is used for carrying out the method according to any one of claims 1-6.
CN202110553828.XA 2021-05-20 2021-05-20 Robot control method, robot, and computer-readable storage medium Active CN113510701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110553828.XA CN113510701B (en) 2021-05-20 2021-05-20 Robot control method, robot, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110553828.XA CN113510701B (en) 2021-05-20 2021-05-20 Robot control method, robot, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN113510701A CN113510701A (en) 2021-10-19
CN113510701B true CN113510701B (en) 2022-08-09

Family

ID=78064788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110553828.XA Active CN113510701B (en) 2021-05-20 2021-05-20 Robot control method, robot, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113510701B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0732277A (en) * 1993-07-16 1995-02-03 Toshiba Corp Control device of robot
CN108674922A (en) * 2018-05-16 2018-10-19 广州视源电子科技股份有限公司 Conveyor belt synchronous tracking method, device and system for robot
WO2019037498A1 (en) * 2017-08-25 2019-02-28 腾讯科技(深圳)有限公司 Active tracking method, device and system
CN109483541A (en) * 2018-11-22 2019-03-19 浙江大学 A kind of mobile object grasping means based on decomposition rate planning algorithm
CN111452039A (en) * 2020-03-16 2020-07-28 华中科技大学 Robot posture adjusting method and device under dynamic system, electronic equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0732277A (en) * 1993-07-16 1995-02-03 Toshiba Corp Control device of robot
WO2019037498A1 (en) * 2017-08-25 2019-02-28 腾讯科技(深圳)有限公司 Active tracking method, device and system
CN108674922A (en) * 2018-05-16 2018-10-19 广州视源电子科技股份有限公司 Conveyor belt synchronous tracking method, device and system for robot
CN109483541A (en) * 2018-11-22 2019-03-19 浙江大学 A kind of mobile object grasping means based on decomposition rate planning algorithm
CN111452039A (en) * 2020-03-16 2020-07-28 华中科技大学 Robot posture adjusting method and device under dynamic system, electronic equipment and medium

Also Published As

Publication number Publication date
CN113510701A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
EP3414710B1 (en) Deep machine learning methods and apparatus for robotic grasping
CN111055279A (en) Multi-mode object grabbing method and system based on combination of touch sense and vision
CN113814986B (en) Method and system for controlling SCARA robot based on machine vision
CN111805547B (en) Method for realizing dynamic tracking of track
CN110744541A (en) Vision-guided underwater mechanical arm control method
CN111872934A (en) Mechanical arm control method and system based on hidden semi-Markov model
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
Skoglund et al. Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives
CN115480583B (en) Visual servo tracking and impedance control method for flying operation robot
CN114770461B (en) Mobile robot based on monocular vision and automatic grabbing method thereof
CN112734823A (en) Jacobian matrix depth estimation method based on visual servo of image
Xie et al. Visual tracking control of SCARA robot system based on deep learning and Kalman prediction method
Han et al. Visual servoing control of robotics with a neural network estimator based on spectral adaptive law
CN113510701B (en) Robot control method, robot, and computer-readable storage medium
Huang et al. A novel robotic grasping method for moving objects based on multi-agent deep reinforcement learning
CN116423520A (en) Mechanical arm track planning method based on vision and dynamic motion primitives
Long et al. Robotic cutting of soft materials using force control & image moments
CN116834014A (en) Intelligent cooperative control method and system for capturing non-cooperative targets by space dobby robot
CN113681560B (en) Method for operating articulated object by mechanical arm based on vision fusion
Zhou et al. Visual servo control system of 2-DOF parallel robot
Lepora et al. Pose-based servo control with soft tactile sensing
Kawagoshi et al. Visual servoing using virtual space for both learning and task execution
CN111413995A (en) Method and system for tracking relative position and synchronously controlling posture between double rigid body characteristic points
Hung et al. An approach to learn hand movements for robot actions from human demonstrations
Xu et al. A fast and straightforward hand-eye calibration method using stereo camera

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant