CN118081837A - Robot, training device, robot apparatus, training system, and training method thereof

Robot, training device, robot apparatus, training system, and training method thereof

Info

Publication number
CN118081837A
Authority
CN
China
Prior art keywords: robot, training, execution, learning model, motion vector
Legal status: Pending
Application number
CN202410435637.7A
Other languages
Chinese (zh)
Inventor
李子军
伍健鹏
刘向平
Current Assignee
Shenzhen Xiubao Robot Technology Co., Ltd.
Original Assignee
Shenzhen Xiubao Robot Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Xiubao Robot Technology Co., Ltd.
Priority to CN202410435637.7A
Publication of CN118081837A

Classifications

  • Manipulator (AREA)

Abstract

The embodiment of the application discloses a robot, a training device, a robot apparatus, a training system, and a training method thereof. The robot comprises a robot main frame, two execution manipulators, a main camera, at least two auxiliary cameras, and a vector detection unit. The two execution manipulators are arranged on the robot main frame; each comprises a mechanical arm connected with the robot main frame and an execution end connected with the end of the mechanical arm. The main camera is arranged on the robot main frame and faces the two execution manipulators. The two auxiliary cameras are respectively arranged on the two execution manipulators and face the execution ends. The vector detection unit is used for detecting the real motion vector of each joint of the execution manipulators. The main camera and the auxiliary cameras capture image information of the robot, and the vector detection unit detects the real motion vectors of the execution manipulators. These data may be input into a robot learning model, which is trained to obtain the target robot learning model.

Description

Robot, training device, robot apparatus, training system, and training method thereof
Technical Field
The embodiment of the application relates to the technical field of robots, and in particular to a robot, a training device, a robot apparatus, a training system, and a training method thereof.
Background
With the rapid development of artificial intelligence, robots have also advanced rapidly and are now widely used in many fields; in material handling in particular, robots can save a great deal of labor.
However, most commercial robots support only PLC control programs and lack mobility and dexterity. A small number of robots do use neural network models in their application programs, but without a good training method they still face many problems in practice; in particular, as additional degrees of freedom increase, small deviations in the basic posture may cause large drifts in the pose of the arm's end effector.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a robot, a training device, a robot apparatus, a training system, and a training method thereof. The robot can be used to train a robot learning model and, in practical applications, can effectively complete preset work.
In a first aspect, an embodiment of the present application provides a robot for robot learning, including:
A robot main frame;
the two execution manipulators, arranged on the left and right sides of the robot main frame, each execution manipulator comprising a mechanical arm and an execution end, wherein the mechanical arm is connected with the robot main frame, and the execution end is connected with the end of the mechanical arm and is used for executing grabbing actions;
the main camera, arranged on the upper portion of the robot main frame and facing the two execution manipulators, the main camera being used for globally capturing the action changes of the two execution manipulators;
the at least two auxiliary cameras, respectively arranged on the two execution manipulators and facing the corresponding execution ends, the auxiliary cameras being used for capturing, in close focus, the action changes of the execution ends in the environment;
and the vector detection unit, used for detecting the real motion vector of each joint of the execution manipulators.
According to some embodiments of the invention, the mechanical arm comprises an upper arm and a forearm, wherein one end of the upper arm is movably connected with the robot main frame, one end of the forearm is movably connected with the other end of the upper arm, the execution end is movably connected with the other end of the forearm, and the auxiliary camera is arranged at the end of the forearm close to the execution end.
According to some embodiments of the invention, the robot further comprises a first movable chassis; the robot main frame is arranged on the first movable chassis, and the first movable chassis is used for linear and/or rotational movement. The main camera is further used for capturing the position changes of the execution manipulators in the environment, and the vector detection unit is further used for detecting the real motion vector of the first movable chassis.
According to some embodiments of the invention, the first movable chassis is an AGV navigation vehicle, and the vector detection unit is configured to detect an angular velocity vector and a linear velocity vector of the first movable chassis.
According to some embodiments of the invention, the robot has a connection portion for detachable connection with the training device.
According to some embodiments of the invention, the robot further comprises a control unit, which is communicatively connected with the training device and receives operation signals from the training device to control the actions of the two execution manipulators.
In a second aspect, an embodiment of the present application provides a training device for training a robot, where the training device includes:
a training frame;
two training manipulators, identical in structure to the execution manipulators, arranged on the left and right sides of the training frame and in one-to-one communication connection with the execution manipulators; wherein the training manipulators have the same parameter settings as the execution manipulators, and the training manipulators can be operated to control the execution manipulators to move synchronously.
According to some embodiments of the invention, the training frame is detachably connected to the robot; or, the training frame is arranged separately from the robot.
According to some embodiments of the invention, the training device further comprises a second movable chassis, the training frame being arranged on the second movable chassis, the second movable chassis being for linear and/or rotational movement; wherein the second movable chassis is for communication connection with the first movable chassis, the second movable chassis being operable to operate the first movable chassis to move synchronously.
In a third aspect, an embodiment of the present application provides a robot apparatus, including:
the robot described above; and
the training device described above.
In a fourth aspect, an embodiment of the present application provides a robot training method, applied to the above robot or the above robot apparatus, including:
acquiring training sample data of the robot, wherein the training sample data comprises image information captured by the auxiliary cameras and the main camera and real motion vectors detected by the vector detection unit, the real motion vectors being used to characterize the result of the image information;
acquiring a pre-constructed robot learning model;
And inputting the training sample data into the robot learning model to obtain a trained target robot learning model.
According to some embodiments of the invention, the image information includes the gesture actions and position movements of the two execution manipulators in the environment;
The real motion vector comprises gesture motion vectors of joints of the two execution manipulators and position motion vectors of the two execution manipulators in the environment.
According to some embodiments of the invention, the position motion vector comprises a linear velocity and/or an angular velocity.
According to some embodiments of the invention, the inputting the training sample data into the robot learning model to obtain a trained target robot learning model includes:
Inputting the image information captured by the auxiliary camera and the main camera into a robot learning model, and obtaining a predicted motion vector output by the robot learning model;
And evaluating the approaching condition between the predicted motion vector and the real motion vector, and adjusting the parameters of the robot learning model until a target robot learning model is obtained.
According to some embodiments of the invention, evaluating the proximity between the predicted motion vector and the real motion vector and adjusting the parameters of the robot learning model until a target robot learning model is obtained includes:
Judging whether the relation between the predicted motion vector and the real motion vector meets a convergence condition or not;
If yes, stopping training the robot learning model to obtain a target robot learning model;
If not, training the robot learning model according to a preset loss function, acquiring the predicted motion vector output by the trained robot learning model, and performing again the judgment of whether the relation between the predicted motion vector and the real motion vector meets the convergence condition.
According to some embodiments of the invention, before the acquiring training sample data of the robot, the method includes:
Acquiring operation information of a user operating the two training manipulators of the training device;
And controlling the two execution manipulators to synchronously move with the two training manipulators of the training device based on the operation information.
According to some embodiments of the invention, the robotic learning model is a neural network model, a classification model, or a regression model.
In a fifth aspect, an embodiment of the present application provides a training system for a robot, applied to the above robot or robot apparatus, the training system comprising:
a data acquisition unit for acquiring training sample data of the robot, wherein the training sample data comprises image information captured by the auxiliary cameras and the main camera and real motion vectors detected by the vector detection unit, the real motion vectors being used to characterize the result of the image information;
and the processor is used for acquiring a pre-constructed robot learning model and inputting the training sample data into the robot learning model to obtain a trained target robot learning model.
From the above technical solutions, the embodiments of the present application have the following advantages: the main camera is arranged on the upper portion of the robot main frame and faces the two execution manipulators, which are arranged on the left and right sides of the robot main frame, and the two auxiliary cameras are respectively arranged on the two mechanical arms and face the corresponding execution ends. During the training of the robot, for example while grabbing or placing material, the main camera has a wide field of view and can perceive the surrounding environment, so it can globally capture the action changes of the two execution manipulators in the environment; each auxiliary camera is close to its corresponding execution end, so it can capture in close focus the fine actions of that execution end in the environment; and the vector detection unit detects the real motion vectors of the joints of the execution manipulators. The robot can thus capture its motion changes in the environment, i.e., the image information, comprehensively and accurately with the main camera and the auxiliary cameras, while the vector detection unit detects the real motion vectors of the joints, the real motion vectors being used to characterize the result of the image information. These data may be input into a robot learning model, and the model trained to obtain the desired target robot learning model. Because the robot's application program adopts a neural network model, even when facing many joint movements or multidimensional movements, the robot can effectively execute actions under the control of the target robot learning model, and thus accurately complete the grabbing or placing of material.
Drawings
The invention is further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic view of the overall structure of a robot in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the overall structure of a robot and a training apparatus according to an embodiment of the present invention;
Fig. 3 is a flow chart of a training method according to an embodiment of the invention.
Reference numerals:
10. robot; 100. robot main frame; 200. execution manipulator; 210. mechanical arm; 211. upper arm; 212. forearm; 220. execution end; 300. auxiliary camera; 400. main camera; 500. first movable chassis; 510. connection portion; 600. training device; 610. training frame; 620. training manipulator.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, the meaning of a number is one or more, the meaning of a number is two or more, and greater than, less than, exceeding, etc. are understood to exclude the present number, and the meaning of a number is understood to include the present number. The description of the first and second is for the purpose of distinguishing between technical features only and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated or implicitly indicating the precedence of the technical features indicated.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
In the description of the present invention, the descriptions of the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The method provided by the embodiments of the application may be executed by a terminal device, including but not limited to: smart phones, tablet computers, notebook computers, desktop computers, and the like.
Alternatively, the method may be executed by a chip or a chip system embedded in the terminal device.
Alternatively, the method may be executed by a server, including but not limited to: an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
The application discloses a robot 10 applied to robot learning, referring to fig. 1, comprising a robot main frame 100, two execution manipulators 200, a main camera 400, at least two auxiliary cameras 300 and a vector detection unit, wherein:
the two execution manipulators 200 are disposed on the left and right sides of the robot main frame 100, wherein the execution manipulators 200 include a manipulator 210 and an execution end 220, the manipulator 210 is connected with the robot main frame 100, and the execution end 220 is connected with the end of the manipulator 210 for executing the grabbing action.
The main camera 400 is arranged at the upper portion of the robot main frame 100 and faces the two execution manipulators 200; the main camera 400 is used for globally capturing the action changes of the two execution manipulators 200. One or more main cameras 400 may be provided, and the present application is not limited in this respect.
The at least two auxiliary cameras 300 are respectively arranged on the two mechanical arms 210 and are arranged towards the corresponding execution ends 220, and the auxiliary cameras 300 are used for focusing and capturing action changes of the execution ends 220 in the environment.
The vector detection unit is used for detecting the real motion vector of each joint of the execution manipulators 200.
The action changes refer to the posture changes of the execution manipulators and execution ends 220 in the environment as they complete the grabbing or placing of material. The vector detection unit may be understood as a signal detection module, such as an encoder, for detecting the motion change at each joint position of the execution manipulators.
It can be understood that the main camera 400 is arranged at the upper portion of the robot main frame 100 and faces the two execution manipulators 200, which are arranged on the left and right sides of the robot main frame 100, and the two auxiliary cameras 300 are respectively arranged on the two mechanical arms 210 and face the corresponding execution ends 220. In this way, during the training of the robot 10, for example the grabbing or placing of material, the main camera 400 has a wide field of view and can perceive the surrounding environment, so the action changes of the two execution manipulators 200 in the environment can be captured globally; each auxiliary camera 300 is close to and faces its corresponding execution end 220, so the fine actions of the execution end 220 in the environment can be captured in close focus; and the vector detection unit detects the real motion vectors of the joints of the execution manipulators 200.
Therefore, the robot 10 of the present application can capture its motion changes in the environment, i.e., the image information, comprehensively and accurately with the main camera 400 and the auxiliary cameras 300, and can detect the real motion vectors of the joints of the execution manipulators 200 with the vector detection unit, the real motion vectors being used to characterize the result of the image information. These data may be input into a robot learning model, and the model trained to obtain the desired target robot learning model. Because the application program of the robot 10 adopts a neural network model, even when facing many joint movements or multidimensional movements, the robot 10 can effectively execute actions under the control of the target robot learning model, and thus accurately complete the grabbing or placing of material.
In addition, the robot 10 mainly adopts the execution manipulators 200, the main camera 400, and the auxiliary cameras 300, so its overall structure is simple, the training process for the robot learning model is simple, and the application cost of the robot 10 is low, making it suitable for popularization in industry and in homes, for example as a nursing robot.
In some embodiments, referring to fig. 1, the mechanical arm 210 includes an upper arm 211 and a forearm 212. One end of the upper arm 211 is movably connected to the robot main frame 100, for example through a universal joint or a rotating shaft; one end of the forearm 212 is movably connected to the other end of the upper arm 211, for example through a universal joint or a rotating shaft; the base of the execution end 220 is movably connected to the other end of the forearm 212, for example through a universal joint or a rotating shaft; and the auxiliary camera 300 is arranged at the end of the forearm 212 close to the execution end 220, facing the execution end 220.
It will be appreciated that the upper arm 211 forms a movable joint with the robot main frame 100, the upper arm 211 forms a movable joint with the forearm 212, and the forearm 212 forms a movable joint with the execution end 220. The mechanical arm 210 adopts the above structural form, and the flexibility of the mechanical arm 210 is better, so that the execution end 220 can be controlled at multiple angles and multiple positions, and the execution end 220 can flexibly grasp or place the material. In addition, the whole structure of the mechanical arm 210 is simple, and when each joint position of the mechanical arm 210 moves, the main camera 400 can capture the movement change of each joint position, so that the integrity of the input image information is ensured, and further, the robot learning model can be effectively trained.
Moreover, the structural form of the mechanical arm 210 is basically consistent with that of a human arm, so that when the execution manipulator 200 is trained, the action of the mechanical arm 210 can be driven by the action of a worker's arm, ensuring the accuracy and authenticity of the execution manipulator 200's motion.
In addition, the auxiliary camera 300 is arranged at the end of the forearm 212 close to the execution end 220, so it can capture in close focus the action changes of the execution end 220, ensuring the integrity and accuracy of the image information and, in turn, that the robot learning model can be trained effectively. The auxiliary camera 300 can also capture the posture changes between the forearm 212 and the execution end 220, supplementing the action changes at positions the main camera 400 may not capture clearly, and thereby ensuring the integrity of the image information.
In some embodiments, the robot 10 further includes a first movable chassis 500. The bottom of the robot main frame 100 is disposed on the first movable chassis 500, and the first movable chassis 500 is used for linear motion and/or rotational motion; that is, the first movable chassis 500 can drive the robot 10 to rotate to adjust its orientation, drive it to move linearly to adjust its overall position, or do both.
The main camera 400 is further configured to capture the position changes of the execution manipulators 200 in the environment while the first movable chassis 500 moves the robot 10; that is, the main camera 400 captures the motion changes of the execution manipulators 200, and the auxiliary cameras 300 capture the motion changes of the execution ends 220, during the movement of the robot 10. The vector detection unit is further configured to detect the real motion vector of the first movable chassis 500, such as its angular velocity and/or linear velocity. In this way, the acquired image information and real motion vectors can train the robot learning model more comprehensively; correspondingly, when the obtained target robot learning model is applied to the robot 10, it can control the robot 10 to complete grabbing or placing actions while in motion, giving the robot 10 stronger industrial applicability.
In a specific embodiment, referring to fig. 1, the first movable chassis 500 is an AGV navigation vehicle on which the robot main frame 100 is disposed. The moving speed of the AGV navigation vehicle is up to 1.6 m/s, its maximum payload is up to 100 kg, and its minimum ground clearance is about 30 mm; it can drive the robot 10 to move linearly and to rotate. The vector detection unit is used for detecting the angular velocity vector and the linear velocity vector of the AGV navigation vehicle, which are collectively referred to as its motion vector.
Specifically, each execution manipulator 200 has 7 degrees of freedom, so the two execution manipulators 200 together have 14 degrees of freedom (though the number is not limited to 14), which can be understood as a 14-dimensional motion vector. The first movable chassis 500 contributes a 2-dimensional motion vector (the combination of angular velocity and linear velocity), so the overall motion posture of the robot 10 constitutes a real 16-dimensional motion vector; that is, the vector detection unit detects and acquires a real 16-dimensional motion vector. This vector covers the precise position of each joint of the robot 10 and also reflects the movement state of the chassis, thereby completely determining the instantaneous pose of the robot 10. The image information captured by the main camera 400 and the auxiliary cameras 300 can be mapped to a 16-dimensional motion vector, so after the image information is input into the initial neural network model, the 16-dimensional motion vector output by the robot learning model is compared with the real 16-dimensional motion vector, and the neural network model is trained according to the difference between the two until the target robot learning model is obtained.
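The composition of this 16-dimensional real motion vector can be illustrated with a short sketch (not part of the patent text). The functions read_joint_encoders and read_chassis_velocity are hypothetical placeholders for the vector detection unit's encoder and chassis interfaces:

```python
import numpy as np

def read_joint_encoders(arm: str) -> np.ndarray:
    # Hypothetical encoder read-out: the 7 joint angles (rad) of one
    # execution manipulator; zeros stand in for real hardware values.
    return np.zeros(7)

def read_chassis_velocity() -> tuple:
    # Hypothetical chassis read-out: (linear velocity m/s, angular velocity rad/s).
    return 0.0, 0.0

def real_motion_vector() -> np.ndarray:
    """Assemble the 16-dimensional real motion vector:
    7 joints per arm x 2 arms + 2 chassis velocity components."""
    left = read_joint_encoders("left")    # 7-dim posture motion vector
    right = read_joint_encoders("right")  # 7-dim posture motion vector
    v, w = read_chassis_velocity()        # 2-dim position motion vector
    vec = np.concatenate([left, right, [v, w]])
    assert vec.shape == (16,)
    return vec
```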
In some embodiments, referring to fig. 1 and 2, the robot 10 has a connection portion 510, which may be disposed on the robot main frame 100 or on top of the first movable chassis 500. The connection portion 510 is detachably connected to the training device 600; for example, the connection portion 510 is fastened to the training device 600 by a fastener or fixed by bolts, which is not limited in the present application.
It will be appreciated that because the robot 10 is connected to the training device 600 and the first movable chassis 500 is disposed at the bottom of the robot 10, a worker can push the robot 10 while operating the training device 600, thereby conveniently realizing movement training of the robot 10. After the robot 10 has been trained, the worker can detach the training device 600, and the robot 10 can be put into industrial use independently.
In some embodiments, the robot 10 further comprises a control unit, which is communicatively connected to the training device 600, for receiving operation signals of the training device 600 to control the two performing manipulators 200 to perform actions. Specifically, during the training process of the robot 10, a worker sends an operation signal to the robot 10 through the training device 600, and after receiving the operation signal, the control unit of the robot 10 controls the execution manipulator 200 to perform actions so as to complete grabbing or placing of the material, thereby performing exercise training on the robot 10.
The application also discloses a training device 600 for the robot 10. Referring to figs. 1 and 2, the training device 600 includes a training frame 610 and two training manipulators 620. The training frame 610 may take various structural forms and may be configured according to actual needs; for example, it may be structured so that a worker can conveniently push the robot 10. The two training manipulators 620 are identical in structure and parameter settings to the two execution manipulators 200. They are disposed on the left and right sides of the training frame 610 and are in one-to-one communication connection with the execution manipulators 200, for example by wireless or wired communication; operating a training manipulator 620 controls the corresponding execution manipulator 200 to move synchronously.
In a specific training process, the worker's arms are bound to the mechanical arms 210 of the training manipulators 620, and the worker's palms are bound to the execution ends 220 of the training manipulators 620. During the training of the robot 10, the worker controls the two training manipulators 620 to act, and the two execution manipulators 200 move synchronously with the two training manipulators 620, completing the grabbing or placing of material and thereby training the execution manipulators 200. It can be appreciated that this design lets the worker truly control the motion of the robot 10 with two-handed actions, ensuring the accuracy of the image information and motion vectors and thus helping to train the robot learning model better. A sketch of this leader-follower coupling follows.
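As an illustration (not part of the patent text), the synchronous motion between a training manipulator 620 and its execution manipulator 200 can be sketched as a fixed-rate mirroring loop; read_training_joints and command_execution_joints are hypothetical stand-ins for the communication connection described above:

```python
import time

def read_training_joints() -> list:
    # Hypothetical: the 7 joint angles (rad) of one training manipulator,
    # as moved by the worker's bound arm; zeros stand in for real values.
    return [0.0] * 7

def command_execution_joints(angles: list) -> None:
    # Hypothetical: send joint targets to the corresponding execution
    # manipulator over the wireless or wired communication connection.
    pass

def mirror_loop(rate_hz: float = 100.0, duration_s: float = 1.0) -> None:
    """Leader-follower teleoperation: the execution manipulator follows
    the training manipulator joint-for-joint at a fixed control rate."""
    period = 1.0 / rate_hz
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        command_execution_joints(read_training_joints())
        time.sleep(period)
```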
In one possible embodiment, the training frame 610 is removably coupled to the robot 10. Specifically, the robot 10 has a connection portion 510, which may be disposed on the robot main frame 100 or on top of the first movable chassis 500. The connection portion 510 is detachably connected to the training device 600; for example, it is fastened by a fastener or fixed by bolts, which is not limited in the present application.
It will be appreciated that because the robot 10 is connected to the training device 600 and the first movable chassis 500 is provided at the bottom of the robot 10, a worker can push the robot 10 while operating the training device 600, thereby conveniently performing movement training on the robot 10. After the robot 10 has been trained, the worker removes the training device 600, and the robot 10 can be put into industrial use independently.
In another possible embodiment, the training frame 610 is provided separately from the robot 10 (not shown in the figure), more precisely, the training device 600 is provided separately from the robot 10. During the training of the robot 10, a worker can remotely control the robot 10 to perform actions by using the training device 600, thereby training the robot 10.
In some embodiments, the training device 600 further comprises a second movable chassis (not shown in the figures) on which the training frame 610 is disposed. The second movable chassis is used for linear and/or rotational movement and is identical in structure and parameter settings to the first movable chassis 500. The second movable chassis is communicatively coupled to the first movable chassis 500 and can be operated to make the first movable chassis 500 move synchronously. Specifically, if the training device 600 is arranged separately from the robot 10, the worker pushes the second movable chassis via the training frame 610 when using the training device 600, and the first movable chassis 500 moves synchronously with it; in this way the robot 10 receives movement training, i.e., the training device 600 can train the execution manipulators 200 while the robot is in motion.
The application discloses a robot device comprising the robot 10 and the training device 600.
In a specific training process, the worker's arms are bound to the mechanical arms 210 of the training manipulators 620, and the worker's palms are bound to the execution ends 220 of the training manipulators 620. During the training of the robot 10, the worker controls the two training manipulators 620 to act, and the two execution manipulators 200 move synchronously with the two training manipulators 620, completing the grabbing or placing of material.
Therefore, during the operation of the two execution manipulators 200, the motion changes of the robot 10 in the environment, i.e., the image information, are captured comprehensively and accurately by the main camera 400 and the auxiliary cameras 300, and the real motion vectors of the joints of the execution manipulators 200 are detected by the vector detection unit, the real motion vectors being used to characterize the result of the image information. These data may be input into a robot learning model, and the model trained to obtain the desired target robot learning model. Because the application program of the robot 10 adopts a neural network model, even when facing many movable joints or multidimensional movements, the robot 10 can effectively execute actions under the control of the target robot learning model, and thus accurately complete the grabbing or placing of material.
Referring to fig. 3, a flow chart of a training method of a robot 10 according to an embodiment of the present application is applied to the robot 10 or the robot device, and the training method includes S101 to S103.
S101 acquires training sample data of the robot 10, wherein the training sample data comprises image information captured by the auxiliary cameras 300 and the main camera 400 and real motion vectors detected by the vector detection unit, the real motion vectors being used to characterize the result of the image information.
If the first movable chassis 500 is disposed at the bottom of the robot 10, the image information includes the gesture actions and position movements of the two execution manipulators 200 in the environment. Correspondingly, the real motion vector includes the gesture motion vectors of the joints of the two execution manipulators 200 and the position motion vectors of the two execution manipulators 200 in the environment. For example, when the first movable chassis 500 is an AGV navigation vehicle that can rotate and move linearly, the position motion vector includes a linear velocity and an angular velocity; with other types of first movable chassis 500, the position motion vector may include only a linear velocity or only an angular velocity, and the chassis is selected according to actual needs.
Of course, if the bottom of the robot 10 is not provided with the first movable chassis 500, the image information may include only the gesture motion of the two execution robots 200 in the environment, and correspondingly, the real motion vector also includes only the gesture motion vectors of the respective joints of the two execution robots 200.
Specifically, during the training of the robot 10, for example the grabbing or placing of material, the main camera 400 has a wide field of view and can perceive the surrounding environment, so the action changes of the two execution manipulators 200 in the environment can be captured globally; each auxiliary camera 300 is close to and faces its corresponding execution end 220, so the fine actions of the execution end 220 in the environment can be captured in close focus. The main camera 400 and the auxiliary cameras 300 thus capture the motion changes of the robot 10 in the environment, i.e., the image information, comprehensively and accurately, and the vector detection unit detects the real motion vectors of the joints of the execution manipulators 200.
For example, each execution manipulator 200 has 7 degrees of freedom, so the two execution manipulators 200 together have 14 degrees of freedom (though the number is not limited to 14), which can be understood as a 14-dimensional motion vector. The first movable chassis 500 contributes a 2-dimensional motion vector (the combination of angular velocity and linear velocity), so the robot 10 as a whole constitutes a real 16-dimensional motion vector; that is, the vector detection unit detects and acquires a real 16-dimensional motion vector. This vector covers the precise position of each joint of the robot 10 and also reflects the movement state of the chassis, thereby completely determining the instantaneous pose of the robot 10. The image information captured by the main camera 400 and the auxiliary cameras 300 is mapped to a 16-dimensional motion vector, i.e., the real motion vector is used to characterize the result of the image information.
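One way to realize the mapping from camera images to a 16-dimensional motion vector is a convolutional network with a 16-unit regression head. The patent does not fix an architecture, so the following PyTorch sketch is only an illustrative assumption (the three camera views are stacked along the channel axis):

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Maps stacked camera images (main view + two auxiliary views)
    to a predicted 16-dimensional motion vector."""
    def __init__(self, in_channels: int = 9):  # 3 RGB views x 3 channels
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, 16)  # 14 joint dims + linear/angular velocity

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(images))

# Example: a batch of 4 stacked frames at 224x224 pixels
pred = PoseRegressor()(torch.randn(4, 9, 224, 224))
print(pred.shape)  # torch.Size([4, 16])
```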
S102 acquires a pre-constructed robot learning model.
Wherein a machine learning model is an algorithm or model that predicts and makes decisions by learning rules and patterns from data. The robot learning model refers to a machine learning model applied to the field of robots.
In robot learning, there are several common models and algorithms, including but not limited to the following:
Supervised learning model: supervised learning is a method of training a model using labeled training data. In robot learning, a supervised learning model may be used for tasks such as target detection, object recognition, and pose estimation.
Reinforcement learning model: reinforcement learning is a method of learning optimal behavior by interacting with an environment. In robot learning, a reinforcement learning model may be used for tasks such as path planning and motion control.
Unsupervised learning model: unsupervised learning is a method of finding hidden structures and patterns in unlabeled data. In robot learning, an unsupervised learning model may be used for tasks such as clustering and dimensionality reduction.
Deep learning model: deep learning is a machine learning method based on neural networks that learns complex feature representations through a multi-level network structure. In robot learning, a deep learning model may be used for tasks such as image recognition, voice recognition, and autonomous navigation.
Transfer learning model: transfer learning is a method of transferring learned knowledge and models to new tasks. In robot learning, a transfer learning model may be used to improve the learning effect and adaptability of the robot 10 in a new environment.
S103 inputs the training sample data into the robot learning model to obtain a trained target robot learning model.
The training sample data comprises the real motion vector and the image information, the real motion vector being used to characterize the result of the image information. For example, the real motion vector is a 16-dimensional motion vector (though not limited to 16 dimensions) comprising the angular velocity and linear velocity of the first movable chassis and the motion variables of the two execution manipulators 200.
Specifically, after the image information and the real motion vector are acquired, they are input into the robot learning model to train it. The angles of the joints of the two execution manipulators 200 are then adjusted and the movement of the first movable chassis 500 is controlled; the newly obtained image information and real motion vector are input into the robot learning model again, and the model is trained cyclically in this way until the 16-dimensional motion vector output by the robot learning model can complete the preset action, thereby obtaining the desired target robot learning model.
In summary, according to steps S101 to S103, the robot 10 of the present application can capture its motion changes in the environment, i.e., the image information, comprehensively and accurately with the main camera 400 and the auxiliary cameras 300, and can detect the real motion vectors of the joints of the execution manipulators 200 with the vector detection unit, the real motion vectors being used to characterize the result of the image information. These data may be input into a robot learning model, and the model trained to obtain the desired target robot learning model. Because the application program of the robot 10 adopts a neural network model, even when facing many joint movements or multidimensional movements, the robot 10 can effectively execute actions under the control of the target robot learning model, and thus accurately complete the grabbing or placing of material.
In some embodiments, step S103 inputs training sample data into the robotic learning model to obtain a trained target robotic learning model, including A1-A2.
A1, inputting the image information captured by the auxiliary camera 300 and the main camera 400 into a robot learning model, and obtaining a predicted motion vector output by the robot learning model.
A2, evaluating the approaching condition between the predicted motion vector and the real motion vector, and adjusting parameters of the robot learning model until a target robot learning model is obtained.
Specifically, after the image information and the real motion vector are obtained, the image information is input into the robot learning model, which outputs a 16-dimensional motion vector, i.e., the predicted motion vector; this output vector characterizes the angular velocity and linear velocity variables of the first movable chassis and the motion variables of the two execution manipulators 200. The difference between the output 16-dimensional motion vector and the real motion vector is compared, and the parameters of the robot learning model are adjusted to train it. The worker then adjusts the angles of the joints of the two execution manipulators 200 and controls the first movable chassis 500 to move; the auxiliary cameras 300 and the main camera 400 capture the corresponding image information, which is input into the robot learning model to obtain a new output 16-dimensional motion vector. The difference between this output and the real motion vector is compared again, and the parameters of the robot learning model are adjusted again. Training proceeds cyclically in this way until the 16-dimensional motion vector output by the robot learning model matches the real motion vector; at that point, the robot learning model is the trained target robot learning model. The sketch below illustrates such a loop.
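Steps A1 and A2 amount to a standard supervised regression loop. The following is a minimal sketch, assuming the hypothetical PoseRegressor from the earlier example and a data loader yielding (images, real 16-dimensional vector) pairs; the optimizer and convergence threshold are illustrative choices, not settings fixed by the patent:

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, target_error: float = 1e-3,
          max_epochs: int = 10000) -> nn.Module:
    """Adjust model parameters until the predicted motion vectors are
    close enough to the real motion vectors (the convergence condition)."""
    loss_fn = nn.MSELoss()                      # preset loss function
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(max_epochs):
        total = 0.0
        for images, real_vec in loader:
            pred_vec = model(images)            # A1: predicted motion vector
            loss = loss_fn(pred_vec, real_vec)  # A2: proximity to real vector
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if total / len(loader) < target_error:  # B1/B2: convergence, stop
            break
    return model                                # target robot learning model
```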
In a further embodiment, step A2, evaluating the proximity between the predicted motion vector and the real motion vector and adjusting the parameters of the robot learning model until a target robot learning model is obtained, comprises steps B1 to B3.
B1 judges whether the relation between the predicted motion vector and the real motion vector meets a convergence condition.
And B2, if so, stopping training the robot learning model to obtain a target robot learning model.
And B3, if not, training the robot learning model according to a preset loss function, acquiring the predicted motion vector output by the trained robot learning model, and performing again the judgment of whether the relation between the predicted motion vector and the real motion vector meets the convergence condition.
The training sample data can be input into the pre-constructed robot learning model in batches; the model extracts features from the sample data and outputs a corresponding result. The difference between the output motion vector and the corresponding real motion vector is then compared, the model parameters of the pre-constructed robot learning model are adjusted according to this difference and a preset loss function, and the next batch of sample data is input for training; this process repeats until the pre-constructed robot learning model meets the convergence condition, yielding the trained target robot learning model. For example, in the training of a back propagation (BP) network, the learning algorithm may be the steepest descent BP (Steepest Descent Backpropagation, SDBP) algorithm: the output of each neuron is computed forward from the first layer of the network, the influence of each model parameter on the total error is computed backward from the last layer, and the model parameters are adjusted continuously through back propagation to minimize the sum of squared errors of the network, thereby achieving convergence. It should be noted that other learning algorithms may be used, such as the LM (Levenberg-Marquardt) algorithm, the momentum BP (Momentum Backpropagation, MOBP) algorithm, the variable learning rate BP (Variable Learning Rate Backpropagation, VLBP) algorithm, the resilient BP (Resilient Backpropagation, RPROP) algorithm, gradient algorithms, quasi-Newton algorithms, and the like; the error function used for adjustment may be a mean square error function or the like; the number of hidden layer nodes is set to 50; the maximum number of training iterations is 10000; the target error is 0.001; and the minimum gradient is 1e-6.
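The hyperparameters listed above (50 hidden nodes, at most 10000 iterations, target error 0.001, minimum gradient 1e-6) can be written out concretely. This sketch of steepest-descent BP with a mean square error function follows those settings, but the layer sizes and learning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 50), nn.Sigmoid(), nn.Linear(50, 16))
loss_fn = nn.MSELoss()                           # mean square error function
opt = torch.optim.SGD(net.parameters(), lr=0.1)  # steepest-descent BP (SDBP)

x, y = torch.randn(32, 16), torch.randn(32, 16)  # stand-in training batch

for step in range(10000):                        # maximum training iterations
    loss = loss_fn(net(x), y)
    opt.zero_grad()
    loss.backward()
    grad_max = max(p.grad.abs().max().item() for p in net.parameters())
    opt.step()
    if loss.item() < 0.001 or grad_max < 1e-6:   # target error / minimum gradient
        break
```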
In some embodiments, before the training sample data of the robot 10 is acquired in step S101, the method includes steps C1 and C2.
C1 acquires operation information of a user operating the two training manipulators 620 of the training device 600.
C2 controls the two execution robots 200 to move in synchronization with the two training robots 620 of the training apparatus 600 based on the operation information.
Specifically, in connection with the training device 600 described above, the worker's arms are bound to the mechanical arms 210 of the training manipulators 620, and the worker's palms are bound to the execution ends 220 of the training manipulators 620. During the training of the robot 10, the worker controls the two training manipulators 620 to act and controls the robot 10 to move; the two execution manipulators 200 move synchronously with the two training manipulators 620 and complete the grabbing or placing of material during the movement. During the operation of the two execution manipulators 200, the motion changes of the robot 10 in the environment, i.e., the image information, are captured comprehensively and accurately by the main camera 400 and the auxiliary cameras 300, and the real motion vectors of the joints of the execution manipulators 200 are detected by the vector detection unit, the real motion vectors being used to characterize the result of the image information. These data may be input into a robot learning model, and the model trained to obtain the desired target robot learning model. Because the application program of the robot 10 adopts a neural network model, even when facing many movable joints or multidimensional movements, the robot 10 can effectively execute actions under the control of the target robot learning model, and thus accurately complete the grabbing or placing of material.
In some embodiments, the robotic learning model is a neural network model, a classification model, or a regression model. It will be appreciated that the robotic learning model may employ the above-described model, and that the robotic learning model may effectively predict the motion of the robot 10 to complete the gripping or placement of the workpiece when applied.
In some embodiments, the present application discloses a training system for the robot 10, comprising:
The data acquisition unit is configured to acquire training sample data of the robot 10, where the training sample data includes image information captured by the auxiliary camera 300 and the main camera 400 and a real motion vector detected by the vector detection unit, and the real motion vector is used to characterize a result of the image information.
And the processor is used for acquiring a pre-constructed robot learning model and inputting training sample data into the robot learning model to obtain a trained target robot learning model.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied, in essence or in the part contributing to the prior art, in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (18)

1. A robot for robotic learning, comprising:
A robot main frame;
two execution manipulators arranged on the left and right sides of the robot main frame, each execution manipulator comprising a mechanical arm and an execution end, wherein the mechanical arm is connected with the robot main frame, and the execution end is connected with the end of the mechanical arm and is used for executing grabbing actions;
a main camera arranged on the upper portion of the robot main frame and facing the two execution manipulators, the main camera being used for globally capturing the action changes of the two execution manipulators;
at least two auxiliary cameras respectively arranged on the two execution manipulators and facing the corresponding execution ends, the auxiliary cameras being used for capturing, in close focus, the action changes of the execution ends in the environment;
and a vector detection unit for detecting the real motion vector of each joint of the execution manipulators.
2. The robot for robot learning of claim 1, wherein the mechanical arm comprises an upper arm and a forearm, wherein one end of the upper arm is movably connected with the robot main frame, one end of the forearm is movably connected with the other end of the upper arm, the execution end is movably connected with the other end of the forearm, and the auxiliary camera is arranged at the end of the forearm close to the execution end.
3. A robot for use in robotic learning as claimed in claim 1, further comprising a first movable chassis, said robot main frame being disposed on said first movable chassis, said first movable chassis being adapted for linear and/or rotational movement; the main camera is further used for capturing the position change of the execution manipulator in the environment, and the vector detection unit is further used for detecting the real motion vector of the first movable chassis.
4. A robot for robotic learning as claimed in claim 3, wherein the first movable chassis is an AGV, and the vector detection unit is configured to detect an angular velocity vector and a linear velocity vector of the first movable chassis.
5. A robot for use in robotic learning as claimed in claim 3, wherein the robot has a connection portion for detachable connection with the training device.
6. A robot for use in robotic learning as claimed in claim 1, further comprising a control unit, the control unit being communicatively connected with the training device and receiving operation signals from the training device to control the actions of the two execution manipulators.
7. A training device for a robot, characterized in that it is used to train the robot according to any one of claims 1 to 6, the training device comprising:
a training frame;
two training manipulators, identical in structure to the execution manipulators, arranged on the left and right sides of the training frame and in one-to-one communication connection with the execution manipulators; wherein the training manipulators have the same parameter settings as the execution manipulators, and the training manipulators can be operated to control the execution manipulators to move synchronously.
8. The training device for a robot according to claim 7, wherein the training frame is detachably connected with the robot, or the training frame is arranged separately from the robot.
9. The training device for a robot according to claim 8, further comprising a second movable chassis, wherein the training frame is arranged on the second movable chassis, and the second movable chassis is adapted for linear and/or rotational movement; the second movable chassis is communicatively connected with the first movable chassis, and the second movable chassis can be operated to drive the first movable chassis to move synchronously.
10. A robotic device, comprising:
the robot of any one of claims 1 to 6; and
the training device of any one of claims 7 to 9.
11. A robot training method, applied to the robot of any one of claims 1 to 6 or the robotic device of claim 10, the method comprising:
acquiring training sample data of the robot, wherein the training sample data comprises image information captured by the auxiliary cameras and the main camera and real motion vectors detected by the vector detection unit, the real motion vectors characterizing the result corresponding to the image information;
acquiring a pre-constructed robot learning model; and
inputting the training sample data into the robot learning model for training, to obtain a trained target robot learning model.
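The three steps of claim 11 amount to standard supervised learning: collect (image, real motion vector) pairs, load a pre-constructed model, and fit the one to the other. The sketch below assumes a PyTorch-style model; every identifier is hypothetical, and the claim mandates none of these choices.

    import torch
    import torch.nn as nn

    def train_target_model(model: nn.Module, samples, epochs: int = 10) -> nn.Module:
        """samples: iterable of (images, real_motion_vector) tensor pairs."""
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for images, real_vectors in samples:
                predicted = model(images)                # predicted motion vector
                loss = loss_fn(predicted, real_vectors)  # proximity to the real motion vector
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return model  # the trained target robot learning model

Mean squared error is used here only as one plausible measure of the proximity evaluated in claims 14 and 15; any differentiable loss would fit the claimed scheme.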
12. The robot training method according to claim 11, wherein the image information includes the posture actions of the two execution manipulators and their position movements in the environment; and
the real motion vectors include the posture motion vectors of the joints of the two execution manipulators and the position motion vectors of the two execution manipulators in the environment.
13. The robot training method according to claim 12, wherein the position motion vector comprises a linear velocity and/or an angular velocity.
14. The robot training method according to claim 11, wherein inputting the training sample data into the robot learning model to obtain a trained target robot learning model comprises:
inputting the image information captured by the auxiliary cameras and the main camera into the robot learning model, and obtaining a predicted motion vector output by the robot learning model; and
evaluating the proximity between the predicted motion vector and the real motion vector, and adjusting the parameters of the robot learning model until the target robot learning model is obtained.
15. The robot training method according to claim 14, wherein evaluating the proximity between the predicted motion vector and the real motion vector and adjusting the parameters of the robot learning model until the target robot learning model is obtained comprises:
judging whether the relationship between the predicted motion vector and the real motion vector meets a convergence condition;
if yes, stopping training the robot learning model to obtain the target robot learning model; and
if not, training the robot learning model according to a preset loss function, obtaining a predicted motion vector output by the trained robot learning model, and performing again the step of judging whether the relationship between the predicted motion vector and the real motion vector meets the convergence condition.
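Claim 15 replaces a fixed training budget with an explicit convergence test on the gap between the predicted and real motion vectors. Below is a sketch of that control flow, reusing the PyTorch-style conventions of the earlier sketch; the tolerance and step cap are illustrative assumptions, not claimed values.

    def train_until_converged(model, samples, loss_fn, optimizer,
                              tol: float = 1e-3, max_steps: int = 100_000):
        """Stop as soon as the predicted/real gap meets the convergence condition."""
        for step, (images, real_vectors) in enumerate(samples):
            if step >= max_steps:
                break
            predicted = model(images)
            loss = loss_fn(predicted, real_vectors)
            if loss.item() < tol:   # convergence condition satisfied
                break               # stop training: target model obtained
            optimizer.zero_grad()
            loss.backward()         # train per the preset loss function
            optimizer.step()
        return model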
16. The robot training method according to claim 11, wherein before acquiring the training sample data of the robot, the method further comprises:
acquiring operation information generated by a user operating the two training manipulators of the training device; and
controlling, based on the operation information, the two execution manipulators to move synchronously with the two training manipulators of the training device.
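The leader-follower behavior of claim 16 is, at its core, joint-space command forwarding from the training manipulators to the execution manipulators. In the hypothetical sketch below, training_arm and execution_arm stand in for hardware interfaces the claim does not specify.

    import time

    def mirror_loop(training_arm, execution_arm, rate_hz: float = 100.0) -> None:
        """Forward the operator's joint positions from leader to follower until interrupted."""
        period = 1.0 / rate_hz
        while True:
            joints = training_arm.read_joint_positions()   # acquire operation information
            execution_arm.command_joint_positions(joints)  # drive synchronous motion
            time.sleep(period)

One such loop would run per arm, so that each of the two execution manipulators tracks its one-to-one connected training manipulator.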
17. The robot training method of claim 11, wherein the robot learning model is a neural network model, a classification model, or a regression model.
18. A robot training system, applied to the robot of any one of claims 1 to 6 or the robotic device of claim 10, the training system comprising:
a data acquisition unit, configured to acquire training sample data of the robot, wherein the training sample data comprises image information captured by the auxiliary cameras and the main camera and real motion vectors detected by the vector detection unit, the real motion vectors characterizing the result corresponding to the image information; and
a processor, configured to acquire a pre-constructed robot learning model and to input the training sample data into the robot learning model to obtain a trained target robot learning model.
CN202410435637.7A 2024-04-11 2024-04-11 Robot, training device, robot equipment, training system and training method of training system Pending CN118081837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410435637.7A CN118081837A (en) 2024-04-11 2024-04-11 Robot, training device, robot equipment, training system and training method of training system

Publications (1)

Publication Number Publication Date
CN118081837A 2024-05-28

Family

ID=91150820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410435637.7A Pending CN118081837A (en) 2024-04-11 2024-04-11 Robot, training device, robot equipment, training system and training method of training system

Country Status (1)

Country Link
CN (1) CN118081837A (en)

Similar Documents

Publication Publication Date Title
US20180272529A1 (en) Apparatus and methods for haptic training of robots
Bagnell et al. An integrated system for autonomous robotics manipulation
US20170106542A1 (en) Robot and method of controlling thereof
Sayour et al. Autonomous robotic manipulation: real‐time, deep‐learning approach for grasping of unknown objects
US11413748B2 (en) System and method of direct teaching a robot
JP2022542241A (en) Systems and methods for augmenting visual output from robotic devices
Skoglund et al. Programming-by-Demonstration of reaching motions—A next-state-planner approach
CN114516060A (en) Apparatus and method for controlling a robotic device
Aleotti et al. Position teaching of a robot arm by demonstration with a wearable input device
Deng et al. Human-like posture correction for seven-degree-of-freedom robotic arm
Mathur et al. A review of pick and place operation using computer vision and ros
dos Santos et al. A neural autonomous robotic manipulator with three degrees of freedom
Hersch et al. Learning dynamical system modulation for constrained reaching tasks
CN118081837A (en) Robot, training device, robot equipment, training system and training method of training system
Hansen et al. Transferring human manipulation knowledge to robots with inverse reinforcement learning
WO2022209924A1 (en) Robot remote operation control device, robot remote operation control system, robot remote operation control method, and program
CA3157774A1 (en) Robots, tele-operation systems, and methods of operating the same
Kim et al. Deep learning-based smith predictor design for a remote grasping control system
CN115319764A (en) Robot based on multi-mode fusion in complex limited environment and operation method
JP2021061014A (en) Learning device, learning method, learning model, detector, and gripping system
Jin et al. Shared Control With Efficient Subgoal Identification and Adjustment for Human–Robot Collaborative Tasks
CN111015676A (en) Grabbing learning control method and system based on hands-free eye calibration, robot and medium
Felip et al. Tombatossals: A humanoid torso for autonomous sensor-based tasks
Chen et al. Robotic pick-and-handover maneuvers with camera-based intelligent object detection and impedance control
Maeda et al. Lighting-and occlusion-robust view-based teaching/playback for model-free robot programming

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination