CN117103248A - Robot control method and device, storage medium and robot - Google Patents

Robot control method and device, storage medium and robot

Info

Publication number
CN117103248A
CN117103248A (application CN202310937340.6A)
Authority
CN
China
Prior art keywords
control
target
robot
control information
digital twin
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310937340.6A
Other languages
Chinese (zh)
Inventor
Wang Qiulin (王秋林)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority to CN202310937340.6A priority Critical patent/CN117103248A/en
Publication of CN117103248A publication Critical patent/CN117103248A/en
Pending legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1628: Programme controls characterised by the control loop
    • B25J13/00: Controls for manipulators
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The disclosure relates to a robot control method and device, a storage medium, and a robot, in the field of terminal control. The method comprises: acquiring control information corresponding to the current task scene, where the control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control; controlling a digital twin model of the robot to execute a target action in a virtual scene according to the control information; and controlling the robot to execute the target action according to the motion information captured while the digital twin model executes the target action. By combining forward and inverse kinematics, the method controls the robot more flexibly and accurately.

Description

Robot control method and device, storage medium and robot
Technical Field
The disclosure relates to the technical field of terminals, and in particular to a robot control method and device, a storage medium, and a robot.
Background
With the rapid development of artificial intelligence, robots are increasingly used in daily life. Robot kinematics comprises forward kinematics and inverse kinematics: forward kinematics computes the position and posture of the robot's end effector from given joint variables, while inverse kinematics computes the joint variables from a known end-effector position and posture. However, it is difficult to derive full-arm control data from an end-effector target position using forward kinematics alone, and difficult to effectively control the pose of a single joint using inverse kinematics alone, so neither approach by itself allows flexible control of the robot.
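For illustration, the forward/inverse distinction can be sketched with a planar two-link arm (a simplified example for exposition only, not part of the claimed method; link lengths and angles are arbitrary):

```python
import math

def forward_kinematics(l1, l2, t1, t2):
    """Forward kinematics: given joint angles t1, t2 (radians), compute the end position (x, y)."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse_kinematics(l1, l2, x, y):
    """Inverse kinematics: given a reachable target (x, y), compute one (t1, t2) solution."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))  # one elbow branch
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

# Round-trip check: FK of the IK solution reproduces the target position.
x, y = forward_kinematics(1.0, 1.0, 0.3, 0.5)
t1, t2 = inverse_kinematics(1.0, 1.0, x, y)
x2, y2 = forward_kinematics(1.0, 1.0, t1, t2)
assert abs(x - x2) < 1e-9 and abs(y - y2) < 1e-9
```

For a real multi-joint arm the inverse problem generally has many solutions (or none), which is why the disclosure delegates it to a solver rather than a closed form.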
Disclosure of Invention
The disclosure aims to provide a robot control method and device, a storage medium, and a robot, so as to improve the flexibility of robot control.
According to a first aspect of embodiments of the present disclosure, there is provided a control method of a robot, the method including:
acquiring control information corresponding to a current task scene, wherein the control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control;
controlling a digital twin model of the robot to execute a target action in a virtual scene according to the control information;
and controlling the robot to execute the target action according to motion information captured while the digital twin model executes the target action.
Optionally, the obtaining the control information corresponding to the current task scenario includes:
acquiring task categories corresponding to the task scenes;
acquiring environment information of the task scene;
and determining the control information according to the task category and the environment information.
Optionally, the controlling the digital twin model of the robot to perform the target action in the virtual scene according to the control information includes:
and controlling the digital twin model to execute the target action in the virtual scene according to the control sequence, the first control information and the second control information.
Optionally, the first control information includes a first target angle of a first target joint of the digital twin model; the second control information comprises a target position of a target control point of the digital twin model; the controlling the digital twin model to execute the target action in the virtual scene according to the control sequence, the first control information and the second control information comprises:
according to the control sequence and the first control information, controlling the first target joint to rotate to the first target angle in the virtual scene;
and controlling the target control point to move to the target position in the virtual scene according to the control sequence and the second control information.
Optionally, controlling the target control point to move to the target position in the virtual scene according to the control sequence and the second control information includes:
determining a second target angle of a second target joint of the digital twin model according to the target position;
and controlling the second target joint to rotate to the second target angle so as to enable the target control point to reach the target position.
Optionally, the motion information includes angle information for each joint of the digital twin model; the controlling the robot to execute the target action according to the motion information when the digital twin model executes the target action comprises:
acquiring angle information of each joint when the digital twin model executes the target action according to a preset frequency;
and controlling the robot according to the angle information so as to enable the robot to execute the target action.
According to a second aspect of embodiments of the present disclosure, there is provided a control device of a robot, the device including:
the acquisition module is configured to acquire control information corresponding to a current task scene, wherein the control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control;
the first control module is configured to control a digital twin model of the robot to execute a target action in a virtual scene according to the control information;
and the second control module is configured to control the robot to execute the target action according to the motion information when the digital twin model executes the target action.
Optionally, the acquisition module is configured to:
acquiring task categories corresponding to the task scenes;
acquiring environment information of the task scene;
and determining the control information according to the task category and the environment information.
Optionally, the first control module is configured to:
and controlling the digital twin model to execute the target action in the virtual scene according to the control sequence, the first control information and the second control information.
Optionally, the first control information includes a first target angle of a first target joint of the digital twin model; the second control information comprises a target position of a target control point of the digital twin model; the first control module includes:
the first control sub-module is configured to control the first target joint to rotate to the first target angle in a virtual scene according to the control sequence and the first control information;
and the second control sub-module is configured to control the target control point to move to the target position in the virtual scene according to the control sequence and the second control information.
Optionally, the second control sub-module is configured to:
determining a second target angle of a second target joint of the digital twin model according to the target position;
and controlling the second target joint to rotate to the second target angle so as to enable the target control point to reach the target position.
Optionally, the motion information includes angle information for each joint of the digital twin model; the second control module includes:
the acquisition sub-module is configured to acquire the angle information of each joint of the digital twin model when the target action is executed according to a preset frequency;
and a third control sub-module configured to control the robot according to the angle information so that the robot performs the target action.
According to a third aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a robot including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
Through the above technical scheme, control information corresponding to the current task scene is first obtained, where the control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control. The digital twin model of the robot is then controlled to execute the target action in the virtual scene according to the control information, and the robot is controlled to execute the target action according to the motion information captured while the digital twin model executes the target action. By combining forward kinematics and inverse kinematics, the method controls the robot more flexibly and accurately.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a flow chart illustrating a method of controlling a robot according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating another method of controlling a robot according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating another method of controlling a robot according to an exemplary embodiment;
FIG. 4 is a flow chart illustrating another method of controlling a robot according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating a control device for a robot according to an exemplary embodiment;
FIG. 6 is a block diagram of another control device of a robot according to an exemplary embodiment;
FIG. 7 is a block diagram of another control device of a robot according to an exemplary embodiment;
FIG. 8 is a block diagram of a robot according to an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
Before introducing the robot control method, device, storage medium and robot of the embodiments of the disclosure, the application scenario involved is first described. In the present disclosure, the robot may run UE5 (Unreal Engine 5), in which a digital twin model of the robot can be built. The digital twin model has a skinned skeleton system, an animation blueprint using the robot's skeleton can be built, and control over the robot is achieved by controlling the digital twin model. Motion control of a robot arm generally refers to driving the arm from its current state to a target state. The target state may be described either by the joint state of the arm (i.e., the angle of each joint) or by the three-dimensional coordinates of the position of the arm's end. When the joint state describes the target state, a TransformBone node can be used in UE5 to implement motion control of each joint of the arm (i.e., forward kinematics control); when the end position describes the target state, a FullBodyIK node can be used in UE5 to implement motion control of each joint of the arm (i.e., inverse kinematics control).
Fig. 1 is a flowchart illustrating a control method of a robot according to an exemplary embodiment, and as shown in fig. 1, the method may include:
step 101, obtaining control information corresponding to a current task scene, wherein the control information comprises a control sequence of forward kinematics control and reverse kinematics control, and first control information corresponding to the forward kinematics control and second control information corresponding to the reverse kinematics control.
For example, when the robot performs a task, there may be multiple task scenes, such as "advance to a target position", "bypass the obstacle ahead", or "grab the apple on the table ahead", and the robot may acquire the control information corresponding to the current task scene. The control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control. That is, the robot may be controlled by combining forward kinematics control and inverse kinematics control, so that it performs the related actions more flexibly.
In some embodiments, the control order of forward kinematics control and inverse kinematics control may be preset for each task scenario. For example, only forward kinematics control may be used for "advance to a target position"; only inverse kinematics control may be used for "bypass the obstacle ahead"; and for "grab the apple on the table ahead", forward kinematics control may be used first and then inverse kinematics control. After the current task scenario is determined, the control sequence corresponding to that task scenario can be acquired.
In other embodiments, the first control information may include a first target angle corresponding to a first target joint of the digital twin model, and the second control information may include a target position corresponding to a target control point of the digital twin model. After determining the task scenario, the first control information and the second control information may be determined according to the task category and the environment information corresponding to the task scenario.
And 102, controlling a digital twin model of the robot to execute a target action in the virtual scene according to the control information.
For example, the digital twin model may be controlled to perform the target action in a virtual scene according to the control sequence, the first control information, and the second control information, where the virtual scene can be understood as a scene built in UE5. Taking a control sequence of forward kinematics control followed by inverse kinematics control as an example, the forward kinematics control is executed first according to the first control information, adjusting the first target joint of the digital twin model to the first target angle; the inverse kinematics control is then executed according to the second control information, moving the target control point of the digital twin model to the target position. For example, if the first target joint is each joint of the left arm, the first target angle is 0 degrees, the target control point is the end of the left arm, and the target position is position A, each joint of the left arm can first be rotated to 0 degrees via TransformBone, and the end of the left arm can then be moved to position A via FullBodyIK.
Taking a control sequence of inverse kinematics control followed by forward kinematics control as an example, the inverse kinematics control is performed first according to the second control information, moving the target control point of the digital twin model to the target position; the forward kinematics control is then performed according to the first control information, adjusting the first target joint of the digital twin model to the first target angle. Under the same setting as above, the end of the left arm can first be moved to position A via FullBodyIK, and each joint of the left arm can then be rotated to 0 degrees via TransformBone.
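The two orderings just described can be sketched as a small dispatcher that applies each control step to the twin model's joint state in the configured sequence (a minimal illustration; the step names, state representation, and mock solver are assumptions for exposition, not the UE5 API):

```python
def apply_fk(joints, fk_info):
    # Forward kinematics control: directly set each named joint to its target angle.
    joints.update(fk_info)

def apply_ik(joints, ik_info, solver):
    # Inverse kinematics control: solve the target position into joint angles, then apply them.
    joints.update(solver(ik_info["target_position"]))

def execute(control_order, fk_info, ik_info, solver):
    """Run FK and IK steps in the order given by control_order, e.g. ["fk", "ik"]."""
    joints = {}
    steps = {"fk": lambda: apply_fk(joints, fk_info),
             "ik": lambda: apply_ik(joints, ik_info, solver)}
    for step in control_order:
        steps[step]()
    return joints

# FK first (rotate left-arm joints to 0), then IK moves the arm end toward position A.
mock_solver = lambda pos: {"left_shoulder": 30.0, "left_elbow": 45.0}  # stand-in for FullBodyIK
result = execute(["fk", "ik"],
                 {"left_shoulder": 0.0, "left_elbow": 0.0},
                 {"target_position": (0.4, 0.2, 1.1)},
                 mock_solver)
assert result == {"left_shoulder": 30.0, "left_elbow": 45.0}
```

Reversing the order simply changes which step's angles are applied last, which is the behavioral difference between the two embodiments above.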
And step 103, controlling the robot to execute the target action according to the motion information when the digital twin model executes the target action.
For example, while the digital twin model performs the target action, its motion information may be sampled at a preset frequency. The motion information may include the angle of each joint, yielding a sequence of angle samples for every joint over the course of the action. The robot can then be driven through these collected samples in order, so that it executes the same target action.
In summary, the disclosure first obtains control information corresponding to the current task scene, where the control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control. The digital twin model of the robot is then controlled to execute the target action in the virtual scene according to the control information, and the robot is controlled to execute the target action according to the motion information captured while the digital twin model executes the target action. By combining forward kinematics and inverse kinematics, the method controls the robot more flexibly and accurately.
Fig. 2 is a flowchart illustrating another control method of a robot according to an exemplary embodiment, and as shown in fig. 2, step 101 may be implemented by:
step 1011, obtaining a task category corresponding to the task scene.
Step 1012, obtaining environmental information of the task scene.
In step 1013, control information is determined according to the task category and the environmental information.
For example, while the robot executes a task, the task category corresponding to the current task scene may be acquired first: if the current task scene is "grab the apple on the table ahead", the category may be a grasping task; if it is "bypass the obstacle ahead", the category may be an obstacle-avoidance task; and if it is "advance to a target position", the category may be a travel task.
In some embodiments, environment information of the current task scene may also be obtained. For example, if the current task scene is "grab the apple on the table ahead", the position coordinates of the apple can be obtained as environment information by a computer vision algorithm; if it is "bypass the obstacle ahead", the position and outline of the obstacle can be obtained as environment information; and if it is "advance to a target position", the distance from the robot to the target position can be used as environment information.
In other embodiments, the control information for the robot may be determined according to the task category and the environment information. For example, a control order of forward kinematics control and inverse kinematics control may be preset for each task category, giving a preset correspondence between task categories and control orders. After the task category corresponding to the current task scene is acquired, the control order is determined from this correspondence, and the first control information corresponding to the forward kinematics control and the second control information corresponding to the inverse kinematics control are then obtained from the control order and the environment information through a preset control algorithm.
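The category-to-order lookup described above can be sketched as a simple table plus a builder function (the category names, preset angles, and `object_position` key are illustrative assumptions, not values from the disclosure):

```python
# Preset correspondence between task category and control order (illustrative).
CONTROL_ORDER = {
    "travel": ["fk"],
    "obstacle_avoidance": ["ik"],
    "grasp": ["fk", "ik"],
}

def determine_control_info(task_category, environment_info):
    """Combine the preset control order with scene-specific environment information."""
    order = CONTROL_ORDER[task_category]
    info = {"order": order}
    if "fk" in order:
        # First control information: target angles for the target joints (placeholder policy).
        info["fk"] = {"left_shoulder": 0.0, "left_elbow": 0.0}
    if "ik" in order:
        # Second control information: target position of the target control point,
        # taken here from the environment (e.g. the detected apple's coordinates).
        info["ik"] = {"target_position": environment_info["object_position"]}
    return info

info = determine_control_info("grasp", {"object_position": (0.4, 0.2, 1.1)})
assert info["order"] == ["fk", "ik"]
assert info["ik"]["target_position"] == (0.4, 0.2, 1.1)
```

The "preset control algorithm" of the disclosure would replace the placeholder policy with task-specific computation.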
Fig. 3 is a flowchart illustrating another control method of a robot according to an exemplary embodiment, and as shown in fig. 3, step 102 may be implemented by:
and step 1021, controlling the first target joint to rotate to a first target angle in the virtual scene according to the control sequence and the first control information.
For example, the first control information may include the first target angle corresponding to the first target joint of the digital twin model. At the position the control order assigns to forward kinematics control, the first target joint of the digital twin model is controlled to rotate to the first target angle according to the first control information. Taking a control order of inverse kinematics control followed by forward kinematics control, with the first target joint being each joint of the left arm and the first target angle being 150 degrees, as an example: after the inverse kinematics control is completed, each joint of the left arm may be controlled to rotate to 150 degrees.
In some embodiments, forward kinematics control may be implemented in UE5 via TransformBone. Taking a robot with a six-axis mechanical arm as an example, an array of length 6 can control the pose of the arm's digital twin model: the i-th value in the array gives the rotation angle of the i-th joint about its rotation axis and is passed as a parameter to the TransformBone method to set that joint's rotation, so that each joint rotates to the corresponding angle.
Step 1022, according to the control sequence and the second control information, the target control point is controlled to move to the target position in the virtual scene.
For example, the second control information may include the target position of the target control point of the digital twin model. At the position the control order assigns to inverse kinematics control, the target control point is controlled to move to the target position in the virtual scene according to the second control information.
In another embodiment, step 1022 may be implemented by:
and determining a second target angle of a second target joint of the digital twin model according to the target position.
And controlling the second target joint to rotate to a second target angle so as to enable the target control point to reach the target position.
For example, taking a control order of forward kinematics control followed by inverse kinematics control, with the target control point being the end of the left arm and the target position being the robot's chest position: after the forward kinematics control is completed, the second target angle corresponding to each second target joint of the digital twin model can be obtained from the coordinates of the chest position through a preset solving algorithm, where the second target joints are the joints that must rotate for the arm end to reach the chest position. The second target joints are then controlled to rotate to their second target angles, so that the end of the left arm reaches the chest position.
By way of example, inverse kinematics control may be implemented in UE5 via FullBodyIK. Taking the target control point as the arm end, the target position indicated by the second control information is solved by the FullBodyIK solver to obtain angle data for each joint of the robot's mechanical arm; when each joint is assigned its corresponding angle data, the arm end reaches the target position.
Fig. 4 is a flowchart illustrating another control method of a robot according to an exemplary embodiment, and as shown in fig. 4, step 103 may be implemented by:
step 1031, acquiring angle information of each joint of the digital twin model when executing the target action according to the preset frequency.
Step 1032, controlling the robot according to the angle information, so that the robot executes the target action.
For example, the motion information of the digital twin model while performing the target action comprises the angle of each of its joints, acquired at a preset frequency. The preset frequency may be, for example, 40 samples per second, i.e., the angle of each joint is collected 40 times per second, yielding a continuous sequence of angle samples for each joint over the course of the action. Each joint of the robot can then be driven through the collected samples in order, so that the robot executes the same target action as the digital twin model.
Taking a preset frequency of 40 samples per second and a target action lasting 5 seconds as an example, the joint angles are sampled 200 times while the digital twin model performs the action, i.e., 200 angle samples are collected per joint. The angle of each joint of the robot is then set in the order in which the samples were collected, so that each joint is adjusted 200 times to complete the target action.
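The record-and-replay arithmetic above can be sketched as follows (frequency and duration taken from the example; the twin-model pose function and the robot-side callback are stand-ins, not real interfaces):

```python
def record_motion(pose_at, frequency_hz, duration_s):
    """Sample the digital twin's joint angles at a preset frequency during the action."""
    num_samples = int(frequency_hz * duration_s)
    dt = 1.0 / frequency_hz
    return [pose_at(i * dt) for i in range(num_samples)]

def replay(frames, send_to_robot):
    """Drive the physical robot through the recorded frames in collection order."""
    for frame in frames:
        send_to_robot(frame)

# 40 samples/second over a 5-second action yields 200 frames per joint.
frames = record_motion(lambda t: {"elbow": 30.0 * t / 5.0}, 40, 5)
assert len(frames) == 200

sent = []
replay(frames, sent.append)
assert len(sent) == 200
```

A real implementation would pace the replay in wall-clock time (one frame every 25 ms) rather than sending all frames at once.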
In summary, the disclosure first obtains control information corresponding to the current task scene, where the control information comprises a control sequence of forward kinematics control and inverse kinematics control, first control information corresponding to the forward kinematics control, and second control information corresponding to the inverse kinematics control. The digital twin model of the robot is then controlled to execute the target action in the virtual scene according to the control information, and the robot is controlled to execute the target action according to the motion information captured while the digital twin model executes the target action. By combining forward kinematics and inverse kinematics, the method controls the robot more flexibly and accurately.
Fig. 5 is a block diagram illustrating a control apparatus of a robot according to an exemplary embodiment, and as shown in fig. 5, the apparatus 200 includes:
the acquiring module 201 is configured to acquire control information corresponding to a current task scene, where the control information includes a control sequence of forward kinematic control and reverse kinematic control, and first control information corresponding to the forward kinematic control and second control information corresponding to the reverse kinematic control.
A first control module 202 configured to control a digital twin model of the robot to perform a target action in the virtual scene according to the control information.
The second control module 203 is configured to control the robot to execute the target action according to the motion information when the digital twin model executes the target action.
In some embodiments, the acquisition module 201 is configured to:
and obtaining task categories corresponding to the task scenes.
And acquiring the environment information of the task scene.
And determining control information according to the task category and the environment information.
In other embodiments, the first control module 202 is configured to:
and controlling the digital twin model to execute the target action in the virtual scene according to the control sequence, the first control information and the second control information.
Fig. 6 is a block diagram of another control apparatus of a robot according to an exemplary embodiment. As shown in Fig. 6, the first control information includes a first target angle of a first target joint of the digital twin model, and the second control information includes a target position of a target control point of the digital twin model. The first control module 202 includes:
The first control sub-module 2021 is configured to control the first target joint to rotate to the first target angle in the virtual scene according to the control sequence and the first control information.
The second control sub-module 2022 is configured to control the target control point to move to the target position in the virtual scene according to the control sequence and the second control information.
In other embodiments, the second control submodule 2022 is configured to:
determine a second target angle of a second target joint of the digital twin model according to the target position; and
control the second target joint to rotate to the second target angle so that the target control point reaches the target position.
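For a planar two-link arm, the "target angle from target position" step has a closed-form solution. The sketch below is one standard analytic inverse-kinematics computation, not necessarily the solver used by the disclosure:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Given the target position (x, y) of the control point (end effector)
    of a planar arm with link lengths l1, l2, return the shoulder and elbow
    angles that reach it (elbow-up solution omitted for brevity)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                          # elbow angle
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)  # shoulder angle
    return theta1, theta2
```

Rotating the joints to the returned angles (the "second target angles") places the control point at the requested position, which is exactly the behavior the sub-module above describes.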
Fig. 7 is a block diagram of another control apparatus of a robot according to an exemplary embodiment. As shown in Fig. 7, the motion information includes angle information of each joint of the digital twin model. The second control module 203 includes:
The acquisition submodule 2031 is configured to acquire, at a preset frequency, the angle information of each joint of the digital twin model while it performs the target action.
The third control submodule 2032 is configured to control the robot according to the angle information so that the robot performs the target action.
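The sampling-and-forwarding behavior of these two submodules can be sketched as a fixed-rate loop. Here the twin is represented by a frame function and time is simulated; the function and parameter names are assumptions for illustration:

```python
def stream_joint_angles(twin_frames, send_to_robot, frequency_hz, duration_s):
    """Read the twin's joint angles at the preset frequency and forward
    them to the robot controller; `twin_frames(t)` returns the twin's
    joint angles at simulated time t."""
    period = 1.0 / frequency_hz
    n_samples = int(duration_s * frequency_hz)
    for i in range(n_samples):
        t = i * period
        angles = twin_frames(t)   # twin state at simulated time t
        send_to_robot(angles)     # e.g. publish to the joint controllers
```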
The specific manner in which the modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method and is not repeated here.
Fig. 8 is a block diagram of a robot according to an exemplary embodiment. As shown in Fig. 8, the robot 300 may include a processor 301 and a memory 302. The robot 300 may also include one or more of a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the robot 300 to perform all or part of the steps of the robot control method described above. The memory 302 is configured to store various types of data to support operation of the robot 300; such data may include, for example, instructions for any application or method operating on the robot 300, as well as application-related data such as contact data, messages, pictures, audio, and video. The memory 302 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 303 may include a screen and an audio component. The screen may be, for example, a touch screen; the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may be further stored in the memory 302 or transmitted through the communication component 305. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules, such as a keyboard, a mouse, or buttons; the buttons may be virtual or physical. The communication component 305 is used for wired or wireless communication between the robot 300 and other devices.
The wireless communication may use, for example, Wi-Fi, Bluetooth, near-field communication (NFC), 2G, 3G, 4G, 5G, NB-IoT, eMTC, or a combination of one or more of them, which is not limited herein. Accordingly, the communication component 305 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the robot 300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the robot control method described above.
In another exemplary embodiment, there is also provided a computer-readable storage medium comprising program instructions that, when executed by a processor, implement the steps of the robot control method described above. For example, the computer-readable storage medium may be the memory 302 containing the program instructions, which are executable by the processor 301 of the robot 300 to perform the method.
In another exemplary embodiment, there is also provided a computer program product comprising a computer program executable by a programmable apparatus, the computer program having code portions that, when executed by the programmable apparatus, perform the robot control method described above.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. However, the present disclosure is not limited to the specific details of the above embodiments; various simple modifications may be made to the technical solutions of the present disclosure within the scope of its technical concept, and all such simple modifications fall within the protection scope of the present disclosure.
In addition, the specific features described in the foregoing embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not separately described.
Moreover, the various embodiments of the present disclosure may be combined in any manner that does not depart from its spirit, and such combinations should likewise be regarded as part of this disclosure.

Claims (10)

1. A method of controlling a robot, the method comprising:
acquiring control information corresponding to a current task scene, wherein the control information comprises a control sequence of forward kinematic control and inverse kinematic control, first control information corresponding to the forward kinematic control, and second control information corresponding to the inverse kinematic control;
controlling a digital twin model of the robot to perform a target action in a virtual scene according to the control information; and
controlling the robot to perform the target action according to motion information generated when the digital twin model performs the target action.
2. The method according to claim 1, wherein acquiring the control information corresponding to the current task scene comprises:
acquiring a task category corresponding to the task scene;
acquiring environment information of the task scene; and
determining the control information according to the task category and the environment information.
3. The method according to claim 1, wherein controlling the digital twin model of the robot to perform the target action in the virtual scene according to the control information comprises:
controlling the digital twin model to perform the target action in the virtual scene according to the control sequence, the first control information, and the second control information.
4. The method according to claim 3, wherein the first control information comprises a first target angle of a first target joint of the digital twin model, and the second control information comprises a target position of a target control point of the digital twin model; and wherein controlling the digital twin model to perform the target action according to the control sequence, the first control information, and the second control information comprises:
controlling the first target joint to rotate to the first target angle in the virtual scene according to the control sequence and the first control information; and
controlling the target control point to move to the target position in the virtual scene according to the control sequence and the second control information.
5. The method according to claim 4, wherein controlling the target control point to move to the target position in the virtual scene according to the control sequence and the second control information comprises:
determining a second target angle of a second target joint of the digital twin model according to the target position; and
controlling the second target joint to rotate to the second target angle so that the target control point reaches the target position.
6. The method according to any one of claims 1 to 5, wherein the motion information comprises angle information of each joint of the digital twin model; and wherein controlling the robot to perform the target action according to the motion information generated when the digital twin model performs the target action comprises:
acquiring, at a preset frequency, the angle information of each joint while the digital twin model performs the target action; and
controlling the robot according to the angle information so that the robot performs the target action.
7. A control device for a robot, the device comprising:
an acquisition module configured to acquire control information corresponding to a current task scene, wherein the control information comprises a control sequence of forward kinematic control and inverse kinematic control, first control information corresponding to the forward kinematic control, and second control information corresponding to the inverse kinematic control;
a first control module configured to control a digital twin model of the robot to perform a target action in a virtual scene according to the control information; and
a second control module configured to control the robot to perform the target action according to motion information generated when the digital twin model performs the target action.
8. The device according to claim 7, wherein the acquisition module is configured to:
acquire a task category corresponding to the task scene;
acquire environment information of the task scene; and
determine the control information according to the task category and the environment information.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
10. A robot, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-6.
CN202310937340.6A 2023-07-27 2023-07-27 Robot control method and device, storage medium and robot Pending CN117103248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310937340.6A CN117103248A (en) 2023-07-27 2023-07-27 Robot control method and device, storage medium and robot


Publications (1)

Publication Number Publication Date
CN117103248A true CN117103248A (en) 2023-11-24

Family

ID=88804683


Similar Documents

Publication Publication Date Title
CN111694429A (en) Virtual object driving method and device, electronic equipment and readable storage
CN112580582B (en) Action learning method, action learning device, action learning medium and electronic equipment
US11833692B2 (en) Method and device for controlling arm of robot
CN110446164B (en) Mobile terminal positioning method and device, mobile terminal and server
CN112847336B (en) Action learning method and device, storage medium and electronic equipment
KR20210047258A (en) Robot control device, and method and program for controlling the same
CN111599459A (en) Control method and control device for remote surgery and surgery system
CN113119104B (en) Mechanical arm control method, mechanical arm control device, computing equipment and system
CN110363811B (en) Control method and device for grabbing equipment, storage medium and electronic equipment
CN115703227A (en) Robot control method, robot, and computer-readable storage medium
CN117103248A (en) Robot control method and device, storage medium and robot
CN106774178B (en) Automatic control system and method and mechanical equipment
CN110091323B (en) Intelligent equipment, robot control method and device with storage function
CN109702736B (en) Robot posture processing method, device and system
CN110631586A (en) Map construction method based on visual SLAM, navigation system and device
CN113894779B (en) Multi-mode data processing method applied to robot interaction
US20220203523A1 (en) Action learning method, medium, and electronic device
CN113043268A (en) Robot eye calibration method, device, terminal, system and storage medium
CN111823215A (en) Synchronous control method and device for industrial robot
CN114571463B (en) Motion detection method and device, readable storage medium and electronic equipment
CN111267088A (en) Method, device, equipment and storage medium for executing action molecules
CN111267086A (en) Action task creating and executing method and device, equipment and storage medium
WO2024013895A1 (en) Remote control system, remote control method, and remote control program
CN111267089A (en) Method, device, equipment and storage medium for generating and executing action atoms
CN115213881A (en) Robot control method, robot control device, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination