WO2022142078A1 - Action learning method and apparatus, medium and electronic device - Google Patents

Action learning method and apparatus, medium and electronic device

Info

Publication number
WO2022142078A1
WO2022142078A1 (application PCT/CN2021/094432)
Authority
WO
WIPO (PCT)
Prior art keywords
action
robot
sub
atomic
data
Prior art date
Application number
PCT/CN2021/094432
Other languages
English (en)
French (fr)
Inventor
张站朝
黄晓庆
Original Assignee
达闼机器人股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 达闼机器人股份有限公司
Priority to US17/566,211 (published as US11999060B2)
Publication of WO2022142078A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/0081 Programme-controlled manipulators with master teach-in means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • The present disclosure relates to the field of robotics, and in particular, to an action learning method and apparatus, a medium, and an electronic device.
  • Among current solutions for controlling robot action behavior, a commonly used one is to control the robot's actions directly based on motion capture devices. Specifically, a person wears several motion capture devices (including but not limited to IMU inertial measurement units), which are connected to a computing device that is in turn connected to the robot, forming a local network; human actions are captured by the motion capture devices, and the robot is synchronously controlled to make similar actions.
  • The angle and speed of each of the robot's joints are kept basically close to those of the corresponding human joints, or within a certain controlled error range.
  • Alternatively, in methods based on robot trajectory planning, it is necessary to plan motion trajectories in the robot coordinate system based on kinematics and dynamics algorithms using the position, velocity, and acceleration of each joint's motion; each joint then moves along its planned trajectory, and the linkage of multiple joints forms the robot's action behavior.
  • The purpose of the present disclosure is to provide an action learning method and apparatus, a medium, and an electronic device that require neither a motion capture device nor planning of the robot's trajectory: robot action sequence data corresponding to 2D human motion image data can be obtained by matching in a robot atomic action library.
  • Smooth robot actions can also be learned quickly and accurately by smoothly connecting and optimizing each action in the robot action sequence data.
  • To that end, the present disclosure provides an action learning method, the method including: acquiring human body motion image data;
  • determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order;
  • matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions,
  • the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
  • sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data, and determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
  • Optionally, determining the three-dimensional human body posture action data corresponding to the human body motion image data includes: determining the two-dimensional human motion key points corresponding to each image in the human body motion image data;
  • and determining the three-dimensional human body posture action data according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
  • Optionally, matching the three-dimensional human body posture action data with the atomic actions in the robot atomic action library to determine the robot action sequence data corresponding to the human body motion image data includes:
  • sequentially matching, in action time order, the multiple human sub-actions included in the three-dimensional human body posture action data, and determining the robot sub-action corresponding to each human sub-action according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where a human sub-action is composed of one or more of the three-dimensional human postures;
  • and determining, in action time order, the robot action sequence data composed of the robot sub-actions.
  • Optionally, determining the robot sub-action corresponding to the human sub-action according to the similarity between all the atomic actions in the robot atomic action library and the human sub-action includes:
  • in a case where the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than a similarity threshold,
  • taking the atomic actions whose similarity with the human sub-action is higher than the similarity threshold as candidate atomic actions, and calculating in sequence the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action;
  • and determining, among the candidate atomic actions and according to the similarity and the continuity matching degree, an atomic action that matches the human sub-action, as the robot sub-action corresponding to the human sub-action.
  • Optionally, determining the robot sub-action corresponding to the human sub-action according to the similarity between all the atomic actions in the robot atomic action library and the human sub-action further includes:
  • in a case where no atomic action in the robot atomic action library has a similarity with the human sub-action higher than the similarity threshold, obtaining the robot sub-action by mapping the human sub-action.
  • Optionally, sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data includes: smoothly optimizing the robot posture position and robot motion speed at the junction between adjacent robot sub-actions, and/or avoiding self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence.
  • Optionally, the method further includes:
  • executing the robot action sequence data after the action continuity splicing in a digital twin model of the robot, and optimizing the robot action sequence data according to simulation data of the digital twin model;
  • in which case determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing includes:
  • determining the robot action sequence data optimized according to the simulation data of the digital twin model as the continuous actions learned by the robot.
  • The present disclosure also provides an action learning apparatus, the apparatus including: an acquisition module configured to acquire human body motion image data;
  • a first determining module configured to determine three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order;
  • a matching module configured to match the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
  • a splicing module configured to sequentially perform action continuity splicing on each robot sub-action in the robot action sequence data;
  • and a second determining module configured to determine the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
  • The present disclosure also provides a computer-readable storage medium on which a computer program is stored, the program implementing the steps of the above method when executed by a processor.
  • The present disclosure also provides an electronic device, including: a memory on which a computer program is stored;
  • and a processor configured to execute the computer program in the memory to implement the steps of the above method.
  • Through the above technical solution, without a motion capture device and without planning the robot's trajectory, the robot action sequence data corresponding to the human motion image data can be obtained by matching in the robot atomic action library.
  • Moreover, by smoothly connecting and optimizing each action in the robot action sequence data, smooth robot actions can be learned quickly and accurately.
  • FIG. 1 is a flowchart of an action learning method according to an exemplary embodiment of the present disclosure.
  • FIG. 2 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure.
  • FIG. 3 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure.
  • FIG. 4 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a method for determining a robot sub-action corresponding to the human sub-action according to the similarity between an atomic action and a human sub-action in an action learning method according to another exemplary embodiment of the present disclosure.
  • FIG. 6 is a structural block diagram of an action learning apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 7 is a block diagram of an electronic device according to an exemplary embodiment.
  • FIG. 8 is a block diagram of an electronic device according to an exemplary embodiment.
  • FIG. 1 is a flowchart of an action learning method according to an exemplary embodiment of the present disclosure. As shown in FIG. 1, the method includes steps 101 to 105.
  • In step 101, human body motion image data is acquired.
  • The human body motion image data is 2D image data, which can be acquired by any image acquisition device, such as an RGB camera.
  • The image acquisition device may be a device installed on the robot, or any external device.
  • In step 102, three-dimensional human body posture action data corresponding to the human body motion image data is determined, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order.
  • The method for determining the corresponding three-dimensional human body posture action data from the human body motion image data may be as shown in FIG. 2, and includes step 201 and step 202.
  • In step 201, the two-dimensional human motion key points corresponding to each image in the human body motion image data are determined.
  • In step 202, the three-dimensional human body posture action data is determined according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
  • Various methods can be used to detect the human motion key points in each image of the 2D human body motion image data, such as the MSPN-based (multi-stage pose estimation network) human pose estimation method, the HRNet-based (High-Resolution Representations network) human pose estimation method, or the Hourglass-network-based human pose estimation method (see the sketch below).
  • After the two-dimensional human motion key points in each frame are detected, a sequence of human motion key points is obtained in the time dimension.
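  • As an illustration of this detection step, the sketch below runs a pretrained pose estimator over a video frame by frame. It is a minimal example rather than the patent's implementation: MediaPipe Pose stands in for the MSPN, HRNet, or Hourglass detectors named above, and the input file name is hypothetical.

```python
import cv2
import mediapipe as mp

def extract_2d_keypoints(video_path):
    """Return one list of (x, y) image-normalized key points per video frame."""
    keypoint_sequence = []
    capture = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame_bgr = capture.read()
            if not ok:
                break  # end of video
            result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks is None:
                continue  # no person detected in this frame
            keypoint_sequence.append(
                [(lm.x, lm.y) for lm in result.pose_landmarks.landmark])
    capture.release()
    return keypoint_sequence

keypoints_2d = extract_2d_keypoints("human_motion.mp4")  # hypothetical input file
```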
  • The three-dimensional motion posture of the corresponding human motion can then be estimated by accumulating the movements of the two-dimensional human motion key points over time, so as to obtain the three-dimensional human body posture action data.
  • Specifically, the estimation can be performed with, for example, a fully convolutional model; that is, a 3D pose is obtained from the two-dimensional key points through a model of dilated temporal convolutions.
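  • The following is a minimal sketch of this "2D key point sequence to 3D pose" lifting via dilated temporal convolutions, assuming PyTorch and a 17-joint skeleton; the layer widths and dilation rates are illustrative choices, not taken from the patent.

```python
import torch
import torch.nn as nn

class TemporalLifter(nn.Module):
    """Lift a window of 2D key points to 3D poses using dilated temporal
    convolutions; the receptive field here is 27 frames, so the input
    sequence must contain at least 27 frames."""
    def __init__(self, num_joints=17, channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(num_joints * 2, channels, kernel_size=3, dilation=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, dilation=3),
            nn.ReLU(),
            nn.Conv1d(channels, num_joints * 3, kernel_size=3, dilation=9),
        )

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, frames, joints, 2)
        b, f, j, _ = keypoints_2d.shape
        x = keypoints_2d.reshape(b, f, j * 2).transpose(1, 2)  # (b, 2j, f)
        y = self.net(x)                                        # (b, 3j, f - 26)
        return y.transpose(1, 2).reshape(b, -1, j, 3)          # 3D pose per frame

lifter = TemporalLifter()
poses_3d = lifter(torch.randn(1, 60, 17, 2))  # random stand-in for real key points
```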
  • In step 103, the three-dimensional human body posture action data is matched with the atomic actions in the robot atomic action library to determine the robot action sequence data corresponding to the human body motion image data; the robot action sequence data is composed of a plurality of robot sub-actions,
  • and the robot sub-actions include the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data.
  • The robot atomic action library is a database of action data files that the robot can directly execute (each including the motion trajectory of each robot joint and the corresponding time stamps), obtained by the robot through a preset method (such as pre-implantation or pre-learning); each action data file is one atomic action.
  • Each atomic action in the robot atomic action library cannot be further divided into sub-actions, and when each atomic action is executed on the corresponding robot body, no self-collision or non-humanlike action occurs.
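  • The "action data file" described above can be pictured as a small record holding per-joint trajectories plus their time stamps. The field layout below is an assumption made for illustration, since the patent only specifies that each file contains the motion trajectory of each joint and the corresponding time stamps.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AtomicAction:
    """One directly executable action data file: per-joint trajectories
    plus the corresponding time stamps (field layout is an assumption)."""
    name: str
    timestamps: np.ndarray       # shape (T,), seconds
    joint_positions: np.ndarray  # shape (T, num_joints), e.g. radians

# The robot atomic action library is then simply a collection of such files.
atomic_action_library: list[AtomicAction] = []
```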
  • That is, the three-dimensional human body posture action data can be matched with one or more atomic actions in the robot atomic action library to compose the robot action sequence data; alternatively, some parts of the three-dimensional human body posture action data may fail to match any corresponding atomic action.
  • In that case, those parts of the three-dimensional human body posture action data can be directly mapped to the joint motion data of the robot as mapping actions, which, together with the other matched atomic actions, serve as the robot sub-actions included in the robot action sequence data.
  • In one possible implementation, if none of the action data in the three-dimensional human body posture action data matches an atomic action, the robot action sequence data can be formed directly from the mapping actions obtained by the mapping.
  • In another possible implementation, if all of the action data in the three-dimensional human body posture action data can be matched with corresponding atomic actions, then all the robot sub-actions included in the robot action sequence data are atomic actions from the atomic action library.
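  • How a human posture is mapped to robot joint motion data is not specified by the patent; the sketch below shows one common reading (a joint angle computed from a key-point triplet and clamped to hypothetical robot joint limits), purely as an illustration.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b formed by the segments b->a and b->c, in radians."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def map_pose_to_joints(pose_3d, joint_limits):
    """Map one 3D human posture to robot joint positions. Illustrative only:
    a single elbow joint, clamped to the robot's (hypothetical) limits."""
    elbow = joint_angle(pose_3d["shoulder"], pose_3d["elbow"], pose_3d["wrist"])
    lo, hi = joint_limits["elbow"]
    return {"elbow": float(np.clip(elbow, lo, hi))}

pose = {"shoulder": np.array([0.0, 0.0, 0.0]),
        "elbow": np.array([0.0, -0.3, 0.0]),
        "wrist": np.array([0.2, -0.5, 0.0])}
print(map_pose_to_joints(pose, {"elbow": (0.0, 2.6)}))
```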
  • In addition, the action duration of an atomic action included in the robot action sequence data may or may not be equal to the action duration of the matching three-dimensional human body posture action data; that is, the three-dimensional human body posture action data corresponding to 2 seconds of acquired human body motion image data can be matched with an atomic action lasting 3 seconds, as long as the degree of matching between the atomic action and the three-dimensional human body posture action data satisfies the preset matching condition.
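  • Comparing actions of unequal duration implies some temporal alignment; uniform linear resampling, as sketched below, is one simple option. This is an assumption made for illustration, as the patent does not say how durations are aligned during matching.

```python
import numpy as np

def resample_trajectory(trajectory, num_frames):
    """Linearly resample a (T, D) trajectory to num_frames frames so that
    actions of unequal duration can be compared frame by frame."""
    t_old = np.linspace(0.0, 1.0, len(trajectory))
    t_new = np.linspace(0.0, 1.0, num_frames)
    return np.stack(
        [np.interp(t_new, t_old, trajectory[:, d]) for d in range(trajectory.shape[1])],
        axis=1)
```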
  • In step 104, action continuity splicing is performed sequentially on each robot sub-action in the robot action sequence data.
  • The action continuity splicing may include smoothly optimizing the robot posture position and robot motion speed at the junction between adjacent robot sub-actions, and/or avoiding self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence. That is, for two connected robot sub-actions, smoothing is needed between the robot state at the end of the earlier sub-action and the robot state at the beginning of the later one, so that the transition between the two sub-actions is more fluent (one simple smoothing scheme is sketched below).
  • And when the robot action sequence data exhibits anomalies that affect the robot's safety, such as self-collision, these must be avoided so as to ensure the safety of the robot.
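  • One simple way to realize the smoothing described above is to crossfade joint positions across the seam between two adjacent sub-actions, which also smooths velocity at the junction; the crossfade and its blend window below are illustrative assumptions, not the patent's algorithm.

```python
import numpy as np

def smooth_junction(prev_action, next_action, blend_frames=10):
    """Crossfade the tail of the previous sub-action into the head of the
    next one, smoothing both position and speed across the seam.
    Both inputs are (T, D) arrays of joint positions."""
    k = min(blend_frames, len(prev_action), len(next_action))
    w = np.linspace(0.0, 1.0, k)[:, None]  # blending weights, 0 -> 1
    seam = (1.0 - w) * prev_action[-k:] + w * next_action[:k]
    return np.concatenate([prev_action[:-k], seam, next_action[k:]], axis=0)
```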
  • In step 105, the continuous actions learned by the robot are determined according to the robot action sequence data after the action continuity splicing.
  • The robot action sequence data after the action continuity splicing can be used directly as the continuous actions learned by the robot and executed directly on the robot, or saved as fixed actions to be called and executed on demand.
  • Alternatively, the spliced robot action sequence data may first undergo further adjustments such as data optimization and data correction, and the adjusted robot action sequence data is then determined as the continuous actions learned by the robot.
  • The specific adjustment method is not limited in the present disclosure, but an exemplary adjustment method is given in FIG. 3; as shown in FIG. 3, it includes step 301 and step 302.
  • In step 301, the robot action sequence data after the action continuity splicing is executed in a digital twin model of the robot, and the robot action sequence data is optimized according to the simulation data of the digital twin model.
  • In step 302, the robot action sequence data optimized according to the simulation data of the digital twin model is determined as the continuous actions learned by the robot.
  • The digital twin model is a digital twin agent, identical to the physical robot, built in a virtual mirror world. It can be a geometric model such as a mesh body, or a digital model obtained by simulating the physical properties of the robot itself;
  • the simulated content includes but is not limited to: joint motor simulation, sensor simulation (lidar, depth camera, binocular stereo camera, etc.), self-gravity, collision, and material damping.
  • The behaviors of the digital twin model can be realized by methods such as feedback control, environment perception and state acquisition, and virtual-real synchronization.
  • The digital twin model can be used to determine whether the robot action sequence data needs to be optimized, by means such as simulation observation, self-collision detection, or abnormal-action judgment, and the data that needs optimization is optimized accordingly.
  • The optimization process can be automatic, or it can be performed by receiving manual correction instructions.
  • Finally, the robot action sequence data optimized according to the simulation data of the digital twin model can be determined as the continuous actions learned by the robot.
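  • The digital-twin check can be pictured as replaying the spliced sequence in simulation and collecting the frames that trigger self-collision or abnormal-motion judgments. In the sketch below, the `sim` interface (set_joint_positions, step, in_self_collision, exceeds_dynamic_limits) is hypothetical; a real digital twin as described above would supply equivalent checks.

```python
def validate_in_digital_twin(sim, action_sequence):
    """Replay a spliced action sequence in a digital-twin simulator and
    collect the frames that need optimization. An empty result means the
    sequence can be taken as the learned continuous action."""
    issues = []
    for index, joint_positions in enumerate(action_sequence):
        sim.set_joint_positions(joint_positions)  # drive the twin's joints
        sim.step()                                # advance the physics simulation
        if sim.in_self_collision():
            issues.append((index, "self-collision"))
        if sim.exceeds_dynamic_limits():
            issues.append((index, "abnormal motion"))
    return issues
```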
  • Through the above technical solution, without a motion capture device and without planning the robot's trajectory, the robot action sequence data corresponding to the human motion image data can be obtained by matching in the robot atomic action library.
  • Moreover, by smoothly connecting and optimizing each action in the robot action sequence data, smooth robot actions can be learned quickly and accurately.
  • FIG. 4 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure. As shown in FIG. 4, the method further includes step 401 and step 402.
  • In step 401, the multiple human sub-actions included in the three-dimensional human body posture action data are matched sequentially in action time order, and the robot sub-action corresponding to each human sub-action is determined according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where a human sub-action is composed of one or more of the three-dimensional human postures.
  • In step 402, the robot action sequence data composed of the robot sub-actions is determined according to the action time order.
  • A human sub-action is a portion of the three-dimensional human body posture action data, and different sub-actions may have different durations; all the human sub-actions, arranged in action time order, make up the three-dimensional human body posture action data. How the human sub-actions are divided can be determined by the actual matching situation.
  • For example, if the first 2 s of the three-dimensional human body posture action data match a similar atomic action in the robot atomic action library,
  • the first 2 s of three-dimensional human body posture action data can be determined as one human sub-action,
  • and the subsequent three-dimensional human body posture action data can be added frame by frame, starting from the 3rd second, as the three-dimensional human body posture action data to be matched, so that matching continues in the atomic action library until the data to be matched matches a similar atomic action in the atomic action library.
  • At that point, the portion of the three-dimensional human body posture action data currently used for matching can be determined as one human sub-action.
  • For example, the first 30 frames of 3D human postures of the 3rd second can first be used as the data to be matched in the atomic action library; if there is no matching result, the last 30 frames of 3D human postures of the 3rd second are added to the data to be matched (in the case of 60 frames of 3D human postures per second); if a similar atomic action is then matched, the 3D human body posture action data of the 3rd second can be taken as one human sub-action (this greedy accumulation is sketched below).
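  • The frame-accumulation matching just described amounts to a greedy segmentation loop: grow the candidate window block by block until some atomic action matches or the data runs out, then emit the window as one human sub-action. The sketch below follows that reading, with `match_fn` as a hypothetical matcher that returns a matching atomic action or None.

```python
def segment_and_match(pose_frames, library, match_fn, block=30):
    """Greedily segment a sequence of 3D poses into human sub-actions and
    match each against the atomic action library. A None entry in the
    result means no library match: fall back to direct mapping."""
    robot_sequence, start = [], 0
    while start < len(pose_frames):
        end = min(start + block, len(pose_frames))
        matched = match_fn(pose_frames[start:end], library)
        while matched is None and end < len(pose_frames):
            end = min(end + block, len(pose_frames))  # add the next frame block
            matched = match_fn(pose_frames[start:end], library)
        robot_sequence.append(matched)
        start = end
    return robot_sequence
```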
  • Whether a similar atomic action has been matched can be judged according to the similarity mentioned above: for example, an atomic action in the atomic action library whose similarity with the human sub-action currently to be matched is higher than the similarity threshold is determined as an atomic action matching that human sub-action, and that atomic action can then be used as the robot sub-action corresponding to the human sub-action.
  • The method for determining the similarity may include, but is not limited to, methods such as nearest vector Euclidean distance, minimum variance, and cosine approximation between two pieces of action data; two such measures are sketched below.
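  • Two of the similarity measures named above, turned into scores on equal-length (T, D) action arrays; the normalization into a bounded score is a design choice, not specified by the patent.

```python
import numpy as np

def euclidean_similarity(action_a, action_b):
    """Similarity from the mean Euclidean distance between two aligned
    action arrays; higher means more similar."""
    distance = np.linalg.norm(action_a - action_b, axis=1).mean()
    return 1.0 / (1.0 + distance)

def cosine_similarity(action_a, action_b):
    """Cosine approximation between the two flattened action vectors."""
    u, v = action_a.ravel(), action_b.ravel()
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
```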
  • If there are multiple atomic actions whose similarity is higher than the similarity threshold, the atomic action with the highest similarity can be selected directly; alternatively, the continuity of the actions can also be considered, and among the atomic actions whose similarity is higher than the similarity threshold,
  • the atomic action with better continuity with the robot sub-action corresponding to the previous human sub-action is taken as the finally matched atomic action.
  • A specific method may be as shown in FIG. 5, including steps 501 to 503.
  • In step 501, if the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than the similarity threshold, the atomic actions whose similarity with the human sub-action is higher than the similarity threshold are used as candidate atomic actions.
  • In step 502, the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action is calculated in sequence.
  • In step 503, according to the similarity and the continuity matching degree, an atomic action that matches the human sub-action is determined among the candidate atomic actions, as the robot sub-action corresponding to the human sub-action.
  • The method for determining the degree of continuity matching may include, but is not limited to, calculating the distance (such as Euclidean distance, variance, or cosine distance) between the candidate atomic action and the robot sub-action corresponding to the previous human sub-action, and the difference between their motion speeds.
  • The weights given to the similarity and the continuity matching degree can be set according to the actual situation, as in the sketch below.
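  • A sketch of the continuity matching degree and its weighted combination with the similarity; the seam penalty (pose gap plus speed gap) and the weight values are illustrative assumptions, since the patent leaves both the distance choice and the weights to the actual situation.

```python
import numpy as np

def continuity_degree(prev_action, candidate):
    """Continuity between the end of the previous robot sub-action and the
    start of a candidate atomic action: penalize jumps in pose and speed.
    Both inputs are (T, D) arrays with at least two frames."""
    pose_gap = np.linalg.norm(candidate[0] - prev_action[-1])
    speed_gap = np.linalg.norm(
        (candidate[1] - candidate[0]) - (prev_action[-1] - prev_action[-2]))
    return 1.0 / (1.0 + pose_gap + speed_gap)

def combined_score(similarity, continuity, w_sim=0.6, w_cont=0.4):
    """Weighted combination used to pick among candidate atomic actions;
    the weights here are arbitrary placeholders."""
    return w_sim * similarity + w_cont * continuity
```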
  • In addition, if the human sub-action is the first human sub-action included in the three-dimensional human body posture action data, the atomic action with the highest similarity may be selected directly as the atomic action matching that human sub-action.
  • In one possible implementation, in a case where no atomic action in the robot atomic action library has a similarity with the human sub-action higher than the similarity threshold, the robot sub-action is obtained by mapping the human sub-action.
  • For example, suppose matching in the atomic action library starts from the 3rd second of the 3D human body posture action data, and by the end of the last frame no atomic action with a similarity higher than the similarity threshold has been matched in the atomic action library.
  • In that case, the three-dimensional human body posture action data after the 3rd second can be directly mapped to the joint motion data of the robot, used as the mapping action described above, and finally included as a robot sub-action in the robot action sequence data.
  • Since the matched atomic action and the corresponding human sub-action may differ in duration, the duration of the robot action sequence data composed of the robot sub-actions may likewise differ from the duration of the three-dimensional human body posture action data.
  • FIG. 6 is a structural block diagram of an action learning apparatus according to an exemplary embodiment of the present disclosure.
  • As shown in FIG. 6, the apparatus includes: an acquisition module 10 for acquiring human body motion image data; a first determining module 20 for determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order; a matching module 30 for matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine
  • the robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
  • a splicing module 40 for sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data;
  • and a second determining module 50 for determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
  • Through the above technical solution, without a motion capture device and without planning the robot's trajectory, the robot action sequence data corresponding to the human motion image data can be obtained by matching in the robot atomic action library.
  • Moreover, by smoothly connecting and optimizing each action in the robot action sequence data, smooth robot actions can be learned quickly and accurately.
  • In one possible implementation, the first determining module 20 is further configured to: determine the two-dimensional human motion key points corresponding to each image in the human body motion image data;
  • and determine the three-dimensional human body posture action data according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
  • In one possible implementation, the matching module 30 includes: a first sub-module for sequentially matching, in action time order, the multiple human sub-actions included in the three-dimensional human body posture action data, and determining the robot sub-action corresponding to each human sub-action according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where a human sub-action is composed of one or more of the three-dimensional human postures; and a second sub-module for determining, in action time order, the robot action sequence data composed of the robot sub-actions.
  • In one possible implementation, the first sub-module is further configured to: in a case where the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than the similarity threshold,
  • take the atomic actions whose similarity with the human sub-action is higher than the similarity threshold as candidate atomic actions; calculate in sequence the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action; and, according to the similarity and the continuity matching degree, determine among the candidate atomic actions an atomic action
  • that matches the human sub-action, as the robot sub-action corresponding to the human sub-action.
  • In one possible implementation, the first sub-module is further configured to: in a case where no atomic action in the robot atomic action library has a similarity with the human sub-action higher than the similarity threshold, obtain the robot sub-action by mapping the human sub-action.
  • In one possible implementation, the splicing module 40 is further configured to: smoothly optimize the robot posture position and robot motion speed at the junction between adjacent robot sub-actions; and/or avoid self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence.
  • In one possible implementation, the apparatus further includes: an optimization module for executing the robot action sequence data after the action continuity splicing in a digital twin model of the robot, and optimizing the robot action sequence data according to the simulation data of the digital twin model;
  • and the second determining module 50 is further configured to: determine the robot action sequence data optimized according to the simulation data of the digital twin model as the continuous actions learned by the robot.
  • FIG. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment.
  • The electronic device 700 may include a processor 701 and a memory 702.
  • The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
  • The processor 701 is configured to control the overall operation of the electronic device 700 to complete all or part of the steps of the above action learning method.
  • The memory 702 is used to store various types of data to support operation on the electronic device 700; such data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on.
  • The memory 702 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • The multimedia component 703 may include a screen and an audio component, where the screen can be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals.
  • For example, the audio component may include a microphone for receiving external audio signals.
  • The received audio signal may be further stored in the memory 702 or sent through the communication component 705.
  • The audio component also includes at least one speaker for outputting audio signals.
  • The I/O interface 704 provides an interface between the processor 701 and other interface modules, which may be a keyboard, a mouse, buttons, and the like; these buttons can be virtual buttons or physical buttons.
  • The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or others, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
  • In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above action learning method.
  • In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above action learning method are implemented.
  • For example, the computer-readable storage medium can be the above memory 702 including program instructions, which can be executed by the processor 701 of the electronic device 700 to complete the above action learning method.
  • FIG. 8 is a block diagram of an electronic device 800 according to an exemplary embodiment.
  • For example, the electronic device 800 may be provided as a server.
  • Referring to FIG. 8, the electronic device 800 includes a processor 822, of which there may be one or more, and a memory 832 for storing computer programs executable by the processor 822.
  • The computer program stored in the memory 832 may include one or more modules, each corresponding to a set of instructions.
  • In addition, the processor 822 may be configured to execute the computer program to perform the above action learning method.
  • The electronic device 800 may also include a power supply component 826, which may be configured to perform power management of the electronic device 800, and a communication component 850, which may be configured to implement communication of the electronic device 800, for example, wired or wireless communication. Additionally, the electronic device 800 may also include an input/output (I/O) interface 858. The electronic device 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
  • In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above action learning method are implemented.
  • For example, the computer-readable storage medium can be the above memory 832 including program instructions, which can be executed by the processor 822 of the electronic device 800 to implement the above action learning method.
  • In another exemplary embodiment, a computer program product is also provided, the computer program product containing a computer program executable by a programmable apparatus, the computer program having code portions for performing the above action learning method when executed by the programmable apparatus.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

An action learning method and apparatus, a medium, and an electronic device. Without a motion capture device and without planning the robot's trajectory, robot action sequence data corresponding to human motion image data can be obtained from 2D human motion image data by matching in a robot atomic action library, and smooth robot actions can be learned quickly and accurately through smooth connection and action optimization of the actions in the robot action sequence data.

Description

Action learning method and apparatus, medium and electronic device
Technical Field
The present disclosure relates to the field of robotics, and in particular, to an action learning method and apparatus, a medium, and an electronic device.
Background Art
Among current solutions for controlling robot action behavior, a commonly used one is to control the robot's actions directly based on motion capture devices. Specifically, a person must wear several motion capture devices (including but not limited to IMU inertial measurement units), which are connected to a computing device that is in turn connected to the robot, forming a local network; human actions are captured by the motion capture devices, and the robot is synchronously controlled to make similar actions, with the angle and speed of each of the robot's joints kept basically close to those of the corresponding human joints, or within a certain controlled error range. Alternatively, in methods based on robot trajectory planning, motion trajectories must be planned in the robot coordinate system based on kinematics and dynamics algorithms using the position, velocity, and acceleration of each joint's motion; each joint then moves along its planned trajectory, and the linkage of multiple joints forms the robot's action behavior.
Summary of the Invention
The purpose of the present disclosure is to provide an action learning method and apparatus, a medium, and an electronic device that require neither a motion capture device nor planning of the robot's trajectory: robot action sequence data corresponding to 2D human motion image data can be obtained by matching in a robot atomic action library, and smooth robot actions can be learned quickly and accurately by smoothly connecting and optimizing each action in the robot action sequence data.
To achieve the above purpose, the present disclosure provides an action learning method, the method including:
acquiring human body motion image data;
determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order;
matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data; and
determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
Optionally, determining the three-dimensional human body posture action data corresponding to the human body motion image data includes:
determining the two-dimensional human motion key points corresponding to each image in the human body motion image data; and
determining the three-dimensional human body posture action data according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
Optionally, matching the three-dimensional human body posture action data with the atomic actions in the robot atomic action library to determine the robot action sequence data corresponding to the human body motion image data includes:
sequentially matching, in action time order, the multiple human sub-actions included in the three-dimensional human body posture action data, and determining the robot sub-action corresponding to each human sub-action according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where a human sub-action is composed of one or more of the three-dimensional human postures; and
determining, in action time order, the robot action sequence data composed of the robot sub-actions.
Optionally, determining the robot sub-action corresponding to the human sub-action according to the similarity between all the atomic actions in the robot atomic action library and the human sub-action includes:
in a case where the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than a similarity threshold, taking the atomic actions whose similarity with the human sub-action is higher than the similarity threshold as candidate atomic actions;
calculating in sequence the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action; and
determining, among the candidate atomic actions and according to the similarity and the continuity matching degree, an atomic action that matches the human sub-action, as the robot sub-action corresponding to the human sub-action.
Optionally, determining the robot sub-action corresponding to the human sub-action according to the similarity between all the atomic actions in the robot atomic action library and the human sub-action further includes:
in a case where no atomic action whose similarity with the human sub-action is higher than the similarity threshold exists in the robot atomic action library, obtaining the robot sub-action by mapping the human sub-action.
Optionally, sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data includes:
smoothly optimizing the robot posture position and robot motion speed at the junction between adjacent robot sub-actions; and/or
avoiding self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence.
Optionally, the method further includes:
executing the robot action sequence data after the action continuity splicing in a digital twin model of the robot, and optimizing the robot action sequence data according to simulation data of the digital twin model;
wherein determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing includes:
determining the robot action sequence data optimized according to the simulation data of the digital twin model as the continuous actions learned by the robot.
The present disclosure also provides an action learning apparatus, the apparatus including:
an acquisition module for acquiring human body motion image data;
a first determining module for determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order;
a matching module for matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
a splicing module for sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data; and
a second determining module for determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
The present disclosure also provides a computer-readable storage medium on which a computer program is stored, the program implementing the steps of the above method when executed by a processor.
The present disclosure also provides an electronic device, including:
a memory on which a computer program is stored; and
a processor for executing the computer program in the memory to implement the steps of the above method.
Through the above technical solution, without a motion capture device and without planning the robot's trajectory, the robot action sequence data corresponding to the human motion image data can be obtained from 2D human motion image data by matching in the robot atomic action library, and smooth robot actions can be learned quickly and accurately by smoothly connecting and optimizing each action in the robot action sequence data.
Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification; together with the following detailed description, they serve to explain the present disclosure, but do not limit it. In the drawings:
FIG. 1 is a flowchart of an action learning method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure.
FIG. 3 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure.
FIG. 4 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure.
FIG. 5 is a flowchart of a method, in an action learning method according to another exemplary embodiment of the present disclosure, for determining the robot sub-action corresponding to a human sub-action according to the similarity between atomic actions and the human sub-action.
FIG. 6 is a structural block diagram of an action learning apparatus according to an exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram of an electronic device according to an exemplary embodiment.
FIG. 8 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only intended to illustrate and explain the present disclosure, and are not intended to limit it.
FIG. 1 is a flowchart of an action learning method according to an exemplary embodiment of the present disclosure. As shown in FIG. 1, the method includes steps 101 to 105.
In step 101, human body motion image data is acquired. The human body motion image data is 2D image data and can be acquired by any image acquisition device, such as an RGB camera. The image acquisition device may be a device installed on the robot, or any external device.
In step 102, three-dimensional human body posture action data corresponding to the human body motion image data is determined, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order. The method for determining the corresponding three-dimensional human body posture action data from the human body motion image data may be as shown in FIG. 2, and includes step 201 and step 202.
In step 201, the two-dimensional human motion key points corresponding to each image in the human body motion image data are determined. In step 202, the three-dimensional human body posture action data is determined according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images. Various methods can be used to detect the human motion key points in each image of the 2D human body motion image data, such as the MSPN-based (multi-stage pose estimation network) human pose estimation method, the HRNet-based (High-Resolution Representations network) human pose estimation method, or the Hourglass-network-based human pose estimation method. After the two-dimensional human motion key points in each frame are detected, a sequence of human motion key points is obtained in the time dimension. By accumulating the movements of the two-dimensional human motion key points over time, the three-dimensional motion posture of the corresponding human motion can be estimated, thereby obtaining the three-dimensional human body posture action data. Specifically, the estimation can be performed with, for example, a fully convolutional model, that is, a 3D pose is obtained from the two-dimensional key points through a model of dilated temporal convolutions.
In step 103, the three-dimensional human body posture action data is matched with the atomic actions in the robot atomic action library to determine the robot action sequence data corresponding to the human body motion image data; the robot action sequence data is composed of a plurality of robot sub-actions, and the robot sub-actions include the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data.
The robot atomic action library is a database of action data files that the robot can directly execute (each including the motion trajectory of each robot joint and the corresponding time stamps), obtained by the robot through a preset method (such as pre-implantation or pre-learning); each action data file is one atomic action. Each atomic action in the robot atomic action library cannot be further divided into sub-actions, and when each atomic action is executed on the corresponding robot body, no self-collision or non-humanlike action occurs.
That is, the three-dimensional human body posture action data can be matched with one or more atomic actions in the robot atomic action library to compose the robot action sequence data; or, if some parts of the three-dimensional human body posture action data cannot be matched with corresponding atomic actions, those parts can be directly mapped to the joint motion data of the robot as mapping actions, which, together with the other matched atomic actions, serve as the robot sub-actions included in the robot action sequence data.
In one possible implementation, if none of the action data in the three-dimensional human body posture action data matches an atomic action, the robot action sequence data can be formed directly from the mapping actions obtained by the mapping. In another possible implementation, if all of the action data in the three-dimensional human body posture action data can be matched with corresponding atomic actions, then all the robot sub-actions included in the robot action sequence data are atomic actions from the atomic action library.
In addition, the action duration of an atomic action included in the robot action sequence data may or may not be equal to the action duration of the matching three-dimensional human body posture action data; that is, the three-dimensional human body posture action data corresponding to 2 seconds of acquired human body motion image data can be matched with an atomic action lasting 3 seconds, as long as the degree of matching between the atomic action and the three-dimensional human body posture action data satisfies the preset matching condition.
In step 104, action continuity splicing is performed sequentially on each robot sub-action in the robot action sequence data. The action continuity splicing may include smoothly optimizing the robot posture position and robot motion speed at the junction between adjacent robot sub-actions, and/or avoiding self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence. That is, for two connected robot sub-actions, smoothing is needed between the robot state at the end of the earlier sub-action and the robot state at the beginning of the later one, so that the transition between the two sub-actions is more fluent. And when the robot action sequence data exhibits anomalies that affect the robot's safety, such as self-collision, these must be avoided so as to ensure the safety of the robot.
In step 105, the continuous actions learned by the robot are determined according to the robot action sequence data after the action continuity splicing.
The robot action sequence data after the action continuity splicing can be used directly as the continuous actions learned by the robot and executed directly on the robot, or saved as fixed actions to be called and executed on demand.
Alternatively, the spliced robot action sequence data may first undergo further adjustments such as data optimization and data correction, and the adjusted robot action sequence data is then determined as the continuous actions learned by the robot. The specific adjustment method is not limited in the present disclosure, but an exemplary adjustment method is given in FIG. 3; as shown in FIG. 3, it includes step 301 and step 302.
In step 301, the robot action sequence data after the action continuity splicing is executed in a digital twin model of the robot, and the robot action sequence data is optimized according to the simulation data of the digital twin model.
In step 302, the robot action sequence data optimized according to the simulation data of the digital twin model is determined as the continuous actions learned by the robot.
The digital twin model is a digital twin agent, identical to the physical robot, built in a virtual mirror world. It can be a geometric model such as a mesh body, or a digital model obtained by simulating the physical properties of the robot itself; the simulated content includes but is not limited to: joint motor simulation, sensor simulation (lidar, depth camera, binocular stereo camera, etc.), self-gravity, collision, and material damping. The behaviors of the digital twin model can be realized by methods such as feedback control, environment perception and state acquisition, and virtual-real synchronization.
The digital twin model can be used to determine whether the robot action sequence data needs to be optimized, by means such as simulation observation, self-collision detection, or abnormal-action judgment, and the data that needs optimization is optimized accordingly; the optimization process can be automatic, or performed by receiving manual correction instructions. Finally, the robot action sequence data optimized according to the simulation data of the digital twin model can be determined as the continuous actions learned by the robot.
Through the above technical solution, without a motion capture device and without planning the robot's trajectory, the robot action sequence data corresponding to the human motion image data can be obtained from 2D human motion image data by matching in the robot atomic action library, and smooth robot actions can be learned quickly and accurately by smoothly connecting and optimizing each action in the robot action sequence data.
FIG. 4 is a flowchart of an action learning method according to another exemplary embodiment of the present disclosure. As shown in FIG. 4, the method further includes step 401 and step 402.
In step 401, the multiple human sub-actions included in the three-dimensional human body posture action data are matched sequentially in action time order, and the robot sub-action corresponding to each human sub-action is determined according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where a human sub-action is composed of one or more of the three-dimensional human postures.
In step 402, the robot action sequence data composed of the robot sub-actions is determined according to the action time order.
A human sub-action is a portion of the three-dimensional human body posture action data, and different sub-actions may have different durations; all the human sub-actions, arranged in action time order, make up the three-dimensional human body posture action data. How the human sub-actions are divided can be determined by the actual matching situation. For example, if the first 2 s of the three-dimensional human body posture action data match a similar atomic action in the robot atomic action library, the first 2 s of data can be determined as one human sub-action, and the subsequent three-dimensional human body posture action data is added frame by frame, starting from the 3rd second, as the data to be matched, so that matching continues in the atomic action library until the data to be matched matches a similar atomic action in the atomic action library. At that point, the portion of the three-dimensional human body posture action data currently used for matching can be determined as one human sub-action. For example, the first 30 frames of 3D human postures of the 3rd second can first be used as the data to be matched in the atomic action library; if there is no matching result, the last 30 frames of 3D human postures of the 3rd second are added to the data to be matched (in the case of 60 frames of 3D human postures per second); if a similar atomic action is then matched, the 3D human body posture action data of the 3rd second can be taken as one human sub-action.
Whether a similar atomic action has been matched can be judged according to the above similarity: for example, an atomic action in the atomic action library whose similarity with the human sub-action currently to be matched is higher than the similarity threshold is determined as an atomic action matching that human sub-action, and that atomic action can then be used as the robot sub-action corresponding to the human sub-action.
The method for determining the similarity may include, but is not limited to, methods such as nearest vector Euclidean distance, minimum variance, and cosine approximation between two pieces of action data.
If there are multiple atomic actions whose similarity is higher than the similarity threshold, the atomic action with the highest similarity can be selected directly; alternatively, the continuity of the actions can be considered, and among those atomic actions, the one with better continuity with the robot sub-action corresponding to the previous human sub-action is taken as the finally matched atomic action. A specific method may be as shown in FIG. 5, including steps 501 to 503.
In step 501, if the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than the similarity threshold, the atomic actions whose similarity with the human sub-action is higher than the similarity threshold are used as candidate atomic actions.
In step 502, the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action is calculated in sequence.
In step 503, according to the similarity and the continuity matching degree, an atomic action that matches the human sub-action is determined among the candidate atomic actions, as the robot sub-action corresponding to the human sub-action.
The method for determining the continuity matching degree may include, but is not limited to, calculating the distance (such as Euclidean distance, variance, or cosine distance) between the candidate atomic action and the robot sub-action corresponding to the previous human sub-action, and the difference between their motion speeds. The weights given to the similarity and the continuity matching degree can be set according to the actual situation.
In addition, if the human sub-action is the first human sub-action included in the three-dimensional human body posture action data, the atomic action with the highest similarity can be selected directly as the atomic action matching that human sub-action.
In one possible implementation, in a case where no atomic action in the robot atomic action library has a similarity with the human sub-action higher than the similarity threshold, the robot sub-action is obtained by mapping the human sub-action.
For example, if matching in the atomic action library starts from the 3rd second of the three-dimensional human body posture action data, and by the end of the last frame no atomic action with a similarity higher than the similarity threshold has been matched, then the three-dimensional human body posture action data after the 3rd second can be directly mapped to the joint motion data of the robot, used as the above mapping action, and finally included as a robot sub-action in the robot action sequence data.
Or, in another possible implementation, if matching starts from the 3rd second of the three-dimensional human body posture action data, and only at the 10th second is an atomic action matched whose similarity is higher than the similarity threshold but whose action duration is only 3 s, then the three-dimensional human body posture action data from the 3rd second to the 7th second can be taken as one human sub-action and mapped to obtain the corresponding robot joint motion data, which is also used as a robot sub-action.
Since the matched atomic action and the corresponding human sub-action may differ in duration when atomic actions are matched, the duration of the robot action sequence data composed of the robot sub-actions may likewise differ from the duration of the three-dimensional human body posture action data.
FIG. 6 is a structural block diagram of an action learning apparatus according to an exemplary embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes: an acquisition module 10 for acquiring human body motion image data; a first determining module 20 for determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order; a matching module 30 for matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data; a splicing module 40 for sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data; and a second determining module 50 for determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
Through the above technical solution, without a motion capture device and without planning the robot's trajectory, the robot action sequence data corresponding to the human motion image data can be obtained from 2D human motion image data by matching in the robot atomic action library, and smooth robot actions can be learned quickly and accurately by smoothly connecting and optimizing each action in the robot action sequence data.
In one possible implementation, the first determining module 20 is further configured to: determine the two-dimensional human motion key points corresponding to each image in the human body motion image data; and determine the three-dimensional human body posture action data according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
In one possible implementation, the matching module 30 includes: a first sub-module for sequentially matching, in action time order, the multiple human sub-actions included in the three-dimensional human body posture action data, and determining the robot sub-action corresponding to each human sub-action according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where a human sub-action is composed of one or more of the three-dimensional human postures; and a second sub-module for determining, in action time order, the robot action sequence data composed of the robot sub-actions.
In one possible implementation, the first sub-module is further configured to: in a case where the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than the similarity threshold, take the atomic actions whose similarity with the human sub-action is higher than the similarity threshold as candidate atomic actions; calculate in sequence the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action; and, according to the similarity and the continuity matching degree, determine among the candidate atomic actions an atomic action that matches the human sub-action, as the robot sub-action corresponding to the human sub-action.
In one possible implementation, the first sub-module is further configured to: in a case where no atomic action in the robot atomic action library has a similarity with the human sub-action higher than the similarity threshold, obtain the robot sub-action by mapping the human sub-action.
In one possible implementation, the splicing module 40 is further configured to: smoothly optimize the robot posture position and robot motion speed at the junction between adjacent robot sub-actions; and/or avoid self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence.
In one possible implementation, the apparatus further includes: an optimization module for executing the robot action sequence data after the action continuity splicing in a digital twin model of the robot, and optimizing the robot action sequence data according to the simulation data of the digital twin model; and the second determining module 50 is further configured to: determine the robot action sequence data optimized according to the simulation data of the digital twin model as the continuous actions learned by the robot.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method, and will not be elaborated here.
FIG. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment. As shown in FIG. 7, the electronic device 700 may include a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to complete all or part of the steps of the above action learning method. The memory 702 is used to store various types of data to support operation on the electronic device 700; such data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contact data, sent and received messages, pictures, audio, video, and so on. The memory 702 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 may include a screen and an audio component, where the screen can be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signal may be further stored in the memory 702 or sent through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, which may be a keyboard, a mouse, buttons, and the like; these buttons can be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or others, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above action learning method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above action learning method are implemented. For example, the computer-readable storage medium can be the above memory 702 including program instructions, which can be executed by the processor 701 of the electronic device 700 to complete the above action learning method.
FIG. 8 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 may be provided as a server. Referring to FIG. 8, the electronic device 800 includes a processor 822, of which there may be one or more, and a memory 832 for storing computer programs executable by the processor 822. The computer program stored in the memory 832 may include one or more modules, each corresponding to a set of instructions. In addition, the processor 822 may be configured to execute the computer program to perform the above action learning method.
Additionally, the electronic device 800 may also include a power supply component 826, which may be configured to perform power management of the electronic device 800, and a communication component 850, which may be configured to implement communication of the electronic device 800, for example, wired or wireless communication. The electronic device 800 may also include an input/output (I/O) interface 858, and may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above action learning method are implemented. For example, the computer-readable storage medium can be the above memory 832 including program instructions, which can be executed by the processor 822 of the electronic device 800 to implement the above action learning method.
In another exemplary embodiment, a computer program product is also provided, the computer program product containing a computer program executable by a programmable apparatus, the computer program having code portions for performing the above action learning method when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple variations of the technical solution of the present disclosure can be made within the scope of the technical concept of the present disclosure, and these simple variations all fall within the protection scope of the present disclosure.
It should also be noted that the specific technical features described in the above specific embodiments can be combined in any suitable manner, provided there is no contradiction. To avoid unnecessary repetition, the various possible combinations are not described separately in the present disclosure.
In addition, the various embodiments of the present disclosure can be combined arbitrarily, as long as they do not depart from the idea of the present disclosure; such combinations should likewise be regarded as content disclosed by the present disclosure.

Claims (17)

  1. An action learning method, characterized in that the method includes:
    acquiring human body motion image data;
    determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order;
    matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
    sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data; and
    determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
  2. The method according to claim 1, characterized in that determining the three-dimensional human body posture action data corresponding to the human body motion image data includes:
    determining the two-dimensional human motion key points corresponding to each image in the human body motion image data;
    determining the three-dimensional human body posture action data according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
  3. The method according to claim 1 or 2, characterized in that matching the three-dimensional human body posture action data with the atomic actions in the robot atomic action library to determine the robot action sequence data corresponding to the human body motion image data includes:
    sequentially matching, in action time order, the multiple human sub-actions included in the three-dimensional human body posture action data, and determining the robot sub-action corresponding to each human sub-action according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where the human sub-action is composed of one or more of the three-dimensional human postures;
    determining, in action time order, the robot action sequence data composed of the robot sub-actions.
  4. The method according to claim 3, characterized in that determining the robot sub-action corresponding to the human sub-action according to the similarity between all the atomic actions in the robot atomic action library and the human sub-action includes:
    in a case where the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than a similarity threshold, taking the atomic actions whose similarity with the human sub-action is higher than the similarity threshold as candidate atomic actions;
    calculating in sequence the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action;
    determining, among the candidate atomic actions and according to the similarity and the continuity matching degree, an atomic action that matches the human sub-action, as the robot sub-action corresponding to the human sub-action.
  5. The method according to claim 4, characterized in that determining the robot sub-action corresponding to the human sub-action according to the similarity between all the atomic actions in the robot atomic action library and the human sub-action further includes:
    in a case where no atomic action whose similarity with the human sub-action is higher than the similarity threshold exists in the robot atomic action library, obtaining the robot sub-action by mapping the human sub-action.
  6. The method according to any one of claims 1-5, characterized in that sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data includes:
    smoothly optimizing the robot posture position and robot motion speed at the junction between adjacent robot sub-actions; and/or
    avoiding self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence.
  7. The method according to any one of claims 1-6, characterized in that the method further includes:
    executing the robot action sequence data after the action continuity splicing in a digital twin model of the robot, and optimizing the robot action sequence data according to simulation data of the digital twin model;
    wherein determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing includes:
    determining the robot action sequence data optimized according to the simulation data of the digital twin model as the continuous actions learned by the robot.
  8. An action learning apparatus, characterized in that the apparatus includes:
    an acquisition module for acquiring human body motion image data;
    a first determining module for determining three-dimensional human body posture action data corresponding to the human body motion image data, the three-dimensional human body posture action data including a plurality of three-dimensional human body postures arranged in action time order;
    a matching module for matching the three-dimensional human body posture action data with atomic actions in a robot atomic action library to determine robot action sequence data corresponding to the human body motion image data, the robot action sequence data being composed of a plurality of robot sub-actions, the robot sub-actions including the atomic actions and/or mapping actions obtained by mapping the three-dimensional human body posture action data;
    a splicing module for sequentially performing action continuity splicing on each robot sub-action in the robot action sequence data;
    a second determining module for determining the continuous actions learned by the robot according to the robot action sequence data after the action continuity splicing.
  9. The apparatus according to claim 8, characterized in that the first determining module is further configured to:
    determine the two-dimensional human motion key points corresponding to each image in the human body motion image data;
    determine the three-dimensional human body posture action data according to the two-dimensional key point sequence data formed by the two-dimensional human motion key points corresponding to the respective images.
  10. The apparatus according to claim 8 or 9, characterized in that the matching module includes:
    a first sub-module for sequentially matching, in action time order, the multiple human sub-actions included in the three-dimensional human body posture action data, and determining the robot sub-action corresponding to each human sub-action according to the similarity between all the atomic actions in the robot atomic action library and that human sub-action, where the human sub-action is composed of one or more of the three-dimensional human postures;
    a second sub-module for determining, in action time order, the robot action sequence data composed of the robot sub-actions.
  11. The apparatus according to claim 10, characterized in that the first sub-module is further configured to:
    in a case where the human sub-action is not the first human sub-action included in the three-dimensional human body posture action data, and there are two or more atomic actions whose similarity with the human sub-action is higher than a similarity threshold, take the atomic actions whose similarity with the human sub-action is higher than the similarity threshold as candidate atomic actions;
    calculate in sequence the degree of continuity matching between each candidate atomic action and the robot sub-action corresponding to the previous human sub-action;
    determine, among the candidate atomic actions and according to the similarity and the continuity matching degree, an atomic action that matches the human sub-action, as the robot sub-action corresponding to the human sub-action.
  12. The apparatus according to claim 11, characterized in that the first sub-module is further configured to:
    in a case where no atomic action whose similarity with the human sub-action is higher than the similarity threshold exists in the robot atomic action library, obtain the robot sub-action by mapping the human sub-action.
  13. The apparatus according to any one of claims 8-12, characterized in that the splicing module is further configured to:
    smoothly optimize the robot posture position and robot motion speed at the junction between adjacent robot sub-actions; and/or
    avoid self-collision anomalies appearing in the robot action sequence data obtained by splicing the robot sub-actions in sequence.
  14. The apparatus according to any one of claims 8-13, characterized in that the apparatus further includes:
    an optimization module for executing the robot action sequence data after the action continuity splicing in a digital twin model of the robot, and optimizing the robot action sequence data according to the simulation data of the digital twin model;
    the second determining module being further configured to:
    determine the robot action sequence data optimized according to the simulation data of the digital twin model as the continuous actions learned by the robot.
  15. A non-volatile computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
  16. An electronic device, characterized by including:
    a memory on which a computer program is stored;
    a processor for executing the computer program in the memory to implement the steps of the method according to any one of claims 1-7.
  17. A computer program product, characterized in that the computer program product contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the steps of the method according to any one of claims 1-7 when executed by the programmable apparatus.
PCT/CN2021/094432 2020-12-28 2021-05-18 Action learning method and apparatus, medium and electronic device WO2022142078A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/566,211 US11999060B2 (en) 2020-12-28 2021-12-30 Action learning method, medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011582786.4A 2020-12-28 2020-12-28 Action learning method and apparatus, medium and electronic device
CN202011582786.4 2020-12-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/566,211 Continuation US11999060B2 (en) 2020-12-28 2021-12-30 Action learning method, medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2022142078A1 true WO2022142078A1 (zh) 2022-07-07

Family

ID=75140742

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094432 WO2022142078A1 (zh) Action learning method and apparatus, medium and electronic device

Country Status (2)

Country Link
CN (1) CN112580582B (zh)
WO (1) WO2022142078A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580582B (zh) 2020-12-28 2023-03-24 达闼机器人股份有限公司 Action learning method and apparatus, medium and electronic device
CN113146634A (zh) 2021-04-25 2021-07-23 达闼机器人有限公司 Robot posture control method, robot, and storage medium
CN113276117B (zh) 2021-05-21 2023-01-03 重庆工程职业技术学院 Automatic control system for an industrial robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160288323A1 (en) * 2015-04-02 2016-10-06 Honda Research Institute Europe Gmbh Method for improving operation of a robot
CN108098780A (zh) 2016-11-24 2018-06-01 广州映博智能科技有限公司 A novel human-motion imitation system for robots
CN110825368A (zh) 2019-11-22 2020-02-21 上海乐白机器人有限公司 Robot control method and system, electronic device, and storage medium
CN112580582A (zh) 2020-12-28 2021-03-30 达闼机器人有限公司 Action learning method and apparatus, medium and electronic device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPP299498A0 (en) 1998-04-15 1998-05-07 Commonwealth Scientific And Industrial Research Organisation Method of tracking and sensing position of objects
CN100541540C (zh) 2006-09-14 2009-09-16 浙江大学 Method for recovering three-dimensional human motion from video based on silhouettes and end nodes
CN101187990A (zh) 2007-12-14 2008-05-28 华南理工大学 A conversational robot system
CN101692284B (zh) 2009-07-24 2012-01-04 西安电子科技大学 Three-dimensional human motion tracking method based on a quantum immune clone algorithm
CA2794226C (en) 2012-10-31 2020-10-20 Queen's University At Kingston Automated intraoperative ultrasound calibration
CN106078752B (zh) 2016-06-27 2019-03-19 西安电子科技大学 Kinect-based human behavior imitation method for a humanoid robot
CN106131493A (zh) 2016-07-20 2016-11-16 绥化学院 Somatosensory control system for a virtual-reality telepresence intelligent firefighting robot
CN108858188B (zh) 2018-06-20 2020-10-27 华南理工大学 Human body turning and displacement mapping method applied to a humanoid robot
EP3628453A1 (en) 2018-09-28 2020-04-01 Siemens Aktiengesellschaft A control system and method for a robot
CN111860243A (zh) 2020-07-07 2020-10-30 华中师范大学 A robot action sequence generation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160288323A1 (en) * 2015-04-02 2016-10-06 Honda Research Institute Europe Gmbh Method for improving operation of a robot
CN108098780A (zh) 2016-11-24 2018-06-01 广州映博智能科技有限公司 A novel human-motion imitation system for robots
CN110825368A (zh) 2019-11-22 2020-02-21 上海乐白机器人有限公司 Robot control method and system, electronic device, and storage medium
CN112580582A (zh) 2020-12-28 2021-03-30 达闼机器人有限公司 Action learning method and apparatus, medium and electronic device

Also Published As

Publication number Publication date
CN112580582A (zh) 2021-03-30
CN112580582B (zh) 2023-03-24

Similar Documents

Publication Publication Date Title
WO2022142078A1 (zh) Action learning method and apparatus, medium and electronic device
KR102365465B1 (ko) Determination and utilization of corrections to robot actions
CN108873768B (zh) Task execution system and method, learning apparatus and method, and recording medium
CN108986801B (zh) Human-computer interaction method and apparatus, and human-computer interaction terminal
CN106780608B (zh) Pose information estimation method and apparatus, and movable device
CN110246182B (zh) Vision-based global map positioning method and apparatus, storage medium, and device
KR20200014368A (ko) Update of a local feature model based on correction of robot actions
JP7117237B2 (ja) Robot control device, robot system, and robot control method
CN112847336B (zh) Action learning method and apparatus, storage medium and electronic device
Akinola et al. Learning precise 3d manipulation from multiple uncalibrated cameras
KR102517023B1 (ko) Machine control system, machine, and communication method
WO2019128496A1 (zh) Controlling motion of a device
CN110363811B (zh) Control method and apparatus for a grasping device, storage medium and electronic device
JP2023544215A (ja) Pose disambiguation
CN113959444A (zh) Navigation method, apparatus and medium for an unmanned device, and unmanned device
JP2012071394A (ja) Simulation system and simulation program therefor
US11999060B2 (en) Action learning method, medium, and electronic device
JP2019149621A (ja) Information processing device, information processing method, and program
US12020444B2 (en) Production line monitoring method and monitoring system thereof
CN111784842B (zh) Three-dimensional reconstruction method, apparatus and device, and readable storage medium
CN114220172A (zh) Lip movement recognition method and apparatus, electronic device and storage medium
JP7372076B2 (ja) Image processing system
CN110962120B (zh) Network model training method and apparatus, and robotic arm motion control method and apparatus
Chaudhary et al. Controlling a swarm of unmanned aerial vehicles using full-body k-nearest neighbor based action classifier
CN114571463B (zh) Motion detection method and apparatus, readable storage medium and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21912839

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.11.2023)