CN115431264A - Interactive motion control method and system with individual characteristics - Google Patents

Interactive motion control method and system with individual characteristics

Info

Publication number
CN115431264A
CN115431264A (application CN202210956939.XA)
Authority
CN
China
Prior art keywords
motion
robot
user
time
acceleration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210956939.XA
Other languages
Chinese (zh)
Other versions
CN115431264B (en)
Inventor
翟超
李思远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Geosciences filed Critical China University of Geosciences
Priority to CN202210956939.XA priority Critical patent/CN115431264B/en
Publication of CN115431264A publication Critical patent/CN115431264A/en
Application granted granted Critical
Publication of CN115431264B publication Critical patent/CN115431264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the field of robot motion control and provides an interactive motion control method and system with individual characteristics, comprising the following steps: S1: acquiring personal motion characteristics, designing an intention coupling term, and constructing a virtual teacher motion frame from the personal motion characteristics and the intention coupling term; S2: the virtual teacher motion frame acquires the user's motion trajectory in real time and controls the robot to follow it. The invention can select different personal motion characteristics for different participants to control the virtual teacher's teaching process, making each participant's learning more targeted and thereby improving the effect of motor learning. In addition, different motion modes can be taught by changing the intention coupling term, enabling the introduction of various teaching models; compared with a real teacher, the virtual teacher achieves an individualized learning process at a lower teaching cost.

Description

Interactive motion control method and system with individual characteristics
Technical Field
The invention relates to the field of robot motion control, in particular to an interactive motion control method and system with individual characteristics.
Background
Interpersonal coordination is a natural bridge for individuals to collaborate with others; it arises from a dynamic and complex set of specialized processes involving adaptive non-verbal behaviors, emotions, actions, and the like. Research on human interpersonal interaction shows that different individuals differ in their level of motion coordination, and that the coordination level between individuals with similar motion characteristics is higher. It is therefore desirable to design a virtual robot that can interact with humans, guide the participants' motion, and try to improve the individual participants' motion coordination level.
In real life, new behaviors are typically acquired with the help of a more skilled partner, for example when learning new interpersonal coordination patterns in social dances or choruses. Previous studies have explored how participants learn new stable coordination patterns using rhythmic finger flexion-extension movements. The common stable modes of such finger movements are in-phase or anti-phase between the two fingers, and the participant learns a stable relative phase generally lying between the 0° in-phase and 180° anti-phase modes. If the learned coordination pattern can still be performed after a perturbation, the learning effect is good.
A series of previous studies has shown that sensorimotor synchronization between individuals may play a crucial role in successful social interaction, and both spontaneous and induced synchrony have been shown to strongly affect the quality of social interaction. A higher level of motion coordination means a better motion interaction effect. Mirror games of motion interaction between human individuals and robots have been adopted as a typical paradigm by several research groups, and these studies all show that two moving parties with similar individual motion characteristics exhibit greater synchrony. By selecting different control inputs, the robot can serve different functions. Building on synchronized motion, introducing an intention coupling term into the control input lets the robot act as a virtual teacher: during interactive motion it can teach the participant a new motion mode and gradually guide the participant toward a motion mode with a high coordination level. This provides a new idea for improving an individual's motion coordination level: a robot, i.e. a virtual player, is designed to take the role of a virtual teacher by being given a definite intention coupling term that guides participants to learn a new motion mode. At the same time, the virtual teacher can be endowed with personal motion characteristics, so that its motion is closer to human motion and better guides humans in motion coordination and learning.
The prior art does not consider the influence of personal motion characteristics on motion coordination, so the virtual player's behavior is not lifelike enough, and the coordinated motion achieved in experiments falls short of expectations.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
In order to solve the above technical problem, the present invention provides an interactive motion control method with individual characteristics, comprising:
S1: acquiring personal motion characteristics, designing an intention coupling term, and constructing a virtual teacher motion frame from the personal motion characteristics and the intention coupling term;
S2: the virtual teacher motion frame acquires the user's motion trajectory in real time and controls the robot to follow it.
Preferably:
The intention coupling term $C_{int}$ is expressed as:

[inline equation image in the original publication defining $C_{int}$]

where $c(t)$ is an exponential function of the personal motion characteristics, $t$ is the sampling time, $\phi$ is the phase difference between the motion made by the user and the motion made by the robot, $\dot{x}_{hp}$ is the user's velocity, $x_{hp}$ is the user's position, and $\dot{x}_{vp}$ is the robot's velocity.
Preferably, step S2 specifically includes:
S21: the virtual teacher motion frame acquires the user's position and the robot's acceleration;
S22: the user's velocity and acceleration are computed from the user's position, and the robot's position and velocity are computed from the robot's acceleration;
S23: the virtual teacher motion frame controls the robot to follow the user using the user's velocity, the user's acceleration, the robot's position and the robot's velocity, and adjusts the robot's motion through boundary conditions.
Preferably, in step S22, the user's velocity and acceleration are calculated as:

$$\dot{x}_{hp}(k) = \frac{x_{hp}(k) - x_{hp}(k-1)}{t}, \qquad \ddot{x}_{hp}(k) = \frac{\dot{x}_{hp}(k) - \dot{x}_{hp}(k-1)}{t}$$

where $k$ is the sample index, $\dot{x}_{hp}(k)$ and $\dot{x}_{hp}(k-1)$ are the user's velocity at the $k$-th and $(k-1)$-th samples, $\ddot{x}_{hp}(k)$ is the user's acceleration at the $k$-th sample, $x_{hp}(k)$ and $x_{hp}(k-1)$ are the user's position at the $k$-th and $(k-1)$-th samples, and $t$ is the sampling time.
Preferably, in step S22, the robot's position and velocity are calculated as:

$$\dot{x}_{vp}(k) = \dot{x}_{vp}(k-1) + \ddot{x}_{vp}(k-1)\,t, \qquad x_{vp}(k) = x_{vp}(k-1) + \dot{x}_{vp}(k-1)\,t$$

where $k$ is the sample index, $\dot{x}_{vp}(k)$ and $\dot{x}_{vp}(k-1)$ are the robot's velocity at the $k$-th and $(k-1)$-th samples, $x_{vp}(k)$ and $x_{vp}(k-1)$ are the robot's position at the $k$-th and $(k-1)$-th samples, $\ddot{x}_{vp}(k-1)$ is the robot's acceleration at the $(k-1)$-th sample, and $t$ is the sampling time.
Preferably, in step S23, the virtual teacher motion frame controls the robot's motion according to:

[two inline equation images in the original publication: the HKB-oscillator control law for the robot's acceleration $\ddot{x}_{vp}$ and the definition of the transition-time function $c(t)$]

where $\ddot{x}_{vp}$ is the robot's acceleration; $\dot{x}_{vp}$ is the robot's velocity; $\alpha$, $\beta$ and $\gamma$ are motion control parameters and $\omega$ is the eigenfrequency of the motion; $C_{int}$ is the intention coupling term; $V$ is the pre-recorded personal velocity time series; $c(t)$ is the transition-time function; $T_0$ is the transition time; $T_1$ is the end time of the transition; and $\tau$ is a time constant.
Preferably, in step S23, the boundary conditions are expressed as:

[two inline equation images in the original publication: update rules that reverse the robot's commanded acceleration when its position comes within $\varepsilon$ of either boundary]

where $k$ is the sample index, $\ddot{x}_{vp}(k-1)$ and $\ddot{x}_{vp}(k)$ are the robot's acceleration at the $(k-1)$-th and $k$-th samples, $x_{vp}$ is the robot's position, $\varepsilon$ is the control trigger threshold, and $L(1) = -1$, $L(2) = 1$ are the boundaries of the robot's motion.
An interactive motion control system having individual features, comprising:
a frame construction module, configured to acquire personal motion characteristics, design an intention coupling term, and construct a virtual teacher motion frame from the personal motion characteristics and the intention coupling term; and
a motion control module, configured so that the virtual teacher motion frame acquires the user's motion trajectory in real time and controls the robot to follow it.
The invention has the following beneficial effects:
1. The invention can select different personal motion characteristics for different participants to control the virtual teacher's teaching process, making each participant's learning more targeted and thereby improving the effect of motor learning;
2. In addition, different motion modes can be taught by changing the intention coupling term, enabling the introduction of various teaching models; compared with a real teacher, the virtual teacher achieves an individualized learning process at a lower teaching cost.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of simple harmonic motions made by a user and a robot;
FIG. 3 is a position time series and velocity time series of user and robot motion;
FIG. 4 is a statistical analysis of RMSE (position synchronization error) values;
FIG. 5 is a statistical analysis of CV (phase stability) values;
The implementation, functional features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Referring to fig. 1, the present invention provides an interactive motion control method having an individual characteristic, including:
S1: acquiring personal motion characteristics, designing an intention coupling term, and constructing a virtual teacher motion frame from the personal motion characteristics and the intention coupling term;
Specifically, the personal motion characteristics are obtained as follows: a user independently performs one-dimensional simple harmonic motion with a mouse; the position time series of the motion is recorded and processed to obtain the user's velocity time series, from which a velocity probability distribution is derived; this velocity probability distribution describes the user's personal motion characteristics (previous research has shown that each person's personal motion characteristics remain invariant), as sketched below.
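A minimal sketch of this acquisition step, assuming a 100 Hz sampling rate and a synthetic harmonic trace standing in for recorded mouse data (the function name, sampling period and bin count are our assumptions, not the patent's):

```python
import numpy as np

def velocity_distribution(positions, dt=0.01, bins=50):
    """Estimate the user's velocity probability distribution from a recorded
    one-dimensional position time series."""
    velocities = np.diff(positions) / dt                       # velocity series
    pdf, edges = np.histogram(velocities, bins=bins, density=True)
    return pdf, edges

# Example with a synthetic simple harmonic trace in place of mouse data
t = np.arange(0.0, 10.0, 0.01)
x = 0.8 * np.sin(2 * np.pi * 0.5 * t)
pdf, edges = velocity_distribution(x)
```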
S2: the virtual teacher motion frame acquires the user's motion trajectory in real time and controls the robot to follow the user's motion trajectory;
Specifically, the robot's motion control system uses a Haken-Kelso-Bunz (HKB) oscillator, which combines a Rayleigh oscillator and a Van der Pol oscillator. The HKB oscillator has a stable limit cycle, and its motion amplitude decreases as the motion velocity increases, which matches the periodic oscillation characteristics of the human trunk.
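For illustration, a minimal forward-Euler simulation of an uncoupled HKB oscillator in its canonical form $\ddot{x} + (\alpha x^2 + \beta \dot{x}^2 - \gamma)\dot{x} + \omega^2 x = u$; all parameter values below are assumptions:

```python
import numpy as np

def hkb_step(x, v, alpha=1.0, beta=1.0, gamma=1.0, omega=2 * np.pi,
             u=0.0, dt=0.001):
    """One Euler step of the HKB oscillator, whose damping combines a
    Van der Pol term (x**2 * v) and a Rayleigh term (v**2 * v)."""
    a = u - (alpha * x**2 + beta * v**2 - gamma) * v - omega**2 * x
    return x + v * dt, v + a * dt

x, v = 0.1, 0.0
for _ in range(20000):        # transient dies out, leaving the stable limit cycle
    x, v = hkb_step(x, v)
```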
In this embodiment:
The intention coupling term $C_{int}$ is expressed as:

[inline equation image in the original publication defining $C_{int}$]

where $c(t)$ is an exponential function of the personal motion characteristics, $t$ is the sampling time, $\phi$ is the phase difference between the motion made by the user and the motion made by the robot, $\dot{x}_{hp}$ is the user's velocity, $x_{hp}$ is the user's position, and $\dot{x}_{vp}$ is the robot's velocity.
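Because the published expression survives only as an image, the sketch below shows one plausible reading consistent with the listed variables (a coupling that drives the robot's velocity toward a phase-shifted copy of the user's motion); it is an assumption, not the patented formula:

```python
import numpy as np

def intention_coupling(c_t, phi, x_hp, v_hp, v_vp, omega=2 * np.pi):
    """Hypothetical form of C_int. c_t is the value of the transition
    function c(t) at the current time; phi is the taught phase difference."""
    # velocity of the user's motion advanced by phi (exact if x_hp is sinusoidal)
    target_v = v_hp * np.cos(phi) - omega * x_hp * np.sin(phi)
    return c_t * (target_v - v_vp)
```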
In this embodiment, step S2 specifically includes:
S21: the virtual teacher motion frame acquires the user's position and the robot's acceleration;
S22: the user's velocity and acceleration are computed from the user's position, and the robot's position and velocity are computed from the robot's acceleration;
S23: the virtual teacher motion frame controls the robot to follow the user using the user's velocity, the user's acceleration, the robot's position and the robot's velocity, and adjusts the robot's motion through boundary conditions;
Specifically, the virtual teacher motion frame controls the robot so that the simple harmonic motions performed by the user and the robot always maintain a set phase difference; the form of the motion is shown in fig. 2, where HP is the user's position and VP is the robot's position. The position and velocity time series of the user's and the robot's motion are shown in fig. 3.
In this embodiment, in step S22, the user's velocity and acceleration are calculated as:

$$\dot{x}_{hp}(k) = \frac{x_{hp}(k) - x_{hp}(k-1)}{t}, \qquad \ddot{x}_{hp}(k) = \frac{\dot{x}_{hp}(k) - \dot{x}_{hp}(k-1)}{t}$$

where $k$ is the sample index, $\dot{x}_{hp}(k)$ and $\dot{x}_{hp}(k-1)$ are the user's velocity at the $k$-th and $(k-1)$-th samples, $\ddot{x}_{hp}(k)$ is the user's acceleration at the $k$-th sample, $x_{hp}(k)$ and $x_{hp}(k-1)$ are the user's position at the $k$-th and $(k-1)$-th samples, and $t$ is the sampling time.
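Read as standard backward differences, the two formulas translate directly into code; a minimal sketch with our own variable names:

```python
def user_kinematics(x_hp_k, x_hp_prev, v_hp_prev, t):
    """Backward-difference estimates of the user's velocity and acceleration
    from sampled positions, following the two formulas above."""
    v_hp_k = (x_hp_k - x_hp_prev) / t    # velocity at sample k
    a_hp_k = (v_hp_k - v_hp_prev) / t    # acceleration at sample k
    return v_hp_k, a_hp_k
```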
In this embodiment, in step S22, the robot's position and velocity are calculated as:

$$\dot{x}_{vp}(k) = \dot{x}_{vp}(k-1) + \ddot{x}_{vp}(k-1)\,t, \qquad x_{vp}(k) = x_{vp}(k-1) + \dot{x}_{vp}(k-1)\,t$$

where $k$ is the sample index, $\dot{x}_{vp}(k)$ and $\dot{x}_{vp}(k-1)$ are the robot's velocity at the $k$-th and $(k-1)$-th samples, $x_{vp}(k)$ and $x_{vp}(k-1)$ are the robot's position at the $k$-th and $(k-1)$-th samples, $\ddot{x}_{vp}(k-1)$ is the robot's acceleration at the $(k-1)$-th sample, and $t$ is the sampling time.
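Correspondingly, the robot's state is advanced by forward-Euler integration of its commanded acceleration; a minimal sketch:

```python
def robot_kinematics(x_vp_prev, v_vp_prev, a_vp_prev, t):
    """Forward-Euler update of the robot's position and velocity from the
    acceleration commanded at the previous sample."""
    v_vp_k = v_vp_prev + a_vp_prev * t
    x_vp_k = x_vp_prev + v_vp_prev * t
    return x_vp_k, v_vp_k
```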
In this embodiment, in step S23, the virtual teacher motion frame controls the robot's motion according to:

[two inline equation images in the original publication: the HKB-oscillator control law for the robot's acceleration $\ddot{x}_{vp}$ and the definition of the transition-time function $c(t)$]

where $\ddot{x}_{vp}$ is the robot's acceleration; $\dot{x}_{vp}$ is the robot's velocity; $\alpha$, $\beta$ and $\gamma$ are motion control parameters and $\omega$ is the eigenfrequency of the motion; $C_{int}$ is the intention coupling term used to maintain a particular motion coordination pattern; $V$ is the pre-recorded personal velocity time series representing the personal motion characteristics; $c(t)$ is the transition-time function governing the learning process and the subsequent autonomous movement process; $T_0$ is the transition time, preferably set to 30 s; $T_1$ is the end time of the transition; and $\tau$ is a time constant.
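The published control-law equations are likewise images, so their exact composition is not reproduced here. The sketch below is one plausible arrangement of the stated ingredients (the HKB vector field named in the description, the intention coupling sketched above, and an exponentially decaying c(t)); every concrete form and parameter value in it is an assumption:

```python
import numpy as np

def c_transition(t, T0=30.0, T1=60.0, tau=5.0):
    """Assumed transition function c(t): full strength during the learning
    phase (t < T0), exponential decay between T0 and T1, zero afterwards.
    Only T0 = 30 s is suggested by the patent; T1 and tau are illustrative."""
    if t < T0:
        return 1.0
    if t > T1:
        return 0.0
    return np.exp(-(t - T0) / tau)

def virtual_teacher_accel(t, x_vp, v_vp, x_hp, v_hp, phi,
                          alpha=1.0, beta=1.0, gamma=1.0, omega=2 * np.pi):
    """Hypothetical control step: HKB dynamics driven by the intention
    coupling C_int. The personal velocity sequence V would add a further
    drive term, omitted here because its published form is not visible."""
    c_t = c_transition(t)
    u = intention_coupling(c_t, phi, x_hp, v_hp, v_vp, omega)
    return u - (alpha * x_vp**2 + beta * v_vp**2 - gamma) * v_vp - omega**2 * x_vp
```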
In this embodiment, in step S23, the boundary conditions are expressed as:

[two inline equation images in the original publication: update rules that reverse the robot's commanded acceleration when its position comes within $\varepsilon$ of either boundary]

where $k$ is the sample index, $\ddot{x}_{vp}(k-1)$ and $\ddot{x}_{vp}(k)$ are the robot's acceleration at the $(k-1)$-th and $k$-th samples, $x_{vp}$ is the robot's position, $\varepsilon$ is the control trigger threshold, and $L(1) = -1$, $L(2) = 1$ are the boundaries of the robot's motion. When the robot's position approaches or crosses either boundary, the motion planning module inside the robot updates in time so that the robot moves in the opposite direction.
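A minimal sketch of one way to realise this rule; since the published expressions are images, the reflection logic and the default threshold below are assumptions:

```python
def apply_boundaries(x_vp, a_vp, eps=0.05, boundaries=(-1.0, 1.0)):
    """Flip the commanded acceleration back into the workspace when the
    robot's position nears or crosses a boundary (L(1) = -1, L(2) = 1);
    eps plays the role of the trigger threshold epsilon."""
    lo, hi = boundaries
    if x_vp <= lo + eps:
        return abs(a_vp)     # accelerate back toward the interior
    if x_vp >= hi - eps:
        return -abs(a_vp)
    return a_vp
```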
After long-term motion interaction with the robot, the user acquires the ability to perceive and imitate the behavior of others, i.e., social ability. To verify this ability, a series of performance indexes is designed to measure the improvement of the user's social ability.
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(x_{1,j} - x_{2,j}\right)^{2}}$$

where $x_{1,j}$ is the robot's position at the $j$-th sample, $x_{2,j}$ is the user's position at the $j$-th sample, and $N$ is the total number of samples in the experiment. The RMSE (position synchronization error) describes the position synchronization of the two moving parties; the lower the RMSE, the better the position synchronization. The RMSE of the robot in the training phase and the demonstration phase under four different personal motion characteristics is shown in fig. 4.
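Assuming the standard root-mean-square form implied by the name, the index can be computed as follows:

```python
import numpy as np

def rmse(x_robot, x_user):
    """Position synchronization error between the two sampled trajectories;
    lower values mean better position synchronization."""
    x_robot, x_user = np.asarray(x_robot), np.asarray(x_user)
    return float(np.sqrt(np.mean((x_robot - x_user) ** 2)))
```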
$$\mathrm{CV} = \left\| \frac{1}{N}\sum_{i=1}^{N} e^{\,\mathrm{i}\,\Delta\Phi_{i}} \right\|_{2}$$

where $\Delta\Phi_i$ is the phase difference between the participant's and the robot's motion trajectories at the $i$-th sample, and $\|\cdot\|_2$ is the two-norm applied to the average over all samples. The CV (phase stability) measures the stability of the phase relation between the two motion trajectories; the higher the CV, the more stable the motion phase. The CV of the robot in the training phase and the demonstration phase under four different personal motion characteristics is shown in fig. 5.
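Assuming CV is the mean resultant length of the sampled phase differences (the two-norm of the averaged unit phasors, which is indeed higher when the phase difference is more constant), a minimal sketch:

```python
import numpy as np

def phase_stability_cv(delta_phi):
    """Length of the mean resultant vector of the phase differences:
    1 for a perfectly constant phase difference, near 0 for a uniformly
    drifting one."""
    z = np.exp(1j * np.asarray(delta_phi))
    return float(np.abs(np.mean(z)))
```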
The present invention further provides an interactive motion control system with individual characteristics, comprising:
a frame construction module, configured to acquire personal motion characteristics, design an intention coupling term, and construct a virtual teacher motion frame from the personal motion characteristics and the intention coupling term; and
a motion control module, configured so that the virtual teacher motion frame acquires the user's motion trajectory in real time and controls the robot to follow it.
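Tying the sketches above together, a purely illustrative skeleton of the two modules might look as follows; all class and method names are ours, not the patent's:

```python
import numpy as np

class FrameConstructionModule:
    """Holds the ingredients of the virtual teacher motion frame: the
    pre-recorded personal velocity signature V and the taught phase phi."""
    def __init__(self, velocity_signature, phi):
        self.velocity_signature = velocity_signature
        self.phi = phi

class MotionControlModule:
    """Samples the user's trajectory in real time and drives the robot,
    reusing the kinematics, control and boundary sketches above."""
    def __init__(self, frame, dt=0.01):
        self.frame = frame
        self.dt = dt

    def step(self, t, x_hp, v_hp, x_vp, v_vp, a_vp_prev):
        # advance the robot with the acceleration commanded last sample
        x_vp, v_vp = robot_kinematics(x_vp, v_vp, a_vp_prev, self.dt)
        # compute the new commanded acceleration, respecting the boundaries
        a_vp = virtual_teacher_accel(t, x_vp, v_vp, x_hp, v_hp, self.frame.phi)
        a_vp = apply_boundaries(x_vp, a_vp)
        return x_vp, v_vp, a_vp

# Example wiring (values are illustrative)
frame = FrameConstructionModule(velocity_signature=None, phi=np.pi / 2)
controller = MotionControlModule(frame)
```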
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or system comprising that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments. In claims enumerating several units, several of these units can be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and the accompanying drawings, or any direct or indirect application in other related technical fields, is likewise included in the scope of patent protection of the present invention.

Claims (8)

1. An interactive motion control method having an individual characteristic, comprising:
S1: acquiring personal motion characteristics, designing an intention coupling term, and constructing a virtual teacher motion frame from the personal motion characteristics and the intention coupling term;
S2: the virtual teacher motion frame acquiring the user's motion trajectory in real time and controlling the robot to follow the user's motion trajectory.
2. The interactive motion control method with individual characteristics according to claim 1, wherein:
the intention coupling term $C_{int}$ is expressed as:

[inline equation image in the original publication defining $C_{int}$]

where $c(t)$ is an exponential function of the personal motion characteristics, $t$ is the sampling time, $\phi$ is the phase difference between the motion made by the user and the motion made by the robot, $\dot{x}_{hp}$ is the user's velocity, $x_{hp}$ is the user's position, and $\dot{x}_{vp}$ is the robot's velocity.
3. The interactive motion control method with individual characteristics according to claim 1, wherein step S2 specifically comprises:
S21: the virtual teacher motion frame acquires the user's position and the robot's acceleration;
S22: the user's velocity and acceleration are computed from the user's position, and the robot's position and velocity are computed from the robot's acceleration;
S23: the virtual teacher motion frame controls the robot to follow the user using the user's velocity, the user's acceleration, the robot's position and the robot's velocity, and adjusts the robot's motion through boundary conditions.
4. The interactive motion control method with individual characteristics according to claim 3, wherein in step S22 the user's velocity and acceleration are calculated as:

$$\dot{x}_{hp}(k) = \frac{x_{hp}(k) - x_{hp}(k-1)}{t}, \qquad \ddot{x}_{hp}(k) = \frac{\dot{x}_{hp}(k) - \dot{x}_{hp}(k-1)}{t}$$

where $k$ is the sample index, $\dot{x}_{hp}(k)$ and $\dot{x}_{hp}(k-1)$ are the user's velocity at the $k$-th and $(k-1)$-th samples, $\ddot{x}_{hp}(k)$ is the user's acceleration at the $k$-th sample, $x_{hp}(k)$ and $x_{hp}(k-1)$ are the user's position at the $k$-th and $(k-1)$-th samples, and $t$ is the sampling time.
5. The interactive motion control method with individual characteristics according to claim 3, wherein in step S22 the robot's position and velocity are calculated as:

$$\dot{x}_{vp}(k) = \dot{x}_{vp}(k-1) + \ddot{x}_{vp}(k-1)\,t, \qquad x_{vp}(k) = x_{vp}(k-1) + \dot{x}_{vp}(k-1)\,t$$

where $k$ is the sample index, $\dot{x}_{vp}(k)$ and $\dot{x}_{vp}(k-1)$ are the robot's velocity at the $k$-th and $(k-1)$-th samples, $x_{vp}(k)$ and $x_{vp}(k-1)$ are the robot's position at the $k$-th and $(k-1)$-th samples, $\ddot{x}_{vp}(k-1)$ is the robot's acceleration at the $(k-1)$-th sample, and $t$ is the sampling time.
6. The interactive motion control method with individual characteristics according to claim 3, wherein in step S23 the virtual teacher motion frame controls the robot's motion according to:

[two inline equation images in the original publication: the HKB-oscillator control law for the robot's acceleration $\ddot{x}_{vp}$ and the definition of the transition-time function $c(t)$]

where $\ddot{x}_{vp}$ is the robot's acceleration; $\dot{x}_{vp}$ is the robot's velocity; $\alpha$, $\beta$ and $\gamma$ are motion control parameters and $\omega$ is the eigenfrequency of the motion; $C_{int}$ is the intention coupling term; $V$ is the pre-recorded personal velocity time series; $c(t)$ is the transition-time function; $T_0$ is the transition time; $T_1$ is the end time of the transition; and $\tau$ is a time constant.
7. The interactive motion control method with individual characteristics according to claim 3, wherein in step S23 the boundary conditions are expressed as:

[two inline equation images in the original publication: update rules that reverse the robot's commanded acceleration when its position comes within $\varepsilon$ of either boundary]

where $k$ is the sample index, $\ddot{x}_{vp}(k-1)$ and $\ddot{x}_{vp}(k)$ are the robot's acceleration at the $(k-1)$-th and $k$-th samples, $x_{vp}$ is the robot's position, $\varepsilon$ is the control trigger threshold, and $L(1) = -1$, $L(2) = 1$ are the boundaries of the robot's motion.
8. An interactive motion control system with individual characteristics, comprising:
a frame construction module, configured to acquire personal motion characteristics, design an intention coupling term, and construct a virtual teacher motion frame from the personal motion characteristics and the intention coupling term; and
a motion control module, configured so that the virtual teacher motion frame acquires the user's motion trajectory in real time and controls the robot to follow the user's motion trajectory.
CN202210956939.XA 2022-08-10 2022-08-10 Interactive motion control method and system with individual features Active CN115431264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210956939.XA CN115431264B (en) 2022-08-10 2022-08-10 Interactive motion control method and system with individual features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210956939.XA CN115431264B (en) 2022-08-10 2022-08-10 Interactive motion control method and system with individual features

Publications (2)

Publication Number Publication Date
CN115431264A (en) 2022-12-06
CN115431264B CN115431264B (en) 2024-07-09

Family

ID=84242213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210956939.XA Active CN115431264B (en) 2022-08-10 2022-08-10 Interactive motion control method and system with individual features

Country Status (1)

Country Link
CN (1) CN115431264B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005297095A (en) * 2004-04-07 2005-10-27 Sony Corp Robot device and its action comparison method
CN111009318A (en) * 2019-11-25 2020-04-14 上海交通大学 Virtual reality technology-based autism training system, method and device
CN112123334A (en) * 2020-08-24 2020-12-25 中国地质大学(武汉) Interactive arm control method and system based on event-driven mechanism
CN113096805A (en) * 2021-04-12 2021-07-09 华中师范大学 Autism emotion cognition and intervention system
CN113771043A (en) * 2021-09-30 2021-12-10 上海傅利叶智能科技有限公司 Control method and device for enabling robot to follow virtual object and rehabilitation robot
CN113858201A (en) * 2021-09-29 2021-12-31 清华大学 Intention-driven adaptive impedance control method, system, device, storage medium and robot
CN114675541A (en) * 2022-03-31 2022-06-28 中国地质大学(武汉) Biological heuristic two-dimensional limb cooperative control method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Jianhui; CHEN Zhuoming; ZHOU Ping; HUANG Weixin; WANG Yuyi: "Design and Development of a Multi-sensory Interactive Training System", China Medical Devices, no. 10, 15 October 2009 (2009-10-15), pages 20-44 *

Also Published As

Publication number Publication date
CN115431264B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
Bhattacharya et al. Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents
Argall et al. A survey of robot learning from demonstration
Beer Computational and Dynamical Languages for Autonomous Agents
Clegg et al. Learning to collaborate from simulation for robot-assisted dressing
Mochizuki et al. Developmental human-robot imitation learning of drawing with a neuro dynamical system
Lebourque et al. A complete system for the specification and the generation of sign language gestures
Luo et al. A generalized robotic handwriting learning system based on dynamic movement primitives (dmps)
Dong et al. Passive bimanual skills learning from demonstration with motion graph attention networks
Solis et al. Reactive robot system using a haptic interface: an active interaction to transfer skills from the robot to unskilled persons
Zhang et al. Learning robust point-to-point motions adversarially: A stochastic differential equation approach
CN115431264B (en) Interactive motion control method and system with individual features
Guo et al. Exploiting LSTM-RNNs and 3D skeleton features for hand gesture recognition
Zhou et al. Modeling of endpoint feedback learning implemented through point-to-point learning control
Stubbe Articulating novelty in science and art: The comparative technography of a robotic hand and a media art installation
Reiter et al. Spacecraft detection avoidance maneuver optimization using reinforcement learning
Hosseini et al. “Let There Be Intelligence!”-A Novel Cognitive Architecture for Teaching Assistant Social Robots
Ruiz-del-Solar et al. Analyzing the human-robot interaction abilities of a general-purpose social robot in different naturalistic environments
Kaerlein Social Robots Should Mediate, Not Replace, Social Interactions
Alemi “Let There Be Intelligence!”-A Novel Cognitive Architecture for Teaching Assistant Social Robots
Cheng et al. A framework of an agent planning with reinforcement learning for e-pet
Xu et al. Robots Learn to Write via Human–Robot Interaction
Neff Aesthetic exploration and refinement: A computational framework for expressive character animation
Weststeijn et al. Painting as" Reall Performance" in Rembrandt's Studio
Murata et al. Analysis of imitative interactions between humans and a robot with a neuro-dynamical system
Nemoto et al. Design of Humanity by the Concept of Artificial Personalities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant