CN109291052B - Massage manipulator training method based on deep reinforcement learning - Google Patents

Massage manipulator training method based on deep reinforcement learning

Info

Publication number
CN109291052B
CN109291052B
Authority
CN
China
Prior art keywords
action
pressure
massage
data
pressure value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811261282.5A
Other languages
Chinese (zh)
Other versions
CN109291052A (en)
Inventor
范一诺
王翔宇
丁萌
任晓惠
汪浩
陆佃杰
张桂娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN201811261282.5A
Publication of CN109291052A
Application granted
Publication of CN109291052B
Expired - Fee Related
Anticipated expiration

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1628 - Programme controls characterised by the control loop
    • B25J 9/163 - Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 - Manipulators not otherwise provided for
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/40 - ICT specially adapted for the management of medical equipment or devices, e.g. scheduling maintenance or upgrades
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/60 - ICT specially adapted for the operation of medical equipment or devices
    • G16H 40/63 - ICT specially adapted for the operation of medical equipment or devices for local operation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Massaging Devices (AREA)

Abstract

The invention discloses a massage manipulator training method based on deep reinforcement learning, which solves the problems in the prior art that massage manipulator actions exist only in simulation and that massage actions are not accurate enough, with the effects of enhancing the skill of the massage manipulator, providing professional, accurate massage, and reducing the fatigue of manual massage. The technical scheme is as follows: collect action and pressure data, process the data, construct a reference action set and a reference pressure set, and set a comfort range for the pressure value; input the data, the reference actions and the reference pressures into a neural network for prediction and decision-making, execute the action value and pressure value corresponding to the decision output by the neural network, and compare them with the reference actions and the pressure-value comfort range; after the set conditions are met, connect the trained network to the control system of the massage manipulator.

Description

Massage manipulator training method based on deep reinforcement learning
Technical Field
The invention relates to the field of manipulators, and in particular to a massage manipulator training method based on deep reinforcement learning.
Background
At present, massage equipment is not diverse; most of it consists of single-function or multifunctional massagers, massage chairs and the like, whose actions are few and mechanical, whose force is difficult to control well, and which cannot provide more comfortable, professional service to users. Manual massage actions are fine and gentle, and professional massage in particular demands strong skill and practiced technique. However, professional masseurs are few in number, cannot serve anytime and anywhere, and are costly, so the needs of ordinary people cannot be met.
With the development of artificial intelligence and the growing demand for productivity, industrial robots are being used in more and more settings. Deep reinforcement learning is being applied to an increasing range of control problems and shows great advantages in robotic-arm path planning and animation imitation. Because reinforcement learning algorithms suffer from high-dimensional sample complexity and other physical limitations, combining deep learning with reinforcement learning greatly reduces the dimensionality and complexity of the data; at present, however, such training remains confined to simulation and cannot be fully applied to real situations.
At present, research in robotic-arm control concentrates mainly on path planning and trajectory planning; work in which a robotic arm imitates actions, especially imitating actions with deep reinforcement learning methods, is rare and very difficult to realize.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a massage manipulator training method based on deep reinforcement learning, which has the effects of enhancing the skill of the massage manipulator, providing professional accurate massage and reducing the workload of manual massage.
The invention adopts the following technical scheme:
a massage manipulator training method based on deep reinforcement learning comprises the steps of collecting action and pressure data, processing the data, constructing a reference action set and a reference pressure set, and setting a comfort level range of a pressure value;
inputting the data, the reference action and the reference pressure into a neural network for prediction and decision making, executing an action value and a pressure value corresponding to the neural network output decision making, and comparing with comfort ranges of the reference action and the pressure value;
and after the set conditions are met, connecting the trained network with a control system of the massage manipulator.
Furthermore, data are collected through the motion capture gloves, and the motion capture gloves are used for capturing motion data of the finger joints and the wrist joints.
Furthermore, pressure sensors are arranged on the motion capture gloves corresponding to the finger joints and the wrist joints.
Further, the data processing process is as follows: and (4) cutting each collected action segment into a set length, and averagely dividing the cut action segments into a plurality of parts.
Further, an initial state value and a pressure value of the action segment are extracted, the action value is used as a reference action, and the pressure value is normalized and used as a reference pressure value.
Further, the pressure value comfort range is obtained by collecting feedback pressure data for multiple times by the pressure sensor.
Further, the massage manipulator comprises 14 finger joints, 1 wrist joint and an elbow joint, and tentacles with pressure sensors are arranged at the finger joints and the wrist joints.
Further, the tentacle is a soft cushion.
Further, the neural network is a convolutional neural network, and the action distribution is modeled as a Gaussian.
Furthermore, fine-tuning is performed by collecting action and pressure data of the massage manipulator.
Compared with the prior art, the invention has the beneficial effects that:
(1) the manipulator achieves the skills of professionals through deep reinforcement learning, and is properly adjusted according to the actual situation while continuously learning and simulating the reference action, so that the manipulator is better suitable for different environments and massage objects, and more comfortable and professional massage experience is provided for users;
(2) the invention reduces the fatigue work of human therapists, reduces the cost and improves the specialty of massage.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of the present application;
fig. 2 is a neural network training diagram of the present application.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As described in the background art, the prior art has the defects that the action of the massage manipulator is only in a simulation state and the massage action is not accurate enough, and in order to solve the technical problems, the application provides a massage manipulator training method based on deep reinforcement learning.
In an exemplary embodiment of the present application, as shown in fig. 1-2, there is provided a massage manipulator training method based on deep reinforcement learning, including the steps of:
step 1, collecting hand motion and pressure data:
the motion acquisition is to enable professional massagers or motion providers to wear existing motion capture gloves, and the motion capture gloves can capture and record motion data of 14 joints of fingers (two joints are arranged on the thumb, and three joints are respectively arranged on the other four fingers) and wrist joints as a reference motion set; since the elbow joint is adjusted by the angle with the wrist joint, it is not necessary to capture data as motion.
Pressure sensors are arranged on the motion capture gloves corresponding to the positions of finger joints and wrist joints.
The massage actions and pressure values of a plurality of force levels are collected, so that a user can select the force levels according to the needs of the user.
Step 2, processing the acquired data, constructing a reference action set and a reference pressure set, and setting a comfortable range of pressure values:
the data processing process comprises the following steps: and (4) cutting each collected action segment into a set length, and averagely dividing the cut action segments into a plurality of parts.
And acquiring the initial state value and the pressure value of the clipped action segment, taking the action value as a reference action, and normalizing the pressure value as a reference pressure value.
The pressure value comfort range is obtained according to pressure data collected and fed back by a plurality of tests.
In some embodiments, each acquired action segment is clipped to 1.5 seconds, with any vacancy in a segment shorter than 1.5 seconds set to 0; each 1.5-second segment is then divided equally into 5 parts of 0.3 seconds each. Hand massage actions are generally short, the same action repeats within one cycle, and one action can essentially be completed within 1.5 seconds, which saves time and increases efficiency; dividing into 0.3-second parts essentially ensures that the 5 segments within 1.5 seconds can be spliced continuously into one complete action.
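A minimal Python sketch of this clip-and-split step (the 100 Hz sampling rate, the function name and the list-of-floats representation are assumptions; the patent fixes only the 1.5 s clip length, the zero-filled vacancy, and the 5 equal 0.3 s parts):

```python
RATE_HZ = 100        # assumed sampling rate of the motion-capture glove
CLIP_LEN_S = 1.5     # clip each action segment to 1.5 seconds
NUM_PARTS = 5        # divide into 5 equal 0.3-second parts

def clip_and_split(samples):
    """Clip one action segment to 1.5 s (padding any vacancy with 0)
    and divide it equally into 5 parts of 0.3 s each."""
    target = int(CLIP_LEN_S * RATE_HZ)
    clipped = samples[:target]
    clipped += [0.0] * (target - len(clipped))   # vacancy time is set to 0
    part = target // NUM_PARTS
    return [clipped[i * part:(i + 1) * part] for i in range(NUM_PARTS)]

# a 1.2-second segment at 100 Hz: the last 0.3 s is padded with zeros
parts = clip_and_split([0.2] * 120)
```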
Step 3, constructing a massage manipulator structure by simulating hand joints of a human:
the massage manipulator comprises 14 finger joints, 1 wrist joint and an elbow joint, and tentacles with pressure sensors are arranged at the finger joints and the wrist joints.
In some embodiments, the tentacles are soft pads for added comfort; the pressure sensor is arranged in the soft cushion.
Furthermore, the soft pad is made of rubber materials.
Step 4, training by adopting a neural network:
inputting the collected action, reference action and reference pressure value into a convolutional neural network for prediction and decision, executing the action and pressure value corresponding to the decision output by the network, and comparing with the reference action and pressure value comfort range.
When the action is sufficiently similar to the reference action (the similarity reaches 99%) and the pressure value is suitable, the action is executed and the trained network is connected to the control system of the massage manipulator;
and when the action and pressure values do not meet the conditions, repeating the prediction and decision process of the convolutional neural network.
The policy network π is represented by a convolutional neural network, and the action distribution is modeled as a Gaussian:

π(a|s) = N(μ(s), Σ)    (1)

The learning goal is to find the optimal policy π* = arg max_π J(π).
If each episode starts from a fixed initial state, the expected return can be rewritten as the expected return from the first step:

J(π) = E(R0 | π) = E_{τ~p(τ|π)}[∑ r(s_t, a_t)]    (2)

In the above formula, J(π) is the long-term cumulative reward, s_t is the current state, s_{t+1} is the next state, a_t is the current action, s_0 is the initial state, τ is a sampled trajectory, and p(τ | π) is the probability of trajectory τ under policy π.
The input of the upper branch of the neural network is the state s and the action a_{i-1} generated in the previous step; the input of the lower branch is the state s and the reference action a_{gi}. The upper and lower branches, together with the reference pressure, each pass through a fully connected layer of 512 units and then jointly pass through two linear output layers of 128 units, outputting the decided action, as shown in fig. 2.
The state, the reference action (serving as the target and as the element for measuring the reward value), the reference pressure value and the action generated in the previous step are input into the network; policies are formulated through the reward and the value function V, each policy corresponds to one output action, and the state produced by the action becomes the next state and continues to serve as input.
The specific content of the network is as follows:
(1) State s of the manipulator:
The state is a 47-dimensional tuple built on θ = (θ1, θ2, θ3, θ4, θ5, θ6, θ7, θ8, θ9, θ10, θ11, θ12, θ13, θ14, θ15, θ16), where the first 14 joints are the finger joints ordered from thumb to little finger and from fingertip to finger root, the 15th is the wrist joint and the 16th is the elbow joint.
Each joint contributes an angle component and an angular-velocity component (32 values), and the finger joints and wrist joint carry 15 pressure sensors.
The pressure values are determined by the angles and angular velocities, although the combination is not unique. θ is normalized, which benefits the accuracy of the neural network training.
(2) The manipulator action a is defined by a 32-dimensional tuple ψ = (θ1, ω1, ..., θ16, ω16).
ψ gives the angle and angular velocity through which each joint needs to rotate from the current state.
ψ is normalized as well; if a component θi + ψi is greater than 1, then θi + ψi = θi + ψi - 1.
ψ16: because the elbow joint has no reference action, it learns autonomously according to the position requirement.
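Assuming the normalization rule reads as described (a component of θ + ψ that exceeds 1 is wrapped by subtracting 1), applying an action increment to the normalized state can be sketched as follows; the two-joint values are illustrative:

```python
def apply_action(theta, psi):
    """Add the normalised increment psi to the normalised state theta,
    wrapping any component that exceeds 1 back into [0, 1]."""
    out = []
    for t, p in zip(theta, psi):
        v = t + p
        if v > 1.0:
            v -= 1.0         # assumed wrap-around rule from the text
        out.append(v)
    return out

new_theta = apply_action([0.9, 0.4], [0.3, 0.2])   # first component wraps
```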
(3) Setting of reward function:
If the pressure is not within the comfort pressure range, r = -10; if the pressure is within the comfort range, then

r = wa·ra + ww·rw + wy·ry + wt·rt + c + wp·peo,

where wa = -0.55, ww = -0.05, wy = -0.3, wt = -0.1, c = 1 and wp = 5.
ra is the difference between the joint angle and the angle in the reference action, rw is the difference in joint angular velocity, ry is the difference between the actual pressure value and the reference pressure value, and rt is the difference between the actual frame time and the reference action frame time (0.3 seconds).
peo defaults to 0; when the user presses the intensity-adjustment button, peo = |level before adjustment - level after adjustment|, which helps the manipulator shift to the next intensity level faster.
All difference terms take the exponential Euclidean-distance form:

r = exp(∑ ||y - y'||²)

where y is the value of the actual variable and y' is the value of the reference variable.
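Putting the pieces of step 4 together, the reward can be sketched in Python as below. The weights and the -10 out-of-comfort penalty come from the text; the comfort-range bounds, the function signature and the single-joint call are assumptions. Each difference term uses the exponential form exp(∑||y - y'||²) exactly as written, so it is the negative weights that penalise large deviations.

```python
import math

W_A, W_W, W_Y, W_T, C, W_P = -0.55, -0.05, -0.3, -0.1, 1.0, 5.0
COMFORT = (0.2, 0.8)   # assumed normalised comfort range for pressure

def diff_term(actual, reference):
    """Exponential Euclidean distance exp(sum ||y - y'||^2)."""
    return math.exp(sum((y - yr) ** 2 for y, yr in zip(actual, reference)))

def reward(angles, ref_angles, vels, ref_vels,
           pressures, ref_pressures, frame_dt, ref_dt=0.3, peo=0.0):
    # outside the comfort pressure range the reward is a flat penalty
    if not all(COMFORT[0] <= p <= COMFORT[1] for p in pressures):
        return -10.0
    r_a = diff_term(angles, ref_angles)        # joint-angle difference
    r_w = diff_term(vels, ref_vels)            # angular-velocity difference
    r_y = diff_term(pressures, ref_pressures)  # pressure difference
    r_t = diff_term([frame_dt], [ref_dt])      # frame-time difference
    return W_A * r_a + W_W * r_w + W_Y * r_y + W_T * r_t + C + W_P * peo

r_out_of_range = reward([0.5], [0.5], [0.0], [0.0], [0.9], [0.5], 0.3)
```

A perfect match of all references gives each diff_term = exp(0) = 1, so the weighted sum is -0.55 - 0.05 - 0.3 - 0.1 + 1 = 0, and any deviation pushes the reward below zero.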
Step 5, fine adjustment
Fine-tuning is performed by collecting feedback data on the manipulator's actions and pressure in the real environment.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (6)

1. A massage manipulator training method based on deep reinforcement learning is characterized in that data are collected through motion capture gloves, the motion capture gloves are used for capturing motion data of finger joints and wrist joints, collecting motion and pressure data and processing the data; the data processing process comprises the following steps: editing each collected action segment into a set length, and averagely dividing the edited action segment into a plurality of parts; extracting an initial state value and a pressure value of the action segment, taking the action value as a reference action, and normalizing the pressure value as a reference pressure value; constructing a reference action set and a reference pressure set, and setting a pressure value comfort level range;
inputting the data, the reference action and the reference pressure into a neural network for prediction and decision making, executing an action value and a pressure value corresponding to the neural network output decision making, and comparing with comfort ranges of the reference action and the pressure value; specifically, strategies are formulated through reward and value functions by the neural network input state, the reference action, the reference pressure value and the action generated in the previous step, each strategy corresponds to one output action, and the state generated by the action is taken as the next state to continue to be used as input; when the set conditions are not met, repeating the neural network prediction and decision process; after the set conditions are met, connecting the trained network with a control system of the massage manipulator; the massage manipulator comprises 14 finger joints, 1 wrist joint and an elbow joint, and tentacles with pressure sensors are arranged at the finger joints and the wrist joints.
2. The massage manipulator training method based on the deep reinforcement learning as claimed in claim 1, wherein pressure sensors are installed on the motion capture gloves corresponding to the finger joints and the wrist joints.
3. The massage manipulator training method based on the deep reinforcement learning as claimed in claim 1, wherein the pressure value comfort range is obtained by collecting feedback pressure data by a pressure sensor for a plurality of times.
4. The massage manipulator training method based on the deep reinforcement learning as claimed in claim 1, wherein the tentacle is a soft cushion.
5. The massage manipulator training method based on the deep reinforcement learning as claimed in claim 1, wherein the neural network is a convolutional neural network and the action distribution is modeled as a Gaussian.
6. The method for training a massage manipulator based on the deep reinforcement learning as claimed in claim 1, wherein the fine adjustment is performed by collecting data of the action and pressure of the massage manipulator.
CN201811261282.5A 2018-10-26 2018-10-26 Massage manipulator training method based on deep reinforcement learning Expired - Fee Related CN109291052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811261282.5A CN109291052B (en) 2018-10-26 2018-10-26 Massage manipulator training method based on deep reinforcement learning

Publications (2)

Publication Number Publication Date
CN109291052A CN109291052A (en) 2019-02-01
CN109291052B true CN109291052B (en) 2021-11-09

Family

ID=65158970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811261282.5A Expired - Fee Related CN109291052B (en) 2018-10-26 2018-10-26 Massage manipulator training method based on deep reinforcement learning

Country Status (1)

Country Link
CN (1) CN109291052B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110039537B (en) * 2019-03-15 2021-07-13 北京精密机电控制设备研究所 Online self-learning multi-joint motion planning method based on neural network
CN110147891B (en) * 2019-05-23 2021-06-01 北京地平线机器人技术研发有限公司 Method and device applied to reinforcement learning training process and electronic equipment
CN110516389B (en) * 2019-08-29 2021-04-13 腾讯科技(深圳)有限公司 Behavior control strategy learning method, device, equipment and storage medium
CN110561430B (en) * 2019-08-30 2021-08-10 哈尔滨工业大学(深圳) Robot assembly track optimization method and device for offline example learning
CN113211441B (en) * 2020-11-30 2022-09-09 湖南太观科技有限公司 Neural network training and robot control method and device
CN114053112A (en) * 2021-10-19 2022-02-18 奥佳华智能健康科技集团股份有限公司 Massage method, device, terminal equipment and medium
CN114609918B (en) * 2022-05-12 2022-08-02 齐鲁工业大学 Four-footed robot motion control method, system, storage medium and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003052771A (en) * 2001-08-20 2003-02-25 Sintokogio Ltd Method and system for controlling massage machine
CN107280910A (en) * 2017-05-15 2017-10-24 武汉理工大学 A kind of autonomous intelligence massager and operating method based on data acquisition
CN107825393A (en) * 2017-12-14 2018-03-23 北京工业大学 A kind of total joint measurement type data glove
CN108171329A (en) * 2017-12-13 2018-06-15 华南师范大学 Deep learning neural network training method, number of plies adjusting apparatus and robot system
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102727362B (en) * 2012-07-20 2014-09-24 上海海事大学 NUI (Natural User Interface)-based peripheral arm motion tracking rehabilitation training system and training method
WO2014021603A1 (en) * 2012-08-02 2014-02-06 한국기술교육대학교 산학협력단 Motion control device based on winding string
CN105311792B (en) * 2014-07-02 2018-12-11 北京蝶禾谊安信息技术有限公司 The collecting method of recovery training appliance for recovery and recovery training appliance for recovery
US11246786B2 (en) * 2016-12-22 2022-02-15 Rehab-Robotcs Company Ltd. Power assistive device for hand rehabilitation and a method of using the same
CN108543216A (en) * 2018-01-26 2018-09-18 南京航空航天大学 A kind of hand function reconstructing device and its implementation based on master & slave control

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003052771A (en) * 2001-08-20 2003-02-25 Sintokogio Ltd Method and system for controlling massage machine
CN107280910A (en) * 2017-05-15 2017-10-24 武汉理工大学 A kind of autonomous intelligence massager and operating method based on data acquisition
CN108171329A (en) * 2017-12-13 2018-06-15 华南师范大学 Deep learning neural network training method, number of plies adjusting apparatus and robot system
CN107825393A (en) * 2017-12-14 2018-03-23 北京工业大学 A kind of total joint measurement type data glove
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Skill Learning and Fusion Methods for a Foot Massage Robot; 王洪伟 (Wang Hongwei); China Master's Theses Full-text Database, Information Science and Technology; 20131215; pp. 20-32 *

Also Published As

Publication number Publication date
CN109291052A (en) 2019-02-01

Similar Documents

Publication Publication Date Title
CN109291052B (en) Massage manipulator training method based on deep reinforcement learning
Brahmi et al. Cartesian trajectory tracking of a 7-DOF exoskeleton robot based on human inverse kinematics
CN106650687B (en) Posture correction method based on depth information and skeleton information
CN111902847A (en) Real-time processing of hand state representation model estimates
Miao et al. Reviewing high-level control techniques on robot-assisted upper-limb rehabilitation
Nandy et al. Recognizing & interpreting Indian sign language gesture for human robot interaction
Luces et al. A phantom-sensation based paradigm for continuous vibrotactile wrist guidance in two-dimensional space
CN108044625B (en) A kind of robot arm control method based on the virtual gesture fusion of more Leapmotion
Llop-Harillo et al. System for the experimental evaluation of anthropomorphic hands. Application to a new 3D-printed prosthetic hand prototype
CN106406518A (en) Gesture control device and gesture recognition method
Wei et al. A novel upper limb rehabilitation system with hand exoskeleton mechanism
Owen et al. Development of a dexterous prosthetic hand
Wang et al. Development of human-machine interface for teleoperation of a mobile manipulator
Pu et al. Design and development of the wearable hand exoskeleton system for rehabilitation of hand impaired patients
Nasr et al. Model-based mid-level regulation for assist-as-needed hierarchical control of wearable robots: A computational study of human-robot adaptation
Antonius et al. Electromyography gesture identification using CNN-RNN neural network for controlling quadcopters
Liu et al. A practical system for 3-D hand pose tracking using EMG wearables with applications to prosthetics and user interfaces
Côté-Allard et al. Virtual reality to study the gap between offline and real-time EMG-based gesture recognition
Zaidan et al. Design and implementation of upper prosthetic controlled remotely by flexible sensor glove
Chen et al. A novel telerehabilitation system based on bilateral upper limb exoskeleton robot
James et al. Realtime hand landmark tracking to aid development of a prosthetic arm for reach and grasp motions
Gong et al. Design of Cerebral Palsy Rehabilitation Training System Based on Human-Computer Interaction
Fukuda et al. An EMG‐controlled omnidirectional pointing device
Lai et al. Design of a multi-degree-of-freedom virtual hand bench for myoelectrical prosthesis
McInnes South African sign language dataset development and translation: a glove-based approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211109