CN111259699A - Human body action recognition and prediction method and device - Google Patents
- Publication number
- CN111259699A (application CN201811461333.9A)
- Authority
- CN
- China
- Prior art keywords
- human body
- data
- recognition
- target
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
Abstract
The invention discloses a human body action recognition and prediction method. Data are acquired in parallel from two sources: a motion video or image of the target human body captured by a camera, and surface electromyographic (sEMG) signals collected by a smart device worn on the target human body that can extract such signals. Human body actions are recognized in real time from both data streams in parallel. A human body action recognition and prediction model then fuses the intermediate data of video- or image-based action recognition with the intermediate data of sEMG-based action recognition, thereby obtaining recognition and prediction of the human body's actions.
Description
Technical Field
The invention relates to the field of intelligent nursing based on a mobile platform, in particular to a human body action recognition and prediction method and device.
Background
Human motion recognition is a research hotspot in the field of pattern recognition. In recent years, with the rapid development of key technologies such as human-computer interaction, sensing, and machine learning, human motion recognition has been widely applied in intelligent control, medical rehabilitation, health monitoring, and related fields. Using various sensors, including cameras, a computer can "see" and identify the motion of an object; this is a well-established technique, but it relies on the computer observing a continuous sequence of the object's movements.
However, problems arise when such motion recognition is migrated to a mobile platform. Because a mobile platform may not continuously stare at, or concentrate on, one specific target, when the camera shifts from target A to target B, the motion-process data of target B is missing and target B's current action is difficult to recognize. On the other hand, consider a nursing robot: when a sedentary target with a hand resting on the armrest of a chair intends to push up and stand, there is no motion process, or only a slight one, for the hand or upper limb, so the computer can hardly judge or predict the target's action. Solving these problems is of great significance for nursing robots and other mobile-platform applications.
To enable a mobile-platform nursing robot to accurately recognize and predict the movement of its target, other auxiliary data, such as surface electromyographic (sEMG) signals, must be introduced. Current sEMG research and application concentrates on prosthetic limb control, functional electrical nerve stimulation and biofeedback, sports medicine, rehabilitation medicine, and clinical diagnosis. In human-computer interaction, many researchers have also applied sEMG signals to gesture recognition, identifying hand activity through signal analysis with good results. However, most such systems can only control one specific device, such as a drone or a prosthetic limb. In practice, electromyographic signals are unstable, so these EMG-controlled devices not only require the user to wear the EMG equipment at a specific position such as the arm, but also require frequent and extensive motion training and recognition.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a mobile platform that uses a camera as its main sensor cannot identify the action of the target human body at the current moment when image data from part of the action process is missing, and cannot judge or predict the target's action intention from the target's posture at the current moment alone.
The invention provides a human body action recognition and prediction method in which data are acquired in parallel from two sources: a motion video or image of the target human body captured by a camera, and surface electromyographic signals collected by a smart device worn on the target human body that can extract such signals. Human body actions are recognized in real time from both data streams in parallel; the video- or image-based recognition result and the sEMG-based recognition result are then fused through a human body action recognition and prediction model, so that the human body's actions are recognized and predicted.
Therefore, the first purpose of the present invention is to provide a human body motion recognition and prediction method comprising the following steps:
step 1, acquiring motion video or image data of the target human body captured by a camera;
step 2, recognizing human body actions from the video data in real time, identifying the action or posture-change process of the target human body up to the current moment;
step 3, acquiring surface electromyographic signals of the target human body through a surface EMG acquisition device;
step 4, recognizing signal patterns in the surface EMG signals in real time, identifying specific patterns in the target human body's surface EMG signals;
and step 5, using a human body action recognition and prediction model to fuse the action or posture-change process found in the video data with the specific change patterns found in the surface EMG signals, recognizing the target human body's current action and predicting its action intention.
Further, steps 1 to 2 acquire and process video data, while steps 3 to 4 acquire and process surface EMG signals; the acquisition and processing of the two data sources proceed in parallel.
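The parallel two-source pipeline of steps 1-4 can be sketched as two producer/consumer pairs, one per data stream. This is a minimal illustrative sketch, not the patent's implementation: `acquire` stands in for real camera/EMG device I/O, and the recognizers are stubs that merely tag timestamped data.

```python
import queue
import threading
import time

def acquire(source_fn, out_q, n_items):
    """Producer: pull timestamped samples from one data source (stubbed)."""
    for i in range(n_items):
        out_q.put((time.monotonic(), source_fn(i)))
    out_q.put(None)  # end-of-stream marker

def recognize(in_q, results, label):
    """Consumer: per-source real-time recognition (stubbed as a tagger)."""
    while True:
        item = in_q.get()
        if item is None:
            break
        ts, data = item
        results.append((label, ts, f"pattern-of-{data}"))

def run_pipeline():
    """Run the video stream and the EMG stream fully in parallel."""
    video_q, emg_q = queue.Queue(), queue.Queue()
    results = []
    threads = [
        threading.Thread(target=acquire, args=(lambda i: f"frame{i}", video_q, 5)),
        threading.Thread(target=acquire, args=(lambda i: f"emg{i}", emg_q, 5)),
        threading.Thread(target=recognize, args=(video_q, results, "video")),
        threading.Thread(target=recognize, args=(emg_q, results, "emg")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The per-item timestamps are what would allow the downstream fusion model to align the two streams in time, as the next paragraph requires.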
Further, the video data collected from the camera and the human body surface EMG data collected by the EMG device are synchronized in time.
In step 1, a mobile platform with a camera as its main sensor must shoot environmental video or images while moving in order to acquire environmental data. Consequently, when the target human body performs an action, two situations arise: either the camera captures the continuous process of the target's action, or it does not capture a continuous video or image sequence of the action. In both cases the video data does not affect the subsequent steps of the method.
In step 2, according to the video and images of the human action, the image of the target human body is segmented from the environmental background in each frame; human action recognition or human posture recognition can then be achieved either by extracting spatio-temporal features and classifying on those features, or by end-to-end training with deep learning.
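As a toy illustration of "segment, then extract spatio-temporal features," the sketch below uses plain background subtraction as a stand-in for a learned segmenter and takes the trajectory of the foreground centroid across frames as the spatio-temporal feature. Everything here is a simplifying assumption, not the patent's algorithm.

```python
import numpy as np

def segment_foreground(frame, background, thresh=20):
    """Binary mask of pixels that differ from a static background image."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def centroid_trajectory(frames, background):
    """Stack per-frame foreground centroids (x, y) into one feature vector."""
    traj = []
    for frame in frames:
        mask = segment_foreground(frame, background)
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            traj.extend([0.0, 0.0])  # no foreground detected in this frame
        else:
            traj.extend([xs.mean(), ys.mean()])
    return np.array(traj)
```

A classifier (nearest-centroid, SVM, or a deep network as the text suggests) would then operate on such trajectory features.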
In step 3, acquiring the target's surface EMG signals requires the target human body to wear a surface EMG acquisition and transmission device;
preferably, the surface electromyographic signals of the target can be acquired through an intelligent bracelet with a surface electromyographic signal acquisition function;
preferably, the computation for human action recognition and prediction runs on the host of the mobile platform, so the host communicates via Bluetooth with the surface EMG acquisition device worn by the target human body;
preferably, multi-channel surface electromyographic signals may be acquired.
In step 4, note that electromyographic signals are the composite signals of spontaneous skeletal muscle cell activity; identifying human actions from surface EMG signals usually requires complex time-series analysis of the multi-channel signals.
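One standard form such time-series analysis takes, sketched here as an assumption rather than the patent's method, is computing classic time-domain sEMG features per channel over a sliding window: mean absolute value (MAV), root mean square (RMS), zero crossings (ZC), and waveform length (WL).

```python
import numpy as np

def emg_features(window):
    """window: (channels, samples) array -> (channels, 4) feature matrix."""
    window = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(window), axis=1)                  # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=1))            # root mean square
    signs = np.signbit(window).astype(int)
    zc = np.sum(np.abs(np.diff(signs, axis=1)), axis=1)    # zero crossings
    wl = np.sum(np.abs(np.diff(window, axis=1)), axis=1)   # waveform length
    return np.stack([mav, rms, zc, wl], axis=1)
```

A pattern classifier would be trained on these per-window feature matrices across channels.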
In step 5, the human body action recognition and prediction model recognizes and predicts the target human body's action by fusing the intermediate data of video-based action recognition with the intermediate data of sEMG-based action recognition, which involves the fusion of multi-source signals.
In summary, the method of the present invention acquires data in parallel from two sources: a motion video or image of the target human body captured by a camera, and surface EMG signals collected by a smart device worn on the target human body that can extract such signals. Human body actions are recognized in real time from both data streams in parallel, and then the intermediate data of video- or image-based action recognition and the intermediate data of sEMG-based action recognition are fused through a human body action recognition and prediction model, yielding recognition and prediction of the human body's actions. With this method, an intelligent nursing robot can detect whether the target being nursed needs assistance and provide the necessary assistance in time, serving the target better.
Another object of the present invention is to provide a human body motion recognition and prediction apparatus, comprising:
the first acquisition module is used for acquiring video or image data shot by the camera;
the first recognition module is used for recognizing the action or the posture of the target human body in real time according to the video or the image;
the second acquisition module is used for acquiring a surface electromyographic signal of the target human body;
the second identification module is used for identifying a signal mode in real time according to the surface electromyogram signal;
and the third identification module is used for fusing the intermediate data of the first identification module and the second identification module and identifying the current action of the user or predicting the action intention of the user.
The first acquisition module and the second acquisition module acquire data in parallel, and the first recognition module and the second recognition module process the data in parallel.
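The five-module device claimed above can be sketched structurally as follows. This is a minimal skeleton under stated assumptions: all recognizer internals are stubs, the fusion rule is a placeholder for the trained prediction model, and in the real device the two recognition calls run in parallel rather than sequentially.

```python
class FirstRecognition:
    """Video/image -> action or posture hypothesis (stubbed)."""
    def recognize(self, frames):
        return {"posture": "hand-on-armrest"}

class SecondRecognition:
    """Surface EMG -> signal pattern (stubbed)."""
    def recognize(self, emg):
        return {"pattern": "contraction"}

class ThirdRecognition:
    """Fuse intermediate results -> current action or action intention."""
    def fuse(self, video_result, emg_result):
        if (video_result["posture"] == "hand-on-armrest"
                and emg_result["pattern"] == "contraction"):
            return "intends-to-stand"
        return "unknown"

class RecognitionDevice:
    def __init__(self):
        self.first = FirstRecognition()
        self.second = SecondRecognition()
        self.third = ThirdRecognition()

    def step(self, frames, emg):
        # The two recognize() calls would run in parallel in the real device.
        v = self.first.recognize(frames)
        e = self.second.recognize(emg)
        return self.third.fuse(v, e)
```

The acquisition modules are omitted here; they would feed `frames` and `emg` from the camera and the Bluetooth EMG device respectively.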
Compared with the prior art, the invention has the beneficial effects that:
1. For human body action recognition and prediction on a mobile platform with a camera as the main sensor, the method fuses the target's surface EMG signals, solving recognition and prediction when continuous video or image data of the target is missing;
2. Building on video- or image-based action or posture recognition, the method needs only simple patterns in the target's surface EMG signals to realize action recognition and prediction through the human body action prediction model, which removes the frequent calibration and training otherwise required when using surface EMG signals for action recognition.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
Fig. 1 is a schematic structural diagram of a human body motion recognition and prediction method according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a human body motion recognition and prediction method according to another embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a human body motion recognition and prediction apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Fig. 1 is a schematic structural diagram of a human body motion recognition and prediction method according to an embodiment of the present invention.
As shown in fig. 1, a human body motion recognition and prediction method according to an embodiment of the present invention specifically includes:
step S101, acquiring video or image data of a current shot target;
as shown in fig. 1, the target video captured by the camera is composed of consecutive video frames 101.
In this step, in one application scenario of the intelligent nursing robot of this embodiment, the target to be nursed has a hand resting on the armrest of a chair. Because the target has remained in a relaxed sitting state for some time, none of the video or image sequences obtained at this point contain a motion process.
In this step, in another application scenario of the intelligent nursing robot, because the robot does not nurse one-to-one, or was busy surveying the environment, it did not capture the process of the target extending a hand toward a cup; only the final reaching phase, in which the target cannot reach the cup, is captured in video or images.
Step S102, analyzing the current image or image sequence with a human action recognition algorithm or a posture recognition algorithm, thereby recognizing the target's action or posture, or enumerating all of the target's possible actions or postures;
In this step, for video or image sequences lacking a motion process, the current posture can be recognized by a human posture recognition algorithm, and the posture-change process described by a human stick-figure model can be obtained through limb-fitting algorithms and the like.
In this step, human action or posture recognition first requires segmenting the image region containing only the human body from each image; the Mask R-CNN method can be used, which performs pixel-level segmentation via a convolutional neural network to obtain the foreground. A spatio-temporal model spanning multiple images is then built, enabling action recognition from continuous-motion videos or image sequences. Feature-extraction-based approaches are surveyed in Wang H, Ullah M M, Kläser A, et al. Evaluation of local spatio-temporal features for action recognition [C]// Proceedings of the 2009 British Machine Vision Conference. London, UK: BMVA Press, 2009: 124.1-124.11. In the current deep learning era, Ashesh Jain, Amir R. Zamir, Silvio Savarese, et al. Structural-RNN: Deep Learning on Spatio-Temporal Graphs [C]// The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016 proposed human action recognition with recurrent neural networks, and S. Ji, W. Xu, M. Yang, et al. 3D Convolutional Neural Networks for Human Action Recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(1): 221-231 proposed human action recognition with convolutional neural networks.
In this step, human action or posture recognition first requires segmenting the image region containing only the human body from each image; the Mask R-CNN method can be used, which performs pixel-level segmentation via a convolutional neural network to obtain the foreground. Key parts of the human body are then fitted using body-shape characteristics or located with a recognition algorithm, after which image-based human posture recognition can be realized with a fitting-regression algorithm. In the current deep learning era, S. E. Wei, V. Ramakrishna, T. Kanade, et al. Convolutional Pose Machines [C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27-30 June 2016: 4724-4732 realizes image-based human posture recognition.
Step S103, acquiring a surface electromyographic signal of a target;
In this step, for a target human body whose limbs are not fully intact, other EMG acquisition structures such as patch electrodes can be used to collect the target's surface EMG signals and transmit them to the host;
as shown in FIG. 1, the surface electromyogram signal acquisition structure acquires a multi-channel electromyogram signal 102.
In this step, the target's surface EMG signals can be collected by a smart bracelet with an EMG acquisition structure worn on the target's arm and transmitted to the host via a Bluetooth structure; see C. Wong, Z. Q. Zhang, B. Lo, et al. Wearable Sensing for Solid Biomechanics: A Review [J]. IEEE Sensors Journal, 2015, 15(5): 2747-2760.
Step S104, identifying the signal mode by using an identification algorithm of the surface electromyogram signal;
In this step, note that electromyographic signals are the composite signals of spontaneous skeletal muscle cell activity, and the surface EMG data are received at the host end through Bluetooth or an equivalent communication structure. Although they contain the target's muscle movement patterns, surface EMG signals are unstable; this embodiment therefore only needs to identify simple patterns in the signals, such as muscle relaxation or contraction.
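One simple way to realize the relaxation/contraction distinction described above, offered as an illustrative sketch rather than the patent's procedure, is to compare each short window's RMS energy against a threshold calibrated from a resting baseline. The 3x multiplier is an arbitrary illustrative choice.

```python
import numpy as np

def calibrate_threshold(rest_signal, factor=3.0):
    """Threshold = factor * RMS of a known resting-state recording."""
    return factor * np.sqrt(np.mean(np.square(rest_signal)))

def classify_windows(signal, window, threshold):
    """Label each non-overlapping window as relaxation or contraction."""
    labels = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        rms = np.sqrt(np.mean(np.square(seg)))
        labels.append("contraction" if rms > threshold else "relaxation")
    return labels
```

Because only this coarse binary pattern is needed, no per-user gesture training set is required, which is the point the embodiment makes about avoiding frequent calibration.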
In this step, note that identifying human actions from surface EMG signals usually requires complex time-series analysis of multi-channel signals. P. K. Artemiadis, K. J. Kyriakopoulos. An EMG-Based Robot Control Scheme Robust to Time-Varying EMG Signal Features [J]. IEEE Transactions on Information Technology in Biomedicine, 2010, 14(3): 582-588 proposed extracting signal features to identify signal patterns, and Fang Y, Liu H, Li G, et al. A multichannel surface EMG system for hand motion recognition [J]. International Journal of Humanoid Robotics, 2015, 12(02): 1550011 proposed recognizing hand motion from multi-channel surface EMG signals.
Step S105, fusing the recognition result from the image data with the recognition result from the surface EMG data using a human motion prediction model, to realize human motion recognition and prediction;
In this step, a recurrent neural network approximates the human motion prediction model: the recognition results of step S102 and step S104 are converted into sequence data and fed into the recurrent network, which identifies the target human body's current motion intent or predicts its next motion, thereby triggering the intelligent nursing robot's motion and behavioral assistance for the target being nursed.
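The shape of that fusion can be sketched as follows, with heavy caveats: the per-step video result and EMG pattern are one-hot encoded, concatenated, and run through a minimal Elman-style recurrent cell, whose final hidden state is mapped to action-intention scores. The weights here are random stand-ins for a trained model, so the output distribution is meaningless except in shape.

```python
import numpy as np

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def rnn_predict(video_seq, emg_seq, n_video, n_emg, n_actions, seed=0):
    """Fuse two aligned label sequences into intention probabilities."""
    rng = np.random.default_rng(seed)
    d_in, d_h = n_video + n_emg, 8
    W_in = rng.normal(0, 0.1, (d_h, d_in))
    W_h = rng.normal(0, 0.1, (d_h, d_h))
    W_out = rng.normal(0, 0.1, (n_actions, d_h))
    h = np.zeros(d_h)
    for v, e in zip(video_seq, emg_seq):
        # concatenate the two one-hot intermediate results at each step
        x = np.concatenate([one_hot(v, n_video), one_hot(e, n_emg)])
        h = np.tanh(W_in @ x + W_h @ h)  # Elman recurrence
    scores = W_out @ h
    return np.exp(scores) / np.exp(scores).sum()  # softmax over intentions
```

In a real system the weights would be learned from paired video/EMG sequences labeled with the target's subsequent action.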
In this step, the target human body's motion is recognized and predicted from both image data and surface EMG signals, which involves the fusion of multi-source signals; see Domen Novak, Robert Riener. A survey of sensor fusion methods in wearable robotics [J]. Robotics and Autonomous Systems, 2015, 73: 155-170.
In this step, in one application scenario of the intelligent nursing robot of this embodiment, the video-based posture recognition process recognizes that the target has a hand resting on the chair armrest, the EMG-based pattern recognition process detects a muscle-contraction process, and the human motion prediction model concludes that the target's next intention is to push up with the hand and stand; the intelligent nursing robot can therefore be driven to provide the corresponding motion assistance.
In this step, in another application scenario of the intelligent nursing robot, the video-based posture recognition process recognizes that the target's hand is extended, the EMG-based pattern recognition process detects a muscle-contraction process, and the human motion prediction model judges that the target is not merely raising a hand but acting purposefully; the intelligent nursing robot can therefore be driven to further identify the object the target's arm points at, such as a cup or a remote control, and provide nursing assistance.
The application scenarios of this embodiment are common to intelligent nursing in an aging society. Because the disclosed method only needs to identify simple EMG patterns, frequent EMG calibration and motion training are unnecessary, and the surface EMG acquisition structure integrates conveniently into the daily life of the target being nursed. The method thus markedly improves how well the intelligent nursing robot blends into the target's life, improving the target's well-being.
Fig. 2 is a schematic structural diagram of a human body motion recognition and prediction method according to another embodiment of the present invention.
As shown in fig. 2, the method of the present invention enables the mobile platform 201 using the camera as the main sensor to accurately recognize the motion of the target human body, or predict the next motion of the target human body;
as shown in fig. 2, the data source for implementing human motion recognition and prediction disposed at the host 202 of the mobile platform in the present embodiment is divided into two parts, a first data 203 and a second data 204;
as shown in fig. 2, the first data 203 is a current posture video or image of a target human body shot by a camera, and the second data 204 is a human body surface electromyogram signal collected by a smart bracelet with a human body surface electromyogram signal collecting structure worn on an arm of the target human body.
In order to realize the above embodiment, the invention further provides a human body motion recognition and prediction device.
Fig. 3 is a schematic structural diagram of a human body motion recognition and prediction apparatus according to an embodiment of the present invention.
As shown in fig. 3, a human body motion recognition and prediction apparatus according to an embodiment of the present invention includes: a first obtaining module 310, a first identifying module 320, a second obtaining module 330, a second identifying module 340, and a third identifying module 350.
The first acquiring module 310 is configured to acquire video or image data captured by a camera;
specifically, for a mobile platform with a camera as a main sensor, the human body motion recognition and prediction device needs to process a video stream captured by a video camera in real time, so the first acquisition module 310 needs to match the frame rate captured by the video camera with the moving speed of the platform, and each video frame image acquired by the first acquisition module 310 is processed by the first recognition module 320.
A first recognition module 320 for recognizing the motion or posture of the target human body according to the video or the image;
specifically, the first recognition module 320 implements online human body motion or gesture recognition, and incrementally determines the motion of the target human body as new video frames arrive, and the speed of processing the video frames by the first recognition module 320 is matched with the speed of acquiring the video frames by the first acquisition module 310, so as to prevent the unprocessed video frames from being inundated.
A second acquisition module 330, configured to acquire a surface electromyographic signal of the target human body;
Specifically, the second acquisition module 330 acquires, in real time, the surface electromyogram signal of the target human body transmitted by the paired Bluetooth device and received by the Bluetooth receiving device. For a low-power design, the paired Bluetooth device may be configured to transmit the surface electromyogram signal only when the signal's energy exceeds a certain threshold, while the second acquisition module 330 at the host end remains in listening mode; all received surface electromyogram signals are passed to the second recognition module 340 for processing.
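The energy-threshold gating described for the low-power Bluetooth transmitter can be sketched as below; the function name, the mean-power energy measure, and the threshold value are assumptions chosen for illustration:

```python
def should_transmit(emg_window, energy_threshold=0.5):
    """Transmit a window of surface-EMG samples only when its energy
    exceeds a threshold, so the wearable stays silent (and saves power)
    while the muscle is at rest. Threshold is illustrative."""
    # Mean power of the window as a simple energy measure.
    energy = sum(s * s for s in emg_window) / len(emg_window)
    return energy > energy_threshold
```

In practice the threshold would be calibrated per user and per electrode placement; the patent only states that transmission is gated on signal energy.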
A second recognition module 340, configured to recognize a signal pattern from the surface electromyogram signal;
Specifically, since the surface electromyogram signal is a continuous time-series signal, and in order to meet the real-time processing requirement of the mobile platform, the second recognition module 340 performs online surface electromyogram signal processing. To stay matched with the discrete frames of the video signal, the interval between successive video frames serves as the speed constraint under which the second recognition module 340 processes the surface electromyogram signal.
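The timing constraint described above, cutting the continuous EMG stream into windows whose duration equals one video-frame interval, can be sketched as follows; the sampling rate and frame rate are illustrative values, not specified by the patent:

```python
def segment_emg(samples, emg_rate_hz=1000, video_fps=25):
    """Cut a continuous EMG sample stream into non-overlapping windows,
    each spanning exactly one video-frame interval, so EMG pattern
    recognition keeps pace with the video pipeline. Rates illustrative."""
    # Samples per video-frame interval (e.g. 1000 Hz / 25 fps = 40 samples).
    win = emg_rate_hz // video_fps
    # Drop any trailing partial window.
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, win)]
```

Each returned window can then be classified within one frame interval, which is exactly the speed constraint stated above.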
And a third recognition module 350, configured to fuse the result data of the first recognition module and the second recognition module, and to recognize the current action of the user or predict the action intention of the user.
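One common way to realize the fusion performed by the third recognition module is a weighted late fusion of the two branches' per-action probabilities. The sketch below is a plausible instantiation, not the patent's specified model; the branch weight is an assumed hyperparameter:

```python
def fuse(video_probs, emg_probs, video_weight=0.6):
    """Combine per-action probability estimates from the video branch and
    the EMG branch by a weighted average, then pick the most likely action.
    Returns (best_action, fused_probabilities)."""
    actions = set(video_probs) | set(emg_probs)
    fused = {a: video_weight * video_probs.get(a, 0.0)
                + (1 - video_weight) * emg_probs.get(a, 0.0)
             for a in actions}
    return max(fused, key=fused.get), fused
```

Because EMG activity typically precedes visible limb movement, an action scored highly by the EMG branch but not yet by the video branch can serve as a prediction of action intention rather than a recognition of a completed action.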
It should be noted that the foregoing explanation of the embodiment of the human body motion recognition and prediction method also applies to the human body motion recognition and prediction device of this embodiment; the implementation principle is similar and is not repeated here.
The technical solutions provided by the present invention are described in detail above. Those skilled in the art may vary the specific implementation manners and application ranges according to the ideas of the embodiments of the present invention; in summary, the content of this description should not be construed as limiting the present invention.
Claims (6)
1. A human body motion recognition and prediction method, characterized by comprising: 1) acquiring data in parallel from two data sources, namely, first, a motion video or image of a target human body shot by a camera, and second, human body surface electromyographic signals collected by an intelligent device worn on the target human body and capable of extracting human body surface electromyographic signals, while performing real-time recognition of human body motion on the two kinds of data in parallel; 2) fusing, by a human body motion recognition and prediction model, the intermediate data of human body motion recognition performed according to the motion video or image of the human body with the intermediate data of human body motion recognition performed according to the human body surface electromyographic signal data.
2. The human body motion recognition and prediction method according to claim 1, characterized by comprising the following specific steps:
step 1, acquiring, based on video or image capture by the camera, motion video data of the target human body shot by the camera;
step 2, identifying human body actions oriented to video data in real time, and identifying the action or posture change process of a target human body until the current moment;
step 3, acquiring a surface electromyographic signal of the target human body by means of the surface electromyographic signal acquisition device;
step 4, recognizing a signal pattern from the surface electromyographic signals in real time, and identifying a specific pattern in the surface electromyographic signals of the target human body;
and step 5, fusing, by a human body action recognition and prediction model, the human body action or posture change process in the video data with the specific change pattern in the surface electromyogram signal, recognizing the action of the target human body, and predicting the action intention of the target human body.
3. The human body motion recognition and prediction method according to claim 2, wherein the steps 1 to 2 are to acquire video data and process the video data, and the steps 3 to 4 are to acquire surface electromyography and process the surface electromyography, and the data acquisition and processing for the two data sources are parallel.
4. The human body motion recognition and prediction method according to claim 2 or 3, characterized in that the video data collected from the camera and the human body surface electromyographic signal data collected by the electromyographic signal acquisition device are synchronized in time sequence.
5. The human body motion recognition and prediction method according to claim 2, wherein the human body motion recognition and prediction model fuses the intermediate data of human body motion recognition based on the video image data with the intermediate data of human body motion recognition based on the surface electromyogram signals.
6. A device for realizing the human body action recognition and prediction method according to any one of claims 1 to 5, which is characterized by comprising:
the first acquisition module, used for acquiring video or image data shot by the camera; the first recognition module, used for recognizing the action or posture of the target human body in real time according to the video or image; the second acquisition module, used for acquiring a surface electromyographic signal of the target human body; the second recognition module, used for recognizing a signal pattern in real time according to the surface electromyographic signal; and the third recognition module, used for fusing the intermediate data of the first recognition module and the second recognition module and recognizing the current action of the user or predicting the action intention of the user.
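The time-sequence synchronization of the two data streams (claim 4) can be sketched as nearest-timestamp pairing of video frames with EMG windows; the tolerance value, event representation, and function name are illustrative assumptions:

```python
def pair_by_timestamp(video_events, emg_events, tolerance_s=0.02):
    """Pair each video frame with the nearest-in-time EMG window, discarding
    frames that have no EMG data within the tolerance. Events are
    (timestamp_seconds, payload) tuples; tolerance is illustrative."""
    pairs = []
    for t_v, frame in video_events:
        # Nearest EMG event in time to this video frame.
        best = min(emg_events, key=lambda e: abs(e[0] - t_v), default=None)
        if best is not None and abs(best[0] - t_v) <= tolerance_s:
            pairs.append((frame, best[1]))
    return pairs
```

Synchronized pairs of this kind are what the fusion model would consume, so that a video frame and an EMG window describing the same instant are fused together.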
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811461333.9A CN111259699A (en) | 2018-12-02 | 2018-12-02 | Human body action recognition and prediction method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111259699A true CN111259699A (en) | 2020-06-09 |
Family
ID=70948368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811461333.9A Pending CN111259699A (en) | 2018-12-02 | 2018-12-02 | Human body action recognition and prediction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259699A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114733160A (en) * | 2022-04-14 | 2022-07-12 | 福州大学 | Myoelectric signal-based muscle strength training equipment control method |
CN114983447A (en) * | 2022-08-01 | 2022-09-02 | 广东海洋大学 | Wearable device of human action discernment, analysis and storage based on AI technique |
CN116449967A (en) * | 2023-06-20 | 2023-07-18 | 浙江强脑科技有限公司 | Bionic hand teaching aid, control method thereof and main control equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104010125A (en) * | 2013-02-22 | 2014-08-27 | 联想(北京)有限公司 | Electronic device and method |
CN104360736A (en) * | 2014-10-30 | 2015-02-18 | 广东美的制冷设备有限公司 | Gesture-based terminal control method and system |
CN104379056A (en) * | 2012-03-27 | 2015-02-25 | B10尼克斯有限公司 | System for the acquisition and analysis of muscle activity and operation method thereof |
CN107315479A (en) * | 2017-07-06 | 2017-11-03 | 哈尔滨工业大学 | Myoelectricity real-time operation device based on laser projection |
CN108211310A (en) * | 2017-05-25 | 2018-06-29 | 深圳市前海未来无限投资管理有限公司 | The methods of exhibiting and device of movement effects |
KR20180090644A (en) * | 2017-02-03 | 2018-08-13 | 한국전자통신연구원 | Device and mehtod for interaction between driver and vehicle |
CN108415571A (en) * | 2018-03-08 | 2018-08-17 | 李飞洋 | A kind of somatosensory device implementation method moving caused data analysis based on thumb |
Non-Patent Citations (2)
Title |
---|
GUY LEV et al.: "RNN Fisher Vectors for Action Recognition and Image Annotation", 《ECCV 2016》, 31 December 2016 (2016-12-31), pages 833 - 850 *
XIONG Juntao et al.: "Gesture Tracking and Action Recognition Algorithm Based on Vision Technology", 《计算机与现代化》 (Computer and Modernization), no. 227, 17 July 2014 (2014-07-17), pages 75 - 79 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114733160A (en) * | 2022-04-14 | 2022-07-12 | 福州大学 | Myoelectric signal-based muscle strength training equipment control method |
CN114733160B (en) * | 2022-04-14 | 2022-10-18 | 福州大学 | Myoelectric signal-based muscle strength training equipment control method |
CN114983447A (en) * | 2022-08-01 | 2022-09-02 | 广东海洋大学 | Wearable device of human action discernment, analysis and storage based on AI technique |
CN116449967A (en) * | 2023-06-20 | 2023-07-18 | 浙江强脑科技有限公司 | Bionic hand teaching aid, control method thereof and main control equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Fang et al. | A multichannel surface EMG system for hand motion recognition | |
WO2018113392A1 (en) | Brain-computer interface-based robotic arm self-assisting system and method | |
US10061389B2 (en) | Gesture recognition system and gesture recognition method | |
RU2635632C1 (en) | Method and system of intellectual bionic limb control | |
CN105549743A (en) | Robot system based on brain-computer interface and implementation method | |
CN111259699A (en) | Human body action recognition and prediction method and device | |
Hamedi et al. | Human facial neural activities and gesture recognition for machine-interfacing applications | |
CN108762303A (en) | A kind of portable brain control UAV system and control method based on Mental imagery | |
CN111584031B (en) | Brain-controlled intelligent limb rehabilitation system based on portable electroencephalogram acquisition equipment and application | |
CN103777752A (en) | Gesture recognition device based on arm muscle current detection and motion sensor | |
CN108646915B (en) | Method and system for controlling mechanical arm to grab object by combining three-dimensional sight tracking and brain-computer interface | |
CN101711709A (en) | Method for controlling electrically powered artificial hands by utilizing electro-coulogram and electroencephalogram information | |
CN109009887A (en) | A kind of man-machine interactive navigation system and method based on brain-computer interface | |
CN110688910B (en) | Method for realizing wearable human body basic gesture recognition | |
CN103294192A (en) | LED lamp switch control device and control method thereof based on motor imagery | |
CN107066956B (en) | Multisource emotion recognition robot based on body area network | |
CN115050104B (en) | Continuous gesture action recognition method based on multichannel surface electromyographic signals | |
CN106708273B (en) | EOG-based switching device and switching key implementation method | |
Tang et al. | Wearable supernumerary robotic limb system using a hybrid control approach based on motor imagery and object detection | |
CN108897418A (en) | A kind of wearable brain-machine interface arrangement, man-machine interactive system and method | |
CN103815991A (en) | Double-passage operation sensing virtual artificial hand training system and method | |
CN111399652A (en) | Multi-robot hybrid system based on layered SSVEP and visual assistance | |
CN110673721B (en) | Robot nursing system based on vision and idea signal cooperative control | |
CN112691292A (en) | Parkinson closed-loop deep brain stimulation system based on wearable intelligent equipment | |
CN105718032A (en) | Spaced control autodyne aircraft |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||