CN113762217A - Behavior detection method - Google Patents

Behavior detection method

Info

Publication number
CN113762217A
CN113762217A (application CN202111225681.8A)
Authority
CN
China
Prior art keywords
information
human body
training
dimensional
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111225681.8A
Other languages
Chinese (zh)
Inventor
朱樊
顾海松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Kangbo Intelligent Health Research Institute Co ltd
Original Assignee
Nanjing Kangbo Intelligent Health Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Kangbo Intelligent Health Research Institute Co ltd
Priority to CN202111225681.8A
Publication of CN113762217A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a behavior detection method comprising: extracting coordinate information of human body feature points; enriching the information content of the feature point coordinates to obtain kinematic information of the human body posture; combining the feature point coordinate information with the kinematic information to obtain high-dimensional information, and applying a preset nonlinear dimensionality reduction algorithm to obtain low-dimensional effective data with redundancy removed and noise reduced; and performing unsupervised clustering on the low-dimensional effective data, constructing and training a convolutional neural network to form a behavior recognition classifier, and outputting and displaying the motion behavior of the human body posture. By fusing the relevance information between posture and action with kinematic parameters, obtaining posture modes with an unsupervised algorithm, and then training a classifier with a 1D CNN (one-dimensional convolutional neural network) combined with consecutive-frame time-series information to obtain action categories, behavior recognition is achieved and the difficulty of accurate behavior detection is addressed.

Description

Behavior detection method
Technical Field
The invention relates to the technical field of human body posture detection and recognition, in particular to a behavior detection method.
Background
Accurate behavior detection remains a difficult goal today. Recent advances in machine learning have made limb localization possible; however, while limb position or posture is informative, on its own it offers little interpretability of the corresponding behavior. Extracting behavioral information requires determining the spatiotemporal patterns of these positions.
Therefore, the present invention fuses the relevance information between posture and action with kinematic parameters, obtains posture modes with an unsupervised algorithm, and then trains a classifier with a 1D CNN (one-dimensional convolutional neural network) combined with consecutive-frame time-series information to obtain action categories, thereby achieving behavior recognition and solving the above technical problem in the prior art.
Disclosure of Invention
In view of the defects in the prior art, the object of the present invention is to provide a behavior detection method that solves the above problems. By fusing the relevance information between posture and action with kinematic parameters, obtaining posture modes with an unsupervised algorithm, and then training a classifier with a 1D CNN (one-dimensional convolutional neural network) combined with consecutive-frame time-series information to obtain action categories, the method achieves behavior recognition and addresses the problems in the prior art.
In order to achieve the above object, the invention is realized by the following technical scheme: a behavior detection method comprising the following steps:
Firstly, extracting coordinate information of the human body feature points:
S1-1, presetting human body posture acquisition equipment, and establishing a time sequence to obtain a data set containing all human body posture estimation information under the time sequence;
S1-2, obtaining key point information of the human body posture estimation information and kinematic information of the human body posture in the data set based on the OpenPose algorithm and the DeepLabCut tracking algorithm, respectively;
Secondly, combining the coordinate information of the human body feature points with the kinematic information to obtain high-dimensional information, and applying a preset nonlinear dimensionality reduction algorithm to obtain low-dimensional effective data with redundancy removed and noise reduced;
Thirdly, performing unsupervised clustering on the low-dimensional effective data, constructing and training a convolutional neural network to form a behavior recognition classifier, and outputting and displaying the motion behavior of the human body posture.
As an improvement of the behavior detection method of the present invention, after the unsupervised clustering of the low-dimensional effective data and before the convolutional neural network is constructed, the method further includes:
firstly, modeling the obtained low-dimensional effective data with a Gaussian mixture model (GMM), solving the GMM parameters with the expectation-maximization (EM) algorithm, and obtaining k classification modes;
and secondly, taking the classification results of the k classification modes as classification labels and attaching them to the corresponding training samples of the convolutional neural network as feature data.
As an improvement of the behavior detection method of the present invention, in the first step, the kinematic information of the human body posture is obtained after spatiotemporal processing of the human body feature point coordinate information over consecutive frames, wherein the kinematic information of the human body posture includes the pairwise distances between human body feature points, the limb included angles in the human body posture, and the limb movement velocities.
As an improvement of the behavior detection method of the present invention, in the second step, the coordinate information of the human body feature points and the kinematic information are combined into high-dimensional information as follows: the kinematic information of the human body posture and the coordinate information of the human body feature points are input into the model together as input features, so as to enrich the information content of the input data; and dimensionality reduction and visualization of the high-dimensional information are realized with the t-SNE algorithm, so as to increase the density of the training samples and reduce the computational load caused by excessively high dimensionality.
As an improvement of the behavior detection method of the present invention, in the third step, the convolutional neural network is a one-dimensional convolutional neural network (1D CNN), constructed as follows:
obtaining modeling training samples, wherein each training sample is marked with at least first training feature information, the first training feature information representing the classification results of the k classification modes used as classification labels;
regular modeling, wherein a time-series model is established to retain the training feature information of consecutive frames within a certain time window, forming a time-series matrix carrying the training feature information and the classification labels;
inputting the first training feature information into the convolutional neural network and training it to obtain a behavior recognition classifier;
and obtaining the motion behaviors and categories of the human body posture from the behavior recognition classifier to complete behavior recognition.
As an improvement of the behavior detection method, the behavior detection method is implemented in the Python programming language.
Compared with the prior art, the invention has the beneficial effects that:
by fusing relevance information of the posture and the action and kinematic parameters, obtaining a posture mode by adopting an unsupervised algorithm, and then using 1DCNN (one-dimensional convolutional neural network) and combining with continuous frame time sequence information to train a classifier to obtain action categories, the effect of behavior recognition is achieved, and the problem that accurate behavior detection is difficult is solved.
Drawings
The disclosure of the present invention is illustrated with reference to the accompanying drawing. It is to be understood that the drawing is provided solely for purposes of illustration and not as a definition of the limits of the invention; like reference numerals indicate like parts. In the drawing:
Fig. 1 is a schematic flow diagram, according to an embodiment of the present invention, of identifying the acquired human body feature point coordinate information and kinematic information and outputting the motion behavior category based on a one-dimensional convolutional neural network.
Detailed Description
It is easily understood that, based on the technical solution of the present invention, a person skilled in the art can propose various alternative structures and implementations without departing from the spirit of the present invention. Therefore, the following detailed description and the accompanying drawing are merely illustrative of the technical solution of the present invention and should not be construed as the entirety of the present invention or as limiting the technical solution of the present invention.
As shown in Fig. 1, the present invention provides the following technical solution: a behavior detection method comprising the following steps:
Firstly, extracting coordinate information of the human body feature points:
S1-1, presetting human body posture acquisition equipment, and establishing a time sequence to obtain a data set containing all human body posture estimation information under the time sequence;
S1-2, obtaining key point information of the human body posture estimation information in the data set based on the OpenPose algorithm and the DeepLabCut tracking algorithm, respectively. On the one hand, OpenPose is a bottom-up algorithm that can estimate the posture of body actions, facial expressions, finger motions and the like, works for both single and multiple persons, and is highly robust; on the other hand, the DeepLabCut tracking algorithm can track fine movements, such as fly egg-laying and proboscis extension, or the trajectory of each digit when a mouse reaches out its paw, and it can be adapted to a new tracking task with only a small set of human-labeled images (about 200), which facilitates the study of animal behavior by neuroscientists;
S1-3, aggregating and fusing the results obtained by the OpenPose algorithm and the DeepLabCut tracking algorithm to obtain accurate coordinate information of the human body feature points. It should be noted that both algorithms yield coordinate information of the human body feature points, and the DeepLabCut tracking algorithm can further derive human body kinematic information from consecutive frames. A sketch of this fusion step is given below.
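As a minimal sketch of step one, the following assumes that the per-frame keypoints from the two detectors have already been exported and aligned to a common skeleton as (T, K, 3) arrays of (x, y, confidence); the confidence-weighted fusion rule is an illustrative assumption, since the patent does not specify how the two detectors' outputs are combined.

```python
import numpy as np

def fuse_keypoints(openpose_xyc, dlc_xyc):
    """Confidence-weighted fusion of two keypoint estimates.

    openpose_xyc, dlc_xyc: arrays of shape (T, K, 3) holding per-frame
    (x, y, confidence) for T frames and K body feature points, already
    exported from OpenPose / DeepLabCut and aligned to the same skeleton.
    Returns fused coordinates of shape (T, K, 2).
    """
    xy_op, c_op = openpose_xyc[..., :2], openpose_xyc[..., 2:3]
    xy_dlc, c_dlc = dlc_xyc[..., :2], dlc_xyc[..., 2:3]
    w = c_op / np.clip(c_op + c_dlc, 1e-6, None)   # weight by relative confidence
    return w * xy_op + (1.0 - w) * xy_dlc

# Example with synthetic data: 300 frames, 18 keypoints
rng = np.random.default_rng(0)
op = rng.random((300, 18, 3))
dlc = rng.random((300, 18, 3))
coords = fuse_keypoints(op, dlc)   # (300, 18, 2) fused feature-point coordinates
```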
Secondly, the coordinate information of the human body feature points is preprocessed, specifically as follows:
the method for obtaining the kinematic information of the human body posture includes the steps of enhancing the coordinate information content of the human body feature points to obtain the kinematic information of the human body posture, wherein the kinematic information of the human body posture is obtained only after the time-space relation processing based on continuous frames is carried out on the coordinate information of the human body feature points, and the time-space relation of the continuous frames is understood to mean that moving images collected by moving people at the same time interval t are as follows: acquiring an image at t, acquiring an image at 2t, acquiring an image at 3t and the like, and introducing the kinematic information of the human body posture based on continuous frames to make sense, namely, the images are in a spatial relation, and the time sequence is in a time relation, wherein the kinematic information of the human body posture comprises a distance value of every two human body characteristic points, an included angle value of limbs in the human body posture and a motion velocity value of the limbs;
Thirdly, the coordinate information of the human body feature points and the kinematic information are combined to obtain high-dimensional information. Here "combined" means that the kinematic information, like the feature point coordinates, is simply supplied to the model as an input feature; no additional mathematical method or algorithm is involved, and the purpose is to enrich the information content of the input data. Meanwhile, a preset nonlinear dimensionality reduction algorithm yields low-dimensional effective data with redundancy removed and noise reduced, which promotes efficient computation and makes real-time detection feasible; this both ensures effective fusion of the feature point coordinates with the kinematic information and eases the analysis and computation burden caused by excessively high dimensionality. The nonlinear dimensionality reduction algorithm used in the present invention is t-SNE, i.e., dimensionality reduction and visualization of the high-dimensional information are realized with the t-SNE algorithm in order to increase the density of the training samples and reduce the computational load of the data. As background to this choice, SNE converts the similarity between data points into probabilities and mainly comprises two steps: 1. SNE constructs a probability distribution over pairs of high-dimensional objects such that similar objects have a high probability of being selected and dissimilar objects a low one; 2. SNE constructs the probability distribution of these points in a low-dimensional space so that the two distributions are as similar as possible. The gradient update of the t-SNE algorithm therefore has two advantages: for dissimilar points, a small distance produces a large gradient that repels the points; and this repulsion is not unbounded (owing to the denominator in the gradient), which prevents dissimilar points from being pushed too far apart. A sketch of this step follows.
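A sketch of the combination and nonlinear dimensionality reduction described above, using scikit-learn's t-SNE implementation; the two-dimensional target space and the perplexity value are illustrative assumptions, and `coords`, `dists`, `angles`, and `vel` are the arrays from the preceding sketches.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

# Combine coordinates and kinematic quantities into one high-dimensional feature vector per frame
flat_coords = coords.reshape(coords.shape[0], -1)         # (T, K*2)
high_dim = np.hstack([flat_coords, dists, angles, vel])   # (T, D) high-dimensional information

# Nonlinear dimensionality reduction with t-SNE to obtain low-dimensional effective data
low_dim = TSNE(n_components=2, perplexity=30, init="pca",
               random_state=0).fit_transform(StandardScaler().fit_transform(high_dim))
```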
Fourthly, unsupervised clustering is performed on the low-dimensional effective data, a convolutional neural network is constructed and trained to form a behavior recognition classifier, and the motion behavior of the human body posture is output and displayed. After the unsupervised clustering of the low-dimensional effective data and before the convolutional neural network is constructed, the method further comprises the following steps:
firstly, modeling the obtained low-dimensional effective data with a Gaussian mixture model (GMM), solving the GMM parameters with the expectation-maximization (EM) algorithm, and obtaining k classification modes;
secondly, taking the classification results of the k classification modes as classification labels and attaching them to the corresponding training samples of the convolutional neural network as feature data. A sketch of this clustering step follows.
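A sketch of the unsupervised clustering step: scikit-learn's GaussianMixture fits the GMM parameters internally with the EM algorithm; the choice of k is illustrative, since the patent does not fix the number of classification modes, and `low_dim` is the output of the preceding t-SNE sketch.

```python
from sklearn.mixture import GaussianMixture

k = 8  # number of posture modes; illustrative, the patent leaves k unspecified
gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0)
gmm.fit(low_dim)                    # parameters solved internally by the EM algorithm
mode_labels = gmm.predict(low_dim)  # per-frame classification mode, used as the classification label
```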
the convolutional neural network provided by the invention is trained as a 1DCNN one-dimensional convolutional neural network, and the specific construction mode is as follows:
firstly, obtaining modeling training samples, wherein each training sample is marked with at least first training feature information, the first training feature information representing the classification results of the k classification modes used as classification labels;
secondly, regular modeling, wherein a time-series model is established to retain the training feature information of consecutive frames within a certain time window, forming a time-series matrix carrying the training feature information and the classification labels;
thirdly, inputting the first training feature information into the convolutional neural network and training it to obtain a behavior recognition classifier;
finally, obtaining the motion behaviors and categories of the human body posture from the behavior recognition classifier to complete behavior recognition; for example, the motion behavior of the human body posture may be a behavior category such as eating, smoking, falling, or running. A sketch of such a classifier is given below.
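The following is a compact sketch of how such a 1D CNN over sliding windows of consecutive-frame features might look, written in PyTorch; the window length, layer sizes, and training loop are illustrative assumptions, and `high_dim`, `mode_labels`, and `k` are taken from the earlier sketches.

```python
import numpy as np
import torch
import torch.nn as nn

def make_windows(features, labels, win=30):
    """Stack consecutive-frame feature vectors into a time-series matrix per sample.
    features: (T, D) per-frame features; labels: (T,) per-frame classification labels."""
    xs, ys = [], []
    for s in range(0, features.shape[0] - win + 1, win):
        xs.append(features[s:s + win].T)    # (D, win): convolution runs along the time axis
        ys.append(labels[s + win - 1])      # label the window by its last frame (a simplification)
    return (torch.tensor(np.stack(xs), dtype=torch.float32),
            torch.tensor(np.array(ys), dtype=torch.long))

class BehaviorClassifier1D(nn.Module):
    """One-dimensional convolutional network over a window of consecutive frames."""
    def __init__(self, in_channels, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, n_classes),       # output categories
        )

    def forward(self, x):                   # x: (batch, in_channels, win)
        return self.net(x)

# Training sketch: the k GMM mode labels serve as the classification labels attached to the samples.
X, y = make_windows(high_dim, mode_labels)
model = BehaviorClassifier1D(in_channels=X.shape[1], n_classes=k)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```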
In an embodiment of the invention, the behavior detection method is implemented in the Python programming language. By fusing the relevance information between posture and action with kinematic parameters, obtaining posture modes with an unsupervised algorithm, and then training a classifier with a 1D CNN (one-dimensional convolutional neural network) combined with consecutive-frame time-series information to obtain action categories, behavior recognition is achieved and the difficulty of accurate behavior detection is overcome.
The technical scope of the present invention is not limited to the above description, and those skilled in the art can make various changes and modifications to the above-described embodiments without departing from the technical spirit of the present invention, and such changes and modifications should fall within the protective scope of the present invention.

Claims (6)

1. A method of behavior detection, characterized by: the method comprises the following steps:
firstly, extracting coordinate information of human body feature points
S1-1, presetting human body posture acquisition equipment, and establishing a time sequence to obtain a data set containing all human body posture estimation information under the time sequence;
S1-2, obtaining key point information of the human body posture estimation information and kinematic information of the human body posture in the data set based on the OpenPose algorithm and the DeepLabCut tracking algorithm, respectively;
Secondly, combining the coordinate information of the human body feature points with the kinematic information to obtain high-dimensional information, and applying a preset nonlinear dimensionality reduction algorithm to obtain low-dimensional effective data with redundancy removed and noise reduced;
Thirdly, performing unsupervised clustering on the low-dimensional effective data, constructing and training a convolutional neural network to form a behavior recognition classifier, and outputting and displaying the motion behavior of the human body posture.
2. A behavior detection method as claimed in claim 1, wherein: after the unsupervised clustering of the low-dimensional effective data and before the convolutional neural network is constructed, the method further comprises the following steps:
firstly, modeling the obtained low-dimensional effective data with a Gaussian mixture model (GMM), solving the GMM parameters with the expectation-maximization (EM) algorithm, and obtaining k classification modes;
and secondly, taking the classification results of the k classification modes as classification labels and attaching them to the corresponding training samples of the convolutional neural network as feature data.
3. A behavior detection method as claimed in claim 1, wherein: in the first step, the kinematic information of the human body posture is obtained after spatiotemporal processing of the human body feature point coordinate information over consecutive frames, wherein the kinematic information of the human body posture comprises the pairwise distances between human body feature points, the limb included angles in the human body posture, and the limb movement velocities.
4. A behavior detection method as claimed in claim 1, wherein: in the second step, the coordinate information of the human body feature points and the kinematic information are combined into high-dimensional information as follows: the kinematic information of the human body posture and the coordinate information of the human body feature points are input into the model together as input features, so as to enrich the information content of the input data; and dimensionality reduction and visualization of the high-dimensional information are realized with the t-SNE algorithm, so as to increase the density of the training samples and reduce the computational load caused by excessively high dimensionality.
5. A behavior detection method as claimed in claim 1 or 2, wherein: in the third step, the convolutional neural network is a one-dimensional convolutional neural network (1D CNN), constructed as follows:
obtaining modeling training samples, wherein each training sample is marked with at least first training feature information, the first training feature information representing the classification results of the k classification modes used as classification labels;
regular modeling, wherein a time-series model is established to retain the training feature information of consecutive frames within a certain time window, forming a time-series matrix carrying the training feature information and the classification labels;
inputting the first training feature information into the convolutional neural network and training it to obtain a behavior recognition classifier;
and obtaining the motion behaviors and categories of the human body posture from the behavior recognition classifier to complete behavior recognition.
6. A behavior detection method as claimed in claim 1, wherein: the behavior detection method is implemented in the Python programming language.
CN202111225681.8A 2021-10-21 2021-10-21 Behavior detection method Pending CN113762217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111225681.8A CN113762217A (en) 2021-10-21 2021-10-21 Behavior detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111225681.8A CN113762217A (en) 2021-10-21 2021-10-21 Behavior detection method

Publications (1)

Publication Number Publication Date
CN113762217A true CN113762217A (en) 2021-12-07

Family

ID=78784254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111225681.8A Pending CN113762217A (en) 2021-10-21 2021-10-21 Behavior detection method

Country Status (1)

Country Link
CN (1) CN113762217A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862130A (en) * 2022-11-16 2023-03-28 之江实验室 Behavior recognition method based on human body posture and body motion field thereof
CN115862130B (en) * 2022-11-16 2023-10-20 之江实验室 Behavior recognition method based on human body posture and trunk sports field thereof

Similar Documents

Publication Publication Date Title
CN110321833B (en) Human body behavior identification method based on convolutional neural network and cyclic neural network
Amor et al. Action recognition using rate-invariant analysis of skeletal shape trajectories
Devanne et al. 3-d human action recognition by shape analysis of motion trajectories on riemannian manifold
Pantic et al. Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences
Li et al. Data-free prior model for facial action unit recognition
Bashir et al. Object trajectory-based activity classification and recognition using hidden Markov models
CN110575663B (en) Physical education auxiliary training method based on artificial intelligence
Eskil et al. Facial expression recognition based on anatomy
Avola et al. Deep temporal analysis for non-acted body affect recognition
Wang et al. A hierarchical context model for event recognition in surveillance video
CN101561881B (en) Emotion identification method for human non-programmed motion
WO2021243561A1 (en) Behaviour identification apparatus and method
Chen et al. Data-free prior model for upper body pose estimation and tracking
Jenkins et al. Tracking human motion and actions for interactive robots
Nale et al. Suspicious human activity detection using pose estimation and lstm
CN113762217A (en) Behavior detection method
Zhao et al. Experiments with facial expression recognition using spatiotemporal local binary patterns
Batool et al. Fundamental recognition of ADL assessments using machine learning engineering
Jenkins et al. Interactive human pose and action recognition using dynamical motion primitives
Alharbi et al. A data preprocessing technique for gesture recognition based on extended-kalman-filter
Chen et al. Activity recognition through multi-scale dynamic bayesian network
Usman et al. Skeleton-based motion prediction: A survey
CN113537164B (en) Real-time action time sequence positioning method
He et al. Spontaneous facial expression recognition based on feature point tracking
CN115761814A (en) System for detecting emotion in real time according to human body posture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination