CN109117893A - A kind of action identification method and device based on human body attitude - Google Patents
- Publication number
- CN109117893A (application CN201810988873.6A)
- Authority
- CN
- China
- Prior art keywords
- skeleton data
- joint point
- data
- human body
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides an action recognition method and device based on human body posture. The method includes: obtaining filtered skeleton data through an improved limiting-filter algorithm; obtaining angle features through an improved angle calculation method; training the classified angle features based on logistic regression to obtain a trained classifier; obtaining a recognition result for the static posture of the human body through the classifier; and finally recognizing the human action from the static-posture recognition results using a reverse-order method. The present invention achieves the technical effect of increasing recognition speed and improving recognition accuracy.
Description
Technical field
The present invention relates to the field of human-computer interaction, and in particular to an action recognition method and device based on human body posture.
Background technique
With the development of the times, people call for more natural modes of human-computer interaction, introducing the interaction patterns of person-to-person communication into human-computer interaction — the so-called "natural" interaction modes. These include a series of technologies for recognizing the human body, arms and gestures, covering modes such as motion, gesture and voice. Motion is an important feature distinguishing people from other objects. People express information and emotion through posture; in a sports match, for example, the referee conveys information using various postures. It is therefore necessary to find a good method for recognizing human posture.
Traditional human action recognition technologies often use media such as ordinary cameras, radar or wearable sensor devices. Each of these approaches has defects in one or more respects, such as recognition efficiency, cost or environmental constraints, which limit the application of the technology. The low-cost depth camera Kinect, released by Microsoft in 2010, offers this technology a new choice: Kinect obtains relatively accurate depth images that directly express the three-dimensional features of an object, which can, to a certain extent, avoid problems that may arise in action recognition based on conventional two-dimensional image features.
In implementing the present invention, the applicant found that prior-art methods that perform human action recognition after obtaining a depth image with Kinect place high demands on the actions to be recognized, are affected by differences in illumination and in the person being recognized, have high algorithmic complexity, and leave recognition accuracy to be improved.
Summary of the invention
In view of this, the purpose of the present invention is to provide an action recognition method and device based on human body posture that can recognize human actions in real time with high accuracy while reducing algorithmic complexity and making the method easier to use, thereby solving the technical problem of low recognition accuracy in the prior art.
A first aspect of the present invention provides an action recognition method based on human body posture, comprising:
Step S1: obtaining human skeleton data through the bone-tracking technology of a depth sensor, the skeleton data including the three-dimensional coordinates of human joint points, and converting the three-dimensional coordinates to the world coordinate system in which the human body is located;
Step S2: filtering the skeleton data using an improved limiting-filter algorithm to obtain filtered skeleton data, wherein the improved limiting-filter algorithm specifically includes: first judging whether the degree of jitter of the current skeleton data exceeds a threshold; if the degree of jitter of the current skeleton data is less than the threshold, updating the current skeleton data with the skeleton data of the filter buffer; otherwise, continuing to judge whether the degree of jitter of the previous skeleton data exceeds the threshold; if the degree of jitter of the previous skeleton data is less than the threshold, updating the skeleton data of the filter buffer with the current skeleton data; if the degree of jitter of the previous skeleton data exceeds the threshold, judging whether the current skeleton data lies within the filter range, and if so, updating the current skeleton data with the skeleton data of the filter buffer, otherwise updating the skeleton data of the filter buffer with the current skeleton data;
Step S3: performing feature extraction on the filtered skeleton data according to the converted three-dimensional coordinates and a predetermined angle calculation method, obtaining angle features composed of the angles between the joint points;
Step S4: training a training sample set obtained in advance based on a logistic regression algorithm and the angle features, obtaining a classifier;
Step S5: recognizing the human action through the classifier, obtaining a static-posture recognition result;
Step S6: based on the static-posture recognition result, using a reverse-order recognition method to determine whether two preset static postures are recognized within five frames, and if so, recognizing a dynamic action as the action recognition result.
Further, the depth sensor also obtains depth information, and step S1 specifically includes:
Step S1.1: obtaining the actual distance between the depth sensor and the human body according to the depth information;
Step S1.2: converting the three-dimensional coordinates of the depth image to actual coordinates in the world coordinate system according to the actual distance and a coordinate-system conversion formula, wherein the coordinate-system conversion formula is:
x = (xd - w/2) * (zd + D) * F
y = (yd - h/2) * (zd + D) * F
wherein (x, y) is the actual coordinate, (xd, yd, zd) is the three-dimensional coordinate of the depth information in the depth image, w*h is the resolution of the depth sensor, and D and F are constants, with D = -10 and F = 0.0021.
Further, in step S2, the degree of jitter of the skeleton data is expressed by the jitter radius of the skeleton data.
Further, step S3 specifically includes:
Step S3.1: calculating the distance information between joint points using a distance calculation formula, wherein the distance calculation formula is, for example:
c = sqrt((x1 - x2)^2 + (y1 - y2)^2)
wherein the joint points include three points A, B and C, the actual coordinate of joint point A being (x1, y1), the actual coordinate of joint point B being (x2, y2), and the actual coordinate of joint point C being (x3, y3);
Step S3.2: obtaining the angle of the lines between the joint points from the distance information as the angle feature, specifically:
θ = arccos((a^2 + b^2 - c^2) / (2ab))
wherein a denotes the distance between joint point B and joint point C, b denotes the distance between joint point A and joint point C, c denotes the distance between joint point A and joint point B, and θ is the angle between AC and BC.
Further, step S4 specifically includes:
Step S4.1: training the training sample set obtained in advance with a logistic regression algorithm based on the angle features, obtaining a classification model, wherein the training sample set obtained in advance is the posture data of each frame;
Step S4.2: verifying the effect of the classification model with the data of a test set and adjusting the hyperparameters, obtaining the adjusted classifier.
Further, the classifier includes N vectors of the form θ = [θ0, θ1, θ2, ..., θN-1]^T, and the classifier includes N preset postures and their corresponding posture numbers; step S5 specifically includes:
Step S5.1: taking the human action to be detected as a sample xi and calculating the probability vector p_{1×j} = g(x(i)θ) of the sample, wherein i denotes the sample number, j denotes the number of static posture classes, and g is the kernel function of the logistic regression algorithm;
Step S5.2: taking the subscript of the maximum element of the probability vector as the recognized posture number, and taking the posture corresponding to the recognized posture number as the static-posture recognition result.
Based on the same inventive concept, a second aspect of the present invention provides an action recognition device based on human body posture, comprising:
a skeleton data acquisition module, for obtaining human skeleton data through the bone-tracking technology of a depth sensor, the skeleton data including the three-dimensional coordinates of human joint points, and converting the three-dimensional coordinates to the world coordinate system in which the human body is located;
a skeleton data filter module, for filtering the skeleton data using an improved limiting-filter algorithm to obtain filtered skeleton data, wherein the improved limiting-filter algorithm specifically includes: first judging whether the degree of jitter of the current skeleton data exceeds a threshold; if the degree of jitter of the current skeleton data is less than the threshold, updating the current skeleton data with the skeleton data of the filter buffer; otherwise, continuing to judge whether the degree of jitter of the previous skeleton data exceeds the threshold; if the degree of jitter of the previous skeleton data is less than the threshold, updating the skeleton data of the filter buffer with the current skeleton data; if the degree of jitter of the previous skeleton data exceeds the threshold, judging whether the current skeleton data lies within the filter range, and if so, updating the current skeleton data with the skeleton data of the filter buffer, otherwise updating the skeleton data of the filter buffer with the current skeleton data;
an angle feature extraction module, for performing feature extraction on the filtered skeleton data according to the converted three-dimensional coordinates and a predetermined angle calculation method, obtaining angle features composed of the angles between the joint points;
a training module, for training a training sample set obtained in advance based on a logistic regression algorithm and the angle features, obtaining a classifier;
a posture recognition module, for recognizing the human action through the classifier, obtaining a static-posture recognition result;
an action recognition module, for determining, based on the static-posture recognition result, whether two preset static postures are recognized within five frames using a reverse-order recognition method, and if so, recognizing a dynamic action as the action recognition result.
Further, the depth sensor also obtains depth information, and the skeleton data acquisition module is specifically configured to:
obtain the actual distance between the depth sensor and the human body according to the depth information; and
convert the three-dimensional coordinates of the depth image to actual coordinates in the world coordinate system according to the actual distance and a coordinate-system conversion formula, wherein the coordinate-system conversion formula is:
x = (xd - w/2) * (zd + D) * F
y = (yd - h/2) * (zd + D) * F
wherein (x, y) is the actual coordinate, (xd, yd, zd) is the three-dimensional coordinate of the depth information in the depth image, w*h is the resolution of the depth sensor, and D and F are constants, with D = -10 and F = 0.0021.
Based on the same inventive concept, a third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored, the program, when executed, implementing the method described in the first aspect.
Based on the same inventive concept, a fourth aspect of the present invention provides a computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the method described in the first aspect.
The above one or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
In the method provided by the present invention, the acquired skeleton data is filtered by an improved limiting-filter algorithm, yielding stable skeleton data as a basis for subsequent recognition; feature extraction is performed on the filtered skeleton data according to the converted three-dimensional coordinates and a preset angle calculation method, obtaining angle features composed of the angles between the joint points; and a training sample set obtained in advance is trained based on a logistic regression algorithm and the angle features to obtain a classifier. Realizing action recognition through logistic regression on angle features reduces the complexity of the recognition method and increases recognition speed, and the trained classifier accurately defines and describes each action, so human actions can be recognized accurately through the classifier, improving recognition accuracy and solving the technical problem of low recognition accuracy in the prior art.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of an action recognition method based on human body posture in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the raise-both-hands action in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the human skeleton model represented by the position information of the 25 human joint points acquired by the method shown in Fig. 1;
Fig. 4 is a structural diagram of an action recognition device based on human body posture in an embodiment of the present invention;
Fig. 5 is a structural diagram of a computer-readable storage medium in an embodiment of the present invention;
Fig. 6 is a structural diagram of a computer device in an embodiment of the present invention.
Specific embodiment
The embodiments of the present invention provide an action recognition method and device based on human body posture that can recognize human actions in real time with high accuracy while reducing the complexity of the algorithm and making it easier to use.
To achieve the above technical effects, the general idea of the present invention is as follows:
An action recognition method based on human body posture: filtered skeleton data is obtained through an improved limiting-filter algorithm; angle features are obtained through an improved angle calculation method; the classified angle features are trained based on logistic regression to obtain a trained classifier; static-posture recognition results for the human action are then obtained through the classifier; and finally the human action is recognized from the static-posture recognition results using a reverse-order method.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the drawings. Obviously, the described embodiments are a part rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Embodiment one
This embodiment provides an action recognition method based on human body posture. Referring to Fig. 1, the method comprises:
First, step S1 is performed: human skeleton data is obtained through the bone-tracking technology of a depth sensor, the skeleton data including the three-dimensional coordinates of human joint points, and the three-dimensional coordinates are converted to the world coordinate system in which the human body is located.
Specifically, the depth sensor may be an existing sensor, such as Microsoft's Kinect sensor or Apple's PrimeSense sensor, and the number of human joint points obtained corresponds to the technology of the depth sensor. In this embodiment, a Kinect V2, a human-computer interaction device developed by Microsoft, may be used. Bone tracking is the core technology of the Kinect V2: it can accurately calibrate 25 key nodes of the human body and track the positions of these 25 points in real time, with a resolution of 1920*1080. A Kinect V1 may also be used, obtaining 20 human joint points. Referring to Fig. 3, a schematic diagram of the human skeleton model represented by the position information of the 25 joint points, the 25 joint points specifically include: head, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hand, right hand, left hand tip, right hand tip, left thumb, right thumb, spine shoulder, spine middle, spine base, left hip, right hip, left knee, right knee, left ankle, right ankle, left foot and right foot. Since the three-dimensional coordinates of the 25 human joint points are obtained in the Kinect V2 coordinate system, they need to be converted to the actual coordinate system, i.e., the world coordinate system in which the human body is located.
In one embodiment, the depth sensor also obtains depth information, and step S1 specifically includes:
Step S1.1: obtaining the actual distance between the depth sensor and the human body according to the depth information;
Step S1.2: converting the three-dimensional coordinates of the depth image to actual coordinates in the world coordinate system according to the actual distance and the coordinate-system conversion formula, wherein the coordinate-system conversion formula is:
x = (xd - w/2) * (zd + D) * F
y = (yd - h/2) * (zd + D) * F
wherein (x, y) is the actual coordinate, (xd, yd, zd) is the three-dimensional coordinate of the depth information in the depth image, w*h is the resolution of the depth sensor, and D and F are constants, with D = -10 and F = 0.0021.
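As a minimal sketch of this conversion (Python), the formula can be applied per pixel. Note the algebraic form, the function name and the default depth resolution here are assumptions for illustration, reconstructed from the stated constants D = -10 and F = 0.0021 rather than taken verbatim from the patent:

```python
def depth_to_world(xd, yd, zd, w=512, h=424, D=-10.0, F=0.0021):
    """Map a depth-image coordinate (xd, yd, zd) to real-world x, y.

    The (zd + D) * F scaling follows the widely used Kinect conversion
    with minDistance = -10 and scaleFactor = 0.0021; the 512*424 depth
    resolution is an illustrative default, not a value from the text.
    """
    x = (xd - w / 2.0) * (zd + D) * F
    y = (yd - h / 2.0) * (zd + D) * F
    return x, y, zd

# A pixel at the image centre maps to x = y = 0 at any depth.
x, y, z = depth_to_world(256, 212, 1000.0)
```

A real system would apply this to every tracked joint of every frame before filtering.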
Then step S2 is performed: the skeleton data is filtered using the improved limiting-filter algorithm to obtain filtered skeleton data, wherein the improved limiting-filter algorithm specifically includes: first judging whether the degree of jitter of the current skeleton data exceeds a threshold; if the degree of jitter of the current skeleton data is less than the threshold, updating the current skeleton data with the skeleton data of the filter buffer; otherwise, continuing to judge whether the degree of jitter of the previous skeleton data exceeds the threshold; if the degree of jitter of the previous skeleton data is less than the threshold, updating the skeleton data of the filter buffer with the current skeleton data; if the degree of jitter of the previous skeleton data exceeds the threshold, judging whether the current skeleton data lies within the filter range, and if so, updating the current skeleton data with the skeleton data of the filter buffer, otherwise updating the skeleton data of the filter buffer with the current skeleton data.
Specifically, the improved limiting-filter algorithm of the present invention adds the idea of dynamic programming to the existing limiting-filter algorithm: it first judges whether the degree of jitter of the current skeleton data (the data of the current frame) exceeds the threshold, then judges whether the degree of jitter of the previous skeleton data exceeds the threshold, and only then decides how the current skeleton data is to be handled. In this way, the skeleton data can be made more stable.
In a specific implementation, the degree of jitter of the skeleton data is expressed by the jitter radius of the skeleton data. The skeleton data obtained by the depth sensor sometimes jitters slightly: the jitter usually fluctuates within a narrow range around the actual joint coordinates of the human body, and sometimes the jitter prevents a joint point from being detected at all. A filtering algorithm therefore needs to be chosen to process the skeleton data accordingly, keeping a joint point that fluctuates within a narrow range in its last state, which guarantees the stability of the data. In this embodiment, the degree of jitter of the skeleton data is also referred to as the skeleton data confidence; hereinafter the two have the same meaning. More specifically, in this embodiment the degree of jitter of the skeleton data can be expressed by the jitter radius of the skeleton data, and the threshold can be set according to actual conditions, for example 0.02 m, 0.03 m or 0.04 m. When the jitter radius exceeds the set threshold, the improved limiting-filter algorithm corrects the error within this range.
Whether the degree of jitter of the previous skeleton data exceeds the threshold can be judged in the following way: check whether the differences between pointFilter[id].position.X and point.position.X, between pointFilter[id].position.Y and point.position.Y, and between pointFilter[id].position.Z and point.position.Z are each less than the threshold, where pointFilter[id].position.X, pointFilter[id].position.Y and pointFilter[id].position.Z are the coordinates of the joint point after the previous filtering, and point.position.X, point.position.Y and point.position.Z are the coordinates of the current joint point. That is, the judgement is made by calculating the distance between the current joint point coordinates and the previous joint point coordinates.
Next, step S3 is performed: feature extraction is performed on the filtered skeleton data according to the converted three-dimensional coordinates and the predetermined angle calculation method, obtaining angle features composed of the angles between the joint points.
In one embodiment, step S3 specifically includes:
Step S3.1: calculating the distance information between joint points using a distance calculation formula, wherein the distance calculation formula is, for example:
c = sqrt((x1 - x2)^2 + (y1 - y2)^2)
wherein the joint points include three points A, B and C, the actual coordinate of joint point A being (x1, y1), the actual coordinate of joint point B being (x2, y2), and the actual coordinate of joint point C being (x3, y3);
Step S3.2: obtaining the angle of the lines between the joint points from the distance information as the angle feature, specifically:
θ = arccos((a^2 + b^2 - c^2) / (2ab))
wherein a denotes the distance between joint point B and joint point C, b denotes the distance between joint point A and joint point C, c denotes the distance between joint point A and joint point B, and θ is the angle between AC and BC.
Specifically, in the above manner the angles between the lines connecting the joint points can be calculated, and the multiple angles obtained are used as the angle features.
Step S4 is then performed: the training sample set obtained in advance is trained based on the logistic regression algorithm and the angle features, obtaining the classifier.
In one embodiment, step S4 specifically includes:
Step S4.1: training the training sample set obtained in advance with the logistic regression algorithm based on the angle features, obtaining a classification model, wherein the training sample set obtained in advance is the posture data of each frame;
Step S4.2: verifying the effect of the classification model with the data of a test set and adjusting the hyperparameters, obtaining the adjusted classifier.
Specifically, the training sample set obtained in advance is labelled in advance: since logistic regression is supervised learning, the training sample set consists of data of known static postures. Training is then carried out by logistic regression to obtain a model, i.e., the classifier; the effect of the model is then verified with the data of the test set and the hyperparameters are adjusted, obtaining a classifier with good final effect, i.e., the adjusted classifier.
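A minimal sketch of this supervised training step (Python): plain batch gradient descent on the logistic loss for a single one-vs-rest posture model. The toy data, learning rate and epoch count are illustrative assumptions, not the patent's implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_one_vs_rest(X, y, lr=0.1, epochs=500):
    """Fit one posture's one-vs-rest parameter vector theta by batch
    gradient descent on the logistic loss; each sample carries a
    leading 1.0 as the bias term."""
    theta = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(theta)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(t * v for t, v in zip(theta, xi)))
            for j, v in enumerate(xi):
                grad[j] += (p - yi) * v
        theta = [t - lr * g / len(X) for t, g in zip(theta, grad)]
    return theta

# Toy labelled frames: [bias, angle feature]; posture present when angle >= 2.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [0, 0, 1, 1]
theta = train_one_vs_rest(X, y)
```

In a real setup each sample would be the angle-feature vector of one labelled frame, and the test set would be used to tune lr and epochs as hyperparameters.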
The logistic regression algorithm is used to classify the action information of each frame. For example, suppose there is an N-dimensional feature vector x = [x0, x1, x2, ..., xN-1]^T and a parameter vector θ = [θ0, θ1, θ2, ..., θN-1]^T. In one-vs-rest logistic regression classification, a model hθ^(i)(x) is trained for each class, and at prediction time the class with the maximum value of hθ^(i)(x) is selected as the classification result.
In this embodiment, a one-vs-rest classifier θ = [θ0, θ1, θ2, ..., θN-1]^T is trained for each static posture. For a new incoming sample xi, the probability vector p_{1×j} = g(x(i)θ) is calculated; the subscript of its maximum element is then the number of the recognized static posture. hθ^(i)(x) is the function model of the logistic regression algorithm, of the form:
hθ(x) = g(θ^T x)
where g is the kernel function:
g(z) = 1 / (1 + e^(-z))
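As a sketch, this prediction rule — evaluate hθ(x) = g(θᵀx) for each posture model and take the argmax — looks like the following in Python (the toy parameter vectors are illustrative values, not trained models):

```python
import math

def g(z):
    """Sigmoid kernel g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_posture(models, x):
    """Evaluate h_theta(x) = g(theta^T x) for every one-vs-rest model
    and return (recognized posture index, probability vector)."""
    probs = [g(sum(t * v for t, v in zip(theta, x))) for theta in models]
    return max(range(len(probs)), key=probs.__getitem__), probs

# Two toy posture models over a 2-dimensional angle-feature vector.
models = [[2.0, -1.0], [-1.0, 2.0]]
posture, probs = predict_posture(models, [0.2, 0.9])
```

Here the second model scores higher, so posture number 1 is the static-posture recognition result.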
Step S5 is then performed: the human action is recognized through the classifier, obtaining the static-posture recognition result.
In one embodiment, the classifier includes N vectors of the form θ = [θ0, θ1, θ2, ..., θN-1]^T, and the classifier includes N preset static postures and their corresponding posture numbers; step S5 specifically includes:
Step S5.1: taking the human action to be detected as a sample xi and calculating the probability vector p_{1×j} = g(x(i)θ) of the sample, wherein i denotes the sample number, j denotes the number of static posture classes, and g is the kernel function of the logistic regression algorithm;
Step S5.2: taking the subscript of the maximum element of the probability vector as the recognized posture number, and taking the posture corresponding to the recognized posture number as the static-posture recognition result.
Step S6: based on the static-posture recognition result, a reverse-order recognition method is used to determine whether two preset static postures are recognized within five frames, and if so, a dynamic action is recognized as the action recognition result.
Specifically, after the static-posture recognition results are obtained from the logistic regression, this embodiment recognizes actions using a reverse-order recognition method. The static postures within five frames are first judged by the logistic regression classification; five frames are treated as one period, data older than five frames is deleted automatically, and the current frame data is compared with the previous five frames. When the two specified static postures are both recognized within the five frames, one action is recognized.
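The five-frame reverse-order check can be sketched as follows (Python; the detector factory, posture labels and window handling are illustrative assumptions):

```python
from collections import deque

def make_action_detector(pose_a, pose_b, window=5):
    """Sketch of the reverse-order recognition described above: a dynamic
    action fires when both specified static postures appear within the
    last `window` per-frame results; older results drop out of the deque
    automatically, mirroring the automatic deletion of data before the
    five-frame period."""
    recent = deque(maxlen=window)
    def feed(posture):
        recent.append(posture)
        return pose_a in recent and pose_b in recent
    return feed

# Raise-both-hands = T-pose followed by hands-up within the window.
detect_raise_hands = make_action_detector("T_POSE", "HANDS_UP")
fired = [detect_raise_hands(p) for p in
         ["STAND", "T_POSE", "STAND", "HANDS_UP", "STAND"]]
```

The detector first fires on the frame where the second posture completes the pair, and keeps firing while both postures remain inside the window.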
The action recognition method based on human body posture provided by the present invention is not affected by illumination or by the person being recognized, and obtains good results when tested with different illumination and with users of different heights and builds. The improved limiting-filter algorithm guarantees the stability of each frame of data; realizing action recognition through logistic regression on angle features reduces the complexity of the algorithm and improves the speed and accuracy of recognition, the average recognition time measured experimentally being 35 ms. Recognizing actions by judging different static postures within five frames also greatly increases the diversity and complexity of the recognizable actions.
To illustrate the implementation of the recognition method of the present invention more clearly, a specific example is introduced below. Referring to Fig. 2, a schematic diagram of the raise-both-hands action of this embodiment: the raise-both-hands action is composed of two static postures, namely first holding both arms out flat in a T shape and then raising both hands above the head; making these two postures in sequence is judged as the raise-both-hands action.
When judging whether the target human body performs the raise-both-hands action, the specific implementation of this embodiment comprises the following steps:
Step S101: human skeleton data is obtained through the bone-tracking technology of the depth sensor, the skeleton data including the three-dimensional coordinates of the human joint points, and the three-dimensional coordinates are converted to the world coordinate system in which the human body is located, specifically including:
Step S11: the actual distance between the depth sensor and the human body is obtained according to the depth information:
d = K * tan(H * dd + L) - O
wherein dd is the depth information obtained, O = 3.7 cm, L = 1.18 rad, K = 12.36 cm, and H = 3.5*10^-4 rad.
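Plugging in the stated constants, the distance formula of step S11 reads as follows (Python; the function name and the assumption that the result is in centimetres are illustrative):

```python
import math

def raw_depth_to_distance_cm(dd, K=12.36, H=3.5e-4, L=1.18, O=3.7):
    """d = K * tan(H * dd + L) - O, with K and O in centimetres and
    H and L in radians; dd is the raw depth reading from the sensor."""
    return K * math.tan(H * dd + L) - O

# Larger raw readings map to larger physical distances in this regime.
d_near = raw_depth_to_distance_cm(400)
d_far = raw_depth_to_distance_cm(600)
```

The tangent makes the mapping strongly non-linear, so distance resolution degrades as the raw reading grows.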
Step S12: convert the three-dimensional coordinates of the depth image into actual coordinates under the world coordinate system according to the actual distance and the coordinate-system transformation formula:

x = (x_d − w/2)·(z_d + D)·F
y = (y_d − h/2)·(z_d + D)·F

where (x, y) are the actual coordinates, (x_d, y_d, z_d) are the three-dimensional coordinates of the depth information in the depth image, w×h is the resolution of the depth sensor, and D and F are constants, with D = −10 and F = 0.0021.
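The transformation formula itself is not reproduced legibly in the source; the constants D = −10 and F = 0.0021 match a widely used depth-pixel-to-world conversion, which is assumed in this sketch (the 512×424 resolution is the Kinect v2 depth resolution and is likewise an assumption):

```python
def depth_pixel_to_world(xd, yd, zd, w=512, h=424):
    """Convert a depth-image coordinate (xd, yd) with depth value zd into
    actual (x, y) coordinates under the world coordinate system, assuming
    x = (xd - w/2)(zd + D)F and y = (yd - h/2)(zd + D)F with D = -10, F = 0.0021."""
    D, F = -10.0, 0.0021
    x = (xd - w / 2) * (zd + D) * F
    y = (yd - h / 2) * (zd + D) * F
    return x, y

# A pixel at the image centre maps to the world origin in x and y.
print(depth_pixel_to_world(256, 212, 1000))
```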
Step S201: filter the skeleton data using an improved limiting filter algorithm to obtain the filtered skeleton data. The specific implementation includes the following sub-steps:
Step S21: add the idea of dynamic programming to the limiting filter algorithm. First judge whether the confidence of this skeleton data is less than the threshold JOINT_CONFIDENCE, i.e. whether point.fConfidence < JOINT_CONFIDENCE; if so, directly update this skeleton data with the skeleton data in the filter buffer.
Step S22: if the confidence of this skeleton data is greater than the threshold, continue to judge whether the confidence of the last filter result is less than the threshold; if the last result is below the threshold, update the buffered skeleton data with this skeleton data.
Step S23: if the confidence of the last filter result is also greater than the threshold, judge whether this skeleton data lies within the filter range, i.e. whether the differences between pointFilter[id].position.X and point.position.X, between pointFilter[id].position.Y and point.position.Y, and between pointFilter[id].position.Z and point.position.Z are all less than the threshold. If so, update this skeleton data with the data in the filter buffer; otherwise, update the skeleton data in the filter buffer with this skeleton data.
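Steps S21-S23 can be sketched as a per-joint filter; the record layout (a confidence field plus x, y, z) and both threshold values are illustrative assumptions, not values given in the text:

```python
JOINT_CONFIDENCE = 0.5   # confidence threshold (assumed value)
FILTER_RANGE = 0.05      # per-axis jitter threshold in metres (assumed value)

def limit_filter(point: dict, buffer: dict) -> dict:
    """Improved limiting filter for one joint (steps S21-S23).

    `point` is this frame's joint reading, `buffer` the last filter result,
    both as {'confidence': c, 'x': x, 'y': y, 'z': z}. Returns the filtered
    joint and updates `buffer` in place when the new reading is accepted."""
    # S21: low-confidence reading -> replace it with the buffered data.
    if point['confidence'] < JOINT_CONFIDENCE:
        return dict(buffer)
    # S22: this reading is trustworthy but the last result was not -> accept it.
    if buffer['confidence'] < JOINT_CONFIDENCE:
        buffer.update(point)
        return dict(point)
    # S23: both trustworthy -> compare per-axis differences against the range.
    within = all(abs(point[k] - buffer[k]) < FILTER_RANGE for k in ('x', 'y', 'z'))
    if within:
        return dict(buffer)   # small jitter: keep the buffered value
    buffer.update(point)      # large genuine move: accept it and refresh the buffer
    return dict(point)
```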
Step S301: perform feature extraction on the skeleton data from step S201, obtaining a feature composed of the angles of the joint points through an angle calculation method. The specific implementation includes the following steps:
The angle features are obtained with a law-of-cosines method: first the distance information between joint points is obtained with the distance formula

d = √((x₁ − x₂)² + (y₁ − y₂)²)

and the angle of the lines between joint points is then found as

θ = arccos((a² + b² − c²) / (2ab))

Among the 25 joints, 10 angle features potentially relevant to posture are extracted, all lying between 0° and 180°: the angle of the left shoulder-left wrist line with the Y axis; the angle of the right shoulder-right wrist line with the Y axis; the angle between the left shoulder-left elbow and left elbow-left wrist vectors; the angle between the right shoulder-right elbow and right elbow-right wrist vectors; the angle between the left knee-left hip and left knee-left ankle vectors; the angle between the right knee-right hip and right knee-right ankle vectors; the angle of the left shoulder-right shoulder line with the X axis; the angle of the left hip-right hip line with the X axis; the angle of the neck, mid-spine and lower spine with the Y axis; and the angle of the head-spine base line with the Y axis.
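The two formulas above can be combined into a small helper; the joint coordinates below are toy values, and the 2-D actual coordinates follow the (x, y) convention of the preceding steps:

```python
import math

def joint_angle(A, B, C):
    """Angle θ (in degrees, 0-180) between lines AC and BC, computed from the
    pairwise distances a = |BC|, b = |AC|, c = |AB| via the law of cosines."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    a, b, c = dist(B, C), dist(A, C), dist(A, B)
    cos_theta = (a * a + b * b - c * c) / (2 * a * b)
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# A right angle at the shared vertex C for A=(1,0), B=(0,1), C=(0,0).
print(round(joint_angle((1, 0), (0, 1), (0, 0))))
```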
Next, step S401 is executed: the training sample set is trained with logistic regression based on the angle features obtained in step S301, yielding a classifier with which static postures are classified and recognized. The specific implementation of step S401 includes the following steps:
After the extraction of the joint-angle features is completed, the action information of each frame is classified using the logistic regression algorithm. In this embodiment, six subjects (three male, three female) were recruited for the experiment, with the Kinect placed 1.8 m in front of the subject. Subjects 2-5 each performed the 20 predefined actions in order, sampling each gesture 50 times, for a total of 4000 samples; subject 1 sampled each gesture 250 times, for a total of 5000 samples. 50% of subject 1's data was used for training and the remaining 50% for testing, while all data of subjects 2-5 were used for testing. A real-time static posture recognition system (i.e. the classifier) is built with the logistic regression algorithm. Suppose there is an N-dimensional feature vector x = [x₀, x₁, x₂, …, x_{N−1}]ᵀ and a parameter vector θ = [θ₀, θ₁, θ₂, …, θ_{N−1}]ᵀ; the function model is as follows:
h_θ(x) = g(θᵀx)
where the kernel function g is defined as the sigmoid function g(z) = 1 / (1 + e^(−z)).
In one-vs-rest logistic regression classification, a model h_θ^(i)(x) is trained for every class, and at prediction time the class with the largest h_θ^(i)(x) value is selected as the classification result. That is, for each posture a one-vs-rest classifier θ = [θ₀, θ₁, θ₂, …, θ_{N−1}]ᵀ is trained; when a new sample x^(i) arrives, the probability vector p_{1×j} = g(x^(i)θ) is computed, and the subscript of its largest element is the number of the recognized posture.
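The one-vs-rest prediction step can be sketched as follows; the toy parameter vectors are assumptions for illustration, standing in for the trained classifiers θ described above:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict_posture(x, thetas):
    """One-vs-rest logistic regression prediction.

    x      : feature vector [x0, ..., xN-1] of joint angles.
    thetas : one parameter vector per posture class.
    Returns the index of the posture whose model h_theta(x) = g(theta^T x)
    scores highest, i.e. the recognized posture number."""
    scores = [sigmoid(sum(t_i * x_i for t_i, x_i in zip(theta, x)))
              for theta in thetas]
    return max(range(len(scores)), key=scores.__getitem__)

# Two toy posture models over a 2-D feature vector; the second scores higher here.
print(predict_posture([1.0, 2.0], [[-1.0, 0.0], [0.5, 0.5]]))
```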
When the hands-raising action is made there are two postures, namely both arms raised flat in a T shape and both hands lifted overhead. When the arms are held level in the T shape, h_θ^(i)(x) is largest at the element subscript corresponding to the number of the flat-raise posture, so the flat raise is determined; the overhead raise is recognized in the same way, so both static postures are identified.
Step S501: recognize the movement of the human body with the classifier to obtain the static posture recognition result.
Step S601: using the inverted-order recognition method, determine whether the two defined static postures are both recognized within five frames, thereby recognizing one dynamic action. The specific implementation includes the following steps:
First the static posture is judged: the correspondence of the static postures is defined and stored in the variable static. When the hands-raising action is made, the flat-raise posture is recognized first and its posture number is saved in the variable static, the static variable holding the static posture recognition result of each frame. These results can be pushed frame by frame into a sequence container of the C++ standard library such as vector. With five frames as one cycle, data older than five frames is deleted from the vector and the current frame data is compared with the preceding frames. If, for example, the last few of the five frames contain the overhead-raise posture, the judgment condition confirms that both the flat-raise and overhead-raise postures exist within these five frames, so the hands-raising action is recognized.
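The five-frame window judgment can be sketched as a simple membership check over the per-frame static posture results; the posture numbers (0 for the flat raise, 1 for the overhead raise) are illustrative assumptions:

```python
FLAT_RAISE, OVERHEAD_RAISE = 0, 1   # illustrative posture numbers
WINDOW = 5                          # frames per cycle

def detect_hands_up(static_results, window=WINDOW):
    """Return True if the last `window` per-frame static posture results
    contain both postures of the hands-raising action."""
    recent = static_results[-window:]   # drop data older than five frames
    return FLAT_RAISE in recent and OVERHEAD_RAISE in recent

# A flat raise followed by an overhead raise within five frames: action recognized.
print(detect_hands_up([0, 0, 1, 1, 1]))
```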
Based on the same inventive concept, the present invention also provides a device corresponding to the action recognition method based on human body posture of embodiment one; see embodiment two for details.
Embodiment two
This embodiment provides an action recognition device based on human body posture. Referring to Fig. 4, the device includes:
a skeleton data acquisition module 401, configured to obtain human skeleton data through the bone tracking technique of the depth sensor, the skeleton data including the three-dimensional coordinates of the human joint points, which are converted to the world coordinate system where the human body is located;
a skeleton data filter module 402, configured to filter the skeleton data using the improved limiting filter algorithm to obtain the filtered skeleton data, wherein the improved limiting filter algorithm specifically includes: first judging whether the degree of jitter of this skeleton data exceeds a threshold; if the degree of jitter of this skeleton data is less than the threshold, updating this skeleton data with the skeleton data in the filter buffer; otherwise continuing to judge whether the degree of jitter of the last skeleton data exceeds the threshold, and if it is less than the threshold, updating the skeleton data in the filter buffer with this skeleton data; if the degree of jitter of the last skeleton data exceeds the threshold, judging whether this skeleton data lies within the filter range, and if so updating this skeleton data with the skeleton data in the filter buffer, otherwise updating the skeleton data in the filter buffer with this skeleton data;
an angle feature extraction module 403, configured to perform feature extraction on the filtered skeleton data according to the converted three-dimensional coordinates and a predetermined angle calculation method, obtaining the angle features composed of the angles of the joint points;
a training module 404, configured to train the pre-obtained training sample set based on the logistic regression algorithm and the angle features to obtain the classifier;
a posture recognition module 405, configured to recognize the movement of the human body with the classifier to obtain the static posture recognition result;
an action recognition module 406, configured to determine, based on the static posture recognition result and using the inverted-order recognition method, whether two preset static postures are recognized within five frames, and if so to recognize one dynamic action as the action recognition result.
In one embodiment, the depth sensor also obtains depth information, and the skeleton data acquisition module 401 is specifically configured to: obtain the actual distance of the depth sensor from the human body according to the depth information; and convert the three-dimensional coordinates of the depth image into actual coordinates under the world coordinate system according to the actual distance and the coordinate-system transformation formula:

x = (x_d − w/2)·(z_d + D)·F
y = (y_d − h/2)·(z_d + D)·F

where (x, y) are the actual coordinates, (x_d, y_d, z_d) are the three-dimensional coordinates of the depth information in the depth image, w×h is the resolution of the depth sensor, and D and F are constants, with D = −10 and F = 0.0021.
In one embodiment, the degree of jitter of the skeleton data is indicated by the jitter radius of the skeleton data.
In one embodiment, the angle feature extraction module 403 is specifically configured to: calculate the distance information between the joint points using the distance calculation formula

d = √((x₁ − x₂)² + (y₁ − y₂)²)

where the joint points include three points A, B and C, the actual coordinate of joint point A being (x₁, y₁), that of joint point B being (x₂, y₂), and that of joint point C being (x₃, y₃); and obtain the angle of the lines between the joint points from the distance information as the angle feature:

θ = arccos((a² + b² − c²) / (2ab))

where a denotes the distance between joint points B and C, b the distance between joint points A and C, c the distance between joint points A and B, and θ the angle between AC and BC.
In one embodiment, the training module 404 is specifically configured to: train the pre-obtained training sample set using the logistic regression algorithm based on the angle features to obtain a classification model, the pre-obtained training sample set being the gesture data of each frame; and verify the effect of the classification model with the data of the test set, adjusting the hyperparameters to obtain the adjusted classifier.
In one embodiment, the classifier includes N vectors of the form θ = [θ₀, θ₁, θ₂, …, θ_{N−1}]ᵀ, together with N preset postures and the corresponding posture numbers, and the posture recognition module 405 is specifically configured to: take the human action to be detected as a sample x^(i) and calculate its probability vector p_{1×j} = g(x^(i)θ), where i denotes the sample number, j the number of static posture classes, and g the kernel function of the logistic regression algorithm; and take the subscript of the largest element of the probability vector as the recognized posture number, the posture corresponding to that number being the static posture recognition result.
The device introduced in embodiment two of the present invention is the device used to implement the action recognition method based on human body posture of embodiment one of the present invention, so on the basis of the method introduced in embodiment one, those skilled in the art can understand the specific structure and variations of the device, which are therefore not described here again. All devices used by the method of embodiment one of the present invention belong to the scope the present invention intends to protect.
Embodiment three
Based on the same inventive concept, the present invention also provides a computer-readable storage medium 500; referring to Fig. 5, a computer program 511 is stored thereon which, when executed, implements the method of embodiment one.
The computer-readable storage medium introduced in embodiment three of the present invention is the computer-readable storage medium used to implement the action recognition method based on human body posture of embodiment one of the present invention, so on the basis of the method introduced in embodiment one, those skilled in the art can understand its specific structure and variations, which are not described here again. All computer-readable storage media used by the method of embodiment one of the present invention belong to the scope the present invention intends to protect.
Example IV
Based on the same inventive concept, the present invention also provides a computer device; referring to Fig. 6, it includes a memory 601, a processor 602 and a computer program 603 stored on the memory and runnable on the processor, the processor implementing the method of embodiment one when executing the program.
The computer device introduced in embodiment four of the present invention is the device used to implement the action recognition method based on human body posture of embodiment one of the present invention, so on the basis of the method introduced in embodiment one, those skilled in the art can understand the specific structure and variations of the computer device, which are not described here again. All computer devices used by the method of embodiment one of the present invention belong to the scope the present invention intends to protect.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present invention. It should be understood that every flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the preferred embodiments of the present invention have been described, persons skilled in the art may make additional changes and modifications to these embodiments once they know the basic creative concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (10)
1. An action recognition method based on human body posture, characterized by comprising:
Step S1: obtaining human skeleton data through the bone tracking technique of a depth sensor, the skeleton data including the three-dimensional coordinates of the human joint points, and converting the three-dimensional coordinates to the world coordinate system where the human body is located;
Step S2: filtering the skeleton data using an improved limiting filter algorithm to obtain filtered skeleton data, wherein the improved limiting filter algorithm specifically includes: first judging whether the degree of jitter of this skeleton data exceeds a threshold; if the degree of jitter of this skeleton data is less than the threshold, updating this skeleton data with the skeleton data in a filter buffer; otherwise continuing to judge whether the degree of jitter of the last skeleton data exceeds the threshold, and if the degree of jitter of the last skeleton data is less than the threshold, updating the skeleton data in the filter buffer with this skeleton data; if the degree of jitter of the last skeleton data exceeds the threshold, judging whether this skeleton data lies within a filter range, and if so updating this skeleton data with the skeleton data in the filter buffer, otherwise updating the skeleton data in the filter buffer with this skeleton data;
Step S3: performing feature extraction on the filtered skeleton data according to the converted three-dimensional coordinates and a predetermined angle calculation method to obtain angle features composed of the angles of the joint points;
Step S4: training a pre-obtained training sample set based on a logistic regression algorithm and the angle features to obtain a classifier;
Step S5: recognizing the movement of the human body with the classifier to obtain a static posture recognition result;
Step S6: based on the static posture recognition result, determining with an inverted-order recognition method whether two preset static postures are recognized within five frames, and if so recognizing one dynamic action as the action recognition result.
2. The method according to claim 1, characterized in that the depth sensor also obtains depth information, and step S1 specifically includes:
Step S1.1: obtaining the actual distance of the Kinect2 from the human body according to the depth information;
Step S1.2: converting the three-dimensional coordinates into actual coordinates under the world coordinate system according to the actual distance and a coordinate-system transformation formula, wherein the coordinate-system transformation formula is:

x = (x_d − w/2)·(z_d + D)·F
y = (y_d − h/2)·(z_d + D)·F

where (x, y) are the actual coordinates, (x_d, y_d, z_d) are the three-dimensional coordinates of the depth information in the depth image, w×h is the resolution of the depth sensor, and D and F are constants, with D = −10 and F = 0.0021.
3. The method according to claim 1, characterized in that, in step S2, the degree of jitter of the skeleton data is indicated by the jitter radius of the skeleton data.
4. The method according to claim 1, characterized in that step S3 specifically includes:
Step S3.1: calculating the distance information between joint points using a distance calculation formula

d = √((x₁ − x₂)² + (y₁ − y₂)²)

where the joint points include three points A, B and C, the actual coordinate of joint point A being (x₁, y₁), that of joint point B being (x₂, y₂), and that of joint point C being (x₃, y₃);
Step S3.2: obtaining the angle of the lines between the joint points from the distance information as the angle feature, specifically

θ = arccos((a² + b² − c²) / (2ab))

where a denotes the distance between joint points B and C, b the distance between joint points A and C, c the distance between joint points A and B, and θ the angle between AC and BC.
5. The method according to claim 1, characterized in that step S4 specifically includes:
Step S4.1: training the pre-obtained training sample set using the logistic regression algorithm based on the angle features to obtain a classification model, the pre-obtained training sample set being the gesture data of each frame;
Step S4.2: verifying the effect of the classification model with the data of a test set and adjusting the hyperparameters to obtain the adjusted classifier.
6. The method according to claim 1, characterized in that the classifier includes N vectors of the form θ = [θ₀, θ₁, θ₂, …, θ_{N−1}]ᵀ, together with N preset postures and the corresponding posture numbers, and step S5 specifically includes:
Step S5.1: taking the human action to be detected as a sample x^(i) and calculating its probability vector p_{1×j} = g(x^(i)θ), where i denotes the sample number, j the number of static posture classes, and g the kernel function of the logistic regression algorithm;
Step S5.2: taking the subscript of the largest element of the probability vector as the recognized posture number, the posture corresponding to the recognized posture number being the static posture recognition result.
7. An action recognition device based on human body posture, characterized by comprising:
a skeleton data acquisition module, configured to obtain human skeleton data through the Kinect2 bone tracking technique, the skeleton data including the three-dimensional coordinates of the human joint points, which are converted to the world coordinate system where the human body is located;
a skeleton data filter module, configured to filter the skeleton data using an improved limiting filter algorithm to obtain filtered skeleton data, wherein the improved limiting filter algorithm specifically includes: first judging whether the degree of jitter of this skeleton data exceeds a threshold; if the degree of jitter of this skeleton data is less than the threshold, updating this skeleton data with the skeleton data in a filter buffer; otherwise continuing to judge whether the degree of jitter of the last skeleton data exceeds the threshold, and if the degree of jitter of the last skeleton data is less than the threshold, updating the skeleton data in the filter buffer with this skeleton data; if the degree of jitter of the last skeleton data exceeds the threshold, judging whether this skeleton data lies within a filter range, and if so updating this skeleton data with the skeleton data in the filter buffer, otherwise updating the skeleton data in the filter buffer with this skeleton data;
an angle feature extraction module, configured to perform feature extraction on the filtered skeleton data according to the converted three-dimensional coordinates and a predetermined angle calculation method, obtaining angle features composed of the angles of the joint points;
a training module, configured to train a pre-obtained training sample set based on a logistic regression algorithm and the angle features to obtain a classifier;
a posture recognition module, configured to recognize the movement of the human body with the classifier to obtain a static posture recognition result;
an action recognition module, configured to determine, based on the static posture recognition result and using an inverted-order recognition method, whether two preset static postures are recognized within five frames, and if so to recognize one dynamic action as the action recognition result.
8. The device according to claim 7, characterized in that the depth sensor also obtains depth information, and the skeleton data acquisition module is specifically configured to:
obtain the actual distance of the Kinect2 from the human body according to the depth information;
convert the three-dimensional coordinates of the depth image into actual coordinates under the world coordinate system according to the actual distance and a coordinate-system transformation formula, wherein the coordinate-system transformation formula is:

x = (x_d − w/2)·(z_d + D)·F
y = (y_d − h/2)·(z_d + D)·F

where (x, y) are the actual coordinates, (x_d, y_d, z_d) are the three-dimensional coordinates of the depth information in the depth image, w×h is the resolution of the Kinect2, and D and F are constants, with D = −10 and F = 0.0021.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed, implements the method according to any one of claims 1 to 6.
10. A computer device including a memory, a processor and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the program, implements the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810988873.6A CN109117893A (en) | 2018-08-28 | 2018-08-28 | A kind of action identification method and device based on human body attitude |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810988873.6A CN109117893A (en) | 2018-08-28 | 2018-08-28 | A kind of action identification method and device based on human body attitude |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109117893A true CN109117893A (en) | 2019-01-01 |
Family
ID=64861058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810988873.6A Pending CN109117893A (en) | 2018-08-28 | 2018-08-28 | A kind of action identification method and device based on human body attitude |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109117893A (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635783A (en) * | 2019-01-02 | 2019-04-16 | 上海数迹智能科技有限公司 | Video monitoring method, device, terminal and medium |
CN110309743A (en) * | 2019-06-21 | 2019-10-08 | 新疆铁道职业技术学院 | Human body attitude judgment method and device based on professional standard movement |
CN110327053A (en) * | 2019-07-12 | 2019-10-15 | 广东工业大学 | A kind of human body behavior safety monitoring method, equipment and system based on lift space |
CN111067597A (en) * | 2019-12-10 | 2020-04-28 | 山东大学 | System and method for determining puncture path according to human body posture in tumor puncture |
CN111142663A (en) * | 2019-12-27 | 2020-05-12 | 恒信东方文化股份有限公司 | Gesture recognition method and gesture recognition system |
CN111341040A (en) * | 2020-03-28 | 2020-06-26 | 江西财经职业学院 | Financial self-service equipment and management system thereof |
CN111840920A (en) * | 2020-07-06 | 2020-10-30 | 暨南大学 | Upper limb intelligent rehabilitation system based on virtual reality |
CN111860243A (en) * | 2020-07-07 | 2020-10-30 | 华中师范大学 | Robot action sequence generation method |
CN112233769A (en) * | 2020-10-12 | 2021-01-15 | 安徽动感智能科技有限公司 | Recovery system after suffering from illness based on data acquisition |
CN112434741A (en) * | 2020-11-25 | 2021-03-02 | 杭州盛世传奇标识***有限公司 | Method, system, device and storage medium for using interactive introduction identifier |
CN112711332A (en) * | 2020-12-29 | 2021-04-27 | 上海交通大学宁波人工智能研究院 | Human body motion capture method based on attitude coordinates |
CN112801061A (en) * | 2021-04-07 | 2021-05-14 | 南京百伦斯智能科技有限公司 | Posture recognition method and system |
CN113627369A (en) * | 2021-08-16 | 2021-11-09 | 南通大学 | Action recognition and tracking method in auction scene |
CN114677625A (en) * | 2022-03-18 | 2022-06-28 | 北京百度网讯科技有限公司 | Object detection method, device, apparatus, storage medium and program product |
CN116719417A (en) * | 2023-08-07 | 2023-09-08 | 海马云(天津)信息技术有限公司 | Motion constraint method and device for virtual digital person, electronic equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056035A (en) * | 2016-04-06 | 2016-10-26 | 南京华捷艾米软件科技有限公司 | Motion-sensing technology based kindergarten intelligent monitoring method |
US20170161563A1 (en) * | 2008-09-18 | 2017-06-08 | Grandeye, Ltd. | Unusual Event Detection in Wide-Angle Video (Based on Moving Object Trajectories) |
CN107180235A (en) * | 2017-06-01 | 2017-09-19 | 陕西科技大学 | Human action recognizer based on Kinect |
CN107832713A (en) * | 2017-11-13 | 2018-03-23 | 南京邮电大学 | A kind of human posture recognition method based on OptiTrack |
CN107943276A (en) * | 2017-10-09 | 2018-04-20 | 广东工业大学 | Based on the human body behavioral value of big data platform and early warning |
- 2018-08-28: Application CN201810988873.6A filed; published as CN109117893A (status: pending)
Non-Patent Citations (1)
Title |
---|
Zhu Yuhui (朱宇辉), "China Excellent Master's Theses Full-text Database, Information Science and Technology Series", 15 March 2017 *
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635783A (en) * | 2019-01-02 | 2019-04-16 | 上海数迹智能科技有限公司 | Video monitoring method, device, terminal and medium |
CN110309743A (en) * | 2019-06-21 | 2019-10-08 | 新疆铁道职业技术学院 | Human body attitude judgment method and device based on professional standard movement |
CN110327053A (en) * | 2019-07-12 | 2019-10-15 | 广东工业大学 | A kind of human body behavior safety monitoring method, equipment and system based on lift space |
CN111067597B (en) * | 2019-12-10 | 2021-04-16 | 山东大学 | System for determining puncture path according to human body posture in tumor puncture |
CN111067597A (en) * | 2019-12-10 | 2020-04-28 | 山东大学 | System and method for determining puncture path according to human body posture in tumor puncture |
CN111142663A (en) * | 2019-12-27 | 2020-05-12 | 恒信东方文化股份有限公司 | Gesture recognition method and gesture recognition system |
CN111142663B (en) * | 2019-12-27 | 2024-02-02 | 恒信东方文化股份有限公司 | Gesture recognition method and gesture recognition system |
CN111341040A (en) * | 2020-03-28 | 2020-06-26 | 江西财经职业学院 | Financial self-service equipment and management system thereof |
CN111840920A (en) * | 2020-07-06 | 2020-10-30 | 暨南大学 | Upper limb intelligent rehabilitation system based on virtual reality |
CN111860243A (en) * | 2020-07-07 | 2020-10-30 | 华中师范大学 | Robot action sequence generation method |
CN112233769A (en) * | 2020-10-12 | 2021-01-15 | 安徽动感智能科技有限公司 | Recovery system after suffering from illness based on data acquisition |
CN112434741A (en) * | 2020-11-25 | 2021-03-02 | 杭州盛世传奇标识***有限公司 | Method, system, device and storage medium for using interactive introduction identifier |
CN112711332A (en) * | 2020-12-29 | 2021-04-27 | 上海交通大学宁波人工智能研究院 | Human body motion capture method based on attitude coordinates |
CN112711332B (en) * | 2020-12-29 | 2022-07-15 | 上海交通大学宁波人工智能研究院 | Human body motion capture method based on attitude coordinates |
CN112801061A (en) * | 2021-04-07 | 2021-05-14 | 南京百伦斯智能科技有限公司 | Posture recognition method and system |
CN113627369A (en) * | 2021-08-16 | 2021-11-09 | 南通大学 | Action recognition and tracking method in auction scene |
CN114677625A (en) * | 2022-03-18 | 2022-06-28 | 北京百度网讯科技有限公司 | Object detection method, device, apparatus, storage medium and program product |
CN114677625B (en) * | 2022-03-18 | 2023-09-08 | 北京百度网讯科技有限公司 | Object detection method, device, apparatus, storage medium, and program product |
CN116719417A (en) * | 2023-08-07 | 2023-09-08 | 海马云(天津)信息技术有限公司 | Motion constraint method and device for virtual digital person, electronic equipment and storage medium |
CN116719417B (en) * | 2023-08-07 | 2024-01-26 | 海马云(天津)信息技术有限公司 | Motion constraint method and device for virtual digital person, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117893A (en) | A kind of action identification method and device based on human body attitude | |
JP6082101B2 (en) | Body motion scoring device, dance scoring device, karaoke device, and game device | |
JP7071054B2 (en) | Information processing equipment, information processing methods and programs | |
Chaudhari et al. | Yog-guru: Real-time yoga pose correction system using deep learning methods | |
CN111488824A (en) | Motion prompting method and device, electronic equipment and storage medium | |
KR102377561B1 (en) | Apparatus and method for providing taekwondo movement coaching service using mirror dispaly | |
JP2018026131A (en) | Motion analyzer | |
CN107624061A (en) | Machine vision with dimension data reduction | |
WO2017161734A1 (en) | Correction of human body movements via television and motion-sensing accessory and system | |
CN113449570A (en) | Image processing method and device | |
KR20170084643A (en) | Motion analysis appratus and method using dual smart band | |
CN107092882A (en) | A kind of activity recognition system based on sub-action perception and its working method |
Santhalingam et al. | Synthetic smartwatch imu data generation from in-the-wild asl videos | |
Ong et al. | Investigation of feature extraction for unsupervised learning in human activity detection | |
CN109858402B (en) | Image detection method, device, terminal and storage medium | |
CN110910426A (en) | Action process and action trend identification method, storage medium and electronic device | |
CN108051001A (en) | A kind of robot movement control method, system and inertia sensing control device | |
JP2003256850A (en) | Movement recognizing device and image processor and its program | |
CN111353345B (en) | Method, apparatus, system, electronic device, and storage medium for providing training feedback | |
CN116740618A (en) | Motion video action evaluation method, system, computer equipment and medium | |
JP7023210B2 (en) | Multidimensional data visualization equipment, methods and programs | |
JP2021135995A (en) | Avatar facial expression generating system and avatar facial expression generating method | |
CN110490165A (en) | A kind of dynamic hand tracking method based on convolutional neural networks | |
CN108573216A (en) | A kind of limb posture judgment method and device |
Liu et al. | Gesture recognition based on Kinect |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190101 |