CN101894278A - Human motion tracking method based on variable structure multi-model - Google Patents

Human motion tracking method based on variable structure multi-model

Info

Publication number
CN101894278A (application CN 201010230975A)
Legal status
Granted
Other versions
CN101894278B (Chinese)
Inventor
韩红
焦李成
陈志超
范友健
李阳阳
吴建设
王爽
尚荣华
Current Assignee
Xidian University
Application filed by Xidian University
Priority to CN2010102309755A, granted as CN101894278B
Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a human motion tracking method based on a variable structure multiple model (VSMM) framework, in the field of computer vision. It addresses three shortcomings of existing methods: they cannot resolve the ambiguity of human motion, their time complexity is high, and simply adding motion models does not yield good three-dimensional pose estimates. The method comprises the following steps: (1) a human motion video image is input, and the human silhouette, its outer contour, and its skeleton line are obtained; (2) the joint positions are detected; (3) motion models are trained by ridge regression, and the total motion model set is partitioned into groups; (4) a model group M_1 is initialized; (5) the interacting multiple model (IMM) algorithm is run to obtain the human pose estimate; (6) any motion model group satisfying the activation condition is activated and initialized; if no group satisfies the activation condition, step (5) is executed; (7) any model group satisfying the termination condition is terminated and step (5) is executed; otherwise this step continues. The method has low time complexity and good tracking performance, and can be applied to human motion tracking and pose estimation.

Description

Human motion tracking method based on a variable structure multiple model
Technical field
The invention belongs to the technical field of computer vision and relates to a human motion tracking method, which can be used for human motion tracking and pose estimation.
Background technology
Human motion tracking is an important branch of computer vision. Because it has potential applications in medical treatment, motion capture, animation, intelligent surveillance, and many other areas, it has attracted the attention of many researchers. Although large amounts of unmarked monocular video can be obtained in practice, such data are only the 2D projection of a three-dimensional scene onto an image or image sequence: depth information is missing, and self-occlusion, foreground detection noise, and similar effects further complicate matters, so recovering human motion poses from such video sequences is difficult.
There are two important families of methods for estimating and tracking complex articulated 3D structures from monocular images: model-based methods and learning-based methods. As Ankur Agarwal points out, model-based methods generally require an explicit parameterized body model in advance; the pose is then recovered according to kinematic principles, or a model-to-image likelihood measure is designed and an optimization method searches for the optimal pose according to the likelihood between the projection of the predicted state and the image features. However, recovering human poses by optimization has very high time complexity, requires good initialization, and must cope with local minima during the search; as errors accumulate, optimization cannot guarantee a correct pose estimate. Considering that a set of typical human poses is far smaller than the set of kinematically possible poses, learning-based methods train a model that recovers the pose directly from image observations, avoiding the 3D modeling problem. They use regression or dimensionality reduction to learn the mapping between motion capture data and image features or a manifold, and recover the 3D pose from image features or other inputs, with good results.
In earlier work, Deutscher et al. combined edge and silhouette features into a weighting function and used an annealed particle filter framework for human motion tracking. Mikic et al. obtained a body model automatically from multiple synchronized video streams and estimated the human motion parameters with an extended Kalman filter framework from measurement information on labeled voxel data. Urtasun et al. used a balanced Gaussian process dynamical model, learned from a small set of training motion data containing several motion modes, to guide 3D human motion tracking in monocular video. Xinyu Xu et al. used a small amount of training data from the HumanEva database, trained the relation between left-side and right-side body motion for specific human motions by partial least squares regression, and finally tracked with a Rao-Blackwellised particle filter (RBPF) framework. Sigal et al. proposed a Bayesian framework, including sequential importance sampling and an annealed particle filter, using multiple motion models and likelihood functions during tracking; to make the 3D reconstruction better satisfy anatomical joint constraints and to reduce the search space, motion models were learned from training data, with the Euclidean distance between virtual markers as the error measure. Ni et al. proposed a stochastic tracking framework combining interacting multiple models and a Kalman particle filter: the 3D body reconstructed from the visual hull is used as input, simulated physical forces/torques reduce the required number of particles, and an interacting multiple model algorithm containing several kinematic models yields good three-dimensional tracking. Farmer et al. applied an interacting multiple model Kalman filtering framework in a low-cost real-time surveillance system, accurately tracking not only the motion but also the shape of the human body.
Model-based tracking uses optimization to search for the optimal result: its time complexity is high, it cannot fundamentally resolve the ambiguity of human motion, and when occlusion occurs there is no good guidance, so accurate recovery of the motion cannot be guaranteed. Learning-based methods increase tracking accuracy and stability by using trained motion models, but a single motion model can only fit one motion pattern; moreover, applying good descriptors to improve tracking also costs a great deal of time. Previous work has applied the interacting multiple model algorithm (IMM) to human motion tracking, obtaining good results on specific motion patterns with a well-chosen motion model set. In practical applications, however, a small motion model set cannot cover the complexity and variability of human motion patterns: for example, when the current motion pattern switches from walking to jumping and the motion model set does not contain a matching model, tracking cannot be guaranteed. Simply increasing the number of motion models is infeasible: it not only increases the computational complexity but also degrades tracking because of unnecessary competition among the motion models.
Summary of the invention
The object of the invention is to overcome the shortcomings of existing methods by proposing a human motion tracking method based on a variable structure multiple model (VSMM), which reduces the ambiguity of human pose recovery, improves tracking accuracy, and reduces the per-frame tracking time.
The technical scheme that achieves this object is: on the basis of detected human joint positions, motion models trained from motion capture data are combined with the VSMM algorithmic framework to solve the human motion tracking problem.
1. The human motion tracking method based on VSMM according to the invention comprises:
Preprocessing step: a human motion video image is input, the human silhouette is obtained by background subtraction, the outer contour of the silhouette is extracted, and the silhouette is thinned;
Joint detection step: the following joint detection is performed on the preprocessed video image:
1) a concentric-circle template is searched along the skeleton line, and the circle center into which the most silhouette points fall is taken as the head node;
2) the centroid of the silhouette is chosen as the root node;
3) the 3D human skeleton model is projected onto the image to obtain the other joint positions on the torso;
4) the endpoints of the skeleton line are chosen as the hand and foot nodes;
5) a line parallel to the line connecting the two feet is drawn through the centroid of the lower-body silhouette; its two intersections with the skeleton line are taken as the knee joints;
6) the points on the skeleton line equidistant from hand and shoulder are taken as the elbow joints;
7) joints that cannot be detected because of occlusion or segmentation noise are obtained by one-step prediction with a Kalman filter;
Motion model training step: capture data of several motion patterns are chosen from the CMU motion capture database, the state-transition matrix F_i of each motion model equation is trained by ridge regression, and the covariance of the model noise w_k is computed; the resulting set of motion models is called the total motion model set M;
Motion model set grouping step: in the total model set M, if the motion patterns matched by two motion models are similar, the two models are assigned to the same model group; otherwise they are assigned to different groups; each model group contains 3 motion models;
Model group initialization step: the motion model equations in M are used as the state equations of the interacting multiple model (IMM) filter, the IMM is run for ten cycles, the probability of each model group is computed, and the group with the largest probability is taken as the initial current model group M_1;
IMM mixing estimation step: with the joint positions at time k as input, the IMM algorithm is executed to obtain the human pose estimate, and the model probabilities and the pose estimation error covariance are updated;
Model group activation step: from the joint positions, the change of the projection angles of the limb skeleton lines on the image is computed; if the change satisfies the group activation rule, the current time is recorded as k_0 and the following initialization step for the newly activated model group is executed; otherwise the pose estimate is output and the IMM mixing estimation step above is executed;
Initialization step for the newly activated model group: the probability of each newly activated model is initialized to the maximum model probability in the current group, and the model probabilities are normalized; the prediction error covariance is initialized to the noise covariance of the model itself; the state in the motion capture data that best matches the current pattern is chosen as the initial state; the original group M_o and the newly activated candidate group M_n are merged into the new current model group;
Model group termination step: the IMM is re-executed for one cycle with the new current model group; if either of the model-group likelihood ratios Λ_1 and Λ_2 of the newly activated group M_n to the original group M_o is less than 0.9, the group M_n is terminated, the pose estimate is output, and the IMM mixing estimation step above is re-executed; if Λ_1 and Λ_2 are both greater than 1, the group M_o is terminated, the pose estimate is output, and the IMM mixing estimation step above is re-executed; otherwise the pose estimate is output and this step continues.
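The termination rule above can be sketched as a small decision function; the threshold values 0.9 and 1 follow the text, while the function name and the return labels are illustrative assumptions:

```python
def termination_decision(lr1, lr2, low=0.9, high=1.0):
    """Sketch of the model-group termination rule: given the two
    model-group likelihood ratios of the newly activated group M_n to
    the original group M_o, terminate M_n if either ratio falls below
    0.9, terminate M_o if both exceed 1, otherwise keep both groups.
    The function name and return labels are illustrative, not from
    the patent."""
    if lr1 < low or lr2 < low:
        return "terminate_new"       # drop the newly activated group M_n
    if lr1 > high and lr2 > high:
        return "terminate_original"  # drop the original group M_o
    return "keep_both"
```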
2. A human motion tracking system based on VSMM comprises:
Preprocessing unit: used to input the human motion video image, obtain the human silhouette by background subtraction, extract the outer contour of the silhouette, and thin the silhouette;
Joint detection unit: used to perform the following joint detection on the preprocessed video image:
1) a concentric-circle template is searched along the skeleton line, and the circle center into which the most silhouette points fall is taken as the head node;
2) the centroid of the silhouette is chosen as the root node;
3) the 3D human skeleton model is projected onto the image to obtain the other joint positions on the torso;
4) the endpoints of the skeleton line are chosen as the hand and foot nodes;
5) a line parallel to the line connecting the two feet is drawn through the centroid of the lower-body silhouette; its two intersections with the skeleton line are taken as the knee joints;
6) the points on the skeleton line equidistant from hand and shoulder are taken as the elbow joints;
7) joints that cannot be detected because of occlusion or segmentation noise are obtained by one-step prediction with a Kalman filter;
Motion model training unit: used to choose capture data of several motion patterns from the CMU motion capture database, train the state-transition matrix F_i of each motion model equation by ridge regression, and compute the covariance of the model noise w_k; the resulting set of motion models is called the total motion model set M;
Motion model set grouping unit: used, within the total model set M, to assign two motion models to the same model group if the motion patterns they match are similar, and to different groups otherwise; each model group contains 3 motion models;
Model group initialization unit: used to take the motion model equations in M as the state equations of the interacting multiple model (IMM) filter, run the IMM for ten cycles, compute the probability of each model group, and select the group with the largest probability as the initial current model group M_1;
IMM mixing estimation unit: used to execute the IMM algorithm with the joint positions at time k as input, obtain the human pose estimate, and update the model probabilities and the pose estimation error covariance;
Model group activation unit: used to compute, from the joint positions, the change of the projection angles of the limb skeleton lines on the image; if the change satisfies the group activation rule, the current time is recorded as k_0 and the following initialization unit for the newly activated model group is executed; otherwise the pose estimate is output and the IMM mixing estimation unit above is executed;
Initialization unit for the newly activated model group: used to initialize the probability of each newly activated model to the maximum model probability in the current group and normalize the model probabilities; the prediction error covariance is initialized to the noise covariance of the model itself; the state in the motion capture data that best matches the current pattern is chosen as the initial state; the original group M_o and the newly activated candidate group M_n are merged into the new current model group;
Model group termination unit: used to re-execute the IMM for one cycle with the new current model group; if either of the model-group likelihood ratios Λ_1 and Λ_2 of the newly activated group M_n to the original group M_o is less than 0.9, the group M_n is terminated, the pose estimate is output, and control returns to the IMM mixing estimation unit above; if Λ_1 and Λ_2 are both greater than 1, the group M_o is terminated, the pose estimate is output, and control returns to the IMM mixing estimation unit above; otherwise the pose estimate is output and this unit continues to execute.
Compared with the prior art, the present invention has the following advantages:
1. The motion models are trained directly on motion capture data, rather than by learning the relation between the image features of capture video and the capture data; this eliminates the influence of image noise and improves the accuracy and stability of the motion models;
2. During execution only the motion models matching the current motion pattern are active, rather than all models of the total motion model set; reducing the number of irrelevant motion models not only shortens the running time but also alleviates the harmful competition among unrelated motion models and improves the tracking accuracy;
3. Human joint positions are used as input; the algorithm is simple and its time complexity is low.
Description of drawings
Fig. 1 is a block diagram of the VSMM-based human motion tracking system of the present invention;
Fig. 2 is the general flowchart of the VSMM-based human motion tracking method of the present invention;
Fig. 3 is the sub-flowchart of the human motion image preprocessing of the present invention;
Fig. 4 is the sub-flowchart of the human joint detection of the present invention;
Fig. 5 shows the joint detection results of the simulation experiment of the present invention;
Fig. 6 shows the 3D human skeleton model used in the experiments of the present invention;
Fig. 7 shows the topology of the total motion model set of the simulation experiment of the present invention;
Fig. 8 shows an example of the change of the limb-segment projection angles of the present invention;
Fig. 9 shows the detection results, frontal projection results, and three-dimensional results of the simulation experiment of the present invention;
Fig. 10 shows the model probability results of the simulation experiment of the present invention;
Fig. 11 shows the 3D projection results for the right elbow and right hand of the simulation experiment of the present invention, and the error relative to the detected joints.
Embodiment
With reference to Fig. 1, the VSMM-based human motion tracking system of the present invention comprises: a preprocessing unit, a joint detection unit, a motion model training unit, a motion model set grouping unit, a model group initialization unit, an IMM mixing estimation unit, a model group activation unit, an initialization unit for newly activated model groups, and a model group termination unit. The preprocessing unit obtains the human motion image, subtracts the background image from it to obtain the background-difference image, processes the difference image with morphological operations to obtain a clear human silhouette, extracts the outer contour of the silhouette with a border-following algorithm, and thins the silhouette to obtain its skeleton line. The joint detection unit detects the joints of each limb segment empirically from the preprocessed image. The motion model training unit extracts training pairs from the motion capture data and trains the state matrices of the motion model equations by ridge regression, producing the total motion model set. The grouping unit partitions the total model set into model groups according to the transition probabilities and topology among the motion models. The model group initialization unit runs the IMM algorithm to obtain the initial model group of the VSMM algorithm. The IMM mixing estimation unit takes the joint positions as input and the current group's motion model equations as state equations, produces the human pose estimate, and updates the model probabilities and the pose estimation error covariance. The model group activation unit activates the corresponding candidate group when the change of the limb projection angles satisfies the group activation rule. The initialization unit initializes the candidate group obtained by the activation unit. The termination unit terminates the newly activated model group if either of its model-group likelihood ratios relative to the former current group is less than 0.9, and terminates the former current group if both likelihood ratios are greater than 1.
With reference to Fig. 2, the VSMM-based human motion tracking method of the present invention is implemented as follows:
Step 1: preprocess the input image to obtain the human silhouette, its outer contour, and its skeleton line.
With reference to Fig. 3, this step is implemented as follows:
1.1) before the person enters the camera's field of view, the empty background is recorded for 3-5 seconds; for each pixel position the arithmetic mean over the recorded frames is computed, and the resulting mean image is taken as the background image;
1.2) the human motion image is obtained and subtracted pixel-wise from the background image to obtain the background-difference image;
1.3) segmentation noise in the background-difference image is removed with morphological operations to obtain a clear human silhouette;
1.4) the outer contour of the silhouette is extracted with a border-following algorithm, and the silhouette is thinned to obtain its skeleton line.
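Steps 1.1) and 1.2) can be sketched in NumPy as follows; the frame shapes and the difference threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def build_background(frames):
    """Step 1.1): background image = per-pixel arithmetic mean of the
    empty-scene frames recorded before the person enters the view."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def silhouette_mask(frame, background, thresh=30.0):
    """Step 1.2): pixel-wise background difference, thresholded to a
    binary silhouette mask. `thresh` is an assumed value; the patent
    does not specify one."""
    diff = np.abs(frame.astype(float) - background)
    return diff > thresh
```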
Step 2: perform joint detection on the preprocessed human video image.
With reference to Fig. 4, this step is implemented as follows:
2.1) a concentric-circle template is searched along the skeleton line, and the circle center into which the most silhouette points fall is taken as the head node;
2.2) the centroid of the silhouette is chosen as the root node: the arithmetic mean of the x coordinates of the silhouette points is its x coordinate, and the arithmetic mean of the y coordinates is its y coordinate;
2.3) with the root node as reference, the 3D human skeleton model is projected onto the video image to obtain the torso center point, the clavicle joint, the left and right shoulder points, and the left and right hip joints;
2.4) the endpoints of the skeleton line are detected, and the hand and foot nodes are determined by the nearest-neighbor rule;
2.5) a line parallel to the line connecting the two feet is drawn through the centroid of the lower-body silhouette; its two intersections with the skeleton line are taken as the knee joints;
2.6) the points on the skeleton line equidistant from hand and shoulder are taken as the elbow joints;
2.7) joints that cannot be detected because of occlusion or segmentation noise are obtained by one-step prediction with a Kalman filter.
The joint detection result of this step is shown in Fig. 5.
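Steps 2.1) and 2.2) can be sketched as follows; the annulus radii of the concentric-circle template are illustrative assumptions:

```python
import numpy as np

def root_node(silhouette):
    """Step 2.2): root node = centroid of the silhouette pixels
    (arithmetic mean of the x coordinates, arithmetic mean of the y
    coordinates)."""
    ys, xs = np.nonzero(silhouette)
    return xs.mean(), ys.mean()

def head_ring_score(silhouette, cx, cy, r_in=3.0, r_out=5.0):
    """Step 2.1) scoring: count the silhouette points that fall into
    the annulus of the concentric-circle template centred at (cx, cy);
    the candidate centre along the skeleton line with the highest score
    is the head node. The radii are assumed values, not from the
    patent."""
    ys, xs = np.nonzero(silhouette)
    d = np.hypot(xs - cx, ys - cy)
    return int(np.sum((d >= r_in) & (d <= r_out)))
```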
Step 3: train the motion models to obtain the total motion model set M.
Several motion models may be used in the experiments, such as a stiff walking model m_1, a walking model m_2, an arms-outstretched balance walking model m_3, a jumping-jack model m_4, a hopping model m_5, and a squatting model m_6. The present invention adopts the walking model m_2 as an example, but is not limited to this motion model. Its training steps are as follows:
3.1) capture data of the walking pattern are chosen from the CMU motion capture database, the required joint angles are extracted and converted to quaternion representation, and the training pairs (x_k^2, x_{k+1}^2) are formed;
3.2) let x_{k+1}^2 = F_2 x_k^2 + w_k^2 denote the motion model equation of the walking model m_2, where x_k^2 is the human motion parameter vector of the walking model, F_2 is the state-transition matrix of the motion model equation, and w_k^2 is the model noise;
3.3) F_2 is computed according to:

F_2 = \arg\min_{F_2} \left\{ \sum_{k=1}^{114} \| F_2 x_k^2 - x_{k+1}^2 \|^2 + R(F_2) \right\}    (1)

where R(F_2) = \lambda \|F_2\|^2 and \lambda is the regularization factor; \lambda = 0.15 in the experiments of the present invention;
All motion models are trained by the above steps, finally yielding the total motion model set M = {m_1, m_2, m_3, m_4, m_5, m_6}.
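Equation 1) is a standard ridge regression with a closed-form solution; a minimal sketch, assuming the training states are stacked as columns of matrices X and Y (the λ value follows the text, the stacking convention is an assumption):

```python
import numpy as np

def train_transition_matrix(X, Y, lam=0.15):
    """Ridge-regression fit of the state-transition matrix F that
    minimizes sum_k ||F x_k - x_{k+1}||^2 + lam * ||F||^2, as in
    equation 1).  X: (d, K) matrix whose columns are states x_k;
    Y: (d, K) matrix whose columns are the successors x_{k+1}.
    Closed form: F = Y X^T (X X^T + lam * I)^{-1}."""
    d = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
```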
Step 4: group the total motion model set.
In the total motion model set M, the connectivity and the switching possibility between every two motion models are analyzed according to their transition probabilities and topology. If two motion models are connected and the model probability can switch between them, the motion patterns matched by the two models are considered similar, and the two models are assigned to the same model group; otherwise they are assigned to different model groups. Each motion model group contains 3 motion models. In the simulation experiment, the stiff walking model m_1, the walking model m_2, and the arms-outstretched balance walking model m_3 all match the walking pattern, so these three models are assigned to the same group. The topology among the motion models is shown in Fig. 7, the transition probabilities among the motion models are shown in Table 1, and the grouping result of the total motion model set is shown in Table 2.
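The grouping rule above can be sketched as finding the connected components of the model-transition graph, keeping an edge only where switching is possible in both directions; the probability threshold `eps` is an illustrative assumption:

```python
import numpy as np

def group_models(pi, eps=1e-6):
    """Partition models into groups: models i and j share a group when
    the transition probabilities allow switching both ways (pi[i, j]
    and pi[j, i] above eps); groups are the connected components of
    that graph. `eps` is an assumed threshold, not from the patent."""
    n = pi.shape[0]
    connected = (pi > eps) & (pi.T > eps)
    groups, seen = [], set()
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:                     # depth-first search
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in range(n)
                         if connected[v, u] and u not in comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups
```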
Step 5: initialize the current model group M_1.
The motion model equations in M are used as the state equations of the IMM filter, the IMM is run for ten cycles, the probability of each model group is computed, and the group with the largest probability is taken as the initial current model group M_1.
Step 6: compute the human pose estimate with the interacting multiple model algorithm.
First the state equations and the measurement equations of the models in the IMM algorithm are designed; then, with the joint positions at time k as input, the pose estimate is obtained through four stages: model-conditioned initialization, model-conditioned filtering, model probability update, and state estimate fusion. The concrete steps are as follows:
6.1) An interacting multiple model algorithm containing 3 models is selected; the state equation and measurement equation of model i are:

x_{k+1}^i = F_i x_k^i + w_k^i,  i = 1, 2, 3    (2)

z_k = H(x_k^i) + v_k    (3)

where x_k^i is the state vector of model i, F_i is the state-transition matrix, identical to the state-transition matrix of the motion model trained in step 3, and w_k^i is the state noise. The state consists of T_0, Q_0, ..., Q_9, where Q_1, ..., Q_9 are the rotation angles of the human joints represented as quaternions; the corresponding joints are shown in Fig. 6. T_0 is the overall displacement of the body in the global coordinate system; Q_0 is the rotation angle of the global coordinate system; Q_1 is the rotation angle of the left hip joint, Q_2 of the left knee joint, Q_3 of the right hip joint, Q_4 of the right knee joint, Q_5 of the left shoulder joint, Q_6 of the left elbow joint, Q_7 of the right shoulder joint, Q_8 of the right elbow joint, and Q_9 of the point P. z_k is the image position of the joints at time k, 34 dimensions in total; H(x_k) is the measurement transition mapping, and v_k is the measurement noise;
6.2) Model-conditioned initialization
Since the filter of any model may become the filter of the currently effective system model, the initial condition of each model filter is a weighted sum of the filtering results of all models at the previous time, the weights being the corresponding model probabilities. The mixing probabilities and the mixed estimates are computed as follows:
6.2a) Compute the mixing probabilities
Let m_{k-1}^i denote the matching model at time k-1 and m_k^j the matching model at time k. The mixing probability conditioned on the information Z^{k-1} up to time k-1 is:

u_{k-1|k-1}(i,j) = P(m_{k-1}^i | m_k^j, Z^{k-1}) = \frac{1}{\bar{c}_j} \pi_{ij} u_{k-1}^i    (4)

where \bar{c}_j = \sum_i \pi_{ij} u_{k-1}^i is the normalization constant, u_{k-1}^i is the probability of the matching model m_{k-1}^i at time k-1, \pi_{ij} is the transition probability from m_{k-1}^i to m_k^j, and Z^{k-1} = {z_1, z_2, ..., z_{k-1}};
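Equation 4) can be sketched as follows, with `pi` the model transition matrix and `mu_prev` the model probabilities at time k-1:

```python
import numpy as np

def mixing_probabilities(pi, mu_prev):
    """Equation 4): u_{k-1|k-1}(i, j) = pi[i, j] * mu_prev[i] / c_j,
    with c_j = sum_i pi[i, j] * mu_prev[i] the normalization constant.
    pi: (n, n) model transition matrix; mu_prev: (n,) model
    probabilities at time k-1. Returns the (n, n) mixing matrix, each
    column summing to 1."""
    num = pi * mu_prev[:, None]     # pi[i, j] * mu_prev[i]
    c = num.sum(axis=0)             # normalization constants c_j
    return num / c
```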
6.2b) Compute the mixed estimates
For matching model m_k^j at time k, the re-initialized state \hat{\hat{x}}_{k-1|k-1}^j and error covariance matrix \hat{P}_{k-1|k-1}^j are the mixed estimates:

\hat{\hat{x}}_{k-1|k-1}^j = E(x_{k-1} \mid m_k^j, Z^{k-1}) = \sum_{i=1}^{3} \hat{x}_{k-1|k-1}^i u_{k-1|k-1}(i,j)    (5)

\hat{P}_{k-1|k-1}^j = \sum_{i=1}^{3} [P_{k-1|k-1}^i + (\hat{x}_{k-1|k-1}^i - \hat{\hat{x}}_{k-1|k-1}^j)(\hat{x}_{k-1|k-1}^i - \hat{\hat{x}}_{k-1|k-1}^j)^T] u_{k-1|k-1}(i,j)    (6)

where \hat{x}_{k-1|k-1}^i denotes matching model m_{k-1}^i's estimate of the human motion pose and u_{k-1|k-1}(i,j) denotes the mixing probability;
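As an illustration, the mixing step of Eqs. (4)-(6) for a three-model group can be sketched in NumPy. This is a minimal sketch, not the patent's implementation; the function name, argument layout, and array shapes are our own assumptions.

```python
import numpy as np

# IMM mixing step, Eqs. (4)-(6), for a group of r models.
# pi[i, j]  : transition probability from model i to model j
# u_prev[i] : model probability u_{k-1}^i at time k-1
# x_prev    : (r, n) per-model state estimates at k-1
# P_prev    : (r, n, n) per-model error covariances at k-1
def imm_mix(pi, u_prev, x_prev, P_prev):
    r, n = x_prev.shape
    c_bar = pi.T @ u_prev                      # normalization constants c̄_j
    mu = (pi * u_prev[:, None]) / c_bar        # mixing probabilities u(i,j)
    x_mix = np.zeros((r, n))
    P_mix = np.zeros((r, n, n))
    for j in range(r):
        x_mix[j] = mu[:, j] @ x_prev           # Eq. (5): mixed state
        for i in range(r):
            d = (x_prev[i] - x_mix[j])[:, None]
            P_mix[j] += mu[i, j] * (P_prev[i] + d @ d.T)   # Eq. (6)
    return x_mix, P_mix, c_bar
```

Each column of `mu` sums to one, so every model's re-initialized state is a proper convex combination of the previous filter outputs.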
6.3) Compute, for each matching model m_k^j, the predicted pose estimate \hat{x}_{k|k-1}^j and error covariance P_{k|k-1}^j, the residual \tilde{z}_k^j and its covariance S_k^j, the likelihood \Lambda_k^j that measurement z_k matches model m_k^j, the filter gain K_k^j, and the updated pose estimate \hat{x}_{k|k}^j with its error covariance matrix P_{k|k}^j:
6.3a) Substitute the re-initialized state and covariance, i.e. the mixed estimates \hat{\hat{x}}_{k-1|k-1}^j and \hat{P}_{k-1|k-1}^j, into the filter of matching model m_k^j to obtain the predicted state \hat{x}_{k|k-1}^j and error covariance P_{k|k-1}^j:

\hat{x}_{k|k-1}^j = F^j \hat{\hat{x}}_{k-1|k-1}^j    (7)

P_{k|k-1}^j = F^j \hat{P}_{k-1|k-1}^j (F^j)^T + Q_{k-1}^j    (8)

where Q_{k-1}^j denotes the noise covariance of matching model m_k^j.
6.3b) Substitute the predicted state \hat{x}_{k|k-1}^j into the measurement function H(·) to compute the measurement residual \tilde{z}_k^j and its covariance matrix S_k^j:

\tilde{z}_k^j = z_k - H(\hat{x}_{k|k-1}^j)    (9)

S_k^j = h_k P_{k|k-1}^j (h_k)^T + R_k^j    (10)

where z_k denotes the measurement at time k, R_k^j the measurement noise covariance of matching model m_k^j, and h_k the Jacobian matrix of the measurement function H.
6.3c) Under the Gaussian assumption, substitute the residual \tilde{z}_k^j and covariance matrix S_k^j into the following formula to compute the likelihood \Lambda_k^j that measurement z_k matches model m_k^j:

\Lambda_k^j = p(z_k \mid m_k^j, Z^{k-1})
           \approx p[z_k \mid m_k^j, \hat{\hat{x}}_{k-1|k-1}^j, S_k^j(\hat{P}_{k-1|k-1}^j)]    (11)
           \approx |2\pi S_k^j|^{-1/2} \exp\{ -\frac{1}{2} (\tilde{z}_k^j)^T (S_k^j)^{-1} \tilde{z}_k^j \}

where \tilde{z}_k^j denotes the measurement prediction residual of matching model m_k^j.
6.3d) Substitute the predicted state \hat{x}_{k|k-1}^j, error covariance P_{k|k-1}^j, residual \tilde{z}_k^j, and covariance matrix S_k^j into the following formulas to compute the filter gain K_k^j, the updated human motion pose estimate \hat{x}_{k|k}^j, and the error covariance matrix P_{k|k}^j:

K_k^j = P_{k|k-1}^j (h_k)^T (S_k^j)^{-1}    (12)

\hat{x}_{k|k}^j = \hat{x}_{k|k-1}^j + K_k^j \tilde{z}_k^j    (13)

P_{k|k}^j = P_{k|k-1}^j - K_k^j S_k^j (K_k^j)^T    (14)
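One full prediction/update cycle of step 6.3 can be sketched as follows. This is a minimal NumPy illustration under a linearized measurement model: the matrix `h` stands in for the Jacobian of the projection function H(·), and the function name and shapes are our own assumptions, not the patent's code.

```python
import numpy as np

# One model-matched filter cycle, Eqs. (7)-(14).
# F, Q : state-transition matrix and process-noise covariance of model j
# h, R : measurement Jacobian and measurement-noise covariance
# x_mix, P_mix : mixed estimates from the mixing step
# z    : measurement vector at time k
def ekf_cycle(F, Q, h, R, x_mix, P_mix, z):
    # Predict, Eqs. (7)-(8)
    x_pred = F @ x_mix
    P_pred = F @ P_mix @ F.T + Q
    # Residual and its covariance, Eqs. (9)-(10)
    resid = z - h @ x_pred
    S = h @ P_pred @ h.T + R
    # Gaussian likelihood, Eq. (11)
    lik = np.exp(-0.5 * resid @ np.linalg.solve(S, resid)) / \
          np.sqrt(np.linalg.det(2 * np.pi * S))
    # Gain and update, Eqs. (12)-(14)
    K = P_pred @ h.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ resid
    P_upd = P_pred - K @ S @ K.T
    return x_upd, P_upd, lik
```

The returned likelihood feeds the model probability update of step 6.4, while the updated state and covariance feed the fusion of step 6.5.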
6.4) Model probability update
Using the likelihood \Lambda_k^j obtained in step 6.3c), compute the probability u_k^j of matching model m_k^j at time k:

u_k^j = P(m_k^j \mid Z^k) = \frac{1}{c} \Lambda_k^j \bar{c}_j    (15)

where c = \sum_j \Lambda_k^j \bar{c}_j is a normalization constant.
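Eq. (15) amounts to weighting each model's likelihood by its predicted probability mass and renormalizing; a minimal sketch (illustrative names, not the patent's code):

```python
import numpy as np

# Model probability update, Eq. (15).
# lik[j]   : likelihood Λ_k^j of model j from step 6.3c)
# c_bar[j] : normalization constant c̄_j from the mixing step
def update_model_probs(lik, c_bar):
    u = lik * c_bar
    return u / u.sum()   # divide by c = Σ_j Λ_k^j c̄_j
```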
6.5) State estimation fusion
Using the matching-model state estimates \hat{x}_{k|k}^j computed in step 6.3d) and the matching-model probabilities u_k^j obtained in step 6.4), compute the human motion pose estimate and its error covariance matrix at time k:
6.5a) Compute the human motion pose estimate \hat{x}_{k|k} at time k:

\hat{x}_{k|k} = \sum_{j=1}^{3} \hat{x}_{k|k}^j u_k^j    (16)

where \hat{x}_{k|k}^j is the pose estimate of matching model m_k^j at time k and u_k^j is the probability of matching model m_k^j at time k.
6.5b) Compute the pose estimation error covariance matrix P_{k|k} at time k:

P_{k|k} = \sum_{j=1}^{3} [P_{k|k}^j + (\hat{x}_{k|k} - \hat{x}_{k|k}^j)(\hat{x}_{k|k} - \hat{x}_{k|k}^j)^T] u_k^j    (17)

where P_{k|k}^j is the pose estimation error covariance of matching model m_k^j at time k and \hat{x}_{k|k} denotes the pose estimate at time k.
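The fusion of Eqs. (16)-(17) is a probability-weighted combination of the per-model estimates, with a spread-of-means term added to each covariance. A minimal sketch (illustrative names and shapes, not the patent's code):

```python
import numpy as np

# State estimation fusion, Eqs. (16)-(17).
# x_upd : (r, n) per-model updated pose estimates
# P_upd : (r, n, n) per-model error covariances
# u     : (r,) model probabilities from Eq. (15)
def fuse(x_upd, P_upd, u):
    x = u @ x_upd                               # Eq. (16)
    P = np.zeros_like(P_upd[0])
    for j in range(len(u)):
        d = (x - x_upd[j])[:, None]
        P += u[j] * (P_upd[j] + d @ d.T)        # Eq. (17)
    return x, P
```

The d·dᵀ term inflates the fused covariance when the models disagree, which is what makes the fused estimate honest about inter-model uncertainty.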
Step 7: Motion model group activation.
From the human joint positions, compute the change in projection angle of the limb joints on the image; Figure 8 shows the limb-segment projection angle changes for the lower limbs as an example. If the projection angle change satisfies one of the following model group activation rules, record k_0 = k and execute the initialization step below for the newly activated model group; otherwise output the human motion pose estimate and execute step 6 above.
a) If the projection angle change of the hip is 2 times the previous value, motion model group 3 is activated;
b) if the projection angle change of most of the lower limbs is 2 times the previous value, motion model group 2 is activated;
c) if the projection angle change of most of the lower limbs is 1/2 of the previous value, motion model group 1 is activated.
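Rules a)-c) can be expressed as a simple decision function over the ratio of the current projection-angle change to the previous one. This is an illustrative sketch; the function name, the ratio inputs, and the exact comparison operators are our own assumptions.

```python
# Activation rules of Step 7.
# hip_ratio  : current hip projection-angle change / previous change
# limb_ratio : same ratio for the majority of the lower-limb segments
def select_activation(hip_ratio, limb_ratio):
    if hip_ratio >= 2.0:
        return 3        # rule a): activate model group 3
    if limb_ratio >= 2.0:
        return 2        # rule b): activate model group 2
    if limb_ratio <= 0.5:
        return 1        # rule c): activate model group 1
    return None         # no activation; continue with step 6
```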
Step 8: Initialization of the newly activated candidate model group.
Denote the current model group M_k, the former current model group M_o, and the newly activated candidate model group M_n; set M_o = M_k and M_k = M_n ∪ M_o.
8.1) The probability of each newly activated motion model m_i is initialized as:

\hat{\mu}(m_i \mid M_n, Z^k) = \max_j \hat{\mu}(m_j \mid M_o, Z^k)    (18)

and the model probabilities in the current group M_k are then normalized, where \hat{\mu}(m_j \mid M_o, Z^k) denotes the probability estimate of model m_j in the former current group M_o;
8.2) the prediction error covariance is initialized to the motion model's own noise covariance;
8.3) the state in the motion capture data that best matches the current pattern is chosen as the initial state;
8.4) the former model group M_o and the newly activated candidate group M_n are merged into the new current model group.
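Step 8.1) can be sketched as follows: every model of the new group starts at the maximum probability found in the former group, then the whole enlarged probability vector is renormalized (illustrative names, not the patent's code):

```python
import numpy as np

# Probability initialization of Step 8.1), Eq. (18).
# u_old : model probabilities of the former current group M_o
# n_new : number of models in the newly activated group M_n
def init_new_group(u_old, n_new):
    u_new = np.full(n_new, u_old.max())     # Eq. (18)
    u_all = np.concatenate([u_old, u_new])  # merged group M_n ∪ M_o
    return u_all / u_all.sum()              # normalize
```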
Step 9: Model group termination.
With the new current model group, compute for each group M_l ∈ {M_n, M_o}:

P_k^{M_l} = \sum_{i \in M_l} \mu_k^i,    \Lambda_k^{M_l} = \sum_{i \in M_l} \Lambda_k^i,    \hat{P}_k^{M_l} = \sum_{i \in M_l} \hat{\mu}_k^i

where \mu_k^i denotes the model probability of the i-th motion model at time k, P_k^{M_l} the sum of model probabilities of group M_l at time k, \Lambda_k^i the model likelihood of the i-th motion model at time k, \Lambda_k^{M_l} the sum of model likelihoods of group M_l at time k, \hat{\mu}_k^i the model probability estimate of the i-th motion model at time k, and \hat{P}_k^{M_l} the sum of model probability estimates of group M_l at time k. Model group termination proceeds as follows:
9.1) If the group likelihood ratio \Lambda_k^{M_n}/\Lambda_k^{M_o} or the group probability ratio \hat{P}_k^{M_n}/\hat{P}_k^{M_o} of group M_n to group M_o is less than 0.9, terminate group M_n, output the human motion pose estimate, and return to step 6;
9.2) if both the ratio \Lambda_k^{M_n}/\Lambda_k^{M_o} and the ratio \hat{P}_k^{M_n}/\hat{P}_k^{M_o} are greater than 1, terminate group M_o, output the human motion pose estimate, and return to step 6;
9.3) if neither step 9.1) nor step 9.2) applies, output the human motion pose estimate and continue executing this step.
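The termination logic of rules 9.1)-9.3) reduces to a three-way decision on the group likelihood and probability ratios. A minimal sketch (function and return names are our own; the 0.9 and 1.0 thresholds come from the text):

```python
# Termination test of Step 9.
# L_new, L_old : group likelihood sums of M_n and M_o
# P_new, P_old : group probability sums of M_n and M_o
def terminate(L_new, L_old, P_new, P_old):
    if L_new / L_old < 0.9 or P_new / P_old < 0.9:
        return "terminate_new"      # rule 9.1): drop group M_n
    if L_new / L_old > 1.0 and P_new / P_old > 1.0:
        return "terminate_old"      # rule 9.2): drop group M_o
    return "keep_both"              # rule 9.3): keep running both groups
```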
The effect of the present invention can be further illustrated by the following simulation experiments:
1) Data used in the simulation
The data used to train the motion models were obtained from the CMU motion capture database in ASF+AMC format. The data content is joint rotation angles expressed as Euler angles; the joint rotation angles needed in the experiments were extracted and then converted to quaternion representation.
The human motion video used in the experiments is self-recorded with an image size of 320 × 240. The first 100 frames, shot with an empty scene, are used to reconstruct the background, and the following 450 frames are used for human motion tracking. The motions contained in the video sequence are: marching in place (frames 1-120), hand-waving while striding (frames 121-250), jumping jacks (frames 251-390), and squatting (frames 391-450).
2) Simulation content
The variable structure multi-model method is used to track the human motion sequence. Motion capture data from the CMU motion capture database are used to train the following human motion models: stiff walking m_1, walking m_2, arms-outstretched balanced walking m_3, jumping-jack m_4, jumping m_5, and squatting m_6, as shown in Table 1. The topological relations between the motion models are shown in Figure 7. According to Table 1 and Figure 7, the connectivity and jump possibility between motion models are analyzed: if two motion models are connected and model probability can jump between them, the human motion patterns matched by the two models are considered similar and the two models are placed in the same model group; otherwise they are placed in different model groups. The total motion model set M = {m_1, m_2, m_3, m_4, m_5, m_6} is grouped accordingly, with the grouping result shown in Table 2. The fused human motion pose estimate is obtained with the variable structure multi-model algorithm.
Table 1. Motion model transition probabilities (table reproduced as an image in the original).
Table 2. Grouping result of the total motion model set (table reproduced as an image in the original).
3) Simulation results and analysis
The variable structure multi-model algorithm is used to track the human motion in the self-recorded video. The final joint detection results, 3D human motion tracking results, and their 2D projections are shown in Figure 9: the joint detection results in Fig. 9(a), the 2D projection of the tracking result in Fig. 9(b), and the estimated 3D pose in Fig. 9(c). Figure 9(b) shows that the 2D image projection of the estimated pose essentially coincides with the human skeleton line; Figure 9(c) shows that the 3D pose estimate agrees with the real human pose. The present invention thus effectively resolves the ambiguity of human motion and improves the accuracy and stability of human motion tracking.
In the tracking experiment, the model probability of each motion model evolves as shown in Figure 10: stiff walking m_1 in Fig. 10(a), walking m_2 in Fig. 10(b), arms-outstretched balanced walking m_3 in Fig. 10(c), jumping-jack m_4 in Fig. 10(d), jumping m_5 in Fig. 10(e), and squatting m_6 in Fig. 10(f). Figure 10 shows that at each moment only one motion model dominates: when a motion model is similar to the current motion pattern its model probability is large, and when the motion pattern changes, the dominant model changes with it. The model group activation rules of the present invention therefore accomplish the group activation task well.
The errors between the projected 3D joints and the actual joint positions are shown in Figure 11: the error between the projected 3D right elbow and the actual right elbow position in Fig. 11(a), and the error between the projected 3D right hand and the actual right hand position in Fig. 11(b). The average projection error lies between 2.7 cm and 5.2 cm, showing that the variable structure multi-model method yields a small projection error.
The simulation experiments were implemented in Matlab on an HP workstation under Windows. Joint detection runs at 1 second per frame and human motion tracking at 10 frames per second, so the time complexity is low.
The present invention tracks human motion with the variable structure multi-model method. Using motion models trained on motion capture data as the filter state equations makes the tracking conform better to human motion regularities and reduces the influence of motion ambiguity. The coverage of the total motion model set not only solves the problem that a small model set cannot accurately track complex human motion, but also avoids the needless competition brought by running many motion models simultaneously, which would both raise time complexity and lower tracking accuracy. The extracted image features are simple, reducing per-frame running time, and each frame is tracked using only the motion models compatible with the current motion pattern rather than the whole model set, further reducing time complexity. The simulation results show that this tracking method obtains accurate 2D projections and 3D pose recovery, reduces human motion ambiguity, and has low time complexity.

Claims (4)

1. A human motion tracking method based on VSMM, comprising:
a preprocessing step: inputting a human motion video image, obtaining the human silhouette by background subtraction, extracting the outer contour of the silhouette, and thinning the silhouette;
a joint detection step: performing the following joint detection on the preprocessed video image:
1) searching along the skeleton line with a concentric-circle template, and taking the circle center with the most points falling inside the annulus as the head node;
2) choosing the centroid of the human silhouette as the root node;
3) obtaining the other torso joint positions from the projection of a 3D human skeleton model onto the image;
4) choosing the endpoints of the skeleton line as the hand and foot nodes;
5) drawing a line parallel to the foot-to-foot line through the centroid of the lower-body silhouette, and taking its two intersections with the skeleton line as the knee joints;
6) taking the points on the skeleton line equidistant from hand and shoulder as the elbow joints;
7) for joints that cannot be detected due to occlusion or segmentation noise, obtaining them by Kalman filter one-step prediction;
a motion model training step: choosing capture data of multiple motion patterns from the CMU motion capture database, training the state-transition matrix F_i of each motion model equation by ridge regression, and computing the covariance of the model noise w_k; the resulting set of motion models is called the total motion model set M;
a motion model set covering design step: within the total motion model set M, if the human motion patterns matched by two motion models are similar, placing the two models in the same model group, otherwise in different model groups; each model group contains 3 motion models;
a model group initialization step: using the motion model equations of the total set as the state equations of the interacting multiple model filter, running the interacting multiple model for ten cycles, computing each group's model group probability, and selecting the group with maximum probability as the initial current model group M_1;
an interacting multiple model mixed estimation step: taking the human joint points at time k as input, executing the interacting multiple model algorithm to obtain the human motion pose estimate, and updating the motion model probabilities and the pose estimation error covariance;
a motion model group activation step: computing, from the joint positions, the change in projection angle of the limb skeleton lines on the image; if the change satisfies the model group activation rules, recording the time as k_0 and executing the following initialization step for the newly activated model group, otherwise outputting the pose estimate and executing the interacting multiple model mixed estimation step;
an initialization step for the newly activated model group: initializing the probability of each newly activated model to the maximum model probability in the current group and normalizing the model probabilities; initializing the prediction error covariance to the motion model's own noise covariance; choosing the state in the motion capture data that best matches the current pattern as the initial state; merging the former model group M_o and the newly activated candidate group M_n into the new current model group;
a model group termination step: re-executing one cycle of the interacting multiple model with the new current model group; if the group likelihood ratio \Lambda_k^{M_n}/\Lambda_k^{M_o} or the group probability ratio \hat{P}_k^{M_n}/\hat{P}_k^{M_o} of group M_n to group M_o is less than 0.9, terminating group M_n, outputting the pose estimate, and returning to the interacting multiple model mixed estimation step; if both ratios are greater than 1, terminating group M_o, outputting the pose estimate, and returning to the interacting multiple model mixed estimation step; otherwise outputting the pose estimate and continuing this step.
2. The human motion tracking method according to claim 1, wherein training the state-transition matrix F_i of the motion model equation by ridge regression in the motion model training step is carried out as follows:
2a) let x_{k+1}^i = F_i x_k^i + w_k denote the i-th motion model equation to be trained, where F_i denotes the state-transition matrix;
2b) obtain data pairs (x_k^i, x_{k+1}^i) from the motion capture data, where x_k^i is the human motion parameter represented as quaternions;
2c) F_i is obtained from:

F_i = \arg\min_{F_i} \{ \sum_{k=1}^{n-1} \| F_i x_k^i - x_{k+1}^i \|^2 + R(F_i) \}

where R(F_i) = \lambda \|F_i\|^2, \lambda is the regularization factor, and \lambda = 0.15 in the experiments of the present invention.
Multiple motion models can be trained in this way in the experiments, such as stiff walking m_1, walking m_2, arms-outstretched balanced walking m_3, jumping-jack m_4, jumping m_5, and squatting m_6; all motion models are trained by the above steps, finally yielding the total motion model set M = {m_1, m_2, m_3, m_4, m_5, m_6}.
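For illustration, the ridge minimization above has a closed-form solution that can be sketched as follows. This is a minimal sketch under our own assumptions (one pose vector per row of `X`, hypothetical function name), not the patent's implementation.

```python
import numpy as np

# Closed-form ridge regression for the state-transition matrix F_i,
# minimizing sum_k ||F x_k - x_{k+1}||^2 + lam * ||F||^2 over a motion
# capture sequence X with one pose vector per row; lam = 0.15 in the text.
def train_transition(X, lam=0.15):
    A, B = X[:-1], X[1:]          # pairs (x_k, x_{k+1})
    d = A.shape[1]
    # Normal equations: F = B^T A (A^T A + lam I)^{-1}
    return B.T @ A @ np.linalg.inv(A.T @ A + lam * np.eye(d))
```

With a small regularization factor, the learned matrix closely recovers the dynamics that generated the training sequence.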
3. The human motion tracking method according to claim 1, wherein the model group activation rules in the motion model group activation step comprise:
3a) if the projection angle change of the hip is 2 times the previous value, motion model group 3 is activated;
3b) if the projection angle change of most of the lower limbs is 2 times the previous value, motion model group 2 is activated;
3c) if the projection angle change of most of the lower limbs is 1/2 of the previous value, motion model group 1 is activated.
4. A human motion tracking device based on VSMM, comprising:
a preprocessing unit for inputting the human motion video image, obtaining the human silhouette by background subtraction, extracting the outer contour of the silhouette, and thinning the silhouette;
a joint detection unit for performing the following joint detection on the preprocessed video image:
1) searching along the skeleton line with a concentric-circle template, and taking the circle center with the most points falling inside the annulus as the head node;
2) choosing the centroid of the human silhouette as the root node;
3) obtaining the other torso joint positions from the projection of a 3D human skeleton model onto the image;
4) choosing the endpoints of the skeleton line as the hand and foot nodes;
5) drawing a line parallel to the foot-to-foot line through the centroid of the lower-body silhouette, and taking its two intersections with the skeleton line as the knee joints;
6) taking the points on the skeleton line equidistant from hand and shoulder as the elbow joints;
7) for joints that cannot be detected due to occlusion or segmentation noise, obtaining them by Kalman filter one-step prediction;
a motion model training unit for choosing capture data of multiple motion patterns from the CMU motion capture database, training the state-transition matrix F_i of each motion model equation by ridge regression, and computing the covariance of the model noise w_k, the resulting set of motion models being called the total motion model set M;
a motion model set covering design unit for placing, within the total motion model set M, two motion models into the same model group if the human motion patterns they match are similar and otherwise into different model groups, each model group containing 3 motion models;
a model group initialization unit for using the motion model equations of the total set as the state equations of the interacting multiple model filter, running the interacting multiple model for ten cycles, computing each group's model group probability, and selecting the group with maximum probability as the initial current model group M_1;
an interacting multiple model mixed estimation unit for taking the human joint points at time k as input, executing the interacting multiple model algorithm to obtain the human motion pose estimate, and updating the motion model probabilities and the pose estimation error covariance;
a motion model group activation unit for computing, from the joint positions, the change in projection angle of the limb skeleton lines on the image and, if the change satisfies the model group activation rules, recording the time as k_0 and invoking the following initialization unit for the newly activated model group, otherwise outputting the pose estimate and invoking the interacting multiple model mixed estimation unit;
an initialization unit for the newly activated model group, for initializing the probability of each newly activated model to the maximum model probability in the current group and normalizing the model probabilities, initializing the prediction error covariance to the motion model's own noise covariance, choosing the state in the motion capture data that best matches the current pattern as the initial state, and merging the former model group M_o and the newly activated candidate group M_n into the new current model group;
a model group termination unit for re-executing one cycle of the interacting multiple model with the new current model group; if the group likelihood ratio \Lambda_k^{M_n}/\Lambda_k^{M_o} or the group probability ratio \hat{P}_k^{M_n}/\hat{P}_k^{M_o} of group M_n to group M_o is less than 0.9, terminating group M_n, outputting the pose estimate, and returning to the interacting multiple model mixed estimation unit; if both ratios are greater than 1, terminating group M_o, outputting the pose estimate, and returning to the interacting multiple model mixed estimation unit; otherwise outputting the pose estimate and continuing this unit.
CN2010102309755A 2010-07-16 2010-07-16 Human motion tracing method based on variable structure multi-model Expired - Fee Related CN101894278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102309755A CN101894278B (en) 2010-07-16 2010-07-16 Human motion tracing method based on variable structure multi-model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010102309755A CN101894278B (en) 2010-07-16 2010-07-16 Human motion tracing method based on variable structure multi-model

Publications (2)

Publication Number Publication Date
CN101894278A true CN101894278A (en) 2010-11-24
CN101894278B CN101894278B (en) 2012-06-27

Family

ID=43103466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102309755A Expired - Fee Related CN101894278B (en) 2010-07-16 2010-07-16 Human motion tracing method based on variable structure multi-model

Country Status (1)

Country Link
CN (1) CN101894278B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1298454A2 (en) * 2001-09-28 2003-04-02 IBEO Automobile Sensor GmbH Method for recognising and tracking objects
CN101038671A (en) * 2007-04-25 2007-09-19 上海大学 Tracking method of three-dimensional finger motion locus based on stereo vision
CN101216941A (en) * 2008-01-17 2008-07-09 上海交通大学 Motion estimation method under violent illumination variation based on corner matching and optic flow method
CN101231703A (en) * 2008-02-28 2008-07-30 上海交通大学 Method for tracing a plurality of human faces base on correlate vector machine to improve learning
WO2010069168A1 (en) * 2008-12-15 2010-06-24 东软集团股份有限公司 Method and apparatus for estimating self-motion parameters of vehicle

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1298454A2 (en) * 2001-09-28 2003-04-02 IBEO Automobile Sensor GmbH Method for recognising and tracking objects
CN101038671A (en) * 2007-04-25 2007-09-19 上海大学 Tracking method of three-dimensional finger motion locus based on stereo vision
CN101216941A (en) * 2008-01-17 2008-07-09 上海交通大学 Motion estimation method under violent illumination variation based on corner matching and optic flow method
CN101231703A (en) * 2008-02-28 2008-07-30 上海交通大学 Method for tracing a plurality of human faces base on correlate vector machine to improve learning
WO2010069168A1 (en) * 2008-12-15 2010-06-24 东软集团股份有限公司 Method and apparatus for estimating self-motion parameters of vehicle

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102486816A (en) * 2010-12-02 2012-06-06 Samsung Electronics Co., Ltd. Device and method for calculating human body shape parameters
CN102074034B (en) * 2011-01-06 2013-11-06 Xidian University Multi-model human motion tracking method
CN102074034A (en) * 2011-01-06 2011-05-25 Xidian University Multi-model human motion tracking method
CN102254343A (en) * 2011-07-01 2011-11-23 Zhejiang Sci-Tech University Convex hull and OBB (Oriented Bounding Box)-based three-dimensional grid model skeleton extraction method
CN102254343B (en) * 2011-07-01 2013-01-30 Zhejiang Sci-Tech University Convex hull and OBB (Oriented Bounding Box)-based three-dimensional grid model skeleton extraction method
CN102509338A (en) * 2011-09-20 2012-06-20 Beihang University Contour and skeleton diagram-based video scene behavior generation method
CN102509338B (en) * 2011-09-20 2014-05-07 Beihang University Contour and skeleton diagram-based video scene behavior generation method
CN102663779A (en) * 2012-05-03 2012-09-12 Xidian University Human motion tracking method based on stochastic Gaussian hidden variables
CN104020466B (en) * 2014-06-17 2016-05-25 Xidian University Maneuvering target tracking method based on variable structure multi-model
CN104020466A (en) * 2014-06-17 2014-09-03 Xidian University Maneuvering target tracking method based on variable structure multiple models
CN104463146B (en) * 2014-12-30 2018-04-03 South China Normal University Posture identification method and device based on near-infrared TOF camera depth information
CN104463146A (en) * 2014-12-30 2015-03-25 South China Normal University Posture identification method and device based on near-infrared TOF camera depth information
CN106295616A (en) * 2016-08-24 2017-01-04 Zhang Bin Exercise data analysis and comparison method and device
CN106295616B (en) * 2016-08-24 2019-04-30 Zhang Bin Exercise data analysis and comparison method and device
CN108596917A (en) * 2018-04-19 2018-09-28 Hubei University of Technology Target main skeleton extraction method
CN109633590A (en) * 2019-01-08 2019-04-16 Hangzhou Dianzi University Extended target tracking method based on GP-VSMM-JPDA
CN110427890B (en) * 2019-08-05 2021-05-11 Huaqiao University Multi-person attitude estimation method based on deep cascade network and centroid differentiation coding
CN110427890A (en) * 2019-08-05 2019-11-08 Huaqiao University Multi-person attitude estimation method based on deep cascade network and centroid differentiation coding
CN110849369A (en) * 2019-10-29 2020-02-28 Suning Cloud Computing Co., Ltd. Robot tracking method, device, equipment and computer readable storage medium
CN110849369B (en) * 2019-10-29 2022-03-29 Suning Cloud Computing Co., Ltd. Robot tracking method, device, equipment and computer readable storage medium
TWI736083B (en) * 2019-12-27 2021-08-11 Industrial Technology Research Institute Method and system for motion prediction
US11403768B2 (en) * 2019-12-27 2022-08-02 Industrial Technology Research Institute Method and system for motion prediction
CN112220559A (en) * 2020-10-16 2021-01-15 Beijing Institute of Technology Method and device for determining gravity and bias force of mechanical arm
CN112488005A (en) * 2020-12-04 2021-03-12 Linyi Xinshang Network Technology Co., Ltd. On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion
CN112488005B (en) * 2020-12-04 2022-10-14 Linyi Xinshang Network Technology Co., Ltd. On-duty monitoring method and system based on human skeleton recognition and multi-angle conversion
CN113344963A (en) * 2021-05-27 2021-09-03 Shaoxing Beida Information Technology Innovation Center Seed point self-adaptive target tracking system based on image segmentation

Also Published As

Publication number Publication date
CN101894278B (en) 2012-06-27

Similar Documents

Publication Publication Date Title
CN101894278B (en) Human motion tracing method based on variable structure multi-model
CN102074034B (en) Multi-model human motion tracking method
CN102184541B (en) Multi-objective optimized human body motion tracking method
CN109636831B (en) Method for estimating three-dimensional human body posture and hand information
CN101692284B (en) Three-dimensional human body motion tracking method based on quantum immune clone algorithm
CN105930767B (en) Action recognition method based on the human skeleton
Zhang et al. Real-time human motion tracking using multiple depth cameras
CN102982557B (en) Method for processing space hand signal gesture command based on depth camera
CN102622766A (en) Multi-objective optimization multi-lens human motion tracking method
CN101604447B (en) No-mark human body motion capture method
CN102682452A (en) Human motion tracking method based on a combination of generative and discriminative models
CN104167016A (en) Three-dimensional motion reconstruction method based on RGB color and depth image
Chang et al. The model-based human body motion analysis system
CN106815855A (en) Human motion tracking method based on combined generative and discriminative models
Rosenhahn et al. Scaled motion dynamics for markerless motion capture
CN102663779A (en) Human motion tracking method based on stochastic Gaussian hidden variables
Yu et al. Towards robust and accurate single-view fast human motion capture
Ko et al. CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
Sheu et al. Improvement of human pose estimation and processing with the intensive feature consistency network
Wu et al. An unsupervised real-time framework of human pose tracking from range image sequences
Ross et al. Unsupervised learning of skeletons from motion
KR20200057572A (en) Hand recognition augmented reality-intraction apparatus and method
Jaafar et al. An investigation of motion tracking for solat movement with dual sensor approach
Zhang et al. Human Action prediction based on skeleton data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120627

Termination date: 20180716
