CN103810496A - 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information - Google Patents


Info

Publication number
CN103810496A
CN103810496A (application CN201410009445.6A)
Authority
CN
China
Prior art keywords
joint
human body
space
action
depth information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410009445.6A
Other languages
Chinese (zh)
Other versions
CN103810496B (en)
Inventor
蒋敏
孔军
唐晓微
姜克
郑宪成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing weiminghui Information Technology Co.,Ltd.
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN201410009445.6A priority Critical patent/CN103810496B/en
Publication of CN103810496A publication Critical patent/CN103810496A/en
Application granted granted Critical
Publication of CN103810496B publication Critical patent/CN103810496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a 3D (three-dimensional) Gaussian-space human behavior recognition method based on image depth information. The method comprises the steps of: extracting the 3D coordinates of the human skeleton from the depth information, normalizing the 3D coordinates, and filtering out joints that contribute little to behavior recognition as well as redundant joints; building interest joint groups for each behavior; computing human motion space features and performing AP (affinity propagation) clustering based on a Gaussian distance kernel to obtain behavior feature word lists, then cleaning the word lists; and building human behavior conditional random field recognition models and classifying human behaviors with them. The method is highly robust to the specific orientation, skeleton size and spatial position of the human body, generalizes well over the motion differences introduced by different experimental subjects, and has excellent discrimination between similar behaviors.

Description

3D Gaussian-space human behavior recognition method based on image depth information
Technical field:
The present invention relates to the field of machine vision, and in particular to a 3D Gaussian-space human behavior recognition method based on image depth information.
Background technology:
Recognition of human behavior in video has important applications in fields such as video surveillance, human-computer interaction and video retrieval. Although experts and scholars in many countries have proposed numerous methods over the past decade and achieved remarkable progress in the field, high-precision human behavior recognition remains a challenging task. One reason is that human behavior is a dynamic temporal sequence: the boundaries between actions are fuzzy, the same action varies even when performed by the same person, different actions may be combined with one another, and occlusion may occur while an action is being performed. Segmenting the human body from the background is itself a difficult task, which further aggravates the difficulty of behavior recognition.
Depth cameras released in recent years provide millimetre-level 3D depth information, which greatly reduces the difficulty of human body segmentation. For depth information, Shotton proposed a per-pixel object recognition method based on a random decision forest classifier (Shotton, J., et al. Real-Time Human Pose Recognition in Parts from Single Depth Images. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, 2011). Borrowing from object recognition theory, the method uses an intermediate body-part representation to map the difficult pose estimation problem to a simple per-pixel classification problem, and finds the optimal estimate of each joint by mean-shift-based local optimization. With this method, the 3D skeleton joint coordinates of the human body can be obtained directly. Human motion is an articulated structure. Given the skeleton shown in Fig. 1, where the left image is the depth image and the right image is the corresponding skeleton image obtained by Shotton's method, the human visual system can easily judge the action even when some joints are occluded.
However, 3D joints estimated from monocular depth information carry a lot of noise and may even contain obvious errors, especially under occlusion, for example when the hands cross or multiple bodies touch each other. Reasoning based on such 3D joints alone therefore cannot guarantee the accuracy of human behavior recognition.
Summary of the invention:
To overcome the above defects of the prior art, the present invention provides a robust 3D Gaussian-space human behavior recognition method based on image depth information.
To achieve this goal, the invention provides the following technical scheme:
A 3D Gaussian-space human behavior recognition method based on image depth information, the steps of which are:
Step 1: for the depth information of each frame, apply the per-pixel object recognition method based on a random decision forest classifier proposed by Shotton to detect the human body and obtain the 3D coordinates of the human joints.
Step 2: normalize the 3D human joint coordinate data.
Step 3: screen the human joints, filtering out joints that contribute little to behavior recognition and redundant joints.
Step 4: analyze each behavior class; using the AP clustering algorithm, find the joints with outstanding spatial travel within each behavior class and build the interest joint group.
Step 5: for each behavior class, compute the 3D Gaussian-space features of each action based on the interest joint group.
Step 6: using the AP clustering algorithm with a Gaussian distance kernel, cluster the 3D Gaussian-space features projected into the human action space into n action groups, and obtain the cluster center representing each group.
Step 7: for each action group, build the behavior feature word list from the cluster center to which each action belongs, and perform data cleaning on each group.
Step 8: build a human behavior conditional random field (CRF) model and train it on samples to obtain the behavior recognition model.
Step 9: recognize new samples.
In the above technical scheme, the normalization of the 3D human joint coordinate data in step 2 comprises limb-vector size normalization, skeleton reference-zero normalization and skeleton orientation normalization.
The limb-vector size normalization comprises:
a) selecting one set of human 3D joint coordinates as the standard model;
b) keeping the direction of each limb-segment vector of each sample unchanged while scaling each vector to the standard-model length;
c) taking the hip center as the reference point, building the joint tree and moving each joint according to the scaled lengths; the movement vector is
Δd = Σ_{i=1}^{n} Δd_{f_i},
where Δd_{f_i} is the movement vector of the i-th ancestor of the current node and n is the number of ancestors of the current node.
The skeleton reference-zero normalization comprises: moving the skeleton so that the hip center becomes the zero point of a new coordinate reference space O'.
The skeleton orientation normalization comprises:
a) choosing the X axis so that it is parallel to the vector from the left hip to the right hip, and constructing a line through the hip center (the zero point of O') perpendicular to the new ground reference plane to obtain the Z axis of the new coordinate reference space;
b) rotating the skeleton to map it into the new coordinate reference space.
In the above technical scheme, step 3 retains by screening the set of joints with a large contribution to behavior recognition; the retained set comprises 12 joints: the head, left/right elbows, left/right wrists, left/right knees, left/right ankles, left/right hips, and the hip center.
In the above technical scheme, step 4 builds the interest joint group with the AP algorithm as follows:
a) compute the movement distance of each joint between consecutive frames: letting the coordinates of joint k in consecutive frames i and i+1 be (x_ik, y_ik, z_ik) and (x_{i+1,k}, y_{i+1,k}, z_{i+1,k}), the movement distance d_ik satisfies
d_ik² = (x_ik − x_{i+1,k})² + (y_ik − y_{i+1,k})² + (z_ik − z_{i+1,k})²;
b) accumulate all movement distances to obtain the total travel D_k of joint k:
D_k = Σ_{i=1}^{n−1} d_ik²;
c) using the AP algorithm with a specified cluster number and Euclidean distance as the similarity measure, divide all joints into 3 classes according to the movement distances computed in the previous step;
d) discard the class with the shortest movement distance and take the two classes with the longer distances as the joints with the higher contribution, thereby building the interest joint group of the behavior.
In the above technical scheme, the 3D Gaussian-space features of each action in step 5 are computed as follows:
a) divide the 3D space into m × n × l (m, n, l ∈ Z) subspaces, so that each joint falls into exactly one subspace;
b) compute the subspace Gaussian density of the remaining 11 joints (all except the hip center):
(1) for each joint, compute its subspace Gaussian density
p(X, u, Σ) = exp(−(1/2)(X − u)^T Σ^{−1} (X − u)) / ((2π)^{n/2} |Σ|^{1/2}),
where X denotes the joint coordinates, u the subspace center and Σ the covariance matrix; let Σ = (d/3)·n·I, where d is the diagonal length of each subspace, n is the number of subspaces and I is the identity matrix;
(2) since for a normal distribution 99% of the information lies within ±3 standard deviations (i.e. within d·n·I, n = 3.5), set the subspace Gaussian density p(X, u, Σ) = 0 for any joint whose distance d_{joint,bin} from the subspace center exceeds ε (ε = d);
c) the subspace Gaussian densities of the 11 joints of each action form a sparse motion feature representation.
In the above technical scheme, the Gaussian distance kernel in step 6 is constructed as:
k(x, y) = exp(−‖x − y‖² / (2σ²)),
where x and y denote two feature vectors and σ is the standard deviation.
In step 6, the 3D Gaussian-space feature clustering of the human action space is:
a) using the Gaussian distance kernel above, compute the Gaussian-density feature similarity s(x, y) of each pair of actions;
b) set very small similarities to 0, building a sparse similarity matrix;
c) take a reference (preference) value computed from the similarities, n being the number of samples; the cluster number is determined automatically by message passing between samples, and applying AP clustering with sparse-matrix support yields k' cluster-center actions.
In the above technical scheme, the behavior feature word list in step 7 is built as follows:
a) replace every sample of the original action sequence by the action of the cluster center it belongs to, obtaining a string of visual words;
b) clean the visual word string of each behavior sample by deleting consecutively repeated words, reducing the influence of time offsets between different samples, and obtain the behavior feature word list.
In the above technical scheme, the human behavior recognition model obtained in step 8 is optimized with PSS:
min_θ f(θ) = −log p_θ(Y|X) + r(θ),
where
r(θ) = λ1 Σ_i |θ_i| + λ2 ‖θ‖²,
p_θ(Y|X) = Π_{t=1}^{T+1} exp(Ψ_θ(y_t, X) + Ψ_θ(y_t, y_{t−1})) / Z_θ(X),
Ψ_θ(y_t, X) = Σ_{a=1}^{A} λ_a h_a(y_t, X),
Ψ_θ(y_t, y_{t−1}) = Σ_{b=1}^{B} β_b g_b(y_t, y_{t−1}),
Z_θ(X) = Σ_Y Π_{c∈C(Y,X)} φ_θ^c(Y_c, X_c), with φ_θ^c the potential function of clique c,
h_a(y_t, X) = 1[y_t = m]·x_j^i, where m ∈ Y and j ∈ [0, T],
g_b(y_t, y_{t−1}) = 1[y_t = m1 ∧ y_{t−1} = m2], where m1, m2 ∈ Y.
Compared with the prior art, the present invention has the following beneficial effects: the 3D joint coordinate normalization of step 2 makes the method invariant to body orientation, bone size and spatial position; the selection in step 3 of the joint set with a large contribution to recognition, together with the behavior-specific interest joint group selection of step 4, markedly enlarges the inter-class distances, effectively filters out the interference caused by irrelevant joints and strengthens the noise resistance of the model; combining the joint coordinate normalization, joint set selection, interest joint group selection, sparse 3D Gaussian-space feature representation and the human behavior CRF model builds a robust learning system. The invention recognizes generic behaviors with high accuracy, generalizes well over the action differences introduced by different experimental subjects, and also recognizes similar behaviors well.
Brief description of the drawings:
Fig. 1 is a skeleton diagram obtained with the per-pixel object recognition method based on a random decision forest classifier proposed by Shotton;
Fig. 2 is the flow chart of the 3D Gaussian-space human behavior recognition method based on image depth information of the present invention;
Fig. 3 is a diagram of the 3D human joint coordinate normalization of the present invention;
Fig. 4 is a diagram of the human behavior CRF model adopted by the present invention;
Fig. 5 is a refined illustration of the adopted CRF model, taking right-hand waving as the example;
Fig. 6 is the confusion matrix of the present invention on 8 common behaviors.
Embodiment:
As shown in Figs. 1 and 2, in step 1, for the depth information of each frame, the per-pixel object recognition method based on a random decision forest classifier proposed by Shotton is applied to detect the human body and obtain the 3D joint coordinates; the left image is the depth image and the right image is the corresponding skeleton image obtained by this method.
As shown in Figs. 2 and 3, in step 2, the normalization of the 3D human joint coordinate data comprises limb-vector size normalization, skeleton reference-zero normalization and skeleton orientation normalization.
The limb-vector size normalization comprises:
a) selecting one set of human 3D joint coordinates as the standard model;
b) keeping the direction of each limb-segment vector of each sample unchanged while scaling each vector to the standard-model length;
c) taking the hip center as the reference point, building the joint tree and moving each joint according to the scaled lengths; the movement vector is
Δd = Σ_{i=1}^{n} Δd_{f_i},
where Δd_{f_i} is the movement vector of the i-th ancestor of the current node and n is the number of ancestors of the current node.
The skeleton reference-zero normalization comprises: moving the skeleton so that the hip center becomes the zero point of a new coordinate reference space O'.
The skeleton orientation normalization comprises:
a) choosing the X axis so that it is parallel to the vector from the left hip to the right hip, and constructing a line through the hip center (the zero point of O') perpendicular to the new ground reference plane to obtain the Z axis of the new coordinate reference space;
b) rotating the skeleton to map it into the new coordinate reference space.
This 3D joint coordinate normalization makes the method invariant to body orientation, bone size and spatial position.
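The limb-length and reference-zero normalizations above can be sketched as follows. This is an illustrative Python sketch: the joint tree, joint names and standard limb lengths are assumptions for the example, not values from the patent.

```python
import math

# Hypothetical minimal skeleton: each joint has a parent; the hip center is the root.
PARENT = {"hip_center": None, "left_hip": "hip_center", "left_knee": "left_hip"}
STANDARD_LENGTH = {"left_hip": 0.12, "left_knee": 0.40}  # metres, assumed

def normalize_limb_lengths(joints):
    """Rescale each limb vector to the standard-model length while keeping
    its direction, walking the joint tree from the hip-center root."""
    out = {"hip_center": joints["hip_center"]}
    # parents are processed before children; building each child from its
    # already-normalized parent implicitly accumulates the ancestor movement
    # vectors (the sum over Δd_{f_i} in the patent's formula)
    for name in ["left_hip", "left_knee"]:
        p = PARENT[name]
        vx, vy, vz = (joints[name][i] - joints[p][i] for i in range(3))
        norm = math.sqrt(vx * vx + vy * vy + vz * vz) or 1.0
        s = STANDARD_LENGTH[name] / norm
        out[name] = tuple(out[p][i] + s * (joints[name][i] - joints[p][i])
                          for i in range(3))
    return out

def recenter_at_hip(joints):
    """Skeleton reference-zero normalization: the hip center becomes the
    origin of the new coordinate reference space O'."""
    cx, cy, cz = joints["hip_center"]
    return {k: (x - cx, y - cy, z - cz) for k, (x, y, z) in joints.items()}
```

After these two steps, a rotation aligning the left-to-right-hip vector with the X axis would complete the orientation normalization.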
As shown in Fig. 2, in step 3, the human joints are screened: joints that contribute little to behavior recognition and redundant joints are filtered out, retaining the set of joints with a large contribution. The retained set comprises 12 joints: the head, left/right elbows, left/right wrists, left/right knees, left/right ankles, left/right hips, and the hip center.
As shown in Fig. 6, each behavior class in steps 4 and 5 refers to a typical behavior formed by a set sequence. The experimental results of the present invention include the recognition results of the following 8 behaviors: high throw, forward kick, side kick, jogging, tennis swing, tennis serve, golf swing, and pick up and throw.
As shown in Fig. 2, in step 4, each behavior class is analyzed; using the AP clustering algorithm, the joints with outstanding spatial travel within each behavior class are found and the interest joint group is built:
a) compute the movement distance of each joint between consecutive frames: letting the coordinates of joint k in consecutive frames i and i+1 be (x_ik, y_ik, z_ik) and (x_{i+1,k}, y_{i+1,k}, z_{i+1,k}), the movement distance d_ik satisfies
d_ik² = (x_ik − x_{i+1,k})² + (y_ik − y_{i+1,k})² + (z_ik − z_{i+1,k})²;
b) accumulate all movement distances to obtain the total travel D_k of joint k:
D_k = Σ_{i=1}^{n−1} d_ik²;
c) using the AP algorithm with a specified cluster number and Euclidean distance as the similarity measure, divide all joints into 3 classes according to the movement distances computed in the previous step;
d) discard the class with the shortest movement distance and take the two classes with the longer distances as the joints with the higher contribution, thereby building the interest joint group of the behavior.
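As a sketch of steps a) and b), the total travel of a joint can be accumulated as below. The final 3-class AP step is replaced here by a simple sort, purely to illustrate how the least-moving joints would be discarded; the patent itself uses AP clustering for this grouping.

```python
def joint_travel(frames, k):
    """Total squared travel D_k of joint k over consecutive frames.
    `frames` is a list of {joint_name: (x, y, z)} dicts."""
    total = 0.0
    for a, b in zip(frames, frames[1:]):
        total += sum((a[k][i] - b[k][i]) ** 2 for i in range(3))
    return total

def interest_joint_group(frames, joints, drop=1):
    """Keep the joints with the largest travel, discarding the `drop`
    least-moving ones (standing in for the discarded AP cluster)."""
    ranked = sorted(joints, key=lambda k: joint_travel(frames, k), reverse=True)
    return ranked[: len(ranked) - drop]
```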
The selection in step 3 of the joint set with a large contribution to recognition, together with the behavior-specific interest joint group selection of step 4, markedly enlarges the inter-class distances, effectively filters out the interference caused by irrelevant joints, and strengthens the noise resistance of the model.
As shown in Fig. 2, in step 5, for each behavior class, the 3D Gaussian-space features of each action are computed based on the interest joint group:
a) divide the 3D space into m × n × l (m, n, l ∈ Z) subspaces, so that each joint falls into exactly one subspace;
b) compute the subspace Gaussian density of the remaining 11 joints (all except the hip center):
(1) for each joint, compute its subspace Gaussian density
p(X, u, Σ) = exp(−(1/2)(X − u)^T Σ^{−1} (X − u)) / ((2π)^{n/2} |Σ|^{1/2}),
where X denotes the joint coordinates, u the subspace center and Σ the covariance matrix; let Σ = (d/3)·n·I, where d is the diagonal length of each subspace, n is the number of subspaces and I is the identity matrix;
(2) since for a normal distribution 99% of the information lies within ±3 standard deviations (i.e. within d·n·I, n = 3.5), set the subspace Gaussian density p(X, u, Σ) = 0 for any joint whose distance d_{joint,bin} from the subspace center exceeds ε (ε = d);
c) the subspace Gaussian densities of the 11 joints of each action form a sparse motion feature representation.
As shown in Fig. 2, step 6 applies the AP clustering algorithm and constructs a Gaussian distance kernel
k(x, y) = exp(−‖x − y‖² / (2σ²)),
where x and y denote two feature vectors and σ is the standard deviation.
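The subspace Gaussian density of step 5 and the Gaussian distance kernel of step 6 can be sketched as follows. This is an illustrative Python sketch assuming an isotropic covariance Σ = σ²I (a special case of the patent's diagonal choice); the function names are ours.

```python
import math

def subspace_gaussian_density(x, center, sigma):
    """Gaussian density p(X, u, Sigma) of a joint position x around a
    subspace center, with isotropic covariance sigma^2 * I, for which
    |Sigma|^(1/2) = sigma^n."""
    n = len(x)
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-d2 / (2 * sigma ** 2)) / ((2 * math.pi) ** (n / 2) * sigma ** n)

def gaussian_kernel(x, y, sigma):
    """Gaussian distance kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    used as the AP similarity between two feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))
```

The kernel equals 1 for identical vectors and decays toward 0 as the feature vectors move apart, which is what makes it usable directly as a similarity.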
The 3D Gaussian-space features projected into the human action space are clustered into n action groups, and the cluster center representing each group is obtained:
a) using the Gaussian distance kernel above, compute the Gaussian-density feature similarity s(x, y) of each pair of actions;
b) set very small similarities to 0, building a sparse similarity matrix;
c) take a reference (preference) value computed from the similarities, n being the number of samples; the cluster number is determined automatically by message passing between samples, and applying AP clustering with sparse-matrix support yields k' cluster-center actions.
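Steps a) and b) can be sketched as below. The preference (reference) value here is taken as the median of the non-zero similarities, a common default for affinity propagation that we assume in place of the original formula, which survives only as an image placeholder in the source.

```python
import math
import statistics

def sparse_similarity_matrix(features, sigma, eps=1e-6):
    """Pairwise Gaussian-kernel similarities with tiny values zeroed out,
    giving a sparse matrix suitable for AP; also returns an assumed
    preference value (median of the retained similarities)."""
    n = len(features)

    def k(x, y):
        return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

    S = [[0.0] * n for _ in range(n)]
    vals = []
    for i in range(n):
        for j in range(n):
            if i != j:
                s = k(features[i], features[j])
                if s > eps:           # sparsify: drop near-zero similarities
                    S[i][j] = s
                    vals.append(s)
    preference = statistics.median(vals) if vals else 0.0
    return S, preference
```

The sparse matrix and preference would then be fed to an AP implementation supporting sparse input, which returns the k' exemplar (cluster-center) actions.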
As shown in Fig. 2, in step 7, for each action group, the behavior feature word list is built from the cluster center to which each action belongs, and each group is cleaned:
a) replace every sample of the original action sequence by the action of the cluster center it belongs to, obtaining a string of visual words;
b) clean the visual word string of each behavior sample by deleting consecutively repeated words, reducing the influence of time offsets between different samples, and obtain the behavior feature word list.
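The two word-list steps reduce to a mapping plus a run-length collapse; a minimal sketch, in which the cluster-center labels are purely illustrative:

```python
from itertools import groupby

def to_word_string(actions, center_of):
    """Step a): replace every action sample by the word of the cluster
    center it belongs to."""
    return [center_of[a] for a in actions]

def clean_word_string(words):
    """Step b): delete consecutively repeated words to reduce the effect
    of time offsets between samples."""
    return [w for w, _ in groupby(words)]
```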
As shown in Fig. 2, in step 8, the human behavior CRF model is built and trained on samples to obtain the behavior recognition model.
The human behavior CRF model is shown in Fig. 4, where y_t is the predicted discrete state and x_t is the random action variable; the visual feature word list is obtained in step 7.
A refined illustration of the CRF model is shown in Fig. 5, which takes the right-hand-wave action as the example; its interest joint group comprises the right wrist, the right elbow and the head.
The human behavior recognition model obtained in step 8 is optimized with PSS (Schmidt, M., Graphical Model Structure Learning with L1-Regularization, 2010, University of British Columbia):
min_θ f(θ) = −log p_θ(Y|X) + r(θ),
where
r(θ) = λ1 Σ_i |θ_i| + λ2 ‖θ‖²,
p_θ(Y|X) = Π_{t=1}^{T+1} exp(Ψ_θ(y_t, X) + Ψ_θ(y_t, y_{t−1})) / Z_θ(X),
Ψ_θ(y_t, X) = Σ_{a=1}^{A} λ_a h_a(y_t, X),
Ψ_θ(y_t, y_{t−1}) = Σ_{b=1}^{B} β_b g_b(y_t, y_{t−1}),
Z_θ(X) = Σ_Y Π_{c∈C(Y,X)} φ_θ^c(Y_c, X_c), with φ_θ^c the potential function of clique c,
h_a(y_t, X) = 1[y_t = m]·x_j^i, where m ∈ Y and j ∈ [0, T],
g_b(y_t, y_{t−1}) = 1[y_t = m1 ∧ y_{t−1} = m2], where m1, m2 ∈ Y.
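The two CRF feature functions are simple indicators; below is a sketch under the assumption that the observation sequence X is indexed as X[j][i] and that labels are plain strings. The names and example labels are illustrative, not from the patent.

```python
def h_a(y_t, X, m, j, i):
    """State feature h_a(y_t, X) = 1[y_t = m] * x_j^i: the observation
    component x_j^i fires only when the current label equals m."""
    return X[j][i] if y_t == m else 0.0

def g_b(y_t, y_prev, m1, m2):
    """Transition feature g_b(y_t, y_{t-1}) = 1[y_t = m1 and y_{t-1} = m2]:
    an indicator on the label bigram."""
    return 1.0 if (y_t == m1 and y_prev == m2) else 0.0
```

Weighted sums of these features give the potentials Ψ_θ above; the weights λ_a and β_b are what the PSS optimizer learns under the L1+L2 regularizer r(θ).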
As shown in Figs. 2 and 6, step 9 recognizes new samples. The present invention recognizes generic behaviors with high accuracy, generalizes well over the action differences introduced by different experimental subjects, and also recognizes similar behaviors well.
Combining the 3D joint coordinate normalization, the selection of the joint set with a large recognition contribution, the interest joint group selection, the sparse 3D Gaussian-space feature representation and the human behavior CRF model builds a robust learning system.
The present invention has broad application prospects in fields such as video surveillance, human-computer interaction and video retrieval.
The above discloses only several specific embodiments of the present invention; however, the present invention is not limited thereto, and any variation that a person skilled in the art can conceive shall fall within the protection scope of the present invention.

Claims (9)

1. A 3D Gaussian-space human behavior recognition method based on image depth information, characterized by comprising the following steps:
Step 1: for the depth information of each frame, applying the per-pixel object recognition method based on a random decision forest classifier proposed by Shotton to detect the human body and obtain the 3D coordinates of the human joints;
Step 2: normalizing the 3D human joint coordinate data;
Step 3: screening the human joints, filtering out joints that contribute little to behavior recognition and redundant joints;
Step 4: analyzing each behavior class and, using the AP clustering algorithm, finding the joints with outstanding spatial travel within each behavior class to build the interest joint group;
Step 5: for each behavior class, computing the 3D Gaussian-space features of each action based on the interest joint group;
Step 6: using the AP clustering algorithm with a Gaussian distance kernel, clustering the 3D Gaussian-space features projected into the human action space into n action groups and obtaining the cluster center representing each group;
Step 7: for each action group, building the behavior feature word list from the cluster center to which each action belongs, and performing data cleaning on each group;
Step 8: building a human behavior conditional random field (CRF) model and training it on samples to obtain the behavior recognition model;
Step 9: recognizing new samples.
2. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that the normalization of the 3D human joint coordinate data in step 2 comprises limb-vector size normalization, skeleton reference-zero normalization and skeleton orientation normalization;
wherein the limb-vector size normalization comprises:
a) selecting one set of human 3D joint coordinates as the standard model;
b) keeping the direction of each limb-segment vector of each sample unchanged while scaling each vector to the standard-model length;
c) taking the hip center as the reference point, building the joint tree and moving each joint according to the scaled lengths, the movement vector being Δd = Σ_{i=1}^{n} Δd_{f_i}, where Δd_{f_i} is the movement vector of the i-th ancestor of the current node and n is the number of ancestors of the current node;
wherein the skeleton reference-zero normalization comprises: moving the skeleton so that the hip center becomes the zero point of a new coordinate reference space O';
wherein the skeleton orientation normalization comprises:
a) choosing the X axis so that it is parallel to the vector from the left hip to the right hip, and constructing a line through the hip center (the zero point of O') perpendicular to the new ground reference plane to obtain the Z axis of the new coordinate reference space;
b) rotating the skeleton to map it into the new coordinate reference space.
3. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that step 3 retains by screening the set of joints with a large contribution to behavior recognition, the retained set comprising 12 joints: the head, left/right elbows, left/right wrists, left/right knees, left/right ankles, left/right hips, and the hip center.
4. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that step 4 builds the interest joint group with the AP algorithm as follows:
a) computing the movement distance of each joint between consecutive frames: letting the coordinates of joint k in consecutive frames i and i+1 be (x_ik, y_ik, z_ik) and (x_{i+1,k}, y_{i+1,k}, z_{i+1,k}), the movement distance d_ik satisfies
d_ik² = (x_ik − x_{i+1,k})² + (y_ik − y_{i+1,k})² + (z_ik − z_{i+1,k})²;
b) accumulating all movement distances to obtain the total travel D_k of joint k:
D_k = Σ_{i=1}^{n−1} d_ik²;
c) using the AP algorithm with a specified cluster number and Euclidean distance as the similarity measure, dividing all joints into 3 classes according to the movement distances computed in the previous step;
d) discarding the class with the shortest movement distance and taking the two classes with the longer distances as the joints with the higher contribution, thereby building the interest joint group of the behavior.
5. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that the 3D Gaussian-space features of each action in step 5 are computed as follows:
a) dividing the 3D space into m × n × l (m, n, l ∈ Z) subspaces, so that each joint falls into exactly one subspace;
b) computing the subspace Gaussian density of the remaining 11 joints (all except the hip center):
(1) for each joint, computing its subspace Gaussian density
p(X, u, Σ) = exp(−(1/2)(X − u)^T Σ^{−1} (X − u)) / ((2π)^{n/2} |Σ|^{1/2}),
where X denotes the joint coordinates, u the subspace center and Σ the covariance matrix, with Σ = (d/3)·n·I, d being the diagonal length of each subspace, n the number of subspaces and I the identity matrix;
(2) since for a normal distribution 99% of the information lies within ±3 standard deviations (i.e. within d·n·I, n = 3.5), setting the subspace Gaussian density p(X, u, Σ) = 0 for any joint whose distance d_{joint,bin} from the subspace center exceeds ε (ε = d);
c) the subspace Gaussian densities of the 11 joints of each action forming a sparse motion feature representation.
6. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that the Gaussian distance kernel in step 6 is constructed as
k(x, y) = exp(−‖x − y‖² / (2σ²)),
where x and y denote two feature vectors and σ is the standard deviation.
7. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that the 3D Gaussian-space feature clustering of the human action space in step 6 is:
a) using the Gaussian distance kernel above, computing the Gaussian-density feature similarity s(x, y) of each pair of actions;
b) setting very small similarities to 0, building a sparse similarity matrix;
c) taking a reference (preference) value computed from the similarities, n being the number of samples; the cluster number is determined automatically by message passing between samples, and applying AP clustering with sparse-matrix support yields k' cluster-center actions.
8. The 3D Gaussian-space human behavior recognition method based on image depth information according to claim 1, characterized in that the behavior feature word list in step 7 is built as follows:
a) replacing every sample of the original action sequence by the action of the cluster center it belongs to, obtaining a string of visual words;
b) cleaning the visual word string of each behavior sample by deleting consecutively repeated words, reducing the influence of time offsets between different samples, and obtaining the behavior feature word list.
9. The 3D Gaussian space human behavior recognition method based on image depth information according to claim 1, characterized in that the human behavior recognition model obtained in step 8 is optimized using PSS:
$$\min_\theta f(\theta) = -\log p_\theta(Y|X) + r(\theta),$$
where
$$r(\theta) = \lambda_1 \sum_i |\theta_i| + \lambda_2 \|\theta\|^2,$$
$$p_\theta(Y|X) = \frac{\prod_{t=1}^{T+1} \exp\!\big(\Psi_\theta(y_t, X) + \Psi_\theta(y_t, y_{t-1})\big)}{Z_\theta(X)},$$
$$\Psi_\theta(y_t, X) = \sum_{a=1}^{A} \lambda_a h_a(y_t, X),$$
$$\Psi_\theta(y_t, y_{t-1}) = \sum_{b=1}^{B} \beta_b g_b(y_t, y_{t-1}),$$
$$Z_\theta(X) = \sum_Y \prod_{c \in C(Y,X)} \varphi_\theta^c(Y_c, X_c),$$
where $\varphi_\theta^c$ is the potential function corresponding to clique $c$,
$$h_a(y_t, X) = 1[y_t = m]\, x_j^i, \quad \text{where } m \in Y,\ j \in [0, T],$$
$$g_b(y_t, y_{t-1}) = 1[y_t = m_1 \wedge y_{t-1} = m_2], \quad \text{where } m_1, m_2 \in Y.$$
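As an illustration of the claim-9 objective, the sketch below evaluates the unnormalised linear-chain score (the state terms Ψ_θ(y_t, X) plus the transition terms Ψ_θ(y_t, y_{t-1})) together with the elastic-net penalty r(θ). The partition function Z_θ(X) and the PSS optimizer itself are omitted, and all names are hypothetical.

```python
import numpy as np

def crf_objective(theta_state, theta_trans, X, Y, lam1=1e-3, lam2=1e-3):
    """Unnormalised negative log-potential of a linear-chain CRF plus
    r(theta) = lam1 * ||theta||_1 + lam2 * ||theta||_2^2.
    theta_state: (num_labels, feat_dim) state-feature weights.
    theta_trans: (num_labels, num_labels) transition weights.
    X: sequence of feature vectors; Y: sequence of integer labels."""
    score, prev = 0.0, None
    for x_t, y_t in zip(X, Y):
        score += theta_state[y_t] @ x_t        # state term Psi(y_t, X)
        if prev is not None:
            score += theta_trans[prev, y_t]    # transition term Psi(y_t, y_{t-1})
        prev = y_t
    theta = np.concatenate([theta_state.ravel(), theta_trans.ravel()])
    r = lam1 * np.abs(theta).sum() + lam2 * (theta ** 2).sum()
    return -score + r
```

The L1 term drives many weights exactly to zero (feature selection), while the L2 term keeps the remaining weights small; PSS-style solvers exploit this structure when minimizing f(θ).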
CN201410009445.6A 2014-01-09 2014-01-09 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information Active CN103810496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410009445.6A CN103810496B (en) 2014-01-09 2014-01-09 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information


Publications (2)

Publication Number Publication Date
CN103810496A true CN103810496A (en) 2014-05-21
CN103810496B CN103810496B (en) 2017-01-25

Family

ID=50707237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410009445.6A Active CN103810496B (en) 2014-01-09 2014-01-09 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information

Country Status (1)

Country Link
CN (1) CN103810496B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875563A (en) * 2018-04-28 2018-11-23 尚谷科技(天津)有限公司 A human motion recognition method based on muscle signals

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090087024A1 (en) * 2007-09-27 2009-04-02 John Eric Eaton Context processor for video analysis system
CN102867175A (en) * 2012-08-31 2013-01-09 浙江捷尚视觉科技有限公司 Stereoscopic vision-based ATM (automatic teller machine) machine behavior analysis method
CN103400160A (en) * 2013-08-20 2013-11-20 中国科学院自动化研究所 Zero training sample behavior identification method
CN103500342A (en) * 2013-09-18 2014-01-08 华南理工大学 Human behavior recognition method based on accelerometer


Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106462747A (en) * 2014-06-17 2017-02-22 河谷控股Ip有限责任公司 Activity recognition systems and methods
CN104616028A (en) * 2014-10-14 2015-05-13 北京中科盘古科技发展有限公司 Method for recognizing posture and action of human limbs based on space division study
CN104616028B (en) * 2014-10-14 2017-12-12 北京中科盘古科技发展有限公司 Human body limb gesture actions recognition methods based on space segmentation study
CN106156714A (en) * 2015-04-24 2016-11-23 北京雷动云合智能技术有限公司 The Human bodys' response method merged based on skeletal joint feature and surface character
CN104951793B (en) * 2015-05-14 2018-04-17 西南科技大学 A human behavior recognition method based on STDF features
CN104951793A (en) * 2015-05-14 2015-09-30 西南科技大学 STDF (standard test data format) feature based human behavior recognition algorithm
CN106919947A (en) * 2015-12-25 2017-07-04 ***通信集团公司 A method and device for recognizing a user's eating behavior
CN106919947B (en) * 2015-12-25 2019-12-13 ***通信集团公司 method and device for recognizing eating behavior of user
CN105740815A (en) * 2016-01-29 2016-07-06 南京邮电大学 Human body behavior identification method based on deep recursive and hierarchical condition random fields
CN105740815B (en) * 2016-01-29 2018-12-18 南京邮电大学 A human behavior recognition method based on deep recursive hierarchical conditional random fields
WO2017133009A1 (en) * 2016-02-04 2017-08-10 广州新节奏智能科技有限公司 Method for positioning human joint using depth image of convolutional neural network
CN106021926B (en) * 2016-05-20 2019-06-18 北京九艺同兴科技有限公司 A real-time evaluation method for human action sequences
CN106021926A (en) * 2016-05-20 2016-10-12 北京九艺同兴科技有限公司 Real-time evaluation method of human body motion sequences
CN107341471B (en) * 2017-07-04 2019-10-01 南京邮电大学 A human behavior recognition method based on a two-layer conditional random field
CN107341471A (en) * 2017-07-04 2017-11-10 南京邮电大学 A human behavior recognition method based on a two-layer conditional random field
CN108564047A (en) * 2018-04-19 2018-09-21 北京工业大学 A human behavior recognition method based on 3D joint point sequences
CN108564047B (en) * 2018-04-19 2021-09-10 北京工业大学 Human behavior identification method based on 3D joint point sequence
CN108846348A (en) * 2018-06-07 2018-11-20 四川大学 A human behavior recognition method based on three-dimensional skeleton features
CN108846348B (en) * 2018-06-07 2022-02-11 四川大学 Human behavior recognition method based on three-dimensional skeleton characteristics
CN108830215A (en) * 2018-06-14 2018-11-16 南京理工大学 Hazardous act recognition methods based on personnel's framework information
CN108830215B (en) * 2018-06-14 2021-07-13 南京理工大学 Dangerous behavior identification method based on personnel skeleton information
CN108921810A (en) * 2018-06-20 2018-11-30 厦门美图之家科技有限公司 A color transfer method and computing device
CN108965850B (en) * 2018-07-05 2020-04-07 盎锐(上海)信息科技有限公司 Human body shape acquisition device and method
CN108965850A (en) * 2018-07-05 2018-12-07 盎锐(上海)信息科技有限公司 Apparatus and method for acquiring human body shape
CN110597251A (en) * 2019-09-03 2019-12-20 三星电子(中国)研发中心 Method and device for controlling intelligent mobile equipment
CN111310605A (en) * 2020-01-21 2020-06-19 北京迈格威科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111310605B (en) * 2020-01-21 2023-09-01 北京迈格威科技有限公司 Image processing method and device, electronic equipment and storage medium
CN111931804A (en) * 2020-06-18 2020-11-13 南京信息工程大学 RGBD camera-based automatic human body motion scoring method
CN111931804B (en) * 2020-06-18 2023-06-27 南京信息工程大学 Human body action automatic scoring method based on RGBD camera
CN112749671A (en) * 2021-01-19 2021-05-04 澜途集思生态科技集团有限公司 Human behavior recognition method based on video
CN113705542A (en) * 2021-10-27 2021-11-26 北京理工大学 Pedestrian behavior state identification method and system

Also Published As

Publication number Publication date
CN103810496B (en) 2017-01-25

Similar Documents

Publication Publication Date Title
CN103810496A (en) 3D (three-dimensional) Gaussian space human behavior identifying method based on image depth information
CN108229444B (en) Pedestrian re-identification method based on integral and local depth feature fusion
CN110147743B (en) Real-time online pedestrian analysis and counting system and method under complex scene
CN105787471B (en) A gesture recognition method for controlling mobile service robots that assist the elderly and the disabled
Pishchulin et al. Strong appearance and expressive spatial models for human pose estimation
CN100583127C A viewpoint-independent human motion recognition method based on template matching
CN106384093B (en) A human motion recognition method based on a denoising autoencoder and particle filter
CN103246884B (en) Real-time human action recognition method and device based on depth image sequences
CN106066996A (en) Local feature representation of human actions and its application to action recognition
CN105005769B (en) A sign language recognition method based on depth information
CN108052896A (en) Human behavior recognition method based on convolutional neural networks and support vector machines
Arif et al. Automated body parts estimation and detection using salient maps and Gaussian matrix model
CN104616028B (en) Human limb posture and action recognition method based on space-partition learning
CN106055091A (en) Hand posture estimation method based on depth information and calibration method
CN104866860A (en) Indoor human body behavior recognition method
CN106127804A (en) Target tracking method based on cross-modal feature learning from RGB-D data with a sparse depth denoising autoencoder
CN102682452A (en) Human motion tracking method combining generative and discriminative models
Khoshhal et al. Probabilistic LMA-based classification of human behaviour understanding using power spectrum technique
Englert Locally weighted learning
CN105373810A (en) Method and system for building action recognition model
CN106548194A (en) Construction and localization method of a 2D-image human joint point localization model
Trigueiros et al. Generic system for human-computer gesture interaction
CN103778439B (en) Human contour reconstruction method based on dynamic spatio-temporal information mining
CN117115911A (en) Hypergraph learning action recognition system based on attention mechanism
Zhang et al. Real-time action recognition based on a modified deep belief network model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220208

Address after: 400000 12-10, building 2, No. 297, Yunan Avenue, Banan District, Chongqing

Patentee after: Chongqing weiminghui Information Technology Co.,Ltd.

Address before: No. 1800 Lihu Avenue, Wuxi City, Jiangsu Province

Patentee before: Jiangnan University