CN108681700B - Complex behavior identification method - Google Patents
- Publication number: CN108681700B
- Application number: CN201810421670.9A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption, not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses a complex behavior identification method comprising the following steps: acquiring three-dimensional skeletal joint point information of the target motion with a sensor; preprocessing the joint point information and normalizing the coordinate system; extracting the motion trajectory of each joint point and projecting it onto three two-dimensional planes; extracting the motion vectors between every two adjacent frames together with their lengths and direction angles, clustering them with the k-means algorithm to obtain motion primitives, and counting these to obtain histograms; capturing temporal information with a temporal pyramid and calculating the weight of each joint point from the bin values of all histograms to form a descriptor; and classifying with an SVM (support vector machine) to realize action recognition. The invention extracts and effectively expresses the features of the skeletal joint point information of an action, thereby improving recognition accuracy; all motion information is completely retained, so the action can be reconstructed; clustering over all motion classes captures human motion characteristics globally; and the use of low-level features reduces computational cost, improves recognition efficiency, and meets the real-time requirements of the system.
Description
Technical Field
The invention relates to a complex behavior recognition method, and belongs to the technical field of image recognition.
Background
Action recognition is a research hotspot in the field of machine vision and is widely applied in human-computer interaction, virtual reality, video retrieval, security monitoring and other areas. With the development of depth cameras, human skeletal joint point information can be acquired directly, and action recognition methods based on skeletal features have greatly improved recognition accuracy. Despite many relevant studies and exciting results, effectively describing human actions remains a challenging task.
Many methods extract a variety of high-level features from skeletal information and then combine them in some form into descriptors, but descriptors constructed in this combined mode are not complete, and motion information is always lost. On the other hand, many methods train on the different motion classes separately, so the description of each individual motion class is biased with respect to the global characteristics of human motion. Methods using high-level features also incur excessive computational cost. There is therefore a need for an algorithm that uses low-level features to reduce computational cost and improve efficiency, while losing no motion information and extracting global features of human motion from all classes of motion.
Therefore, in order to solve the above technical problems, it is necessary to provide a complex behavior recognition method.
Disclosure of Invention
The invention aims to provide a complex behavior identification method that extracts and effectively expresses the features of the skeletal joint point information of an action, improving the accuracy of action recognition; it completely retains all motion information, so the action can be reconstructed; it clusters over all motion classes, capturing human motion characteristics globally; and by using low-level features it reduces computational cost, improves recognition efficiency, and meets the real-time requirements of the system.
In order to achieve the purpose of the invention, the technical scheme adopted by the invention is as follows: a complex behavior recognition method comprises the following steps:
(1) acquiring three-dimensional skeleton joint point information of target motion by using a depth sensor to acquire three-dimensional coordinates of each joint of a human body;
(2) preprocessing the information of the skeletal joint points and normalizing a coordinate system;
(3) extracting the motion trail of each skeleton joint point, and defining the motion between adjacent frames as motion-let;
(4) projecting the three-dimensional trajectory of each skeleton joint point to three two-dimensional planes to obtain a two-dimensional motion-let set of all skeleton joint points;
(5) calculating the vector length parameter and the direction angle parameter of each representative motion-let;
(6) collecting all vectors of all action classes of a single skeleton joint point, and performing two-dimensional clustering on length parameters and direction angle parameters of the single skeleton joint point by using a k-means algorithm to obtain motion elements;
(7) counting the number of motion-lets represented by each motion element to obtain a motion element histogram;
(8) capturing time information of the action by using the time pyramid;
(9) calculating the weight of each bone joint point by combining the values of all the clusters of all the histograms, and finally forming a descriptor;
(10) training the final descriptors with an SVM classifier to obtain a good division of the multi-class action descriptors and realize action recognition.
Preferably, step (2) comprises: normalizing the coordinate system by taking the vector from the left shoulder to the right shoulder in the first frame of the action sequence as the horizontal axis and the vector from the hip bone to the midpoint of the two shoulders as the vertical axis, converting the X-Y-Z coordinate system into the X'-Y'-Z' coordinate system.
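As an illustration of this normalization, the following NumPy sketch builds the body-centric frame from the shoulder and hip joints and rotates all joints into it. This is a hypothetical helper (the patent does not prescribe an implementation); the joint indices and the Gram-Schmidt orthogonalization of the vertical axis are assumptions.

```python
import numpy as np

def normalize_frame(joints, l_sh, r_sh, hip):
    """Rotate all joints of one frame into the body-centric X'-Y'-Z' frame.

    joints: (J, 3) array of 3D joint positions.
    l_sh, r_sh, hip: indices of the left shoulder, right shoulder and hip joints.
    """
    joints = np.asarray(joints, dtype=float)
    x_axis = joints[r_sh] - joints[l_sh]          # left-to-right shoulder: horizontal axis
    x_axis /= np.linalg.norm(x_axis)
    shoulder_mid = 0.5 * (joints[l_sh] + joints[r_sh])
    y_axis = shoulder_mid - joints[hip]           # hip to shoulder midpoint: vertical axis
    y_axis -= x_axis * np.dot(y_axis, x_axis)     # orthogonalize against the horizontal axis
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)             # completes a right-handed frame
    R = np.stack([x_axis, y_axis, z_axis])        # rows are the new basis vectors
    return (joints - joints[hip]) @ R.T           # coordinates in the X'-Y'-Z' frame
```

Applying the rotation computed on the first frame to every frame of the sequence, as the text describes, amounts to calling this once and reusing `R`.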
Preferably, the motion trajectory of each skeletal joint point is extracted in step (3) as follows:
The action sequence S of n frames is represented as:
S = {Γ_j | j ∈ [1, J]},
Γ_j = {p_j(t) | t ∈ [1, n]},
where Γ_j is the three-dimensional trajectory of joint point j, J is the total number of joint points, t is the frame index, and p_j(t) = (x_j(t), y_j(t), z_j(t)) is the position of joint point j in frame t.
The motion of a skeletal joint point between two adjacent frames is defined as a motion-let; the motion-let of skeletal joint point j between frame t and frame t+1 can be expressed as the vector
v_j(t) = p_j(t+1) − p_j(t).
The three-dimensional trajectory Γ_j of skeletal joint point j can thus be expressed as a sequence of vectors:
Γ_j = {v_j(t) | t ∈ [1, n−1]}.
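The motion-let sequence above is simply the frame-to-frame difference of the trajectory; a minimal NumPy sketch (function name is illustrative):

```python
import numpy as np

def motion_lets(trajectory):
    """Motion-lets v_j(t) = p_j(t+1) - p_j(t) for one joint's 3D trajectory.

    trajectory: (n, 3) array of positions over n frames.
    Returns an (n-1, 3) array of frame-to-frame displacement vectors.
    """
    return np.diff(np.asarray(trajectory, dtype=float), axis=0)
```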
Preferably, step (4) is specifically as follows:
Each 3D motion-let is projected onto the three two-dimensional planes:
v_j(t) → {v_j^xy(t), v_j^yz(t), v_j^xz(t)},
where v_j^xy(t), v_j^yz(t) and v_j^xz(t) are the 2D motion-lets on the x-y, y-z and x-z planes respectively, calculated as:
v_j^xy(t) = (x_j(t+1) − x_j(t), y_j(t+1) − y_j(t)),
v_j^yz(t) = (y_j(t+1) − y_j(t), z_j(t+1) − z_j(t)),
v_j^xz(t) = (x_j(t+1) − x_j(t), z_j(t+1) − z_j(t)).
Combining all motion-lets, the action sequence S is further represented as:
S = {v_j^xy(t), v_j^yz(t), v_j^xz(t) | j ∈ [1, J], t ∈ [1, n−1]}.
Preferably, step (5) is specifically as follows:
The parameters of skeletal joint point j in the x-y coordinate system are calculated as:
l_j^xy(t) = ‖v_j^xy(t)‖, θ_j^xy(t) = arctan2(y_j(t+1) − y_j(t), x_j(t+1) − x_j(t)).
The parameters of skeletal joint point j in the y-z coordinate system are calculated as:
l_j^yz(t) = ‖v_j^yz(t)‖, θ_j^yz(t) = arctan2(z_j(t+1) − z_j(t), y_j(t+1) − y_j(t)).
The parameters of skeletal joint point j in the x-z coordinate system are calculated as:
l_j^xz(t) = ‖v_j^xz(t)‖, θ_j^xz(t) = arctan2(z_j(t+1) − z_j(t), x_j(t+1) − x_j(t)),
where θ_j^xy(t), θ_j^yz(t) and θ_j^xz(t) are the direction-angle parameters of the motion-let of skeletal joint point j from frame t to frame t+1 on the three two-dimensional planes, ranging from −180° to 180°, and l_j^xy(t), l_j^yz(t) and l_j^xz(t) are the corresponding length parameters of those vectors.
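The projection and the two parameters per plane can be computed together; the sketch below (an illustrative helper, with the plane-to-axis mapping as an assumption consistent with the text) returns length and direction angle for each of the three planes.

```python
import numpy as np

def plane_params(v):
    """Length and direction-angle parameters of 3D motion-lets on the
    x-y, y-z and x-z planes.

    v: (m, 3) array of motion-lets. Returns a dict mapping plane name to a
    (lengths, angles_deg) pair; angles lie in [-180, 180] degrees.
    """
    planes = {"xy": (0, 1), "yz": (1, 2), "xz": (0, 2)}
    out = {}
    for name, (a, b) in planes.items():
        da, db = v[:, a], v[:, b]
        length = np.hypot(da, db)               # Euclidean norm within the plane
        angle = np.degrees(np.arctan2(db, da))  # direction angle in degrees
        out[name] = (length, angle)
    return out
```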
Preferably, step (6) is specifically as follows:
For skeletal joint point j in the x-y coordinate system, with K cluster centers, all cluster centers can be represented as the set U:
U = {u_k^xy | k ∈ [1, K]},
where u_k^xy = (θ_k^xy, l_k^xy) is the cluster center of the kth cluster, and θ_k^xy and l_k^xy are the coordinate values of that cluster center.
Each point is represented by the cluster center of its cluster; the K cluster centers represent the corresponding K motion primitives, the motion primitive of the kth cluster being p_k^xy = u_k^xy.
Thus all motion primitives P of the action sequence S are:
P = {p_k^xy, p_k^yz, p_k^xz | k ∈ [1, K], j ∈ [1, J]}.
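The clustering of (length, direction-angle) pairs can be sketched with a plain Lloyd-style k-means over one joint on one plane. This is a minimal illustration, not the patent's prescribed implementation; a library routine such as SciPy's `kmeans2` would serve equally well, and the Euclidean metric over the raw (length, angle) pairs is an assumption.

```python
import numpy as np

def kmeans_primitives(lengths, angles, k, iters=50, seed=0):
    """Cluster (length, angle) pairs of one joint on one plane into k
    motion primitives. Returns (centers, labels)."""
    pts = np.column_stack([lengths, angles]).astype(float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]  # init from data points
    for _ in range(iters):
        # distance of every point to every center, nearest-center assignment
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):            # leave empty clusters in place
                centers[c] = pts[labels == c].mean(axis=0)
    return centers, labels
```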
preferably, in step (7), all motion-lets in each two-dimensional plane of each bone joint point of the motion sequence S are represented by corresponding motion primitives, and the number of motion-lets represented by each motion primitive is counted to form a histogram, so that the values of all clusters in the histogram at three coordinates of the bone joint point j can be represented as Hj:
Wherein the content of the first and second substances,andthe values of the kth bin of the histogram of motion primitives for the three two-dimensional trajectories of the skeletal joint point j.
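Counting how many motion-lets each primitive represents is a one-line histogram over the cluster labels (illustrative helper name):

```python
import numpy as np

def primitive_histogram(labels, k):
    """One bin per motion primitive: count how many motion-lets were
    assigned to each of the k clusters."""
    return np.bincount(np.asarray(labels), minlength=k)
```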
Preferably, step (8) captures temporal information with a three-level temporal pyramid: the first level computes a motion-primitive histogram over the whole trajectory; the second level divides the trajectory into two parts by time and computes a histogram for each; the third level further divides each of those two parts in two. The 3D trajectory descriptor D_j of skeletal joint point j is finally obtained as:
D_j = [H_j^(1); H_j^(2,1); H_j^(2,2); H_j^(3,1); H_j^(3,2); H_j^(3,3); H_j^(3,4)].
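The 1 + 2 + 4 temporal slicing can be sketched as follows, operating on the per-frame primitive labels of one joint (a hypothetical helper; segment boundaries by even time split are an assumption):

```python
import numpy as np

def pyramid_descriptor(labels, k, levels=3):
    """Three-level temporal pyramid over the primitive labels of one joint:
    1 + 2 + 4 = 7 histograms of k bins each, concatenated."""
    labels = np.asarray(labels)
    parts = []
    n = len(labels)
    for lvl in range(levels):
        segs = 2 ** lvl                                   # 1, 2, 4 segments
        bounds = np.linspace(0, n, segs + 1).astype(int)  # even time split
        for s in range(segs):
            seg = labels[bounds[s]:bounds[s + 1]]
            parts.append(np.bincount(seg, minlength=k))
    return np.concatenate(parts)
```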
Preferably, step (9) is specifically as follows:
The length of the three-dimensional trajectory of each joint point over the whole action sequence is calculated to obtain the total motion amount M_j of each joint point:
M_j = Σ_{t=1}^{n−1} ‖v_j(t)‖.
The total motion amounts of all joint points of a sample are accumulated to obtain the total motion amount F_S of the action sample S:
F_S = Σ_{j=1}^{J} M_j.
All training samples of each action class are collected and arranged by sample number, forming the set:
F = {F_1, F_2, ..., F_e},
where e is the number of samples in the training set of that action class.
For each action class, the covariance of M_j and F is calculated to obtain a covariance set, specifically as follows:
The total motion amounts of all J joint points and the total motion amount F of the action sample form a (J+1)-dimensional random variable G:
G = (M_1, M_2, ..., M_J, F)^T.
Then the matrix C = (cov_ij), i, j ∈ [1, J+1], is the covariance matrix of the (J+1)-dimensional random variable G, with
cov_ij = E{[M_i − E(M_i)] × [M_j − E(M_j)]},
where F participates in the calculation as M_{J+1}, and E(M_j), the expectation of M_j, is estimated as the mean of M_j over the e training samples:
E(M_j) = (1/e) Σ_{s=1}^{e} M_j^(s).
The last column of the matrix C is the set of covariances of M_j with F. If the covariance cov_{j(J+1)} of M_j and F is less than 0, the weight w_j of joint point j is 0; if cov_{j(J+1)} is greater than or equal to 0, the weight w_j of joint point j is:
w_j = cov_{j(J+1)} / max{cov_{i(J+1)} | i ∈ [1, J+1]},
where max{cov_{i(J+1)} | i ∈ [1, J+1]} is the maximum value of the (J+1)th column of the matrix C.
Finally, the descriptor D of the action sequence S is represented as:
D = {D_j × w_j | j ∈ [1, J]}.
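The covariance-based weighting can be sketched directly with NumPy's covariance routine. This is a minimal illustration under the assumption that the per-sample motion amounts are already computed; the function name and array layout are hypothetical.

```python
import numpy as np

def joint_weights(M, F):
    """Weight each joint by the covariance between its motion amount and
    the sample's total motion amount, over one action class's training set.

    M: (e, J) array, M[s, j] = total motion amount of joint j in sample s.
    F: (e,) array, F[s] = total motion amount of sample s.
    Returns (J,) weights; negatively correlated joints get weight 0.
    """
    G = np.column_stack([M, F])        # the (J+1)-dimensional random variable G
    C = np.cov(G, rowvar=False)        # (J+1, J+1) covariance matrix
    cov_with_F = C[:-1, -1]            # last column: cov(M_j, F) for each joint
    top = C[:, -1].max()               # max over all J+1 entries of the last column
    return np.where(cov_with_F < 0, 0.0, cov_with_F / top)
```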
Due to the application of the above technical scheme, the invention has the following advantages over the prior art:
the method can extract and effectively express the characteristics of the action bone joint point information, thereby improving the accuracy of action identification; all motion information can be completely reserved, and motion reconstruction can be carried out; clustering all the motion classes, and capturing human motion characteristics from the whole situation; by using the low-level features, the calculation difficulty is reduced, the action recognition efficiency is improved, and the real-time requirement of the system can be met.
Drawings
FIG. 1 is a schematic diagram of the present invention showing the normalization of a coordinate system.
FIG. 2 is a schematic representation of the present invention projecting three-dimensional trajectories of skeletal joint points onto three two-dimensional planes.
Fig. 3 is a schematic diagram of the calculation method of the length parameter and the direction angle parameter of the present invention.
Fig. 4 is a schematic diagram of the clustering result of the vector two-dimensional parameters of all motion classes of the skeletal joint point on three planes.
FIG. 5 is a histogram of the bone joint points of the present invention in three coordinate systems, x-y, y-z, and x-z.
FIG. 6 is a schematic diagram of the temporal pyramid of the present invention.
Fig. 7 is a flow chart of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples:
the first embodiment is as follows: referring to fig. 7, a complex behavior recognition method includes the following steps:
(1) acquiring three-dimensional skeleton joint point information of target motion by using a depth sensor to acquire three-dimensional coordinates of each joint of a human body;
(2) preprocessing the skeletal joint point information and normalizing the coordinate system: as shown in FIG. 1, the vector from the left shoulder to the right shoulder is taken as the horizontal axis and the vector from the hip bone to the midpoint of the two shoulders as the vertical axis, converting the X-Y-Z coordinate system into the X'-Y'-Z' coordinate system;
(3) connecting the three-dimensional coordinates of each bone joint point in the action sequence according to the time sequence to obtain the three-dimensional tracks of all the bone joint points;
This embodiment uses a 60-frame action sequence S (swinging both hands) with 20 skeletal joint points, expressed as:
S = {Γ_j | j ∈ [1, 20]},
Γ_j = {p_j(t) | t ∈ [1, 60]},
where Γ_j is the three-dimensional trajectory of joint point j, t is the frame index, and p_j(t) = (x_j(t), y_j(t), z_j(t)) is the position of joint point j in frame t.
The motion of a skeletal joint point between two adjacent frames is defined as a motion-let; the motion-let of skeletal joint point j between frame t and frame t+1 can be expressed as the vector
v_j(t) = p_j(t+1) − p_j(t),
so the three-dimensional trajectory Γ_j of skeletal joint point j can be expressed as a sequence of vectors:
Γ_j = {v_j(t) | t ∈ [1, n−1]};
(4) for each skeletal joint, its three-dimensional trajectory is projected onto three two-dimensional planes, namely the x-y, y-z and x-z planes, resulting in three two-dimensional trajectories, as shown in FIG. 2.
Each 3D motion-let is projected onto the three two-dimensional planes:
v_j(t) → {v_j^xy(t), v_j^yz(t), v_j^xz(t)},
where v_j^xy(t), v_j^yz(t) and v_j^xz(t) are the 2D motion-lets on the x-y, y-z and x-z planes respectively, calculated as:
v_j^xy(t) = (x_j(t+1) − x_j(t), y_j(t+1) − y_j(t)),
v_j^yz(t) = (y_j(t+1) − y_j(t), z_j(t+1) − z_j(t)),
v_j^xz(t) = (x_j(t+1) − x_j(t), z_j(t+1) − z_j(t)).
Combining all motion-lets, the action sequence S can be further expressed as:
S = {v_j^xy(t), v_j^yz(t), v_j^xz(t) | j ∈ [1, 20], t ∈ [1, 59]}.
(5) The length and direction angle of the vector representing each motion-let are calculated, as shown in FIG. 3. For skeletal joint point j in the x-y coordinate system:
l_j^xy(t) = ‖v_j^xy(t)‖, θ_j^xy(t) = arctan2(y_j(t+1) − y_j(t), x_j(t+1) − x_j(t)).
The parameters of skeletal joint point j in the y-z coordinate system are calculated as:
l_j^yz(t) = ‖v_j^yz(t)‖, θ_j^yz(t) = arctan2(z_j(t+1) − z_j(t), y_j(t+1) − y_j(t)).
The parameters of skeletal joint point j in the x-z coordinate system are calculated as:
l_j^xz(t) = ‖v_j^xz(t)‖, θ_j^xz(t) = arctan2(z_j(t+1) − z_j(t), x_j(t+1) − x_j(t)),
where θ_j^xy(t), θ_j^yz(t) and θ_j^xz(t) are the direction-angle parameters of the motion-let of skeletal joint point j from frame t to frame t+1 on the three two-dimensional planes, ranging from −180° to 180°, and l_j^xy(t), l_j^yz(t) and l_j^xz(t) are the corresponding length parameters of those vectors.
(6) In order to globally extract human motion features, for each skeletal joint point, all vectors of the skeletal joint point of all classes of motion are collected, and the length parameter and the direction angle parameter of the vector are subjected to two-dimensional clustering by using a k-means algorithm. Taking a certain joint as an example, the clustering result is shown in fig. 4.
Taking the clustering result of skeletal joint point j in the x-y coordinate system as an example, with the number of cluster centers K = 8, all cluster centers can be represented as the set U:
U = {u_k^xy | k ∈ [1, 8]}.
Each point is represented by the cluster center of its cluster, and the 8 cluster centers represent the corresponding 8 motion primitives; e.g. the motion primitive of the kth cluster can be expressed as p_k^xy = u_k^xy.
Thus all motion primitives P of the action sequence S are:
P = {p_k^xy, p_k^yz, p_k^xz | k ∈ [1, 8], j ∈ [1, 20]}.
(7) for the motion sequence S, all motion-lets in each two-dimensional plane of each skeletal joint point are represented by corresponding motion primitives, and the number of motion-lets represented by each motion primitive is counted to form a histogram, as shown in fig. 5.
The centermost cluster and its corresponding motion primitive are excluded from the statistics and subsequent calculation, because that cluster contains vectors of all directions and therefore carries no discriminative information in direction angle. In addition, the vector length parameters of the centermost cluster are small and contribute little to the motion composition, so it can be ignored.
The bin values of the three histograms of skeletal joint point j can be represented as:
H_j = [h_j^xy, h_j^yz, h_j^xz],
where h_{j,k}^xy, h_{j,k}^yz and h_{j,k}^xz are the values of the kth bin of the motion-primitive histograms of the three two-dimensional trajectories of skeletal joint point j.
(8) A temporal pyramid is applied to capture the temporal information of the action. Taking a three-level pyramid as an example: the first level computes a motion-primitive histogram over the whole trajectory; the second level divides the trajectory into two parts by time and computes a histogram for each; the third level further divides each of those two parts in two. The description of skeletal joint point j is thus divided into 7 parts, as shown in FIG. 6: H_j^(1) of the first level, H_j^(2,1) and H_j^(2,2) of the second level, and H_j^(3,1), H_j^(3,2), H_j^(3,3) and H_j^(3,4) of the third level.
Thus the 3D trajectory descriptor D_j of skeletal joint point j is the concatenation of these 7 parts:
D_j = [H_j^(1); H_j^(2,1); H_j^(2,2); H_j^(3,1); H_j^(3,2); H_j^(3,3); H_j^(3,4)].
(9) Considering that different skeletal joint points contribute differently to the recognition of an action, the importance of key skeletal joint point descriptors should be increased. Taking the descriptor of skeletal joint point j as an example, the corresponding weight w_j is calculated.
Calculating the length of the three-dimensional track of all the joint points in the whole action sequence to respectively obtain the total motion amount m of all the joint pointsj:
The total motion quantity of all the joint points of the sample is accumulated to obtain the total motion quantity F of the motion sample SS:
All training samples of each type of action are collected and arranged according to sample numbers, and in this embodiment, if there are 80 samples in a training set of a certain type of action, there is a set:
F={F1,F2,...,F80}。
for each action class, calculate MjAnd F covariance to obtain a covariance set. The specific calculation steps are as follows: the total motion quantity of all 20 joint points and the total motion quantity F of the motion sample form a 21-dimensional random variable G:
G=(M1,M2,...,M20,F)T,
then matrix
Is a covariance matrix of 21-dimensional random variables G, and the covariance calculation formula is:
covij=E[Mi-E(Mi)]×E[Mj-E(Mj)],
wherein F is M21Participating in the calculation, E (M)j) Is MjThe calculation formula is as follows:
the last column of the matrix C is the covariance set of Mj and F. If the covariance cov of Mj and Fj(21)Less than 0, then the joint jWeight wjIs 0. If the covariance cov of Mj and Fj(21)If the weight is greater than or equal to 0, the weight w of the joint point jjComprises the following steps:
therein, max { covi(21)|i∈[1,21]It refers to the maximum value of column 21 of the matrix C.
Thus, the descriptor D of the action sequence S can be expressed as:
D={Dj×wj|j∈[1,20]}。
(10) The final descriptors are trained with an SVM classifier to obtain a good division of the multi-class action descriptors and realize action recognition.
Claims (9)
1. A complex behavior recognition method is characterized by comprising the following steps:
(1) acquiring three-dimensional skeleton joint point information of target motion by using a depth sensor to acquire three-dimensional coordinates of each joint of a human body;
(2) preprocessing the information of the skeletal joint points and normalizing a coordinate system;
(3) extracting the motion trail of each skeleton joint point, and defining the motion between adjacent frames as motion-let;
(4) projecting the three-dimensional trajectory of each skeleton joint point to three two-dimensional planes to obtain a two-dimensional motion-let set of all skeleton joint points;
(5) calculating the vector length parameter and the direction angle parameter of each representative motion-let;
(6) collecting all vectors of all action classes of a single skeleton joint point, and performing two-dimensional clustering on length parameters and direction angle parameters of the single skeleton joint point by using a k-means algorithm to obtain motion elements;
(7) counting the number of motion-lets represented by each motion element to obtain a motion element histogram;
(8) capturing time information of the action by using the time pyramid;
(9) calculating the weight of each bone joint point by combining the values of all the clusters of all the histograms, and finally forming a descriptor;
(10) training the final descriptors with an SVM classifier to obtain a good division of the multi-class action descriptors and realize action recognition.
2. The complex behavior recognition method according to claim 1, wherein step (2) comprises: normalizing the coordinate system by taking the vector from the left shoulder to the right shoulder in the first frame of the action sequence as the horizontal axis and the vector from the hip bone to the midpoint of the two shoulders as the vertical axis, converting the X-Y-Z coordinate system into the X'-Y'-Z' coordinate system.
3. The complex behavior recognition method according to claim 1, wherein the motion trajectory of each skeletal joint point is extracted in step (3), specifically as follows:
the action sequence S of n frames is represented as:
S = {Γ_j | j ∈ [1, J]},
Γ_j = {p_j(t) | t ∈ [1, n]},
where Γ_j is the three-dimensional trajectory of joint point j, J is the total number of joint points, t is the frame index, and p_j(t) is the position of joint point j in frame t;
the motion of a skeletal joint point between two adjacent frames is defined as a motion-let, and the motion-let of skeletal joint point j between frame t and frame t+1 can be represented as the vector v_j(t) = p_j(t+1) − p_j(t);
the three-dimensional trajectory Γ_j of skeletal joint point j can thus be expressed as a sequence of vectors:
Γ_j = {v_j(t) | t ∈ [1, n−1]}.
4. The complex behavior recognition method according to claim 3, wherein step (4) is specifically as follows:
each 3D motion-let is projected onto the three two-dimensional planes:
v_j(t) → {v_j^xy(t), v_j^yz(t), v_j^xz(t)},
where v_j^xy(t), v_j^yz(t) and v_j^xz(t) are the 2D motion-lets on the x-y, y-z and x-z planes respectively, calculated as:
v_j^xy(t) = (x_j(t+1) − x_j(t), y_j(t+1) − y_j(t)),
v_j^yz(t) = (y_j(t+1) − y_j(t), z_j(t+1) − z_j(t)),
v_j^xz(t) = (x_j(t+1) − x_j(t), z_j(t+1) − z_j(t));
combining all motion-lets, the action sequence S is further represented as:
S = {v_j^xy(t), v_j^yz(t), v_j^xz(t) | j ∈ [1, J], t ∈ [1, n−1]}.
5. The complex behavior recognition method according to claim 3, wherein step (5) is specifically as follows:
the parameters of skeletal joint point j in the x-y coordinate system are calculated as:
l_j^xy(t) = ‖v_j^xy(t)‖, θ_j^xy(t) = arctan2(y_j(t+1) − y_j(t), x_j(t+1) − x_j(t));
the parameters of skeletal joint point j in the y-z coordinate system are calculated as:
l_j^yz(t) = ‖v_j^yz(t)‖, θ_j^yz(t) = arctan2(z_j(t+1) − z_j(t), y_j(t+1) − y_j(t));
the parameters of skeletal joint point j in the x-z coordinate system are calculated as:
l_j^xz(t) = ‖v_j^xz(t)‖, θ_j^xz(t) = arctan2(z_j(t+1) − z_j(t), x_j(t+1) − x_j(t)),
where θ_j^xy(t), θ_j^yz(t) and θ_j^xz(t) are the direction-angle parameters of the motion-let of skeletal joint point j from frame t to frame t+1 on the three two-dimensional planes, ranging from −180° to 180°, and l_j^xy(t), l_j^yz(t) and l_j^xz(t) are the corresponding length parameters of those vectors.
6. The complex behavior recognition method according to claim 1, wherein step (6) is specifically as follows:
for skeletal joint point j in the x-y coordinate system, with K cluster centers, all cluster centers can be represented as the set U:
U = {u_k^xy | k ∈ [1, K]},
where u_k^xy = (θ_k^xy, l_k^xy) is the cluster center of the kth cluster, and θ_k^xy and l_k^xy are the coordinate values of that cluster center;
each point is represented by the cluster center of its cluster, the K cluster centers represent the corresponding K motion primitives, and the motion primitive of the kth cluster is expressed as p_k^xy = u_k^xy;
thus all motion primitives P of the action sequence S are:
P = {p_k^xy, p_k^yz, p_k^xz | k ∈ [1, K], j ∈ [1, J]}.
7. The complex behavior recognition method according to claim 6, wherein in step (7), all motion-lets in each two-dimensional plane of each skeletal joint point of the action sequence S are represented by their corresponding motion primitives, and the number of motion-lets represented by each motion primitive is counted to form a histogram, so that the bin values of the three histograms of skeletal joint point j can be represented as H_j = [h_j^xy, h_j^yz, h_j^xz].
8. The complex behavior recognition method according to claim 1, wherein step (8) captures the temporal information with a three-level temporal pyramid: the first level computes a motion-primitive histogram over the whole trajectory; the second level divides the trajectory into two parts by time and computes a histogram for each; the third level further divides each of those two parts in two; finally the 3D trajectory descriptor D_j of skeletal joint point j is obtained as D_j = [H_j^(1); H_j^(2,1); H_j^(2,2); H_j^(3,1); H_j^(3,2); H_j^(3,3); H_j^(3,4)].
9. The complex behavior recognition method according to claim 3, wherein step (9) is specifically as follows:
the length of the three-dimensional trajectory of each joint point over the whole action sequence is calculated to obtain the total motion amount M_j of each joint point:
M_j = Σ_{t=1}^{n−1} ‖v_j(t)‖;
the total motion amounts of all joint points of the action sequence are accumulated to obtain the total motion amount F_S of the action sequence S:
F_S = Σ_{j=1}^{J} M_j;
all training samples of each action class are collected and arranged by sample number, forming the set:
F = {F_1, F_2, ..., F_e},
where e is the number of samples in the training set of that action class;
for each action class, the covariance of M_j and F is calculated to obtain a covariance set, specifically as follows: the total motion amounts of all J joint points and the total motion amount F of the action sample form a (J+1)-dimensional random variable G:
G = (M_1, M_2, ..., M_J, F)^T;
then the matrix C = (cov_ij), i, j ∈ [1, J+1], is the covariance matrix of the (J+1)-dimensional random variable G, with
cov_ij = E{[M_i − E(M_i)] × [M_j − E(M_j)]},
where F participates in the calculation as M_{J+1}, and E(M_j), the expectation of M_j, is estimated as the mean of M_j over the e training samples;
the last column of the matrix C is the set of covariances of M_j with F; if the covariance cov_{j(J+1)} of M_j and F is less than 0, the weight w_j of joint point j is 0; if cov_{j(J+1)} is greater than or equal to 0, the weight w_j of joint point j is:
w_j = cov_{j(J+1)} / max{cov_{i(J+1)} | i ∈ [1, J+1]},
where max{cov_{i(J+1)} | i ∈ [1, J+1]} is the maximum value of the (J+1)th column of the matrix C;
finally, the descriptor D of the action sequence S is represented as:
D = {D_j × w_j | j ∈ [1, J]}.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810421670.9A CN108681700B (en) | 2018-05-04 | 2018-05-04 | Complex behavior identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108681700A CN108681700A (en) | 2018-10-19 |
CN108681700B true CN108681700B (en) | 2021-09-28 |