CN108681700A - A complex behavior recognition method - Google Patents

A complex behavior recognition method

Info

Publication number
CN108681700A
CN108681700A (application CN201810421670.9A); granted as CN108681700B
Authority
CN
China
Prior art keywords
joint point
skeletal joint
motion
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810421670.9A
Other languages
Chinese (zh)
Other versions
CN108681700B (en)
Inventor
杨剑宇
朱晨
黄瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University
Priority to CN201810421670.9A
Publication of CN108681700A
Application granted
Publication of CN108681700B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a complex behavior recognition method, comprising: acquiring the three-dimensional skeletal joint point information of a target motion with a sensor; preprocessing the joint point information and normalizing the coordinate system; extracting the motion trajectory of each joint point and projecting it onto three two-dimensional planes; extracting the motion vector between every two adjacent frames together with its length and direction angle, clustering these with the k-means algorithm to obtain movement primitives, and counting them into histograms; capturing temporal information with a time pyramid, and computing the weight of each joint point from the values of every cluster of all histograms to form the descriptor; and classifying with an SVM to realize action recognition. The invention extracts and effectively represents features from action skeletal joint point information, improving the accuracy of action recognition; it fully retains all motion information, so actions can be reconstructed; it clusters over all action classes, capturing human motion features globally; and, by using low-level features, it reduces computational difficulty and improves action recognition efficiency, meeting the real-time requirements of a system.

Description

A complex behavior recognition method
Technical field
The present invention relates to a complex behavior recognition method and belongs to the technical field of image recognition.
Background technology
Action recognition is a research hotspot in the field of machine vision, and action recognition methods have been widely applied to human-computer interaction, virtual reality, video retrieval, security monitoring, and so on. With the development of depth cameras, skeletal joint point information can be acquired directly, and action recognition methods based on skeleton features have greatly improved the accuracy of action recognition. Although much related research exists, with exciting results, an effective description of human actions remains a challenging task.
Many methods extract a variety of high-level features from skeleton information and then combine them in some form into a descriptor, but a descriptor constructed in this combined mode is not complete, and there is always a loss of motion information. On the other hand, many methods train each action class separately, which biases the description of motion within each independent class away from the global characteristics of human actions. Meanwhile, methods using high-level features also suffer from excessive computational cost. It is necessary to design an algorithm that uses low-level features to reduce computational cost and improve algorithmic efficiency, loses no motion information, and extracts the global characteristics of human actions from the actions of all categories.
Therefore, in view of the above technical problems, it is necessary to propose a complex behavior recognition method.
Summary of the invention
The object of the present invention is to provide a complex behavior recognition method that extracts and effectively represents features from action skeletal joint point information, improving the accuracy of action recognition; fully retains all motion information, allowing action reconstruction; clusters over all action classes, capturing human motion features globally; and uses low-level features to reduce computational difficulty and improve action recognition efficiency, meeting the real-time requirements of a system.
To achieve the above object of the invention, the present invention adopts the following technical solution: a complex behavior recognition method, comprising the following steps:
(1) Acquire the three-dimensional skeletal joint point information of the target motion with a depth sensor, obtaining the three-dimensional coordinates of each joint of the human body;
(2) Preprocess the skeletal joint point information and normalize the coordinate system;
(3) Extract the motion trajectory of each skeletal joint point, defining the motion between adjacent frames as a motion-let;
(4) Project the three-dimensional trajectory of each skeletal joint point onto three two-dimensional planes, obtaining the set of two-dimensional motion-lets of all skeletal joint points;
(5) Compute the length parameter and direction angle parameter of each vector representing a motion-let;
(6) Pool all the vectors of a single skeletal joint point over all action classes, and cluster their length and direction angle parameters in two dimensions with the k-means algorithm, obtaining movement primitives;
(7) Count the number of motion-lets represented by each movement primitive, obtaining the movement primitive histograms;
(8) Capture the temporal information of the action with a time pyramid;
(9) Combine the values of every cluster of all the histograms to compute the weight of each skeletal joint point, finally forming the descriptor;
(10) Train on the final descriptors with an SVM classifier, obtaining a good division of the descriptors of the action categories and realizing action recognition.
Preferably, step (2) comprises: taking the left-shoulder-to-right-shoulder vector of the first frame of the action sequence as the horizontal axis and the hip-to-shoulder-midpoint vector as the vertical axis, normalizing the coordinate system and converting the X-Y-Z coordinate system to the X'-Y'-Z' coordinate system.
Preferably, the motion trajectory of each skeletal joint point is extracted in step (3) as follows:
An action sequence S of n frames is expressed as:
S = {Γ_j | j ∈ [1, J]},
Γ_j = {p_j(t) | t ∈ [1, n]},
where Γ_j is the three-dimensional trajectory of joint point j, J is the total number of joint points, t is the frame index, and p_j(t) is the position of joint point j in frame t:
p_j(t) = (x_j(t), y_j(t), z_j(t)).
The motion of a skeletal joint point between two consecutive frames is defined as a motion-let, and the motion-let of skeletal joint point j between frame t and frame t+1 can be expressed as the vector v_j(t):
v_j(t) = p_j(t+1) - p_j(t).
The three-dimensional trajectory Γ_j of skeletal joint point j can then be expressed as a vector sequence:
Γ_j = {v_j(t) | t ∈ [1, n-1]}.
Preferably, step (4) is specifically as follows:
Each 3D motion-let is projected onto three two-dimensional planes, giving
v_j(t) → (v_j^{xy}(t), v_j^{yz}(t), v_j^{xz}(t)),
where v_j^{xy}(t), v_j^{yz}(t) and v_j^{xz}(t) are the 2D motion-lets on the three two-dimensional planes, computed as
v_j^{xy}(t) = (x_j(t+1) - x_j(t), y_j(t+1) - y_j(t)),
v_j^{yz}(t) = (y_j(t+1) - y_j(t), z_j(t+1) - z_j(t)),
v_j^{xz}(t) = (x_j(t+1) - x_j(t), z_j(t+1) - z_j(t)).
Combining all the motion-lets, the action sequence S is further expressed as
S = {v_j^{xy}(t), v_j^{yz}(t), v_j^{xz}(t) | j ∈ [1, J], t ∈ [1, n-1]}.
Preferably, step (5) is specifically as follows:
For skeletal joint point j in the x-y coordinate system, the parameters are computed as
θ_j^{xy}(t) = arctan2(y_j(t+1) - y_j(t), x_j(t+1) - x_j(t)),
l_j^{xy}(t) = |v_j^{xy}(t)|;
in the y-z coordinate system as
θ_j^{yz}(t) = arctan2(z_j(t+1) - z_j(t), y_j(t+1) - y_j(t)),
l_j^{yz}(t) = |v_j^{yz}(t)|;
and in the x-z coordinate system as
θ_j^{xz}(t) = arctan2(z_j(t+1) - z_j(t), x_j(t+1) - x_j(t)),
l_j^{xz}(t) = |v_j^{xz}(t)|,
where θ_j^{xy}(t), θ_j^{yz}(t) and θ_j^{xz}(t) are the direction angle parameters of the vectors corresponding to the motion-let of skeletal joint point j from frame t to frame t+1 in the three two-dimensional planes, with values ranging from -180° to 180°, and l_j^{xy}(t), l_j^{yz}(t) and l_j^{xz}(t) are the corresponding length parameters.
Preferably, step (6) is specifically as follows:
For skeletal joint point j in the x-y coordinate system, with K cluster centres, all the cluster centres can be expressed as the set U:
U_j^{xy} = {u_k^{xy} | k ∈ [1, K]},
where u_k^{xy} = (θ_k^{xy}, l_k^{xy}) is the centre of the k-th cluster and θ_k^{xy} and l_k^{xy} are its coordinate values.
Each point is represented by the centre of the cluster it belongs to, so the K cluster centres represent K corresponding movement primitives; the movement primitive P_{j,k}^{xy} of the k-th cluster is expressed as
P_{j,k}^{xy} = u_k^{xy} = (θ_k^{xy}, l_k^{xy}).
Therefore, all the movement primitives P of the action sequence S are
P = {P_{j,k}^{xy}, P_{j,k}^{yz}, P_{j,k}^{xz} | j ∈ [1, J], k ∈ [1, K]}.
Preferably, in step (7), all the motion-lets under each two-dimensional plane of each skeletal joint point of the motion sequence S are represented by their corresponding movement primitives, and the number of motion-lets represented by each movement primitive is counted to form a histogram, so the values of all the clusters in the histograms of skeletal joint point j under the three coordinate systems can be expressed as H_j:
H_j = {h_j^{xy}(k), h_j^{yz}(k), h_j^{xz}(k) | k ∈ [1, K]},
where h_j^{xy}(k), h_j^{yz}(k) and h_j^{xz}(k) are the values of the k-th bin of the movement primitive histograms of the three two-dimensional trajectories of skeletal joint point j.
Preferably, step (8) uses a three-layer time pyramid to introduce temporal information: the first layer computes the movement primitive histogram for the entire trajectory; the second layer divides the trajectory evenly in time into two parts and computes a movement primitive histogram for each; the third layer further bisects each of the two parts. The final 3D trajectory descriptor D_j of skeletal joint point j is
D_j = {H_j^1, H_j^{2,1}, H_j^{2,2}, H_j^{3,1}, H_j^{3,2}, H_j^{3,3}, H_j^{3,4}}.
Preferably, step (9) is specifically as follows:
Over the entire action sequence, compute the length of the three-dimensional trajectory of every joint point, obtaining each joint point's total motion:
m_j = Σ_{t=1}^{n-1} |v_j(t)|.
Accumulate the total motion of all joint points of the sample to obtain the total motion F_S of the action sample S:
F_S = Σ_{j=1}^{J} m_j.
Pool all training samples of each action class and arrange them by sample index, giving the set:
F = {F_1, F_2, ..., F_e},
where e is the number of samples in the training set of the action class.
For each action class, compute the covariance of M_j and F to obtain the covariance set, where M_j collects the total motion m_j of joint point j over the e training samples. The covariance set is computed as follows:
The total motions of all J joint points and the total motion F of the action sample form a (J+1)-dimensional random variable G:
G = (M_1, M_2, ..., M_J, F)^T.
Then the matrix
C = (cov_{ij}), i, j ∈ [1, J+1],
is the covariance matrix of the (J+1)-dimensional random variable G, with the covariances computed as
cov_{ij} = E[(M_i - E(M_i))(M_j - E(M_j))],
where F participates in the computation as M_{J+1}, and E(M_j) is the mathematical expectation of M_j:
E(M_j) = (1/e) Σ_{s=1}^{e} m_j^{(s)}.
The last column of matrix C is the covariance set of M_j and F. If the covariance cov_{j(J+1)} of M_j and F is less than 0, the weight w_j of joint point j is 0; if cov_{j(J+1)} is greater than or equal to 0, the weight w_j of joint point j is
w_j = cov_{j(J+1)} / max{cov_{i(J+1)} | i ∈ [1, J+1]},
where max{cov_{i(J+1)} | i ∈ [1, J+1]} is the maximum value of the (J+1)-th column of matrix C.
Finally, the descriptor D of the action sequence S is expressed as:
D = {D_j × w_j | j ∈ [1, J]}.
Owing to the adoption of the above technical solution, the present invention has the following advantages over the prior art:
The method extracts and effectively represents features from action skeletal joint point information, improving the accuracy of action recognition; it fully retains all motion information, so actions can be reconstructed; it clusters over all action classes, capturing human motion features globally; and, by using low-level features, it reduces computational difficulty and improves action recognition efficiency, meeting the real-time requirements of a system.
Description of the drawings
Fig. 1 is a schematic diagram of the coordinate system normalization of the present invention.
Fig. 2 is a schematic diagram of projecting the three-dimensional trajectory of a skeletal joint point onto three two-dimensional planes according to the present invention.
Fig. 3 is a schematic diagram of the computation of the length parameter and the direction angle parameter of the present invention.
Fig. 4 is a schematic diagram of the clustering results, in the three planes, of the two-dimensional vector parameters of a skeletal joint point over all action categories according to the present invention.
Fig. 5 shows the histograms of a skeletal joint point of the present invention under the three coordinate systems x-y, y-z and x-z.
Fig. 6 is the time pyramid schematic diagram of the present invention.
Fig. 7 is the flow chart of the present invention.
Specific embodiments
The invention will be further described below with reference to the accompanying drawings and embodiments:
Embodiment one: Referring to Fig. 7, a complex behavior recognition method comprises the following steps:
(1) Acquire the three-dimensional skeletal joint point information of the target motion with a depth sensor, obtaining the three-dimensional coordinates of each joint of the human body;
(2) Preprocess the skeletal joint point information and normalize the coordinate system. Referring to Fig. 1, take the left-shoulder-to-right-shoulder vector as the horizontal axis and the hip-to-shoulder-midpoint vector as the vertical axis; normalizing the coordinate system converts the X-Y-Z coordinate system to the X'-Y'-Z' coordinate system.
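For illustration only, a minimal Python sketch of such a normalization is given below (it is not part of the claimed method). The (n_frames, n_joints, 3) input layout and the joint indices for the shoulders and the hip centre are assumptions that depend on the sensor's skeleton format, and the Gram-Schmidt orthogonalization is one plausible way to obtain a proper rotation that the text does not spell out:

```python
import numpy as np

def normalize_coordinates(joints, l_shoulder=4, r_shoulder=8, hip=0):
    """Rotate the skeleton so the first frame's left->right shoulder vector
    becomes the X axis and its hip->shoulder-midpoint vector the Y axis.
    `joints` is (n_frames, n_joints, 3); the joint indices are placeholders
    that depend on the sensor's skeleton layout."""
    first = joints[0]
    x_axis = first[r_shoulder] - first[l_shoulder]
    x_axis /= np.linalg.norm(x_axis)
    y_ref = (first[l_shoulder] + first[r_shoulder]) / 2 - first[hip]
    # Make Y orthogonal to X (Gram-Schmidt), then complete a right-handed basis.
    y_axis = y_ref - np.dot(y_ref, x_axis) * x_axis
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)
    R = np.stack([x_axis, y_axis, z_axis])   # rows are the new basis vectors
    return (joints - first[hip]) @ R.T       # coordinates in the X'-Y'-Z' system
```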
(3) Connect the three-dimensional coordinates of each skeletal joint point in the action sequence in chronological order to obtain the three-dimensional trajectories of all skeletal joint points.
This embodiment uses an action sequence S of 60 frames with 20 skeletal joint points (waving both hands horizontally), expressed as:
S = {Γ_j | j ∈ [1, 20]},
Γ_j = {p_j(t) | t ∈ [1, 60]},
where Γ_j is the three-dimensional trajectory of joint point j, J is the total number of joint points, t is the frame index, and p_j(t) is the position of joint point j in frame t:
p_j(t) = (x_j(t), y_j(t), z_j(t)).
The motion of a skeletal joint point between two consecutive frames is defined as a motion-let, and the motion-let of skeletal joint point j between frame t and frame t+1 can be expressed as the vector v_j(t):
v_j(t) = p_j(t+1) - p_j(t).
The three-dimensional trajectory Γ_j of skeletal joint point j can then be expressed as a vector sequence:
Γ_j = {v_j(t) | t ∈ [1, n-1]}.
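A minimal sketch of this step, assuming each joint's trajectory is stored as an (n_frames, 3) NumPy array:

```python
import numpy as np

def motion_lets(trajectory):
    """Motion-lets v_j(t) = p_j(t+1) - p_j(t) for one joint.
    `trajectory` has shape (n_frames, 3); the result has shape
    (n_frames - 1, 3), one displacement vector per consecutive frame pair."""
    return np.diff(trajectory, axis=0)
```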
(4) For each skeletal joint point, project its three-dimensional trajectory onto three two-dimensional planes, i.e. the x-y, y-z and x-z planes, obtaining three two-dimensional trajectories, as shown in Fig. 2.
Each 3D motion-let is thus projected onto the three two-dimensional planes, giving
v_j(t) → (v_j^{xy}(t), v_j^{yz}(t), v_j^{xz}(t)),
where v_j^{xy}(t), v_j^{yz}(t) and v_j^{xz}(t) are the 2D motion-lets on the three two-dimensional planes, computed as
v_j^{xy}(t) = (x_j(t+1) - x_j(t), y_j(t+1) - y_j(t)),
v_j^{yz}(t) = (y_j(t+1) - y_j(t), z_j(t+1) - z_j(t)),
v_j^{xz}(t) = (x_j(t+1) - x_j(t), z_j(t+1) - z_j(t)).
Combining all the motion-lets, the action sequence S can be further expressed as
S = {v_j^{xy}(t), v_j^{yz}(t), v_j^{xz}(t) | j ∈ [1, 20], t ∈ [1, 59]}.
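Continuing the sketch above, the projection amounts to dropping one coordinate per plane:

```python
def project_to_planes(v):
    """Project 3D motion-lets onto the x-y, y-z and x-z planes.
    `v` is an (n-1, 3) array; each returned array is (n-1, 2)."""
    return v[:, [0, 1]], v[:, [1, 2]], v[:, [0, 2]]
```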
(5) Compute the length and direction angle of each vector representing a motion-let, as shown in Fig. 3. For skeletal joint point j in the x-y coordinate system, the two parameters are computed as
θ_j^{xy}(t) = arctan2(y_j(t+1) - y_j(t), x_j(t+1) - x_j(t)),
l_j^{xy}(t) = |v_j^{xy}(t)|;
in the y-z coordinate system as
θ_j^{yz}(t) = arctan2(z_j(t+1) - z_j(t), y_j(t+1) - y_j(t)),
l_j^{yz}(t) = |v_j^{yz}(t)|;
and in the x-z coordinate system as
θ_j^{xz}(t) = arctan2(z_j(t+1) - z_j(t), x_j(t+1) - x_j(t)),
l_j^{xz}(t) = |v_j^{xz}(t)|,
where θ_j^{xy}(t), θ_j^{yz}(t) and θ_j^{xz}(t) are the direction angle parameters of the vectors corresponding to the motion-let of skeletal joint point j from frame t to frame t+1 in the three two-dimensional planes, with values ranging from -180° to 180°, and l_j^{xy}(t), l_j^{yz}(t) and l_j^{xz}(t) are the corresponding length parameters.
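A sketch of the parameter computation, using arctan2 so the angles fall in the stated -180° to 180° range:

```python
import numpy as np

def polar_parameters(v2d):
    """Direction angle (degrees, via arctan2) and length of each 2D
    motion-let; `v2d` has shape (n-1, 2)."""
    angles = np.degrees(np.arctan2(v2d[:, 1], v2d[:, 0]))
    lengths = np.linalg.norm(v2d, axis=1)
    return angles, lengths
```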
(6) To extract human motion features globally, for each skeletal joint point, pool all the vectors of that joint point over the actions of all categories, and cluster their length and direction angle parameters in two dimensions with the k-means algorithm. Taking one joint point as an example, the clustering result is shown in Fig. 4.
Taking the clustering result of skeletal joint point j in the x-y coordinate system as an example, with the number of cluster centres K set to 8, all the cluster centres can be expressed as the set U:
U_j^{xy} = {u_k^{xy} | k ∈ [1, 8]},
where u_k^{xy} = (θ_k^{xy}, l_k^{xy}) is the centre of the k-th cluster and θ_k^{xy} and l_k^{xy} are its coordinate values.
Each point is represented by the centre of the cluster it belongs to, so the 8 cluster centres represent 8 corresponding movement primitives. For example, the movement primitive P_{j,k}^{xy} of the k-th cluster can be expressed as
P_{j,k}^{xy} = u_k^{xy} = (θ_k^{xy}, l_k^{xy}).
Therefore, all the movement primitives P of the action sequence S are
P = {P_{j,k}^{xy}, P_{j,k}^{yz}, P_{j,k}^{xz} | j ∈ [1, 20], k ∈ [1, 8]}.
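One plausible implementation of this clustering, using scikit-learn's k-means; note that, following the text as written, the angle is treated as an ordinary Euclidean coordinate with no wrap-around at ±180°:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_movement_primitives(angles, lengths, k=8, seed=0):
    """k-means over the (direction angle, length) pairs of one joint in one
    plane, pooled across samples of every action class; the k cluster
    centres are the movement primitives. k=8 follows the embodiment."""
    params = np.column_stack([angles, lengths])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(params)
```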
(7) For the motion sequence S, all the motion-lets under each two-dimensional plane of each skeletal joint point are represented by their corresponding movement primitives; count the number of motion-lets represented by each movement primitive to form a histogram, as shown in Fig. 5.
The innermost cluster of the clustering result and its corresponding movement primitive do not take part in the statistics or the subsequent computation, because the central cluster contains vectors of all directions and offers no meaningful division by direction angle. In addition, the vector length parameters of the innermost cluster are all very small, so its contribution to the composition of the motion is minor and can be neglected.
The values of all the clusters in the three histograms of skeletal joint point j can be expressed as H_j:
H_j = {h_j^{xy}(k), h_j^{yz}(k), h_j^{xz}(k) | k ∈ [1, K]},
where h_j^{xy}(k), h_j^{yz}(k) and h_j^{xz}(k) are the values of the k-th bin of the movement primitive histograms of the three two-dimensional trajectories of skeletal joint point j.
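A sketch of the histogram step; identifying the discarded central cluster as the one whose centre has the smallest length parameter is an assumption, since the text only describes it as the innermost, direction-less cluster:

```python
import numpy as np

def primitive_histogram(kmeans, angles, lengths, skip_center=True):
    """Count motion-lets per movement primitive. If `skip_center` is set,
    the bin of the cluster whose centre has the smallest length parameter
    (assumed to be the discarded innermost cluster) is dropped."""
    labels = kmeans.predict(np.column_stack([angles, lengths]))
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    if skip_center:
        hist = np.delete(hist, np.argmin(kmeans.cluster_centers_[:, 1]))
    return hist
```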
(8) Apply a time pyramid to capture the temporal information of the action. Taking a three-layer time pyramid as an example, the first layer computes the movement primitive histogram for the entire trajectory; the second layer divides the trajectory evenly in time into two parts and computes a movement primitive histogram for each; the third layer further bisects each of the two parts. The description of skeletal joint point j is therefore divided into 7 parts, as shown in Fig. 6: the first layer H_j^1; the second layer H_j^{2,1} and H_j^{2,2}; the third layer H_j^{3,1}, H_j^{3,2}, H_j^{3,3} and H_j^{3,4}.
Therefore, the 3D trajectory descriptor D_j of skeletal joint point j is the combination of the above 7 parts:
D_j = {H_j^1, H_j^{2,1}, H_j^{2,2}, H_j^{3,1}, H_j^{3,2}, H_j^{3,3}, H_j^{3,4}}.
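A sketch of the three-layer pyramid for one joint in one plane, where `hist_fn` stands in for the histogram computation of step (7) applied to a slice of motion-lets:

```python
import numpy as np

def time_pyramid_descriptor(v2d, hist_fn):
    """Three-layer time pyramid: the whole trajectory, its two halves and
    its four quarters give 7 histograms, concatenated into D_j. `hist_fn`
    maps a slice of 2D motion-lets to a histogram vector, e.g.
    lambda seg: primitive_histogram(km, *polar_parameters(seg))."""
    n, parts = len(v2d), []
    for segments in (1, 2, 4):           # one, two, then four time segments
        for s in range(segments):
            lo, hi = s * n // segments, (s + 1) * n // segments
            parts.append(hist_fn(v2d[lo:hi]))
    return np.concatenate(parts)
```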
(9) Considering that different skeletal joint points contribute differently to the recognition of an action, the importance of the descriptors of key skeletal joint points must be increased. Taking the descriptor of skeletal joint point j as an example, its corresponding weight w_j is computed as follows.
Over the entire action sequence, compute the length of the three-dimensional trajectory of every joint point, obtaining each joint point's total motion:
m_j = Σ_{t=1}^{n-1} |v_j(t)|.
Accumulate the total motion of all joint points of the sample to obtain the total motion F_S of the action sample S:
F_S = Σ_{j=1}^{J} m_j.
Pool all training samples of each action class and arrange them by sample index; in this embodiment one action class has 80 samples as the training set, so:
F = {F_1, F_2, ..., F_80}.
For each action class, compute the covariance of M_j and F to obtain the covariance set, where M_j collects the total motion m_j of joint point j over the 80 training samples. The specific steps are as follows: the total motions of all 20 joint points and the total motion F of the action sample form a 21-dimensional random variable G:
G = (M_1, M_2, ..., M_20, F)^T.
Then the matrix
C = (cov_{ij}), i, j ∈ [1, 21],
is the covariance matrix of the 21-dimensional random variable G, with the covariances computed as
cov_{ij} = E[(M_i - E(M_i))(M_j - E(M_j))],
where F participates in the computation as M_21 and E(M_j) is the mathematical expectation of M_j:
E(M_j) = (1/80) Σ_{s=1}^{80} m_j^{(s)}.
The last column of matrix C is the covariance set of M_j and F. If the covariance cov_{j(21)} of M_j and F is less than 0, the weight w_j of joint point j is 0; if cov_{j(21)} is greater than or equal to 0, the weight w_j of joint point j is
w_j = cov_{j(21)} / max{cov_{i(21)} | i ∈ [1, 21]},
where max{cov_{i(21)} | i ∈ [1, 21]} is the maximum value of the 21st column of matrix C.
Therefore, the descriptor D of the action sequence S can be expressed as:
D = {D_j × w_j | j ∈ [1, 20]}.
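A compact sketch of the weight computation for one action class; np.cov's sample-covariance normalization differs from the plain expectation by a constant factor that cancels in the final ratio:

```python
import numpy as np

def joint_weights(joint_motion):
    """Weights w_j for one action class. `joint_motion` is an
    (n_samples, n_joints) array with entry [s, j] = m_j of sample s.
    F is the per-sample total motion; the weights come from the last
    column of the covariance matrix of (M_1, ..., M_J, F), clipped at
    zero and divided by that column's maximum, as in the text."""
    F = joint_motion.sum(axis=1)            # total motion F_S per sample
    G = np.vstack([joint_motion.T, F])      # (J+1, n_samples) random variables
    C = np.cov(G)                           # (J+1, J+1) covariance matrix
    last = C[:, -1]                         # cov(M_i, F) for i = 1..J+1
    return np.maximum(last[:-1], 0.0) / last.max()
```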
(10) Train on the final descriptors with an SVM classifier, obtaining a good division of the descriptors of the action categories and realizing action recognition.
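A minimal sketch of the classification step; the linear kernel and regularization constant are assumptions, as the text only specifies an SVM classifier:

```python
from sklearn.svm import SVC

def train_action_classifier(train_descriptors, train_labels):
    """Fit a multi-class SVM on the weighted descriptors D of the training
    samples; scikit-learn's SVC handles the multi-class case internally.
    A new sample is then recognized via classifier.predict([descriptor])."""
    return SVC(kernel='linear', C=1.0).fit(train_descriptors, train_labels)
```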

Claims (9)

1. A complex behavior recognition method, characterized by comprising the following steps:
(1) acquiring the three-dimensional skeletal joint point information of the target motion with a depth sensor, obtaining the three-dimensional coordinates of each joint of the human body;
(2) preprocessing the skeletal joint point information and normalizing the coordinate system;
(3) extracting the motion trajectory of each skeletal joint point, defining the motion between adjacent frames as a motion-let;
(4) projecting the three-dimensional trajectory of each skeletal joint point onto three two-dimensional planes, obtaining the set of two-dimensional motion-lets of all skeletal joint points;
(5) computing the length parameter and direction angle parameter of each vector representing a motion-let;
(6) pooling all the vectors of a single skeletal joint point over all action classes, and clustering their length and direction angle parameters in two dimensions with the k-means algorithm, obtaining movement primitives;
(7) counting the number of motion-lets represented by each movement primitive, obtaining the movement primitive histograms;
(8) capturing the temporal information of the action with a time pyramid;
(9) combining the values of every cluster of all the histograms to compute the weight of each skeletal joint point, finally forming the descriptor;
(10) training on the final descriptors with an SVM classifier, obtaining a good division of the descriptors of the action categories and realizing action recognition.
2. The complex behavior recognition method according to claim 1, characterized in that step (2) comprises: taking the left-shoulder-to-right-shoulder vector of the first frame of the action sequence as the horizontal axis and the hip-to-shoulder-midpoint vector as the vertical axis, normalizing the coordinate system and converting the X-Y-Z coordinate system to the X'-Y'-Z' coordinate system.
3. The complex behavior recognition method according to claim 1, characterized in that the motion trajectory of each skeletal joint point is extracted in step (3) as follows:
an action sequence S of n frames is expressed as:
S = {Γ_j | j ∈ [1, J]},
Γ_j = {p_j(t) | t ∈ [1, n]},
where Γ_j is the three-dimensional trajectory of joint point j, J is the total number of joint points, t is the frame index, and p_j(t) is the position of joint point j in frame t:
p_j(t) = (x_j(t), y_j(t), z_j(t));
the motion of a skeletal joint point between two consecutive frames is defined as a motion-let, and the motion-let of skeletal joint point j between frame t and frame t+1 can be expressed as the vector v_j(t):
v_j(t) = p_j(t+1) - p_j(t);
the three-dimensional trajectory Γ_j of skeletal joint point j can then be expressed as a vector sequence:
Γ_j = {v_j(t) | t ∈ [1, n-1]}.
4. The complex behavior recognition method according to claim 1, characterized in that step (4) is specifically as follows:
each 3D motion-let is projected onto three two-dimensional planes, giving
v_j(t) → (v_j^{xy}(t), v_j^{yz}(t), v_j^{xz}(t)),
where v_j^{xy}(t), v_j^{yz}(t) and v_j^{xz}(t) are the 2D motion-lets on the three two-dimensional planes, computed as
v_j^{xy}(t) = (x_j(t+1) - x_j(t), y_j(t+1) - y_j(t)),
v_j^{yz}(t) = (y_j(t+1) - y_j(t), z_j(t+1) - z_j(t)),
v_j^{xz}(t) = (x_j(t+1) - x_j(t), z_j(t+1) - z_j(t));
combining all the motion-lets, the action sequence S is further expressed as
S = {v_j^{xy}(t), v_j^{yz}(t), v_j^{xz}(t) | j ∈ [1, J], t ∈ [1, n-1]}.
5. The complex behavior recognition method according to claim 1, characterized in that step (5) is specifically as follows:
for skeletal joint point j in the x-y coordinate system, the parameters are computed as
θ_j^{xy}(t) = arctan2(y_j(t+1) - y_j(t), x_j(t+1) - x_j(t)),
l_j^{xy}(t) = |v_j^{xy}(t)|;
in the y-z coordinate system as
θ_j^{yz}(t) = arctan2(z_j(t+1) - z_j(t), y_j(t+1) - y_j(t)),
l_j^{yz}(t) = |v_j^{yz}(t)|;
and in the x-z coordinate system as
θ_j^{xz}(t) = arctan2(z_j(t+1) - z_j(t), x_j(t+1) - x_j(t)),
l_j^{xz}(t) = |v_j^{xz}(t)|,
where θ_j^{xy}(t), θ_j^{yz}(t) and θ_j^{xz}(t) are the direction angle parameters of the vectors corresponding to the motion-let of skeletal joint point j from frame t to frame t+1 in the three two-dimensional planes, with values ranging from -180° to 180°, and l_j^{xy}(t), l_j^{yz}(t) and l_j^{xz}(t) are the corresponding length parameters.
6. The complex behavior recognition method according to claim 1, characterized in that step (6) is specifically as follows:
for skeletal joint point j in the x-y coordinate system, with K cluster centres, all the cluster centres can be expressed as the set U:
U_j^{xy} = {u_k^{xy} | k ∈ [1, K]},
where u_k^{xy} = (θ_k^{xy}, l_k^{xy}) is the centre of the k-th cluster and θ_k^{xy} and l_k^{xy} are its coordinate values;
each point is represented by the centre of the cluster it belongs to, so the K cluster centres represent K corresponding movement primitives, and the movement primitive P_{j,k}^{xy} of the k-th cluster is expressed as
P_{j,k}^{xy} = u_k^{xy} = (θ_k^{xy}, l_k^{xy});
therefore, all the movement primitives P of the action sequence S are
P = {P_{j,k}^{xy}, P_{j,k}^{yz}, P_{j,k}^{xz} | j ∈ [1, J], k ∈ [1, K]}.
7. The complex behavior recognition method according to claim 1, characterized in that, in step (7), all the motion-lets under each two-dimensional plane of each skeletal joint point of the motion sequence S are represented by their corresponding movement primitives, and the number of motion-lets represented by each movement primitive is counted to form a histogram, so that the values of all the clusters in the histograms of skeletal joint point j under the three coordinate systems can be expressed as H_j:
H_j = {h_j^{xy}(k), h_j^{yz}(k), h_j^{xz}(k) | k ∈ [1, K]},
where h_j^{xy}(k), h_j^{yz}(k) and h_j^{xz}(k) are the values of the k-th bin of the movement primitive histograms of the three two-dimensional trajectories of skeletal joint point j.
8. The complex behavior recognition method according to claim 1, characterized in that step (8) uses a three-layer time pyramid to introduce temporal information: the first layer computes the movement primitive histogram for the entire trajectory; the second layer divides the trajectory evenly in time into two parts and computes a movement primitive histogram for each; the third layer further bisects each of the two parts; the final 3D trajectory descriptor D_j of skeletal joint point j is
D_j = {H_j^1, H_j^{2,1}, H_j^{2,2}, H_j^{3,1}, H_j^{3,2}, H_j^{3,3}, H_j^{3,4}}.
9. The complex behavior recognition method according to claim 1, characterized in that step (9) is specifically as follows:
over the entire action sequence, the length of the three-dimensional trajectory of every joint point is computed, obtaining each joint point's total motion:
m_j = Σ_{t=1}^{n-1} |v_j(t)|;
the total motion of all joint points of the sample is accumulated to obtain the total motion F_S of the action sample S:
F_S = Σ_{j=1}^{J} m_j;
all training samples of each action class are pooled and arranged by sample index, giving the set:
F = {F_1, F_2, ..., F_e},
where e is the number of samples in the training set of the action class;
for each action class, the covariance of M_j and F is computed to obtain the covariance set, where M_j collects the total motion m_j of joint point j over the e training samples; the covariance set is computed as follows:
the total motions of all J joint points and the total motion F of the action sample form a (J+1)-dimensional random variable G:
G = (M_1, M_2, ..., M_J, F)^T;
then the matrix
C = (cov_{ij}), i, j ∈ [1, J+1],
is the covariance matrix of the (J+1)-dimensional random variable G, with the covariances computed as
cov_{ij} = E[(M_i - E(M_i))(M_j - E(M_j))],
where F participates in the computation as M_{J+1}, and E(M_j) is the mathematical expectation of M_j:
E(M_j) = (1/e) Σ_{s=1}^{e} m_j^{(s)};
the last column of matrix C is the covariance set of M_j and F; if the covariance cov_{j(J+1)} of M_j and F is less than 0, the weight w_j of joint point j is 0; if cov_{j(J+1)} is greater than or equal to 0, the weight w_j of joint point j is
w_j = cov_{j(J+1)} / max{cov_{i(J+1)} | i ∈ [1, J+1]},
where max{cov_{i(J+1)} | i ∈ [1, J+1]} is the maximum value of the (J+1)-th column of matrix C;
finally, the descriptor D of the action sequence S is expressed as:
D = {D_j × w_j | j ∈ [1, J]}.
CN201810421670.9A 2018-05-04 2018-05-04 Complex behavior identification method Active CN108681700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810421670.9A CN108681700B (en) 2018-05-04 2018-05-04 Complex behavior identification method


Publications (2)

Publication Number Publication Date
CN108681700A 2018-10-19
CN108681700B CN108681700B (en) 2021-09-28

Family

ID=63801510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810421670.9A Active CN108681700B (en) 2018-05-04 2018-05-04 Complex behavior identification method

Country Status (1)

Country Link
CN (1) CN108681700B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150279053A1 (en) * 2014-03-31 2015-10-01 Electronics And Telecommunications Research Institute System and method for motion estimation
CN104298974A (en) * 2014-10-10 2015-01-21 北京工业大学 Human body behavior recognition method based on depth video sequence
CN104598890A (en) * 2015-01-30 2015-05-06 南京邮电大学 Human body behavior recognizing method based on RGB-D video
CN107194366A (en) * 2017-06-06 2017-09-22 西安电子科技大学 Behavior recognition method based on dense trajectory covariance descriptors

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
J.W. Weng et al.: "Spatio-Temporal Naive-Bayes Nearest-Neighbor (ST-NBNN) for Skeleton-Based Action Recognition", CVPR *
丁毅 et al.: "Research on gesture recognition algorithms based on BOF-Gist features" (基于BOF-Gist特征的手势识别算法研究), Computer Engineering and Applications (计算机工程与应用) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670401A (en) * 2018-11-15 2019-04-23 天津大学 An action recognition method based on skeleton motion maps
CN110070070A (en) * 2019-04-30 2019-07-30 苏州大学 An action recognition method
CN110084211A (en) * 2019-04-30 2019-08-02 苏州大学 An action recognition method
CN110119707A (en) * 2019-05-10 2019-08-13 苏州大学 A human motion recognition method
CN110414316A (en) * 2019-06-11 2019-11-05 中国科学院自动化研究所 Data de-noising method, apparatus, computer equipment and storage medium
CN111028339A (en) * 2019-12-06 2020-04-17 国网浙江省电力有限公司培训中心 Behavior action modeling method and device, electronic equipment and storage medium
CN111028339B (en) * 2019-12-06 2024-03-29 国网浙江省电力有限公司培训中心 Behavior modeling method and device, electronic equipment and storage medium
CN111310590A (en) * 2020-01-20 2020-06-19 北京西米兄弟未来科技有限公司 Action recognition method and electronic equipment
CN111914798A (en) * 2020-08-17 2020-11-10 四川大学 Human body behavior identification method based on skeletal joint point data
CN111914798B (en) * 2020-08-17 2022-06-07 四川大学 Human body behavior identification method based on skeletal joint point data
US11625938B2 (en) 2020-12-29 2023-04-11 Industrial Technology Research Institute Method and device for detecting human skeletons
CN113011381A (en) * 2021-04-09 2021-06-22 中国科学技术大学 Double-person motion identification method based on skeleton joint data

Also Published As

Publication number Publication date
CN108681700B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN108681700A A complex behavior recognition method
EP3711024B1 (en) Event camera-based deformable object tracking
WO2021129064A9 (en) Posture acquisition method and device, and key point coordinate positioning model training method and device
CN102332095B (en) Face motion tracking method, face motion tracking system and method for enhancing reality
Ding et al. STFC: Spatio-temporal feature chain for skeleton-based human action recognition
Sincan et al. Using motion history images with 3d convolutional networks in isolated sign language recognition
CN108241849A (en) Human body interactive action recognition methods based on video
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN108875586B (en) Functional limb rehabilitation training detection method based on depth image and skeleton data multi-feature fusion
CN101877143A (en) Three-dimensional scene reconstruction method of two-dimensional image group
CN107808376A A hand-raising detection method based on deep learning
CN108171133A A dynamic gesture recognition method based on feature covariance matrices
CN101826155B (en) Method for identifying act of shooting based on Haar characteristic and dynamic time sequence matching
CN106327528A (en) Moving object tracking method and operation method of unmanned aerial vehicle
CN108280421A Human behavior recognition method based on multi-feature depth motion maps
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
CN114419732A (en) HRNet human body posture identification method based on attention mechanism optimization
CN110555383A (en) Gesture recognition method based on convolutional neural network and 3D estimation
CN112906520A (en) Gesture coding-based action recognition method and device
CN106529441A (en) Fuzzy boundary fragmentation-based depth motion map human body action recognition method
CN104794446A (en) Human body action recognition method and system based on synthetic descriptors
Batool et al. Telemonitoring of daily activities based on multi-sensors data fusion
Zhang Analyzing body changes of high-level dance movements through biological image visualization technology by convolutional neural network
Pismenskova et al. Classification of a two-dimensional pose using a human skeleton
CN111626212B (en) Method and device for identifying object in picture, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant