CN113011381A - Double-person motion identification method based on skeleton joint data

Double-person motion identification method based on skeleton joint data

Info

Publication number
CN113011381A
Authority
CN
China
Prior art keywords
action
demonstrator
sgda
motion
pivot joint
Prior art date
Legal status
Granted
Application number
CN202110383857.6A
Other languages
Chinese (zh)
Other versions
CN113011381B (en)
Inventor
叶中付
穆哈姆德·舒嘉·***姆·赛米姆
潘威
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110383857.6A priority Critical patent/CN113011381B/en
Publication of CN113011381A publication Critical patent/CN113011381A/en
Application granted granted Critical
Publication of CN113011381B publication Critical patent/CN113011381B/en
Current legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 - Distances to prototypes
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a double-person motion recognition method based on skeletal joint data, which comprises the following steps. Step 1: determine the coordinates of the pivot joint points of the two action demonstrators according to the defined human pivot joint point. Step 2: from the pivot joint coordinates, calculate the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two action demonstrators and each selected joint point in three-dimensional space. Step 3: concatenate the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 to obtain the SGDA action descriptor, and perform classification learning on the SGDA action descriptor with a neural network model that adopts the K-nearest-neighbor algorithm, completing the recognition of the actions performed by the two action demonstrators. Because joint correlation is exploited, two-person action recognition with the SGDA action descriptor effectively improves the recognition rate.

Description

Double-person motion identification method based on skeleton joint data
Technical Field
The present invention relates to the field of action recognition in image processing, and in particular to a double-person motion recognition method based on skeletal joint data.
Background
Human action recognition is a problem that must be addressed in many current scenarios, with numerous applications in the multimedia Internet of Things, criminal surveillance, and autonomous driving. Given these applications and the scenarios envisaged, an efficient two-person action recognition system is needed.
Existing, commonly used human action recognition methods mostly realize action recognition through attention-based two-stream LSTM networks, deep convolutional neural networks, handcrafted features, Euclidean distances, and deep learning models.
However, because existing human action recognition methods rely excessively on image data and do not consider skeletal joint data, they are easily affected by objective environmental factors such as illumination and background.
Disclosure of Invention
Based on the problems in the prior art, the object of the present invention is to provide a double-person motion recognition method based on skeletal joint data, which solves the problem that existing human action recognition methods rely excessively on image data and are therefore easily affected by objective environmental factors such as illumination and background.
The object of the invention is achieved by the following technical solution:
An embodiment of the invention provides a double-person motion recognition method based on skeletal joint data, comprising the following steps:
step 1, sampling the action demonstration videos of the two action demonstrators, and determining the coordinates of the pivot joint points of the two action demonstrators in each sampled frame according to the defined human pivot joint point;
step 2, using the coordinates of the pivot joint points, calculating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two action demonstrators and each selected joint point in three-dimensional space;
step 3, concatenating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 to obtain an SGDA action descriptor, and performing classification learning on the SGDA action descriptor with a neural network model that adopts the K-nearest-neighbor algorithm, thereby completing recognition of the actions performed by the two action demonstrators.
The double-person motion recognition method based on skeletal joint data provided by the above technical solution has the following beneficial effects:
Because the importance of the pivot joint is fully considered, the pivot joint point of each action demonstrator is used to construct the feature vector, and the concept of joint correlation is introduced. Two-dimensional and three-dimensional skeletal joint data are used together: the rectangular coordinate angle and Euclidean distance of the correlation between the pivot joint and each selected joint are calculated, the sine correlation between each selected joint point and the pivot joint point is calculated, a generalized Laplacian of Gaussian is used for the feature description of each selected joint point, and the cosine angle is calculated, combining the motion characteristics of a single person with the interaction information of the two-person action. The rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle are used to construct the SGDA action descriptor, and two-person action recognition with the SGDA descriptor achieves a good recognition result on the SBU dataset, reaching 98.2% accuracy and effectively improving the accuracy of two-person action recognition.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a double-person motion recognition method based on skeletal joint data according to an embodiment of the present invention;
fig. 2 is a specific flowchart of a method for identifying a double-person motion based on skeletal joint data according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the skeletal joint data acquisition of the two action demonstrators in the double-person motion recognition method based on skeletal joint data according to an embodiment of the present invention, in which (1) and (2) are the skeletal joint data diagrams of action demonstrators 1 and 2, respectively.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the specific content of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention. Details not described in the embodiments of the invention belong to the prior art known to those skilled in the art.
As shown in Fig. 1, an embodiment of the present invention provides a double-person motion recognition method based on skeletal joint data, that is, a method of two-person action recognition using the SGDA action descriptor built from skeletal joint data, comprising the following steps:
step 1, sampling the action demonstration videos of the two action demonstrators, and determining the coordinates of the pivot joint points of the two action demonstrators in each sampled frame according to the defined human pivot joint point;
step 2, using the coordinates of the pivot joint points, calculating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two action demonstrators and each selected joint point in three-dimensional space;
step 3, concatenating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 to obtain an SGDA action descriptor, and performing classification learning on the SGDA action descriptor with a neural network model that adopts the K-nearest-neighbor algorithm, thereby completing recognition of the actions performed by the two action demonstrators.
In step 1 of the above method, the defined human pivot joint point is the head of the action demonstrator.
In step 2 of the above method, the rectangular coordinate angle θ_k between the pivot joint point of each of the two action demonstrators and the selected joint point in three-dimensional space is calculated by the following formula (1):

θ_k = arctan((y_i^k - y_P^k) / (x_i^k - x_P^k))   (1)

In formula (1), (x_P^k, y_P^k) are the two-dimensional coordinates of the pivot joint point of the action demonstrator, (x_i^k, y_i^k) are the two-dimensional coordinates of the selected joint point, and k is the number of the action demonstrator.
In step 2 of the above method, the sine correlation ρ_k is calculated by formula (2). [Formula (2) is reproduced only as an image in the original publication.] In formula (2), (x_i^k, y_i^k, z_i^k) are the three-dimensional coordinates of the selected joint point of the action demonstrator, k is the number of the action demonstrator, i is the number of the selected joint point and takes values 1 to 14, and θ_k is the rectangular coordinate angle of the two action demonstrators in three-dimensional space.
In step 2 of the above method, the laplacian of gaussian G is calculated by the following formula (3):
Figure BDA0003014072350000037
in the formula (3), the reaction mixture is,
Figure BDA0003014072350000038
three-dimensional selection of joint points for action demonstratorCoordinate, k is the number of the action demonstrator, k is 1 or 2, i is the number of the selected joint point, and i takes the value of 1 to 14; z is a radical ofk(k is 1,2) represents the standard deviation σk(k=1,2)。
In step 2 of the method, the Euclidean distance D_k between the pivot joint point of each of the two action demonstrators and the selected joint point in three-dimensional space is calculated by the following formula (4):

D_k = sqrt((x_P^k - x_i^k)^2 + (y_P^k - y_i^k)^2 + (z_P^k - z_i^k)^2)   (4)

In formula (4), (x_P^k, y_P^k, z_P^k) are the three-dimensional coordinates of the pivot joint point of the action demonstrator, (x_i^k, y_i^k, z_i^k) are the three-dimensional coordinates of the selected joint point, k is the number of the action demonstrator, and i is the number of the selected joint point and takes values 1 to 14.
In step 2 of the method, the cosine angle cos α_kl between the pivot joint points of the two action demonstrators and the selected joint point in three-dimensional space is calculated by the following formula (5):

cos α_kl = (x_P^k x_i^l + y_P^k y_i^l + z_P^k z_i^l) / (‖(x_P^k, y_P^k, z_P^k)‖ · ‖(x_i^l, y_i^l, z_i^l)‖)   (5)

In formula (5), (x_P^k, y_P^k, z_P^k) are the three-dimensional coordinates of the pivot joint point of the action demonstrator, and k is the number of the action demonstrator; (x_i^l, y_i^l, z_i^l) are the three-dimensional coordinates of the selected joint point, and i is the number of the selected joint point and takes values 1 to 14;
when k = l, the cosine angle of a person's own selected joint point relative to that person's pivot joint point is calculated;
when k ≠ l, the cosine angle of one person's selected joint point relative to the other person's pivot joint point is calculated.
In step 3 of the above method, the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 are concatenated to obtain the SGDA action descriptor:
the SGDA action descriptor expression for a single joint s is

SGDA_s = [θ_1, θ_2, ρ_1, ρ_2, D_1, D_2, G_1, G_2, cos α_11, cos α_22, cos α_12, cos α_21]   (6)

In formula (6), θ_1, θ_2 are the rectangular coordinate angles of the two action demonstrators in three-dimensional space; ρ_1, ρ_2 are the sine correlations of the two action demonstrators in three-dimensional space; D_1, D_2 are the Euclidean distances of the two action demonstrators in three-dimensional space; G_1, G_2 are the Gaussian descriptions of the two action demonstrators; cos α_11, cos α_22 are the cosine angles between each demonstrator's own joints in three-dimensional space, and cos α_12, cos α_21 are the cosine angles between the mutual joints of the two action demonstrators in three-dimensional space;
connecting the SGDA action descriptors corresponding to all joint points other than the pivot joint point gives the final SGDA action descriptor describing the two-person action: SGDA = [SGDA_1, SGDA_2, …, SGDA_S], where S is the number of all joints other than the pivot joint, S = 14.
The recognition method fully considers the importance of the pivot joint: the head joint point of each action demonstrator is taken as the pivot joint point to construct the feature vector, and the concept of joint correlation is introduced. Two-dimensional and three-dimensional skeletal joint data are used together to calculate the rectangular coordinate angle and Euclidean distance of the correlation between the pivot joint and each selected joint; the sine correlation between each selected joint and the pivot joint point is calculated; a generalized Laplacian of Gaussian is further used for the feature description of each selected joint point; and the cosine angle is calculated to combine the motion characteristics of a single person with the interaction information of the two-person action. The rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle are used to construct the SGDA action descriptor, which is used for two-person action recognition; a good recognition result is obtained on the SBU dataset, reaching an accuracy of 98.2%.
The embodiments of the present invention are described in further detail below.
The embodiment of the invention provides a two-person action recognition method using the SGDA action descriptor based on skeletal joint data, which fully explores the spatial information of human actions and mainly comprises the following steps (see Fig. 2):
step 1, determining the specific coordinates of the pivot joint point in each frame of image acquired from the action demonstration video, according to the definition of the pivot joint point;
step 2, calculating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint and the other joint points in three-dimensional space;
step 3, concatenating the features from step 2 to obtain the SGDA action descriptor, and learning with the K-nearest-neighbor algorithm.
Referring to Fig. 3, which shows the coordinate data of the 15 joint points of an action demonstrator acquired by Kinect V1, the features in the SGDA action descriptor are calculated with the head joint (joint point with node number 1) as the pivot joint point. In the above method the head is the pivot joint of the action descriptor, so the head joint is used as the reference point. The rectangular coordinate angle, the Euclidean distance, and the cosine angle are calculated from the three-dimensional pivot joint coordinates, while the sine correlation between joints is calculated indirectly from the two-dimensional pivot joint coordinates. The coordinates of the pivot joint points of the two action demonstrators are denoted (x_P^1, y_P^1, z_P^1) and (x_P^2, y_P^2, z_P^2), respectively, and the coordinates of the selected joint points used to calculate the features are denoted (x_i^1, y_i^1, z_i^1) and (x_i^2, y_i^2, z_i^2) (besides the pivot joint, the 14 selected joints can be seen in Fig. 3).
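The short Python sketches that follow each feature in this embodiment are illustrative only; they assume the minimal data layout below (the array shapes, helper names, and 0-based pivot index are our assumptions, not part of the patent):

```python
import numpy as np

# Assumed layout: one array per action demonstrator, shape (frames, 15, 3),
# i.e. 15 Kinect V1 joints with (x, y, z) coordinates per frame.
# Index 0 stands for the head (node number 1 in Fig. 3), used as the pivot;
# indices 1..14 are the 14 selected joints of the descriptor.
PIVOT = 0
SELECTED = list(range(1, 15))

def split_pivot(frame: np.ndarray):
    """Return (pivot_xyz, selected_xyz) for one (15, 3) skeleton frame."""
    return frame[PIVOT], frame[SELECTED]

# Stand-in data for two demonstrators, 40 frames each:
person1 = np.random.rand(40, 15, 3)
person2 = np.random.rand(40, 15, 3)
pivot1, joints1 = split_pivot(person1[0])
print(pivot1.shape, joints1.shape)  # (3,) (14, 3)
```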
In the above method, the rectangular coordinate angle between the pivot joint (x_P^k, y_P^k) and a selected joint (x_i^k, y_i^k), i.e. the arctangent of the slope between them, is calculated first. The rectangular coordinate angle and the slope indicate the angle of correlation between two joint points in two-dimensional space. The importance of the rectangular coordinate angle is that it can not only be used independently as one feature, but can also assist in computing other features. The rectangular coordinate angle of the two action demonstrators is calculated as

θ_k = arctan((y_i^k - y_P^k) / (x_i^k - x_P^k))

(k is the number of the action demonstrator).
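A sketch of this computation, continuing the layout above; np.arctan2 is used so the quadrant of the slope is preserved (that choice is ours, the patent does not specify it):

```python
def rect_coord_angle(pivot_xy: np.ndarray, joint_xy: np.ndarray) -> float:
    """Rectangular coordinate angle: arctangent of the slope between the
    2D pivot joint and a 2D selected joint."""
    return float(np.arctan2(joint_xy[1] - pivot_xy[1],
                            joint_xy[0] - pivot_xy[0]))
```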
In the method, the sine correlation ρ_k is part of the action descriptor; it makes use of the rectangular coordinate angle feature, with the coordinates of the corresponding joint recorded as (x_i^k, y_i^k, z_i^k). [Formula (2) for the sine correlation is reproduced only as an image in the original publication.]
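Because formula (2) survives only as an image, the sketch below simply takes the sine of the rectangular coordinate angle as the correlation value; treat this as an illustrative assumption rather than the patented expression:

```python
def sine_correlation(pivot_xy: np.ndarray, joint_xy: np.ndarray) -> float:
    """Assumed form of the sine correlation: sin of the rectangular
    coordinate angle between pivot and selected joint (illustrative)."""
    return float(np.sin(rect_coord_angle(pivot_xy, joint_xy)))
```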
in the above method, the joint point is selected
Figure BDA00030140723500000511
The laplacian of gaussian is used as the feature description. The relationship of the laplacian of gaussian can be extended in such a way that three-dimensional joint points are used as feature vectors. In the following laplacian of gaussians, the standard deviation σk(k is 1,2) with zk(k is 1, 2):
Figure BDA0003014072350000061
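One plausible reading, sketched under the stated assumption that the standard 2D Laplacian-of-Gaussian expression is evaluated at the joint's (x, y) with its z coordinate standing in for the standard deviation; the patent's exact formula (3) is not recoverable from the text:

```python
def log_descriptor(joint_xyz: np.ndarray) -> float:
    """Assumed generalized Laplacian of Gaussian: the 2D LoG value at (x, y)
    with sigma replaced by the joint's z coordinate (illustrative)."""
    x, y, z = joint_xyz
    var = z * z + 1e-8            # z plays the role of sigma, so var = sigma^2
    r2 = x * x + y * y
    return float(-(1.0 / (np.pi * var * var))
                 * (1.0 - r2 / (2.0 * var))
                 * np.exp(-r2 / (2.0 * var)))
```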
In the above method, the Euclidean distance D_k is calculated between the pivot joint point (x_P^k, y_P^k, z_P^k) and the selected joint point (x_i^k, y_i^k, z_i^k). This feature gives the Euclidean distance between the reference node (i.e., the pivot joint) and the selected joint and is an important factor in the action recognition descriptor. The Euclidean distance D_k is

D_k = sqrt((x_P^k - x_i^k)^2 + (y_P^k - y_i^k)^2 + (z_P^k - z_i^k)^2).
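This feature is fully determined by the text; continuing the sketch above:

```python
def pivot_distance(pivot_xyz: np.ndarray, joint_xyz: np.ndarray) -> float:
    """Euclidean distance between the 3D pivot joint and a selected joint."""
    return float(np.linalg.norm(pivot_xyz - joint_xyz))
```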
In the above method, the cosine angle is calculated between the pivot joint point (x_P^k, y_P^k, z_P^k) and the selected joint point (x_i^l, y_i^l, z_i^l). The cosine angle is a necessary factor in the action descriptor because it expresses the angular correlation between the pivot joint point and the selected joint point in three-dimensional space. Four cosine angles are calculated, in two categories (see the sketch after this passage):
1) the cosine angle between a person's own pivot joint and each of that person's remaining selected joint points;
2) the cosine angle between one person's pivot joint point and the other person's selected joint points.
The cosine angle is calculated as

cos α_kl = (x_P^k x_i^l + y_P^k y_i^l + z_P^k z_i^l) / (‖(x_P^k, y_P^k, z_P^k)‖ · ‖(x_i^l, y_i^l, z_i^l)‖).

When k = l, the cosine angle of each joint point of the same person relative to that person's pivot joint point is calculated; when k ≠ l, the cosine angles of one person's joint points relative to the other person's pivot joint point are calculated.
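A sketch of the four cosine angles under the assumption, stated above, that formula (5) is the cosine between the two 3D position vectors:

```python
def cosine_angle(pivot_xyz: np.ndarray, joint_xyz: np.ndarray) -> float:
    """Cosine of the angle between the pivot and joint position vectors."""
    denom = np.linalg.norm(pivot_xyz) * np.linalg.norm(joint_xyz) + 1e-8
    return float(np.dot(pivot_xyz, joint_xyz) / denom)

def four_cosines(p1, p2, j1, j2):
    """The two same-person (k = l) and two cross-person (k != l) angles."""
    return (cosine_angle(p1, j1), cosine_angle(p2, j2),
            cosine_angle(p1, j2), cosine_angle(p2, j1))
```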
In the method, the sine correlation, the generalized Laplacian of Gaussian, the Euclidean distance, the cosine angle, and the rectangular coordinate angle between the pivot joint and the other selected joints are concatenated as features to obtain the SGDA action descriptor. The SGDA descriptor is calculated frame by frame, and both two-dimensional and three-dimensional skeletal joint data are embedded in the calculation, which facilitates building a robust two-person action recognition system. The SGDA action descriptor contains the information between the pivot joint and the selected joints. Specifically, the SGDA expression for a single joint s is

SGDA_s = [θ_1, θ_2, ρ_1, ρ_2, D_1, D_2, G_1, G_2, cos α_11, cos α_22, cos α_12, cos α_21].

Concatenating the SGDA descriptors corresponding to all joints other than the pivot joint point gives the final SGDA descriptor describing the two-person action: SGDA = [SGDA_1, SGDA_2, …, SGDA_S], where S is the number of all joints other than the pivot joint; from Fig. 3, S = 14.
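Putting the pieces together per joint and per frame, continuing the sketches above (the ordering of the components inside SGDA_s follows the reconstruction of formula (6) given here and is an assumption):

```python
def sgda_single_joint(p1, p2, j1, j2) -> np.ndarray:
    """SGDA block for one selected joint: angles, sine correlations,
    distances, Gaussian descriptions, and the four cosine angles."""
    return np.asarray([
        rect_coord_angle(p1[:2], j1[:2]), rect_coord_angle(p2[:2], j2[:2]),
        sine_correlation(p1[:2], j1[:2]), sine_correlation(p2[:2], j2[:2]),
        pivot_distance(p1, j1),           pivot_distance(p2, j2),
        log_descriptor(j1),               log_descriptor(j2),
        *four_cosines(p1, p2, j1, j2),
    ])

def sgda_frame(frame1: np.ndarray, frame2: np.ndarray) -> np.ndarray:
    """Concatenate the SGDA blocks of the 14 non-pivot joints for one
    frame pair, giving the per-frame two-person descriptor."""
    p1, js1 = split_pivot(frame1)
    p2, js2 = split_pivot(frame2)
    return np.concatenate([sgda_single_joint(p1, p2, a, b)
                           for a, b in zip(js1, js2)])
```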
The accuracy of the recognition method of the present invention on the SBU dataset is shown in Table 1, which covers 8 different categories: approaching (Approaching), departing (Departing), exchanging (Exchanging), hugging (Hugging), kicking (Kicking), punching (Punching), pushing (Pushing), and shaking hands (ShakingHands).
[Table 1, reproduced only as an image in the original publication, lists the per-category recognition accuracy on the SBU dataset.]
As can be seen from Table 1, the recognition method of the present invention achieves 98.2% overall accuracy on the SBU dataset, with 100% accuracy on the "approaching" category and the lowest accuracy, 97.1%, on the "pushing" category. The misclassifications of the pushing action are concentrated in the hugging category, because many frames have the same content in the pushing and hugging categories. Classification is performed by a K-nearest-neighbor classifier.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can easily be conceived by those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A double-person motion recognition method based on skeletal joint data, characterized by comprising the following steps:
step 1, sampling the action demonstration videos of the two action demonstrators, and determining the coordinates of the pivot joint points of the two action demonstrators in each sampled frame according to the defined human pivot joint point;
step 2, using the coordinates of the pivot joint points, calculating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two action demonstrators and each selected joint point in three-dimensional space;
step 3, concatenating the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 to obtain an SGDA action descriptor, and performing classification learning on the SGDA action descriptor with a neural network model that adopts the K-nearest-neighbor algorithm, thereby completing recognition of the actions performed by the two action demonstrators.
2. The double-person motion recognition method based on skeletal joint data according to claim 1, wherein the human pivot joint point defined in step 1 is the head of the action demonstrator.
3. The double-person motion recognition method based on skeletal joint data according to claim 1 or 2, wherein in step 2 of the method, the rectangular coordinate angle θ_k between the pivot joint point of each of the two action demonstrators and the selected joint point in three-dimensional space is calculated by the following formula (1):

θ_k = arctan((y_i^k - y_P^k) / (x_i^k - x_P^k))   (1)

in formula (1), (x_P^k, y_P^k) are the two-dimensional coordinates of the pivot joint point of the action demonstrator, (x_i^k, y_i^k) are the two-dimensional coordinates of the selected joint point, and k is the number of the action demonstrator.
4. The double-person motion recognition method based on skeletal joint data according to claim 3, wherein in step 2 of the method, the sine correlation ρ_k is calculated by formula (2); [formula (2) is reproduced only as an image in the original publication;] in formula (2), (x_i^k, y_i^k, z_i^k) are the three-dimensional coordinates of the selected joint point of the action demonstrator, k is the number of the action demonstrator, i is the number of the selected joint point and takes values 1 to 14, and θ_k is the rectangular coordinate angle of the two action demonstrators in three-dimensional space.
5. The double-person motion recognition method based on skeletal joint data according to claim 3, wherein in step 2 of the method, the Laplacian of Gaussian G is calculated by formula (3); [formula (3) is reproduced only as an image in the original publication;] in formula (3), (x_i^k, y_i^k, z_i^k) are the three-dimensional coordinates of the selected joint point of the action demonstrator, k is the number of the action demonstrator and takes the value 1 or 2, and i is the number of the selected joint point and takes values 1 to 14; z_k (k = 1, 2) is used in place of the standard deviation σ_k (k = 1, 2).
6. The double-person motion recognition method based on skeletal joint data according to claim 3, wherein in step 2 of the method, the Euclidean distance D_k between the pivot joint point of each of the two action demonstrators and the selected joint point in three-dimensional space is calculated by the following formula (4):

D_k = sqrt((x_P^k - x_i^k)^2 + (y_P^k - y_i^k)^2 + (z_P^k - z_i^k)^2)   (4)

in formula (4), (x_P^k, y_P^k, z_P^k) are the three-dimensional coordinates of the pivot joint point of the action demonstrator, (x_i^k, y_i^k, z_i^k) are the three-dimensional coordinates of the selected joint point, k is the number of the action demonstrator, and i is the number of the selected joint point and takes values 1 to 14.
7. The double-person motion recognition method based on skeletal joint data according to claim 3, wherein in step 2 of the method, the cosine angle cos α_kl between the pivot joint points of the two action demonstrators and the selected joint point in three-dimensional space is calculated by the following formula (5):

cos α_kl = (x_P^k x_i^l + y_P^k y_i^l + z_P^k z_i^l) / (‖(x_P^k, y_P^k, z_P^k)‖ · ‖(x_i^l, y_i^l, z_i^l)‖)   (5)

in formula (5), (x_P^k, y_P^k, z_P^k) are the three-dimensional coordinates of the pivot joint point of the action demonstrator, and k is the number of the action demonstrator; (x_i^l, y_i^l, z_i^l) are the three-dimensional coordinates of the selected joint point, and i is the number of the selected joint point and takes values 1 to 14;
when k = l, the cosine angle of a person's own selected joint point relative to that person's pivot joint point is calculated;
when k ≠ l, the cosine angle of one person's selected joint point relative to the other person's pivot joint point is calculated.
8. The double-person motion recognition method based on skeletal joint data according to claim 3, wherein in step 3 of the method, the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 are concatenated to obtain the SGDA action descriptor:
the SGDA action descriptor expression for a single joint s is

SGDA_s = [θ_1, θ_2, ρ_1, ρ_2, D_1, D_2, G_1, G_2, cos α_11, cos α_22, cos α_12, cos α_21]   (6)

in formula (6), θ_1, θ_2 are the rectangular coordinate angles of the two action demonstrators in three-dimensional space; ρ_1, ρ_2 are the sine correlations of the two action demonstrators in three-dimensional space; D_1, D_2 are the Euclidean distances of the two action demonstrators in three-dimensional space; G_1, G_2 are the Gaussian descriptions of the two action demonstrators; cos α_11, cos α_22 are the cosine angles between each demonstrator's own joints in three-dimensional space, and cos α_12, cos α_21 are the cosine angles between the mutual joints of the two action demonstrators in three-dimensional space;
the SGDA action descriptors corresponding to all joint points other than the pivot joint point are concatenated to obtain the final SGDA action descriptor describing the two-person action: SGDA = [SGDA_1, SGDA_2, …, SGDA_S], where S is the number of all joints other than the pivot joint, S = 14.
CN202110383857.6A 2021-04-09 2021-04-09 Double-person motion recognition method based on skeleton joint data Active CN113011381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110383857.6A CN113011381B (en) 2021-04-09 2021-04-09 Double-person motion recognition method based on skeleton joint data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110383857.6A CN113011381B (en) 2021-04-09 2021-04-09 Double-person motion recognition method based on skeleton joint data

Publications (2)

Publication Number Publication Date
CN113011381A true CN113011381A (en) 2021-06-22
CN113011381B CN113011381B (en) 2022-09-02

Family

ID=76388182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110383857.6A Active CN113011381B (en) 2021-04-09 2021-04-09 Double-person motion recognition method based on skeleton joint data

Country Status (1)

Country Link
CN (1) CN113011381B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794446A (en) * 2015-04-22 2015-07-22 中南民族大学 Human body action recognition method and system based on synthetic descriptors
CN107301370A (en) * 2017-05-08 2017-10-27 上海大学 A kind of body action identification method based on Kinect three-dimensional framework models
CN108681700A (en) * 2018-05-04 2018-10-19 苏州大学 A kind of complex behavior recognition methods
CN111695523A (en) * 2020-06-15 2020-09-22 浙江理工大学 Double-current convolutional neural network action identification method based on skeleton space-time and dynamic information
CN111797806A (en) * 2020-07-17 2020-10-20 浙江工业大学 Three-dimensional graph convolution behavior identification method based on 2D framework

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DAWID WARCHOŁ et al.: "Human Action Recognition Using Bone Pair Descriptor and Distance Descriptor", MDPI *
IOANNIS KAPSOURAS et al.: "Action recognition by fusing depth video and skeletal data information", CrossMark *
HU Xinrong et al.: "Real-time dynamic recognition of three-dimensional human actions based on Kinect", Science Technology and Engineering *
QIAN Yimin: "Action recognition based on depth images and skeleton data", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446927A (en) * 2018-10-11 2019-03-08 西安电子科技大学 Double interbehavior recognition methods based on priori knowledge
CN109446927B (en) * 2018-10-11 2021-11-23 西安电子科技大学 Double-person interaction behavior identification method based on priori knowledge
WO2023147775A1 (en) * 2022-02-04 2023-08-10 Huawei Technologies Co., Ltd. Methods, systems, and media for identifying human coactivity in images and videos using neural networks
CN114612524A (en) * 2022-05-11 2022-06-10 西南交通大学 Motion recognition method based on RGB-D camera

Also Published As

Publication number Publication date
CN113011381B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN113011381B (en) Double-person motion recognition method based on skeleton joint data
US10719759B2 (en) System for building a map and subsequent localization
CN108229355B (en) Behavior recognition method and apparatus, electronic device, computer storage medium
Baradel et al. Human action recognition: Pose-based attention draws focus to hands
CN110135249B (en) Human behavior identification method based on time attention mechanism and LSTM (least Square TM)
JP5253588B2 (en) Capturing and recognizing hand postures using internal distance shape related methods
US8379986B2 (en) Device, method, and computer-readable storage medium for recognizing an object in an image
Fan et al. Identifying first-person camera wearers in third-person videos
US20170161546A1 (en) Method and System for Detecting and Tracking Objects and SLAM with Hierarchical Feature Grouping
Ding et al. STFC: Spatio-temporal feature chain for skeleton-based human action recognition
Shao et al. Computer vision for RGB-D sensors: Kinect and its applications [special issue intro.]
JP2009514109A (en) Discriminant motion modeling for tracking human body motion
Shao et al. Robust height estimation of moving objects from uncalibrated videos
JP2016099982A (en) Behavior recognition device, behaviour learning device, method, and program
Liu et al. A structured multi-feature representation for recognizing human action and interaction
Sun et al. When we first met: Visual-inertial person localization for co-robot rendezvous
Phadtare et al. Detecting hand-palm orientation and hand shapes for sign language gesture recognition using 3D images
Bartol et al. A review of 3D human pose estimation from 2D images
Wu et al. Multimodal human action recognition based on spatio-temporal action representation recognition model
Liu et al. Human-human interaction recognition based on spatial and motion trend feature
Islam et al. MVS‐SLAM: Enhanced multiview geometry for improved semantic RGBD SLAM in dynamic environment
Zhang et al. Human action recognition bases on local action attributes
CN107122718B (en) Novel target pedestrian trajectory tracking method based on Kinect
Zhao et al. Human pose regression through multiview visual fusion
Perera et al. Human motion analysis from UAV video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant