CN113011381B - Double-person motion recognition method based on skeleton joint data - Google Patents
Double-person motion recognition method based on skeleton joint data
- Publication number: CN113011381B (application CN202110383857.6A)
- Authority: CN (China)
- Prior art keywords: action, demonstrator, motion, sgda, pivot joint
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a double-person motion recognition method based on skeletal joint data, which comprises the following steps. Step 1: determine the coordinates of the pivot joint points of the two action demonstrators according to the defined human pivot joint point. Step 2: from the pivot joint coordinates, compute the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two motion demonstrators and each selected joint point in three-dimensional space. Step 3: concatenate the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle computed in step 2 to obtain an SGDA action descriptor, and perform classification learning on the SGDA action descriptor with a neural network model that uses the K-nearest-neighbor algorithm, completing recognition of the actions between the two action demonstrators. Because joint correlation is exploited, two-person action recognition with the SGDA action descriptor effectively improves the recognition rate.
Description
Technical Field
The invention relates to the field of motion recognition in image processing, and in particular to a double-person motion recognition method based on skeletal joint data.
Background
Human action recognition is a problem that must be addressed in many current scenarios, with numerous applications in the multimedia Internet of Things, criminal surveillance, and autonomous driving. Given these numerous applications, a need has developed for an effective two-person motion recognition system.
In existing, commonly used human action recognition methods, recognition is mostly achieved through attention-based two-stream LSTM networks, deep convolutional neural networks, hand-crafted features, Euclidean distance, and deep learning models.
However, because conventional human motion recognition methods rely excessively on image data and do not consider skeletal joint data, they are easily affected by objective environmental factors such as the illumination and background.
Disclosure of Invention
Based on the problems in the prior art, the invention aims to provide a double-person motion recognition method based on skeletal joint data that solves the problem that existing human motion recognition methods, relying excessively on image data, are easily affected by objective environmental factors such as the illumination and background.
The purpose of the invention is realized by the following technical scheme:
The embodiment of the invention provides a double-person motion recognition method based on skeletal joint data, which comprises the following steps:
Step 1: sample the motion demonstration videos of the two action demonstrators and, from each sampled frame, determine the coordinates of the pivot joint points of the two demonstrators according to the defined human pivot joint point;
Step 2: from the pivot joint coordinates, compute the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two demonstrators and each selected joint point in three-dimensional space;
Step 3: concatenate the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle computed in step 2 to obtain the SGDA action descriptor, and perform classification learning on the SGDA action descriptor with a neural network model that uses the K-nearest-neighbor algorithm, completing recognition of the actions performed by the two action demonstrators.
According to the technical scheme provided by the invention, the double-person motion identification method based on the skeletal joint data has the beneficial effects that:
Because the importance of the pivot joint is fully considered, the pivot joint point of a selected action demonstrator is used to construct the feature vector, and the concept of joint correlation is introduced. Two-dimensional and three-dimensional skeletal joint data are used together: the rectangular coordinate angle and Euclidean distance relating the pivot joint to each selected joint are computed, the sine correlation between each selected joint point and the pivot joint point is computed, a generalized Laplacian of Gaussian provides the feature description of each selected joint point, and the cosine angle is computed to combine single-person motion characteristics with the interaction information of the two-person action. The rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle are concatenated into the SGDA action descriptor, which is then used for two-person action recognition. On the SBU data set this yields a good recognition effect, reaching 98.2% accuracy and effectively improving the accuracy of two-person action recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a double-person motion recognition method based on skeletal joint data according to an embodiment of the present invention;
fig. 2 is a specific flowchart of a double-person motion recognition method based on skeletal joint data according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the skeletal joint data acquisition of the two motion demonstrators in the double-person motion recognition method based on skeletal joint data according to an embodiment of the present invention; wherein (1) and (2) are the skeletal joint data schematic diagrams of the motion demonstrators numbered 1 and 2, respectively.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the specific contents of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention. Details which are not described in detail in the embodiments of the invention belong to the prior art which is known to the person skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a double-person motion recognition method based on skeletal joint data, namely a double-person motion recognition method based on an SGDA motion descriptor of skeletal joint data, comprising the following steps:
Step 1: sample the motion demonstration videos of the two action demonstrators and, from each sampled frame, determine the coordinates of the pivot joint points of the two demonstrators according to the defined human pivot joint point;
Step 2: from the pivot joint coordinates, compute the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle between the pivot joint points of the two demonstrators and each selected joint point in three-dimensional space;
Step 3: concatenate the features computed in step 2 to obtain the SGDA action descriptor, and perform classification learning on it with a neural network model that uses the K-nearest-neighbor algorithm, completing recognition of the actions performed by the two action demonstrators.
In step 1 of the above method, the defined pivot joint point of the human body is the head of the motion demonstrator.
In step 2 of the method, the rectangular coordinate angle between the pivot joint point and a selected joint point of each of the two motion demonstrators in three-dimensional space is calculated by formula (1).
In formula (1), the first coordinate pair is the two-dimensional coordinate of the pivot joint point of the action demonstrator, the second is the two-dimensional coordinate of the selected joint point, and k is the number of the action demonstrator.
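The image of formula (1) is not reproduced in this text, so the following is only a minimal sketch of one common way to compute such a rectangular coordinate angle from two-dimensional coordinates; the patent's exact formulation may differ.

```python
import math

def rect_coord_angle(pivot_2d, joint_2d):
    """Angle (radians) of the line from the pivot joint to a selected
    joint in the 2D image plane -- a hypothetical reading of formula (1)."""
    dx = joint_2d[0] - pivot_2d[0]
    dy = joint_2d[1] - pivot_2d[1]
    # atan2 keeps the quadrant and avoids division by zero when dx == 0
    return math.atan2(dy, dx)

# A joint directly above the pivot lies at angle pi/2
print(rect_coord_angle((0.0, 0.0), (0.0, 1.0)))  # 1.5707963267948966
```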
In step 2 of the above method, the sine correlation ρ_k is calculated by formula (2).
In formula (2), the coordinate triple is the three-dimensional coordinate of the selected joint point of the action demonstrator, k is the number of the action demonstrator, and i is the number of the selected joint point, with i ranging from 1 to 14; the remaining quantity is the rectangular coordinate angle of the two motion demonstrators in three-dimensional space.
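Since the image of formula (2) is likewise not reproduced, the sketch below only illustrates the stated dependence of the sine correlation on the rectangular coordinate angle of formula (1); the actual formula (2) may combine that angle with the 3D joint coordinates differently.

```python
import math

def sine_correlation(pivot_2d, joint_2d):
    """Sine of the rectangular coordinate angle between a selected joint
    and the pivot joint -- a hypothetical stand-in for formula (2)."""
    theta = math.atan2(joint_2d[1] - pivot_2d[1], joint_2d[0] - pivot_2d[0])
    return math.sin(theta)

# A joint directly above the pivot gives sine correlation 1
print(sine_correlation((0.0, 0.0), (0.0, 1.0)))  # 1.0
```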
In step 2 of the above method, the Laplacian of Gaussian G is calculated by formula (3).
In formula (3), the coordinate triple is the three-dimensional coordinate of the selected joint point of the action demonstrator; k is the number of the action demonstrator, taking the value 1 or 2; i is the number of the selected joint point, with i ranging from 1 to 14; and z_k (k = 1, 2) denotes the standard deviation σ_k (k = 1, 2).
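The image of formula (3) is not reproduced; as an illustration under that caveat, the standard Laplacian-of-Gaussian kernel evaluated on the 3D offset between a selected joint and the pivot can be sketched as follows (the patent's generalized operator and its use of z_k may differ):

```python
import math

def laplacian_of_gaussian(pivot_3d, joint_3d, sigma=1.0):
    """Standard LoG kernel applied to the 3D offset between a selected
    joint and the pivot joint -- a hypothetical reading of formula (3)."""
    r2 = sum((j - p) ** 2 for j, p in zip(joint_3d, pivot_3d))
    s2 = sigma * sigma
    # LoG(r) = -1/(pi*sigma^4) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2))
    return -1.0 / (math.pi * s2 * s2) * (1.0 - r2 / (2.0 * s2)) * math.exp(-r2 / (2.0 * s2))
```

At zero offset and sigma = 1 the response is -1/pi, the kernel's central value.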
In step 2 of the method, the Euclidean distance between the pivot joint point and a selected joint point of each of the two motion demonstrators in three-dimensional space is calculated by formula (4).
In formula (4), the first coordinate triple is the three-dimensional coordinate of the pivot joint point of the action demonstrator and the second is the three-dimensional coordinate of the selected joint point; k is the number of the action demonstrator, and i is the number of the selected joint point, with i ranging from 1 to 14.
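Formula (4) is the ordinary Euclidean distance between two 3D points, which can be sketched directly:

```python
import math

def euclidean_distance(pivot_3d, joint_3d):
    """Formula (4): straight-line distance between the pivot joint point
    and a selected joint point in 3D space."""
    return math.sqrt(sum((j - p) ** 2 for j, p in zip(joint_3d, pivot_3d)))

# Classic 3-4-5 triangle in the xy-plane
print(euclidean_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
```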
In step 2 of the method, the cosine angles between the pivot joint points and the selected joint points of the two motion demonstrators in three-dimensional space are calculated by formula (5).
In formula (5), the first coordinate triple is the three-dimensional coordinate of the pivot joint point of the action demonstrator, and k is the number of the action demonstrator; the second is the three-dimensional coordinate of the selected joint point, and i is the number of the selected joint point, with i ranging from 1 to 14.
When k = l, the cosine angle of a selected joint point of one person relative to that same person's pivot joint point is calculated;
when k ≠ l, the cosine angle of one person's selected joint point relative to the other person's pivot joint point is calculated.
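The image of formula (5) is not reproduced; one plausible reading uses the cosine of the angle between the two position vectors, which covers both the k = l and k ≠ l cases simply by choosing whose pivot and whose selected joint are passed in:

```python
import math

def cosine_angle(pivot_3d, joint_3d):
    """Cosine of the angle between the pivot-joint vector and a selected
    joint vector (both from the sensor origin) -- a hypothetical reading
    of formula (5). Pass a pivot and joint of the same demonstrator for
    k = l, or of different demonstrators for k != l."""
    dot = sum(p * j for p, j in zip(pivot_3d, joint_3d))
    norm_p = math.sqrt(sum(p * p for p in pivot_3d))
    norm_j = math.sqrt(sum(j * j for j in joint_3d))
    return dot / (norm_p * norm_j)

# Orthogonal vectors have cosine 0; parallel vectors have cosine 1
print(cosine_angle((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))  # 0.0
```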
In step 3 of the above method, the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle calculated in step 2 are concatenated to obtain the SGDA action descriptor.
The SGDA action descriptor of a single joint s is given by formula (6).
In formula (6), the first pair of terms are the rectangular coordinate angles of the two action demonstrators in three-dimensional space; ρ_1 and ρ_2 are their sine correlations; D_1 and D_2 are their Euclidean distances; G_1 and G_2 are their Gaussian descriptions; and the final terms are their cosine angles, comprising both the cosine angles between each demonstrator's own joints and the cosine angles between the joints of the two demonstrators in three-dimensional space.
The SGDA action descriptors of all joint points other than the pivot joint point are concatenated to obtain the final SGDA action descriptor describing the two-person action: SGDA = [SGDA_1, SGDA_2, …, SGDA_S], where S is the number of joints other than the pivot joint, S = 14.
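The concatenation step can be sketched as follows; the per-joint feature values here are illustrative placeholders, not actual formula (6) outputs.

```python
def build_sgda(per_joint_features):
    """Concatenate the per-joint SGDA_s vectors (one per non-pivot joint,
    S = 14 in the patent) into one flat descriptor:
    SGDA = [SGDA_1, SGDA_2, ..., SGDA_S]."""
    descriptor = []
    for sgda_s in per_joint_features:
        descriptor.extend(sgda_s)
    return descriptor

# 14 joints with, say, 2 placeholder features each -> 28-dimensional descriptor
print(len(build_sgda([[0.1, 0.2]] * 14)))  # 28
```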
The recognition method fully considers the importance of the pivot joint: the head joint point of the action demonstrator serves as the pivot joint point for constructing the feature vector, and the concept of joint correlation is introduced. Using two-dimensional and three-dimensional skeletal joint data together, the method computes the rectangular coordinate angle and Euclidean distance relating the pivot joint to each selected joint, the sine correlation between each selected joint and the pivot joint point, a generalized Laplacian of Gaussian as the feature description of each selected joint point, and the cosine angle, which combines single-person motion characteristics with the interaction information of the two-person action. These five features are concatenated into the SGDA action descriptor, which is used for two-person action recognition and achieves a good recognition effect on the SBU data set, reaching 98.2% accuracy.
The embodiments of the present invention are described in further detail below.
The embodiment of the invention provides a double-person motion recognition method using an SGDA motion descriptor based on skeletal joint data, which fully explores the spatial information of human body motions and mainly comprises the following steps (see fig. 2):
Step 1: sample the motion demonstration videos of the two demonstrators and determine the pivot joint coordinates from each frame;
Step 2: compute the rectangular coordinate angle, sine correlation, Laplacian of Gaussian, Euclidean distance, and cosine angle features;
Step 3: concatenate the features from step 2 to obtain the SGDA action descriptor and learn with the K-nearest-neighbor algorithm.
Referring to fig. 3, the coordinate data of 15 joint points of each motion demonstrator are acquired by Kinect V1, and the features of the SGDA motion descriptor are calculated with the head joint (the joint point numbered 1) as the pivot joint point; in the method, the head is the pivot joint of the motion descriptor, so the head joint is used as the reference point. The rectangular coordinate angle, Euclidean distance, and cosine angle features are calculated from the three-dimensional pivot joint coordinates, while the sine correlation between joints is calculated indirectly from the two-dimensional pivot joint coordinates. The coordinates of the pivot joint points of the two motion demonstrators are denoted separately for each demonstrator, as are the coordinates of the selected joint points used to compute the features (besides the pivot joint point, 14 further joint points can be seen in fig. 3).
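As a small illustration of this joint layout (the numbering here is assumed from the text: joint 1 is the head pivot, leaving 14 selected joints; the coordinates are made up):

```python
# Hypothetical Kinect-v1-style frame: 15 numbered joints per demonstrator
frame = {i: (0.1 * i, 0.2 * i, 0.3 * i) for i in range(1, 16)}

pivot = frame[1]                              # head joint as reference point
selected = [frame[i] for i in range(2, 16)]   # the 14 remaining joints

print(len(selected))  # 14
```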
In the method, the rectangular coordinate angle and slope between the pivot joint and each selected joint are calculated first. The rectangular coordinate angle and slope indicate the angle of correlation between two joint points in two-dimensional space. The importance of the rectangular coordinate angle is that it not only acts independently as a feature but also assists in calculating other features. The rectangular coordinate angles of the two action demonstrators are calculated by the following formula:
In the method, the sine correlation is part of the motion descriptor and is computed from the rectangular coordinate angle feature together with the coordinates of the corresponding joint. The formula for the sine correlation is as follows:
In the method, the Laplacian of Gaussian serves as the feature description of each selected joint point. The Laplacian-of-Gaussian relationship can be extended so that three-dimensional joint points are used as feature vectors. In the following Laplacian of Gaussian, the standard deviation σ_k (k = 1, 2) is denoted z_k (k = 1, 2):
In the method, the Euclidean distance is calculated between the pivot joint point and each selected joint point; this feature gives the Euclidean distance between the reference node (i.e., the pivot joint) and the selected joint and is an important factor in the motion recognition descriptor. The Euclidean distance is:
In the method, the cosine angle is calculated between the pivot joint point and each selected joint point. The cosine angle is a necessary factor in the action descriptor because it captures the angular correlation between the pivot joint point and the selected joint point in three-dimensional space. Four cosine angles are calculated:
1) the cosine angle between a person's own pivot joint and each of that person's remaining selected joint points;
2) the cosine angle between one person's pivot joint point and each of the other person's selected joint points;
the formula for calculating the cosine angle is as follows:
in the formula, when k = l, the cosine angle of each joint point of a person relative to that same person's pivot joint point is calculated;
when k ≠ l, the cosine angles of one person's joint points relative to the other person's pivot joint point are calculated.
In the method, the sine correlation, generalized Laplacian of Gaussian, Euclidean distance, cosine angle, and rectangular coordinate angle between the pivot joint and the other selected joints are concatenated as features to obtain the SGDA action descriptor. The SGDA is calculated frame by frame, and both two-dimensional and three-dimensional skeletal joint data are embedded to improve it, which helps build a robust two-person motion recognition system. The SGDA motion descriptor contains the information between the pivot joint and the selected joints; specifically, the SGDA expression of a single joint s is:
The SGDA descriptors corresponding to all joints other than the pivot joint point are concatenated to obtain the final SGDA descriptor describing the two-person action: SGDA = [SGDA_1, SGDA_2, …, SGDA_S], where S is the number of joints other than the pivot joint; from fig. 3 it can be determined that S = 14.
The accuracy of the recognition method of the invention on the SBU data set is shown in Table 1, which covers 8 different categories: Approaching, Departing (Separating), Exchanging, Hugging, Kicking, Punching, Pushing, and Shaking Hands. Table 1 is as follows:
as can be seen from Table 1, the recognition method of the present invention achieves 98.2% accuracy on the SBU data set, 100% accuracy on the "close" action category, and 97.1% worst recognition on the "push" action category data set. The number of wrong judgments of the push action recognition is concentrated in the embrace action category, because the contents of a plurality of frames are the same under the push action category and the embrace action category; and classifying by a K-nearest neighbor classifier.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (2)
1. A double-person motion recognition method based on skeletal joint data is characterized by comprising the following steps:
step 1, sampling motion demonstration videos of two motion demonstrator, and determining coordinates of pivot joint points of the two motion demonstrator according to defined human body pivot joint points from each frame of image obtained by sampling;
step 2, calculating by using the coordinates of the pivot joint points to obtain a rectangular coordinate angle, a sine correlation, a Gaussian Laplacian operator, an Euclidean distance and a cosine angle of the pivot joint points of the two motion demonstrators and the selected joint point in a three-dimensional space; in the step 2, the rectangular coordinate angles of the pivot joint points of the two motion demonstrators and the selected joint point in the three-dimensional space are calculated by the following formula (1)
In the formula (1), the reaction mixture is,is the two-dimensional coordinate of the pivot joint point of the action demonstrator, and k is the number of the action demonstrator;the two-dimensional coordinates of the selected joint point are selected, and k is the number of the action demonstrator;
in the step 2, the sine correlation ρ_k is calculated by the following formula (2):
in formula (2), the coordinate triple is the three-dimensional coordinate of the selected joint point of the action demonstrator, k is the number of the action demonstrator, and i is the number of the selected joint point, with i ranging from 1 to 14; the remaining quantity is the rectangular coordinate angle of the two motion demonstrators in three-dimensional space;
in the step 2, a gaussian laplacian G is obtained by calculating according to the following formula (3):
in formula (3), the coordinate triple is the three-dimensional coordinate of the selected joint point of the action demonstrator; k is the number of the action demonstrator, taking the value 1 or 2; i is the number of the selected joint point, with i ranging from 1 to 14; and z_k (k = 1, 2) denotes the standard deviation σ_k (k = 1, 2);
In the step 2, the Euclidean distance between the pivot joint point of the two motion demonstrators and the selected joint point in the three-dimensional space is calculated by the following formula (4)
in formula (4), the first coordinate triple is the three-dimensional coordinate of the pivot joint point of the action demonstrator and the second is the three-dimensional coordinate of the selected joint point; k is the number of the action demonstrator, and i is the number of the selected joint point, with i ranging from 1 to 14;
in the step 2, the cosine angles of the pivot joint points and the selected joint points of the two motion demonstrator in the three-dimensional space are calculated by the following formula (5)
in formula (5), the first coordinate triple is the three-dimensional coordinate of the pivot joint point of the action demonstrator, and k is the number of the action demonstrator; the second is the three-dimensional coordinate of the selected joint point, and i is the number of the selected joint point, with i ranging from 1 to 14;
when k = l, the cosine angle of a selected joint point of one person relative to that same person's pivot joint point is calculated;
when k ≠ l, the cosine angle of one person's selected joint point relative to the other person's pivot joint point is calculated;
step 3, connecting the rectangular coordinate angle, the sine correlation, the Gaussian Laplace operator, the Euclidean distance and the cosine angle which are obtained by calculation in the step 2 to obtain an SGDA action descriptor, and performing classification learning on the SGDA action descriptor by adopting a neural network model of a K-nearest neighbor algorithm to finish the identification of actions finished by two action demonstrators;
in step 3 of the method, the rectangular coordinate angle, the sine correlation, the laplacian of gaussian operator, the euclidean distance, and the cosine angle calculated in step 2 are connected to obtain an SGDA action descriptor:
the SGDA action descriptor expression for a single joint s is:
in formula (6), the first pair of terms are the rectangular coordinate angles of the two action demonstrators in three-dimensional space; ρ_1 and ρ_2 are their sine correlations; D_1 and D_2 are their Euclidean distances; G_1 and G_2 are their Gaussian descriptions; and the final terms are their cosine angles, comprising both the cosine angles between each demonstrator's own joints and the cosine angles between the joints of the two demonstrators in three-dimensional space;
connecting the SGDA action descriptors corresponding to all joint points other than the pivot joint point to obtain the final SGDA action descriptor describing the two-person action: SGDA = [SGDA_1, SGDA_2, …, SGDA_S], where S is the number of joints other than the pivot joint, S = 14.
2. The double-person motion recognition method based on skeletal joint data according to claim 1, wherein the human body pivot joint point defined in step 1 is the head of the motion demonstrator.
Priority Applications (1)
- CN202110383857.6A — priority date 2021-04-09, filing date 2021-04-09, title: Double-person motion recognition method based on skeleton joint data
Publications (2)
- CN113011381A, published 2021-06-22
- CN113011381B, published 2022-09-02
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant