CN111222437A - Human body posture estimation method based on multi-depth image feature fusion - Google Patents

Human body posture estimation method based on multi-depth image feature fusion

Info

Publication number
CN111222437A
Authority
CN
China
Prior art keywords
human body
joint
joint point
depth image
sensor
Prior art date
Legal status
Pending
Application number
CN201911403474.XA
Other languages
Chinese (zh)
Inventor
张文安
贾晓凌
谢长值
杨旭升
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201911403474.XA priority Critical patent/CN111222437A/en
Publication of CN111222437A publication Critical patent/CN111222437A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A human body posture estimation method based on multi-depth image feature fusion adopts a distributed fusion approach to solve the problem of multi-sensor information fusion for human posture estimation in complex scenes. By fusing human posture information from multiple 3D vision sensors, it effectively overcomes factors that degrade posture estimation, such as view occlusion, misrecognition of human body parts, and sudden motion changes. The method effectively improves the accuracy and robustness of human body posture estimation.

Description

Human body posture estimation method based on multi-depth image feature fusion
Technical Field
The invention belongs to the field of human body posture estimation, and particularly relates to a human body posture estimation method based on multi-depth image feature fusion.
Background
With the continuous development of 3D vision and artificial intelligence technologies, 3D vision sensors are being applied ever more widely and play an increasingly important role in the field of human body posture estimation. Human body posture estimation based on 3D vision has been applied to behavior recognition, behavior prediction, human-computer interaction, video surveillance, virtual reality, and related fields, for example rehabilitation training of injured patients, training analysis for athletes, and character animation for 3D films.
At present, human body posture estimation based on 3D vision is fairly mature: depth image information allows the foreground to be segmented from the background quickly, the joints of the human body are identified with a random-forest-based method, and the 3D human posture can then be computed and output. However, many factors that affect posture estimation, such as visual occlusion, misrecognition of body parts, sudden motion changes, and dynamic changes in the environment, cause large deviations in the measurements, so it is difficult to capture complete and reliable posture information with a single 3D vision sensor. To make a posture estimation system robust to adverse factors such as occlusion and environmental change, an effective approach is to fuse human posture information from multiple 3D vision sensors into a complete and reliable estimate. In existing 3D visual human posture estimation, however, no technique robustly and effectively fuses the information of multiple 3D vision sensors to solve posture estimation in complex scenes.
Disclosure of Invention
To overcome the poor robustness of a single 3D vision sensor to occlusion, sudden motion changes, dynamic scene changes, and the like, the invention provides a human body posture estimation method based on multi-depth image feature fusion: a distributed fusion method fuses the information of multiple 3D vision sensors to obtain an estimate of the human posture, effectively improving the accuracy and robustness of human body posture estimation.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a human body posture estimation method based on multi-depth image feature fusion comprises the following steps:
step 1) determining the world coordinate system and the rotation-translation relationship between each camera coordinate system and the world coordinate system; establishing a kinematic model of each human joint and a measurement model of each sensor; and determining the process noise covariance $Q_{i,k}$ of each joint, the measurement noise covariance $R_{i,k}^l$ of each sensor and the other parameters, together with the initial state $\hat{x}_{i,0|0}^l$ of each joint under each sensor;
step 2) computing, from the kinematic model of each joint, the predicted state $\hat{x}_{i,k|k-1}^l$ of each joint under each sensor at time k and its covariance $P_{i,k|k-1}^l$;
step 3) reading a depth image from each 3D vision sensor, identifying and computing the joint positions $z_{i,k}^l$ with a random forest method on the depth image, and computing the residual $e_{i,k}^l$ under each sensor and its covariance $S_{i,k}^l$;
step 4) computing the Kalman filter gain $K_{i,k}^l$ of each joint under each sensor at time k, then the state estimate $\hat{x}_{i,k|k}^l$ of each joint at time k and its covariance $P_{i,k|k}^l$;
step 5) transforming the state estimate $\hat{x}_{i,k|k}^l$ of each joint under each sensor and its covariance $P_{i,k|k}^l$ into the world coordinate system, denoted $\hat{x}_{i,k|k}^{w,l}$ and $P_{i,k|k}^{w,l}$ respectively;
step 6) fusing the per-sensor state estimates $\hat{x}_{i,k|k}^{w,l}$ and covariances $P_{i,k|k}^{w,l}$ of all joints with a distributed fusion method to compute the fused state estimate $\hat{x}_{i,k|k}^f$ of each joint at time k and its covariance $P_{i,k|k}^f$;
repeating steps 2)-6) to complete the posture estimation of each human joint and obtain the human body posture estimate fusing multi-depth-image features.
In step 1), i denotes the index of a human joint, i = 1, ..., 25; the joints include the head joint, thoracic joint, shoulder joints, elbow joints, wrist joints, and the other human joints to be estimated; l denotes the index of a vision sensor, l = 1, 2, ..., n, where n ≥ 2 is the number of sensors; k is the discrete time index.
In step 1), the state of a human joint is its position along the x, y and z axes of the corresponding camera coordinate system.
In step 3), the residual $e_{i,k}^l$ is the difference between the measured value $z_{i,k}^l$ of each joint under each sensor and its predicted value $\hat{x}_{i,k|k-1}^l$.
In step 3), the positions of the human joints are computed from the read depth image, on the basis of human body part recognition with a random forest method.
In step 5), the superscript l denotes the estimation result of sensor l in the world coordinate system.
In step 6), the superscript f denotes the fused estimation result.
The invention has the following beneficial effects: to counter the occlusion, environmental change, and similar weaknesses of a single 3D vision sensor when capturing human posture, a distributed fusion method fuses the information of multiple 3D vision sensors to estimate the human posture, reducing the adverse effects of complex environments and effectively improving the accuracy and robustness of human body posture estimation.
Drawings
Fig. 1 is a schematic diagram of the human joints in a depth image.
Fig. 2 is a flow chart of human body pose estimation based on multi-depth image feature fusion.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, a human body posture estimation method based on multi-depth image feature fusion includes the following steps:
step 1) determining the world coordinate system and the rotation-translation relationship between each camera coordinate system and the world coordinate system; establishing a kinematic model of each human joint and a measurement model of each sensor; and determining the process noise covariance $Q_{i,k}$ of each joint, the measurement noise covariance $R_{i,k}^l$ of each sensor and the other parameters, together with the initial state $\hat{x}_{i,0|0}^l$ of each joint under each sensor;
step 2) computing, from the kinematic model of each joint, the predicted state $\hat{x}_{i,k|k-1}^l$ of each joint under each sensor at time k and its covariance $P_{i,k|k-1}^l$;
step 3) reading a depth image from each 3D vision sensor, identifying and computing the joint positions $z_{i,k}^l$ with a random forest method on the depth image, and computing the residual $e_{i,k}^l$ under each sensor and its covariance $S_{i,k}^l$;
step 4) computing the Kalman filter gain $K_{i,k}^l$ of each joint under each sensor at time k, then the state estimate $\hat{x}_{i,k|k}^l$ of each joint at time k and its covariance $P_{i,k|k}^l$;
step 5) transforming the state estimate $\hat{x}_{i,k|k}^l$ of each joint under each sensor and its covariance $P_{i,k|k}^l$ into the world coordinate system, denoted $\hat{x}_{i,k|k}^{w,l}$ and $P_{i,k|k}^{w,l}$ respectively;
step 6) fusing the per-sensor state estimates $\hat{x}_{i,k|k}^{w,l}$ and covariances $P_{i,k|k}^{w,l}$ of all joints with a distributed fusion method to compute the fused state estimate $\hat{x}_{i,k|k}^f$ of each joint at time k and its covariance $P_{i,k|k}^f$;
repeating steps 2)-6) to complete the posture estimation of each human joint and obtain the human body posture estimate fusing multi-depth-image features.
As shown in fig. 1, the human posture estimation problem is decomposed into position estimation problems for the individual human joints, of which there are 25, including the head joint, thoracic joint, shoulder joints, elbow joints, and wrist joints. A flowchart of human body posture estimation based on multi-depth image feature fusion is shown in fig. 2. First, the camera coordinate system of each sensor is calibrated against the world coordinate system, and the rotation-translation relationship between them is determined. Then the kinematic model of each human joint and the measurement model of each sensor are established:
$x_{i,k} = x_{i,k-1} + w_{i,k}$ (1)

$z_{i,k}^l = x_{i,k} + v_{i,k}^l$ (2)

wherein k = 1, 2, ... is the discrete time index; $x_{i,k} = [x_{i,k}^x, x_{i,k}^y, x_{i,k}^z]^T$ is the state of human joint i, with i = 1, 2, ..., m the joint index and m = 25, and $x_{i,k}^x$, $x_{i,k}^y$, $x_{i,k}^z$ the coordinates of joint i on the x, y and z axes at time k; $w_{i,k}$ is zero-mean Gaussian white noise with covariance $Q_{i,k}$. $z_{i,k}^l = [z_{i,k}^{l,x}, z_{i,k}^{l,y}, z_{i,k}^{l,z}]^T$ is the measurement of each joint in the camera coordinate system of sensor l, with $z_{i,k}^{l,x}$, $z_{i,k}^{l,y}$, $z_{i,k}^{l,z}$ the measured coordinates on the x, y and z axes at time k; $v_{i,k}^l$ is zero-mean Gaussian white noise with covariance $R_{i,k}^l$, where l = 1, ..., n; the measurement noises $v_{i,k}^l$ are mutually uncorrelated and uncorrelated with $w_{i,k}$. The initial state and covariance of each joint under each sensor are determined as $\hat{x}_{i,0|0}^l$ and $P_{i,0|0}^l$.
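As a concrete illustration (not part of the patent text), models (1)-(2) can be set up directly in code. The sketch below uses Python with NumPy; the array names, dimensions, and noise scales are assumptions chosen for the example, and a real system would tune the covariances to its sensors:

```python
import numpy as np

m = 25  # number of human joints (head, thoracic, shoulders, elbows, wrists, ...)
n = 2   # number of 3D vision sensors (the method requires n >= 2)

# Process noise Q_{i,k} per joint and measurement noise R^l_{i,k} per sensor/joint;
# the numeric scales are placeholders, not values from the patent.
Q = np.stack([1e-4 * np.eye(3) for _ in range(m)])
R = np.stack([np.stack([1e-2 * np.eye(3) for _ in range(m)]) for _ in range(n)])

# Initial states x^l_{i,0|0} and covariances P^l_{i,0|0} for each sensor and joint.
x_hat = np.zeros((n, m, 3))
P = np.stack([np.stack([np.eye(3) for _ in range(m)]) for _ in range(n)])
```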
Second, the predicted state $\hat{x}_{i,k|k-1}^l$ of each joint under each sensor and its covariance $P_{i,k|k-1}^l$ are computed, together with the residual $e_{i,k}^l$ and its covariance $S_{i,k}^l$. Third, the Kalman filter gain $K_{i,k}^l$ of each joint under each sensor, the state estimate $\hat{x}_{i,k|k}^l$ and its covariance $P_{i,k|k}^l$ are computed. Then the state estimate $\hat{x}_{i,k|k}^l$ of each joint and its covariance $P_{i,k|k}^l$ are transformed into the world coordinate system as $\hat{x}_{i,k|k}^{w,l}$ and $P_{i,k|k}^{w,l}$. Finally, the fused state estimate $\hat{x}_{i,k|k}^f$ of each joint and its covariance $P_{i,k|k}^f$ are computed.
From the kinematic model of each joint and the state estimate $\hat{x}_{i,k-1|k-1}^l$ and covariance $P_{i,k-1|k-1}^l$ of the previous time step, the predicted state $\hat{x}_{i,k|k-1}^l$ of each joint under each sensor, its covariance $P_{i,k|k-1}^l$, the residual $e_{i,k}^l$ and its covariance $S_{i,k}^l$ are computed as follows:

$\hat{x}_{i,k|k-1}^l = \hat{x}_{i,k-1|k-1}^l$ (3)

$P_{i,k|k-1}^l = P_{i,k-1|k-1}^l + Q_{i,k}$ (4)

$e_{i,k}^l = z_{i,k}^l - \hat{x}_{i,k|k-1}^l$ (5)

$S_{i,k}^l = P_{i,k|k-1}^l + R_{i,k}^l$ (6)
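For illustration only, the prediction and residual step (3)-(6) for one joint under one sensor reduces to a few lines; this is a hedged sketch, with function and variable names assumed rather than taken from the patent:

```python
import numpy as np

def predict_and_residual(x_prev, P_prev, z, Q_i, R_il):
    """Equations (3)-(6): random-walk prediction and measurement residual
    for one joint i under one sensor l; inputs are 3-vectors or 3x3 arrays."""
    x_pred = x_prev            # (3) state prediction (random-walk model)
    P_pred = P_prev + Q_i      # (4) prediction covariance
    e = z - x_pred             # (5) residual
    S = P_pred + R_il          # (6) residual covariance
    return x_pred, P_pred, e, S
```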
calculating Kalman filtering gain of each joint point of human body under each sensor
Figure BDA00023480284800000623
And obtaining the state estimation value of each joint point of the human body at the moment k
Figure BDA00023480284800000624
And its covariance
Figure BDA00023480284800000625
Figure BDA00023480284800000626
Figure BDA00023480284800000627
Figure BDA00023480284800000628
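A corresponding sketch of the update step (7)-(9); because the measurement model (2) observes the position directly, no measurement matrix appears (it is the identity):

```python
import numpy as np

def kalman_update(x_pred, P_pred, e, S):
    """Equations (7)-(9): Kalman gain, state update, covariance update."""
    K = P_pred @ np.linalg.inv(S)      # (7) gain
    x_upd = x_pred + K @ e             # (8) state estimate
    P_upd = (np.eye(3) - K) @ P_pred   # (9) covariance estimate
    return x_upd, P_upd
```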
The state estimate of each joint under each sensor in the world coordinate system, $\hat{x}_{i,k|k}^{w,l}$, and its covariance $P_{i,k|k}^{w,l}$ are computed with the transformation:

$\hat{x}_{i,k|k}^{w,l} = R_l \hat{x}_{i,k|k}^l + T_l$ (10)

$P_{i,k|k}^{w,l} = R_l P_{i,k|k}^l R_l^T$ (11)

wherein $(R_l, T_l)$ is the rotation-translation relationship between the camera coordinate system of sensor l and the world coordinate system.
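A minimal sketch of transformation (10)-(11), assuming R_l and T_l come from the extrinsic calibration of step 1):

```python
import numpy as np

def to_world(x_upd, P_upd, R_l, T_l):
    """Equations (10)-(11): map a per-sensor estimate into the world frame."""
    x_w = R_l @ x_upd + T_l       # (10) rotate and translate the state
    P_w = R_l @ P_upd @ R_l.T     # (11) rotate the covariance
    return x_w, P_w
```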
The per-sensor state estimates $\hat{x}_{i,k|k}^{w,l}$ of each joint are fused with the distributed fusion method, and the fused state estimate $\hat{x}_{i,k|k}^f$ and its covariance $P_{i,k|k}^f$ are computed:

$P_{i,k|k}^f = \left( \sum_{l=1}^{n} (P_{i,k|k}^{w,l})^{-1} \right)^{-1}$ (12)

$\hat{x}_{i,k|k}^f = P_{i,k|k}^f \sum_{l=1}^{n} (P_{i,k|k}^{w,l})^{-1} \hat{x}_{i,k|k}^{w,l}$ (13)
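Equations (12)-(13) amount to an information-weighted average of the per-sensor estimates; the following hedged sketch mirrors that reading (names are assumptions, not from the patent):

```python
import numpy as np

def fuse(estimates):
    """Equations (12)-(13): fuse world-frame estimates [(x_w, P_w), ...] of one joint."""
    infos = [np.linalg.inv(P_w) for _, P_w in estimates]   # information matrices
    P_f = np.linalg.inv(sum(infos))                        # (12) fused covariance
    x_f = P_f @ sum(I_l @ x_w for I_l, (x_w, _) in zip(infos, estimates))  # (13)
    return x_f, P_f
```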
Equations (3)-(13) are executed repeatedly to complete the state estimation of all human joints, thereby obtaining the human body posture estimate based on multi-depth image feature fusion.
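Tying the pieces together, one time step of this loop might look like the driver below, reusing the helper functions sketched above (shapes follow the arrays of the first sketch; this is illustrative, not the patent's implementation):

```python
def fusion_step(x_hat, P, z, Q, R, R_list, T_list):
    """One pass of steps 2)-6) over all joints and sensors at time k.
    z has shape (n, m, 3); R_list/T_list hold each sensor's extrinsics."""
    n_sensors, m_joints = z.shape[0], z.shape[1]
    fused = []
    for i in range(m_joints):
        world = []
        for l in range(n_sensors):
            xp, Pp, e, S = predict_and_residual(x_hat[l, i], P[l, i],
                                                z[l, i], Q[i], R[l, i])
            x_hat[l, i], P[l, i] = kalman_update(xp, Pp, e, S)
            world.append(to_world(x_hat[l, i], P[l, i], R_list[l], T_list[l]))
        fused.append(fuse(world))
    return fused  # [(x^f_i, P^f_i)] for each joint i
```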

Claims (7)

1. A human body posture estimation method based on multi-depth image feature fusion, characterized by comprising the following steps:
step 1) determining the world coordinate system and the rotation-translation relationship between each camera coordinate system and the world coordinate system; establishing a kinematic model of each human joint and a measurement model of each sensor; and determining the process noise covariance $Q_{i,k}$ of each joint, the measurement noise covariance $R_{i,k}^l$ of each sensor and the other parameters, together with the initial state $\hat{x}_{i,0|0}^l$ of each joint under each sensor;
step 2) computing, from the kinematic model of each joint, the predicted state $\hat{x}_{i,k|k-1}^l$ of each joint under each sensor at time k and its covariance $P_{i,k|k-1}^l$;
step 3) reading a depth image from each 3D vision sensor, identifying and computing the joint positions $z_{i,k}^l$ with a random forest method on the depth image, and computing the residual $e_{i,k}^l$ under each sensor and its covariance $S_{i,k}^l$;
step 4) computing the Kalman filter gain $K_{i,k}^l$ of each joint under each sensor at time k, then the state estimate $\hat{x}_{i,k|k}^l$ of each joint at time k and its covariance $P_{i,k|k}^l$;
step 5) transforming the state estimate $\hat{x}_{i,k|k}^l$ of each joint under each sensor and its covariance $P_{i,k|k}^l$ into the world coordinate system, denoted $\hat{x}_{i,k|k}^{w,l}$ and $P_{i,k|k}^{w,l}$ respectively;
step 6) fusing the per-sensor state estimates $\hat{x}_{i,k|k}^{w,l}$ and covariances $P_{i,k|k}^{w,l}$ of all joints with a distributed fusion method to compute the fused state estimate $\hat{x}_{i,k|k}^f$ of each joint at time k and its covariance $P_{i,k|k}^f$;
repeating steps 2)-6) to complete the posture estimation of each human joint and obtain the human body posture estimate fusing multi-depth-image features.
2. The human body posture estimation method based on multi-depth image feature fusion as claimed in claim 1, characterized in that: in step 1), i denotes the index of a human joint, i = 1, ..., 25; the joints include the head joint, thoracic joint, shoulder joints, elbow joints, wrist joints, and the other human joints to be estimated; l denotes the index of a vision sensor, l = 1, 2, ..., n, where n ≥ 2 is the number of sensors.
3. The human body posture estimation method based on multi-depth image feature fusion as claimed in claim 1 or 2, characterized in that: in the step 1), the state of the human body joint point is the position of each joint point on the x, y and z axes of each camera coordinate system.
4. The human body posture estimation method based on multi-depth image feature fusion as claimed in claim 1 or 2, characterized in that: in step 3), the residual $e_{i,k}^l$ is the difference between the measured value $z_{i,k}^l$ of each joint under each sensor and its predicted value $\hat{x}_{i,k|k-1}^l$.
5. The human body posture estimation method based on multi-depth image feature fusion as claimed in claim 1 or 2, characterized in that: in step 3), the positions of the human joints are computed from the read depth image, on the basis of human body part recognition with a random forest method.
6. The human body posture estimation method based on multi-depth image feature fusion as claimed in claim 1 or 2, characterized in that: in step 5), the superscript l denotes the estimation result of sensor l in the world coordinate system.
7. The human body posture estimation method based on multi-depth image feature fusion as claimed in claim 1 or 2, characterized in that: in the step 6), the superscript f represents a fusion estimation result.
CN201911403474.XA 2019-12-31 2019-12-31 Human body posture estimation method based on multi-depth image feature fusion Pending CN111222437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403474.XA CN111222437A (en) 2019-12-31 2019-12-31 Human body posture estimation method based on multi-depth image feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911403474.XA CN111222437A (en) 2019-12-31 2019-12-31 Human body posture estimation method based on multi-depth image feature fusion

Publications (1)

Publication Number Publication Date
CN111222437A (en) 2020-06-02

Family

ID=70827938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403474.XA Pending CN111222437A (en) 2019-12-31 2019-12-31 Human body posture estimation method based on multi-depth image feature fusion

Country Status (1)

Country Link
CN (1) CN111222437A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130250050A1 (en) * 2012-03-23 2013-09-26 Objectvideo, Inc. Video surveillance systems, devices and methods with improved 3d human pose and shape modeling
CN106096565A (en) * 2016-06-16 2016-11-09 山东大学 Mobile robot based on sensing network and the task cooperative method of static sensor
CN106127119A (en) * 2016-06-16 2016-11-16 山东大学 Joint probabilistic data association method based on coloured image and depth image multiple features
CN106897670A (en) * 2017-01-19 2017-06-27 南京邮电大学 A kind of express delivery violence sorting recognition methods based on computer vision
CN108549876A (en) * 2018-04-20 2018-09-18 重庆邮电大学 The sitting posture detecting method estimated based on target detection and human body attitude
CN108871337A (en) * 2018-06-21 2018-11-23 浙江工业大学 Object pose estimation method under circumstance of occlusion based on multiple vision sensor distributed information fusion
CN110097639A (en) * 2019-03-18 2019-08-06 北京工业大学 A kind of 3 D human body Attitude estimation method
CN110174907A (en) * 2019-04-02 2019-08-27 诺力智能装备股份有限公司 A kind of human body target follower method based on adaptive Kalman filter
CN110530365A (en) * 2019-08-05 2019-12-03 浙江工业大学 A kind of estimation method of human posture based on adaptive Kalman filter

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HO YUB JUNG et al.: "Random tree walk toward instantaneous 3D human pose estimation"
唐心宇, 宋爱国: "人体姿态估计及在康复训练情景交互中的应用" [Human posture estimation and its application in interactive rehabilitation-training scenarios]

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131928A (en) * 2020-08-04 2020-12-25 浙江工业大学 Human body posture real-time estimation method based on RGB-D image feature fusion
CN112131928B (en) * 2020-08-04 2024-06-18 浙江工业大学 Human body posture real-time estimation method based on RGB-D image feature fusion

Similar Documents

Publication Publication Date Title
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
CN110530365B (en) Human body attitude estimation method based on adaptive Kalman filtering
JP4148281B2 (en) Motion capture device, motion capture method, and motion capture program
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
Yu et al. HeadFusion: 360° Head Pose Tracking Combining 3D Morphable Model and 3D Reconstruction
CN112131928B (en) Human body posture real-time estimation method based on RGB-D image feature fusion
CN113158459A (en) Human body posture estimation method based on visual and inertial information fusion
CN113077519A (en) Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN117671738B (en) Human body posture recognition system based on artificial intelligence
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
CN111178201A (en) Human body sectional type tracking method based on OpenPose posture detection
CN111222437A (en) Human body posture estimation method based on multi-depth image feature fusion
CN113033501A (en) Human body classification method and device based on joint quaternion
CN111241936A (en) Human body posture estimation method based on depth and color image feature fusion
CN115205737B (en) Motion real-time counting method and system based on transducer model
Li et al. 3D human pose tracking approach based on double Kinect sensors
CN115050095A (en) Human body posture prediction method based on Gaussian process regression and progressive filtering
CN115205750A (en) Motion real-time counting method and system based on deep learning model
Qi et al. 3D human pose tracking approach based on double Kinect sensors
Cordea et al. 3-D head pose recovery for interactive virtual reality avatars
Henning et al. Bodyslam++: Fast and tightly-coupled visual-inertial camera and human motion tracking
Panduranga et al. Dynamic hand gesture recognition system: a short survey
Chen et al. An integrated sensor network method for safety management of construction workers
Ryu et al. Skeleton-based Human Action Recognition Using Spatio-Temporal Geometry (ICCAS 2019)
WO2007110731A1 (en) Image processing unit and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination