CN108919943A - Real-time hand tracking method based on depth sensor - Google Patents
Real-time hand tracking method based on depth sensor
- Publication number
- CN108919943A CN108919943A CN201810526570.2A CN201810526570A CN108919943A CN 108919943 A CN108919943 A CN 108919943A CN 201810526570 A CN201810526570 A CN 201810526570A CN 108919943 A CN108919943 A CN 108919943A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention proposes a real-time hand tracking method based on a depth sensor, which improves the quality of real-time tracking of unconstrained hand motion in any posture. Depth information is read from the sensor and, with the aid of skeleton data, occluded hand-bone information is repaired using forward kinematics. After the optimized skeletal joint information is obtained, the motion information of the tracked palm is smoothed with a Kalman filtering algorithm. Then, using the real-time palm coordinates, a cascaded recursive connected-region analysis algorithm extracts the hand region from the depth image. Finally, a fingertip-acquisition scheme is obtained that takes the minimum three-dimensional geodesic distance as the geometric feature and the finger skeleton points (the pixels farthest from the edge) as fingertip candidates. Real-time hand tracking is thereby achieved and, according to the analysis of the experimental results, tracking accuracy is improved.
Description
Technical field
The present invention relates to a hand-motion tracking method, in particular to a real-time hand tracking method based on a depth sensor, and belongs to the technical field of human-computer interaction.
Background art
With the continuous progress of computer and sensor technology, human-computer interaction (HCI) technology has developed to the stage of human-centered natural user interface (NUI) interaction, which requires machines to communicate and interact with users as naturally as humans do. In daily life, the most common human communication combines seeing, hearing and speaking, and the hands play an essential role in everyday interaction. Accordingly, HCI technology based on hand motion is indispensable in natural user interface interaction and has broad application prospects. However, since each hand has 27 degrees of freedom (DoF) and can rotate freely in space, achieving high-precision real-time hand tracking remains difficult.
With the continuous development of disciplines such as image processing, machine vision and artificial intelligence, and especially after the appearance of depth sensors (such as Microsoft's Kinect), obtaining hand-motion state information from a depth sensor has become feasible. Because an optical sensor computes the parameters of the detected object from the light reflection time and requires no additional wearable device, human-computer interaction based on a depth sensor is a more natural and more convenient interaction mode. Optical sensors are now among the most advanced and popular topics in HCI technology; applications based on depth sensors extend interaction from a two-dimensional surface to free interaction in three-dimensional space and provide a contactless interactive experience, leaving users freer when interacting with machines.
Current depth-sensor-based hand-motion interaction methods can be divided into two classes: vision-based methods and model-based methods. Considering real-time requirements, vision-based tracking methods are more suitable; the challenge such methods face is to detect and track every hand parameter accurately, stably and efficiently during motion.
Summary of the invention
The object of the invention is to address the defects of the prior art by proposing a real-time hand tracking method based on a depth sensor. The method captures hand-motion information in real time and provides real-time, stable and accurate hand features during hand motion, thereby offering an effective data source for scientific research and development based on hand-motion information and reliable interaction data for its applications.
To achieve the above object, the invention provides a real-time hand-motion tracking method based on a depth sensor, comprising the following steps:
Step 1, raw-data optimization: obtain the depth information of the hand in three-dimensional space from the depth sensor, and repair occluded hand-bone information by forward kinematics; based on the optimized skeletal joint information, smooth the palm-motion information with a Kalman filtering algorithm.
Step 2, hand-region extraction: using the real-time three-dimensional spatial information provided by the depth sensor, extract the hand region from the depth image with a connected-region analysis algorithm, and compute the three-dimensional centroid of the hand.
Step 3, real-time acquisition of fingertip three-dimensional positions: take the pixels farthest from the edge as finger skeleton points, use the three-dimensional geodesic distance as the geometric feature, and take the finger skeleton points as fingertip candidates.
Further, the step 1 includes the following steps:
Step 1.1, occlusion recovery: when the hand is occluded or partially occluded, the real-time data fed back by the depth sensor jitters violently and is unreliable, so the joint positions not tracked by the depth sensor must be predicted. The occluded hand joints are repaired by combining the depth data and the skeleton data provided by the depth sensor. Using the length invariance of the skeleton, the elbow, hand, wrist and fingertip are linked into a joint chain; the length between each pair of adjacent joints is computed, and the joint lengths are compared with a standard human skeleton as the basis for repairing occluded joints.
Given the coordinates of two points A(xa, ya, za) and B(xb, yb, zb) in space, the bone length is obtained by the Euclidean metric:
Length(A, B) = sqrt((xa - xb)^2 + (ya - yb)^2 + (za - zb)^2)
The length invariance of the bones and forward kinematics are then used to predict the potential joint positions.
Step 1.2, data smoothing: under the influence of complex background and illumination, in order to obtain more stable tracking and provide a more reliable data source for applications, a Kalman filter is used to predict each repaired joint position of the current frame. The current position is predicted from the previous frames, and the parameters are selected according to the confidence of the joint itself, so that through the choice of parameters the algorithm adapts to the tracking-state confidence.
Further, the step 2 includes the following steps:
Step 2.1, coarse extraction: the maximum of the bone lengths between adjacent joints tracked in step 1 is selected as the search radius for separating the hand; every pixel within the search radius is extracted and passed to the next filtering step. Pixels within the search radius form the input of the subsequent fine extraction; pixels beyond the search radius are treated as background pixels and their depth value is set to 0.
The search radius is defined as the maximum bone length of the tracked joint chain:
R = max{ Length(Ji, Ji+1) }, where Ji and Ji+1 are adjacent joints of the chain.
Step 2.2, precise extraction: a cascaded recursive connected-region analysis algorithm precisely extracts the hand region. By comparing the depth difference between a seed pixel and its surrounding pixels, it is judged whether two adjacent points belong to the same continuous region; the points of the continuous region are examined further, recursing layer by layer, until all pixels in the search region that are continuous in three-dimensional space are obtained.
Step 2.3, palm-center localization: the centroid of the hand region obtained in real time is computed in three-dimensional space, giving the palm-center position in real time.
Further, the step 3 includes the following steps:
Step 3.1, hand skeleton-point extraction: hand skeleton points are extracted from the distance of each hand-region pixel to the edge. To improve the efficiency and the computational accuracy of the algorithm, the distance from a pixel to the edge is computed from two directions simultaneously, and the smaller of the two values is taken as the distance of the point to the boundary.
The distance from a pixel to the edge is calculated as:
Distance(x, y) = min{ Distance1(x, y), Distance2(x, y) }
For any pixel of the hand region, if its Distance value is greater than the Distance values of the surrounding points, its coordinate is regarded as a finger skeleton point.
Step 3.2, real-time fingertip positioning: the geodesic distance is applied to the three-dimensional information. Using the minimum three-dimensional geodesic distance as the feature parameter, the candidate skeleton pixel with the maximum minimum geodesic distance to the palm center is chosen as the fingertip point, and interference from points near previously found fingertips is eliminated when computing the subsequent fingertip points. The geodesic-distance relation between points is given by:
GSP(Target) = min over all Neighbor { GSP(Neighbor) + Distance(Neighbor, Target) }
where Origin is the origin of the geodesic measurement, Target is the point currently being solved, and Neighbor is a point adjacent to Target whose 3D GSP has already been determined. The distance between adjacent pixels is their Euclidean distance in three-dimensional space.
Compared with the prior art, the above technical scheme of the invention has the following technical effects:
1. The data-optimization method used in the invention can eliminate jitter in the raw data provided by the depth sensor and can predict the user's hand-joint positions when the hand is partially occluded and the sensor cannot locate it accurately, improving the accuracy of the tracked joint positions.
2. The cascaded recursive connected-region analysis algorithm used for hand-region extraction in the invention retains more effective information from the raw data and greatly increases the extraction efficiency of the hand region, further improving the real-time performance of the whole system.
3. The fingertip-acquisition scheme proposed by the invention determines the fingertip positions accurately and stably relative to other methods, whether the hand is fully open or partially bent, the wrist is rotating, or the user does not face the depth sensor; choosing hand-skeleton pixels as fingertip candidate pixels also improves the real-time performance of the system.
Brief description of the drawings
The present invention will be further described below with reference to the drawings.
Fig. 1 is the system block diagram of the invention.
Fig. 2 is a schematic diagram of the tracking of hand position information based on the Kalman filtering algorithm in the invention.
Fig. 3 is the connection table for establishing and storing the recursive connected-region relations in the invention.
Fig. 4 is a schematic diagram of the pixel-to-edge distance in the invention.
Fig. 5 is an example of the calculation of the three-dimensional geodesic distance in the invention.
Specific embodiments
This embodiment provides a real-time hand-motion tracking method based on a depth sensor. The detailed process is shown in Fig. 1 and includes the following steps:
Step 1: Raw-data optimization.
Step 1.1: Occlusion recovery. When the hand is occluded or partially occluded, the real-time data fed back by the depth sensor jitters violently and is unreliable, and the joint positions not tracked by the depth sensor must then be predicted. Here the depth data and the skeleton data provided by the depth sensor Kinect are combined to repair the occluded hand joints. Since the skeleton has length invariance, the elbow, hand, wrist and fingertip are linked into a joint chain; the length between each pair of adjacent joints is computed, and the joint lengths are compared with a standard human skeleton as the basis for repairing occluded joints. Given the coordinates of two points A(xa, ya, za) and B(xb, yb, zb) in space, the bone length is obtained by the Euclidean metric:
Length(A, B) = sqrt((xa - xb)^2 + (ya - yb)^2 + (za - zb)^2)
The length invariance of the bones and forward kinematics are then used to predict the potential joint positions.
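The bone-length check and length-invariance repair described above can be sketched as follows. This is a minimal illustration only; the function names, the joint coordinates and the strategy of re-projecting a low-confidence joint along its last observed direction are assumptions chosen for exposition, not taken from the patent.

```python
import math

def bone_length(a, b):
    """Euclidean distance between two 3D joint positions (the patent's Length formula)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def predict_occluded_joint(parent, stale_child, expected_len):
    """Re-project a low-confidence child joint onto the sphere of radius
    expected_len around its parent joint (length invariance): keep the
    last observed direction, restore the known bone length."""
    direction = [c - p for c, p in zip(stale_child, parent)]
    norm = math.sqrt(sum(d * d for d in direction)) or 1.0
    return [p + expected_len * d / norm for p, d in zip(parent, direction)]

# Joint chain elbow -> wrist -> hand -> fingertip, as in the patent (metres).
elbow, wrist = (0.0, 0.0, 0.0), (0.0, 0.0, 0.25)
print(bone_length(elbow, wrist))            # forearm length: 0.25
hand = predict_occluded_joint(wrist, (0.0, 0.1, 0.35), 0.08)
print(bone_length(wrist, hand))             # restored to the expected 0.08
```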
Step 1.2: Data smoothing. Taking Kinect as an example, this kind of depth sensor images by optical reflection, so its stability is easily affected by the environment and by noise; even when the HAND, HANDTIP and THUMB joints are not lost during tracking, they still jitter considerably. In order to obtain more stable tracking and provide a more reliable data source for applications, the invention applies a Kalman filtering algorithm to further process the joint information provided by the depth sensor.
Since the depth sensor outputs a certain number of frames per second (e.g. 30 frames), according to the laws of human kinematics it can be assumed that the positions of the human joints do not change abruptly within a short time; that is, human motion remains continuous during regular movement. The preceding frames can therefore be used to predict the current position. When the parameters are chosen, the confidence of the joint itself is taken into account; through the selection of the parameters the algorithm adapts to the tracking-state confidence. In the tracking process shown in Fig. 2, the current position information xk, yk is read from the Kinect and repaired; through kinematic analysis, the velocity information vxk, vyk and the control information axk, ayk are obtained from the multi-frame data. The control matrix Bk is determined by the kinematic relations; the relation matrix Fk describes the influence of the current variables on the variables at the next moment; Hk describes the relation between the target variables and the variables read from the Kinect; Pk is the covariance matrix of the variables; Rk is the Gaussian deviation of the prediction process; and Kgain is the Kalman gain. They are obtained by the standard Kalman recursion (with Qk denoting the process-noise covariance):
x_k|k-1 = Fk * x_k-1 + Bk * uk
P_k|k-1 = Fk * P_k-1 * Fk^T + Qk
Kgain = P_k|k-1 * Hk^T * (Hk * P_k|k-1 * Hk^T + Rk)^(-1)
x_k = x_k|k-1 + Kgain * (zk - Hk * x_k|k-1)
P_k = (I - Kgain * Hk) * P_k|k-1
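A minimal constant-velocity Kalman recursion over one joint coordinate, matching the roles of Fk, Bk, Hk, Pk, Rk and Kgain described above. The numeric noise covariances and the jittery measurement sequence are illustrative assumptions, not values from the patent.

```python
import numpy as np

dt = 1.0 / 30.0                         # sensor frame interval, e.g. 30 fps
F = np.array([[1.0, dt], [0.0, 1.0]])   # Fk: state transition (position, velocity)
B = np.array([[0.5 * dt * dt], [dt]])   # Bk: control matrix for acceleration input
H = np.array([[1.0, 0.0]])              # Hk: only the position is observed
Q = np.eye(2) * 1e-4                    # Qk: process-noise covariance (assumed)
R = np.array([[1e-2]])                  # Rk: Gaussian measurement deviation (assumed)

x = np.array([[0.0], [0.0]])            # state estimate [position, velocity]
P = np.eye(2)                           # Pk: covariance of the estimate

def kalman_step(x, P, z, a=0.0):
    # Predict: x_k|k-1 = F x + B a ; P_k|k-1 = F P F^T + Q
    x_pred = F @ x + B * a
    P_pred = F @ P @ F.T + Q
    # Update with the measured joint position z, using the Kalman gain Kgain
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.10, 0.11, 0.13, 0.12]:      # jittery position readings (metres)
    x, P = kalman_step(x, P, np.array([[z]]))
print(float(x[0, 0]))                   # smoothed position estimate
```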
Step 2: Hand-region extraction. According to the characteristic that the hand changes in real time during movement while the target area remains relatively concentrated, the invention realizes hand-region extraction by target tracking based on the continuity of the target area.
Step 2.1: Coarse extraction. In this step, the maximum of the bone lengths between adjacent joints computed in step 1 is selected as the search radius for separating the hand. Pixels within the search radius form the input of the fine extraction in the next step; pixels beyond this radius are treated as background pixels and their depth value is set to 0. The search radius is defined as the maximum bone length of the tracked joint chain:
R = max{ Length(Ji, Ji+1) }, where Ji and Ji+1 are adjacent joints of the chain.
Step 2.2: The connected-region analysis is completed on the three-dimensional space in a recursive manner to precisely extract the hand region. As shown in Fig. 3, X denotes the seed pixel; by comparing the depth difference between the seed pixel and its surrounding pixels, it is judged whether two adjacent points belong to the same continuous region. The points of the continuous region are examined further, recursing layer by layer, until all pixels in the search region that are continuous in three-dimensional space are obtained. The algorithm retains the three-dimensional connectivity information of the hand region, which contributes to the palm-center and fingertip localization of the following steps.
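The recursive connected-region analysis of step 2.2 amounts to a seeded flood fill on the depth map, where two adjacent pixels are connected if their depth difference stays below a continuity threshold. A minimal sketch under assumptions of my own (the threshold value, the 4-neighbourhood and the toy depth map are illustrative; an explicit stack replaces the layer-by-layer recursion):

```python
from collections import deque

def extract_hand_region(depth, seed, max_depth_diff=30):
    """Flood fill from a seed pixel, grouping 4-neighbours whose depth
    difference stays below max_depth_diff (the continuity test)."""
    h, w = len(depth), len(depth[0])
    region, stack = set(), deque([seed])
    while stack:
        y, x = stack.pop()
        if (y, x) in region:
            continue
        region.add((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if depth[ny][nx] > 0 and abs(depth[ny][nx] - depth[y][x]) <= max_depth_diff:
                    stack.append((ny, nx))
    return region

depth = [[800, 805, 0],
         [810, 815, 0],
         [0,   400, 0]]    # 400 is a far background blob, 0 = no data (mm)
print(sorted(extract_hand_region(depth, (0, 0))))
# the four ~800 mm pixels form one continuous region
```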
Step 2.3: Palm-center localization. The three-dimensional connectivity information of the hand region retained in the previous step can effectively resolve the deviation between the tracked hand-region information and the actual hand, caused by overlapping pixels when the hand closes, rotates or self-occludes while moving in space, and its influence on the palm-center computation. Therefore, obtaining the palm-center position in real time only requires computing the three-dimensional centroid of the hand region obtained in real time.
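The palm centre then follows directly as the mean of the region's three-dimensional points. A minimal sketch; the coordinate list is an illustrative assumption:

```python
def centroid_3d(points):
    """Three-dimensional centroid (centre of mass with unit weights)
    of the hand-region points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Three hand-region points in metres (illustrative).
hand_points = [(0.10, 0.20, 0.80), (0.12, 0.22, 0.81), (0.11, 0.18, 0.79)]
print(centroid_3d(hand_points))   # approximately (0.11, 0.20, 0.80)
```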
Step 3: Real-time acquisition of the fingertip three-dimensional positions.
Step 3.1: Hand skeleton-point extraction. Since the finger skeleton points lie at the center of the fingers, the distance of every hand-region pixel to the edge is used as the screening condition to extract the skeleton information from the continuous hand region; the edge points of the three-dimensional connected region have little influence on the geodesic-distance calculation towards the palm center. To improve the efficiency and the computational accuracy of the algorithm, the invention computes the distance from each pixel to the edge from two directions simultaneously and takes the smaller of the two values as the distance of the point to the boundary, as shown in Fig. 4. The specific formula is:
Distance(x, y) = min{ Distance1(x, y), Distance2(x, y) }
For any pixel of the hand region, if its Distance value is greater than the Distance values of the surrounding points, its coordinate is regarded as a finger skeleton point.
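The two-direction distance computation of step 3.1 corresponds to the classic two-pass chamfer distance transform: a forward scan from the top-left (Distance1) and a backward scan from the bottom-right (Distance2), keeping the minimum at each pixel. A minimal 4-neighbour sketch on a binary hand mask; the mask data is an illustrative assumption:

```python
def distance_to_edge(mask):
    """Two-pass distance transform: a forward pass from the top-left and a
    backward pass from the bottom-right; the result at each pixel is the
    minimum of the two, as in the patent's Distance formula.
    Background (0) pixels have distance 0."""
    h, w = len(mask), len(mask[0])
    INF = h + w
    d = [[0 if mask[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    for y in range(h):                       # forward pass (top-left)
        for x in range(w):
            if d[y][x]:
                if y > 0: d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0: d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):           # backward pass (bottom-right)
        for x in range(w - 1, -1, -1):
            if d[y][x]:
                if y < h - 1: d[y][x] = min(d[y][x], d[y + 1][x] + 1)
                if x < w - 1: d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
d = distance_to_edge(mask)
print(d[2][2])   # the centre pixel is farthest from the edge: 2
# skeleton points: pixels whose distance exceeds that of their neighbours
```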
Step 3.2: Real-time fingertip positioning. In the traditional implementation that determines the fingertip positions from Euclidean farthest-distance relations, when the fingers bend or close, or the hand does not face the sensor during motion, the computed positions are unstable and deviate considerably from the actual positions. To address this problem, the invention applies the geodesic distance to the three-dimensional information, using the minimum three-dimensional geodesic distance (3-Dimension Geodesic Shortest Path, 3D GSP) as the feature parameter. The candidate skeleton pixel with the maximum minimum geodesic distance to the palm center is chosen as the fingertip, and interference from points near previously found fingertips is eliminated when computing the subsequent fingertip points. Fig. 5 shows the result of computing the minimum three-dimensional geodesic distance from each pixel in the region to the origin; the geodesic relation between points is given by
GSP(Target) = min over all Neighbor { GSP(Neighbor) + Distance(Neighbor, Target) }
where Origin is the origin of the geodesic measurement, Target is the point currently being solved, and Neighbor is a point adjacent to Target whose 3D GSP has already been determined. The distance between adjacent pixels is their Euclidean distance in three-dimensional space.
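The 3D GSP recursion above is the relaxation step of Dijkstra's algorithm run on the pixel graph of the hand region, with edge weights equal to the three-dimensional Euclidean distance between neighbouring points. A minimal sketch; the tiny three-point chain and its coordinates are illustrative assumptions:

```python
import heapq
import math

def geodesic_distances(points, neighbors, origin):
    """Dijkstra over the connected hand region: GSP(Target) is the minimum,
    over already-solved neighbours, of GSP(Neighbor) + Distance(Neighbor,
    Target), exactly the recursion given in the text."""
    dist = {p: math.inf for p in points}
    dist[origin] = 0.0
    heap = [(0.0, origin)]
    while heap:
        d, p = heapq.heappop(heap)
        if d > dist[p]:
            continue
        for q in neighbors[p]:
            step = math.dist(p, q)          # 3D Euclidean edge weight
            if d + step < dist[q]:
                dist[q] = d + step
                heapq.heappush(heap, (d + step, q))
    return dist

# Tiny chain: palm centre -> knuckle -> fingertip (coordinates illustrative).
palm, knuckle, tip = (0, 0, 0), (0, 3, 0), (0, 6, 4)
nbrs = {palm: [knuckle], knuckle: [palm, tip], tip: [knuckle]}
gsp = geodesic_distances([palm, knuckle, tip], nbrs, palm)
print(gsp[tip])   # 3 + 5 = 8.0: the geodesic, not the straight-line (~7.2), distance
```

The fingertip is then the candidate skeleton point maximizing this geodesic distance, which is why a bent finger is still resolved correctly where a plain Euclidean distance would not be.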
In conclusion, the invention tracks hand motion by capturing the hand position, skeleton and fingertips in real time with a depth sensor. To correct the influence of complex background and illumination on the raw data provided by the sensor, the invention adds occlusion-recovery and jitter-elimination steps and applies a Kalman filtering algorithm to track the motion of the user's joints and to predict untracked joints. To meet the accuracy and robustness requirements, a feature-extraction scheme that locates the fingertip positions by the minimum three-dimensional geodesic distance is proposed; to meet the real-time requirement, feature extraction based on hand skeleton points is proposed, improving the running efficiency of the algorithm. Applying the feature extraction to the skeleton points constitutes a feature-extraction scheme that fully meets the requirements of real-time hand-motion analysis. Finally, the accuracy, real-time performance and robustness of the proposed real-time hand-motion tracking method and its results are verified by experiments.
Besides the above embodiment, the present invention may have other embodiments. All technical solutions formed by equivalent substitution or equivalent transformation fall within the scope of protection claimed by the present invention.
Claims (9)
1. A real-time hand tracking method based on a depth sensor, characterized by comprising the following steps:
Step 1, raw-data optimization: obtaining depth information of the hand in three-dimensional space from the depth sensor, and repairing occluded hand-bone information by forward kinematics; based on the optimized skeletal joint information, smoothing the palm-motion information with a Kalman filtering algorithm;
Step 2, hand-region extraction: using the real-time three-dimensional spatial information provided by the depth sensor, extracting the hand region from the depth image with a connected-region analysis algorithm, and computing the three-dimensional centroid of the hand;
Step 3, real-time acquisition of fingertip three-dimensional positions: taking the pixels farthest from the edge as finger skeleton points, using the three-dimensional geodesic distance as the geometric feature, and taking the finger skeleton points as fingertip candidates.
2. The real-time hand tracking method based on a depth sensor according to claim 1, characterized in that the step 1 includes the following steps:
Step 1.1, occlusion recovery: when the hand is occluded or partially occluded, the real-time data fed back by the depth sensor jitters violently and is unreliable, so the joint positions not tracked by the depth sensor must be predicted; the occluded hand joints are repaired by combining the depth data and the skeleton data provided by the depth sensor; using the length invariance of the skeleton, the elbow, hand, wrist and fingertip are linked into a joint chain, the length between each pair of adjacent joints is computed, and the joint lengths are compared with a standard human skeleton as the basis for repairing occluded joints;
Step 1.2, data smoothing: under the influence of complex background and illumination, in order to obtain more stable tracking and provide a more reliable data source for applications, a Kalman filter is used to predict each repaired joint position of the current frame.
3. The real-time hand tracking method based on a depth sensor according to claim 2, characterized in that in the step 1.1, given the coordinates of two points A(xa, ya, za) and B(xb, yb, zb) in space, the bone length is obtained by the Euclidean metric:
Length(A, B) = sqrt((xa - xb)^2 + (ya - yb)^2 + (za - zb)^2)
The length invariance of the bones and forward kinematics are used to predict the potential joint positions.
4. The real-time hand tracking method based on a depth sensor according to claim 2, characterized in that in the step 1.2, the current position is predicted from the previous frame data, and the parameters are selected according to the confidence of the joint itself, so that through the choice of parameters the algorithm adapts to the tracking-state confidence.
5. The real-time hand tracking method based on a depth sensor according to claim 1, characterized in that the step 2 includes the following steps:
Step 2.1, coarse extraction: the maximum of the bone lengths between adjacent joints tracked in step 1 is selected as the search radius for separating the hand; every pixel within the search radius is extracted and passed to the next filtering step;
Step 2.2, precise extraction: a cascaded recursive connected-region analysis algorithm precisely extracts the hand region; by comparing the depth difference between a seed pixel and its surrounding pixels, it is judged whether two adjacent points belong to the same continuous region; the points of the continuous region are examined further, recursing layer by layer, until all pixels in the search region that are continuous in three-dimensional space are obtained;
Step 2.3, palm-center localization: the centroid of the hand region obtained in real time is computed in three-dimensional space, giving the palm-center position in real time.
6. The real-time hand tracking method based on a depth sensor according to claim 5, characterized in that in the step 2.1, pixels within the search radius form the input of the subsequent fine extraction, while pixels beyond the search radius are treated as background pixels and their depth value is set to 0;
The search radius is defined as the maximum bone length of the tracked joint chain: R = max{ Length(Ji, Ji+1) }, where Ji and Ji+1 are adjacent joints of the chain.
7. The real-time hand tracking method based on a depth sensor according to claim 1, characterized in that the step 3 includes the following steps:
Step 3.1, hand skeleton-point extraction: hand skeleton points are extracted from the distance of each hand-region pixel to the edge; to improve the efficiency and the computational accuracy of the algorithm, the distance from a pixel to the edge is computed simultaneously from two directions, the upper left and the lower right, and the smaller of the two values is taken as the distance of the point to the boundary; for any pixel of the hand region, if its distance is greater than the distances of the surrounding points, its coordinate is regarded as the position of a finger skeleton point;
Step 3.2, real-time fingertip positioning: the geodesic distance is applied to the three-dimensional information; using the minimum three-dimensional geodesic distance as the feature parameter, the candidate skeleton pixel with the maximum minimum geodesic distance to the palm center is chosen as the fingertip point, and interference from points near previously found fingertips is eliminated when computing the subsequent fingertip points.
8. The real-time hand tracking method based on a depth sensor according to claim 7, characterized in that in the step 3.1, the distance from a pixel to the edge is calculated as:
Distance(x, y) = min{ Distance1(x, y), Distance2(x, y) }
For any pixel of the hand region, if its Distance value is greater than the Distance values of the surrounding points, its coordinate is regarded as a finger skeleton point.
9. The real-time hand tracking method based on a depth sensor according to claim 7, characterized in that in the step 3.2, the geodesic-distance relation between points is given by:
GSP(Target) = min over all Neighbor { GSP(Neighbor) + Distance(Neighbor, Target) }
where Origin is the origin of the geodesic measurement, Target is the point currently being solved, and Neighbor is a point adjacent to Target whose 3D GSP has already been determined; the distance between adjacent pixels is their Euclidean distance in three-dimensional space.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810526570.2A CN108919943B (en) | 2018-05-22 | 2018-05-22 | Real-time hand tracking method based on depth sensor |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108919943A true CN108919943A (en) | 2018-11-30 |
CN108919943B CN108919943B (en) | 2021-08-03 |
Family
ID=64418209
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810526570.2A Active CN108919943B (en) | 2018-05-22 | 2018-05-22 | Real-time hand tracking method based on depth sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108919943B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070573A (en) * | 2019-04-25 | 2019-07-30 | 北京卡路里信息技术有限公司 | Joint figure determines method, apparatus, equipment and storage medium |
CN110457990A (en) * | 2019-06-19 | 2019-11-15 | 特斯联(北京)科技有限公司 | A kind of the safety monitoring video shelter intelligence complementing method and system of machine learning |
CN111062360A (en) * | 2019-12-27 | 2020-04-24 | 恒信东方文化股份有限公司 | Hand tracking system and tracking method thereof |
CN111696140A (en) * | 2020-05-09 | 2020-09-22 | 青岛小鸟看看科技有限公司 | Monocular-based three-dimensional gesture tracking method |
CN112487877A (en) * | 2020-11-12 | 2021-03-12 | 广东芯盾微电子科技有限公司 | Monitoring method, system, device and medium for standard operation of kitchen waste |
CN112801061A (en) * | 2021-04-07 | 2021-05-14 | 南京百伦斯智能科技有限公司 | Posture recognition method and system |
CN112927290A (en) * | 2021-02-18 | 2021-06-08 | 青岛小鸟看看科技有限公司 | Bare hand data labeling method and system based on sensor |
WO2021129487A1 (en) * | 2019-12-25 | 2021-07-01 | 华为技术有限公司 | Method and apparatus for determining position of limb node of user, medium and system |
CN114494338A (en) * | 2021-12-21 | 2022-05-13 | 特斯联科技集团有限公司 | Hand real-time sensing method based on adaptive positioning and Kalman filtering tracking |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150117708A1 (en) * | 2012-06-25 | 2015-04-30 | Softkinetic Software | Three Dimensional Close Interactions |
US20150324988A1 (en) * | 2014-05-08 | 2015-11-12 | Digitalglobe, Inc. | Automated tonal balancing |
US20160132121A1 (en) * | 2014-11-10 | 2016-05-12 | Fujitsu Limited | Input device and detection method |
CN106055091A (en) * | 2016-05-16 | 2016-10-26 | 电子科技大学 | Hand posture estimation method based on depth information and calibration method |
CN106346485A (en) * | 2016-09-21 | 2017-01-25 | 大连理工大学 | Non-contact control method of bionic manipulator based on learning of hand motion gestures |
CN106650687A (en) * | 2016-12-30 | 2017-05-10 | 山东大学 | Posture correction method based on depth information and skeleton information |
CN106709464A (en) * | 2016-12-29 | 2017-05-24 | 华中师范大学 | Method for collecting and integrating body and hand movements of Tujia brocade technique |
CN107256083A (en) * | 2017-05-18 | 2017-10-17 | 河海大学常州校区 | Multi-finger real-time tracking method based on KINECT
CN107253192A (en) * | 2017-05-24 | 2017-10-17 | 湖北众与和智能装备科技有限公司 | A calibration-free human-computer interaction control system and method based on Kinect
Application Events

2018-05-22: Application CN201810526570.2A filed (CN); granted as patent CN108919943B; current status: Active
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070573A (en) * | 2019-04-25 | 2019-07-30 | 北京卡路里信息技术有限公司 | Joint map determination method, device, equipment and storage medium
CN110070573B (en) * | 2019-04-25 | 2021-07-06 | 北京卡路里信息技术有限公司 | Joint map determination method, device, equipment and storage medium |
CN110457990B (en) * | 2019-06-19 | 2020-06-12 | 特斯联(北京)科技有限公司 | Machine learning-based intelligent filling method and system for occlusions in security monitoring video
CN110457990A (en) * | 2019-06-19 | 2019-11-15 | 特斯联(北京)科技有限公司 | A machine learning-based intelligent filling method and system for occlusions in security monitoring video
WO2021129487A1 (en) * | 2019-12-25 | 2021-07-01 | 华为技术有限公司 | Method and apparatus for determining position of limb node of user, medium and system |
CN113111678A (en) * | 2019-12-25 | 2021-07-13 | 华为技术有限公司 | Method, device, medium and system for determining position of limb node of user |
CN113111678B (en) * | 2019-12-25 | 2024-05-24 | 华为技术有限公司 | Method, device, medium and system for determining position of limb node of user |
CN111062360A (en) * | 2019-12-27 | 2020-04-24 | 恒信东方文化股份有限公司 | Hand tracking system and tracking method thereof |
CN111062360B (en) * | 2019-12-27 | 2023-10-24 | 恒信东方文化股份有限公司 | Hand tracking system and tracking method thereof |
CN111696140A (en) * | 2020-05-09 | 2020-09-22 | 青岛小鸟看看科技有限公司 | Monocular-based three-dimensional gesture tracking method |
CN111696140B (en) * | 2020-05-09 | 2024-02-13 | 青岛小鸟看看科技有限公司 | Monocular-based three-dimensional gesture tracking method |
CN112487877A (en) * | 2020-11-12 | 2021-03-12 | 广东芯盾微电子科技有限公司 | Monitoring method, system, device and medium for standardized kitchen-waste handling operations
CN112927290A (en) * | 2021-02-18 | 2021-06-08 | 青岛小鸟看看科技有限公司 | Bare hand data labeling method and system based on sensor |
CN112801061A (en) * | 2021-04-07 | 2021-05-14 | 南京百伦斯智能科技有限公司 | Posture recognition method and system |
CN114494338A (en) * | 2021-12-21 | 2022-05-13 | 特斯联科技集团有限公司 | Hand real-time sensing method based on adaptive positioning and Kalman filtering tracking |
Also Published As
Publication number | Publication date |
---|---|
CN108919943B (en) | 2021-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108919943A (en) | A real-time hand tracking method based on a depth sensor | |
CN103941866B (en) | Three-dimensional gesture recognition method based on Kinect depth images | |
KR101257207B1 (en) | Method, apparatus and computer-readable recording medium for head tracking | |
CN105759967B (en) | A global hand pose detection method based on depth data | |
CN107357427A (en) | A gesture recognition control method for virtual reality devices | |
CN104978012B (en) | A pointing interaction method, apparatus and system | |
CN109949375A (en) | A mobile robot target tracking method based on depth-map regions of interest | |
JP2015222591A (en) | Human-computer interaction system, method for point positioning hand and hand instruction, and finger gesture determination method | |
EP1872334A2 (en) | Method and system for the detection and the classification of events during motion actions | |
JP2012518236A (en) | Method and system for gesture recognition | |
CN112464847B (en) | Human body action segmentation method and device in video | |
CN110794956A (en) | Gesture tracking and accurate fingertip positioning system based on Kinect | |
CN111178170B (en) | Gesture recognition method and electronic equipment | |
CN108830170B (en) | End-to-end target tracking method based on layered feature representation | |
CN109800676A (en) | Gesture identification method and system based on depth information | |
CN108460790A (en) | A visual tracking method based on a consistency predictor model | |
Chang et al. | The model-based human body motion analysis system | |
Wan et al. | Chalearn looking at people: Isogd and congd large-scale rgb-d gesture recognition | |
Deng et al. | Hand pose understanding with large-scale photo-realistic rendering dataset | |
Li et al. | Visual slam in dynamic scenes based on object tracking and static points detection | |
Li et al. | Video-based table tennis tracking and trajectory prediction using convolutional neural networks | |
JP2003256850A (en) | Movement recognizing device and image processor and its program | |
Do et al. | Particle filter-based fingertip tracking with circular hough transform features | |
CN108108648A (en) | A novel gesture recognition system, apparatus and method | |
Fang et al. | Single RGB-D fitting: Total human modeling with an RGB-D shot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |