CN113205605A - Method for acquiring hand three-dimensional parametric model from depth image - Google Patents
- Publication number
- CN113205605A (application CN202110595988.0A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- hand
- point cloud
- model
- depth image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a method for acquiring a hand three-dimensional parametric model from a depth image, comprising the following steps: acquiring a depth image sequence and the depth camera intrinsics; reconstructing a rough hand three-dimensional point cloud from the hand depth image sequence and the corresponding camera intrinsics; manually removing non-hand points and noise points from the rough point cloud to obtain a fine hand three-dimensional point cloud; and obtaining a user-personalized hand three-dimensional parametric model by three stages of iterative optimization over the fine hand point cloud. With this method, a user-personalized hand three-dimensional parametric model can be obtained from a depth image sequence alone. Compared with traditional model-free methods or a generic hand model, the personalized model provides more user-specific prior information, and therefore yields higher accuracy and adaptability in hand pose estimation, with application prospects in specific scenarios such as human-computer interaction, rehabilitation and medical treatment.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method for acquiring a three-dimensional hand parametric model from a depth image.
Background
Traditional model-free hand pose estimation based on computer vision often suffers from occlusion (including self-occlusion), low resolution, noise and similar problems, all of which strongly affect the final pose estimate. Compared with traditional model-free methods, hand pose estimation based on a parametric hand model supplies strong prior knowledge to the estimation task, so that it can perform well even under occlusion and at low resolution. The most widely used parametric hand model at present is the MANO model (hand Model with Articulated and Non-rigid defOrmations), published by Javier Romero et al. in 2017. Research on hand pose estimation at home and abroad since 2018 shows that the MANO parametric model plays a vital role in obtaining reasonable and accurate hand pose estimates.
The MANO model uses a set of parameters (β, θ) to define a triangulated hand three-dimensional mesh M(β, θ), where β is the hand shape parameter and θ is the hand pose parameter representing the axis-angle rotations of 16 hand joints. Specifically, the MANO model is first defined by an average template T̄ containing 778 vertices. The shape blend function B_S(β) takes β as input and outputs the blend shapes describing the hand model; the pose blend function B_P(θ) takes θ as input and outputs the model deformation caused by the hand pose. The outputs B_S(β) and B_P(θ) of the shape and pose blend functions are added to the average template T̄ to obtain the final hand three-dimensional mesh: the blend skinning function W(·) is applied to the corrected template, rotating the vertices of each finger part around each joint with the per-vertex blend weights 𝒲 and the rotation computed for that joint, i.e. M(β, θ) = W(T̄ + B_S(β) + B_P(θ), J(β), θ, 𝒲).
The shape blend output B_S of a particular model is a linear combination of principal component vectors S_n extracted by principal component analysis from the shape space of a set of flat-pose hand scans. The principal component vectors are multiplied by linear coefficients and summed to obtain the personalized hand shape blend B_S = Σ_n β_n S_n; the corresponding linear coefficients β_n form the shape parameter β.
Disclosure of Invention
The object of the present invention is to provide a method for acquiring a hand three-dimensional parametric model from a depth image, addressing the need to obtain personalized hand parametric models.
The purpose of the invention is realized by the following technical scheme: a method of obtaining a three-dimensional parametric model of a hand from a depth image, the method comprising the steps of:
(1) acquiring a depth image sequence and depth camera internal parameters, comprising the following sub-steps:
(1.1) shooting the two hands of a user, which are horizontally placed on a flat desktop, by using a structured light depth camera to obtain a hand depth image sequence;
(1.2) reading camera parameters of the depth camera, including focal length and center point offset;
(2) reconstructing a rough three-dimensional point cloud of the hand by the hand depth image sequence obtained in the step (1) and the corresponding depth camera internal reference, and comprising the following substeps:
(2.1) obtaining a three-dimensional point cloud according to the depth image of the first frame and the internal parameters of the depth camera, constructing a three-dimensional grid model by taking the camera coordinate system of the first frame as a world coordinate system, and executing the step (2.2) for the subsequent depth image sequence;
(2.2) obtaining a three-dimensional point cloud according to the single-frame depth image and the camera internal parameters and calculating a normal vector of each point in the point cloud;
(2.3) registering the three-dimensional point cloud of the current frame against the point cloud projected from the three-dimensional mesh model by ray casting under the camera pose of the previous frame, and computing the camera pose of the current frame;
(2.4) fusing the point cloud of the current frame with the three-dimensional grid model according to the calculated camera pose, and updating the three-dimensional grid model;
(2.5) projecting from the updated three-dimensional mesh model by ray casting under the camera pose of the current frame to obtain the point cloud at the current view, and computing the normal vector of each point in that point cloud for registering the next input depth image; repeating steps (2.2)-(2.5) until the whole depth image sequence has been processed;
(2.6) converting the three-dimensional mesh model into a point cloud to obtain a rough hand three-dimensional point cloud (the hand three-dimensional point cloud usually contains the surrounding environment and noise);
(3) obtaining rough hand three-dimensional point cloud in the step (2), manually removing non-hand point cloud and noise point cloud to obtain fine hand three-dimensional point cloud, and comprising the following substeps:
(3.1) eliminating point clouds which are not connected with the hand three-dimensional point cloud;
(3.2) selecting three non-collinear points belonging to the desktop plane in the three-dimensional point cloud processed in the step (3.1), and calculating the planar representation of the desktop according to the three points;
(3.3) according to the plane representation of the desktop, removing the points lying in the half-space on the far side of the plane from the hand point cloud;
(3.4) eliminating point clouds which are not connected with the hand three-dimensional point cloud in the three-dimensional point cloud processed in the step (3.3) to obtain fine hand three-dimensional point cloud;
(4) the method for acquiring the personalized hand three-dimensional parameterized model of the user through the fine hand three-dimensional point cloud comprises the following substeps:
(4.1) based on the fine hand three-dimensional point cloud and the fingertip two-dimensional position projected to the image plane, optimally solving the global rotation and translation parameters of the hand three-dimensional parameterized model;
(4.2) fixing the global rotation and translation parameters obtained in the step (4.1), and optimizing and solving the shape and posture parameters of the hand three-dimensional parametric model based on the fine hand three-dimensional point cloud and the fingertip position projected to the image plane;
and (4.3) actively removing part of the finger points in order to balance the influence of the dense finger point cloud, further optimizing the shape parameters of the hand three-dimensional parametric model on the processed hand point cloud, and finally obtaining the user-personalized hand three-dimensional parametric model.
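Step (2.2) above back-projects each depth pixel through the camera intrinsics read in step (1.2) (focal length and principal point offset). A minimal numpy sketch of that pinhole back-projection, with all names chosen for illustration rather than taken from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a camera-frame 3-D point cloud
    using the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z."""
    v, u = np.nonzero(depth > 0)          # pixel coordinates with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # (N, 3) array of camera-frame points

# toy 2x2 depth image, unit focal length, principal point at the origin
depth = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
pts = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

The per-point normal vectors of step (2.2) would then be estimated from neighbouring points, e.g. by cross products of finite differences over the depth grid.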
Further, the three-dimensional mesh model is constructed and updated by a three-dimensional reconstruction algorithm named TSDF (Truncated Signed Distance Function).
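A TSDF volume stores, per voxel, a truncated signed distance to the surface and a fusion weight, and folds each new depth observation in by a running weighted average. The toy per-voxel update below illustrates that rule only; it is a sketch, not the patent's implementation, and all names are invented:

```python
import numpy as np

def tsdf_update(tsdf, weight, sdf_obs, trunc=0.05):
    """Fuse one observation into a TSDF volume: truncate the observed signed
    distance to [-trunc, trunc] (normalized to [-1, 1]), then take a running
    weighted average with the values already stored in the volume."""
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)     # normalized truncated distance
    new_w = weight + 1.0
    new_tsdf = (tsdf * weight + d) / new_w
    return new_tsdf, new_w

tsdf = np.zeros(3)
w = np.zeros(3)
# two observations of the same three voxels (signed distances in metres)
for obs in (np.array([0.10, 0.01, -0.10]), np.array([0.10, 0.03, -0.10])):
    tsdf, w = tsdf_update(tsdf, w, obs)
```

The reconstructed surface is the zero crossing of the fused field, which is what the ray-casting step of (2.5) extracts as a point cloud.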
Further, the three-dimensional point clouds are registered by a point cloud matching algorithm named ICP (Iterative Closest Point), and the camera pose is computed from the registered point clouds.
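One point-to-point ICP iteration pairs each source point with its nearest destination point and solves the best rigid transform in closed form via SVD (the Kabsch solution). The sketch below shows a single such iteration under illustrative assumptions (brute-force matching, a synthetic grid cloud); a real pipeline iterates it and typically uses a point-to-plane variant with the normals of step (2.2):

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: nearest-neighbour matching followed
    by the closed-form rigid transform (Kabsch via SVD)."""
    # brute-force nearest-neighbour correspondences (fine for a sketch)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # centre both sets, SVD of the cross-covariance, recover R and t
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t

# synthetic destination cloud: a 3x3x3 grid; source is the same cloud shifted
grid = np.array(np.meshgrid([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))
dst = grid.reshape(3, -1).T
t_true = np.array([0.1, -0.2, 0.15])
src = dst - t_true
R, t = icp_step(src, dst)
```

With a small pure translation the single step recovers R = I and t equal to the applied shift, since every nearest-neighbour match is correct.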
Further, in step (3), the selection and removal of three-dimensional points are performed in three-dimensional mesh visualization software with interactive display; MeshLab or OpenSCAD may be used.
Further, in steps (4.1) and (4.2), the hand three-dimensional point cloud is projected into an image using fixed camera intrinsics, and the two-dimensional fingertip positions are obtained by a fingertip detection algorithm, which may be implemented with the contour detection functions of the OpenCV toolkit.
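The patent leaves the fingertip detector open, suggesting OpenCV contour detection. As a self-contained stand-in, the sketch below finds fingertip candidates on a binary hand mask (fingers assumed pointing up) by a simple column-height heuristic; this is illustrative only and not the OpenCV contour/convex-hull method:

```python
import numpy as np

def fingertip_candidates(mask):
    """Toy fingertip detector: for each column of a binary hand mask, take the
    topmost foreground pixel, and keep columns that are local height maxima."""
    h = mask.shape[0]
    # first foreground row per column; h marks empty columns
    top = np.where(mask.any(axis=0), mask.argmax(axis=0), h)
    tips = []
    for u in range(len(top)):
        if top[u] == h:
            continue
        left = top[u - 1] if u > 0 else h
        right = top[u + 1] if u + 1 < len(top) else h
        if top[u] < left and top[u] <= right:   # locally highest column
            tips.append((int(top[u]), u))       # (row, column) of the tip
    return tips

# 5x7 toy mask: two "fingers" rising from a palm
mask = np.array([
    [0, 1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1],
])
tips = fingertip_candidates(mask)
```

A contour-based detector would instead trace the hand outline and take convex-hull vertices with sharp curvature as fingertips; the output format (2-D tip positions) is the same.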
Further, in step (4), the objective function used to solve for the optimal parameters comprises three terms: a point cloud matching error measuring the overall agreement between the mesh patches q of the hand three-dimensional parametric model and the points p of the three-dimensional point cloud; a fingertip projection error measuring the distance between the two-dimensional fingertip projections ft_{q,j} of the model and the two-dimensional fingertip projections ft_{p,j} of the point cloud; and a prior error measuring the distance between the resulting shape parameter β of the model and the shape parameter of the average hand. The point cloud matching error, fingertip projection error and prior error are computed respectively as:
E_pointcloud = Σ_p min_q dist(p, q) + Σ_q min_p dist(q, p)
E_fingertip = Σ_{j=1}^{5} ||ft_{q,j} − ft_{p,j}||_2
E_prior = ||β||_2
where j in the fingertip projection error indexes the 5 fingertips.
Further, in step (4.3), the shape parameter of the hand three-dimensional parametric model corresponds to principal component vectors S_n extracted by principal component analysis from the shape space of a set of flat-pose hand scans; the personalized hand shape can be regarded as a linear combination of the principal component vectors S_n, and the corresponding linear coefficients are the shape parameter β.
Further, in the step (4), the optimization algorithm for solving is Adam gradient descent algorithm.
Further, in step (4.3), the rule for removing finger points is to eliminate points whose two-dimensional projection lies more than 80 pixels away from the projected two-dimensional position of the root joint of the hand three-dimensional parametric model.
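The 80-pixel rule above is a radius test in the image plane around the projected root joint. A minimal sketch with made-up coordinates:

```python
import numpy as np

def prune_finger_points(pts2d, root2d, radius=80.0):
    """Keep only points whose 2-D projection lies within `radius` pixels of the
    projected root joint; points farther away (the dense finger region in the
    patent's rule) are removed."""
    keep = np.linalg.norm(pts2d - root2d, axis=1) <= radius
    return pts2d[keep]

root = np.array([100.0, 100.0])
pts = np.array([[100.0, 150.0],   # 50 px from root  -> kept
                [100.0, 190.0],   # 90 px from root  -> removed
                [160.0, 160.0]])  # ~84.9 px         -> removed
kept = prune_finger_points(pts, root)
```

In the full pipeline the same mask would also be applied to the corresponding 3-D points before the stage-three optimization.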
Further, the user-personalized hand three-dimensional parametric model obtained in step (4) provides more user-specific prior information than traditional model-free methods or a generic hand model, and therefore yields higher accuracy and adaptability in hand pose estimation, with application prospects in specific scenarios such as human-computer interaction and rehabilitation.
The beneficial effects of the invention are as follows: the invention provides a method for acquiring a user-personalized hand three-dimensional parametric model from a depth image sequence. In the method, a three-dimensional point cloud of the user's hand is obtained with a structured-light depth camera and fitted to a standard parametric hand model, yielding a user-personalized hand three-dimensional parametric model. Compared with traditional model-free methods or a generic hand model, the personalized model provides more user-specific prior information, and therefore yields higher accuracy and adaptability in hand pose estimation, with application prospects in specific scenarios such as human-computer interaction, rehabilitation and medical treatment.
Drawings
Fig. 1 is a flowchart of a method for obtaining a three-dimensional hand parametric model from a depth image according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, a method for acquiring a three-dimensional hand parametric model from a depth image according to an embodiment of the present invention includes the following specific steps:
In step (1), a depth image sequence and the depth camera intrinsics are acquired from a depth camera. A depth camera based on the structured-light principle is used; an Intel RealSense D435i may be adopted. The hand depth image sequence is captured by moving the camera around the hand at a distance of about 0.6 m, and the camera intrinsics, including the focal length and the principal point offset, are read out. During capture the hand must remain within the camera's field of view at all times.
In step (2), a rough hand three-dimensional reconstruction point cloud is obtained from the depth image sequence and camera intrinsics acquired in step (1). A three-dimensional point cloud is computed from the first-frame depth image and the camera intrinsics, and a three-dimensional mesh model is constructed with the first frame's camera coordinate system as the world coordinate system; the mesh model is built with a three-dimensional reconstruction algorithm named TSDF (Truncated Signed Distance Function). For each subsequent depth image, a three-dimensional point cloud in the camera coordinate system is computed from that frame and the camera intrinsics, together with the normal vector of each point; the current frame's point cloud is then registered against the point cloud projected from the three-dimensional mesh model by ray casting under the previous frame's camera pose, using a point cloud matching algorithm named ICP (Iterative Closest Point).
After registration, the camera pose of the current frame is computed from that of the previous frame. The current frame's point cloud is fused into the three-dimensional mesh model with the TSDF algorithm according to the computed camera pose, updating the mesh model; meanwhile, the point cloud at the current view is projected from the updated mesh model by ray casting under the current camera pose, and the normal vector of each point is computed for registering the next input depth image. These steps are repeated until the whole depth image sequence has been processed, yielding a rough hand three-dimensional point cloud. The rough hand point cloud usually still contains surrounding-environment points and noise.
In step (3), the rough hand three-dimensional point cloud obtained in step (2) is processed manually. The manual selection and removal of points are performed in three-dimensional mesh visualization software that can display point cloud files interactively; MeshLab or OpenSCAD may be used. First, the points not connected to the hand point cloud are selected and removed, leaving a point cloud that contains the hand and the desktop. Then three non-collinear points on the desktop plane are selected, the plane representation of the desktop is computed from them, and the points in the half-space on the far side of the plane from the hand are removed, leaving the hand point cloud together with a few fragments not connected to the hand. Finally, the points not connected to the hand point cloud are removed once more, yielding the fine hand three-dimensional point cloud.
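The desktop plane in step (3) is fully determined by the three selected non-collinear points; removing the far half-space is then a sign test. A small numpy sketch of both operations (the software does this interactively; names here are illustrative):

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Plane through three non-collinear points, returned as a unit normal n
    and offset d such that n . x + d = 0 on the plane."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, -n.dot(p1)

def keep_hand_side(points, n, d, hand_point):
    """Discard points lying in the half-space on the far side of the desktop
    plane from the hand (a reference point known to be on the hand side)."""
    side = np.sign(n.dot(hand_point) + d)       # which half-space holds the hand
    return points[(points @ n + d) * side >= 0]

# three points spanning the z = 0 "desktop" plane
p1, p2, p3 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
n, d = plane_from_points(p1, p2, p3)
pts = np.array([[0.0, 0.0, 0.5],    # above the desk -> kept
                [0.0, 0.0, -0.5],   # below the desk -> removed
                [1.0, 1.0, 0.2]])   # above the desk -> kept
kept = keep_hand_side(pts, n, d, hand_point=np.array([0.0, 0.0, 1.0]))
```

Points exactly on the plane (the desktop itself) survive this test and are cleaned up by the final connectivity pass described above.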
(4) The method for acquiring the personalized hand three-dimensional parameterized model of the user through the fine hand three-dimensional point cloud comprises the following substeps:
(4.1) based on the fine hand three-dimensional point cloud and the fingertip two-dimensional position projected to the image plane, optimally solving the global rotation and translation parameters of the hand three-dimensional parameterized model;
(4.2) fixing the global rotation and translation parameters obtained in the step (4.1), and optimizing and solving the shape and posture parameters of the hand three-dimensional parametric model based on the fine hand three-dimensional point cloud and the fingertip position projected to the image plane;
and (4.3) removing a part of point cloud of the finger part, further optimizing and solving the shape parameter of the hand three-dimensional parameterized model based on the processed hand three-dimensional point cloud, and finally obtaining the user-personalized hand three-dimensional parameterized model.
In step (4), the user-personalized hand three-dimensional parametric model is obtained from the fine hand three-dimensional point cloud. This step comprises three stages, each of which iteratively optimizes the following objective function:
E(θ, β, R, T) = w·E_pointcloud + α·E_fingertip + γ·E_prior
where the point cloud matching error E_pointcloud is computed as follows: for each point p of the hand three-dimensional point cloud, find the nearest patch q on the mesh of the hand three-dimensional parametric model and compute their distance; conversely, for each patch q on the mesh, find the nearest point p in the point cloud and compute their distance.
The point cloud matching error measures the overall agreement between the mesh patches of the hand three-dimensional parametric model and the three-dimensional point cloud; minimizing it drives the point cloud and the model as close together as possible. The loss function E_pointcloud is:
E_pointcloud = Σ_p min_q dist(p, q) + Σ_q min_p dist(q, p)
The fingertip joint projection error E_fingertip is computed as follows: for each fingertip in the five-fingertip set j, compute the two-dimensional distance between the fingertip projection ft_{q,j} of the hand three-dimensional parametric model and the fingertip projection ft_{p,j} of the point cloud. The loss function E_fingertip is:
E_fingertip = Σ_{j=1}^{5} ||ft_{q,j} − ft_{p,j}||_2
The prior error measures the Euclidean distance between the resulting shape parameter β of the hand three-dimensional parametric model and the shape parameter of the "average hand", where the "average hand" is defined as the special case in which every element of the shape parameter β is 0:
E_prior = ||β||_2
The shape parameter of the hand three-dimensional parametric model corresponds to principal component vectors S_n extracted by principal component analysis from the shape space of a set of flat-pose hand scans. The shape blend output B_S of a particular model is a linear combination of these principal components, whose linear coefficients β_n form the shape parameter β:
B_S = Σ_n β_n S_n
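The linear combination B_S = Σ_n β_n S_n is a single matrix-vector product once the principal components are stacked. A toy sketch (the shape basis here is invented for illustration; MANO's real basis has 778 vertices and is learned from scans):

```python
import numpy as np

# Illustrative shape space: 3 principal component vectors S_n, each a
# displacement over V vertices flattened to length 3*V (here V = 2).
V = 2
S = np.array([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])
T_bar = np.zeros(3 * V)                # average template, flattened vertices

def shaped_template(beta, S, T_bar):
    """B_S = sum_n beta_n * S_n; the personalized rest shape is T_bar + B_S."""
    B_S = beta @ S                      # linear combination of the PCA basis
    return T_bar + B_S

beta = np.array([0.5, -1.0, 2.0])      # shape parameter = linear coefficients
shape = shaped_template(beta, S, T_bar)
```

Because the basis comes from PCA, a zero β reproduces the average template exactly, which is why the prior error above penalizes ||β||.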
in the first stage, the global rotation and translation parameters of the hand three-dimensional parameterized model are solved through iterative optimization based on the fine hand three-dimensional point cloud and the fingertip two-dimensional position projected to the image plane. The fingertip detection algorithm can be performed by a contour detection method of an OpenCV tool. In this stage, the hand gesture parameters are defaulted to standard five-finger stretching actions, and the gesture parameters are designated. In the stage, the weight w of the point cloud matching error is set to 300, the weight alpha of the fingertip matching error is set to 1, the prior error gamma of the hand shape parameter is set to 2, point cloud registration is performed by using a point cloud matching algorithm named ICP (iterative close point), and iterative optimization of the objective function is performed by using a gradient descent optimization algorithm named Adam. In the stage, the rotation and translation between the hand three-dimensional parameterized model and the world coordinate system where the three-dimensional point cloud is located are estimated as the initial parameters of R and T.
In the second stage, the shape and pose parameters of the hand three-dimensional parametric model are solved by iterative optimization, based on the fine hand three-dimensional point cloud and the two-dimensional fingertip positions projected onto the image plane. Fingertip detection is the same as in the first stage. In this stage the hand pose parameters are no longer assumed; instead, the initial values of R and T obtained in the previous stage are held fixed, so that global displacement and rotation of the model do not disturb the point cloud fitting. The point cloud matching error weight w is set to 5000, the fingertip matching error weight α to 1 and the shape prior weight γ to 2; the point cloud registration and iterative optimization algorithms are the same as in the first stage. This stage estimates initial values of the shape and pose parameters of the hand three-dimensional parametric model.
In the third stage, to balance the influence of the dense finger point cloud on model fitting, part of the finger points are first actively removed: points whose two-dimensional projection lies more than 80 pixels away from the projected two-dimensional position of the root joint of the hand three-dimensional parametric model are eliminated. The shape parameters of the model are then solved by iterative optimization on the processed hand three-dimensional point cloud, finally yielding the user-personalized hand three-dimensional parametric model. The point cloud matching error weight w is set to 6000, the fingertip matching error weight α to 1 and the shape prior weight γ to 2; the point cloud registration and iterative optimization algorithms are the same as in the first stage. This stage estimates the shape parameters of the hand three-dimensional parametric model and yields the user-personalized model.
The above description is only a preferred embodiment, and the present invention is not limited to it; technical solutions that achieve the same technical effects by equivalent means all fall within the protection scope of the present invention. Within that scope, various modifications and variations of the technical solution and/or its embodiments are possible.
Claims (10)
1. A method for acquiring a three-dimensional hand parametric model from a depth image is characterized by comprising the following steps:
(1) acquiring a depth image sequence and depth camera internal parameters, comprising the following sub-steps:
(1.1) shooting the two hands of a user, which are horizontally placed on a flat desktop, by using a structured light depth camera to obtain a hand depth image sequence;
(1.2) reading camera parameters of the depth camera, including focal length and center point offset;
(2) reconstructing a rough three-dimensional point cloud of the hand by the hand depth image sequence obtained in the step (1) and the corresponding depth camera internal reference, and comprising the following substeps:
(2.1) obtaining a three-dimensional point cloud according to the depth image of the first frame and the internal parameters of the depth camera, constructing a three-dimensional grid model by taking the camera coordinate system of the first frame as a world coordinate system, and executing the step (2.2) for the subsequent depth image sequence;
(2.2) obtaining a three-dimensional point cloud according to the single-frame depth image and the camera internal parameters and calculating a normal vector of each point in the point cloud;
(2.3) registering the three-dimensional point cloud of the current frame against the point cloud projected from the three-dimensional mesh model by ray casting under the camera pose of the previous frame, and computing the camera pose of the current frame;
(2.4) fusing the point cloud of the current frame with the three-dimensional grid model according to the calculated camera pose, and updating the three-dimensional grid model;
(2.5) projecting from the updated three-dimensional mesh model by ray casting under the camera pose of the current frame to obtain the point cloud at the current view, and computing the normal vector of each point in that point cloud for registering the next input depth image; repeating steps (2.2)-(2.5) until the whole depth image sequence has been processed;
(2.6) converting the three-dimensional grid model into point cloud to obtain rough hand three-dimensional point cloud;
(3) obtaining rough hand three-dimensional point cloud in the step (2), manually removing non-hand point cloud and noise point cloud to obtain fine hand three-dimensional point cloud, and comprising the following substeps:
(3.1) eliminating point clouds which are not connected with the hand three-dimensional point cloud;
(3.2) selecting three non-collinear points belonging to the desktop plane in the three-dimensional point cloud processed in the step (3.1), and calculating the planar representation of the desktop according to the three points;
(3.3) according to the plane representation of the desktop, removing the points lying in the half-space on the far side of the plane from the hand point cloud;
(3.4) eliminating point clouds which are not connected with the hand three-dimensional point cloud in the three-dimensional point cloud processed in the step (3.3) to obtain fine hand three-dimensional point cloud;
(4) the method for acquiring the personalized hand three-dimensional parameterized model of the user through the fine hand three-dimensional point cloud comprises the following substeps:
(4.1) based on the fine hand three-dimensional point cloud and the fingertip two-dimensional position projected to the image plane, optimally solving the global rotation and translation parameters of the hand three-dimensional parameterized model;
(4.2) fixing the global rotation and translation parameters obtained in the step (4.1), and optimizing and solving the shape and posture parameters of the hand three-dimensional parametric model based on the fine hand three-dimensional point cloud and the fingertip position projected to the image plane;
and (4.3) removing a part of point cloud of the finger part, further optimizing and solving the shape parameter of the hand three-dimensional parameterized model based on the processed hand three-dimensional point cloud, and finally obtaining the user-personalized hand three-dimensional parameterized model.
2. The method of claim 1, wherein the three-dimensional mesh model is constructed and updated by a three-dimensional reconstruction algorithm named TSDF.
3. The method for obtaining the three-dimensional parameterized hand model from the depth image as claimed in claim 1, characterized in that the three-dimensional point clouds are registered by a point cloud matching algorithm named ICP, and the camera pose is calculated according to the registered point clouds.
4. The method for obtaining a three-dimensional hand parametric model from a depth image as claimed in claim 1, wherein in the step (3), the selection and rejection of the three-dimensional point cloud are processed by using three-dimensional mesh visualization software capable of visual representation, such as Meshlab and OpenSCAD.
5. The method according to claim 1, wherein in steps (4.1) and (4.2), the hand three-dimensional point cloud is projected into the image under fixed camera parameters, and the two-dimensional fingertip positions are obtained by a fingertip detection algorithm, for example the contour detection method provided by the OpenCV toolkit.
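A much-simplified stand-in for the projection and fingertip detection of claim 5: project the cloud under fixed, hypothetical intrinsics and take the projected points farthest from the 2D centroid as fingertip candidates. The patent suggests OpenCV contour detection instead; this NumPy-only heuristic is merely illustrative:

```python
import numpy as np

def project_points(points, fx, fy, cx, cy):
    """Project the hand point cloud into the image plane under fixed
    pinhole camera intrinsics, as in steps (4.1)/(4.2)."""
    u = fx * points[:, 0] / points[:, 2] + cx
    v = fy * points[:, 1] / points[:, 2] + cy
    return np.stack([u, v], axis=1)

def fingertip_candidates(pts2d, k=5):
    """Simplified fingertip detector: the k projected points farthest from
    the 2D centroid. A contour-based detector (e.g. OpenCV findContours
    plus convexity analysis) would be used in practice."""
    c = pts2d.mean(axis=0)
    d = np.linalg.norm(pts2d - c, axis=1)
    return pts2d[np.argsort(d)[-k:]]
```

The heuristic works because extended fingertips are the extremal points of the hand silhouette; a contour-based method additionally rejects extremal points on the wrist side.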
6. The method for obtaining a hand three-dimensional parametric model from a depth image as claimed in claim 1, wherein in step (4) the objective function optimized to obtain the optimal parameters comprises three parts: a point cloud matching error, which measures the overall degree of matching between the three-dimensional mesh patches q of the hand three-dimensional parametric model and the points p in the three-dimensional point cloud; a fingertip projection error, which measures the distance between the two-dimensional fingertip projections ft_{q,j} of the hand three-dimensional parametric model and the two-dimensional fingertip projections ft_{p,j} of the point cloud; and a prior error, which measures the distance between the shape parameters of the final hand three-dimensional parametric model and the shape parameters of the average hand; the calculation formulas of the point cloud matching error, the fingertip projection error and the prior error are respectively as follows:
wherein j in the fingertip projection error formula indexes the 5 fingertips.
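The three formulas referenced above appear only as images in the original publication. A plausible reconstruction consistent with the textual description, in which the squared Euclidean norms, the nearest-patch matching, and the mean-shape notation $\bar{\beta}$ are assumptions, is:

```latex
E_{\mathrm{match}} = \sum_{p} \min_{q}\, \lVert p - q \rVert^{2}, \qquad
E_{\mathrm{tip}} = \sum_{j=1}^{5} \lVert ft_{q,j} - ft_{p,j} \rVert^{2}, \qquad
E_{\mathrm{prior}} = \lVert \beta - \bar{\beta} \rVert^{2}
```

Here $p$ ranges over points in the fine hand point cloud, $q$ over mesh patches of the parametric model, and $\beta$ denotes the model's shape parameter vector.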
7. The method for acquiring a hand three-dimensional parameterized model from a depth image as claimed in claim 1, characterized in that the shape parameters of the hand three-dimensional parameterized model in step (4.3) are obtained by performing principal component analysis on the shape space of a group of flat-pose hand shapes to extract the principal component vectors S_n; a customized hand shape can then be regarded as a linear combination of the principal component vectors S_n, and the corresponding linear coefficients are the shape parameters.
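The principal component construction of claim 7 can be sketched via SVD of the centred shape matrix; the number of components and the flattened-vector representation of each scanned hand are assumptions:

```python
import numpy as np

def build_shape_space(hand_shapes, n_components=10):
    """PCA over a set of flat-pose hand shapes: each row of `hand_shapes`
    is one scanned hand flattened to a vector. Returns the mean shape and
    the principal component vectors S_n."""
    mean = hand_shapes.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(hand_shapes - mean, full_matrices=False)
    return mean, Vt[:n_components]

def shape_from_params(mean, components, beta):
    """A customised hand shape as a linear combination of the S_n;
    the coefficients `beta` are the shape parameters."""
    return mean + beta @ components

def params_from_shape(mean, components, shape):
    """Project a shape onto the orthonormal component basis to recover
    its shape parameters."""
    return components @ (shape - mean)
```

Because the SVD basis is orthonormal, projecting a shape and re-synthesising it round-trips exactly for any shape lying in the span of the retained components, which is what makes the low-dimensional coefficients a faithful shape parameterization.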
8. The method for obtaining a hand three-dimensional parametric model from a depth image as claimed in claim 1, wherein in step (4) the optimization algorithm used for solving is the Adam gradient descent algorithm.
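A minimal sketch of the Adam optimiser named in claim 8, using the commonly published default hyperparameters; the gradient-function interface is an assumption:

```python
import numpy as np

def adam(grad, x0, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=5000):
    """Minimal Adam gradient-descent loop. `grad` returns the gradient of
    the objective at x; returns the optimised parameter vector."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)   # first-moment (mean) estimate
    v = np.zeros_like(x)   # second-moment (uncentred variance) estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)      # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x
```

For instance, minimizing a quadratic objective with gradient `lambda x: 2 * (x - target)` drives `x` toward `target`; in the patent's setting `x` would stack the pose and shape parameters of the hand model.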
9. The method for obtaining a hand three-dimensional parametric model from a depth image as claimed in claim 1, wherein in step (4.3) the point cloud elimination rule for the finger part is to eliminate points whose projection onto the two-dimensional plane lies more than 80 pixels from the two-dimensional projection of the root node of the hand three-dimensional parametric model.
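The 80-pixel elimination rule of claim 9 can be sketched as follows, with the pinhole intrinsics as assumptions:

```python
import numpy as np

def prune_finger_points(points, root, fx, fy, cx, cy, max_px=80.0):
    """Step (4.3) pruning rule: project the hand point cloud and the model's
    root node into the image, and drop points whose 2D projection lies more
    than `max_px` pixels from the projected root node."""
    def project(p):
        p = np.atleast_2d(p)
        return np.stack([fx * p[:, 0] / p[:, 2] + cx,
                         fy * p[:, 1] / p[:, 2] + cy], axis=1)

    pts2d = project(points)
    root2d = project(root)[0]
    keep = np.linalg.norm(pts2d - root2d, axis=1) <= max_px
    return points[keep]
```

Removing the far-from-palm points before the final shape refinement keeps the optimisation focused on the palm region, where the shape parameters are best constrained.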
10. The method for obtaining a hand three-dimensional parametric model from a depth image as claimed in claim 1, wherein the hand three-dimensional parametric model obtained in step (4) achieves higher accuracy and adaptability in hand pose estimation, since it provides more user-specific prior information than conventional model-free methods or generic hand models.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110595988.0A CN113205605B (en) | 2021-05-29 | 2021-05-29 | Method for acquiring hand three-dimensional parametric model from depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113205605A true CN113205605A (en) | 2021-08-03 |
CN113205605B CN113205605B (en) | 2022-04-19 |
Family
ID=77023610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110595988.0A Active CN113205605B (en) | 2021-05-29 | 2021-05-29 | Method for acquiring hand three-dimensional parametric model from depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113205605B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013029675A1 (en) * | 2011-08-31 | 2013-03-07 | Metaio Gmbh | Method for estimating a camera motion and for determining a three-dimensional model of a real environment |
CN107833270A (en) * | 2017-09-28 | 2018-03-23 | 浙江大学 | Real-time object dimensional method for reconstructing based on depth camera |
CN109636831A (en) * | 2018-12-19 | 2019-04-16 | 安徽大学 | A method of estimation 3 D human body posture and hand information |
CN111882659A (en) * | 2020-07-21 | 2020-11-03 | 浙江大学 | High-precision human body foot shape reconstruction method integrating human body foot shape rule and visual shell |
CN112509117A (en) * | 2020-11-30 | 2021-03-16 | 清华大学 | Hand three-dimensional model reconstruction method and device, electronic equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
WENTAO WEI et al.: "Surface-Electromyography-Based Gesture Recognition by Multi-View Deep Learning", IEEE Transactions on Biomedical Engineering * |
WANG Liping et al.: "A Survey of 3D Hand Gesture Pose Estimation Methods in Depth Images", 《小型微型计算机***》 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114363161A (en) * | 2022-01-11 | 2022-04-15 | 中国工商银行股份有限公司 | Abnormal equipment positioning method, device, equipment and medium |
CN114363161B (en) * | 2022-01-11 | 2024-03-22 | 中国工商银行股份有限公司 | Abnormal equipment positioning method, device, equipment and medium |
CN114463409A (en) * | 2022-02-11 | 2022-05-10 | 北京百度网讯科技有限公司 | Method and device for determining image depth information, electronic equipment and medium |
CN114463409B (en) * | 2022-02-11 | 2023-09-26 | 北京百度网讯科技有限公司 | Image depth information determining method and device, electronic equipment and medium |
US11783501B2 (en) | 2022-02-11 | 2023-10-10 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method and apparatus for determining image depth information, electronic device, and media |
CN117315092A (en) * | 2023-10-08 | 2023-12-29 | 玩出梦想(上海)科技有限公司 | Automatic labeling method and data processing equipment |
CN117315092B (en) * | 2023-10-08 | 2024-05-14 | 玩出梦想(上海)科技有限公司 | Automatic labeling method and data processing equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113205605B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113205605B (en) | Method for acquiring hand three-dimensional parametric model from depth image | |
Sharp et al. | ICP registration using invariant features | |
KR101902702B1 (en) | Tooth axis estimation program, tooth axis estimation device and method of the same, tooth profile data creation program, tooth profile data creation device and method of the same | |
JP4785880B2 (en) | System and method for 3D object recognition | |
CN110675487B (en) | Three-dimensional face modeling and recognition method and device based on multi-angle two-dimensional face | |
CN109544606B (en) | Rapid automatic registration method and system based on multiple Kinects | |
CN110634161A (en) | Method and device for quickly and accurately estimating pose of workpiece based on point cloud data | |
JPWO2009147904A1 (en) | Finger shape estimation device, finger shape estimation method and program | |
JP2009020761A (en) | Image processing apparatus and method thereof | |
Sharp et al. | Invariant features and the registration of rigid bodies | |
Werghi et al. | A functional-based segmentation of human body scans in arbitrary postures | |
CN110603570B (en) | Object recognition method, device, system, and program | |
JP2017532695A (en) | Method and system for scanning an object using an RGB-D sensor | |
CN113555083B (en) | Massage track generation method | |
Pan et al. | Automatic rigging for animation characters with 3D silhouette | |
CN111627043A (en) | Simple human body curve acquisition method based on marker and feature filter | |
CN108010002A (en) | A kind of structuring point cloud denoising method based on adaptive implicit Moving Least Squares | |
EP3801201A1 (en) | Measuring surface distances on human bodies | |
CN111833392A (en) | Multi-angle scanning method, system and device for mark points | |
KR20020073890A (en) | Three - Dimensional Modeling System Using Hand-Fumble and Modeling Method | |
CN111402221A (en) | Image processing method and device and electronic equipment | |
Endo et al. | A computer-aided ergonomic assessment and product design system using digital hands | |
CN113674395B (en) | 3D hand lightweight real-time capturing and reconstructing system based on monocular RGB camera | |
EP4155036A1 (en) | A method for controlling a grasping robot through a learning phase and a grasping phase | |
JP2016162425A (en) | Body posture estimation device, method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||