CN104268138A - Method for capturing human motion by aid of fused depth images and three-dimensional models - Google Patents


Publication number
CN104268138A
CN104268138A
Authority
CN
China
Legal status
Granted
Application number
CN201410205213.8A
Other languages
Chinese (zh)
Other versions
CN104268138B (en)
Inventor
肖秦琨
谢艳梅
Current Assignee
Xian Technological University
Original Assignee
Xian Technological University
Priority date
Filing date
Publication date
Application filed by Xian Technological University
Priority to CN201410205213.8A
Publication of CN104268138A
Application granted
Publication of CN104268138B
Legal status: Expired - Fee Related

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs


Abstract

The invention relates to a method for capturing human motion by fusing depth images and three-dimensional models. The method addresses the drawbacks of optical motion capture: marker points must be attached to the body, interaction is inconvenient, and markers are easily confused or occluded. The method comprises: acquiring depth information of human actions and removing the background of the moving target to obtain complete depth information of the actions; converting this depth information into three-dimensional point clouds of the human body to obtain three-dimensional models of the actions; building a three-dimensional model database and a human-action skeleton database whose entries correspond one to one; extracting the depth information of an action to be identified and building its three-dimensional model; matching this model against the actions in the three-dimensional model database by similarity; and outputting human-action skeletons ranked by similarity, the best-matching skeleton being the motion-capture result. The method requires no sensors or marker points on the body and is easy to implement; motion sequences are matched by canonical time warping, which improves the precision of matching two sequences, greatly shortens matching time, and guarantees the speed and precision of motion capture.

Description

Human motion capture method fusing depth maps and three-dimensional models
Technical field
The invention belongs to the technical field of multimedia information retrieval, and specifically relates to a human motion capture method that fuses depth maps and three-dimensional models.
Background technology
Human motion capture is a hot topic in multimedia information retrieval, with broad application prospects particularly in film and television animation, games, and related fields, and many research institutions at home and abroad are devoted to this direction. In recent years, with the rapid development of motion-capture technology and the rise of three-dimensional animation, games, and new-generation human-computer interaction, applications need many complex and lifelike human actions to be captured quickly, so a fast and effective human motion capture method is required. The optical motion-capture methods proposed so far are mainly based on computer-vision principles and accomplish the capture task by monitoring and tracking specific light points on the target. However, these methods have shortcomings:
(1) Optical motion capture requires the performer to wear marker points and a special performance suit, which makes interaction inconvenient. In complex movements, the markers on different body parts may be confused or occluded, producing erroneous results that require manual intervention in post-processing.
(2) Although it can capture motion in real time, the post-processing workload (including marker identification, tracking, and computation of spatial coordinates) is large; there are requirements on the lighting and reflection conditions of the performance venue, and device calibration is cumbersome.
Summary of the invention
The object of the invention is to provide a human motion capture method fusing depth maps and three-dimensional models that effectively overcomes the technical deficiencies of existing motion-capture methods: restricted range of movement, high distortion of the captured result, and large error.
The technical solution adopted by the invention is as follows.
A human motion capture method fusing depth maps and three-dimensional models, characterized in that:
it is realized by the following steps:
Step one: collect depth information of human actions, remove the background of the moving target, and obtain complete human-action depth information.
Step two: convert the extracted human-action depth information into three-dimensional point-cloud information of the human body, perform three-dimensional reconstruction of the body, and obtain a three-dimensional model of the action.
Step three: repeat steps one and two and, based on a large amount of collected human-action depth information, build the three-dimensional model database of human actions M = {Y1, Y2, …}.
Step four: construct human-action skeletons from the body structure and the three-dimensional action models, and build the human-action skeleton database G = {S1, S2, …}, wherein the skeleton data correspond one to one with the three-dimensional model data.
Step five: extract the depth information of the human action to be identified, build its three-dimensional model from the depth information, then perform similarity matching with the human actions in the three-dimensional model database and output human-action skeletons ranked by similarity; the skeleton with the smallest similarity distance is taken as the optimal skeleton, i.e. the motion-capture result.
In step one, based on the collected human-action depth information, the concrete steps of removing the moving-target background and obtaining the complete human-action depth information are:
(1) The collected human-action depth map is denoted F(x, y, d), where x and y are the abscissa and ordinate in the pixel coordinate system and d is the depth value. The initial threshold for segmenting the background region and the target region by depth is taken as
T = (maxDepthValue + minDepthValue)/2,
where maxDepthValue and minDepthValue are the maximum and minimum depth values of the image. T is recorded in T0; according to the threshold T, F(x, y, d) is divided into a background region and a target region, and the mean depth values u1 and u2 of the two regions are obtained.
(2) Recompute T = (u1 + u2)/2 and compare T with T0. If they are unequal, record T in T0 and repeat the above step until T = T0 holds, then terminate the algorithm. The final T is used as the optimal threshold to segment F(x, y, d) and remove the background, yielding the complete human-action depth information d0.
In step two, the concrete steps of converting the extracted human-action depth information into three-dimensional point-cloud information of the human body, reconstructing the body in three dimensions, and obtaining the three-dimensional model of the action are:
(1) Normalize the extracted depth information d0. Let N be the number of depth values, and let max d(k) and min d(k) be the maximum and minimum depth values. The normalized depth is
z(k) = (d0(k) − min d(k)) / (max d(k) − min d(k)), k = 1, …, N.
(2) Let z(n) = Z. The world coordinate system takes the camera as its origin; after calibration the camera can be regarded as an ideal imaging model, so by simple similar-triangle relations
X = xZ/f, Y = yZ/f,
from which the X and Y values of the world coordinate system are calculated, where x and y are the abscissa and ordinate of the pixel coordinate system and f is the focal length of the camera. Finally the three-dimensional point-cloud information (X, Y, Z) of the human action is obtained.
In step four, the concrete steps of constructing the human-action skeleton from the body structure and the three-dimensional action model are:
(1) In the depth map, approximate the torso as a quadrilateral, denoted Q.
The two upper vertices of Q are the positions of the shoulder joints a1 and a2;
the neck joint b is the midpoint of the line a1a2;
moving upward from point b, the topmost point is the position of the head node c;
the two lower vertices of Q are the positions of the hip joints d1 and d2;
the hip-centre joint e is at the midpoint of d1d2.
(2) To determine the hand and elbow joints, search outward from a1 and a2. If an arm is straight, locate the elbow joints f1, f2 and hand joints g1, g2 according to the length ratio of the upper arm to the forearm; if an arm is bent, there is an inflection point at the elbow, whose position gives f1, f2, and continuing the search from f1, f2 to the end point gives the positions of the hand joints g1, g2.
Likewise, the knee joints h1, h2 and foot joints i1, i2 can be determined by the same procedure used for the hand and elbow joints.
(3) Connecting the joint coordinates with straight lines in the order of the body structure yields the human-action skeleton.
In step five, the concrete steps of extracting the depth information of the human action to be identified, building its three-dimensional model from the depth information, and then performing similarity matching with the human actions in the three-dimensional model database are:
(1) Cluster the human actions represented by three-dimensional models using a hierarchical clustering algorithm:
① Let a parameter m denote the number of final clusters. Treat each input datum as an independent data cluster D0 and find the nearest cluster Dx adjacent to each cluster; the distance between data clusters is regarded as the distance between cluster centres.
② Merge the two nearest data clusters Dp and Dq into a new cluster Dn, then compute the distance from Dn to the other clusters. If the current number of clusters is greater than m, return to ② and continue merging; otherwise the algorithm terminates, yielding the clustering result D = {u1, u2, …, um}, where u denotes the value of each class.
(2) Compute the distance between each frame of the action sequence to be identified and each frame of the action sequences in the three-dimensional model database:
① Suppose the clustering result of any three-dimensional model, whether of the action to be identified or of the database, is expressed as a clustering tree. Starting from the root node of the tree, perform a depth-first search down to the leaf nodes.
② During the traversal, compute the distances between the corresponding classes of Di and Dj, and sum the distances of the classes to obtain the Euclidean distance between the two clustering trees.
(3) Compute the optimal matching path by canonical time warping (CTW) to obtain the optimal matching sequence:
① Let the three-dimensional-model action sequence to be identified be X and a set sequence in the three-dimensional model database be Y, where m and n are the respective lengths of the two sequences; the distance between individual frames of the two sequences is computed by the method above.
② Because m and n may be unequal, many frame correspondences are possible. Using canonical time warping, compute the minimum distance J between the sequences and simultaneously obtain the optimal warping paths Px and Py. Here Wx and Wy are two binary warping matrices: at each step t ∈ {1:l} the entry for the selected frame pair is 1 and is 0 otherwise, where l is the number of frames to be matched, selected automatically by the canonical-time-warping algorithm, with l ≥ max(m, n); φ(·) is a regularization term. Px and Py range over all possible matching paths and must satisfy the boundary, monotonicity and continuity constraints of time warping; Vx and Vy (d ≤ min(dx, dy)) are linear transformation matrices that must satisfy their own constraints.
③ Repeat the above steps, using canonical time warping to compute, for each action sequence Y in the three-dimensional model database, the minimum distance J to X and the optimal warping paths Px and Py; then take the minimum of the J values and its corresponding Px and Py to obtain the corresponding best-matching sequence.
The present invention has the following advantages:
(1) The method uses a depth-acquisition device to collect depth information; the performer needs no body-mounted sensors or added marker points, the permitted range of movement is large, requirements on the performance venue are few, and the method is convenient and easy to implement.
(2) Previous markerless motion-capture techniques were mainly video-based, shooting two-dimensional image sequences to achieve motion capture. The present invention fuses depth information, solving the problem of body-part information loss caused by self-occlusion in two-dimensional images that lack depth, and ensuring the completeness and reliability of the acquired information.
(3) The invention uses hierarchical clustering, in which the similarity rule based on inter-cluster distance is easy to define, to reduce the large volume of three-dimensional point-cloud data to a few classes, so that the Euclidean distance between two three-dimensional model frames can later be computed by depth-first search, reducing the amount of computation.
(4) For matching motion sequences the invention uses canonical time warping (Canonical Time Warping, CTW), a robust method which can complete pattern matching between a test sequence and a reference sequence even when the time scales of the test sequence and the reference patterns in the database are not identical. CTW is based on dynamic programming; the objective function in step five (3) ② includes a regularization term and two binary warping matrices, which give the objective function the possibility of a unique solution, improve the precision of matching two sequences, greatly reduce matching time, and guarantee the speed and precision of motion capture.
Brief description of the drawings
Fig. 1 is the overall flowchart of the method of the invention.
Fig. 2 is the detailed flow diagram of step one of the invention.
Fig. 3 is the detailed flow diagram of step five of the invention.
Detailed description of the embodiments
The invention is described in detail below in conjunction with an embodiment.
The human motion capture method fusing depth maps and three-dimensional models according to the invention is realized by the following steps.
Step one: use a depth-acquisition device to collect depth information of human actions, remove the background of the moving target, and obtain complete human-action depth information.
Based on the collected human-action depth information, the concrete steps of removing the moving-target background and obtaining the complete depth information are:
(1) The collected human-action depth map is denoted F(x, y, d), where x and y are the abscissa and ordinate in the pixel coordinate system and d is the depth value. The initial threshold for segmenting the background region and the target region by depth is taken as
T = (maxDepthValue + minDepthValue)/2,
where maxDepthValue and minDepthValue are the maximum and minimum depth values of the image. T is recorded in T0; according to the threshold T, F(x, y, d) is divided into a background region and a target region, and the mean depth values u1 and u2 of the two regions are obtained.
(2) Recompute T = (u1 + u2)/2 and compare T with T0. If they are unequal, record T in T0 and repeat the above step until T = T0 holds, then terminate the algorithm. The final T is used as the optimal threshold to segment F(x, y, d) and remove the background, yielding the complete human-action depth information d0.
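The iterative thresholding of step one can be sketched as follows. This is an illustrative Python/NumPy sketch, not the patent's implementation; the exclusion of zero-depth (invalid) pixels is an added assumption:

```python
import numpy as np

def iterative_threshold(depth):
    """Iterative depth thresholding as in step one: start from the midpoint
    of the depth range, split into two regions, and re-centre the threshold
    on the mean of the two region means until it stops changing."""
    d = depth[depth > 0].astype(float)  # ignore invalid zero-depth pixels (assumption)
    t = (d.max() + d.min()) / 2.0       # initial threshold T
    while True:
        t0 = t
        u1 = d[d > t0].mean()           # mean depth of the far (background) region
        u2 = d[d <= t0].mean()          # mean depth of the near (target) region
        t = (u1 + u2) / 2.0             # recompute T
        if abs(t - t0) < 1e-6:          # T == T0 up to numerical tolerance
            return t

# toy depth map: four near (target) pixels, two far (background) pixels
depth = np.array([[800, 810, 3000], [790, 805, 2990]], dtype=float)
t = iterative_threshold(depth)
foreground = depth <= t  # near pixels kept as the moving target
```

On this toy map the threshold settles between the two depth groups, so exactly the four near pixels survive as foreground.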
Step two: convert the extracted human-action depth information into three-dimensional point-cloud information of the human body, perform three-dimensional reconstruction of the body, and obtain the three-dimensional model of the action. The concrete steps are:
(1) Normalize the extracted depth information d0. Let N be the number of depth values, and let max d(k) and min d(k) be the maximum and minimum depth values. The normalized depth is
z(k) = (d0(k) − min d(k)) / (max d(k) − min d(k)), k = 1, …, N.
(2) Let z(n) = Z. The world coordinate system takes the camera as its origin; after calibration the camera can be regarded as an ideal imaging model, so by simple similar-triangle relations
X = xZ/f, Y = yZ/f,
from which the X and Y values of the world coordinate system are calculated, where x and y are the abscissa and ordinate of the pixel coordinate system and f is the focal length of the camera. Finally the three-dimensional point-cloud information (X, Y, Z) of the human action is obtained.
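The back-projection of step two can be sketched as below. The patent leaves the pixel-coordinate origin implicit, so measuring pixel coordinates relative to the image centre is an assumption of this sketch:

```python
import numpy as np

def depth_to_pointcloud(depth, f):
    """Back-project a depth map to a 3-D point cloud with the pinhole model
    of step two: X = x*Z/f, Y = y*Z/f, with Z the depth value."""
    h, w = depth.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x -= w / 2.0  # assume the principal point is at the image centre
    y -= h / 2.0
    Z = depth
    X = x * Z / f  # similar-triangle relation for the horizontal axis
    Y = y * Z / f  # and for the vertical axis
    return np.stack([X, Y, Z], axis=-1)  # per-pixel (X, Y, Z)

# toy example: a flat surface 1000 units from a camera with f = 500
cloud = depth_to_pointcloud(np.full((4, 4), 1000.0), f=500.0)
```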
Step three: repeat steps one and two and, based on as much collected human-action depth information as possible, build the three-dimensional model database of human actions M = {Y1, Y2, …}.
Step four: construct human-action skeletons from the body structure and the three-dimensional action models, and build the human-action skeleton database G = {S1, S2, …}, wherein the skeleton data correspond one to one with the three-dimensional model data.
The concrete steps of constructing the human-action skeleton from the body structure and the three-dimensional action model are:
(1) In the depth map, approximate the torso as a quadrilateral, denoted Q.
The two upper vertices of Q are the positions of the shoulder joints a1 and a2;
the neck joint b is the midpoint of the line a1a2;
moving upward from point b, the topmost point is the position of the head node c;
the two lower vertices of Q are the positions of the hip joints d1 and d2;
the hip-centre joint e is at the midpoint of d1d2.
(2) To determine the hand and elbow joints, search outward from a1 and a2. If an arm is straight, locate the elbow joints f1, f2 and hand joints g1, g2 according to the length ratio of the upper arm to the forearm; if an arm is bent, there is an inflection point at the elbow, whose position gives f1, f2, and continuing the search from f1, f2 to the end point gives the positions of the hand joints g1, g2.
Likewise, the knee joints h1, h2 and foot joints i1, i2 can be determined by the same procedure used for the hand and elbow joints.
(3) Connecting the joint coordinates with straight lines in the order of the body structure yields the human-action skeleton.
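The torso-based joint estimates of step four can be illustrated with a minimal sketch. The search along the limbs is omitted, and the vertex ordering of Q is an assumption of this illustration:

```python
import numpy as np

def torso_skeleton(q):
    """Estimate torso joints from the four torso-quadrilateral vertices of
    step four: q = [top-left, top-right, bottom-right, bottom-left].
    Returns shoulders a1, a2; neck b (midpoint of a1a2); hips d1, d2;
    and hip centre e (midpoint of d1d2)."""
    a1, a2, d2, d1 = (np.asarray(p, dtype=float) for p in q)
    b = (a1 + a2) / 2.0  # neck joint: midpoint of the shoulder line
    e = (d1 + d2) / 2.0  # hip-centre joint: midpoint of the hip line
    return {"shoulders": (a1, a2), "neck": b, "hips": (d1, d2), "hip_centre": e}

# toy quadrilateral 4 units wide, 6 units tall
sk = torso_skeleton([(0, 0), (4, 0), (4, 6), (0, 6)])
```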
Step five: extract the depth information of the human action to be identified, build its three-dimensional model from the depth information, then perform similarity matching with the human actions in the three-dimensional model database and output human-action skeletons ranked by similarity; the skeleton with the smallest similarity distance is taken as the optimal skeleton, i.e. the motion-capture result.
The concrete steps of extracting the depth information of the action to be identified, building its three-dimensional model, and then performing similarity matching with the human actions in the three-dimensional model database are:
(1) Cluster the human actions represented by three-dimensional models using a hierarchical clustering algorithm:
① Let a parameter m denote the number of final clusters. Treat each input datum as an independent data cluster D0 and find the nearest cluster Dx adjacent to each cluster; the distance between data clusters is regarded as the distance between cluster centres.
② Merge the two nearest data clusters Dp and Dq into a new cluster Dn, then compute the distance from Dn to the other clusters. If the current number of clusters is greater than m, return to ② and continue merging; otherwise the algorithm terminates, yielding the clustering result D = {u1, u2, …, um}, where u denotes the value of each class.
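The agglomerative clustering of step five (1) can be sketched as follows, using the centroid distance between clusters as the merge criterion described above; the implementation details are illustrative, not the patent's:

```python
import numpy as np

def agglomerate(points, m):
    """Centroid-linkage agglomerative clustering down to m clusters, as in
    step five (1): repeatedly merge the two clusters whose centroids are
    nearest until only m clusters remain; return the final centroids."""
    clusters = [[np.asarray(p, dtype=float)] for p in points]
    while len(clusters) > m:
        cents = [np.mean(c, axis=0) for c in clusters]
        # find the pair of clusters Dp, Dq with the nearest centroids
        i, j = min(((i, j) for i in range(len(cents))
                    for j in range(i + 1, len(cents))),
                   key=lambda ij: np.linalg.norm(cents[ij[0]] - cents[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]  # merge into a new cluster Dn
        del clusters[j]
    return [np.mean(c, axis=0) for c in clusters]  # the class values u1..um

# two obvious groups of toy 2-D points
cents = agglomerate([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)], m=2)
```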
(2) Compute the distance between each frame of the action sequence to be identified and each frame of the action sequences in the three-dimensional model database:
① Suppose the clustering result of any three-dimensional model, whether of the action to be identified or of the database, is expressed as a clustering tree. Starting from the root node of the tree, perform a depth-first search down to the leaf nodes.
② During the traversal, compute the distances between the corresponding classes of Di and Dj, and sum the distances of the classes to obtain the Euclidean distance between the two clustering trees.
(3) Compute the optimal matching path by canonical time warping (CTW) to obtain the optimal matching sequence:
① Let the three-dimensional-model action sequence to be identified be X and a set sequence in the three-dimensional model database be Y, where m and n are the respective lengths of the two sequences; the distance between individual frames of the two sequences is computed by the method above.
② Because m and n may be unequal, many frame correspondences are possible. Using canonical time warping, compute the minimum distance J between the sequences and simultaneously obtain the optimal warping paths Px and Py. Here Wx and Wy are two binary warping matrices: at each step t ∈ {1:l} the entry for the selected frame pair is 1 and is 0 otherwise, where l is the number of frames to be matched, selected automatically by the canonical-time-warping algorithm, with l ≥ max(m, n); φ(·) is a regularization term. Px and Py range over all possible matching paths and must satisfy the boundary, monotonicity and continuity constraints of time warping; Vx and Vy (d ≤ min(dx, dy)) are linear transformation matrices that must satisfy their own constraints.
③ Repeat the above steps, using canonical time warping to compute, for each action sequence Y in the three-dimensional model database, the minimum distance J to X and the optimal warping paths Px and Py; then take the minimum of the J values and its corresponding Px and Py to obtain the corresponding best-matching sequence.
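The alignment of step five (3) uses canonical time warping; the sketch below shows only the plain dynamic-time-warping core that CTW builds on (the learned linear transforms Vx, Vy and the regularization term are omitted), so it is an illustration rather than the patent's full method:

```python
import numpy as np

def dtw(X, Y):
    """Dynamic-time-warping distance between two frame sequences of possibly
    unequal length: fill the cumulative-cost table under the standard
    boundary, monotonicity and continuity constraints and return the
    minimum alignment distance J."""
    m, n = len(X), len(Y)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(X[i - 1] - Y[j - 1])  # per-frame distance (scalar frames here)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]  # minimum alignment distance J

# a length-3 sequence aligns perfectly with a length-4 copy that repeats a frame
J = dtw([0.0, 1.0, 2.0], [0.0, 0.0, 1.0, 2.0])
```

In the patent's scheme the per-frame cost would be the clustering-tree Euclidean distance of step five (2) rather than a scalar difference.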
The content of the invention is not limited to the cited embodiment; any equivalent transformation of the technical solution of the invention made by a person of ordinary skill in the art after reading the specification is covered by the claims of the invention.

Claims (5)

1. A human motion capture method fusing depth maps and three-dimensional models, characterized in that:
it is realized by the following steps:
Step one: collecting depth information of human actions, removing the background of the moving target, and obtaining complete human-action depth information;
Step two: converting the extracted human-action depth information into three-dimensional point-cloud information of the human body, performing three-dimensional reconstruction of the body, and obtaining a three-dimensional model of the action;
Step three: repeating steps one and two and, based on a large amount of collected human-action depth information, building the three-dimensional model database of human actions M = {Y1, Y2, …};
Step four: constructing human-action skeletons from the body structure and the three-dimensional action models, and building the human-action skeleton database G = {S1, S2, …}, wherein the skeleton data correspond one to one with the three-dimensional model data;
Step five: extracting the depth information of the human action to be identified, building its three-dimensional model from the depth information, then performing similarity matching with the human actions in the three-dimensional model database, and outputting human-action skeletons ranked by similarity, the skeleton with the smallest similarity distance being taken as the optimal skeleton, i.e. the motion-capture result.
2. The human motion capture method fusing depth maps and three-dimensional models according to claim 1, characterized in that:
in step one, based on the collected human-action depth information, the concrete steps of removing the moving-target background and obtaining the complete human-action depth information are:
(1) the collected human-action depth map is denoted F(x, y, d), where x and y are the abscissa and ordinate in the pixel coordinate system and d is the depth value; the initial threshold for segmenting the background region and the target region by depth is T = (maxDepthValue + minDepthValue)/2, where maxDepthValue and minDepthValue are the maximum and minimum depth values of the image; T is recorded in T0, F(x, y, d) is divided by the threshold T into a background region and a target region, and the mean depth values u1 and u2 of the two regions are obtained;
(2) T = (u1 + u2)/2 is recalculated and compared with T0; if they are unequal, T is recorded in T0 and the above step is repeated until T = T0 holds, whereupon the algorithm terminates; the final T is used as the optimal threshold to segment F(x, y, d) and remove the background, yielding the complete human-action depth information d0.
3. The human motion capture method fusing depth maps and three-dimensional models according to claim 2, characterized in that:
in step two, the concrete steps of converting the extracted human-action depth information into three-dimensional point-cloud information of the human body, reconstructing the body in three dimensions, and obtaining the three-dimensional model of the action are:
(1) the extracted depth information d0 is normalized: let N be the number of depth values and let max d(k) and min d(k) be the maximum and minimum depth values; the normalized depth is z(k) = (d0(k) − min d(k)) / (max d(k) − min d(k));
(2) let z(n) = Z; the world coordinate system takes the camera as its origin, and after calibration the camera can be regarded as an ideal imaging model, so by simple similar-triangle relations X = xZ/f and Y = yZ/f, from which the X and Y values of the world coordinate system are calculated, where x and y are the abscissa and ordinate of the pixel coordinate system and f is the focal length of the camera;
finally the three-dimensional point-cloud information (X, Y, Z) of the human action is obtained.
4. The human motion capture method fusing depth maps and three-dimensional models according to claim 3, characterized in that:
in step four, the concrete steps of constructing the human-action skeleton from the body structure and the three-dimensional action model are:
(1) in the depth map, the torso is approximated as a quadrilateral, denoted Q;
the two upper vertices of Q are the positions of the shoulder joints a1 and a2;
the neck joint b is the midpoint of the line a1a2;
moving upward from point b, the topmost point is the position of the head node c;
the two lower vertices of Q are the positions of the hip joints d1 and d2;
the hip-centre joint e is at the midpoint of d1d2;
(2) to determine the hand and elbow joints, a search starts from a1 and a2; if an arm is straight, the elbow joints f1, f2 and hand joints g1, g2 are located according to the length ratio of the upper arm to the forearm; if an arm is bent, there is an inflection point at the elbow whose position gives f1, f2, and continuing the search from f1, f2 to the end point gives the positions of the hand joints g1, g2;
likewise, the knee joints h1, h2 and foot joints i1, i2 are determined by the same procedure used for the hand and elbow joints;
(3) connecting the joint coordinates with straight lines in the order of the body structure yields the human-action skeleton.
5. the human body motion capture method of fusion depth map according to claim 4 and three-dimensional model, is characterized in that:
In step 5, extract the depth information of human action to be identified, build the three-dimensional model of human action to be identified based on depth information, the concrete steps of then carrying out similarity matching with the human action in three-dimensional modeling data storehouse are:
(1) the human action hierarchical clustering algorithm represented by three-dimensional model carries out cluster:
1. suppose that a parameter m represents the number of final clustering cluster, using the data of each input as independent aggregate of data D 0, obtain the nearest bunch D adjacent with each D x, the distance between aggregate of data can to regard in aggregate of data distance in the heart as;
2. by nearest two aggregate of data D pand D qmerge, the new aggregate of data D of generation n, then calculate D nwith the distance of other bunch, if the number of current data bunch is greater than m, so just forward the merging 2. proceeding aggregate of data to, otherwise algorithm terminates, obtain cluster result D={u 1, u 2..., u m, wherein u represents the value of each class;
(2) distance between each action action frame of human action sequence in each action action frame of human action sequence to be identified and three-dimensional modeling data storehouse is calculated:
1. suppose that in human action to be identified and database, any one three-dimensional model cluster result is expressed as:
From the root node of clustering tree, carry out depth-first search until leaf node;
2. in the process of traversal search to D iand D jin the calculation and object distance of each class, the distance of each class is sued for peace, obtains the Euclidean distance of two clustering trees:
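One way the traversal in ①–② could look, assuming each leaf of a clustering tree stores a class-centre vector; the `Node` layout and the toy centre values are hypothetical, not part of the patent:

```python
import math

class Node:
    """A clustering-tree node: leaves carry a class-centre vector."""
    def __init__(self, centre=None, children=()):
        self.centre = centre          # set on leaf nodes only
        self.children = list(children)

def leaves(node):
    """Depth-first search from the root, yielding leaf classes in order."""
    if not node.children:
        yield node.centre
    for child in node.children:
        yield from leaves(child)

def tree_distance(di, dj):
    """Sum the per-class distances between corresponding leaves of Di and Dj."""
    return sum(math.dist(a, b) for a, b in zip(leaves(di), leaves(dj)))

# toy clustering trees with two leaf classes each
di = Node(children=[Node(centre=(0.0, 0.0)), Node(centre=(1.0, 0.0))])
dj = Node(children=[Node(centre=(0.0, 3.0)), Node(centre=(1.0, 4.0))])
print(tree_distance(di, dj))  # 3.0 + 4.0 = 7.0
```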
(3) Compute the optimal matching path by the canonical time warping method and obtain the best matching sequence:
① Suppose the three-dimensional model action sequence to be recognized is X, and a set sequence in the three-dimensional model database is Y, where m and n denote the lengths of the two action sequences respectively; the distance between any two action frames of the two sequences is computed by the method above;
② Because m and n may be unequal, many frame correspondences are possible. The canonical time warping method is used to compute the minimum distance J between the sequences and, at the same time, the optimal matching paths px and py:

J(px, py, Vx, Vy) = ‖Vx^T X Wx − Vy^T Y Wy‖²_F + φ(Vx, Vy)

Here Wx ∈ {0,1}^(m×l) and Wy ∈ {0,1}^(n×l) are two binary warping matrices whose entries at each step t ∈ {1:l} satisfy Wx(px(t), t) = 1 and Wy(py(t), t) = 1, and are 0 in all other cases, where l is the number of action frames to be matched, selected automatically by the canonical time warping algorithm, with l ≥ max(m, n); φ(·) is the regularization term.

px and py represent all possible matching paths, and the constraints that must be satisfied are the boundary and continuity conditions: px(1) = py(1) = 1, px(l) = m, py(l) = n, and at each step px(t+1) − px(t) ∈ {0,1}, py(t+1) − py(t) ∈ {0,1}.

Vx ∈ R^(dx×d) and Vy ∈ R^(dy×d), with d ≤ min(dx, dy), are linear transformation matrices projecting the two sequences into a common d-dimensional subspace; the constraints they must satisfy are normalization conditions excluding the trivial all-zero solution, where dx and dy are the feature dimensions of X and Y.
③ Repeat the above steps: using the canonical time warping method, compute for each action sequence Y in the three-dimensional model database the minimum distance J to X and the corresponding optimal matching paths px and py; the minimum of all the J values and its associated px and py then yield the corresponding best matching sequence.
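The path computation in ②–③ can be illustrated with plain dynamic time warping, the unregularized special case of canonical time warping (a minimal sketch: the per-frame distance here is a simple absolute difference rather than the clustering-tree distance of step (2)):

```python
import numpy as np

def dtw(x, y):
    """Dynamic time warping: minimum alignment cost J and matching paths px, py."""
    m, n = len(x), len(y)
    cost = np.full((m + 1, n + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d = abs(x[i - 1] - y[j - 1])              # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # advance in x only
                                 cost[i, j - 1],      # advance in y only
                                 cost[i - 1, j - 1])  # advance in both
    # backtrack from (m, n) to (1, 1) to recover the optimal matching paths
    px, py = [m], [n]
    i, j = m, n
    while (i, j) != (1, 1):
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(moves, key=lambda ij: cost[ij])
        px.append(i)
        py.append(j)
    return cost[m, n], px[::-1], py[::-1]

J, px, py = dtw([0, 1, 2, 3], [0, 1, 1, 2, 3])
print(J)  # 0.0: the sequences align perfectly despite unequal lengths m=4, n=5
```

Canonical time warping additionally learns the projections Vx, Vy alternately with the warping, which matters when the two sequences live in feature spaces of different dimension.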
CN201410205213.8A 2014-05-15 2014-05-15 Merge the human body motion capture method of depth map and threedimensional model Expired - Fee Related CN104268138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410205213.8A CN104268138B (en) 2014-05-15 2014-05-15 Merge the human body motion capture method of depth map and threedimensional model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410205213.8A CN104268138B (en) 2014-05-15 2014-05-15 Merge the human body motion capture method of depth map and threedimensional model

Publications (2)

Publication Number Publication Date
CN104268138A true CN104268138A (en) 2015-01-07
CN104268138B CN104268138B (en) 2017-08-15

Family

ID=52159660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410205213.8A Expired - Fee Related CN104268138B (en) 2014-05-15 2014-05-15 Merge the human body motion capture method of depth map and threedimensional model

Country Status (1)

Country Link
CN (1) CN104268138B (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680582A (en) * 2015-03-24 2015-06-03 中国人民解放军国防科学技术大学 Method for creating object-oriented customized three-dimensional human body model
CN104700452A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Three-dimensional body posture model matching method for any posture
CN105069423A (en) * 2015-07-29 2015-11-18 北京格灵深瞳信息技术有限公司 Human body posture detection method and device
CN105335722A (en) * 2015-10-30 2016-02-17 商汤集团有限公司 Detection system and detection method based on depth image information
CN105930773A (en) * 2016-04-13 2016-09-07 中国农业大学 Motion identification method and device
CN106441275A (en) * 2016-09-23 2017-02-22 深圳大学 Method and device for updating planned path of robot
CN106599806A (en) * 2016-12-01 2017-04-26 西安理工大学 Local curved-surface geometric feature-based human body action recognition method
CN106998430A (en) * 2017-04-28 2017-08-01 北京瑞盖科技股份有限公司 360 degree of video playback methods based on polyphaser
CN106997505A (en) * 2015-11-11 2017-08-01 株式会社东芝 Analytical equipment and analysis method
CN107212975A (en) * 2017-07-17 2017-09-29 徐彬 Wheelchair and method with intelligent rescue function
CN107551551A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Game effect construction method and device
CN108096836A (en) * 2017-12-20 2018-06-01 深圳市百恩互动娱乐有限公司 A kind of method that true man's real scene shooting makes game
CN108121963A (en) * 2017-12-21 2018-06-05 北京奇虎科技有限公司 Processing method, device and the computing device of video data
CN108392207A (en) * 2018-02-09 2018-08-14 西北大学 A kind of action identification method based on posture label
CN108510577A (en) * 2018-01-31 2018-09-07 中国科学院软件研究所 A kind of sense of reality action migration and generation method and system based on existing action data
CN108563329A (en) * 2018-03-23 2018-09-21 上海数迹智能科技有限公司 A kind of human arm position's parameter extraction algorithm based on depth map
CN109215128A (en) * 2018-08-09 2019-01-15 北京华捷艾米科技有限公司 The synthetic method and system of object motion attitude image
CN110020611A (en) * 2019-03-17 2019-07-16 浙江大学 A kind of more human action method for catching based on three-dimensional hypothesis space clustering
TWI672674B (en) * 2017-04-10 2019-09-21 鈺立微電子股份有限公司 Depth processing system
CN110276266A (en) * 2019-05-28 2019-09-24 暗物智能科技(广州)有限公司 A kind of processing method, device and the terminal device of the point cloud data based on rotation
CN111354075A (en) * 2020-02-27 2020-06-30 青岛联合创智科技有限公司 Foreground reduction interference extraction method in three-dimensional reconstruction
CN112330815A (en) * 2020-11-26 2021-02-05 北京百度网讯科技有限公司 Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
CN114694263A (en) * 2022-05-30 2022-07-01 深圳智华科技发展有限公司 Action recognition method, device, equipment and storage medium
WO2023042592A1 (en) * 2021-09-14 2023-03-23 Nec Corporation Method and apparatus for determining abnormal behaviour during cycle
CN116385663A (en) * 2023-05-26 2023-07-04 北京七维视觉传媒科技有限公司 Action data generation method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103294996A (en) * 2013-05-09 2013-09-11 电子科技大学 3D gesture recognition method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FENG ZHOU等: "canonical time warping for alignment of human behavior", 《CARNEGIE MELLON UNIVERSITY》 *
WANG XUE等: "Human-like character animation of maize driven by motion capture data", 《INFORMATION AND COMPUTATIONAL SCIENCE》 *
LUO MING: "Research on Skeleton Positioning Based on the Kinect Sensor", 《China Master's Theses Full-text Database, Information Science and Technology》 *
CHEN LUJUN: "Dynamic Depth Data Matching and Its Applications", 《China Master's Theses Full-text Database, Information Science and Technology》 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104700452A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Three-dimensional body posture model matching method for any posture
CN104680582A (en) * 2015-03-24 2015-06-03 中国人民解放军国防科学技术大学 Method for creating object-oriented customized three-dimensional human body model
CN104680582B (en) * 2015-03-24 2016-02-24 中国人民解放军国防科学技术大学 A kind of three-dimensional (3 D) manikin creation method of object-oriented customization
CN104700452B (en) * 2015-03-24 2016-03-02 中国人民解放军国防科学技术大学 A kind of 3 D human body attitude mode matching process towards any attitude
CN105069423B (en) * 2015-07-29 2018-11-09 北京格灵深瞳信息技术有限公司 A kind of human body attitude detection method and device
CN105069423A (en) * 2015-07-29 2015-11-18 北京格灵深瞳信息技术有限公司 Human body posture detection method and device
CN105335722B (en) * 2015-10-30 2021-02-02 商汤集团有限公司 Detection system and method based on depth image information
CN105335722A (en) * 2015-10-30 2016-02-17 商汤集团有限公司 Detection system and detection method based on depth image information
CN106997505A (en) * 2015-11-11 2017-08-01 株式会社东芝 Analytical equipment and analysis method
CN106997505B (en) * 2015-11-11 2021-02-09 株式会社东芝 Analysis device and analysis method
CN105930773A (en) * 2016-04-13 2016-09-07 中国农业大学 Motion identification method and device
CN106441275A (en) * 2016-09-23 2017-02-22 深圳大学 Method and device for updating planned path of robot
CN106599806A (en) * 2016-12-01 2017-04-26 西安理工大学 Local curved-surface geometric feature-based human body action recognition method
TWI672674B (en) * 2017-04-10 2019-09-21 鈺立微電子股份有限公司 Depth processing system
CN106998430A (en) * 2017-04-28 2017-08-01 北京瑞盖科技股份有限公司 360 degree of video playback methods based on polyphaser
CN106998430B (en) * 2017-04-28 2020-07-21 北京瑞盖科技股份有限公司 Multi-camera-based 360-degree video playback method
CN107212975A (en) * 2017-07-17 2017-09-29 徐彬 Wheelchair and method with intelligent rescue function
CN107551551A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Game effect construction method and device
CN108096836A (en) * 2017-12-20 2018-06-01 深圳市百恩互动娱乐有限公司 A kind of method that true man's real scene shooting makes game
CN108121963A (en) * 2017-12-21 2018-06-05 北京奇虎科技有限公司 Processing method, device and the computing device of video data
CN108510577A (en) * 2018-01-31 2018-09-07 中国科学院软件研究所 A kind of sense of reality action migration and generation method and system based on existing action data
CN108392207B (en) * 2018-02-09 2020-12-11 西北大学 Gesture tag-based action recognition method
CN108392207A (en) * 2018-02-09 2018-08-14 西北大学 A kind of action identification method based on posture label
CN108563329A (en) * 2018-03-23 2018-09-21 上海数迹智能科技有限公司 A kind of human arm position's parameter extraction algorithm based on depth map
CN108563329B (en) * 2018-03-23 2021-04-27 上海数迹智能科技有限公司 Human body arm position parameter extraction algorithm based on depth map
CN109215128A (en) * 2018-08-09 2019-01-15 北京华捷艾米科技有限公司 The synthetic method and system of object motion attitude image
CN109215128B (en) * 2018-08-09 2019-12-24 北京华捷艾米科技有限公司 Object motion attitude image synthesis method and system
CN110020611A (en) * 2019-03-17 2019-07-16 浙江大学 A kind of more human action method for catching based on three-dimensional hypothesis space clustering
CN110276266B (en) * 2019-05-28 2021-09-10 暗物智能科技(广州)有限公司 Rotation-based point cloud data processing method and device and terminal equipment
CN110276266A (en) * 2019-05-28 2019-09-24 暗物智能科技(广州)有限公司 A kind of processing method, device and the terminal device of the point cloud data based on rotation
CN111354075A (en) * 2020-02-27 2020-06-30 青岛联合创智科技有限公司 Foreground reduction interference extraction method in three-dimensional reconstruction
CN112330815A (en) * 2020-11-26 2021-02-05 北京百度网讯科技有限公司 Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
CN112330815B (en) * 2020-11-26 2024-05-14 北京百度网讯科技有限公司 Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
WO2023042592A1 (en) * 2021-09-14 2023-03-23 Nec Corporation Method and apparatus for determining abnormal behaviour during cycle
CN114694263A (en) * 2022-05-30 2022-07-01 深圳智华科技发展有限公司 Action recognition method, device, equipment and storage medium
CN114694263B (en) * 2022-05-30 2022-09-02 深圳智华科技发展有限公司 Action recognition method, device, equipment and storage medium
CN116385663A (en) * 2023-05-26 2023-07-04 北京七维视觉传媒科技有限公司 Action data generation method and device, electronic equipment and storage medium
CN116385663B (en) * 2023-05-26 2023-08-29 北京七维视觉传媒科技有限公司 Action data generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104268138B (en) 2017-08-15

Similar Documents

Publication Publication Date Title
CN104268138A (en) Method for capturing human motion by aid of fused depth images and three-dimensional models
CN104463108B (en) A kind of monocular real time target recognition and pose measuring method
Schonberger et al. Structure-from-motion revisited
CN108537191B (en) Three-dimensional face recognition method based on structured light camera
CN107767419A (en) A kind of skeleton critical point detection method and device
CN103198477B (en) Apple fruitlet bagging robot visual positioning method
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN109166149A (en) A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN101794384B (en) Shooting action identification method based on human body skeleton map extraction and grouping motion diagram inquiry
CN110555412B (en) End-to-end human body gesture recognition method based on combination of RGB and point cloud
CN101877143B (en) Three-dimensional scene reconstruction method of two-dimensional image group
WO2022213612A1 (en) Non-contact three-dimensional human body size measurement method
CN102567716B (en) Face synthetic system and implementation method
CN108564653A (en) Human skeleton tracing system and method based on more Kinect
CN111027432B (en) Gait feature-based visual following robot method
Uddin et al. Human Activity Recognition via 3-D joint angle features and Hidden Markov models
CN109101864A (en) The upper half of human body action identification method returned based on key frame and random forest
CN112200854B (en) Leaf vegetable three-dimensional phenotype measuring method based on video image
CN106815855A (en) Based on the human body motion tracking method that production and discriminate combine
CN105279786A (en) Method and system for obtaining object three-dimensional model
CN103839253A (en) Arbitrary point matching method based on partial affine transformation
CN111862315A (en) Human body multi-size measuring method and system based on depth camera
CN105654479A (en) Multispectral image registering method and multispectral image registering device
CN102479386A (en) Three-dimensional motion tracking method of upper half part of human body based on monocular video
Li et al. FC-SLAM: Federated learning enhanced distributed visual-LiDAR SLAM in cloud robotic system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170815

Termination date: 20190515

CF01 Termination of patent right due to non-payment of annual fee