CN102592112B - Method for determining gesture moving direction based on hidden Markov model - Google Patents

Method for determining gesture moving direction based on hidden Markov model

Info

Publication number
CN102592112B
CN102592112B CN201110429580.2A
Authority
CN
China
Prior art keywords
gesture
hidden markov
markov model
hmm
gesture motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110429580.2A
Other languages
Chinese (zh)
Other versions
CN102592112A (en)
Inventor
刘恒
赵仕才
张彩虹
吕金钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd filed Critical Sichuan Changhong Electric Co Ltd
Priority to CN201110429580.2A
Publication of CN102592112A
Application granted
Publication of CN102592112B
Status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for determining gesture motion direction based on a hidden Markov model. The method comprises the following steps: a, recognizing a human face; b, acquiring a gesture-region grayscale map through multi-cue fusion; c, updating the gesture motion history image and acquiring the motion energy image from the gesture-region grayscale map; d, partitioning the vector horizontal angles; e, acquiring the start and end times of the gesture motion from the motion energy image and segmenting the gesture motion; and f, training the hidden Markov model parameters and inferring the gesture motion direction with the forward and backward algorithms of the hidden Markov model. By judging the gesture motion direction of the hidden Markov model with multiple decision conditions, the method improves the accuracy of the gesture motion determination and markedly reduces the interference of various factors on the result.

Description

Method for determining gesture motion direction based on a hidden Markov model (HMM)
Technical field
The present invention relates to the field of video image processing, and specifically to a method for determining gesture motion direction based on a hidden Markov model (HMM).
Background technology
In recent years, with the rapidly growing influence of computers on modern society, multimodal human-computer interaction has found increasingly wide application in everyday life. Vision-based gesture recognition has become an indispensable technology for the next generation of human-computer interaction.
Gestures are a natural, intuitive, and easy-to-learn means of human-computer interaction. With the hand itself serving as the computer's input device, communication between humans and machines no longer requires an intermediate device. Computer-based gesture recognition is a promising recognition technology, but many technical difficulties remain: gestures must be extracted from a video stream; the hand is deformable under complex background conditions (interference from the user's own body, illumination, the scene, and so on); the hand carries much redundant feature information (such as palm prints); and gestures must be recognized in three-dimensional space. These problems have not yet been fully solved, and practical implementations require additional constraints. Many researchers are currently devoted to overcoming these technical difficulties so that gesture recognition can change future modes of interaction.
At present, video streams for gesture recognition are acquired with either 2D or 3D cameras. A 3D camera captures three-dimensional gesture information, but its high cost hinders wide adoption. Most current work therefore focuses on gesture recognition with inexpensive USB 2D cameras, with positive results.
Existing methods for determining gesture motion direction rely mainly on tracking the gesture alone: the hand's displacement is computed and its motion direction judged from the trajectory. However, because the hand is non-rigid and gesture tracking is generally based on skin color, such tracking-based methods are unstable, susceptible to interference from ambient illumination and background colors, and often fail to form a correct trajectory.
Summary of the invention
To address the above problems, the invention provides a method for determining gesture motion direction based on a hidden Markov model (HMM). Multiple decision conditions are applied to the HMM judgment of the gesture motion direction, improving the accuracy of the determination and reducing the interference of various factors on the result.
The method of the invention for determining gesture motion direction based on an HMM comprises:
a. detecting the human face through the camera to determine that a user has entered the system recognition range; a conventional Adaboost detector may be used for face detection;
b. obtaining the current motion image from the video stream and building a skin-color similarity model as the basis for skin segmentation during gesture motion; multiplying the skin-color probability map by the frame-difference result and filtering the product with a median filter to obtain the gesture-region grayscale map;
c. updating the gesture motion history image from the current gesture-region grayscale map, and obtaining the motion energy image;
d. performing image gradient vector computation on the valid gesture motion history image to obtain the gesture motion vectors and their horizontal angles, then mapping each angle to the code of the corresponding region in a partition of the 360-degree plane;
e. obtaining the start and end times of the gesture motion from the motion energy image, and segmenting out the gesture motion;
f. from the analysis of one video segment by steps a~e, obtaining the coded sequence of one motion direction in that video; taking this coded sequence as the visible state sequence of the HMM, training the HMM parameters, and inferring the gesture motion direction with the forward and backward algorithms of the HMM.
The method of the invention does not depend on the quality of gesture tracking. Instead, it fuses the skin-color information of the hand with the motion information of the gesture (the frame difference), which allows the gesture to be segmented more accurately. On this basis it constructs the gesture motion history image and the motion energy image, adaptively segments the gesture motion interval according to the motion characteristics of the hand, computes the sequence of orientation-angle features of the gesture motion within that interval, constructs an HMM from this angle-feature sequence, and uses the forward and backward algorithms of that model to infer the most probable gesture motion direction, thereby determining the gesture motion.
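As a purely illustrative sketch of this fusion step (assuming Python with OpenCV and numpy; the back-projection call, threshold values, and median-kernel size are assumptions, not taken from the patent):

```python
import cv2
import numpy as np

def gesture_region(frame, prev_frame, skin_hist, p_thresh=0.85, d_thresh=25):
    """Fuse the skin-color probability map with the frame difference (step b)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the normalized H-S skin histogram to a per-pixel skin score.
    prob = cv2.calcBackProject([hsv], [0, 1], skin_hist, [0, 180, 0, 256], 1)
    prob = prob.astype(np.float32)
    if prob.max() > 0:
        prob /= prob.max()                     # scale scores to [0, 1]
    prob[prob < p_thresh] = 0                  # keep only confident skin pixels
    # Binary frame difference as the motion cue.
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY))
    motion = (diff >= d_thresh).astype(np.float32)
    # Multiply the two cues and median-filter the product.
    fused = (prob * motion * 255).astype(np.uint8)
    return cv2.medianBlur(fused, 5)
```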
In one specific scheme, the method of updating the gesture motion history image in step c is:

$$H_\tau(x,y,t)=\begin{cases}\tau, & D(x,y,t)\ge\xi\\ \max\bigl(0,\ H_\tau(x,y,t-1)-\delta\bigr), & \text{otherwise}\end{cases}$$

where $H_\tau(x,y,t)$ is the current gesture motion history image, $D(x,y,t)$ is the current gesture-region grayscale map, $\tau$ is the parameter describing the time range of the motion, $\delta$ is the attenuation parameter of the gesture motion history image, and $\xi$ is the threshold for binary segmentation of the gesture region.
A large number of simulation tests and experiments show that, for good motion determination, the preferred attenuation parameter is $\delta=0.5$ and the preferred threshold is $\xi=10$.
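A minimal numpy sketch of this update rule (illustrative; the array conventions and the default value of tau are assumptions):

```python
import numpy as np

def update_mhi(mhi, d, tau=30.0, delta=0.5, xi=10.0):
    """Motion history image update: stamp moving pixels with tau, decay the rest."""
    moving = d >= xi                                   # binary gesture-region mask
    return np.where(moving, tau, np.maximum(0.0, mhi - delta))
```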
In another specific scheme, the method of obtaining the motion energy image in step c is:

$$E_\tau(x,y,t)=\begin{cases}1, & H_\tau(x,y,t)>0\\ 0, & \text{otherwise}\end{cases}$$

where $E_\tau(x,y,t)$ is the motion energy image, $\tau$ is the parameter describing the time range of the motion, $x$ and $y$ are the horizontal and vertical coordinates of the motion energy image, and $t$ is the time instant.
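Under the same conventions, a one-line sketch derives the motion energy image from the history image:

```python
def motion_energy(mhi):
    """Motion energy image: binary union of motion over the history window."""
    return (mhi > 0).astype("uint8")
```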
Further, the method in step e of obtaining the start and end times of the gesture motion from the motion energy image is: compute the difference between the accumulated motion energy images at two different times; if this difference is > 400, gesture motion has started, and if this difference is < 20, gesture motion has ended. The values 400 and 20 are statistics obtained from a large number of experiments.
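A sketch of this start/end decision, assuming one accumulated motion energy image per frame and a hypothetical comparison lag k (both assumptions):

```python
def motion_boundaries(meis, k=5, t_start=400, t_end=20):
    """Yield (frame_index, 'start'|'end') events from accumulated motion energy."""
    s = [int(m.sum()) for m in meis]       # accumulated energy s_t per frame
    in_motion = False
    for t in range(len(s) - k):
        gamma = s[t + k] - s[t]            # energy change over the lag
        if not in_motion and gamma > t_start:
            in_motion = True
            yield t, "start"
        elif in_motion and gamma < t_end:
            in_motion = False
            yield t + k, "end"
```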
Specifically, inferring the gesture motion direction with the forward and backward algorithms of the HMM in step f is: from the visible state sequence of the HMM, compute the probability of each motion model using the state transition probabilities and observation probabilities of the HMM parameters, and take the motion model corresponding to the maximum motion model probability as the gesture motion direction of the whole system.
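Illustratively, with one trained model per direction this decision reduces to an argmax over forward-algorithm likelihoods; the direction names and the (A, B, pi) packaging are assumptions, and observation codes are assumed remapped to non-negative column indices:

```python
import numpy as np

def forward(obs, A, B, pi):
    """P(O | lambda) via the HMM forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def classify_direction(obs, models):
    """models: dict mapping a direction name to its trained (A, B, pi)."""
    scores = {name: forward(obs, *lam) for name, lam in models.items()}
    return max(scores, key=scores.get)     # motion model with maximum probability
```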
Specifically, training the HMM parameters in step f comprises:
f1. initializing the model parameters to be trained, setting each state transition matrix of the parameters to be trained according to the action codes;
f2. training new model parameters from the parameters to be trained and the observation sequence of HMM visible states: if the log-probability of obtaining the observation sequence under the new model minus the log-probability of obtaining the observation sequence under the model to be trained is < delta, the training ends; otherwise step f2 is repeated.
Preferably, in step d the gesture motion is computed for the four directions up, down, left, and right.
Preferably, to overcome motion artifacts, the middle 6~8 codes of the obtained coded sequence are taken in step f as the visible state sequence of the HMM; taking 8 codes is more preferable.
Tests show that the method of the invention for determining gesture motion direction based on an HMM, by applying multiple decision conditions to the HMM judgment of the gesture motion direction, improves the accuracy of the gesture motion determination and markedly reduces the interference of various factors on the result.
The foregoing content of the invention is described in further detail below through embodiments illustrated by the accompanying drawing. This should not be interpreted as limiting the scope of the above subject matter of the invention to the following examples; various substitutions and changes made on the basis of ordinary technical knowledge and customary means, without departing from the idea of the invention described above, all fall within the scope of the invention.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention for determining gesture motion direction based on an HMM.
Embodiment
As shown in Fig. 1, the method of the invention for determining gesture motion direction based on an HMM comprises:
a. Detect the human face through the camera to determine that a user has entered the system recognition range: a conventional Adaboost detector is applied to the current frame f(x, y, t) to obtain the number of faces numf; if numf > 0 persists near a certain position for more than 2 seconds, a user is considered to have entered the scene;
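A minimal sketch of this detection step (illustrative; OpenCV's bundled Haar-cascade face detector is an Adaboost-based detector, and the 25 fps figure and persistence bookkeeping are assumptions — position stability is omitted for brevity):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def user_present(gray_frames, fps=25):
    """True once at least one face persists for more than 2 seconds of frames."""
    consecutive = 0
    for gray in gray_frames:
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        consecutive = consecutive + 1 if len(faces) > 0 else 0
        if consecutive > 2 * fps:
            return True
    return False
```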
b. Obtain the current motion image from the video stream and build a skin-color similarity model as the basis for skin segmentation during gesture motion. In the HSV (hue, saturation, value) color space, a large amount of skin and non-skin color information is collected to build a normalized H-S (hue, saturation) skin-color histogram, which is used for the skin segmentation of subsequent video frames. With the threshold probability of skin segmentation set to 0.85, the skin-color probability map of the current frame is

$$\mathrm{Skin}(x,y,t)=\begin{cases}\mathrm{Skin}(x,y,t), & \mathrm{Skin}(x,y,t)\ge 0.85\\ 0, & \text{otherwise.}\end{cases}$$

The skin-color probability map is multiplied by the frame-difference result, and the product is filtered with a median filter to obtain the current gesture-region grayscale map $D(x,y,t)$. The frame difference is computed as the absolute difference between consecutive frames,

$$d(x,y,t)=\lvert f(x,y,t)-f(x,y,t-1)\rvert.$$

Because a fixed denoising threshold produces very different results in different environments (for example under different illumination), the threshold $T$ used in the computation is set adaptively from the image, and the binarized frame difference is

$$D_{\mathrm{diff}}(x,y,t)=\begin{cases}1, & d(x,y,t)\ge T\\ 0, & \text{otherwise;}\end{cases}$$
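A sketch of building the normalized H-S histogram from labeled skin patches (the helper name, input format, and bin counts are assumptions):

```python
import cv2
import numpy as np

def build_skin_hist(skin_patches, h_bins=30, s_bins=32):
    """Accumulate a normalized H-S histogram from BGR image patches of skin."""
    hist = np.zeros((h_bins, s_bins), dtype=np.float32)
    for patch in skin_patches:
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        hist += cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins],
                             [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)  # scale for back-projection
    return hist
```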
c. Update the gesture motion history image (MHI) from the current gesture-region grayscale map:

$$H_\tau(x,y,t)=\begin{cases}\tau, & D(x,y,t)\ge\xi\\ \max\bigl(0,\ H_\tau(x,y,t-1)-\delta\bigr), & \text{otherwise}\end{cases}$$

where $H_\tau(x,y,t)$ is the current gesture motion history image, $D(x,y,t)$ is the current gesture-region grayscale map, $\tau$ is the parameter describing the time range of the motion, $\delta=0.5$ is the attenuation parameter of the gesture motion history image, and $\xi=10$ is the threshold for binary segmentation of the gesture region.
From the gesture motion history image, obtain the motion energy image (MEI):

$$E_\tau(x,y,t)=\begin{cases}1, & H_\tau(x,y,t)>0\\ 0, & \text{otherwise}\end{cases}$$

where $E_\tau(x,y,t)$ is the motion energy image, $\tau$ is the parameter describing the time range of the motion, $x$ and $y$ are the horizontal and vertical coordinates of the motion energy image, and $t$ is the time instant;
d. Perform image gradient vector computation on the valid gesture motion history image to obtain the gesture motion vectors and their horizontal angles, and map each angle, over the four directions up, down, left, and right, to the code of the corresponding region in a partition of the 360-degree plane. Specifically:
For the valid gesture motion history image, compute the vector angle of the image and encode the value according to the required partition, yielding a decision among the four directions up, down, left, and right: angles in $(-50^\circ,50^\circ)$ are encoded as "0", $(50^\circ,140^\circ)$ as "1", $(140^\circ,230^\circ)$ as "2", and $(230^\circ,310^\circ)$ as "3"; values not in these intervals are assigned "-1". One gesture-motion video segment is analyzed and this decision is made for the motion history image built from every frame, producing the visible state sequence $O=(o_1,o_2,\ldots,o_N)$ used for HMM inference, where $o_i\in\{-1,0,1,2,3\}$, $i=1,\ldots,N$, $N=6\sim8$;
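A sketch of this angle-to-code mapping (the wrap-around of $(-50^\circ,50^\circ)$ across 0° is an assumption about the angle convention):

```python
def encode_angle(angle_deg):
    """Map a motion-vector horizontal angle in degrees to a direction code."""
    a = angle_deg % 360
    if a > 310 or a < 50:      # (-50, 50) wraps around zero
        return 0
    if 50 < a < 140:
        return 1
    if 140 < a < 230:
        return 2
    if 230 < a < 310:
        return 3
    return -1                  # exact boundary values stay unclassified
```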
e. Obtain the start and end times of the gesture motion from the motion energy image: compute the difference $\Gamma_{t+k}=s_{t+k}-s_t$ between the accumulated motion energy at two different times, where $s_t=\sum_x\sum_y E_\tau(x,y,t)$ is the sum of the motion energy image $E_\tau(x,y,t)$ up to time $t$, and $s_{t+k}$ is the corresponding sum up to time $t+k$. If $\Gamma_{t+k}>T_1$ ($T_1=400$), the gesture motion is judged to have started; if $\Gamma_{t+k}<T_2$ ($T_2=20$), the gesture motion is judged to have ended, and the gesture motion is thereby segmented out;
f. From the analysis of one video segment by steps a~e, obtain the coded sequence of one motion direction of that video. To overcome motion artifacts, trim both ends of this coded sequence and keep the middle 6~8 codes; the intercepted sub-sequence serves as the visible state sequence $O=(o_1,o_2,\ldots,o_N)$ of the HMM, where $o_i\in\{-1,0,1,2,3\}$, $i=1,\ldots,N$, $N=6\sim8$. The Baum-Welch algorithm is used to train the HMM parameters $\lambda=(A,B,\pi)$ so as to maximize $P(O\mid\lambda)$, where $A$ is the transition matrix between hidden states of the HMM, $B$ is the visible-state probability matrix, $\pi$ is the initial-state probability matrix, and $P(O\mid\lambda)$ is the probability of obtaining the visible state sequence $O=(o_1,o_2,\ldots,o_N)$ under the model $\lambda=(A,B,\pi)$:
f1. Initialize the model parameters $\lambda_0$ to be trained, setting each matrix $(A,B,\pi)$ of $\lambda_0$ according to the action codes;
f2. Train new model parameters $\lambda$ from $\lambda_0$ and the observation sequence $O$ of HMM visible states: if the log-probability of obtaining the observation sequence under the new model minus the log-probability under the model to be trained is < delta, i.e. $\log P(O\mid\lambda)-\log P(O\mid\lambda_0)<\text{delta}$, the training ends; otherwise step f2 is repeated.
Specifically: given the model parameters $\lambda_0$ to be trained and the observation sequence $O$, the transition probability from the current hidden state $i$ to the next hidden state $j$ is

$$\xi_t(i,j)=P(s_t=i,\ s_{t+1}=j\mid O,\lambda)=\frac{\alpha_t(i)\,a_{ij}\,b_j(o_{t+1})\,\beta_{t+1}(j)}{P(O\mid\lambda)},$$

where $\alpha_t(i)$ is the probability of being in hidden state $i$ at time $t$ having produced the first $t$ symbols of the visible state sequence $O=(o_1,o_2,\ldots,o_t)$; $a_{ij}$ is the probability of moving from hidden state $i$ to hidden state $j$ in the HMM; $b_j(o_{t+1})$ is the probability of observing symbol $o_{t+1}$ in hidden state $j$; and $\beta_{t+1}(j)$ is the probability that the hidden Markov chain is in state $j$ at time $t+1$ and will produce the target sequence after time $t+1$ (the time range $t+1\to N$, $N=6\sim8$). Let $\gamma_t(i)=\sum_j\xi_t(i,j)$ denote the probability of being in hidden state $S_i$ at time $t$; then $\sum_t\gamma_t(i)$ is the expected number of occurrences of state $S_i$ over the whole process, and $\sum_t\xi_t(i,j)$ is the expected number of transitions from hidden state $S_i$ to hidden state $S_j$. The estimates of the new model parameters $\lambda$ are therefore

$$\hat a_{ij}=\frac{\sum_t\xi_t(i,j)}{\sum_t\sum_j\xi_t(i,j)},\qquad \hat b_j(k)=\frac{\sum_{t:\,o_t=k}\gamma_t(j)}{\sum_t\gamma_t(j)},\qquad \hat\pi_i=\gamma_1(i),$$

where $\pi_i$ is the probability that the initial state is $i$.
From the visible state sequence $O=(o_1,o_2,\ldots,o_N)$ of the HMM, the forward and backward algorithms of the HMM compute the motion model probabilities from the state transition probabilities and observation probabilities of the HMM parameters $\lambda=(A,B,\pi)$; the motion model corresponding to the maximum motion model probability gives the gesture motion direction of the whole system, and the direction of the gesture motion is thereby inferred.
The forward computation of the HMM is:
On the basis of the motion-angle observation sequence $O=(o_1,o_2,\ldots,o_N)$, define the forward variable $\alpha_t(i)=P(o_1,o_2,\ldots,o_t,\ q_t=i\mid\lambda)$, $1\le t\le N$ ($N$ is the length of the observation sequence, with values 6~8; $P(o_1,o_2,\ldots,o_t,\ q_t=i\mid\lambda)$ is the probability that the current new model parameters $\lambda$ produce the sequence $O'=(o_1,o_2,\ldots,o_t)$ and end in state $i$). Initialization of the forward variable: $\alpha_1(i)=\pi_i\,b_i(o_1)$. Recursion:

$$\alpha_{t+1}(j)=\Bigl[\sum_i\alpha_t(i)\,a_{ij}\Bigr]b_j(o_{t+1}),\qquad 1\le t\le N-1,$$

summing over all hidden states $i$, where $\alpha_{t+1}(j)$ is the probability at time $t+1$ that the new model parameters $\lambda$ produce the sequence $O'$ and end in state $j$, $b_i(o_1)$ is the probability of producing the observed value $o_1$ in state $i$, $\pi_i$ is the probability that the initial state is $i$, and $\alpha_t(i)\,a_{ij}$ is the probability of currently being in state $i$ and moving to state $j$. Termination:

$$P(O\mid\lambda)=\sum_i\alpha_N(i),$$

where $\alpha_N(i)$ is the probability at the final time $N$ that the model parameters $\lambda$ produce the sequence $O$.
The backward computation of the HMM is:
On the basis of the motion-angle observation sequence $O=(o_1,o_2,\ldots,o_N)$, define the backward variable $\beta_t(i)=P(o_{t+1},o_{t+2},\ldots,o_N\mid q_t=i,\lambda)$, $1\le t\le N-1$ ($N$ is the length of the observation sequence, with values 6~8; this is the probability that the given model parameters $\lambda$ produce the remaining sequence $(o_{t+1},o_{t+2},\ldots,o_N)$ from state $i$ at time $t$). Initialization of the backward variable: $\beta_N(i)=1$. Recursion:

$$\beta_t(i)=\sum_j a_{ij}\,b_j(o_{t+1})\,\beta_{t+1}(j),\qquad t=N-1,N-2,\ldots,1,$$

summing over all hidden states $j$. Termination:

$$P(O\mid\lambda)=\sum_i\pi_i\,b_i(o_1)\,\beta_1(i),$$

where $\beta$ denotes the probability that the model parameters $\lambda$ produce the remaining part of the sequence $O$ from the corresponding time onward.

Claims (5)

1. A method for determining gesture motion direction based on a hidden Markov model (HMM), characterized by comprising:
a. detecting the human face through the camera to determine that a user enters the system recognition range;
b. obtaining the current motion image from the video stream, building a skin-color similarity model as the basis for skin segmentation during gesture motion, multiplying the skin-color probability map by the frame-difference result, and filtering the product with a median filter to obtain the gesture-region grayscale map;
c. updating the gesture motion history image from the current gesture-region grayscale map, and obtaining the motion energy image;
the method of updating the gesture motion history image being:

$$H_\tau(x,y,t)=\begin{cases}\tau, & D(x,y,t)\ge\xi\\ \max\bigl(0,\ H_\tau(x,y,t-1)-\delta\bigr), & \text{otherwise}\end{cases}$$

where $H_\tau(x,y,t)$ is the current gesture motion history image, $D(x,y,t)$ is the current gesture-region grayscale map, $\tau$ is the parameter describing the time range of the motion, $\delta$ is the attenuation parameter of the gesture motion history image, and $\xi$ is the threshold for binary segmentation of the gesture region;
the method of obtaining the motion energy image being:

$$E_\tau(x,y,t)=\begin{cases}1, & H_\tau(x,y,t)>0\\ 0, & \text{otherwise}\end{cases}$$

where $E_\tau(x,y,t)$ is the motion energy image, $\tau$ is the parameter describing the time range of the motion, $x$ and $y$ are the horizontal and vertical coordinates of the motion energy image, and $t$ is the time instant;
d. performing image gradient vector computation on the valid gesture motion history image to obtain the gesture motion vectors and their horizontal angles, then mapping each angle to the code of the corresponding region in a partition of the 360-degree plane;
e. obtaining the start and end times of the gesture motion from the motion energy image: computing the difference between the accumulated motion energy images at two different times, wherein if said difference is > 400 the gesture motion starts, and if said difference is < 20 the gesture motion ends; and segmenting out the gesture motion;
f. from the analysis of one video segment by steps a~e, obtaining the coded sequence of one motion direction of that video, taking said coded sequence as the visible state sequence of the HMM, training the HMM parameters, and inferring the gesture motion direction with the forward and backward algorithms of the HMM: from the visible state sequence of said HMM, computing the motion model probabilities from the state transition probabilities and observation probabilities of the HMM parameters, and judging the gesture motion direction of the whole system by the motion model corresponding to the maximum motion model probability.
2. The method for determining gesture motion direction based on a hidden Markov model as claimed in claim 1, characterized in that said attenuation parameter $\delta=0.5$ and said threshold $\xi=10$.
3. The method for determining gesture motion direction based on a hidden Markov model as claimed in claim 1, characterized in that training the HMM parameters in step f comprises:
f1. initializing the model parameters to be trained, setting each state transition matrix of the parameters to be trained according to the action codes;
f2. training new model parameters from the parameters to be trained and the observation sequence of HMM visible states: if the log-probability of obtaining said observation sequence under the new model minus the log-probability of obtaining said observation sequence under the model to be trained is < delta, ending the training; otherwise repeating step f2.
4. The method for determining gesture motion direction based on a hidden Markov model as claimed in any one of claims 1 to 3, characterized in that in step d the gesture motion is computed for the four directions up, down, left, and right.
5. The method for determining gesture motion direction based on a hidden Markov model as claimed in any one of claims 1 to 3, characterized in that the middle 6~8 codes of the obtained coded sequence are taken in step f as the visible state sequence of the HMM.
CN201110429580.2A 2011-12-20 2011-12-20 Method for determining gesture moving direction based on hidden Markov model Active CN102592112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110429580.2A CN102592112B (en) 2011-12-20 2011-12-20 Method for determining gesture moving direction based on hidden Markov model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110429580.2A CN102592112B (en) 2011-12-20 2011-12-20 Method for determining gesture moving direction based on hidden Markov model

Publications (2)

Publication Number Publication Date
CN102592112A CN102592112A (en) 2012-07-18
CN102592112B true CN102592112B (en) 2014-01-29

Family

ID=46480727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110429580.2A Active CN102592112B (en) 2011-12-20 2011-12-20 Method for determining gesture moving direction based on hidden Markov model

Country Status (1)

Country Link
CN (1) CN102592112B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11783707B2 (en) 2018-10-09 2023-10-10 Ford Global Technologies, Llc Vehicle path planning

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226692B (en) * 2012-11-22 2016-01-20 广东科学中心 A kind of recognition system of video stream image frame and method thereof
CN103065132B (en) * 2013-01-14 2015-12-23 公安部第三研究所 The sleet infomation detection control method in video is realized based on Bayes's space-time model
CN103152060B (en) * 2013-01-17 2016-11-09 中山大学 A kind of packet markov supercomposed coding method
CN103971109A (en) * 2013-02-04 2014-08-06 佳能株式会社 Method and equipment used for extracting features and recognizing tracks
US9625997B2 (en) * 2013-07-19 2017-04-18 Microchip Technology Incorporated Human interface device and method
US9898689B2 (en) * 2014-11-06 2018-02-20 Qualcomm Incorporated Nonparametric model for detection of spatially diverse temporal patterns
CN104700088B (en) * 2015-03-23 2017-11-24 南京航空航天大学 A kind of gesture track recognition method under the follow shot based on monocular vision
CN105956551B (en) * 2016-04-28 2018-01-30 深圳市鼎盛智能科技有限公司 Object detection method and device
CN109102511B (en) * 2018-07-06 2021-05-25 深圳先进技术研究院 Cerebrovascular segmentation method, system and electronic equipment
US11030774B2 (en) 2019-03-19 2021-06-08 Ford Global Technologies, Llc Vehicle object tracking
US11460851B2 (en) 2019-05-24 2022-10-04 Ford Global Technologies, Llc Eccentricity image fusion
CN110188710B (en) * 2019-06-03 2021-05-04 石家庄铁道大学 Method for identifying dynamic behavior of train driver
US11521494B2 (en) 2019-06-11 2022-12-06 Ford Global Technologies, Llc Vehicle eccentricity mapping
US11662741B2 (en) 2019-06-28 2023-05-30 Ford Global Technologies, Llc Vehicle visual odometry
CN110555391B (en) * 2019-08-09 2022-04-05 上海交通大学 Intelligent wireless operating system and method based on grating diffraction and gesture recognition
CN112019892A (en) * 2020-07-23 2020-12-01 深圳市玩瞳科技有限公司 Behavior identification method, device and system for separating client and server
CN112347991B (en) * 2020-11-30 2022-04-01 北京理工大学 Method for analyzing skiing motion sequence based on hidden Markov
CN112734795A (en) * 2020-12-31 2021-04-30 北京深睿博联科技有限责任公司 Method and equipment for judging motion trend and direction of object

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101741952A (en) * 2009-12-10 2010-06-16 中国科学技术大学 Mobile phone interactive system for blind and device thereof
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN102136066A (en) * 2011-04-29 2011-07-27 电子科技大学 Method for recognizing human motion in video sequence

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763515A (en) * 2009-09-23 2010-06-30 中国科学院自动化研究所 Real-time gesture interaction method based on computer vision
CN101741952A (en) * 2009-12-10 2010-06-16 中国科学技术大学 Mobile phone interactive system for blind and device thereof
CN102136066A (en) * 2011-04-29 2011-07-27 电子科技大学 Method for recognizing human motion in video sequence

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11783707B2 (en) 2018-10-09 2023-10-10 Ford Global Technologies, Llc Vehicle path planning

Also Published As

Publication number Publication date
CN102592112A (en) 2012-07-18

Similar Documents

Publication Publication Date Title
CN102592112B (en) Method for determining gesture moving direction based on hidden Markov model
CN107784293B A human behavior recognition method based on global features and sparse representation classification
CN107967695B (en) A kind of moving target detecting method based on depth light stream and morphological method
CN101996401B (en) Target analysis method and apparatus based on intensity image and depth image
CN106023220A (en) Vehicle exterior part image segmentation method based on deep learning
CN104574439A (en) Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method
CN106169188A (en) A kind of method for tracing object based on the search of Monte Carlo tree
CN105139004A (en) Face expression identification method based on video sequences
CN104598915A (en) Gesture recognition method and gesture recognition device
CN111488773A (en) Action recognition method, device, equipment and storage medium
CN105981075A (en) Efficient facial landmark tracking using online shape regression method
CN106407891A (en) Target matching method based on convolutional neural network and device
CN101216885A (en) Passerby face detection and tracing algorithm based on video
CN104834916A (en) Multi-face detecting and tracking method
CN104038799A (en) Three-dimensional television-oriented gesture manipulation method
CN103336967B (en) A kind of hand motion trail detection and device
CN109344822A (en) A kind of scene text detection method based on shot and long term memory network
CN103902989A (en) Human body motion video recognition method based on non-negative matrix factorization
CN109033955A (en) A kind of face tracking method and system
CN109740609A (en) A kind of gauge detection method and device
CN109508686A A human behavior recognition method based on hierarchical feature subspace learning
CN103456030A (en) Target tracking method based on scattering descriptor
CN104778670A (en) Fractal-wavelet self-adaption image denoising method based on multivariate statistical model
CN105303163A (en) Method and detection device for target detection
CN104778459A (en) Feature fusion method for actions of multiple athletes in football match video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant