CN102592112A - Method for determining gesture moving direction based on hidden Markov model - Google Patents
- Publication number: CN102592112A (application CN201110429580A; granted as CN102592112B)
- Authority
- CN
- China
- Prior art keywords
- gesture
- hmm
- gesture motion
- probability
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention relates to a method for determining gesture motion direction based on a hidden Markov model (HMM). The method comprises the following steps: a. recognizing the human face; b. acquiring a gesture-area grayscale map through multi-cue fusion; c. updating the gesture motion history image and deriving the motion energy image from the gesture-area grayscale map; d. dividing the horizontal angles of the gradient vectors into coded sectors; e. obtaining the start moment and the end moment of the gesture motion from the motion energy image and segmenting the gesture motion; and f. training the HMM parameters and inferring the gesture motion direction with the forward and backward algorithms of the HMM. Because the direction is judged with multiple decision conditions, the method improves the accuracy of gesture motion determination and markedly reduces the interference of various factors on the result.
Description
Technical field
The present invention relates to the field of video image processing, and specifically to a method for determining gesture motion direction based on a hidden Markov model (HMM).
Background technology
In recent years, with the rapidly growing influence of computers on modern society, multi-modal human-computer interaction has found ever wider application in everyday life. Vision-based gesture recognition has become an indispensable technology for the next generation of human-computer interaction.

Gestures are a natural, intuitive and easy-to-learn means of human-computer interaction. With the hand used directly as the computer's input device, communication between humans and machines no longer needs an intermediate medium. Computer-based gesture recognition is a promising recognition technology, but many technical difficulties remain: gestures must be extracted from a video stream; the hand is an elastic (non-rigid) object; under complex conditions (interference from the user's own body, illumination, background clutter) the hand carries a great deal of redundant characteristic information (such as palm prints); and gestures must be recognized in three-dimensional space. These problems have not yet been fully solved, and practical implementations require additional restrictive conditions. Many researchers are currently working on overcoming these technical obstacles so that gesture recognition can become a mainstream interaction mode.

At present, video streams for gesture recognition are captured either with 2D cameras or with 3D cameras. A 3D camera captures three-dimensional gesture information, but its high cost hinders widespread adoption. Most current work is therefore devoted to gesture recognition with inexpensive USB 2D cameras, with positive results.

Existing methods for determining gesture motion direction rely mainly on single-cue gesture tracking, judging the direction of motion from the computed hand displacement. However, the hand is a non-rigid body, and gesture tracking is generally based on skin colour, so such tracking-based methods are unstable: they are susceptible to interference from ambient illumination and background colours and often fail to form a correct trajectory.
Summary of the invention
To address the problems above, the invention provides a method for determining gesture motion direction based on a hidden Markov model. The direction is judged with multiple decision conditions, which improves the accuracy of the determination and reduces the interference of various factors on the result.

The method of the present invention for determining gesture motion direction based on an HMM comprises:
a. detecting the human face through a camera to confirm that a user has entered the recognition range of the system; a conventional Adaboost detector can be used for the face detection;
b. obtaining the current moving image from the video stream and building a skin-colour probability model as the basis for skin-colour segmentation during the gesture motion; the product of the skin-colour probability map and the frame-difference result is passed through a median filter to obtain the gesture-area grayscale map;
c. updating the gesture motion history image from the current gesture-area grayscale map, and deriving the motion energy image;
d. computing the image gradient vectors of the valid gesture motion history image to obtain the gesture motion vectors and their horizontal angles, and mapping each horizontal angle to the code of the corresponding sector in a division of the 360-degree plane;
e. obtaining the start moment and the end moment of the gesture motion from the motion energy image, and segmenting the gesture motion;
f. analysing a video segment with steps a to e to obtain the coded sequence of one motion direction in the video, taking this coded sequence as the visible-state sequence of the HMM, training the HMM parameters, and inferring the gesture motion direction with the forward and backward algorithms of the HMM.
The method of the present invention does not depend on gesture tracking. Instead, it fuses the skin-colour information of the hand with the motion information of the gesture (the frame difference), which allows the gesture to be segmented more accurately. On this basis the gesture motion history image and the motion energy image are constructed; the gesture motion is segmented adaptively according to the motion characteristics of the hand; the sequence of orientation-angle features of the gesture motion within the segmented interval is computed; and this angle-feature sequence is used to construct the HMM. The forward and backward algorithms of the model then infer the most probable gesture motion direction, thereby determining the gesture motion.
In one concrete scheme, the method of updating the gesture motion history image in step c is:

    H_τ(x, y, t) = τ,                          if ψ(x, y, t) > ξ
    H_τ(x, y, t) = max(0, H_τ(x, y, t−1) − δ), otherwise

where H_τ(x, y, t) is the current gesture motion history image, ψ(x, y, t) is the current gesture-area grayscale map, τ is the parameter describing the time range of the motion, δ is the decay parameter of the motion history image, and ξ is the threshold for binarizing the gesture area.

Through extensive simulation and testing, for good motion determination the preferred decay parameter is δ = 0.5 and the preferred threshold is ξ = 10.
In another concrete scheme, the method of obtaining the motion energy image in step c is:

    E_τ(x, y, t) = 1 if H_τ(x, y, t) > 0, and E_τ(x, y, t) = 0 otherwise

where E_τ(x, y, t) is the motion energy image, τ is the parameter describing the time range of the motion, x and y are the horizontal and vertical coordinates of the motion energy image, and t is the time.
Further, the method in step e of obtaining the start moment and the end moment of the gesture motion from the motion energy image is: compute the difference between the accumulated sums of the motion energy images at two different moments; if the difference exceeds 400, the gesture motion is judged to begin, and if the difference falls below 20, the gesture motion is judged to end. The values 400 and 20 were obtained statistically from a large number of experiments.
Concretely, inferring the gesture motion direction with the forward and backward algorithms of the HMM in step f is: from the visible-state sequence of the HMM, compute the probability of each motion model using the state-transition probabilities and observation probabilities of the HMM parameters, and judge the gesture motion direction of the whole system to be the direction corresponding to the motion model with the maximum probability.
Concretely, training the HMM parameters in step f comprises:
f1. initializing the model parameters to be trained, setting each state-transition matrix of the parameters according to the action coding;
f2. training new model parameters from the parameters to be trained and the observed sequence of HMM visible states: if the difference between the log-probability of the observed sequence under the new model and that under the previous model is smaller than a tolerance δ, training ends; otherwise step f2 is repeated.
Preferably, in step d the gesture motion is computed for the four directions up, down, left and right.
Preferably, to overcome motion noise, the middle 6 to 8 codes of the coded sequence obtained in step f are taken as the visible-state sequence of the HMM; taking 8 codes is further preferred.
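The truncation described above can be sketched as follows; the function name and the choice of centring the retained window are illustrative assumptions, not taken from the patent:

```python
def middle_codes(seq, n=8):
    """Keep the middle n direction codes of the per-frame coded sequence
    (n = 6..8 in the patent text, 8 preferred) to suppress motion noise
    at the start and end of the gesture."""
    if len(seq) <= n:
        return list(seq)
    start = (len(seq) - n) // 2          # centre the retained window
    return list(seq[start:start + n])
```

For example, a 12-frame coded sequence keeps its 8 central codes, discarding two noisy codes at each end.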
Tests show that the method of the present invention for determining gesture motion direction based on an HMM, by judging the direction with multiple decision conditions, improves the accuracy of gesture motion determination and markedly reduces the interference of various factors on the result.

The above content of the present invention is described in further detail below through embodiments illustrated by the accompanying drawing. This should not be construed as limiting the scope of the subject matter of the invention to the following examples; all substitutions or modifications made on the basis of common technical knowledge and customary means, without departing from the technical idea of the invention described above, fall within the scope of the invention.
Description of drawings
Fig. 1 is a flowchart of the method of the present invention for determining gesture motion direction based on an HMM.
Embodiment
As shown in Fig. 1, the method of the present invention for determining gesture motion direction based on an HMM comprises:

a. detecting the human face through a camera to confirm that a user has entered the recognition range of the system: a conventional Adaboost detector detects the number of faces numf in the current frame f(x, y, t); if numf > 0 persists near a fixed position for more than 2 seconds, a user is judged to have entered the scene;
b. obtaining the current moving image from the video stream and building a skin-colour probability model as the basis for skin-colour segmentation during the gesture motion: in the HSV (hue, saturation, value) colour space, a large amount of skin and non-skin colour information is collected to build a normalized H-S (hue, saturation) skin-colour histogram, which is used for skin-colour segmentation of the subsequent video frames; the threshold probability for skin-colour segmentation is set to 0.85, giving the skin-colour probability map of the current frame.

The product of the skin-colour probability map and the frame-difference result is passed through a median filter to obtain the current gesture-area grayscale map.

The frame difference is computed as the absolute difference between the current frame and the previous frame, d(x, y, t) = |f(x, y, t) − f(x, y, t−1)|. Because the denoising threshold strongly affects the result in different environments (for example under different illumination), a threshold is applied in the computation: values of d below the threshold are set to zero, and the remainder constitutes the frame-difference result.
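Step b can be sketched as below. This is a minimal illustration assuming 8-bit grayscale frames and a precomputed skin-probability map; the threshold value, kernel size and function names are illustrative assumptions (a real implementation would normally use OpenCV's median blur rather than the naive filter shown):

```python
import numpy as np

def median_filter_2d(img, k):
    """Naive k x k median filter with edge padding (stand-in for cv2.medianBlur)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def gesture_gray_map(frame, prev_frame, skin_prob, diff_thresh=15, ksize=3):
    """Fuse the skin-colour probability map with the thresholded inter-frame
    difference, then median-filter the product to get the gesture-area
    grayscale map. diff_thresh=15 and ksize=3 are illustrative values."""
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    diff = np.where(diff > diff_thresh, diff, 0).astype(np.float64)  # denoise
    fused = skin_prob * diff          # pixel-wise product of the two cues
    return median_filter_2d(fused, ksize)
```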
c. updating the gesture motion history image (MHI) from the current gesture-area grayscale map:

    H_τ(x, y, t) = τ,                          if ψ(x, y, t) > ξ
    H_τ(x, y, t) = max(0, H_τ(x, y, t−1) − δ), otherwise

where H_τ(x, y, t) is the current gesture motion history image, ψ(x, y, t) is the current gesture-area grayscale map, τ is the parameter describing the time range of the motion, δ = 0.5 is the decay parameter of the motion history image, and ξ = 10 is the threshold for binarizing the gesture area.

From the gesture motion history image, the motion energy image (MEI) is obtained:

    E_τ(x, y, t) = 1 if H_τ(x, y, t) > 0, and E_τ(x, y, t) = 0 otherwise

where E_τ(x, y, t) is the motion energy image, τ is the parameter describing the time range of the motion, x and y are the horizontal and vertical coordinates of the motion energy image, and t is the time;
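A minimal sketch of the MHI update and MEI derivation above, using the reconstructed update rule with δ = 0.5 and ξ = 10 from the patent text; τ = 255 and the function names are illustrative assumptions:

```python
import numpy as np

def update_mhi(mhi, gesture_gray, tau=255.0, delta=0.5, xi=10.0):
    """Update the motion history image H_tau: pixels where the gesture-area
    grayscale map exceeds the binarisation threshold xi are set to tau;
    all other pixels decay by delta, clamped at 0."""
    moving = gesture_gray > xi
    return np.where(moving, tau, np.maximum(mhi - delta, 0.0))

def motion_energy(mhi):
    """Motion energy image E_tau: 1 wherever the history image is non-zero
    (the binary union of recent motion), 0 elsewhere."""
    return (mhi > 0).astype(np.uint8)
```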
d. computing the image gradient vectors of the valid gesture motion history image to obtain the gesture motion vectors and their horizontal angles, and mapping each horizontal angle to the code of the corresponding sector in a division of the 360-degree plane covering the four directions up, down, left and right. Specifically:

For the valid gesture motion history image, the vector angle of the image is computed and its value is encoded by sector so that the four directions "up, down, left, right" can be judged: angles in (−50°, 50°) are encoded as "0", angles in (50°, 140°) as "1", angles in (140°, 230°) as "2", angles in (230°, 310°) as "3", and values falling in none of these intervals are assigned "−1". A segment of gesture motion video is analysed, and this judgement is made on the motion history image constructed for every frame, yielding the visible-state sequence O = (o_1, o_2, ..., o_N) used for HMM inference, where o_i ∈ {−1, 0, 1, 2, 3}, i = 1...N, N = 6~8;
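The sector coding of step d can be sketched as follows. The angle intervals follow the reconstructed values above; treating an absent gradient (no motion) as the "−1" code is an assumption for illustration:

```python
def direction_code(angle_deg):
    """Quantise a gradient-vector angle in degrees into the patent's four
    sector codes: (-50, 50) -> 0, (50, 140) -> 1, (140, 230) -> 2,
    (230, 310) -> 3. A missing angle (None) yields the -1 code."""
    if angle_deg is None:            # no valid gradient for this frame
        return -1
    a = angle_deg % 360.0
    if a >= 310.0 or a < 50.0:       # (-50, 50) wraps around 0 degrees
        return 0
    if a < 140.0:
        return 1
    if a < 230.0:
        return 2
    return 3                          # remaining sector (230, 310)
```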
e. obtaining the start moment and the end moment of the gesture motion from the motion energy image: compute the difference of the accumulated motion energy images at two different moments, Γ_{t+k} = s_{t+k} − s_t, where s_t = Σ_x Σ_y E_τ(x, y, t) is the sum of the motion energy image up to moment t and s_{t+k} is the sum of the motion energy image up to moment t+k. If Γ_{t+k} > T_1 (T_1 = 400), the gesture motion is judged to begin; if Γ_{t+k} < T_2 (T_2 = 20), the gesture motion is judged to end, and the gesture motion is thereby segmented;
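The segmentation rule of step e can be sketched as below, assuming the accumulated sums s_t are precomputed per frame and k = 1 frame; the function name and return format are illustrative:

```python
def segment_motion(energy_sums, t1=400, t2=20):
    """Detect gesture start/end from the running sums s_t of the motion
    energy image. Motion starts when Gamma = s_{t+k} - s_t exceeds
    T1 (= 400) and ends when it falls below T2 (= 20), the patent's
    experimentally chosen thresholds. Returns (start, end) frame pairs."""
    segments, start, moving = [], None, False
    for t in range(1, len(energy_sums)):
        gamma = energy_sums[t] - energy_sums[t - 1]   # k = 1 frame apart
        if not moving and gamma > t1:
            start, moving = t, True
        elif moving and gamma < t2:
            segments.append((start, t))
            moving = False
    return segments
```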
f. analysing a video segment with steps a to e to obtain the coded sequence of one motion direction of the video. To overcome motion noise, the head and tail of this coded sequence are removed and the middle 6 to 8 codes are retained; the retained sub-sequence is taken as the visible-state sequence O = (o_1, o_2, ..., o_N) of the HMM, where o_i ∈ {−1, 0, 1, 2, 3}, i = 1...N, N = 6~8. The Baum-Welch algorithm is used to train the HMM parameters λ = (A, B, π) so that P(O|λ) is maximized, where A is the transition matrix between the hidden states of the HMM, B is the visible-state (observation) probability matrix, π is the initial-state probability matrix, and P(O|λ) denotes the probability of obtaining the visible-state sequence O = (o_1, o_2, ..., o_N) under the model λ = (A, B, π):

f1. initialize the model parameters λ_0 to be trained, setting each matrix (A, B, π) of λ_0 according to the action coding;

f2. train new model parameters λ from λ_0 and the observed sequence O of HMM visible states: if the difference between the log-probabilities of the observed sequence under the new and previous models, log P(O|λ) − log P(O|λ_0), is smaller than a tolerance δ, training ends; otherwise step f2 is repeated.
Specifically: assume that under the model parameters λ_0 to be trained and the given observation sequence O, the probability of transiting from the current hidden state i to the next hidden state j is ξ_t(i, j) = P(s_t = i, s_{t+1} = j | O, λ), given by

    ξ_t(i, j) = α_t(i) a_ij b_j(o_{t+1}) β_{t+1}(j) / P(O|λ)

where α_t(i) denotes the probability of being in hidden state i at moment t and having produced the first t symbols of the visible-state sequence O = (o_1, o_2, ..., o_t); a_ij denotes the probability in the HMM of being in hidden state i and transiting to hidden state j; b_j(o_{t+1}) denotes the probability of observing symbol o_{t+1} in hidden state j; and β_{t+1}(j) denotes the probability that the hidden Markov chain is in hidden state j at moment t+1 and will produce the target sequence after moment t+1 (the time range from t+1 to N, N = 6~8). Let γ_t(i) = Σ_j ξ_t(i, j) denote the probability of being in hidden state S_i at moment t; then Σ_t γ_t(i) is the expected number of times state S_i occurs in the whole process, and Σ_t ξ_t(i, j) is the expected number of jumps from hidden state S_i to hidden state S_j. The re-estimated values of the new model parameters λ are therefore:

    π_i = γ_1(i)
    a_ij = Σ_t ξ_t(i, j) / Σ_t γ_t(i)
    b_j(k) = Σ_{t: o_t = k} γ_t(j) / Σ_t γ_t(j)

where π_i denotes the probability that the initial state is i.
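The re-estimation of steps f1–f2 can be sketched as follows. This is a generic discrete-HMM Baum-Welch implementation under the stopping rule described above, not the patent's exact code; numerical scaling is omitted for brevity and the function names are illustrative:

```python
import numpy as np

def forward(A, B, pi, obs):
    """alpha[t, i] = P(o_1..o_t, q_t = i | lambda)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    """beta[t, i] = P(o_{t+1}..o_T | q_t = i, lambda)."""
    T, N = len(obs), A.shape[0]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(A, B, pi, obs):
    """One re-estimation of lambda = (A, B, pi) from a single sequence."""
    T, N = len(obs), len(pi)
    alpha, beta = forward(A, B, pi, obs), backward(A, B, obs)
    gamma = alpha * beta                       # gamma_t(i) up to a constant
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):                     # xi_t(i,j) per the formula above
        x = alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1]
        xi[t] = x / x.sum()
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi

def train(A, B, pi, obs, tol=1e-4, max_iter=100):
    """Iterate until |log P(O|new) - log P(O|old)| < tol, per step f2."""
    obs = np.asarray(obs)
    ll = np.log(forward(A, B, pi, obs)[-1].sum())
    for _ in range(max_iter):
        A, B, pi = baum_welch_step(A, B, pi, obs)
        new_ll = np.log(forward(A, B, pi, obs)[-1].sum())
        if abs(new_ll - ll) < tol:
            break
        ll = new_ll
    return A, B, pi
```

Each iteration is guaranteed not to decrease P(O|λ), which is what makes the log-likelihood stopping rule of step f2 well-defined.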
From the visible-state sequence O = (o_1, o_2, ..., o_N) of the HMM, the forward and backward algorithms of the HMM are used to compute the probability of each motion model from the state-transition probabilities and observation probabilities of the HMM parameters λ = (A, B, π); the gesture motion direction of the whole system is judged to be the direction corresponding to the motion model with the maximum probability.
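The decision rule just described can be sketched as below: one trained HMM per direction, with the direction whose model assigns the observed code sequence the highest probability winning. The per-direction model names are illustrative assumptions:

```python
import numpy as np

def seq_likelihood(A, B, pi, obs):
    """P(O | lambda) via the forward recursion: alpha_1(i) = pi_i b_i(o_1),
    alpha_{t+1}(j) = (sum_i alpha_t(i) a_ij) b_j(o_{t+1}),
    and finally P(O|lambda) = sum_i alpha_T(i)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def classify_direction(models, obs):
    """Pick the direction whose trained HMM gives the observed direction-code
    sequence the highest probability. `models` maps a direction name to its
    (A, B, pi) triple."""
    return max(models, key=lambda d: seq_likelihood(*models[d], obs))
```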
The forward computation of the HMM is as follows:

On the basis of the motion-angle observation sequence O = (o_1, o_2, ..., o_N), define the forward variable α_t(i) = P(o_1, o_2, ..., o_t, q_t = θ_i | λ), 1 ≤ t ≤ N, where N denotes the length of the observation sequence, with values 6~8, and P(o_1, ..., o_t, q_t = θ_i | λ) denotes the probability that the current model parameters λ produce the partial sequence O′ = (o_1, o_2, ..., o_t) and end in state θ_i.

Initialization of the forward variable: α_1(i) = π_i b_i(o_1).

Recursion:

    α_{t+1}(j) = [Σ_i α_t(i) a_ij] b_j(o_{t+1}), 1 ≤ t ≤ N−1

for each hidden state j, where α_{t+1}(j) denotes the probability that the model parameters λ produce the partial sequence O′ up to moment t+1 and end in state j, b_i(o_1) denotes the probability of producing observation o_1 in state i, π_i denotes the probability that the initial state is i, and α_t(i) a_ij denotes being in state i at moment t and jumping to state j.

Termination:

    P(O|λ) = Σ_i α_T(i)

where α_T(i) is the probability that the model parameters λ produce the sequence O′ at the final moment T.
The backward computation of the HMM is as follows:

On the basis of the motion-angle observation sequence O = (o_1, o_2, ..., o_N), define the backward variable β_t(i) = P(o_{t+1}, o_{t+2}, ..., o_N | q_t = θ_i, λ), 1 ≤ t ≤ N−1, denoting the probability that, given the model parameters λ and hidden state θ_i at moment t, the remaining sequence (o_{t+1}, ..., o_N) is produced.

Initialization of the backward variable: β_N(i) = 1.

Recursion:

    β_t(i) = Σ_j a_ij b_j(o_{t+1}) β_{t+1}(j), for t = N−1, N−2, ..., 1

for each hidden state i.

Termination:

    P(O|λ) = Σ_i π_i b_i(o_1) β_1(i)

where β denotes the probability that the model parameters λ produce the remainder of the sequence O from the corresponding moment onward.
Claims (9)

1. A method for determining gesture motion direction based on a hidden Markov model (HMM), characterized by comprising:
a. detecting the human face through a camera to confirm that a user has entered the recognition range of the system;
b. obtaining the current moving image from the video stream and building a skin-colour probability model as the basis for skin-colour segmentation during the gesture motion; the product of the skin-colour probability map and the video frame-difference result is passed through a median filter to obtain the gesture-area grayscale map;
c. updating the gesture motion history image from the current gesture-area grayscale map, and deriving the motion energy image;
d. computing the image gradient vectors of the valid gesture motion history image to obtain the gesture motion vectors and their horizontal angles, and mapping each horizontal angle to the code of the corresponding sector in a division of the 360-degree plane;
e. obtaining the start moment and the end moment of the gesture motion from the motion energy image, and segmenting the gesture motion;
f. analysing a video segment with steps a to e to obtain the coded sequence of one motion direction of the video, taking this coded sequence as the visible-state sequence of the HMM, training the HMM parameters, and inferring the gesture motion direction with the forward and backward algorithms of the HMM.
2. The method for determining gesture motion direction based on an HMM according to claim 1, characterized in that the method of updating the gesture motion history image in step c is:

    H_τ(x, y, t) = τ,                          if ψ(x, y, t) > ξ
    H_τ(x, y, t) = max(0, H_τ(x, y, t−1) − δ), otherwise

where H_τ(x, y, t) is the current gesture motion history image, ψ(x, y, t) is the current gesture-area grayscale map, τ is the parameter describing the time range of the motion, δ is the decay parameter of the motion history image, and ξ is the threshold for binarizing the gesture area.
3. The method for determining gesture motion direction based on an HMM according to claim 2, characterized in that the decay parameter δ = 0.5 and the threshold ξ = 10.
4. The method for determining gesture motion direction based on an HMM according to claim 1, characterized in that the method of obtaining the motion energy image in step c is:

    E_τ(x, y, t) = 1 if H_τ(x, y, t) > 0, and E_τ(x, y, t) = 0 otherwise

where E_τ(x, y, t) is the motion energy image, τ is the parameter describing the time range of the motion, x and y are the horizontal and vertical coordinates of the motion energy image, and t is the time.
5. The method for determining gesture motion direction based on an HMM according to claim 1, characterized in that the method in step e of obtaining the start moment and the end moment of the gesture motion from the motion energy image is: computing the difference between the accumulated sums of the motion energy images at two different moments; if the difference exceeds 400, the gesture motion begins, and if the difference falls below 20, the gesture motion ends.
6. The method for determining gesture motion direction based on an HMM according to claim 1, characterized in that inferring the gesture motion direction with the forward and backward algorithms of the HMM in step f is: from the visible-state sequence of the HMM, computing the probability of each motion model from the state-transition probabilities and observation probabilities of the HMM parameters, and judging the gesture motion direction of the whole system to be the direction corresponding to the motion model with the maximum probability.
7. The method for determining gesture motion direction based on an HMM according to claim 1, characterized in that training the HMM parameters in step f comprises:
f1. initializing the model parameters to be trained, setting each state-transition matrix of the parameters according to the action coding;
f2. training new model parameters from the parameters to be trained and the observed sequence of HMM visible states: if the difference between the log-probability of the observed sequence under the new model and that under the previous model is smaller than a tolerance δ, training ends; otherwise step f2 is repeated.
8. The method for determining gesture motion direction based on an HMM according to any one of claims 1 to 7, characterized in that in step d the gesture motion is computed for the four directions up, down, left and right.
9. The method for determining gesture motion direction based on an HMM according to any one of claims 1 to 7, characterized in that the middle 6 to 8 codes of the coded sequence obtained in step f are taken as the visible-state sequence of the HMM.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110429580.2A CN102592112B (en) | 2011-12-20 | 2011-12-20 | Method for determining gesture moving direction based on hidden Markov model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102592112A true CN102592112A (en) | 2012-07-18 |
CN102592112B CN102592112B (en) | 2014-01-29 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101741952A (en) * | 2009-12-10 | 2010-06-16 | 中国科学技术大学 | Mobile phone interactive system for blind and device thereof |
CN101763515A (en) * | 2009-09-23 | 2010-06-30 | 中国科学院自动化研究所 | Real-time gesture interaction method based on computer vision |
CN102136066A (en) * | 2011-04-29 | 2011-07-27 | 电子科技大学 | Method for recognizing human motion in video sequence |