CN103793056A - Mid-air gesture roaming control method based on distance vector - Google Patents

Mid-air gesture roaming control method based on distance vector Download PDF

Info

Publication number
CN103793056A
CN103793056A (application CN201410038474.5A / CN201410038474A)
Authority
CN
China
Prior art keywords
gesture
image
hand
area
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410038474.5A
Other languages
Chinese (zh)
Inventor
徐向民
邱福浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201410038474.5A priority Critical patent/CN103793056A/en
Publication of CN103793056A publication Critical patent/CN103793056A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a mid-air gesture roaming control method based on a distance vector. The method comprises the following steps: 1, a video image sequence is captured, analyzed and processed; 2, a five-finger-spread gesture and a fist gesture are detected in order to initialize a control area; 3, skin-color information in the region of interest is obtained; 4, motion information of the hand in the region of interest is obtained; 5, the position coordinates of the hand in each frame are computed from the skin-color information of step 3 and the motion information of step 4; 6, the movement direction and movement rate of the gesture in the interface are determined; 7, the gesture in the interface responds according to the direction and rate determined in step 6, achieving gesture roaming. The method enables full-interface operation within a small, comfortably reachable range, fast gesture roaming when the current gesture is far from the initial position, and precise gesture roaming when the current gesture is close to the initial position.

Description

Mid-air gesture roaming control method based on a distance vector
Technical field
The present invention relates to human-computer interaction technology, and in particular to a mid-air gesture roaming control method based on a distance vector.
Background technology
In daily life, gestures are a common way of expressing intent and have strong expressive power. Interaction is also central to existing human-computer interaction systems: mice, keyboards, remote controls, touch screens and the like are all components of conventional contact-based systems. Some emerging systems instead use a sensor such as an ordinary camera or a depth camera to capture the user's behavior within the sensor's field of view; through image processing, machine learning and pattern recognition, they identify and track the user's gestures, analyze the user's intent in the captured image sequence, and interact with the interface, realizing contactless gesture-based human-computer interaction.
Gesture roaming maps hand motion into the interface: the motion of the real hand controls the motion of the hand (cursor) in the interface, enabling operations such as selecting and browsing interface items. It is a key function of gesture-based interaction systems. The most common existing mapping is direct coordinate mapping: the coordinates of the gesture in the image sequence captured by the sensor, or the coordinates of the gesture within a "comfortable movement region" of that image sequence derived from prior knowledge, are mapped directly to gesture coordinates in the interface. For example, if each frame captured by the sensor is 640 x 480 pixels, the hand is at (200, 100), and the interface is 1280 x 720 pixels, then direct coordinate mapping places the gesture in the interface at (1280/640*200, 720/480*100) = (400, 150) pixels.
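The direct coordinate mapping described here is a single rescaling of the sensor coordinates; a minimal sketch using the example from the text (the function name is illustrative, not from the patent):

```python
def map_coords(gesture_xy, sensor_size, interface_size):
    """Directly map a gesture coordinate from the sensor image to the interface.

    gesture_xy     -- (x, y) position of the hand in the sensor image
    sensor_size    -- (width, height) of the sensor image
    interface_size -- (width, height) of the interface
    """
    gx, gy = gesture_xy
    sw, sh = sensor_size
    iw, ih = interface_size
    # Scale each axis independently by the ratio of interface to sensor size.
    return (iw / sw * gx, ih / sh * gy)

# The example from the text: a 640x480 sensor frame, a 1280x720 interface,
# and a hand at (200, 100) maps to (400, 150) in the interface.
print(map_coords((200, 100), (640, 480), (1280, 720)))  # (400.0, 150.0)
```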
This direct coordinate mapping uses only position information. During gesture roaming, when the user wants to select an item far from the current gesture coordinates, the hand must travel a large distance, increasing fatigue; when selecting an item close to the current coordinates, the limited precision of current technology often makes the item hard to select, or causes mis-selection. Both problems reduce the usability and practicality of gesture-based interaction systems.
Besides coordinate mapping, some inventions describe velocity (speed) mapping. Velocity mapping is a relative mapping: it computes the movement speed and direction of the gesture in the captured image sequence while ignoring its absolute position. According to a specific proportional relationship, the gesture in the interface then moves in the corresponding direction by a distance that depends on that speed.
Velocity mapping uses only the relative motion of the gesture: the difference between the absolute coordinates of the gesture in two consecutive captured frames. In practice, especially for a new user who cannot yet intuitively judge gesture speed and position, the target to be reached in the interface may end up outside the range the user's hand can physically cover. For example, when operating with the right hand stretched as far to the right as it can reach, the gesture in the interface may still be at the far left of the interface because velocity mapping was used; the user must then pull the hand back and start again. This reduces usability and lengthens the time needed to become familiar with, learn and adapt to the system.
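Velocity (relative) mapping as described above can be sketched as follows; the gain factor is an illustrative assumption, since the patent only states that a "specific proportionate relationship" is used:

```python
def velocity_mapping(curr_xy, prev_xy, gain=3.0):
    """Relative (velocity) mapping: the interface cursor moves by the
    frame-to-frame displacement of the hand, scaled by a fixed gain.
    The gain value of 3.0 is an illustrative assumption."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return (gain * dx, gain * dy)

# A hand moving 10 px right and 5 px down between frames moves the
# interface cursor by (30, 15) with this gain.
print(velocity_mapping((210, 105), (200, 100)))  # (30.0, 15.0)
```

Note that the result depends only on the displacement, not on where the hand is, which is exactly the property that lets the interface target drift out of the hand's physical reach.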
A new mid-air gesture roaming control method should therefore combine the advantages of these two mapping modes.
Summary of the invention
The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a mid-air gesture roaming control method based on a distance vector. This method solves two problems of direct coordinate mapping: the large hand movement required to select an item far from the current gesture coordinates, and the insufficient precision when selecting a nearby item. It also solves the problem of velocity mapping, in which the item the user wants to select may lie beyond the positions the user's hand can physically reach.
The object of the present invention is achieved through the following technical solution: a mid-air gesture roaming control method based on a distance vector, comprising the following steps:
Step 1, capture, analyze and process a video image sequence;
Step 2, detect the five-finger-spread gesture and the fist gesture, define the detected hand region as the region of interest, and record the initial position at which the user starts to control, so as to initialize the control area;
Step 3, apply a skin-color segmentation algorithm to the image within the region of interest to obtain the skin-color information in the region of interest;
Step 4, perform a difference operation on two adjacent frames within the region of interest to obtain the motion information of the hand in the region of interest;
Step 5, compute the position coordinates of the hand in each frame from the hand skin-color information of said step 3 and the motion information of said step 4;
Step 6, determine the movement direction and movement rate of the gesture in the interface;
Step 7, the gesture in the interface responds according to the direction and rate determined in step 6, performing gesture roaming.
Said step 2 comprises the following steps:
Step A, detect the five-finger-spread gesture and the fist gesture with fixed gesture-detection classifiers obtained by Adaboost training. The classifiers for the two gestures are each trained on a positive and a negative sample set; the positive sets contain gesture sample pictures under different backgrounds, different illumination conditions and from different people, and the negative sets likewise contain images under different backgrounds and illumination conditions, but without gestures;
Step B, extract Haar-like features from the sample images using the integral image. Each round of training yields a weak classifier with its own weight: weak classifiers with high discrimination receive larger weights, those with low discrimination smaller weights. After many rounds of training, the weak classifiers are combined into a strong classifier with a higher recognition rate, and the strong classifiers obtained from training are composed into a cascade classifier with a very high detection rate;
Step C, use the trained classifiers to detect the five-finger-spread and fist gestures in the image. Once the hand region is found, record the rectangle of the hand region, with top-left corner (x0, y0), width w and height h. Set this rectangle as the region of interest and obtain the hand centre point (xc, yc), where xc = x0 + 0.5*w and yc = y0 + 0.5*h. Record the hand centre point as the initial position at which the user starts to control, so as to determine the initial position point and initialize the annular control area.
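The region-of-interest and centre-point computation of step C is plain arithmetic on the detected rectangle; a minimal sketch (the function name is illustrative):

```python
def init_control_area(x0, y0, w, h):
    """Given the detected hand rectangle (top-left (x0, y0), width w,
    height h), return the region of interest and the hand centre point
    used as the initial position of the annular control area."""
    xc = x0 + 0.5 * w   # xc = x0 + 0.5*w, as in step C
    yc = y0 + 0.5 * h   # yc = y0 + 0.5*h
    roi = (x0, y0, w, h)
    return roi, (xc, yc)

# A hand rectangle at (100, 50) of size 40x60 gives centre (120, 80).
print(init_control_area(100, 50, 40, 60))  # ((100, 50, 40, 60), (120.0, 80.0))
```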
Said step 3 comprises the following steps:
Step I, according to skin-color sample analysis, the skin color of the human hand clusters well in the YCrCb color space: once the influence of the luminance Y is removed, the Cr and Cb values of skin concentrate in a small elliptical region. The conversion between the YCrCb and RGB color spaces is as follows:
Y=0.257R+0.504G+0.098B+16,
Cb=-0.148R-0.219G+0.439B+128,
Cr=0.439R-0.368G-0.071B+128,
From a hand skin-color sample set, the thresholds for the Cr and Cb channels are:
Thres(Cb,Cr)={Cb,Cr│95<Cb<139,122<Cr<167},
where Thres(Cb, Cr) denotes the threshold set;
Step II, first convert the RGB image obtained from the video sequence to the YCrCb color space, then apply the threshold Thres(Cb, Cr) to segment the skin color, obtaining a binary skin-color image, that is:
I(x, y) = 1, if (Cb(x, y), Cr(x, y)) ∈ Thres(Cb, Cr); I(x, y) = 0, otherwise,
where Thres(Cb, Cr) denotes the threshold set.
In said step 4, the difference operation on two adjacent frames within the region of interest is performed as follows: let I_t be the current frame and I_(t-1) the previous frame; compute the difference of the two frames, I_diff = I_t - I_(t-1), and binarize the result, that is:
I_diff(x, y) = 1, if |I_t(x, y) - I_(t-1)(x, y)| > T; I_diff(x, y) = 0, otherwise,
where T is the binarization threshold; morphological image processing is then applied to the difference result.
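A minimal sketch of the frame-difference binarization; the patent does not give a numeric binarization threshold, so the default value here is an assumption:

```python
def frame_difference(curr, prev, thresh=30):
    """Absolute difference of two grayscale frames (nested lists of pixel
    values), binarized with an assumed threshold: pixels that changed by
    more than `thresh` are marked 1 (motion), others 0."""
    return [[1 if abs(c - p) > thresh else 0 for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, prev)]

# A pixel jumping from 10 to 100 is motion; a 2-level change is noise.
print(frame_difference([[100, 10]], [[10, 12]]))  # [[1, 0]]
```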
In said step 5, the hand skin-color information from step 3 and the motion information from step 4 are combined by taking their union, yielding within the region of interest a binary image I that describes the hand with background-noise interference removed. The centroid of the target in image I is computed from the zeroth-order moment and the two first-order moments. The zeroth-order moment is the sum of the image pixel values:
m00 = Σx Σy I(x, y),
There are two first-order moments:
m10 = Σx Σy x I(x, y),
m01 = Σx Σy y I(x, y),
from which:
x̄ = m10 / m00,
ȳ = m01 / m00,
giving the position of the hand in the current frame.
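The centroid computation of step 5 follows directly from the moment formulas above; a minimal sketch on a binary image stored as nested lists:

```python
def centroid(binary_image):
    """Centroid of a binary image from the zeroth- and first-order moments,
    as in step 5: x = m10/m00, y = m01/m00."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(binary_image):
        for x, v in enumerate(row):
            m00 += v        # zeroth-order moment: sum of pixel values
            m10 += x * v    # first-order moment in x
            m01 += y * v    # first-order moment in y
    if m00 == 0:
        return None         # no foreground pixels, no hand found
    return (m10 / m00, m01 / m00)

# Two foreground pixels in the right column give centroid (1.0, 0.5).
print(centroid([[0, 1], [0, 1]]))  # (1.0, 0.5)
```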
In said step 6, the movement direction and movement rate of the gesture in the interface are determined as follows: the tracked hand position is mapped by taking the distance between the current-frame gesture coordinates (x, y) obtained in said step 5 and the initial centre point (xc, yc) obtained in said step 2, and applying the proportional relationship between distance and speed to determine the movement rate of the gesture in the interface; at the same time, the movement direction of the gesture in the interface is determined by the vector direction from the initial position to the current gesture position.
Working principle of the invention: according to the initial position at which the user starts to control, the invention constrains the user's operation to the annulus shown in Fig. 1. In Fig. 1, point 1 represents the initial position coordinates of the gesture in the captured image sequence when the user begins operating; this does not change during a single operation. Point 4 represents the position of the user's current gesture. Motion of the gesture inside circle 2 is regarded as motion caused by unstable factors such as the user's own hand jitter, and the system does not respond. When the gesture is outside circle 3, the system likewise does not respond. Motion of the gesture within the annulus formed by circles 3 and 2 is regarded as a valid operation. While the user's gesture is inside this annulus, the gesture in the interface moves continuously: its direction is the vector from the initial position (point 1) to the current gesture position in the annulus (point 4), and its speed depends on the distance between the current gesture position (point 4) and the initial position (point 1).
The correspondence between the distance from the current gesture position to the initial position and the movement rate of the gesture in the interface is shown in Fig. 2. The abscissa is the distance of the current gesture position from the initial position; R1 is the radius of circle 2 in Fig. 1 and R2 the radius of circle 3. The ordinate is a ratio: a speed V0 is defined as the number of pixels the interface gesture moves per unit time, and the ordinate is the ratio of the interface gesture's movement rate to V0, with maximum Pmax (Pmax > 1) and minimum Pmin (Pmin >= 0). When the distance is less than R1 (gesture inside circle 2), the ratio is 0, the movement rate is 0, and the system does not respond. When the distance is greater than R2 (gesture outside circle 3), the ratio is likewise 0 and the system does not respond. When the distance RX is between R1 and R2 (gesture inside the annulus formed by circles 3 and 2), the ratio is:
P(RX) = Pmin + (Pmax - Pmin) * (RX - R1) / (R2 - R1),
In this way, as long as the user's gesture stays at a position within the annulus, the gesture in the interface keeps moving in the corresponding direction at the corresponding speed; by changing the position of the gesture within the annulus, the user changes the movement direction and movement rate of the gesture in the interface.
With this gesture roaming control method, speed is regulated by distance and direction is controlled by the vector. When the desired item is far from the current gesture position, the user moves the gesture to a position in the annulus far from the initial position, achieving fast, low-precision movement; when the desired item is near, the user moves the gesture to a position in the annulus close to the initial position, achieving slow, high-precision movement. This solves the problems that arise in direct coordinate mapping. In this method the gesture in the interface moves continuously; what the user's operation changes is the direction and speed of that movement. Roaming and control over any distance in the interface can therefore be achieved within a small physical range, avoiding the problem that arises in velocity mapping.
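The annulus control scheme can be sketched as follows; the linear distance-to-rate relation and the Pmin/Pmax values are illustrative assumptions, since the patent defines the relation only through Fig. 2:

```python
import math

def annulus_mapping(current, initial, r1, r2, pmin=0.2, pmax=2.0):
    """Map the current hand position to an interface (direction, rate ratio).

    Inside the inner circle (d <= r1) or outside the outer circle (d >= r2)
    the rate is 0 (the system does not respond). Inside the annulus the
    rate ratio is assumed to rise linearly from pmin at r1 to pmax at r2;
    the linear form and the pmin/pmax values are illustrative assumptions.
    """
    dx, dy = current[0] - initial[0], current[1] - initial[1]
    d = math.hypot(dx, dy)
    if d <= r1 or d >= r2:
        return (0.0, 0.0), 0.0          # dead zones: no response
    direction = (dx / d, dy / d)        # unit vector initial -> current
    ratio = pmin + (pmax - pmin) * (d - r1) / (r2 - r1)
    return direction, ratio

# A hand 10 units to the right of the initial point, with r1=5 and r2=15,
# points the interface gesture right at the midpoint rate.
print(annulus_mapping((10, 0), (0, 0), 5, 15))
```

The interface cursor would then be advanced each frame by `direction` scaled by `ratio * V0`, so the user steers by repositioning the hand inside the annulus rather than by covering the full interface distance.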
Compared with the prior art, the present invention has the following advantages and effects:
1. It solves the problems of direct coordinate mapping: the large hand movement required when the selected item is far from the current gesture position, and the insufficient precision when the selected item is near. By changing the distance between the current gesture and the initial position, the user obtains fast gesture roaming when that distance is large and precise gesture roaming when it is small.
2. It solves the problem of velocity mapping, in which the item the user wants to select may lie beyond the range the user's hand can reach. The gesture in the interface moves automatically while the user controls its direction and speed, achieving full-interface operation within a small, reachable range.
3. It provides, within a mid-air gesture roaming control method based on a distance vector, a speed control mode proportional to the distance between the current gesture position and the initial position.
Brief description of the drawings
Fig. 1 is a schematic diagram of the mid-air gesture roaming control method based on a distance vector of the present invention. In the figure, 1 is the initial position point; 2 is the circular range in which the system does not respond; the annulus formed by circles 3 and 2 is the system response range; 4 is the current gesture position.
Fig. 2 shows, for the mid-air gesture roaming control method based on a distance vector of the present invention, the correspondence between the distance from the current gesture position to the initial position and the movement rate of the gesture in the interface. The abscissa is the distance between the current gesture position and the initial position; the ordinate is the ratio of the interface gesture's movement rate to the system's predefined speed.
Detailed description
The present invention is described in further detail below with reference to the embodiment and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, 1 is the initial position point, 2 is the circular range in which the system does not respond, the annulus formed by circles 3 and 2 is the system response range, and 4 is the current gesture position. A mid-air gesture roaming control method based on a distance vector comprises the following steps:
Step 1, use a device such as an ordinary camera or a depth camera as the front-end sensor, and process and analyze the image sequence it captures;
Step 2, use fixed gesture-detection classifiers obtained by Adaboost training to detect the five-finger-spread and fist gestures. The classifiers for the two gestures are trained on separate sample sets: each positive set contains gesture sample pictures under different backgrounds, different illumination conditions and from different people, and each negative set likewise contains images under different backgrounds and illumination conditions but without gestures. Haar-like features are extracted from the sample images using the integral image. Each round of training yields a weak classifier with its own weight; weak classifiers with high discrimination receive larger weights and those with low discrimination smaller weights. After many rounds of training, the weak classifiers are combined into a strong classifier with a higher recognition rate, and the trained strong classifiers are composed into a cascade classifier with a very high detection rate. The trained classifiers detect the five-finger-spread and fist gestures in the image; once the hand region is found, the rectangle of the hand region is recorded, with top-left corner (x0, y0), width w and height h. This rectangle is set as the region of interest, and the hand centre point (xc, yc) is obtained, where xc = x0 + 0.5*w and yc = y0 + 0.5*h. This hand centre point is recorded as the initial position at which the user starts to control; it determines the position of initial point 1 in Fig. 1 and initializes the whole annular control area of Fig. 1;
Step 3, apply a skin-color segmentation algorithm to the image within the region of interest. Skin-color sample analysis shows that the skin color of the human hand clusters well in the YCrCb color space: with the influence of the luminance Y removed, the Cr and Cb values of skin concentrate in a small elliptical region. The conversion between the YCrCb and RGB color spaces is as follows:
Y=0.257R+0.504G+0.098B+16,
Cb=-0.148R-0.219G+0.439B+128,
Cr=0.439R-0.368G-0.071B+128,
From a hand skin-color sample set, the thresholds for the Cr and Cb channels are:
Thres(Cb,Cr)={Cb,Cr│95<Cb<139,122<Cr<167},
The RGB image obtained from the video sequence is first converted to the YCrCb color space, and the threshold Thres(Cb, Cr) is then applied to segment the skin color, obtaining a binary skin-color image, that is:
I(x, y) = 1, if (Cb(x, y), Cr(x, y)) ∈ Thres(Cb, Cr); I(x, y) = 0, otherwise,
where Thres(Cb, Cr) denotes the threshold set;
Step 4, perform a difference operation on two adjacent frames within the region of interest. Let I_t be the current frame and I_(t-1) the previous frame; compute the difference of the two frames, I_diff = I_t - I_(t-1), and binarize the result, that is:
I_diff(x, y) = 1, if |I_t(x, y) - I_(t-1)(x, y)| > T; I_diff(x, y) = 0, otherwise,
where T is the binarization threshold.
To obtain a clearer motion contour and fill holes inside it, morphological image processing, mainly dilation and erosion, can be applied to the difference result to further remove image-noise interference;
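The morphological processing mentioned here can be sketched as a closing (dilation followed by erosion), which fills small holes in the motion silhouette; the 3x3 structuring element is an assumption, as the patent does not specify one:

```python
def dilate(img):
    """3x3 dilation of a binary image (nested lists): each output pixel is
    the maximum over its 3x3 neighborhood (clipped at the borders)."""
    h, w = len(img), len(img[0])
    return [[max(img[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def erode(img):
    """3x3 erosion: each output pixel is the minimum over its neighborhood."""
    h, w = len(img), len(img[0])
    return [[min(img[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def close_holes(img):
    """Morphological closing (dilation then erosion) fills small holes
    inside the motion silhouette, as described in step 4."""
    return erode(dilate(img))

# A single-pixel hole in the middle of a solid 3x3 region is filled.
print(close_holes([[1, 1, 1], [1, 0, 1], [1, 1, 1]]))
```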
Step 5: the hand skin-color information from step 3 and the motion information from step 4 are combined by taking their union, yielding within the region of interest a binary image I that describes the hand with background-noise interference removed. The centroid of the target in image I is computed from the zeroth-order moment and the two first-order moments. The zeroth-order moment is the sum of the image pixel values:
m00 = Σx Σy I(x, y),
There are two first-order moments:
m10 = Σx Σy x I(x, y),
m01 = Σx Σy y I(x, y),
from which:
x̄ = m10 / m00,
ȳ = m01 / m00,
finally giving the position (x, y) of the hand in the current frame;
Step 6: the tracked hand position is mapped according to the control mode shown in Fig. 1. From the distance between the current-frame gesture coordinates (x, y) obtained in step 5 and the initial centre point (xc, yc) obtained in step 2, and the distance-speed proportional relationship shown in Fig. 2, the movement rate of the gesture in the interface is determined. At the same time, the movement direction of the gesture in the interface is determined by the vector direction from the initial position to the current gesture position;
Step 7: the gesture in the interface responds with the movement rate and movement direction obtained by the mapping in step 6, realizing gesture roaming.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited thereto. Any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall fall within the protection scope of the present invention.

Claims (6)

1. A mid-air gesture roaming control method based on a distance vector, characterized by comprising the following steps:
Step 1, capture, analyze and process a video image sequence;
Step 2, detect the five-finger-spread gesture and the fist gesture, define the detected hand region as the region of interest, and record the initial position at which the user starts to control, so as to initialize the control area;
Step 3, apply a skin-color segmentation algorithm to the image within the region of interest to obtain the skin-color information in the region of interest;
Step 4, perform a difference operation on two adjacent frames within the region of interest to obtain the motion information of the hand in the region of interest;
Step 5, compute the position coordinates of the hand in each frame from the hand skin-color information of said step 3 and the motion information of said step 4;
Step 6, determine the movement direction and movement rate of the gesture in the interface;
Step 7, the gesture in the interface responds according to the direction and rate determined in step 6, performing gesture roaming.
2. The mid-air gesture roaming control method based on a distance vector according to claim 1, characterized in that said step 2 comprises the following steps:
Step A, detect the five-finger-spread gesture and the fist gesture with fixed gesture-detection classifiers obtained by Adaboost training; the classifiers for the two gestures are each trained on a positive and a negative sample set, the positive sets containing gesture sample pictures under different backgrounds, different illumination conditions and from different people, and the negative sets likewise containing images under different backgrounds and illumination conditions but without gestures;
Step B, extract Haar-like features from the sample images using the integral image; each round of training yields a weak classifier with its own weight, weak classifiers with high discrimination receiving larger weights and those with low discrimination smaller weights; after many rounds of training, the weak classifiers are combined into a strong classifier with a higher recognition rate, and the trained strong classifiers are composed into a cascade classifier with a very high detection rate;
Step C, use the trained classifiers to detect the five-finger-spread and fist gestures in the image; once the hand region is found, record the rectangle of the hand region, with top-left corner (x0, y0), width w and height h; set this rectangle as the region of interest and obtain the hand centre point (xc, yc), where xc = x0 + 0.5*w and yc = y0 + 0.5*h; record the hand centre point as the initial position at which the user starts to control, so as to determine the initial position point and initialize the annular control area.
3. The mid-air gesture roaming control method based on a distance vector according to claim 1, characterized in that said step 3 comprises the following steps:
Step I, according to skin-color sample analysis, the skin color of the human hand clusters well in the YCrCb color space: with the influence of the luminance Y removed, the Cr and Cb values of skin concentrate in a small elliptical region; the conversion between the YCrCb and RGB color spaces is as follows:
Y=0.257R+0.504G+0.098B+16,
Cb=-0.148R-0.219G+0.439B+128,
Cr=0.439R-0.368G-0.071B+128,
from a hand skin-color sample set, the thresholds for the Cr and Cb channels are:
Thres(Cb,Cr)={Cb,Cr│95<Cb<139,122<Cr<167},
where Thres(Cb, Cr) denotes the threshold set;
Step II, first convert the RGB image obtained from the video sequence to the YCrCb color space, then apply the threshold Thres(Cb, Cr) to segment the skin color, obtaining a binary skin-color image, that is:
I(x, y) = 1, if (Cb(x, y), Cr(x, y)) ∈ Thres(Cb, Cr); I(x, y) = 0, otherwise,
where Thres(Cb, Cr) denotes the threshold set.
4. The mid-air gesture roaming control method based on a distance vector according to claim 1, characterized in that in said step 4 the difference operation on two adjacent frames within the region of interest is performed as follows: let I_t be the current frame and I_(t-1) the previous frame; compute the difference of the two frames, I_diff = I_t - I_(t-1), and binarize the result, that is:
I_diff(x, y) = 1, if |I_t(x, y) - I_(t-1)(x, y)| > T; I_diff(x, y) = 0, otherwise,
where T is the binarization threshold; morphological image processing is then applied to the difference result.
5. The mid-air gesture roaming control method based on a distance vector according to claim 1, characterized in that in said step 5 the hand skin-color information from step 3 and the motion information from step 4 are combined by taking their union, yielding within the region of interest a binary image I that describes the hand with background-noise interference removed; the centroid of the target in image I is computed from the zeroth-order moment and the two first-order moments; the zeroth-order moment is the sum of the image pixel values:
m00 = Σx Σy I(x, y),
there are two first-order moments:
m10 = Σx Σy x I(x, y),
m01 = Σx Σy y I(x, y),
from which:
x̄ = m10 / m00,
ȳ = m01 / m00,
giving the position of the hand in the current frame.
6. The mid-air gesture roaming control method based on a distance vector according to claim 1, characterized in that in said step 6 the movement direction and movement rate of the gesture in the interface are determined as follows: the tracked hand position is mapped by taking the distance between the current-frame gesture coordinates (x, y) obtained in said step 5 and the initial centre point (xc, yc) obtained in said step 2, and applying the proportional relationship between distance and speed to determine the movement rate of the gesture in the interface; at the same time, the movement direction of the gesture in the interface is determined by the vector direction from the initial position to the current gesture position.
CN201410038474.5A 2014-01-26 2014-01-26 Mid-air gesture roaming control method based on distance vector Pending CN103793056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410038474.5A CN103793056A (en) 2014-01-26 2014-01-26 Mid-air gesture roaming control method based on distance vector


Publications (1)

Publication Number Publication Date
CN103793056A true CN103793056A (en) 2014-05-14

Family

ID=50668814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410038474.5A Pending CN103793056A (en) 2014-01-26 2014-01-26 Mid-air gesture roaming control method based on distance vector

Country Status (1)

Country Link
CN (1) CN103793056A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090284469A1 (en) * 2008-05-16 2009-11-19 Tatung Company Video based apparatus and method for controlling the cursor
CN101901052A (en) * 2010-05-24 2010-12-01 华南理工大学 Target control method based on mutual reference of both hands
CN102662464A (en) * 2012-03-26 2012-09-12 华南理工大学 Gesture control method of gesture roaming control system
CN102707802A (en) * 2012-05-09 2012-10-03 华南理工大学 Method for controlling speed of mapping of gesture movement to interface
CN102799875A (en) * 2012-07-25 2012-11-28 华南理工大学 Tracing method of arbitrary hand-shaped human hand


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306914A (en) * 2014-06-26 2016-02-03 原相科技(槟城)有限公司 Color image sensor and operating method thereof
CN105306914B (en) * 2014-06-26 2018-08-31 原相科技股份有限公司 Chromatic image sensor and its operating method
CN104123570A (en) * 2014-07-22 2014-10-29 西安交通大学 Shared weak classifier combination based hand classifier and training and detection method
CN104123570B (en) * 2014-07-22 2018-06-05 西安交通大学 Hand classifier based on a combination of shared weak classifiers, and training and detection method
CN105955450A (en) * 2016-04-15 2016-09-21 范长英 Natural interaction system based on computer virtual interface
CN111226226A (en) * 2018-06-29 2020-06-02 杭州眼云智家科技有限公司 Motion-based object detection method, object detection device and electronic equipment
CN109474850A (en) * 2018-11-29 2019-03-15 北京字节跳动网络技术有限公司 Move pixel special video effect adding method, device, terminal device and storage medium
CN110069137A (en) * 2019-04-30 2019-07-30 徐州重型机械有限公司 Gestural control method, control device and control system
CN110069137B (en) * 2019-04-30 2022-07-08 徐州重型机械有限公司 Gesture control method, control device and control system
CN112116598A (en) * 2020-08-04 2020-12-22 北京农业信息技术研究中心 Flower type identification method and system
CN112686231A (en) * 2021-03-15 2021-04-20 南昌虚拟现实研究院股份有限公司 Dynamic gesture recognition method and device, readable storage medium and computer equipment
CN112686231B (en) * 2021-03-15 2021-06-01 南昌虚拟现实研究院股份有限公司 Dynamic gesture recognition method and device, readable storage medium and computer equipment

Similar Documents

Publication Publication Date Title
Mukherjee et al. Fingertip detection and tracking for recognition of air-writing in videos
CN103793056A (en) Mid-air gesture roaming control method based on distance vector
KR102465532B1 (en) Method for recognizing an object and apparatus thereof
CN103098076B (en) Gesture recognition system for TV control
EP2365420B1 (en) System and method for hand gesture recognition for remote control of an internet protocol TV
CN110221699B (en) Eye movement behavior identification method of front-facing camera video source
CN103488294B (en) A non-contact gesture control mapping adjustment method based on the user's interaction habits
CN103353935A (en) 3D dynamic gesture identification method for intelligent home system
Jalab et al. Human computer interface using hand gesture recognition based on neural network
CN101406390A (en) Method and apparatus for detecting part of human body and human, and method and apparatus for detecting objects
McAllister et al. Hand tracking for behaviour understanding
CN111444764A (en) Gesture recognition method based on depth residual error network
CN105912126A (en) Method for adaptively adjusting gain, mapped to interface, of gesture movement
Doan et al. Recognition of hand gestures from cyclic hand movements using spatial-temporal features
CN108614988A (en) A kind of motion gesture automatic recognition system under complex background
CN113220114A (en) Embedded non-contact elevator key interaction method integrating face recognition
Elakkiya et al. Intelligent system for human computer interface using hand gesture recognition
Guo et al. Gesture recognition for Chinese traffic police
Khan et al. Computer vision based mouse control using object detection and marker motion tracking
Kushwaha et al. Rule based human activity recognition for surveillance system
Shin et al. Welfare interface implementation using multiple facial features tracking for the disabled people
Raza et al. An integrative approach to robust hand detection using CPM-YOLOv3 and RGBD camera in real time
Manresa-Yee et al. Towards hands-free interfaces based on real-time robust facial gesture recognition
Phung et al. A new image feature for fast detection of people in images
Ji et al. Design of human machine interactive system based on hand gesture recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140514