CN104299004B - Gesture recognition method based on multi-feature fusion and fingertip detection - Google Patents

Gesture recognition method based on multi-feature fusion and fingertip detection

Info

Publication number
CN104299004B
CN104299004B (application CN201410568977.3A, publication CN104299004A)
Authority
CN
China
Prior art keywords
gesture
defect
point
fingertip
bounding rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410568977.3A
Other languages
Chinese (zh)
Other versions
CN104299004A (en
Inventor
于慧敏
盛亚婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201410568977.3A priority Critical patent/CN104299004B/en
Publication of CN104299004A publication Critical patent/CN104299004A/en
Application granted granted Critical
Publication of CN104299004B publication Critical patent/CN104299004B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a gesture recognition method based on multi-feature fusion and fingertip detection, comprising a training process and a recognition process. In the training process, reasonable gesture features are selected for complex gestures, and a multi-feature-fusion extraction algorithm is used to train a support vector machine (SVM) on the gestures, forming a training model. In the recognition process, gesture detection is first performed on the input video image sequence; multiple features are then extracted, fused, and input into the SVM to obtain a recognition result. At the same time, defect-based fingertip detection is performed on the gesture: a defect-point filter locates the position of each fingertip. The two results are then combined to obtain the final gesture recognition result. The method effectively addresses the low gesture recognition rate in complex scenes while meeting real-time requirements, and is well suited to human-computer interaction.

Description

Gesture recognition method based on multi-feature fusion and fingertip detection
Technical field
The present invention relates to a gesture recognition method, and in particular to a gesture recognition method based on multi-feature fusion and fingertip detection.
Background technology
As computers have become ubiquitous in modern society, the demand for human-computer interaction technology in daily life keeps growing. Among interaction techniques, gestures are a natural mode of interaction that matches human behavioural habits; being intuitive, convenient and natural, they have attracted wide attention and are one of the ideal choices for novel human-computer interaction. Gesture recognition is one of the most critical steps in such systems: its accuracy directly determines the quality of communication between human and computer.
Combining existing research and practical applications, the main technical difficulty in the gesture recognition field can be characterised as the tension between system real-time performance and gesture recognition rate. To obtain a higher recognition rate, researchers usually retain as many features as possible to characterise the gesture and recognise it with complex algorithms, which inevitably reduces recognition speed, so the system's real-time requirement is not met. Conversely, to suit real-time systems, the computational load can usually only be reduced by lowering the feature dimension, which increases the influence of noise and places very high demands on the quality of the preceding gesture segmentation; the recognition rate drops, the set of recognisable gestures becomes limited, and the method loses practical applicability.
Summary of the invention
To solve the above problems in the prior art, the invention discloses a gesture recognition method based on multi-feature fusion and fingertip detection. By selecting reasonable gesture features, namely Hu moment features, defect features and ratio features, and using a multi-feature-fusion extraction algorithm, the method keeps the computational cost of the features small while their effectiveness stays high; combined with a defect-based fingertip detection method, the recognition accuracy is further improved. In this way the low gesture recognition rate in complex scenes is effectively addressed while the real-time requirement is met.
The present invention adopts the following technical scheme. A gesture recognition method based on multi-feature fusion and fingertip detection comprises the following steps:
Step 1): Training process: select reasonable gesture features for complex gestures and, using the multi-feature-fusion extraction algorithm, train a support vector machine (SVM) on the gestures to form a training model;
Step 2): Recognition process: perform gesture detection on the input image sequence, extract and fuse multiple features of the detected gesture, and input them into the SVM to obtain the SVM recognition result; at the same time, perform defect-based fingertip detection on the gesture, and combine the two results to output the final recognition result.
Further, the detailed procedure of the training process described in step 1) is as follows:
Step 1.1): Hu moment feature extraction:
Compute the Hu moment features of each gesture image and normalise them. The Hu moment feature can be written as
Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7)
where φ1-φ7 are the seven components of the Hu moment feature.
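The patent does not give an implementation for the Hu moments; the following is a minimal NumPy sketch of the seven invariants computed from normalized central moments (the patent's exact normalisation scheme is unspecified, so scale-normalised moments are assumed here):

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a binary image (pure NumPy sketch).

    Uses normalized central moments eta_pq = mu_pq / mu_00^(1+(p+q)/2),
    which makes the invariants independent of translation and scale.
    """
    ys, xs = np.nonzero(img)
    m00 = len(xs)                       # zeroth moment = pixel count
    x, y = xs - xs.mean(), ys - ys.mean()   # centre on the centroid

    def eta(p, q):                      # normalized central moment
        return (x**p * y**q).sum() / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    phi5 = ((n30 - 3 * n12) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            + (3 * n21 - n03) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    phi6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
            + 4 * n11 * (n30 + n12) * (n21 + n03))
    phi7 = ((3 * n21 - n03) * (n30 + n12)
            * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
            - (n30 - 3 * n12) * (n21 + n03)
            * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```

Because central moments are used, translating the gesture inside the frame leaves the feature vector unchanged, which is the property the method relies on.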
Step 1.2): Defect feature extraction:
The defect parts of a gesture are the regions obtained by subtracting the gesture contour from the gesture convex hull. The number of defects is computed as follows:
Obtain the gesture contour by 8-neighbourhood search; judge whether the polygon enclosing the gesture contour is convex, and if so, take that convex polygon as the gesture convex hull; smooth the hull by polygonal approximation; then compute the number of gesture defects from the gesture contour and the convex hull. Each defect comprises three points, the start point, the end point and the depth point, together with the distance between the depth point and the hull, defined respectively as ptStart, ptEnd, ptFar and Depth.
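The hull-and-defect computation described above can be sketched in pure Python for an ordered contour polygon (an illustrative stand-in for the patent's 8-neighbourhood contour; the `Defect` fields mirror the ptStart/ptEnd/ptFar/Depth definitions):

```python
from collections import namedtuple

# One defect per hull edge that "skips" contour points: ptStart/ptEnd lie
# on the hull, ptFar is the contour point deepest below that hull edge.
Defect = namedtuple("Defect", "ptStart ptEnd ptFar depth")

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain convex hull."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(reversed(pts))
    return lower[:-1] + upper[:-1]

def _depth(p, a, b):
    # Perpendicular distance from p to the hull edge (a, b).
    return abs(_cross(a, b, p)) / ((b[0]-a[0])**2 + (b[1]-a[1])**2) ** 0.5

def convexity_defects(contour):
    """Defects of an ordered contour polygon: hull minus contour regions."""
    on_hull = set(convex_hull(contour))
    idx = [i for i, p in enumerate(contour) if p in on_hull]
    n, defects = len(contour), []
    for a, b in zip(idx, idx[1:] + [idx[0] + n]):
        between = [contour[i % n] for i in range(a + 1, b)]
        if not between:
            continue
        start, end = contour[a], contour[b % n]
        far = max(between, key=lambda p: _depth(p, start, end))
        defects.append(Defect(start, end, far, _depth(far, start, end)))
    return defects
```

For a square contour with a notch cut into one side, the sketch reports exactly one defect whose ptFar is the bottom of the notch, which is the structure the fingertip filter in step 2.4) consumes.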
Step 1.3): Ratio feature extraction:
Contour perimeter to contour area ratio:
Define the gesture contour perimeter as ConLenght and the gesture contour area as ConArea. The contour perimeter-to-area feature of the gesture is
ConLA = ConLenght / ConArea
Contour perimeter to bounding-rectangle perimeter ratio:
Define the bounding-rectangle perimeter of the gesture as RectLenght. The perimeter ratio feature of the contour and the bounding rectangle is
LenCR = ConLenght / RectLenght
Contour area to bounding-rectangle area ratio:
Define the bounding-rectangle area of the gesture as RectArea. The area ratio feature of the contour and the bounding rectangle is
AreaCR = ConArea / RectArea
Bounding-rectangle aspect ratio:
Define the bounding-rectangle width as W and its height as H. The aspect-ratio feature is
α = W / H
Ratio of centroid distances to the upper and lower rectangle boundaries:
Let H1 and H2 denote the distances from the gesture centroid to the upper and lower boundaries of the bounding rectangle. The feature is
β = H1 / H2
Ratio of centroid distances to the left and right rectangle boundaries:
Let W1 and W2 denote the distances from the gesture centroid to the left and right boundaries of the bounding rectangle. The feature is
η = W1 / W2
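The six ratios can be sketched for a contour given as (x, y) vertices; the names `alpha`/`beta`/`eta` follow the α, β, η of the fused vector in step 1.4), and the vertex centroid is used as a stand-in for the region centroid:

```python
def ratio_features(contour):
    """Six ratio features of a closed contour given as (x, y) vertices.

    Perimeter/area use the polygon edge lengths and the shoelace formula;
    feature names follow step 1.4): ConLA, LenCR, AreaCR, alpha, beta, eta.
    """
    n = len(contour)
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    per = sum(((xs[i] - xs[(i + 1) % n]) ** 2 +
               (ys[i] - ys[(i + 1) % n]) ** 2) ** 0.5 for i in range(n))
    area = abs(sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i]
                   for i in range(n))) / 2
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    W, H = x1 - x0, y1 - y0                 # bounding-rectangle size
    cx, cy = sum(xs) / n, sum(ys) / n       # centroid of the vertices
    return {
        "ConLA":  per / area,               # contour perimeter / contour area
        "LenCR":  per / (2 * (W + H)),      # contour / rectangle perimeter
        "AreaCR": area / (W * H),           # contour / rectangle area
        "alpha":  W / H,                    # rectangle aspect ratio
        "beta":   (cy - y0) / (y1 - cy),    # centroid-to-top / centroid-to-bottom
        "eta":    (cx - x0) / (x1 - cx),    # centroid-to-left / centroid-to-right
    }
```

A sanity check: for an axis-aligned square the contour and its bounding rectangle coincide and the centroid sits in the centre, so all six ratios equal 1.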
Step 1.4): Multi-feature fusion:
Fuse the Hu moment feature, the defect feature and the six ratio features extracted in steps 1.1)-1.3) into a single feature vector feature characterising the gesture image:
feature = {Hu, numDefects, ConLA, LenCR, AreaCR, α, β, η}
where numDefects is the number of defects and Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7).
Step 1.5): Support vector machine training:
Input the feature vector feature of each gesture sample image together with its class label into the support vector machine for training, where the class label identifies the gesture type.
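The patent specifies neither kernel nor parameters for the SVM. As an illustrative sketch, scikit-learn's `SVC` with an RBF kernel (an assumed choice) can stand in, here trained on toy 14-dimensional vectors in place of the real fused features (7 Hu moments + defect count + 6 ratios):

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for the fused 14-dimensional feature vectors; real vectors
# would come from the extraction steps above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(20, 14))
               for c in (0.0, 1.0, 2.0)])
y = np.repeat(["gesture_0", "gesture_1", "gesture_2"], 20)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # kernel choice is assumed
clf.fit(X, y)                                    # "training model" of step 1
pred = clf.predict([[1.0] * 14])[0]              # vector near class centre 1.0
```

At recognition time (step 2.3) the same `predict` call maps each fused feature vector to one of the trained gesture labels.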
Further, the detailed procedure of the recognition process described in step 2) is as follows:
Step 2.1): Gesture detection:
Perform mixed-Gaussian background modelling improved with spatio-temporal information on the input video image sequence, and combine it with the result of skin-colour detection over multiple colour spaces; after integrating the two results, apply filtering and morphological operations to obtain a binarised gesture segmentation image.
Step 2.2): Feature extraction:
Compute the feature vector of the gesture image from its binary map according to steps 1.1)-1.4);
Step 2.3): SVM recognition:
Input the feature vector of the gesture image into the support vector machine and output the SVM recognition result.
Step 2.4): Fingertip detection:
The number of fingertips equals the number of extended fingers; fingertip detection yields both the fingertip count and the fingertip positions. Obtain the gesture defects and the three defect points ptStart, ptEnd, ptFar of each defect according to step 1.2). Since the fingertip points are contained among these defect points, build a valid-defect-point filter; the defect points that pass the filter are the fingertip points. The filter conditions are as follows:
i. The distance between the start point and the depth point of the defect exceeds a given proportion of the bounding-rectangle height H:
Lenght(ptStart, ptFar) > αH, where α is a proportionality coefficient.
ii. The distance between the depth point and the end point of the defect exceeds the same proportion of H:
Lenght(ptEnd, ptFar) > αH
iii. The angle formed by the start point, depth point and end point of the defect is below a threshold T_angle:
Angle(ptStart, ptFar, ptEnd) < T_angle
iv. The start point, depth point and end point of the defect lie within a given range of the gesture bounding rectangle:
y_bounding < y_ptStart < y_bounding + βH
y_bounding < y_ptEnd < y_bounding + βH
y_bounding < y_ptFar < y_bounding + βH
where β is a proportionality coefficient and H is the height of the gesture bounding rectangle.
v. When the distance between two defect points is below T_dis, the two points approximately coincide and are judged to be the same defect point:
Lenght(pt_i, pt_j) < T_dis
A defect point that satisfies all the above conditions simultaneously is judged to be a valid defect point; the filter records the number and positions of the valid defect points, which are the fingertip points.
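Conditions i-iv of the filter can be sketched directly; the threshold values used here (α = 0.1, β = 0.8, T_angle = 90°) are illustrative assumptions, since the patent leaves them as free coefficients, and the deduplication of condition v would be applied to the surviving points before counting:

```python
import math

def _angle(a, far, b):
    """Angle at ptFar (degrees) formed by ptStart-ptFar-ptEnd."""
    v1 = (a[0] - far[0], a[1] - far[1])
    v2 = (b[0] - far[0], b[1] - far[1])
    cos = ((v1[0] * v2[0] + v1[1] * v2[1])
           / (math.hypot(*v1) * math.hypot(*v2)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def is_fingertip_defect(defect, rect_y, rect_h,
                        alpha=0.1, beta=0.8, t_angle=90.0):
    """Conditions i-iv of the defect filter (threshold values are assumed).

    defect: (ptStart, ptEnd, ptFar) tuple; rect_y / rect_h: top edge and
    height of the gesture bounding rectangle (image y grows downward).
    """
    start, end, far = defect
    return (math.dist(start, far) > alpha * rect_h            # condition i
            and math.dist(end, far) > alpha * rect_h          # condition ii
            and _angle(start, far, end) < t_angle             # condition iii
            and all(rect_y < p[1] < rect_y + beta * rect_h    # condition iv
                    for p in (start, end, far)))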
Step 2.5): Combining the recognition results:
Compare the result output by the support vector machine with the fingertip count output by fingertip detection; when the two are consistent, output the recognition result.
Adopting the above technical scheme, the present invention has the following technical effects compared with the prior art:
1) The multi-feature-fusion extraction method keeps the computational cost of the features small, and each feature describes the gesture from a different perspective, so errors caused by the misdetection of any single feature can be effectively corrected, yielding a higher recognition rate at minimal computational cost.
2) The defect-based fingertip detection method is intuitive and easy to understand, matches prior knowledge of gestures and human perception habits, and is computationally cheap. Compared with template matching methods that depend on templates, edge analysis methods that require extensive curvature and distance computation, and heuristic methods involving complex thinning, the defect-based method locates the fingertip parts of the gesture quickly and simply, further improving the real-time performance of the system while maintaining accuracy.
3) Combining the SVM recognition result based on multi-feature fusion with the fingertip detection result to jointly decide the final output further raises the gesture recognition rate and reduces the misdetection risk of any single recognition method.
Brief description of the drawings
Fig. 1 is the flow chart of the gesture recognition of the present invention;
Fig. 2 is the flow chart of the training process;
Fig. 3 is the flow chart of fingertip detection.
Embodiments
The technical scheme of the present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
The following embodiment is implemented on the premise of the technical scheme of the present invention and gives a detailed implementation and concrete operating process, but the protection scope of the present invention is not limited to this embodiment.
Embodiment
In this embodiment, a video sequence (640×480 pixels, 30 fps) shot with a Logitech C710 webcam is processed. The video was shot at random in an indoor scene containing a complex background, skin-coloured background objects and illumination changes; the gesture classes comprise the seven gestures 0, 1, 2, 3, 4, 5 and 8. This embodiment comprises the following steps:
Step 1): Training process: input all gesture sample images into the training database one by one, select the Hu moment features, defect features and ratio features and, using the multi-feature-fusion extraction algorithm, train the support vector machine on the gestures to form the training model;
In this embodiment, Fig. 2 is the flow chart of the training process described in step 1); its detailed procedure is as follows:
Step 1.1): Training sample preparation:
Input the gesture sample images of the seven classes together with their class labels into the training database one by one; the database contains 1369 gesture sample images in total, with the seven gesture classes labelled 0, 1, 2, 3, 4, 5 and 8.
Step 1.2): Hu moment feature extraction:
Compute the Hu moment features of each gesture image and normalise them. The Hu moment feature can be written as
Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7)
where φ1-φ7 are the seven components of the Hu moment feature.
Step 1.3): Defect feature extraction:
The defect parts of a gesture are the regions obtained by subtracting the gesture contour from the gesture convex hull. The number of defects is computed as follows:
Obtain the gesture contour by 8-neighbourhood search; judge whether the polygon enclosing the gesture contour is convex, and if so, take that convex polygon as the gesture convex hull; smooth the hull by polygonal approximation; then compute the number of gesture defects from the gesture contour and the convex hull. Each defect comprises three points, the start point, the end point and the depth point, together with the distance between the depth point and the hull, defined respectively as ptStart, ptEnd, ptFar and Depth.
Step 1.4): Ratio feature extraction:
Contour perimeter to contour area ratio:
Define the gesture contour perimeter as ConLenght and the gesture contour area as ConArea. The contour perimeter-to-area feature of the gesture is
ConLA = ConLenght / ConArea
Contour perimeter to bounding-rectangle perimeter ratio:
Define the bounding-rectangle perimeter of the gesture as RectLenght. The perimeter ratio feature of the contour and the bounding rectangle is
LenCR = ConLenght / RectLenght
Contour area to bounding-rectangle area ratio:
Define the bounding-rectangle area of the gesture as RectArea. The area ratio feature of the contour and the bounding rectangle is
AreaCR = ConArea / RectArea
Bounding-rectangle aspect ratio:
Define the bounding-rectangle width as W and its height as H. The aspect-ratio feature is
α = W / H
Ratio of centroid distances to the upper and lower rectangle boundaries:
Let H1 and H2 denote the distances from the gesture centroid to the upper and lower boundaries of the bounding rectangle. The feature is
β = H1 / H2
Ratio of centroid distances to the left and right rectangle boundaries:
Let W1 and W2 denote the distances from the gesture centroid to the left and right boundaries of the bounding rectangle. The feature is
η = W1 / W2
Step 1.5): Multi-feature fusion:
Fuse the Hu moment feature, the defect feature and the six ratio features extracted in steps 1.2)-1.4) into a single feature vector feature characterising the gesture image:
feature = {Hu, numDefects, ConLA, LenCR, AreaCR, α, β, η}
where numDefects is the number of defects and Hu = (φ1, φ2, φ3, φ4, φ5, φ6, φ7).
Step 1.6): Support vector machine training:
Input the feature vector feature of each gesture sample image together with its class label into the support vector machine for training.
Step 2): Recognition process: perform gesture detection on the input image sequence, extract and fuse multiple features of the detected gesture, and input them into the SVM to obtain the SVM recognition result; at the same time, perform defect-based fingertip detection on the gesture, and combine the two results to output the final recognition result.
In this embodiment, Fig. 1 is the flow chart of the gesture recognition of the present invention; the detailed procedure of the recognition process described in step 2) is as follows:
Step 2.1): Gesture detection:
Perform skin-colour detection on the input video image sequence using a skin detection method built from multiple colour-space components: construct a new HLS-CbCr colour space, transform the image into the HLS-CbCr space, and build a skin-colour model from previously collected skin samples; according to the distribution of the skin model in the HLS-CbCr space, detect the skin regions in the image. At the same time, perform mixed-Gaussian background modelling improved with spatio-temporal information: establish a Gaussian mixture model for each background pixel to identify the background parts of the image and thereby extract the foreground region. Set a detection region R(x, y) according to the skin-detection result, assign different learning rates to the detection and non-detection regions, and record the number of times each pixel is judged to be background, adapting the learning rate to that count so that the foreground region of the image is detected more quickly. After the two detection results are combined and subjected to filtering and morphological operations, the binarised gesture segmentation image is obtained.
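The per-pixel skin test on the combined HLS-CbCr space can be sketched as follows; the thresholds are illustrative stand-ins (classic BT.601 CbCr skin bounds plus loose hue/saturation gates), not the patent's trained skin model, and the HLS conversion uses the stdlib `colorsys`:

```python
import colorsys

def is_skin(r, g, b):
    """Skin test on a combined HLS-CbCr space (thresholds are illustrative,
    not the patent's trained model)."""
    # ITU-R BT.601 chroma components from 8-bit RGB
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return (77 <= cb <= 127 and 133 <= cr <= 173   # CbCr skin cluster
            and (h <= 50 / 360 or h >= 340 / 360)  # reddish hues only
            and s >= 0.1)                          # exclude grey/white
```

Applying this test to every pixel yields the skin mask that is then intersected with the foreground mask from the Gaussian mixture background model before filtering and morphology.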
Step 2.2): Feature extraction:
Compute the feature vector of the gesture image from its binary map according to steps 1.2)-1.5);
Step 2.3): SVM recognition:
Input the feature vector of the gesture image into the support vector machine and output the SVM recognition result, which is one of the seven gesture types 0, 1, 2, 3, 4, 5 and 8.
Step 2.4): Fingertip detection:
Fig. 3 is the flow chart of fingertip detection. The number of fingertips equals the number of extended fingers; fingertip detection yields both the fingertip count and the fingertip positions. Obtain the gesture defects and the three defect points ptStart, ptEnd, ptFar of each defect according to step 1.3). Since the fingertip points are contained among these defect points, build a valid-defect-point filter; the defect points that pass the filter are the fingertip points. The filter conditions are as follows:
i. The distance between the start point and the depth point of the defect exceeds a given proportion of the bounding-rectangle height H:
Lenght(ptStart, ptFar) > αH, where α is a proportionality coefficient.
ii. The distance between the depth point and the end point of the defect exceeds the same proportion of H:
Lenght(ptEnd, ptFar) > αH
iii. The angle formed by the start point, depth point and end point of the defect is below a threshold T_angle:
Angle(ptStart, ptFar, ptEnd) < T_angle
iv. The start point, depth point and end point of the defect lie within a given range of the gesture bounding rectangle:
y_bounding < y_ptStart < y_bounding + βH
y_bounding < y_ptEnd < y_bounding + βH
y_bounding < y_ptFar < y_bounding + βH
where β is a proportionality coefficient and H is the height of the gesture bounding rectangle.
v. When the distance between two defect points is below T_dis, the two points approximately coincide and are judged to be the same defect point:
Lenght(pt_i, pt_j) < T_dis
A defect point that satisfies all the above conditions simultaneously is judged to be a valid defect point; the filter records the number and positions of the valid defect points, which are the fingertip points.
Step 2.5): Combining the recognition results:
Compare the result output by the support vector machine with the fingertip count output by fingertip detection; when the two are consistent, output the recognition result.
All experiments were run on a PC with an Intel(R) Core(TM) i5 CPU 750 @ 2.67 GHz and 4.00 GB of memory.

Claims (1)

1. A gesture recognition method based on multi-feature fusion and fingertip detection, characterised by comprising the following steps:
Step 1): Training process: select reasonable gesture features for complex gestures and, using the multi-feature-fusion extraction algorithm, train a support vector machine (SVM) on the gestures to form a training model;
Step 2): Recognition process: perform gesture detection on the input image sequence, extract and fuse multiple features of the detected gesture, and input them into the SVM to obtain the SVM recognition result; at the same time, perform defect-based fingertip detection on the gesture, and combine the SVM recognition result with the fingertip detection result to output the final recognition result; the defect parts of a gesture are the regions obtained by subtracting the gesture contour from the gesture convex hull;
The detailed procedure of the training process described in step 1) is as follows:
Step 1.1): Feature extraction:
Perform feature extraction on the gesture image, extracting the Hu moment features, defect features and ratio features of the gesture, the ratio features comprising the following six features: contour perimeter to contour area ratio, contour perimeter to bounding-rectangle perimeter ratio, contour area to bounding-rectangle area ratio, bounding-rectangle aspect ratio, ratio of the centroid distances to the upper and lower bounding-rectangle boundaries, and ratio of the centroid distances to the left and right bounding-rectangle boundaries;
Step 1.2): Multi-feature fusion:
Fuse the Hu moment feature, the defect feature and the six ratio features extracted in step 1.1) into a single feature vector characterising the gesture image;
Step 1.3): Support vector machine training:
Input the feature vector of each gesture sample image together with its class label into the support vector machine for training, where the class label identifies the gesture type;
The detailed procedure of the recognition process described in step 2) is as follows:
Step 2.1): Gesture detection:
Perform foreground detection and skin-colour detection on the input video image sequence, combine the two detection results, and then apply filtering and morphological operations to obtain a binarised gesture segmentation image;
Step 2.2): Feature extraction:
Perform feature extraction and fusion on the binary map of the gesture according to steps 1.1)-1.2) to obtain the feature vector of the gesture image;
Step 2.3): SVM recognition:
Input the feature vector of the gesture image into the support vector machine and output the SVM recognition result;
Step 2.4): Fingertip detection:
Using the fingertip detection method based on gesture defects, obtain the gesture defects and the three defect points ptStart, ptEnd, ptFar of each defect according to step 1.1), and build a valid-defect-point filter; the defect points that pass the filter are the fingertip points; the filter conditions are as follows:
i. The distance between the start point and the depth point of the defect exceeds a given proportion of the bounding-rectangle height H:
Lenght(ptStart, ptFar) > αH, where α is a proportionality coefficient;
ii. The distance between the depth point and the end point of the defect exceeds the same proportion of H:
Lenght(ptEnd, ptFar) > αH;
iii. The angle formed by the start point, depth point and end point of the defect is below a threshold T_angle:
Angle(ptStart, ptFar, ptEnd) < T_angle;
iv. The start point, depth point and end point of the defect lie within a given range of the gesture bounding rectangle:
y_bounding < y_ptStart < y_bounding + βH
y_bounding < y_ptEnd < y_bounding + βH
y_bounding < y_ptFar < y_bounding + βH
where β is a proportionality coefficient and H is the height of the gesture bounding rectangle;
v. When the distance between two defect points is below T_dis, the two points approximately coincide and are judged to be the same defect point:
Lenght(pt_i, pt_j) < T_dis;
A defect point that satisfies all the above conditions simultaneously passes the filter and is judged to be a valid defect point; the number and positions of the valid defect points are recorded, and these are the fingertip points;
Step 2.5): Combining the recognition results:
Compare the result output by the support vector machine with the fingertip count output by fingertip detection; when the two are consistent, output the recognition result.
CN201410568977.3A 2014-10-23 2014-10-23 Gesture recognition method based on multi-feature fusion and fingertip detection Active CN104299004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410568977.3A CN104299004B (en) 2014-10-23 2014-10-23 Gesture recognition method based on multi-feature fusion and fingertip detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410568977.3A CN104299004B (en) 2014-10-23 2014-10-23 Gesture recognition method based on multi-feature fusion and fingertip detection

Publications (2)

Publication Number Publication Date
CN104299004A CN104299004A (en) 2015-01-21
CN104299004B true CN104299004B (en) 2018-05-01

Family

ID=52318725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410568977.3A Active CN104299004B (en) 2014-10-23 2014-10-23 Gesture recognition method based on multi-feature fusion and fingertip detection

Country Status (1)

Country Link
CN (1) CN104299004B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295464A (en) * 2015-05-15 2017-01-04 济南大学 Gesture identification method based on Shape context
CN105678150A (en) * 2016-01-11 2016-06-15 成都布林特信息技术有限公司 User authority managing method
CN106599771B (en) * 2016-10-21 2019-11-22 上海未来伙伴机器人有限公司 A kind of recognition methods and system of images of gestures
CN107133562B (en) * 2017-03-17 2021-05-14 华南理工大学 Gesture recognition method based on extreme learning machine
WO2018184233A1 (en) * 2017-04-07 2018-10-11 深圳市柔宇科技有限公司 Hand gesture recognition method and related device
CN107133361B (en) * 2017-05-31 2020-02-07 北京小米移动软件有限公司 Gesture recognition method and device and terminal equipment
CN108932053B (en) * 2018-05-21 2021-06-11 腾讯科技(深圳)有限公司 Drawing method and device based on gestures, storage medium and computer equipment
CN109271838B (en) * 2018-07-19 2020-11-03 重庆邮电大学 FMCW radar-based three-parameter feature fusion gesture recognition method
CN111160173B (en) * 2019-12-19 2024-04-26 深圳市优必选科技股份有限公司 Gesture recognition method based on robot and robot
CN111626364B (en) * 2020-05-28 2023-09-01 中国联合网络通信集团有限公司 Gesture image classification method, gesture image classification device, computer equipment and storage medium
CN111950514B (en) * 2020-08-26 2022-05-03 重庆邮电大学 Depth camera-based aerial handwriting recognition system and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194097A (en) * 2010-03-11 2011-09-21 范为 Multifunctional method for identifying hand gestures
CN102402680A (en) * 2010-09-13 2012-04-04 株式会社理光 Hand and indication point positioning method and gesture confirming method in man-machine interactive system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8831379B2 (en) * 2008-04-04 2014-09-09 Microsoft Corporation Cartoon personalization
JP6155786B2 (en) * 2013-04-15 2017-07-05 オムロン株式会社 Gesture recognition device, gesture recognition method, electronic device, control program, and recording medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194097A (en) * 2010-03-11 2011-09-21 范为 Multifunctional method for identifying hand gestures
CN102402680A (en) * 2010-09-13 2012-04-04 株式会社理光 Hand and indication point positioning method and gesture confirming method in man-machine interactive system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于SVM的生物特征融合技术研究";周伟芳;《万方数据企业知识服务平台》;20130724;第3章、第5章 *
"融合深度数据的人机交互手势识别研究";张凯;《中国博士学位论文全文数据库(信息科技辑)》;20140515(第05期);第4.3.3节 *

Also Published As

Publication number Publication date
CN104299004A (en) 2015-01-21

Similar Documents

Publication Publication Date Title
CN104299004B (en) Gesture recognition method based on multi-feature fusion and fingertip detection
CN107168527B (en) The first visual angle gesture identification and exchange method based on region convolutional neural networks
CN107430771B (en) System and method for image segmentation
EP2980755B1 (en) Method for partitioning area, and inspection device
Khan et al. Hand gesture recognition: a literature review
Berger et al. Style and abstraction in portrait sketching
Jia et al. Category-independent object-level saliency detection
CN104298982B (en) A kind of character recognition method and device
CN102194108B (en) Smile face expression recognition method based on clustering linear discriminant analysis of feature selection
CN110796033B (en) Static gesture recognition method based on bounding box model
CN104463138B (en) The text positioning method and system of view-based access control model structure attribute
Li et al. Saliency based image segmentation
CN108846359A (en) It is a kind of to divide the gesture identification method blended with machine learning algorithm and its application based on skin-coloured regions
CN103440035A (en) Gesture recognition system in three-dimensional space and recognition method thereof
CN103034852A (en) Specific color pedestrian detecting method in static video camera scene
Fu et al. Robust image segmentation using contour-guided color palettes
CN113112498B (en) Grape leaf spot identification method based on fine-grained countermeasure generation network
Cui et al. Transductive object cutout
CN111080670A (en) Image extraction method, device, equipment and storage medium
CN105631456B (en) A kind of leucocyte method for extracting region based on particle group optimizing ITTI model
CN107392105B (en) Expression recognition method based on reverse collaborative salient region features
Chen et al. Salient object detection: integrate salient features in the deep learning framework
US9053383B2 (en) Recognizing apparatus and method, program, and recording medium
CN111259972A (en) Flotation bubble identification method based on cascade classifier
Zhang et al. Salient object detection through over-segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant