CN108614991A - A depth image gesture recognition method based on Hu invariant moments - Google Patents

A depth image gesture recognition method based on Hu invariant moments

Info

Publication number
CN108614991A
CN108614991A (application number CN201810184924.XA)
Authority
CN
China
Prior art keywords
hand region
depth image
gesture
depth
separated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810184924.XA
Other languages
Chinese (zh)
Inventor
王伟行
葛昊
邹耀
应忍冬
刘佩林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Digital Intelligent Technology Co Ltd
Original Assignee
Shanghai Digital Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Digital Intelligent Technology Co Ltd filed Critical Shanghai Digital Intelligent Technology Co Ltd
Priority to CN201810184924.XA priority Critical patent/CN108614991A/en
Publication of CN108614991A publication Critical patent/CN108614991A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A depth image gesture recognition method based on Hu invariant moments, disclosed by the invention, comprises the following steps: 1. a depth image is acquired; 2. the depth image is filtered according to its depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is extracted from the complex background; 3. Hu invariant moments are computed for the separated hand region to obtain its Hu invariant moment feature vector; 4. the Hu invariant moment feature vector is reduced in dimension with the PCA algorithm, and a trained linear SVM classifier recognizes and classifies the reduced feature vector, yielding the static gesture classification result. The invention obtains three-dimensional information directly from the depth image, effectively reduces the amount of computation, improves the robustness of the algorithm, and achieves real-time, high-accuracy, high-robustness gesture recognition.

Description

A depth image gesture recognition method based on Hu invariant moments
Technical field
The present invention relates to the technical field of human-computer interaction, and more particularly to a depth image gesture recognition method based on Hu invariant moments.
Background technology
With the continuous development of artificial intelligence technology, traditional modes of human-computer interaction are also changing and innovating: new interaction modes such as voice recognition, fingerprint recognition, face recognition and gesture recognition are being used more and more widely. Among them, gesture recognition has become one of the most important and effective approaches to human-computer interaction. Using simple gestures, a user can control or interact with a device and let the computer understand human behavior; the core technologies are gesture segmentation, gesture analysis and gesture recognition.
Current gesture recognition methods capture images with an ordinary RGB camera, predefine several gesture templates, segment the hand region from the background, extract contour features from the gesture to be recognized, perform shape matching, compute similarity, and classify according to the similarity result. This approach has low computational cost and a simple, easy-to-implement algorithm, but an RGB camera is sensitive to environmental factors such as illumination, color and texture, hand segmentation based on RGB images is comparatively difficult, and recognition accuracy is low, so the approach cannot perform well in real-time classification and recognition.
Hu invariant moments characterize the geometric features of an image region and are invariant under rotation, translation and scaling. In statistics, moments reflect the distribution of a random variable; generalized to mechanics, they describe the spatial distribution of mass. By the same principle, if the gray values of an image are regarded as a two- or three-dimensional density function, moment methods can be applied to images and used for feature extraction. In the field of image processing, Hu invariant moments can serve as the feature vector of a target object for object classification.
A depth image is an image whose pixel values are the distances (depths) from the image sensor to points in the scene; it directly reflects the geometry of the visible surfaces and allows the 3D information of a target object to be acquired directly. Common depth cameras include TOF and structured-light devices.
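The foreground/background separation this enables can be illustrated by a simple depth-band filter; the sketch below is illustrative only, with an assumed hand depth band of 400–800 mm that does not come from the patent.

```python
import numpy as np

def segment_hand(depth, near=400, far=800):
    """Keep only pixels whose depth falls in [near, far); zero the rest.

    `near` and `far` (millimeters) are illustrative placeholder values."""
    mask = (depth >= near) & (depth < far)
    return np.where(mask, depth, 0), mask

# Synthetic 4x4 depth frame: a 'hand' at ~600 mm in front of a wall at ~2000 mm.
frame = np.full((4, 4), 2000, dtype=np.uint16)
frame[1:3, 1:3] = 600
fg, mask = segment_hand(frame)
```

The resulting mask is the binary hand region that the later moment computations operate on.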
Based on Hu invariant moments and depth image technology, the applicant has carried out beneficial exploration and experimentation and has found a method that solves the problems of existing gesture recognition methods; the technical solution described below was produced against this background.
Summary of the invention
The technical problem to be solved by the invention: in view of the deficiencies of the prior art, to provide a depth image gesture recognition method based on Hu invariant moments that is unaffected by the surrounding environment, segments the hand region accurately, has high recognition accuracy and low computational cost, and achieves real-time classification and recognition.
The technical problem addressed by the invention is solved by the following technical scheme:
A depth image gesture recognition method based on Hu invariant moments comprises the following steps:
Step S10: acquire the depth image of the gesture made by the target person using a depth camera;
Step S20: filter the depth image according to its depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is separated from the complex background;
Step S30: compute Hu invariant moments for the separated hand region to obtain the Hu invariant moment feature vector of the hand region;
Step S40: reduce the dimension of the Hu invariant moment feature vector of the hand region with the PCA algorithm, then use a trained linear SVM classifier to recognize and classify the reduced feature vector, obtaining the static gesture classification result.
In a preferred embodiment of the invention, in step S30, computing Hu invariant moments for the separated hand region comprises the following steps:
Step S31: compute the contours of the separated hand region;
Step S32: detect the fingertips of the gesture as part of the outer contour, and set a threshold constraining the size of the hand region;
Step S33: compute Hu invariant moments separately for the inner and outer contours of the hand region, obtaining the Hu invariant moment feature vector of the hand region together with a series of features, such as the center of the palm and the size, that characterize the current gesture.
In a preferred embodiment of the invention, in step S40, the model training method of the linear SVM classifier comprises the following steps:
Step S41: design multiple predefined gestures, and capture each of them with the depth camera to form a depth image data set;
Step S42: filter each depth image in the data set according to its depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is separated from the complex background;
Step S43: compute Hu invariant moments for the separated hand region to obtain its Hu invariant moment feature vector;
Step S44: reduce the dimension of the Hu invariant moment feature vectors with the PCA algorithm, and train the linear SVM classifier on the reduced feature vectors of the hand region.
As a result of the above technical solution, the beneficial effects of the present invention are:
1. The invention can perform hand segmentation under complex backgrounds and harsh illumination conditions, effectively separating the hand region from the complex background;
2. By extracting and computing the Hu invariant moment feature vector of the hand region, the invention achieves invariance under rotation, translation and scaling;
3. The linear SVM classifier of the invention can recognize and classify at least six classes of static gestures, such as "five fingers open", "fist", "thumbs up", "victory", "six" and others, reaching a recognition and classification accuracy of 90% or more;
4. The invention also allows the user to add newly defined gestures;
5. The invention obtains three-dimensional information directly from the depth image, effectively reducing the amount of computation, improving the robustness of the algorithm, and achieving real-time, high-accuracy, high-robustness gesture recognition.
Description of the drawings
To explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is the flow diagram of the present invention.
Detailed description of the embodiments
To make the technical means, creative features, objectives and effects achieved by the invention easy to understand, the invention is further explained below with reference to specific illustrations.
Referring to Fig. 1, a depth image gesture recognition method based on Hu invariant moments is provided, comprising the following steps:
Step S10: acquire the depth image of the gesture made by the target person using a depth camera;
Step S20: since a depth image directly provides the distance from the target object to the depth camera, the depth image can be filtered according to this depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is separated from the complex background;
Step S30: compute Hu invariant moments for the separated hand region to obtain the Hu invariant moment feature vector of the hand region;
Step S40: reduce the dimension of the Hu invariant moment feature vector of the hand region with the PCA algorithm, then use the trained linear SVM classifier to recognize and classify the reduced feature vector, obtaining the static gesture classification result.
In step S30, computing Hu invariant moments for the separated hand region comprises the following steps:
Step S31: compute the contours of the separated hand region;
Step S32: detect the fingertips of the gesture as part of the outer contour, and set a threshold constraining the size of the hand region;
Step S33: compute Hu invariant moments separately for the inner and outer contours of the hand region, obtaining the Hu invariant moment feature vector of the hand region together with a series of features, such as the center of the palm and the size, that characterize the current gesture.
In step S40, the model training method of the linear SVM classifier comprises the following steps:
Step S41: design multiple predefined gestures, and capture each of them with the depth camera to form a depth image data set;
Step S42: filter each depth image in the data set according to its depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is separated from the complex background;
Step S43: compute Hu invariant moments for the separated hand region to obtain its Hu invariant moment feature vector;
Step S44: reduce the dimension of the Hu invariant moment feature vectors with the PCA algorithm, and train the linear SVM classifier on the reduced feature vectors of the hand region.
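The training procedure S41–S44 might look like the following sketch, which substitutes scikit-learn, a library the patent does not name, and synthetic clusters in place of real Hu moment features extracted from a gesture data set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Two synthetic 'gesture' classes, each a cluster of 7-dimensional vectors
# standing in for the Hu invariant moment features that steps S41-S43 are
# assumed to have produced.
X = np.vstack([rng.normal(0.0, 0.05, (50, 7)),
               rng.normal(1.0, 0.05, (50, 7))])
y = np.array([0] * 50 + [1] * 50)

# Step S44: PCA dimensionality reduction followed by a linear SVM.
model = make_pipeline(PCA(n_components=3), LinearSVC())
model.fit(X, y)
pred = model.predict(np.array([[0.02] * 7, [0.98] * 7]))
```

Wrapping PCA and the SVM in one pipeline guarantees that the same projection fitted at training time is applied at recognition time.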
The above shows and describes the basic principles, main features and advantages of the present invention. Those skilled in the art should appreciate that the invention is not limited to the above embodiments, which, together with the description, merely illustrate its principles; various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the protection scope of the claimed invention. The claimed scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. A depth image gesture recognition method based on Hu invariant moments, characterized in that it comprises the following steps:
Step S10: acquire the depth image of the gesture made by the target person using a depth camera;
Step S20: filter the depth image according to its depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is separated from the complex background;
Step S30: compute Hu invariant moments for the separated hand region to obtain the Hu invariant moment feature vector of the hand region;
Step S40: reduce the dimension of the Hu invariant moment feature vector of the hand region with the PCA algorithm, and use a trained linear SVM classifier to recognize and classify the reduced feature vector, obtaining the static gesture classification result.
2. The depth image gesture recognition method based on Hu invariant moments of claim 1, characterized in that in step S30, computing Hu invariant moments for the separated hand region comprises the following steps:
Step S31: compute the contours of the separated hand region;
Step S32: detect the fingertips of the gesture as part of the outer contour, and set a threshold constraining the size of the hand region;
Step S33: compute Hu invariant moments separately for the inner and outer contours of the hand region, obtaining the Hu invariant moment feature vector of the hand region together with a series of features, such as the center of the palm and the size, that characterize the current gesture.
3. The depth image gesture recognition method based on Hu invariant moments of claim 1, characterized in that in step S40, the model training method of the linear SVM classifier comprises the following steps:
Step S41: design multiple predefined gestures, and capture each of them with the depth camera to form a depth image data set;
Step S42: filter each depth image in the data set according to its depth information so that foreground and background are separated, and the hand region of the gesture made by the target person is separated from the complex background;
Step S43: compute Hu invariant moments for the separated hand region to obtain its Hu invariant moment feature vector;
Step S44: reduce the dimension of the Hu invariant moment feature vectors with the PCA algorithm, and train the linear SVM classifier on the reduced feature vectors of the hand region.
CN201810184924.XA 2018-03-06 2018-03-06 A depth image gesture recognition method based on Hu invariant moments Pending CN108614991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810184924.XA CN108614991A (en) 2018-03-06 2018-03-06 A depth image gesture recognition method based on Hu invariant moments

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810184924.XA CN108614991A (en) 2018-03-06 2018-03-06 A depth image gesture recognition method based on Hu invariant moments

Publications (1)

Publication Number Publication Date
CN108614991A (en) 2018-10-02

Family

ID=63658599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810184924.XA Pending CN108614991A (en) 2018-03-06 2018-03-06 A depth image gesture recognition method based on Hu invariant moments

Country Status (1)

Country Link
CN (1) CN108614991A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961010A (en) * 2019-02-16 2019-07-02 天津大学 A gesture recognition method based on an intelligent robot
CN110321828A (en) * 2019-06-27 2019-10-11 四川大学 A front vehicle detection method based on a binocular camera and vehicle underbody shadow
CN111102920A (en) * 2019-12-18 2020-05-05 佛山科学技术学院 Mechanical component quality inspection method and system based on augmented reality
CN111476158A (en) * 2020-04-07 2020-07-31 金陵科技学院 Multi-channel physiological signal somatosensory gesture recognition method based on PSO-PCA-SVM
CN111476158B (en) * 2020-04-07 2020-12-04 金陵科技学院 Multi-channel physiological signal somatosensory gesture recognition method based on PSO-PCA-SVM
CN111652925A (en) * 2020-06-29 2020-09-11 中国科学院合肥物质科学研究院 Method for extracting target global feature Hu invariant moment by using single-pixel imaging
CN111652925B (en) * 2020-06-29 2023-04-07 合肥中科迪宏自动化有限公司 Method for extracting target global feature Hu invariant moment by using single-pixel imaging
CN112836662A (en) * 2021-02-15 2021-05-25 苏州优它科技有限公司 Static gesture recognition method based on Kinect sensor
WO2022170635A1 (en) * 2021-02-15 2022-08-18 苏州优它科技有限公司 Kinect sensor-based HOG feature static gesture recognition method
CN112967290A (en) * 2021-02-22 2021-06-15 中国人民解放军空军航空大学 Method for automatically identifying enemies of target aircraft in air by unmanned aerial vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘阳: "《基于Kinect的手势识别技术研究》", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *


Similar Documents

Publication Publication Date Title
CN108614991A (en) A depth image gesture recognition method based on Hu invariant moments
CN107808143B (en) Dynamic gesture recognition method based on computer vision
CN107742102B (en) Gesture recognition method based on depth sensor
Xu et al. Online dynamic gesture recognition for human robot interaction
Zhu et al. Vision based hand gesture recognition using 3D shape context
CN108256421A (en) A real-time dynamic gesture sequence recognition method, system and device
CN104899600B (en) A hand feature point detection method based on depth maps
Nair et al. Hand gesture recognition system for physically challenged people using IOT
Oprisescu et al. Automatic static hand gesture recognition using tof cameras
Cheng et al. Image-to-class dynamic time warping for 3D hand gesture recognition
CN104392223B (en) Human posture recognition method in two-dimensional video image
CN110569817B (en) System and method for realizing gesture recognition based on vision
CN105759967B (en) A global hand pose detection method based on depth data
CN103971102A (en) Static gesture recognition method based on finger contours and decision trees
CN102521616B (en) Pedestrian detection method on basis of sparse representation
Huang et al. Hand gesture recognition with skin detection and deep learning method
Vishwakarma et al. An efficient interpretation of hand gestures to control smart interactive television
CN110232308A (en) Robot gesture track recognizing method is followed based on what hand speed and track were distributed
Ahuja et al. Hand gesture recognition using PCA
CN102402289A (en) Mouse recognition method for gesture based on machine vision
Prakash et al. Gesture recognition and finger tip detection for human computer interaction
Vishwakarma et al. Simple and intelligent system to recognize the expression of speech-disabled person
Thongtawee et al. A novel feature extraction for American sign language recognition using webcam
CN107133562B (en) Gesture recognition method based on extreme learning machine
Salunke et al. Power point control using hand gesture recognition based on hog feature extraction and K-NN classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181002