CN108255288A - Gesture detecting method based on acceleration compensation and complexion model - Google Patents


Info

Publication number
CN108255288A
Authority
CN
China
Prior art keywords
image
acceleration
gestures
complexion model
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611240826.0A
Other languages
Chinese (zh)
Inventor
钟鸿飞
覃争鸣
杨旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Original Assignee
Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou filed Critical Rich Intelligent Science And Technology Ltd Is Reflected In Guangzhou
Priority to CN201611240826.0A priority Critical patent/CN108255288A/en
Publication of CN108255288A publication Critical patent/CN108255288A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/113: Recognition of static hand signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a gesture detection method based on acceleration compensation and a skin color model, comprising the following steps: S1, image preprocessing: preprocess the gesture image captured by the camera; S2, acceleration and angle calculation: calculate the acceleration and angle change of the pixels in the image; S3, illumination balancing: compensate for illumination variation using histogram equalization; S4, gesture detection: detect the human-hand part of the image using the skin color model; S5, image filtering: filter out the non-gesture part and extract the gesture image. The scheme collects acceleration data with an acceleration sensor and computes tilt angles to eliminate the interference of head jitter or movement with the extraction of hand coordinates, fits the skin color model with linear equations, and performs gesture detection via skin color segmentation, reducing the interference caused by body shake and improving the accuracy of gesture detection.

Description

Gesture detecting method based on acceleration compensation and complexion model
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a gesture detection method based on acceleration compensation and a skin color model.
Background technology
To help disabled and elderly people keep in contact and communicate with the outside world, improve their ability to live independently, and lighten the burden on families and society, many scientists around the world have begun to explore and develop novel human-computer interaction methods. So-called interaction technology includes the interaction between a person and an actuator (such as a robot) and the interaction between the actuator and the environment. The significance of the former is that a person can carry out the planning and decision-making that the actuator cannot manage under unknown or uncertain conditions; the significance of the latter is that a robot can complete job tasks in hostile or remote environments that people cannot reach.
Traditional human-computer interaction devices mainly include the keyboard, mouse, handwriting pad, touch screen, and game controller; these devices realize human-computer interaction through the hand movements of the user. Gesture interaction supports more natural interaction modes and provides a human-centered rather than device-centered interaction technique, so that users can concentrate on what they are doing and on the content itself rather than on the device.
Common gesture interaction technologies fall into two categories: those based on data-glove sensors and those based on computer vision.
Gesture interaction based on data-glove sensors requires the user to wear hardware such as a data glove or position sensors; the sensors collect information such as finger state and motion trajectory, which the computer processes to recognize gesture actions and realize various interactive controls. The advantages of this approach are accurate and robust recognition, relatively simple algorithms, small and fast computation, and precise acquisition of the three-dimensional motion of the hand, entirely free from the ambient-lighting changes and complex backgrounds that trouble vision systems. The disadvantages are that the equipment is cumbersome to wear, costly, and inconvenient to operate, and gesture actions are somewhat constrained, so it is difficult to deploy at scale in actual production.
Gesture interaction based on computer vision uses machine vision to process and recognize the sequence of gesture images collected by a camera, thereby interacting with the computer. This method acquires gesture information with a camera, places fewer constraints on hand interaction, offers a better interactive experience, and achieves higher recognition accuracy; the accuracy of such interaction depends on the accuracy of gesture detection. However, the method acquires a large amount of information, processing the data model is difficult, and the detection rate drops when the hand shakes.
Invention content
The purpose of the present invention is to overcome the deficiencies of the prior art, and in particular to solve the problems of existing computer-vision-based gesture interaction technology: the acquired information is large, processing the data model is difficult, and the recognition rate decreases when the hand shakes.
To solve the above technical problems, the present invention proposes a gesture detection method based on acceleration compensation and a skin color model, whose main steps include:
S1, image preprocessing: preprocess the gesture image captured by the camera;
S2, acceleration and angle calculation: calculate the acceleration and angle change of the pixels in the image;
S3, illumination balancing: compensate for illumination variation using histogram equalization;
S4, gesture detection: detect the human-hand part of the image using the skin color model;
S5, image filtering: filter out the non-gesture part and extract the gesture image.
Further, the image preprocessing of step S1 includes image grayscaling, image smoothing, and image binarization.
Further, in the acceleration calculation of step S2, the rate of change of the acceleration value is combined with the movement speed of the gesture coordinate points in the image to determine whether the movement is caused by head or body shake, and the tilt angle of the acceleration value is combined with the movement range of the gesture coordinate points to determine whether it is caused by head or body movement.
Further, in the gesture detection of step S4, the skin color model is fitted with linear equations.
Compared with the prior art, the present invention has the following beneficial effects:
The scheme collects acceleration data with an acceleration sensor and computes tilt angles to eliminate the interference of head jitter or movement with the extraction of hand coordinates, fits the skin color model with linear equations, and realizes gesture detection via skin color segmentation, reducing the interference caused by body shake and improving the accuracy of gesture detection.
Description of the drawings
Fig. 1 is a flowchart of one embodiment of the gesture detection method based on acceleration compensation and a skin color model of the present invention.
Fig. 2 is a schematic diagram of the acceleration tilt angles in the embodiment of the present invention.
Fig. 3 shows the result of gesture detection using the skin color model in the embodiment of the present invention.
Fig. 4 shows the result of image filtering in the embodiment of the present invention.
Specific embodiment
The present invention is explained in further detail and completeness below in conjunction with the drawings and a specific embodiment. It should be understood that the specific embodiment described here is only used to explain the present invention and does not limit it.
Referring to Fig. 1, the main steps of the gesture detection method based on acceleration compensation and a skin color model of the embodiment of the present invention include:
S1, image preprocessing. The detailed process includes: S11 image grayscaling, S12 image binarization, S13 image smoothing.
S11, image grayscaling: the image captured by the camera is a color image, which contains a large amount of information and is slow to process. Considering the real-time requirement of human-computer interaction, it is necessary to convert the color image to grayscale. Grayscaling makes the R, G, and B components of a color pixel equal; the gray value in the grayscale image equals the average of the RGB values in the original color image, i.e.

Gray = (R + G + B) / 3    (1)
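The averaging of equation (1) can be sketched in a few lines of pure Python (a minimal illustration; the nested-list image representation and the function name are ours, not the patent's):

```python
def grayscale(pixels):
    """Average-based grayscaling, Gray = (R + G + B) / 3 (equation (1)).

    `pixels` is a list of rows, each row a list of (R, G, B) tuples;
    integer division keeps the result a valid 8-bit gray value."""
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in pixels]

img = [[(30, 60, 90), (255, 255, 255)],
       [(0, 0, 0), (10, 20, 33)]]
gray = grayscale(img)  # [[60, 255], [0, 21]]
```

In practice the weighted luminance formula (0.299R + 0.587G + 0.114B) is also common; the patent uses the simple average.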
S12, image binarization: binarize the image with the maximum between-class variance (Otsu) method. The process is:

Suppose the image has L gray levels, the number of pixels with gray value i is n_i, and the image has N pixels in total. Normalize the gray histogram by letting

p_i = n_i / N    (2)

Set a threshold t and divide the pixels into two classes c0 and c1 according to their gray values. The probability ω0 and mean μ0 of c0 are

ω0 = Σ_{i=0..t} p_i,  μ0 = ( Σ_{i=0..t} i·p_i ) / ω0    (3)

and the probability ω1 and mean μ1 of c1 are

ω1 = Σ_{i=t+1..L-1} p_i,  μ1 = ( Σ_{i=t+1..L-1} i·p_i ) / ω1    (4)

where the overall mean satisfies

μ = ω0·μ0 + ω1·μ1    (5)

It follows that the between-class variance σ²(t) of c0 and c1 is:

σ²(t) = ω0(μ − μ0)² + ω1(μ1 − μ)²    (6)

Let t take every value from 0 to L − 1; the t at which σ²(t) is maximized is the optimal threshold, which yields the best binary image.
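The exhaustive threshold search described above can be sketched in pure Python (an illustrative implementation under our own naming; the patent provides no code):

```python
def otsu_threshold(gray_values, levels=256):
    """Find the threshold t maximizing the between-class variance
    sigma^2(t) = w0*(mu - mu0)^2 + w1*(mu1 - mu)^2 (equation (6))."""
    n = len(gray_values)
    hist = [0] * levels
    for v in gray_values:
        hist[v] += 1
    p = [h / n for h in hist]                       # normalized histogram p_i
    mu_total = sum(i * p[i] for i in range(levels))  # overall mean, eq. (5)
    best_t, best_sigma = 0, -1.0
    w0 = mu0_sum = 0.0
    for t in range(levels):
        w0 += p[t]                  # class c0 probability, eq. (3)
        mu0_sum += t * p[t]
        w1 = 1.0 - w0               # class c1 probability, eq. (4)
        if w0 == 0.0 or w1 == 0.0:
            continue                # one class empty: variance undefined
        mu0 = mu0_sum / w0
        mu1 = (mu_total - mu0_sum) / w1
        sigma = w0 * (mu_total - mu0) ** 2 + w1 * (mu1 - mu_total) ** 2
        if sigma > best_sigma:
            best_sigma, best_t = sigma, t
    return best_t
```

For a clearly bimodal image the returned threshold separates the two modes, after which pixels above t become foreground and the rest background.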
S13, image smoothing: smooth the binary image with mathematical morphology to reduce image noise. The process is:

1) The erosion operator is Θ; the erosion of set A by set B is defined by formula (7):

A Θ B = { x | (B)_x ⊆ A }    (7)

2) The dilation operator is ⊕; the dilation of set A by set B is defined by formula (8):

A ⊕ B = { x | (B̂)_x ∩ A ≠ ∅ }    (8)

An edge image is obtained with the dilation-erosion gradient operator, i.e., by subtracting the eroded image from the dilated image. Since the edge obtained this way is not a single-pixel-wide connected line, it must then be thinned with a region skeleton extraction algorithm.

3) Let A be the image, B a structuring element, and S(A) the skeleton of A; then, as formula (9):

S(A) = ∪_{k=0..K} S_k(A)    (9)

where K is the number of iterations before A is eroded to the empty set, i.e., formula (10):

K = max{ k | A Θ kB ≠ ∅ }    (10)

S_k(A) is called the k-th skeleton subset and, per formula (11), can be written as:

S_k(A) = (A Θ kB) − (A Θ kB) ∘ B    (11)

where A Θ kB denotes eroding A by B k successive times and ∘ denotes the morphological opening.
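Equations (7) and (8) and the dilation-minus-erosion edge can be illustrated on a small point-set image (a pure-Python sketch; representing the binary image as a set of (x, y) coordinates is our choice, not the patent's):

```python
def shift(points, dx, dy):
    """Translate a set of (x, y) points by (dx, dy)."""
    return {(x + dx, y + dy) for (x, y) in points}

def erode(A, B):
    """A eroded by B: points p where B translated to p fits inside A (eq. 7)."""
    return {p for p in A if shift(B, *p) <= A}

def dilate(A, B):
    """A dilated by B: union of B translated to every point of A (eq. 8)."""
    out = set()
    for p in A:
        out |= shift(B, *p)
    return out

# 3x3 solid square as the image A, a one-pixel cross as structuring element B
A = {(x, y) for x in range(3) for y in range(3)}
B = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}
edge = dilate(A, B) - erode(A, B)   # dilation-erosion gradient edge
```

Only the center pixel of the square survives erosion by the cross, while the gradient keeps the square's boundary and drops its interior, which is exactly the edge image described above.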
S2, acceleration and angle calculation: calculate the acceleration and angle change of the pixels in the image. Take the typical orientation of the device as the reference, in which the x-axis and y-axis lie in the horizontal plane and the z-axis is perpendicular to it. Referring to Fig. 2, the schematic of the acceleration tilt angles: θ, ψ, and φ are respectively the angle between the sensor's x-axis and the horizontal, the angle between its y-axis and the horizontal, and the angle between its z-axis and the direction of gravitational acceleration. When the acceleration sensor is in the state shown in Fig. 2a, the tilt angles in all three directions are 0.

The projection of the gravity vector onto a coordinate axis, equal to the sine of the angle between the sensor direction and that axis, forms the output acceleration; the acceleration output values in the x, y, and z directions are Ax, Ay, Az.

The tilt angles θ, ψ, φ are respectively calculated as follows:

θ = arctan( Ax / √(Ay² + Az²) ),  ψ = arctan( Ay / √(Ax² + Az²) ),  φ = arctan( √(Ax² + Ay²) / Az )    (12)

Combining the rate of change of the acceleration value with the movement speed of the gesture coordinate points in the image determines whether the movement is caused by head or body shake; combining the tilt angle of the acceleration value with the movement range of the gesture coordinate points determines whether it is caused by head or body movement.
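The three tilt-angle formulas above are the standard accelerometer inclination equations and can be computed directly (a sketch with our own function name; `atan2` is used so the zero-denominator cases are handled):

```python
import math

def tilt_angles(ax, ay, az):
    """Tilt angles from accelerometer outputs Ax, Ay, Az (equation (12)):
    theta, psi = x-/y-axis angles to the horizontal plane,
    phi = z-axis angle to the direction of gravity."""
    theta = math.atan2(ax, math.sqrt(ay * ay + az * az))
    psi = math.atan2(ay, math.sqrt(ax * ax + az * az))
    phi = math.atan2(math.sqrt(ax * ax + ay * ay), az)
    return theta, psi, phi

flat = tilt_angles(0.0, 0.0, 9.8)   # device lying flat: all three angles 0
```

Tilting the device 90 degrees onto its x-axis (Ax = g, Az = 0) drives both θ and φ to π/2, matching Fig. 2's geometry.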
S3, illumination balancing: compensate for illumination variation using histogram equalization. Histogram equalization pulls apart the gray-level spacing of the image, or makes the gray distribution uniform, thereby increasing contrast and making image detail clear, achieving image enhancement. The specific method is:

First list all gray levels S_k (k = 0, 1, …, L−1) of the original image; then count the number of pixels n_k at each gray level; after computing the histogram of the original image with formula (13), compute its cumulative histogram with formula (14):

P(S_k) = n_k / n,  k = 0, 1, …, L−1    (13)

t_k = Σ_{j=0..k} n_j / n = Σ_{j=0..k} P(S_j)    (14)

where n is the total number of image pixels. Round the gray values t_k, determine the mapping S_k → t_k, and count the number of pixels n_k at each new gray level; finally compute the new histogram with formula (15):

p(t_k) = n_k / n    (15)
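The equalization mapping of formulas (13)-(15) can be sketched as follows (a minimal pure-Python illustration on a flat list of gray values; the scaling of the cumulative histogram to the 0..255 output range is the standard formulation, stated here as our assumption since the patent's equations stop at the mapping):

```python
def equalize(gray, levels=256):
    """Histogram equalization: map each gray level through the cumulative
    histogram (eq. (14)) so levels spread over the full output range."""
    n = len(gray)
    hist = [0] * levels
    for v in gray:
        hist[v] += 1                # histogram of eq. (13)
    cum = 0.0
    mapping = [0] * levels
    for k in range(levels):
        cum += hist[k] / n          # cumulative histogram t_k of eq. (14)
        mapping[k] = round((levels - 1) * cum)   # rounded S_k -> t_k mapping
    return [mapping[v] for v in gray]

low_contrast = [100, 100, 100, 101, 101, 101]
out = equalize(low_contrast)        # two adjacent levels pushed far apart
```

The two neighboring input levels 100 and 101 end up widely separated in the output, which is the contrast increase the text describes.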
S4, gesture detection: detect the human-hand part of the image using the skin color model. Using the determined maximum and minimum values of Cb and Cr, a rectangular skin color model bounded by linear equations is built; the model can be expressed by the four straight lines L1, L2, L3, L4 as follows:

L1: Cb × T1 + T2 < Cr    (16)
L2: Cb × T3 + T4 < Cr    (17)
L3: Cb × T5 + T6 > Cr    (18)
L4: Cb × T7 + T8 > Cr    (19)

where T1 = −1.22265625, T2 = 267.3330078125, T3 = 0.875, T4 = 29.375, T5 = −1.3330078125, T6 = 316.3330078125, T7 = 0.064453125, T8 = 170.612903225. The parameters of this linear-equation segmentation model are obtained by offline training on a limited set of images; the offline-trained parameters then detect skin color in different application scenarios. The result of gesture detection is shown in Fig. 3.
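A pixel is classified as skin when its (Cb, Cr) pair satisfies all four inequalities (16)-(19) simultaneously. This can be written directly (a sketch; note we pair T3/T4 with line L2 so that all eight trained parameters are used, correcting what appears to be an index typo in the original text):

```python
# Offline-trained thresholds T1..T8 from the description
T = {1: -1.22265625, 2: 267.3330078125,
     3: 0.875,       4: 29.375,
     5: -1.3330078125, 6: 316.3330078125,
     7: 0.064453125,   8: 170.612903225}

def is_skin(cb, cr):
    """True when (Cb, Cr) lies inside the region bounded by the four
    straight lines L1..L4 of equations (16)-(19)."""
    return (cb * T[1] + T[2] < cr and   # L1, eq. (16)
            cb * T[3] + T[4] < cr and   # L2, eq. (17)
            cb * T[5] + T[6] > cr and   # L3, eq. (18)
            cb * T[7] + T[8] > cr)      # L4, eq. (19)
```

For example, a typical skin-tone chroma pair such as (Cb, Cr) = (110, 150) falls inside the region, while (110, 100) falls below line L2 and is rejected.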
S5, image filtering: filter out the non-gesture part and extract the gesture image. From the hand region obtained by gesture detection, an 8×8-pixel patch is extracted as the skin color reference region and used to adjust the intercepts of the four boundary lines L1, L2, L3, L4 of the linear-equation segmentation model; the intercept adjustment range of each line is 16 units. In this embodiment, the initial intercept of L1 is 267, so during adjustment the intercept of L1 takes integer values in [259, 275] in turn until the best intercept is reached; the intercepts of L2, L3, and L4 are adjusted in the same way. The criterion for the best intercept is the group of intercepts that attains the highest performance score S, computed as follows:

S = (N_ref / A_ref) / (N_neigh / A_neigh)    (20)

where N_ref is the number of pixels belonging to skin color in the skin color reference region, N_neigh is the number of pixels belonging to skin color in the neighborhood of the tracking initialization frame, A_ref is the size of the reference region, and A_neigh is the size of the neighborhood. S is thus defined as the ratio between the proportion of the reference region detected as skin and the proportion of the non-target area detected as skin. The result of image filtering is shown in Fig. 4.
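The score and the per-line intercept search can be sketched as follows (illustrative only: the function names and the `evaluate` callback, which would rerun skin detection with a candidate intercept and return its score S, are our assumptions):

```python
def score(n_ref, a_ref, n_neigh, a_neigh):
    """Performance score S (eq. (20)): skin-detection rate inside the
    reference region divided by the rate in the non-target neighborhood."""
    return (n_ref / a_ref) / (n_neigh / a_neigh)

def best_intercept(evaluate, center, half_range=8):
    """Try integer intercepts in [center - 8, center + 8] (a 16-unit range,
    e.g. [259, 275] for L1's initial intercept 267) and keep the one whose
    evaluation score is highest."""
    candidates = range(center - half_range, center + half_range + 1)
    return max(candidates, key=evaluate)
```

A reference patch of 8×8 = 64 pixels with 60 detected as skin, against a neighborhood of 100 pixels with 5 skin detections, gives S = (60/64) / (5/100) = 18.75; raising the first ratio or lowering the second raises S, which is why maximizing S tightens the model around the true hand region.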

Claims (4)

1. A gesture detection method based on acceleration compensation and a skin color model, characterized by comprising the following steps:
S1, image preprocessing: preprocess the gesture image captured by the camera;
S2, acceleration and angle calculation: calculate the acceleration and angle change of the pixels in the image;
S3, illumination balancing: compensate for illumination variation using histogram equalization;
S4, gesture detection: detect the human-hand part of the image using the skin color model;
S5, image filtering: filter out the non-gesture part and extract the gesture image.
2. The gesture detection method based on acceleration compensation and a skin color model according to claim 1, characterized in that the image preprocessing of step S1 includes image grayscaling, image smoothing, and image binarization.
3. The gesture detection method based on acceleration compensation and a skin color model according to claim 1, wherein in the acceleration calculation of step S2, the rate of change of the acceleration value is combined with the movement speed of the gesture coordinate points in the image to determine whether the movement is caused by head or body shake, and the tilt angle of the acceleration value is combined with the movement range of the gesture coordinate points to determine whether it is caused by head or body movement.
4. The gesture detection method based on acceleration compensation and a skin color model according to claim 1, characterized in that in the gesture detection of step S4, the skin color model is fitted with linear equations.
CN201611240826.0A 2016-12-29 2016-12-29 Gesture detecting method based on acceleration compensation and complexion model Pending CN108255288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611240826.0A CN108255288A (en) 2016-12-29 2016-12-29 Gesture detecting method based on acceleration compensation and complexion model


Publications (1)

Publication Number Publication Date
CN108255288A true CN108255288A (en) 2018-07-06

Family

ID=62720429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611240826.0A Pending CN108255288A (en) 2016-12-29 2016-12-29 Gesture detecting method based on acceleration compensation and complexion model

Country Status (1)

Country Link
CN (1) CN108255288A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801320A * 2019-01-26 2019-05-24 武汉嫦娥医学抗衰机器人股份有限公司 A kind of dry skin state Intelligent Identify method and system based on facial subregion
CN109801320B * 2019-01-26 2020-12-01 武汉嫦娥医学抗衰机器人股份有限公司 Intelligent skin dryness state identification method and system based on facial partition


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180706