CN108133206A - Static gesture recognition method, device, and readable storage medium - Google Patents
- Publication number: CN108133206A
- Application number: CN201810142872.XA
- Authority
- CN
- China
- Prior art keywords
- image
- gesture
- point
- segmentation
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the present invention provides a static gesture recognition method, a device, and a readable storage medium. The embodiment carries out gesture segmentation on a static gesture image to be recognized to obtain a gesture segmentation image, calculates the integral image of the gesture segmentation image, and builds the scale space corresponding to the gesture segmentation image from the integral image. It then searches the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition, screens target feature points from the extreme points found, extracts the feature values of the target feature points, and recognizes those feature values with a Hu invariant moment algorithm to obtain the static gesture recognition result. This effectively improves the robustness and recognition rate of static gesture recognition and solves the problem of feature information lost during gesture segmentation due to angle and scale variation.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to a static gesture recognition method, a device, and a readable storage medium.
Background technology
Current research on static gesture recognition generally assumes good illumination, a simple background, and gesture input without angle or scale variation. Under illumination changes or complex backgrounds, however, the recognition rate drops sharply, and angle and scale variation cause feature information to be lost during gesture segmentation, which easily leads to misrecognition. How to improve the robustness and recognition rate of static gesture recognition is therefore a technical problem urgently to be solved by those skilled in the art.
Summary of the invention
To overcome the above deficiencies of the prior art, the purpose of the present invention is to provide a static gesture recognition method, a device, and a readable storage medium that can effectively improve the robustness and recognition rate of static gesture recognition and solve the problem of feature information lost during gesture segmentation due to angle and scale variation.
To achieve these goals, the technical solution adopted by the preferred embodiments of the present invention is as follows:
A preferred embodiment of the present invention provides a static gesture recognition method applied to an electronic device. The method includes:

carrying out gesture segmentation on a static gesture image to be recognized to obtain a gesture segmentation image, the static gesture image including a depth image and a color image;

calculating the integral image of the gesture segmentation image, and building the scale space corresponding to the gesture segmentation image from the integral image;

searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition, and screening target feature points from all the extreme points found;

extracting the feature values of the target feature points, and recognizing the feature values of the target feature points with a Hu invariant moment algorithm to obtain a static gesture recognition result.
In a preferred embodiment, the step of carrying out gesture segmentation on the static gesture image to be recognized to obtain the gesture segmentation image includes:

carrying out image segmentation on the depth image based on a grayscale histogram to obtain a first gesture segmentation result for the depth image;

carrying out image segmentation on the color image based on the first gesture segmentation result to obtain a second gesture segmentation result for the color image;

fusing the first gesture segmentation result and the second gesture segmentation result to obtain the gesture segmentation image.
In a preferred embodiment, the step of carrying out image segmentation on the depth image based on the grayscale histogram to obtain the first gesture segmentation result for the depth image includes:

carrying out grayscale threshold segmentation on the depth image to obtain a binary image of the static gesture, the binary image serving as the first gesture segmentation result.
In a preferred embodiment, the step of carrying out image segmentation on the color image based on the first gesture segmentation result to obtain the second gesture segmentation result for the color image includes:

calculating the minimum enclosing rectangle of the static gesture from the binary image, obtaining the two-dimensional coordinates of that rectangle, and mapping the coordinates onto the corresponding color image to obtain the minimum enclosing rectangle containing the static gesture;

carrying out skin-color segmentation on the minimum enclosing rectangle to obtain a skin-color binary image of the rectangle;

segmenting the second gesture segmentation result out of the color image by means of the binary image and the skin-color binary image.
In a preferred embodiment, the step of searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition and screening target feature points from all the extreme points found includes:

searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition;

searching the extreme points for the target extreme points at a target scale;

carrying out three-dimensional neighborhood non-maximum suppression on the target extreme points to obtain the location information of the local extreme points;

describing each feature point with a feature descriptor based on the location information of the local extreme points to obtain the feature vector of each feature point;

calculating the SURF similarity measure and Euclidean-distance similarity measure of each feature vector to obtain a preliminary feature-point screening result;

ranking the feature points by the computed Euclidean distances of their feature vectors, and selecting at least two top-ranked groups of feature points as datum points;

calculating the distance from every feature point other than the datum points to each datum point, and the angle between every feature point other than the datum points and each datum point;

screening the target feature points from all the extreme points found according to the distances and the angles.
In a preferred embodiment, the step of recognizing the feature values of the target feature points with the Hu invariant moment algorithm includes:

converting the feature values of the target feature points into Hu moment feature values with the Hu invariant moment algorithm;

recognizing the feature values of the target feature points through the converted Hu moment feature values to obtain the static gesture recognition result.
A preferred embodiment of the present invention also provides a static gesture recognition device applied to an electronic device. The device includes:

a gesture segmentation module for carrying out gesture segmentation on a static gesture image to be recognized to obtain a gesture segmentation image, the static gesture image including a depth image and a color image;

a building module for calculating the integral image of the gesture segmentation image and building the scale space corresponding to the gesture segmentation image from the integral image;

a searching module for searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition and screening target feature points from all the extreme points found;

a recognition module for extracting the feature values of the target feature points and recognizing them with a Hu invariant moment algorithm to obtain a static gesture recognition result.
A preferred embodiment of the present invention also provides a readable storage medium storing a computer program that, when executed, implements the static gesture recognition method described above.
Compared with the prior art, the present invention has the following beneficial effects:
The embodiment of the present invention provides a static gesture recognition method, a device, and a readable storage medium. Gesture segmentation is carried out on a static gesture image to be recognized to obtain a gesture segmentation image; the integral image of the gesture segmentation image is calculated and the corresponding scale space is built from it; all extreme points satisfying the Hessian-matrix-based extreme-point decision condition are searched in the scale space and target feature points are screened from them; finally, the feature values of the target feature points are extracted and recognized with a Hu invariant moment algorithm to obtain the static gesture recognition result. This improves the anti-interference capability of static gesture recognition, giving higher robustness and recognition rate under interference such as illumination variation and complex backgrounds, and solves the problem of feature information lost during gesture segmentation due to angle and scale variation.
Description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope. Those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
Fig. 1 is a flow diagram of a static gesture recognition method provided by a preferred embodiment of the present invention;

Fig. 2 is a flow diagram of the sub-steps included in step S210 shown in Fig. 1;

Fig. 3 is a functional block diagram of a static gesture recognition device provided by a preferred embodiment of the present invention;

Fig. 4 is a structural block diagram of an electronic device for implementing the above static gesture recognition method, provided by a preferred embodiment of the present invention.
Reference numerals: 100 - electronic device; 110 - bus; 120 - processor; 130 - storage medium; 140 - bus interface; 150 - network adapter; 160 - user interface; 200 - static gesture recognition device; 210 - gesture segmentation module; 220 - building module; 230 - searching module; 240 - recognition module.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as described and illustrated in the drawings herein, can generally be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present invention, but merely represents selected embodiments. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope protected by the present invention.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. In the description of the present invention, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Referring to Fig. 1, a flow diagram of the static gesture recognition method provided by a preferred embodiment of the present invention is shown. It should be noted that the method provided by the embodiment of the present invention is not limited to Fig. 1 or to the particular order described below. The specific flow of the method is as follows:
Step S210: carry out gesture segmentation on the static gesture image to be recognized to obtain a gesture segmentation image.
In the present embodiment, the static gesture image includes a depth image and a color image. In detail, image information containing the static gesture can be acquired by a Kinect sensor, the image information including a depth image and an RGB color image. The depth image carries three-dimensional characteristic information of the object, i.e., depth information. Because a depth image is not affected by the direction of the light source or the emission characteristics of the object surface, and contains no shadows, it can more accurately represent the three-dimensional depth information of the acquired target surface.
The inventors found that when the depth image is processed alone, a wrist or elbow may be wrongly detected as the gesture, the gesture may be occluded by objects, and shadows produced by slight hand tremor affect the segmentation result. Conversely, gesture segmentation on the color image alone, based on skin-color features, is easily disturbed by interference such as illumination and skin-colored objects. To solve these problems, in one embodiment, referring to Fig. 2, step S210 can be realized by the following sub-steps:
Sub-step S211: carry out image segmentation on the depth image based on a grayscale histogram to obtain a first gesture segmentation result for the depth image.

In the present embodiment, a binary image of the static gesture can be obtained by carrying out grayscale threshold segmentation on the depth image, the binary image serving as the first gesture segmentation result. For example, a threshold-based grayscale image segmentation algorithm can be used to process the depth image: the gesture is distinguished from the background by determining a grayscale threshold, the gray value of each pixel is compared with the threshold to assign the pixel to the gesture region or the background, and a grayscale histogram is obtained. A suitable segmentation threshold is chosen from the grayscale histogram, the gesture is segmented, and a binary image containing the gesture is obtained.
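Choosing a "suitable segmentation threshold" from the grayscale histogram can be done, for example, with Otsu's method; the patent does not name a specific selection rule, so the following plain-Python sketch is only one reasonable choice. It also assumes the gesture is the near (low depth value) region, which is an assumption about the sensor's depth encoding:

```python
def otsu_threshold(hist):
    """Pick a threshold from a 256-bin grayscale histogram by
    maximising the between-class variance (Otsu's method)."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]           # background pixel count so far
        if w_bg == 0:
            continue
        w_fg = total - w_bg       # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment(depth, threshold):
    """Binarise a depth image (list of rows); pixels at or below the
    threshold (assumed nearer to the camera) become foreground."""
    return [[1 if px <= threshold else 0 for px in row] for row in depth]
```

On a clearly bimodal histogram (hand near, background far) the threshold lands between the two modes and the binary mask isolates the near region.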
Sub-step S212: carry out image segmentation on the color image based on the first gesture segmentation result to obtain a second gesture segmentation result for the color image.

Sub-step S213: fuse the first gesture segmentation result and the second gesture segmentation result to obtain the gesture segmentation image.
In the present embodiment, the minimum enclosing rectangle of the static gesture can first be calculated from the binary image and its two-dimensional coordinates obtained; the coordinates are mapped onto the corresponding color image to obtain the minimum enclosing rectangle containing the static gesture. Skin-color segmentation is then carried out on the minimum enclosing rectangle to obtain a skin-color binary image of the rectangle. Finally, the second gesture segmentation result is segmented out of the color image by means of the binary image and the skin-color binary image; specifically, the static gesture information can be segmented out of the image information by carrying out an AND operation on the binary image and the skin-color binary image.
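The rectangle extraction and the final AND fusion can be sketched as below. This is an illustrative plain-Python reading: the bounding rectangle is axis-aligned (a true minimum enclosing rectangle could be rotated), and the patent gives no implementation details:

```python
def bounding_rect(mask):
    """Axis-aligned bounding rectangle (x, y, w, h) of the foreground
    (value 1) pixels of a binary mask given as a list of rows."""
    coords = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not coords:
        return None
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def fuse_masks(depth_mask, skin_mask):
    """Pixel-wise AND of the depth-based and skin-colour binary masks:
    a pixel belongs to the gesture only if both cues agree."""
    return [[a & b for a, b in zip(dr, sr)]
            for dr, sr in zip(depth_mask, skin_mask)]
```

The AND fusion is what lets each cue veto the other's false positives: skin-colored background objects fail the depth mask, while wrist and elbow regions passing the depth mask fail the skin mask.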
Step S220: calculate the integral image of the gesture segmentation image, and build the scale space corresponding to the gesture segmentation image from the integral image.
Step S230: search the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition, and screen target feature points from all the extreme points found.
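The Hessian-based decision condition can be illustrated with plain finite differences. SURF proper approximates the second derivatives with box filters over the integral image and down-weights the mixed term; the sketch below omits those details and is only a simplified illustration of the decision condition:

```python
def hessian_det(img, x, y):
    """Determinant-of-Hessian response at pixel (x, y) using discrete
    second derivatives Lxx, Lyy and the mixed derivative Lxy."""
    Lxx = img[y][x - 1] - 2 * img[y][x] + img[y][x + 1]
    Lyy = img[y - 1][x] - 2 * img[y][x] + img[y + 1][x]
    Lxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
           - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
    return Lxx * Lyy - Lxy ** 2

def is_candidate(img, x, y, threshold=0.0):
    """A pixel is an extreme-point candidate when its determinant-of-
    Hessian response exceeds the threshold."""
    return hessian_det(img, x, y) > threshold
```

A blob-like intensity peak gives a strongly positive determinant (both curvatures large with the same sign), while edges and flat regions do not, which is why the determinant serves as the decision condition.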
Through careful study, and in view of the misrecognition caused by angle and scale transformation during gesture segmentation, the inventors propose a feature extraction method that combines erroneous-feature-point screening with the SURF algorithm. In detail, the feature extraction method can be realized as follows:

First, all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition are searched in the scale space, the target extreme points at the target scale are searched among them, and three-dimensional neighborhood non-maximum suppression is carried out on the target extreme points to obtain the location information of the local extreme points. Then, each feature point is described with a feature descriptor based on the location information of the local extreme points to obtain its feature vector, and the SURF similarity measure and Euclidean-distance similarity measure of each feature vector are calculated to obtain a preliminary feature-point screening result; the feature points are then ranked by the computed Euclidean distances of their feature vectors, and at least two top-ranked groups of feature points are selected as datum points. Finally, the distance from every feature point other than the datum points to each datum point, and the angle between every such feature point and each datum point, are calculated, and the target feature points are screened from all the extreme points found according to the distances and the angles.
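The Euclidean-distance ranking and datum-point selection can be sketched as below; treating "top-ranked" as "smallest descriptor distance" is an illustrative reading, since the patent does not make the ordering explicit:

```python
def pick_datum_points(matches, n=2):
    """matches: list of (point, descriptor_distance) pairs from the
    preliminary SURF matching. The n points with the smallest
    descriptor distance are kept as datum points."""
    ranked = sorted(matches, key=lambda m: m[1])  # ascending distance
    return [p for p, _ in ranked[:n]]
```

The most reliably matched points become the datum points against which all remaining points are later checked for distance and angle consistency.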
In detail, in the above process, the integral image of the gesture segmentation image is first calculated and the scale space is built; all extreme points satisfying the Hessian extreme-point decision condition are then found in the scale space by computing the detection response, and non-maximum suppression over a three-dimensional neighborhood is applied to the extreme points at a given scale to obtain the precise location information of the local extreme points. Feature description is then carried out to obtain the feature vector of each feature point, and the SURF similarity measure and Euclidean-distance similarity measure of the feature points are calculated to obtain the preliminary screening result. The Euclidean distances of the computed feature vectors are then sorted in ascending order to obtain two point sets, and the two foremost groups of feature points are selected as datum points. Next, the distances d1 and d1' from every feature point other than the datum points to the datum points p1 and p1' are calculated; to guarantee the accuracy of distance comparison when the image undergoes scale variation, the present embodiment weights the distance d by the gesture contour perimeter L to obtain a weighted distance D, and judges whether the weighted distance holds. Finally, an angle-parameter consistency check is carried out: the distance from every feature point other than the datum points to each datum point, and the angle between every such feature point and each datum point, are calculated. If the corresponding angle and distance hold, i.e., fall within the error range, the feature point is judged to be a correct feature point; otherwise it is an erroneous feature point and is rejected. This solves the problem of feature information lost during gesture segmentation due to angle and scale variation.
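The distance-and-angle consistency check can be sketched as below. The perimeter normalisation mirrors the patent's weighting of distances by the gesture contour perimeter L; the tolerance values and the exact form of the test are illustrative assumptions, since the patent gives no concrete formulas:

```python
import math

def screen_points(points, p1, p2, ref_dist, ref_angle, perimeter,
                  dist_tol=0.1, angle_tol=0.1):
    """Keep a feature point only when its perimeter-normalised distance
    to datum point p1 and its angle at p1 (relative to the p1->p2
    direction) both stay within a tolerance of the reference values."""
    kept = []
    for p in points:
        d = math.dist(p, p1) / perimeter   # scale-invariant distance
        ang = (math.atan2(p[1] - p1[1], p[0] - p1[0])
               - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
        if abs(d - ref_dist) <= dist_tol and abs(ang - ref_angle) <= angle_tol:
            kept.append(p)
    return kept
```

Dividing by the contour perimeter makes the distance test independent of image scale, which is exactly the motivation the embodiment gives for the weighting.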
Step S240: extract the feature values of the target feature points, and recognize them with a Hu invariant moment algorithm to obtain the static gesture recognition result.
In the present embodiment, the feature values of the target feature points can be converted into Hu moment feature values with the Hu invariant moment algorithm, and the feature values of the target feature points can then be recognized through the converted Hu moment feature values to obtain the static gesture recognition result.
In detail, the static gesture recognition based on the Hu moment algorithm proceeds as follows:

From the expressions above it is easy to see that, in the discrete case, when the scale of the image changes, the normalized central moment values depend not only on the order of the moment but also on the scale factor, all of which affects the Hu invariant moments. As a result, when the space vectors built from the Hu moments are matched and compared, both the recognition rate and the robustness decrease.

Therefore, to solve this problem, the inventors improve the above expressions by using a scale-normalization method to eliminate the influence of the scale factor on the Hu moments, and construct a new group of moment features as follows:

The image is matched using the constructed moment feature values; the improved moments M1-M6, together with the original invariant, are selected for static gesture recognition, which effectively improves the recognition rate and robustness of static gesture recognition.
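The standard construction underlying the Hu invariant moments — scale-normalised central moments η_pq — can be sketched as below. Only the first three of the seven classical invariants are shown; the patent's improved moments M1-M6 are given by formulas not reproduced in this text, so they are not implemented here:

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq of an image given as a list of rows."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_moments(img):
    """First three classical Hu invariants, built from central moments
    normalised by m00 ** (1 + (p + q) / 2)."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00   # centroid
    cy = raw_moment(img, 0, 1) / m00

    def eta(p, q):
        # scale-normalised central moment eta_pq
        mu = sum(((x - cx) ** p) * ((y - cy) ** q) * v
                 for y, row in enumerate(img) for x, v in enumerate(row))
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    h3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return h1, h2, h3
```

Translation invariance is exact (the centroid subtraction removes position), while in the discrete case scale invariance is only approximate — precisely the residual scale-factor dependence the inventors' improved moments aim to eliminate.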
Further, referring to Fig. 3, a preferred embodiment of the present invention also provides a static gesture recognition device 200. The device can include:
a gesture segmentation module 210 for carrying out gesture segmentation on a static gesture image to be recognized to obtain a gesture segmentation image, the static gesture image including a depth image and a color image;

a building module 220 for calculating the integral image of the gesture segmentation image and building the scale space corresponding to the gesture segmentation image from the integral image;

a searching module 230 for searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition and screening target feature points from all the extreme points found;

a recognition module 240 for extracting the feature values of the target feature points and recognizing them with a Hu invariant moment algorithm to obtain a static gesture recognition result.
In one embodiment, the manner of searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition and screening target feature points from all the extreme points found includes:

searching the scale space for all extreme points that satisfy the Hessian-matrix-based extreme-point decision condition;

searching the extreme points for the target extreme points at a target scale;

carrying out three-dimensional neighborhood non-maximum suppression on the target extreme points to obtain the location information of the local extreme points;

describing each feature point with a feature descriptor based on the location information of the local extreme points to obtain the feature vector of each feature point;

calculating the SURF similarity measure and Euclidean-distance similarity measure of each feature vector to obtain a preliminary feature-point screening result;

ranking the feature points by the computed Euclidean distances of their feature vectors, and selecting at least two top-ranked groups as datum points;

calculating the distance from every feature point other than the datum points to each datum point, and the angle between every feature point other than the datum points and each datum point;

screening the target feature points from all the extreme points found according to the distances and the angles.
In one embodiment, the manner of recognizing the feature values of the target feature points with the Hu invariant moment algorithm includes:

converting the feature values of the target feature points into Hu moment feature values with the Hu invariant moment algorithm;

recognizing the feature values of the target feature points through the converted Hu moment feature values to obtain the static gesture recognition result.
For the concrete operation of each functional module in the present embodiment, reference can be made to the detailed description of the corresponding steps in the method embodiment above, which is not repeated here.
Further, referring to Fig. 4, a structural block diagram of an electronic device 100 provided by a preferred embodiment of the present invention is shown. In the present embodiment, the electronic device 100 can be a terminal device with computing capability, such as a mobile terminal or a server.
As shown in Fig. 4, the electronic device 100 can be realized with a general bus architecture through a bus 110. According to the concrete application of the electronic device 100 and the overall design constraints, the bus 110 can include any number of interconnecting buses and bridges. The bus 110 electrically connects various circuits together, including a processor 120, a storage medium 130, and a bus interface 140. Optionally, the electronic device 100 can be connected via the bus 110, using the bus interface 140, to a network adapter 150 or the like. The network adapter 150 can be used to realize the signal-processing functions of the physical layer in a wireless communication network and to send and receive radio-frequency signals through an antenna. A user interface 160 can connect external devices, such as a keyboard, a display, a mouse, or a joystick. The bus 110 can also connect various other circuits, such as timing sources, peripherals, voltage regulators, or power-management circuits; these circuits are well known in the art and are therefore not described in detail.
Alternatively, the electronic device 100 may be configured as a general processing system, commonly referred to as a chip, which includes one or more microprocessors providing the processing function and an external memory providing at least part of the storage medium 130, all linked together with other supporting circuits through an external bus architecture.

Alternatively, the electronic device 100 can be realized with an ASIC (application-specific integrated circuit) having the processor 120, the bus interface 140, and the user interface 160, with at least part of the storage medium 130 integrated in a single chip; or the electronic device 100 can be realized with one or more FPGAs (field-programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, gate logic, discrete hardware components, any other suitable circuits, or any combination of circuits able to carry out the various functions described throughout the present invention.
The processor 120 is responsible for managing the bus 110 and for general processing, including executing software stored on the storage medium 130. The processor 120 can be realized with one or more general-purpose processors and/or dedicated processors; examples include microprocessors, microcontrollers, DSP processors, and other circuits able to execute software. Software should be construed broadly to mean instructions, data, or any combination thereof, whether called software, firmware, middleware, microcode, hardware description language, or otherwise.
The storage medium 130 is shown in Fig. 4 as separate from the processor 120; however, those skilled in the art will readily understand that the storage medium 130, or any portion of it, can be located outside the electronic device 100. For example, the storage medium 130 can include a transmission line, a carrier waveform modulated with data, and/or a computer product separate from the wireless node, all of which can be accessed by the processor 120 through the bus interface 140. Alternatively, the storage medium 130, or any portion of it, can be integrated into the processor 120, for example as a cache and/or general registers.
The processor 120 may carry out the embodiments described above. Specifically, the storage medium 130 may store the static gesture recognition apparatus 200, and the processor 120 may be used to execute the static gesture recognition apparatus 200.
In summary, embodiments of the present invention provide a static gesture recognition method, apparatus, and readable storage medium. Gesture segmentation is performed on a static gesture image to be recognized to obtain a gesture segmentation image; the integral image of the gesture segmentation image is computed, and the scale space corresponding to the gesture segmentation image is constructed from the integral image; all extreme points satisfying the Hessian-matrix-based extreme-point decision condition are then searched for in the scale space, and target feature points are screened from the extreme points found; finally, the feature values of the target feature points are extracted and recognized with the Hu invariant moment algorithm to obtain the static gesture recognition result. This improves resistance to interference during static gesture recognition, yielding higher robustness and recognition rates under illumination changes, complex backgrounds, and similar interference, and solves the problem of feature information being lost during gesture segmentation due to changes in angle and scale.
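The integral image underlying the method can be sketched in a few lines. The following is a generic summed-area-table implementation in NumPy (an illustration, not code from the patent); it shows why, once the table is built, the sum over any rectangular window costs O(1), which is what makes box-filter responses cheap at every scale:

```python
import numpy as np

def integral_image(img):
    # Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    # Sum over the inclusive window img[r0:r1+1, c0:c1+1] in O(1)
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2))  # equals img[1:3, 1:3].sum() = 5+6+9+10 = 30
```

The same table serves every window size, so building the scale space does not require re-filtering the image at each scale.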
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architecture, functions, and operation of systems, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or actions, or with a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part. Alternatively, the functions may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the flows or functions described in the embodiments of the present invention are generated in whole or in part.
It should be noted that, herein, the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
It will be evident to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. Therefore, in all respects, the embodiments are to be regarded as illustrative rather than restrictive; the scope of the present invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalents of the claims are therefore intended to be embraced by the present invention. No reference numeral in the claims shall be construed as limiting the claim concerned.
Claims (10)
1. A static gesture recognition method, applied to an electronic device, the method comprising:
performing gesture segmentation on a static gesture image to be recognized to obtain a gesture segmentation image, the static gesture image comprising a depth image and a color image;
computing an integral image of the gesture segmentation image, and constructing a scale space corresponding to the gesture segmentation image from the integral image;
searching in the scale space for all extreme points satisfying a Hessian-matrix-based extreme-point decision condition, and screening target feature points from all the extreme points found;
extracting feature values of the target feature points, and recognizing the feature values of the target feature points with a Hu invariant moment algorithm to obtain a static gesture recognition result.
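The Hessian-matrix-based decision condition in the third step can be illustrated with a dense finite-difference sketch. SURF proper evaluates Lxx, Lyy, and Lxy with box filters on the integral image; the weight 0.9 below is SURF's usual correction factor for the box approximation, not a value stated in the claim:

```python
import numpy as np

def hessian_det_response(img, w=0.9):
    # A candidate extreme point is kept where the determinant of the
    # Hessian, det(H) = Lxx*Lyy - (w*Lxy)^2, is large. Here plain
    # finite differences stand in for SURF's box filters.
    Ly, Lx = np.gradient(img.astype(float))
    Lyy, _ = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    return Lxx * Lyy - (w * Lxy) ** 2

# A Gaussian blob: both second derivatives are negative at the centre,
# so the determinant response is positive there.
y, x = np.mgrid[-10:11, -10:11]
blob = np.exp(-(x**2 + y**2) / 18.0)
resp = hessian_det_response(blob)
print(resp[10, 10] > 0)  # the blob centre passes the decision condition
```

A positive determinant rejects edge-like responses (where Lxx and Lyy have opposite signs), which is the point of the decision condition.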
2. The static gesture recognition method according to claim 1, wherein the step of performing gesture segmentation on the static gesture image to be recognized to obtain a gesture segmentation image comprises:
performing image segmentation on the depth image based on a gray-level histogram to obtain a first gesture segmentation result for the depth image;
performing image segmentation on the color image based on the first gesture segmentation result to obtain a second gesture segmentation result for the color image;
fusing the first gesture segmentation result and the second gesture segmentation result to obtain the gesture segmentation image.
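The claim leaves the fusion rule itself unspecified. One minimal interpretation, shown here purely as an illustration, is the pixel-wise intersection of the two binary segmentation results: a pixel belongs to the hand only when the depth-based and the color-based segmentation agree:

```python
import numpy as np

def fuse_masks(depth_mask, color_mask):
    # Illustrative fusion rule (the claim does not fix one): keep only
    # pixels marked as foreground by both segmentation results.
    return ((depth_mask > 0) & (color_mask > 0)).astype(np.uint8) * 255

depth_mask = np.array([[255, 255], [0, 255]], dtype=np.uint8)
color_mask = np.array([[255, 0], [0, 255]], dtype=np.uint8)
print(fuse_masks(depth_mask, color_mask))  # 255 only where both inputs are 255
```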
3. The static gesture recognition method according to claim 2, wherein the step of performing image segmentation on the depth image based on a gray-level histogram to obtain a first gesture segmentation result for the depth image comprises:
performing gray-level threshold segmentation on the depth image to obtain a binary image of the static gesture, the binary image serving as the first gesture segmentation result.
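Gray-level threshold segmentation of a depth image can be sketched as a band threshold: the hand is assumed to occupy a known depth range, and everything outside it is background. The band [600, 700) mm below is purely illustrative, not a value from the patent:

```python
import numpy as np

def depth_threshold_binary(depth, near, far):
    # Threshold segmentation: pixels inside the depth band [near, far)
    # become foreground (255), all others background (0).
    return ((depth >= near) & (depth < far)).astype(np.uint8) * 255

depth = np.array([[300, 620], [640, 900]], dtype=np.uint16)  # millimetres
print(depth_threshold_binary(depth, 600, 700))  # 255 where 600 <= depth < 700
```

In practice the band would be chosen from the valleys of the gray-level histogram mentioned in claim 2 rather than hard-coded.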
4. The static gesture recognition method according to claim 3, wherein the step of performing image segmentation on the color image based on the first gesture segmentation result to obtain a second gesture segmentation result for the color image comprises:
computing the minimum bounding rectangle of the static gesture from the binary image and obtaining the two-dimensional coordinates of the bounding rectangle, and mapping the two-dimensional coordinates onto the corresponding color image to obtain the minimum bounding rectangle containing the static gesture;
performing skin color segmentation on the minimum bounding rectangle to obtain a skin-color binary image of the minimum bounding rectangle;
segmenting the second gesture segmentation result from the color image using the binary image and the skin-color binary image.
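The minimum bounding rectangle of the binary image can be computed directly from the foreground rows and columns. This NumPy sketch (which assumes an axis-aligned rectangle and a depth image registered to the color image) returns the corner coordinates that the claim then maps onto the color image:

```python
import numpy as np

def min_bounding_rect(binary):
    # Axis-aligned minimum bounding rectangle of the foreground pixels.
    rows = np.flatnonzero(binary.any(axis=1))
    cols = np.flatnonzero(binary.any(axis=0))
    # (top, left, bottom, right), inclusive coordinates
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])

mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:4, 2:5] = 255
print(min_bounding_rect(mask))  # (1, 2, 3, 4)
```

Skin color segmentation would then be applied only inside this rectangle, which shrinks the search region and suppresses background skin-colored pixels.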
5. The static gesture recognition method according to claim 1, wherein the step of searching in the scale space for all extreme points satisfying the Hessian-matrix-based extreme-point decision condition and screening target feature points from all the extreme points found comprises:
searching in the scale space for all extreme points satisfying the Hessian-matrix-based extreme-point decision condition;
searching all the extreme points for target extreme points at a target scale;
performing non-maximum suppression over a three-dimensional neighborhood on the target extreme points to obtain positioning information of local extreme points;
describing each feature point with a feature descriptor based on the positioning information of the local extreme points to obtain a feature vector of each feature point;
computing the SURF similarity measure and the Euclidean distance similarity measure corresponding to each feature vector to obtain a preliminary feature point screening result;
sorting the feature points by the computed Euclidean distances of their feature vectors, and selecting at least two top-ranked groups of feature points as datum points;
computing the distance from every feature point other than the datum points to each datum point, and the angle between every feature point other than the datum points and each datum point;
screening target feature points from all the extreme points found according to the distances and the angles.
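The final screening step, keeping points by their distance and angle to the datum points, might look like the following sketch. The thresholds `d_max` and `a_max` and the acceptance rule (every datum point must agree) are illustrative assumptions, not values from the claim:

```python
import numpy as np

def screen_by_datum(points, datum_idx, d_max, a_max):
    # Keep a point only if, for every datum point, its distance and its
    # angle relative to that datum point stay within the given bounds.
    datum = points[datum_idx]
    keep = []
    for i, p in enumerate(points):
        if i in datum_idx:
            continue
        v = datum - p                            # vectors to each datum point
        d = np.linalg.norm(v, axis=1)            # distance to each datum point
        a = np.abs(np.arctan2(v[:, 1], v[:, 0])) # angle to each datum point
        if (d <= d_max).all() and (a <= a_max).all():
            keep.append(i)
    return keep

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.1], [9.0, 9.0]])
print(screen_by_datum(pts, [0, 1], d_max=2.0, a_max=np.pi))  # [2]
```

The geometric constraints discard isolated extreme points far from the hand region while keeping points consistent with the datum configuration.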
6. The static gesture recognition method according to claim 1, wherein the step of recognizing the feature values of the target feature points with the Hu invariant moment algorithm comprises:
converting the feature values of the target feature points into Hu moment feature values with the Hu invariant moment algorithm;
recognizing the feature values of the target feature points by means of the converted Hu moment feature values to obtain the static gesture recognition result.
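The Hu moment feature values of claim 6 are built from normalized central moments. The sketch below computes the first two Hu invariants from scratch in NumPy and checks their invariance on a rotated bar; it is a generic illustration of the Hu moment construction, not the patent's specific matching procedure:

```python
import numpy as np

def hu_first_two(img):
    # First two Hu invariant moments from normalised central moments;
    # invariant under translation, scaling and rotation of the shape.
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    img = img.astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):
        mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

shape = np.zeros((20, 20))
shape[5:9, 4:14] = 1.0                      # a 4x10 bar
h1, h2 = hu_first_two(shape)
h1r, h2r = hu_first_two(shape.T)            # the same bar rotated 90 degrees
print(abs(h1 - h1r) < 1e-12 and abs(h2 - h2r) < 1e-12)
```

Because the invariants survive rotation and scaling, matching them against stored gesture templates tolerates the angle and scale variations the method is designed to handle.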
7. A static gesture recognition apparatus, applied to an electronic device, the apparatus comprising:
a gesture segmentation module, configured to perform gesture segmentation on a static gesture image to be recognized to obtain a gesture segmentation image, the static gesture image comprising a depth image and a color image;
a construction module, configured to compute an integral image of the gesture segmentation image and to construct a scale space corresponding to the gesture segmentation image from the integral image;
a search module, configured to search in the scale space for all extreme points satisfying a Hessian-matrix-based extreme-point decision condition, and to screen target feature points from all the extreme points found;
a recognition module, configured to extract feature values of the target feature points and to recognize the feature values of the target feature points with a Hu invariant moment algorithm to obtain a static gesture recognition result.
8. The static gesture recognition apparatus according to claim 7, wherein the manner of searching in the scale space for all extreme points satisfying the Hessian-matrix-based extreme-point decision condition and screening target feature points from all the extreme points found comprises:
searching in the scale space for all extreme points satisfying the Hessian-matrix-based extreme-point decision condition;
searching all the extreme points for target extreme points at a target scale;
performing non-maximum suppression over a three-dimensional neighborhood on the target extreme points to obtain positioning information of local extreme points;
describing each feature point with a feature descriptor based on the positioning information of the local extreme points to obtain a feature vector of each feature point;
computing the SURF similarity measure and the Euclidean distance similarity measure corresponding to each feature vector to obtain a preliminary feature point screening result;
sorting the feature points by the computed Euclidean distances of their feature vectors, and selecting at least two top-ranked groups of feature points as datum points;
computing the distance from every feature point other than the datum points to each datum point, and the angle between every feature point other than the datum points and each datum point;
screening target feature points from all the extreme points found according to the distances and the angles.
9. The static gesture recognition apparatus according to claim 7, wherein the manner of recognizing the feature values of the target feature points with the Hu invariant moment algorithm comprises:
converting the feature values of the target feature points into Hu moment feature values with the Hu invariant moment algorithm;
recognizing the feature values of the target feature points by means of the converted Hu moment feature values to obtain the static gesture recognition result.
10. A readable storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the static gesture recognition method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810142872.XA CN108133206B (en) | 2018-02-11 | 2018-02-11 | Static gesture recognition method and device and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810142872.XA CN108133206B (en) | 2018-02-11 | 2018-02-11 | Static gesture recognition method and device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108133206A true CN108133206A (en) | 2018-06-08 |
CN108133206B CN108133206B (en) | 2020-03-06 |
Family
ID=62430828
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810142872.XA Active CN108133206B (en) | 2018-02-11 | 2018-02-11 | Static gesture recognition method and device and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108133206B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738118A (en) * | 2019-09-16 | 2020-01-31 | 平安科技(深圳)有限公司 | Gesture recognition method and system, management terminal and computer readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103927555A (en) * | 2014-05-07 | 2014-07-16 | 重庆邮电大学 | Static sign language letter recognition system and method based on Kinect sensor |
WO2014113346A1 (en) * | 2013-01-18 | 2014-07-24 | Microsoft Corporation | Part and state detection for gesture recognition |
CN106557173A (en) * | 2016-11-29 | 2017-04-05 | 重庆重智机器人研究院有限公司 | Dynamic gesture identification method and device |
- 2018
- 2018-02-11 CN CN201810142872.XA patent/CN108133206B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014113346A1 (en) * | 2013-01-18 | 2014-07-24 | Microsoft Corporation | Part and state detection for gesture recognition |
CN103927555A (en) * | 2014-05-07 | 2014-07-16 | 重庆邮电大学 | Static sign language letter recognition system and method based on Kinect sensor |
CN106557173A (en) * | 2016-11-29 | 2017-04-05 | 重庆重智机器人研究院有限公司 | Dynamic gesture identification method and device |
Non-Patent Citations (1)
Title |
---|
YANG Lei: "Gesture Recognition Fusing Multiple Features and Compressed Sensing", China Master's Theses Full-text Database * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110738118A (en) * | 2019-09-16 | 2020-01-31 | 平安科技(深圳)有限公司 | Gesture recognition method and system, management terminal and computer readable storage medium |
CN110738118B (en) * | 2019-09-16 | 2023-07-07 | 平安科技(深圳)有限公司 | Gesture recognition method, gesture recognition system, management terminal and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108133206B (en) | 2020-03-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11854237B2 (en) | Human body identification method, electronic device and storage medium | |
CN103971102B (en) | Static Gesture Recognition Method Based on Finger Contour and Decision Tree | |
Tian et al. | Optimization in multi‐scale segmentation of high‐resolution satellite images for artificial feature recognition | |
Tasse et al. | Cluster-based point set saliency | |
US20130071816A1 (en) | Methods and systems for building a universal dress style learner | |
US11816915B2 (en) | Human body three-dimensional key point detection method, model training method and related devices | |
CN105493078B (en) | Colored sketches picture search | |
CN106537305A (en) | Touch classification | |
CN106295591A (en) | Gender identification method based on facial image and device | |
Krejov et al. | Multi-touchless: Real-time fingertip detection and tracking using geodesic maxima | |
CN104200240A (en) | Sketch retrieval method based on content adaptive Hash encoding | |
CN107194361A (en) | Two-dimentional pose detection method and device | |
CN112418216A (en) | Method for detecting characters in complex natural scene image | |
CN109934180A (en) | Fingerprint identification method and relevant apparatus | |
CN114758362A (en) | Clothing changing pedestrian re-identification method based on semantic perception attention and visual masking | |
JP2016014954A (en) | Method for detecting finger shape, program thereof, storage medium of program thereof, and system for detecting finger shape | |
CN111857334A (en) | Human body gesture letter recognition method and device, computer equipment and storage medium | |
CN114445853A (en) | Visual gesture recognition system recognition method | |
CN111862031A (en) | Face synthetic image detection method and device, electronic equipment and storage medium | |
CN113610809B (en) | Fracture detection method, fracture detection device, electronic equipment and storage medium | |
CN109858402B (en) | Image detection method, device, terminal and storage medium | |
CN104751513A (en) | Human skeleton model establishing method and device | |
Gheitasi et al. | Estimation of hand skeletal postures by using deep convolutional neural networks | |
CN108133206A (en) | Static gesture identification method, device and readable storage medium storing program for executing | |
CN112488126A (en) | Feature map processing method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||