CN108647654A - Vision-based gesture video image recognition system and method - Google Patents
Vision-based gesture video image recognition system and method
- Publication number
- CN108647654A CN108647654A CN201810462581.9A CN201810462581A CN108647654A CN 108647654 A CN108647654 A CN 108647654A CN 201810462581 A CN201810462581 A CN 201810462581A CN 108647654 A CN108647654 A CN 108647654A
- Authority
- CN
- China
- Prior art keywords
- gesture
- hand
- hand region
- image
- dynamic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
- G06V10/507—Summing image-intensity values; Histogram projection analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a vision-based gesture video image recognition system and method. The gesture recognition system comprises a hand region extraction module, a hand region tracking module, a gesture feature extraction module, a classifier training module, and a recognition module. The scheme proposes a new hand region extraction algorithm based on image edges and hand skin color to segment the hand region from every frame of the gesture video, together with a new dynamic gesture feature extraction and feature processing scheme. A dynamic gesture classifier is constructed with hidden Markov models and trained on both hand shape features and hand motion features. Finally, the trained classifier can recognize, in real time, new gestures outside the training sample set, so that dynamic gestures can be recognized with low computational complexity in practical applications.
Description
Technical field:
The invention belongs to the field of human-computer interaction and pattern recognition, and relates mainly to a vision-based gesture video image recognition system and method.
Background technology:
With the rapid development of information technology, interaction between people and various computer systems has become unavoidable, so human-computer interaction technology is receiving more and more attention. Dynamic gestures provide a more convenient and more natural mode of human-computer interaction, replacing traditional interaction devices such as the mouse and keyboard. Through the physical motion of the fingers and palm, a dynamic gesture can both express important information and interact with the external environment. According to how the gesture data are acquired, dynamic gesture recognition systems can be divided into systems based on data gloves and systems based on vision. In a data-glove-based recognition system the user must wear a glove fitted with special sensors, so the application scenarios are limited. A vision-based recognition system usually only needs one or more cameras and is therefore more convenient and more natural to use. A dynamic gesture contains both changes of hand shape and spatial motion of the hand; only by modeling hand shape and hand motion simultaneously can a dynamic gesture be represented accurately. However, existing automatic dynamic gesture recognition methods typically distinguish dynamic gestures by hand motion features alone, and so cannot express richer gesture commands. Prior-art methods that do combine hand shape features and hand motion features lack a good way to define and extract the feature values of a dynamic gesture; without a good feature definition and quantization algorithm, hand shape recognition is ineffective and suffers large errors, and at the same time the recognition complexity of combining hand shape and hand motion features is high, so such methods usually cannot be applied to real-time recognition. Moreover, when extracting the gesture area from video images, because of illumination, background reflections, and other objects in the image, the hand region extracted by prior-art methods may contain a large amount of noise, making it difficult for the machine to recognize the gesture correctly.
The present invention targets real-time vision-based dynamic gesture recognition and proposes a vision-based gesture video image recognition system and method. It solves the prior art's problem of high recognition complexity when combining hand shape features and hand motion features, while segmenting a high-quality hand region from every frame of the gesture video so that hand shape features can be recognized accurately. The invention satisfies the following requirements: a high-quality hand region is segmented from every frame of the gesture video; the features representing hand shape and hand motion have low computational complexity; and the system achieves high recognition accuracy and efficiency.
Summary of the invention:
The present invention provides a vision-based gesture video image recognition system and method characterized by a high recognition rate, fast computation, and strong robustness. The computational complexity of both the simple shape descriptors and the motion-direction coding used as dynamic gesture features is linear, so the system can be applied to real-time dynamic gesture recognition.
A primary object of the present invention is to provide a vision-based real-time dynamic gesture recognition system comprising a hand region extraction module, a hand region tracking module, a gesture feature extraction module, a classifier training module, and a recognition module;
(1) Hand region extraction module: implements a hand region extraction method based on image edges and a skin color model, improving the quality of the hand region segmented from lower-resolution images. It first extracts the hand region in every frame using a method based on a hand skin color histogram, obtaining a hand region binary image Gh; it then extracts the edges of every frame, obtaining an edge image Ge; finally it combines the image edge information and the hand skin color information to obtain a refined hand region;
(2) Hand region tracking module: tracks the hand region using the CAMShift tracking algorithm from the cross-platform computer vision library OpenCV;
(3) Gesture feature extraction module: represents hand shape with simple shape descriptors, including the convexity, the ratio of principal axes, and the circular variance of the hand contour, and represents the hand motion trajectory with a coded sequence of hand motion orientations, building a dynamic orientation code sequence;
(4) Classifier training module: constructs the dynamic gesture classifier with hidden Markov models (HMM); each dynamic gesture class is modeled by one HMM, and the output of the classifier training module is a dynamic gesture database containing a series of trained HMMs, each corresponding to one dynamic gesture class;
(5) Recognition module: when a new gesture of unknown class is input, the gesture recognition system computes the matching degree between the new gesture and each HMM in the dynamic gesture database, and takes the dynamic gesture class represented by the best-matching model as the recognition result.
According to another aspect of the present invention, a vision-based real-time dynamic gesture recognition method is provided, comprising the following steps:
(1) Hand region extraction: a hand region extraction method based on image edges and a skin color model improves the quality of the hand region segmented from lower-resolution images. First, the hand region in every frame is extracted with a method based on a hand skin color histogram, giving a hand region binary image Gh; then the edges of every frame are extracted, giving an edge image Ge; finally, the image edge information and hand skin color information are combined to obtain a refined hand region;
(2) Hand region tracking: the hand region is tracked with the CAMShift tracking algorithm from the cross-platform computer vision library OpenCV;
(3) Gesture feature extraction: hand shape is represented with simple shape descriptors, including the convexity, the ratio of principal axes, and the circular variance of the hand contour; the hand motion trajectory is represented with a coded sequence of hand motion orientations, building a dynamic orientation code sequence;
(4) Classifier training: a dynamic gesture classifier is constructed with hidden Markov models (HMM); each dynamic gesture class is modeled by one HMM, and the output of the classifier training module is a dynamic gesture database containing a series of trained HMMs, each corresponding to one dynamic gesture class;
(5) Gesture recognition: when a new gesture of unknown class is input, the gesture recognition system computes the matching degree between the new gesture and each HMM in the dynamic gesture database, and takes the dynamic gesture class represented by the best-matching model as the recognition result.
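The five steps above can be sketched as a minimal pipeline skeleton (Python; every function body is an illustrative placeholder standing in for the patent's actual algorithms, and all names are invented for illustration):

```python
# Minimal sketch of the five-step recognition pipeline described above.
# All function bodies are placeholders, not the patent's implementation.

def extract_hand_regions(frames):
    # Step (1): would combine a skin-color-histogram mask with Canny edges.
    return frames  # placeholder: treat each frame as an already-segmented region

def track_hand(regions):
    # Step (2): would run CAMShift; here the regions are passed through unchanged.
    return regions

def extract_features(regions):
    # Step (3): would compute shape descriptors and motion-direction codes.
    return [len(r) for r in regions]  # placeholder per-frame feature

def score_against_models(features, models):
    # Steps (4)-(5): matching degree against each trained model; here each
    # "model" is just a target value standing in for an HMM's P(O | lambda).
    return {name: -abs(sum(features) - target) for name, target in models.items()}

def recognize(frames, models):
    feats = extract_features(track_hand(extract_hand_regions(frames)))
    scores = score_against_models(feats, models)
    return max(scores, key=scores.get)  # best-matching gesture class

# Toy usage: three "frames", two gesture classes with illustrative targets.
result = recognize(["ab", "abc", "a"], {"wave": 6, "circle": 20})
```

The key point is the dataflow: segmentation feeds tracking, tracking feeds feature extraction, and the feature sequence is scored against every per-class model.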
The vision-based real-time dynamic gesture recognition system and method provided by the invention improve the quality of the hand region segmented from every frame of the gesture video, and recognize gestures by modeling hand shape and hand motion together. They fill the prior art's lack of a feature extraction scheme capable of accurately recognizing dynamic gestures, propose a recognition algorithm of relatively low computational complexity, enrich gesture recognition applications, and effectively reduce the recognition complexity of combining hand shape and hand motion features, so that the method can be applied to real-time recognition.
Description of the drawings:
Fig. 1 Overall framework of the hidden-Markov-model dynamic gesture recognition system fusing hand shape and motion features
Fig. 2 Overall flow chart of the hand region extraction module
Fig. 3 Convex hull and principal axes of a hand contour
Fig. 4 Discretization of the simple shape descriptors
Fig. 5 Calculation of the hand motion orientation
Fig. 6 Discretization of the hand motion orientation
Fig. 7 Structure of an embodiment of the present system
Detailed description of embodiments:
Elements and features described in one drawing or embodiment of the invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for clarity, the drawings and description omit components that are not relevant to the invention, as well as processing known to persons of ordinary skill in the art.
Referring to the overall framework of the hidden-Markov-model dynamic gesture recognition system fusing hand shape and motion features disclosed in Fig. 1: first, the system of the invention segments the hand region from every frame of the gesture video with a new hand region extraction algorithm. Then, the system represents the hand shape of every frame with a combination of simple shape descriptors, and represents the hand motion trajectory with a coded sequence of hand motion orientations. Next, the system constructs a dynamic gesture classifier with hidden Markov models and trains the classifier on both hand shape features and hand motion features. Finally, the trained classifier can be used to recognize, in real time, new gestures outside the training sample set. The system is divided into the following five modules:
(1) Hand region extraction module
Hand region extraction is the first step of dynamic gesture recognition; its target is to segment the hand region out of every frame of the dynamic gesture video. The present invention proposes a hand region extraction method based on image edges and a skin color model, improving the quality of the hand region segmented from lower-resolution images. In a real-time dynamic gesture recognition system, because of limits on the gesture video capture device and acquisition environment and the requirements on performance indicators such as system response time, the resolution of the captured gesture video is typically low. The usable information in every video frame is limited to coarse color information, image edge information, and the like.
Therefore, the invention first extracts the edges of every frame with the Canny edge detection algorithm, obtaining an edge image Ge. Then, the hand region in every frame is extracted with a method based on a hand skin color histogram, and on this basis the quality of the hand region is improved by smoothing denoising and morphological processing, giving a relatively rough hand region binary image Gh. Finally, the images Ge and Gh are traversed horizontally and vertically, and the image edge information and hand skin color information are combined to obtain the refined hand region.
(2) Hand region tracking module
In a dynamic gesture recognition system, the hand region must also be tracked. Because a dynamic gesture is continuous, the position of the hand in the next frame can often be estimated from its position in the previous frame, and tracking the hand's trajectory improves the accuracy and efficiency of hand region extraction. Common hand region tracking algorithms include the Mean Shift algorithm, the CAMShift (Continuously Adaptive Mean Shift) algorithm, the Kalman filter, and particle filters. The system of the invention tracks with the CAMShift algorithm from the cross-platform computer vision library OpenCV.
(3) Gesture feature extraction module
The target of gesture feature extraction is to describe states such as hand shape, hand position, and hand motion direction and rate by computing a series of variables. The input of the gesture feature extraction module is therefore the hand region segmented from every frame of the gesture video, i.e. the result of the hand region extraction and tracking modules.
The present invention represents hand shape with a combination of simple shape descriptors, each of which is invariant to translation, scaling, and rotation, and insensitive to slight changes of the same hand shape. Each simple shape descriptor also has linear computational complexity, making it highly suitable for real-time dynamic gesture recognition, and combining several simple shape descriptors distinguishes different hand shapes well. The simple shape descriptors used in the present invention are the convexity, the ratio of principal axes, and the circular variance of the hand contour; their calculation and discretization are detailed with figures in the detailed description of embodiments below.
Next, the present invention represents the hand motion trajectory with a coded sequence of hand motion orientations. The motion direction is discretized into 8 sectors, and the sector number is used as the code of the motion direction, forming a dynamic orientation code sequence; the detailed calculation formula is given in the detailed description of embodiments below.
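As a concrete illustration of this coding, the motion direction between consecutive hand centroids can be quantized into 8 sectors of 45° each. This is a sketch under assumptions the patent leaves open (sector 1 starting at 0°, mathematical rather than image y-axis orientation):

```python
import math

def direction_code(p_prev, p_curr, n_sectors=8):
    """Quantize the motion direction from p_prev to p_curr into a sector 1..n_sectors."""
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)         # angle in [0, 2*pi)
    return int(angle / (2 * math.pi / n_sectors)) + 1  # 1-based sector number

def encode_trajectory(centroids):
    """Build the dynamic orientation code sequence from successive hand centroids."""
    return [direction_code(a, b) for a, b in zip(centroids, centroids[1:])]

codes = encode_trajectory([(0, 0), (1, 0), (1, 1), (0, 1)])
# rightward motion -> sector 1, upward -> sector 3, leftward -> sector 5
```

The resulting integer sequence is exactly the kind of discrete observation sequence an HMM state can emit.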
(4) Classifier training module
After the feature extraction module runs, a feature vector containing hand shape and hand motion information is obtained for every frame; the feature vectors of all frames, organized in chronological order, constitute the feature vector sequence of the entire dynamic gesture. The result of the gesture feature extraction module becomes the input of the classifier training and recognition modules.
The present invention constructs the dynamic gesture classifier with hidden Markov models (HMM). An HMM is a doubly stochastic model with the Markov property. The model contains multiple unobservable states (hidden states), each associated with a random function; because the states are unobservable, the model is called a hidden Markov model. At any discrete instant the model is in one of its hidden states and generates an observation symbol Oi according to the random function associated with that state; then, according to the state transition probability matrix, the model transfers from the current state to a new state. Observation-symbol generation and state transition iterate, finally yielding an HMM observation sequence O = O1O2…OT. Considering that the HMM is a mature time-space modeling technique with good time-warping characteristics, the system of the invention uses HMMs to construct the dynamic gesture classifier.
An HMM is usually represented by a triple λ = (A, B, Π):
A = {aij} is the state transition probability matrix, where aij is the probability that the model transfers from state si to state sj;
B = {bjk} is the observation symbol generation probability matrix, where bjk is the probability that the model generates observation symbol vk in state sj;
Π = {πi}, i = 1, 2, …, N, is the initial state probability distribution, where πi is the probability that the initial state is si.
In the dynamic gesture recognition system of the invention, each dynamic gesture class is modeled by one HMM, and the feature vector sequence extracted from a dynamic gesture sample corresponds to the observation sequence O = O1O2…OT generated by the HMM. The target of classifier training is to adjust the parameters λ = (A, B, Π) of the HMM according to the observation sequence O = O1O2…OT so as to maximize the conditional probability P(O|λ). In fact, no strict mathematical solution of this maximization problem is known so far; however, the model parameters λ = (A, B, Π) can be adjusted so that P(O|λ) is locally maximal. Classical HMM training algorithms include the iterative Baum-Welch algorithm, EM (expectation-maximization) algorithms, and gradient algorithms. The output of the classifier training module is a dynamic gesture database containing a series of trained HMMs, each corresponding to one dynamic gesture class.
(5) Recognition module
After classifier training, each HMM in the trained dynamic gesture database corresponds to one dynamic gesture class. At this point, the dynamic gesture recognition system of the invention can automatically recognize new gestures outside the training sample set. When a new gesture of unknown class is input, the recognition system computes the matching degree between the gesture and each HMM in the dynamic gesture database, and takes the dynamic gesture class represented by the best-matching model as the recognition result. If the observation sequence corresponding to the dynamic gesture (i.e. its feature vector sequence) is O = O1O2…OT, then the matching degree between the dynamic gesture and an HMM can use the conditional probability P(O|λ), i.e. the conditional probability that an HMM with known parameters λ = (A, B, Π) generates the observation sequence O = O1O2…OT. A strict mathematical description of the recognition module is given in the detailed description of embodiments.
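The matching degree P(O|λ) for a discrete HMM is conventionally evaluated with the forward algorithm. The following minimal sketch (plain Python; the two toy models and the observation alphabet are illustrative, not the patent's trained parameters) shows how the best-matching class would be selected:

```python
def forward_prob(obs, pi, A, B):
    """P(O | lambda) for a discrete HMM via the forward algorithm.
    pi[i]: initial prob of state i; A[i][j]: transition prob i->j;
    B[i][k]: prob that state i emits symbol k."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

def classify(obs, gesture_models):
    """Pick the gesture class whose HMM gives the highest P(O | lambda)."""
    return max(gesture_models, key=lambda g: forward_prob(obs, *gesture_models[g]))

# Two toy 2-state models over a binary observation alphabet {0, 1}:
# gesture_A strongly emits symbol 0 from state 0; gesture_B emits uniformly.
models = {
    "gesture_A": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
    "gesture_B": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.5, 0.5], [0.5, 0.5]]),
}
best = classify([0, 0, 0, 0], models)
```

For longer sequences a practical implementation would work in log space or with per-step scaling to avoid underflow; the recursion itself is unchanged.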
The system of the invention is further described below with reference to the drawings, formulas, and tables. Since the hand region tracking module mainly uses the CAMShift tracking algorithm from the cross-platform computer vision library OpenCV and does not involve the core innovations of the invention, that module is not described in detail.
(1) Detailed embodiment of the hand region extraction module
In the dynamic gesture recognition system of the invention, the hand region extraction module is implemented as shown in Fig. 2. First, a relatively rough hand region is extracted with a hand region binarization method based on the hand skin color histogram. Then, the salt-and-pepper noise in the hand region image is removed with image smoothing and denoising, and small holes and narrow gaps in the hand region are filled with morphological processing, improving the quality of the extracted hand region. Next, the edges of the hand image are extracted with the Canny edge detection operator, and the edge information is used to further refine the hand region. Finally, the hand contour is extracted with a Laplacian contour extraction method.
In the hand region extraction method of the invention, the main innovation is refining the extracted hand region with image edge information. Because of illumination and of the background and other objects in the image, the hand region extracted by the skin color histogram method alone may contain a large amount of noise. Since the boundary of the hand region generally has an obvious edge, this boundary information can further improve the quality of the hand region. Let the original image size be height × width, the edge image be Ge, and the hand region binary image obtained after smoothing denoising and morphological processing be Gh; then the detailed procedure for refining the hand region with image edge information is as follows:
Step (a): traverse all rows of the original image starting from row 1; denote the current row index i, 1 ≤ i ≤ height, and perform the operations of Step (b)-Step (c) on every row of the original image.
Step (b): for row i of the original image, traverse all pixel positions of the row from left to right; denote the current column index j, 1 ≤ j ≤ width. Check whether the edge image Ge has an edge at pixel (i, j), and store the edge point coordinates in the array EdgePoint in traversal order; let the number of edge points contained in row i be Ki.
Step (c): for row i of the original image, the Ki edge points in the array EdgePoint define Ki − 1 intervals. For the k-th interval, 1 ≤ k ≤ Ki − 1, judge whether the whole interval belongs to the hand region: first compute, from the hand region binary image Gh, the number Nk of pixels in the k-th interval that belong to the hand region, and compute the percentage Pk of the interval's total pixels that these pixels occupy. When Pk exceeds a preset threshold TP, the k-th interval is judged to belong to the hand region, and all pixels in the interval are marked as hand region pixels. Traverse all Ki − 1 intervals and perform the same operation.
Step (d): for each column of the original image, perform operations similar to Step (b)-Step (c) (i.e. perform the hand region refinement vertically). Combining the results of the horizontal and vertical refinement gives the final refined hand region.
After the above refinement, most noise regions mistaken for hand regions are eliminated. For example, because of background reflections and similar causes, local background regions may have colors close to the skin color, so the hand extraction method based on the skin color model may mistake these background regions for hand regions; but an image background with gentle gray-level change has no obvious edges, so the edge-based hand region refinement can remove this kind of noise.
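The horizontal pass of Steps (a)-(c) might look like the following sketch, under stated assumptions (binary masks as Python lists, a single row shown, names invented for illustration):

```python
def refine_row(edge_row, hand_row, tp=0.5):
    """Refine one image row: between consecutive edge points, keep an interval
    as hand region only if the fraction of skin-color pixels exceeds tp (T_P)."""
    edge_points = [j for j, e in enumerate(edge_row) if e]   # Step (b)
    refined = [0] * len(hand_row)
    for a, b in zip(edge_points, edge_points[1:]):           # Step (c)
        interval = hand_row[a:b + 1]
        if sum(interval) / len(interval) > tp:               # P_k > T_P
            for j in range(a, b + 1):
                refined[j] = 1
    return refined

# Toy row: edges at columns 1, 5, 9; the skin-color mask is noisy on the left
edge_row = [0, 1, 0, 0, 0, 1, 0, 0, 0, 1]
hand_row = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
out = refine_row(edge_row, hand_row)
```

The sparse skin-color hits between the first two edges are discarded as noise, while the densely skin-colored interval between the last two edges is kept; Step (d) would repeat the same logic per column and combine both passes.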
(2) Detailed embodiment of the gesture feature extraction module
In the dynamic gesture recognition system of the invention, hand shape is represented by combining 3 simple shape descriptors: the convexity, the ratio of principal axes, and the circular variance of the hand contour. The specific calculations are as follows:
Convexity:
The convex hull of a point set {pi} is the minimal convex polygon such that every point of {pi} lies inside it or on its boundary. The convex hull of an object's contour is like the polygon formed by a taut rubber band looped around the object. The left part of Fig. 3 is an example of the convex hull of a hand contour. An intuitive way to describe the convexity of a contour is the ratio of the convex hull perimeter to the contour perimeter:

conv = P_convexhull / P_contour,

where P_contour and P_convexhull are the perimeters of the contour and of its convex hull, respectively.
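Assuming the formula above (hull perimeter over contour perimeter), convexity can be computed for a polygonal contour as in the following pure-Python sketch, which uses Andrew's monotone-chain convex hull; this is illustrative, not the patent's implementation:

```python
import math

def perimeter(points):
    """Perimeter of a closed polygon given as an ordered list of vertices."""
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def convexity(contour):
    return perimeter(convex_hull(contour)) / perimeter(contour)

# A square contour with one dent: the dent lengthens the contour but not the hull,
# so the ratio drops below 1.
dented = [(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)]
conv = convexity(dented)
```

For a convex contour the two perimeters coincide and conv = 1; any concavity makes conv < 1.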
Ratio of principal axes:
The principal axes of the hand contour are two orthogonal line segments passing through the contour centroid whose cross-correlation is zero. The right part of Fig. 3 is an example of the principal axes of a hand contour. The ratio of principal axis lengths describes the elongation of a shape well: a shape with a larger ratio value usually looks more slender overall. Let C be the 2 × 2 covariance matrix of the hand contour points, with elements cxx, cyy, and cxy computed over the contour points; the principal axis lengths are determined by the eigenvalues λmax ≥ λmin of C, and the ratio of principal axes of the contour can then be computed as:

prax = sqrt(λmax / λmin).
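The formulas for C and the axis ratio appear as images in the source; the sketch below uses a standard reconstruction from the eigenvalues of the 2×2 covariance matrix, under the assumption (consistent with "larger ratio means more slender") that the major axis is divided by the minor one:

```python
import math

def principal_axes_ratio(points):
    """Ratio of major to minor principal axis lengths of a 2-D point set,
    from the eigenvalues of its 2x2 covariance matrix C."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form eigenvalues of [[cxx, cxy], [cxy, cyy]]
    d = math.sqrt((cxx - cyy) ** 2 + 4 * cxy ** 2)
    lam_max = (cxx + cyy + d) / 2
    lam_min = (cxx + cyy - d) / 2
    return math.sqrt(lam_max / lam_min)  # axis length scales with sqrt(eigenvalue)

# A 5x2 grid of points (an elongated blob) gives a ratio well above 1
slender = [(x, y) for x in range(5) for y in (0, 1)]
ratio = principal_axes_ratio(slender)
```

A circle or square gives a ratio near 1, while stretched shapes push it higher, matching the elongation interpretation in the text.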
Circular variance:
Usually, when describing a shape it is compared with a common template; for example, saying a shape is "very round" compares the shape with a circle, and here the circle plays the role of the common template. The circular variance of the hand contour describes exactly the difference between the hand shape and a circular template. The centroid of the circular template coincides with the centroid of the hand contour, and its radius is the mean radius of the hand contour. The circular variance can be defined as the mean variance of the contour with respect to the circular template:

cvar = (1 / (N·μr²)) Σi (||pi − μ|| − μr)²,

where pi = [xi, yi]^T are the point coordinates on the contour, μ = (1/N) Σi pi is the centroid coordinate, μr = (1/N) Σi ||pi − μ|| is the mean radius of the hand contour, ||·|| denotes vector length, and N is the total number of pixel points on the hand region contour.
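The reconstructed circular variance formula can be sanity-checked with a small pure-Python sketch: a sampled circle should give a value near 0, and an elongated contour a clearly larger one.

```python
import math

def circular_variance(points):
    """Mean squared deviation of contour radii from the mean radius,
    normalized by the squared mean radius (cvar as defined above)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    radii = [math.hypot(p[0] - mx, p[1] - my) for p in points]
    mu_r = sum(radii) / n
    return sum((r - mu_r) ** 2 for r in radii) / (n * mu_r ** 2)

# 36 evenly spaced samples of a unit circle, and the same samples stretched 3x in x
circle = [(math.cos(t), math.sin(t))
          for t in (2 * math.pi * k / 36 for k in range(36))]
ellipse = [(3 * x, y) for x, y in circle]
cv_circle = circular_variance(circle)    # ~0: the contour matches its circular template
cv_ellipse = circular_variance(ellipse)  # noticeably larger for the stretched contour
```

Normalizing by μr² makes the descriptor scale-invariant, consistent with the invariances claimed for the shape descriptors above.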
After computing the above three simple shape descriptors, the original feature vector of a frame image is:
F = [conv, prax, cvar, x, y]^T,
where conv, prax, and cvar are respectively the convexity, ratio of principal axes, and circular variance of the hand contour, and (x, y) is the centroid coordinate of the hand contour.
Discretization of the simple shape descriptors:
The number of observation symbols each HMM state may generate is finite, and observation symbols are usually discrete. Therefore, the real-valued primitive features extracted from the dynamic gesture video need a discretization operation. In addition, because illumination, background, and other conditions keep changing during the whole dynamic gesture, the hand contour extracted from some frames may contain serious defects; these defective contours interrupt the continuity of the dynamic gesture and can therefore be regarded as data noise. One important goal of feature discretization is to reduce as much as possible the influence of noisy-frame hand contours on the continuous gesture. Note that for the primitive features the computing unit is a single frame, while for the discretized features the computing unit is a gesture video segment containing several consecutive frames. The specific feature discretization method is as follows:
As shown in Fig. 4, the 3 simple shape descriptors are discretized along two dimensions. Since the discretization operation is identical for each simple shape descriptor, the discretization process is described in detail only for the contour convexity conv as an example.
Step (a): The value range of the contour convexity conv is divided into a finite number of intervals, numbered starting from 1. Denoting the number of intervals by N_inter, the number of the interval to which a raw conv value belongs is computed from conv_max and conv_min, the maximum and minimum of the raw value range of conv. As shown in Fig. 4, the horizontal axis represents the frames of the gesture video, and the vertical axis represents the conv value computed from the hand contour extracted from each frame. In this example, the raw value range [conv_min, conv_max] of conv is divided into 4 intervals.
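The exact quantization formula is not reproduced in this text; a natural choice, assumed here, maps a value linearly into one of N_inter equal-width intervals numbered from 1:

```python
def interval_number(v, v_min, v_max, n_inter):
    """Map a raw descriptor value to its interval number in 1..n_inter.

    Assumed linear quantization: the patent states only that the interval
    number depends on v, v_min, v_max and n_inter.
    """
    if v_max == v_min:
        return 1
    k = int((v - v_min) / (v_max - v_min) * n_inter) + 1
    return min(max(k, 1), n_inter)   # clamp so v == v_max lands in n_inter

# The Fig. 4 example divides [conv_min, conv_max] into 4 intervals.
```

With 4 intervals over [0, 1], a value of 0.26 falls in interval 2, and the boundary value 1.0 is clamped into interval 4.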
Step (b): The gesture video is divided into several segments in temporal order; each segment is a subsequence of the gesture image sequence. The number of gesture segments is denoted N_s, and the i-th segment is denoted S_i. In the example of Fig. 4, the gesture video is divided into 3 segments.
Step (c): For the i-th gesture segment S_i, the convexity conv of the hand contour in each frame image is computed and discretized into the corresponding interval number conv′ using the method of Step (a), and the most densely populated interval is then found. As shown in Fig. 4, the first gesture segment S_1 contains 6 frame images, of which 4 yield raw conv values belonging to the first interval, so the densest conv interval number of the first gesture segment S_1 is 1.
After the discretization operations of Steps (a) to (c), each gesture segment can be characterized by its densest conv interval, so the corresponding interval number can serve as one feature of the gesture segment. The discretized hand contour convexity of the gesture segments forms the following feature sequence:
[conv′_1, conv′_2, …, conv′_{N_s}],
where conv′_i is the densest conv interval number of the i-th gesture segment S_i.
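Steps (a)–(c) reduce each segment to the mode of its per-frame interval numbers; a minimal sketch of that reduction:

```python
from collections import Counter

def densest_interval(interval_numbers):
    """Return the most frequent interval number within one gesture segment.

    Values falling in other intervals (e.g. from defective noise-frame
    contours) are discarded, as described in Step (c).
    """
    return Counter(interval_numbers).most_common(1)[0][0]

# Segment S_1 from the Fig. 4 example: 6 frames, 4 of which fall in interval 1.
s1 = [1, 1, 2, 1, 3, 1]
```

Applying this per segment yields the sequence [conv′_1, …, conv′_{N_s}] described above.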
Because the conv values computed from the frames of the same gesture segment usually cluster in the same interval, conv values falling in other intervals are discarded by the above discretization method. Generally, if a conv value changes abruptly by a large amount, the hand contour of the corresponding frame image is likely to contain serious defects caused by external factors such as illumination or background. Such frame images break the continuity of the gesture and therefore count as noise frames; an important goal of the feature discretization method is to minimize the influence of noise frames on the final recognition result.
Extraction and discretization of the hand motion direction feature:
In Fig. 5, (x_t, y_t) and (x_{t+i}, y_{t+i}) respectively denote the hand positions at times t and t+i, and θ_t denotes the hand motion direction at time t, so that
θ_t = arctan((y_{t+i} − y_t) / (x_{t+i} − x_t)).
Since the raw value range of the motion direction is real-valued, it must be converted into a discrete feature code. In the system of the invention, the value range [0°, 360°] of θ_t is divided into 8 subintervals, each spanning 45° and numbered starting from 1, as shown in Fig. 6. In the concrete computation, the first frame and the last frame of the i-th gesture segment S_i are first selected to compute the hand motion direction, and its angle is discretized into one of the 8 subinterval codes of [0°, 360°], giving the hand motion direction code θ′_i of segment S_i. The trajectory of the entire dynamic gesture can then be expressed as the code sequence of motion directions:
[θ′_1, θ′_2, …, θ′_{N_s}],
where N_s is the total number of gesture segments.
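The angle-to-code step above can be sketched as follows, using `atan2` to obtain an angle in [0°, 360°) before quantizing into the eight 45° subintervals:

```python
import math

def direction_code(p_start, p_end, n_codes=8):
    """Code the motion direction between two hand positions into 1..n_codes.

    The angle is measured from the first to the last frame of a segment
    and quantized into 45-degree subintervals numbered from 1 (Fig. 6).
    """
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    theta = math.degrees(math.atan2(dy, dx)) % 360.0   # angle in [0, 360)
    return int(theta // (360.0 / n_codes)) + 1

# Moving straight right gives code 1; straight up gives code 3.
```

Here the subintervals are assumed to start at 0° (so codes 1, 3, 5, 7 cover right, up, left and down); the patent fixes only the 45° span and the numbering from 1.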
The final discretized feature vector sequence:
After the discretization of the simple shape descriptors and of the hand motion direction, the i-th gesture segment S_i can be represented by the following feature vector:
O_i = [conv′_i, prax′_i, cvar′_i, θ′_i]^T,
where conv′_i, prax′_i and cvar′_i are the three discretized simple shape descriptors of gesture segment S_i and θ′_i is its hand motion direction code. The entire dynamic gesture can be expressed as a discrete feature vector sequence:
O = [O_1, O_2, …, O_{N_s}],
where N_s is the total number of gesture segments.
(3) Embodiment of the classifier training module
For each gesture class, one left-to-right HMM is trained using the gesture video samples belonging to that class. The observation sequence of the HMM is the discretized gesture feature vector sequence. The input of the training process is therefore the observation sequence
O = [O_1, O_2, …, O_{N_s}],
where O_i is the discrete feature vector obtained from the i-th gesture segment S_i. The number of HMM states is determined by the complexity of the gesture. Generally, too few states lowers the final recognition accuracy, while too many states requires a large number of gesture training samples. Moreover, research shows that as the number of HMM states increases to a certain amount, the recognition accuracy reaches a maximum; increasing the number of states further beyond that point actually decreases the recognition rate. In the dynamic gesture recognition system of the invention, the number of states of each HMM gesture model is set to a fixed value according to the complexity of the experimental data. The Baum-Welch algorithm is used to train the HMM gesture models: the model parameters are iteratively adjusted to maximize the conditional probability P(O|λ), i.e., the probability of generating the observation sequence O given the model λ. If the total number of gesture classes is N_g, then after training the gesture database stores N_g left-to-right HMMs:
{λ_1, λ_2, …, λ_{N_g}},
where λ_i denotes the model corresponding to the i-th gesture class.
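A left-to-right (Bakis) topology constrains each state to stay put or advance; one common way to initialize such a model before Baum-Welch training, sketched here as an assumption since the patent does not give the initialization, is a banded upper-triangular transition matrix:

```python
import numpy as np

def left_to_right_transmat(n_states, max_jump=1):
    """Initial transition matrix for a left-to-right HMM.

    Each state may stay or move forward by up to `max_jump` states; all
    backward transitions are zero, a structure that Baum-Welch
    re-estimation preserves (zero entries stay zero).
    """
    a = np.zeros((n_states, n_states))
    for i in range(n_states):
        hi = min(i + max_jump, n_states - 1)
        a[i, i:hi + 1] = 1.0 / (hi - i + 1)
    return a

A = left_to_right_transmat(5)   # the system tests below fix the state count at 5
```

Because Baum-Welch multiplies re-estimated probabilities by the current ones, zero-initialized backward transitions remain zero throughout training, which is what enforces the left-to-right structure.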
(4) Embodiment of the identification module
In the gesture recognition stage, after a dynamic gesture video of unknown class is input, feature extraction and feature discretization are performed first to obtain the HMM observation sequence O (i.e., the discrete feature vector sequence of the dynamic gesture). For the i-th gesture model λ_i in the trained gesture database, the conditional probability P(O|λ_i) is computed, i.e., the probability of generating the observation sequence O given λ_i. The class of the dynamic gesture to be recognized is then computed as
c* = argmax_{1≤i≤N_g} P(O|λ_i),
which selects the model in the gesture database that best matches the gesture to be recognized and takes its index as the recognition result.
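The per-model probability P(O|λ_i) for a discrete-observation HMM is computed with the forward algorithm; a minimal sketch of the scoring and argmax selection, with toy parameters for illustration:

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """log P(O | lambda) for a discrete HMM via the scaled forward algorithm.

    obs: sequence of observation symbol indices; pi: initial state
    distribution; A: transition matrix; B[state, symbol]: emissions.
    """
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()                      # scaling avoids underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

def classify(obs, models):
    """argmax over gesture models lambda_i of P(O | lambda_i)."""
    scores = [log_likelihood(obs, *m) for m in models]
    return int(np.argmax(scores)), scores

# Two toy 2-state left-to-right models: one prefers symbol 0, one symbol 1.
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3], [0.0, 1.0]])
B0 = np.array([[0.9, 0.1], [0.8, 0.2]])      # emits mostly symbol 0
B1 = np.array([[0.1, 0.9], [0.2, 0.8]])      # emits mostly symbol 1
best, scores = classify([0, 0, 0, 1], [(pi, A, B0), (pi, A, B1)])
```

The toy models and symbol indices are illustrative only; in the described system the observation symbols would be the discretized feature codes and the models the trained gesture HMMs.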
Fig. 7 is a structural diagram of an embodiment of the system of the invention. The gesture edge image extraction module and the gesture skin-color histogram extraction module extract the gesture edge map G_e and the gesture skin-color histogram G_h respectively. The image synthesis processing module processes the gesture edge map G_e and the gesture skin-color histogram G_h jointly to obtain the refined hand region map, which is input to the gesture feature extraction module for feature extraction; the extracted feature information is input to the identification module for recognition.
The classifier training module is used to construct the classifier of dynamic gestures using hidden Markov models (Hidden Markov Model, HMM): each dynamic gesture class is modeled by one HMM, and the output of the classifier training module is a dynamic gesture database containing a series of trained HMMs, each HMM corresponding to one dynamic gesture class.
Identification module: when a new gesture of unknown class is input, the gesture recognition system computes the matching degree between the new gesture and each HMM in the dynamic gesture database, and selects the dynamic gesture class represented by the best-matching model as the recognition result.
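The image synthesis processing module's refinement (detailed as Steps (a)-(d) in claim 2 below) scans each row between consecutive edge points of G_e and keeps an interval as hand region when its skin-pixel fraction in the binary map G_h exceeds the threshold T_P. A sketch of the horizontal pass, with the threshold value chosen for illustration:

```python
import numpy as np

def refine_rows(edge, skin, t_p=0.5):
    """Horizontal pass of the hand-region refinement (Steps (a)-(c)).

    For each row, consecutive edge points of the edge map define intervals;
    an interval is marked as hand region when the fraction of skin pixels
    (from the binary skin map) inside it exceeds the threshold T_P.
    """
    h, w = edge.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        cols = np.flatnonzero(edge[i])           # EdgePoint array for row i
        for a, b in zip(cols[:-1], cols[1:]):    # the K_i - 1 intervals
            seg = skin[i, a:b + 1]
            if seg.mean() > t_p:                 # percentage P exceeds T_P
                out[i, a:b + 1] = True
    return out

# Toy 1-row example: edges at columns 0, 4, 8; skin fills columns 0..4 only.
edge = np.zeros((1, 9), dtype=bool); edge[0, [0, 4, 8]] = True
skin = np.zeros((1, 9), dtype=bool); skin[0, 0:5] = True
mask = refine_rows(edge, skin)
```

Step (d) would repeat the same scan over columns and combine the horizontal and vertical results; only the row pass is shown here.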
(5) System testing
The dynamic gesture recognition system of the invention can automatically classify a variety of dynamic gestures. Specific system test results are given below:
Dynamic gesture library:
The data set used for system testing contains 9 dynamic gesture classes, formed by combining 3 basic deformations with 3 basic motion directions, as defined in the table below. Each gesture class contains 40 dynamic gesture instances (dynamic gesture videos), of which 20 are randomly selected as HMM training samples and the remaining 20 are used to verify the recognition accuracy of the system. The camera used to capture the gesture videos has a resolution of 320 × 240 pixels and a frame rate of 15 frames per second.
3 basic hand shapes | open | closed | V-shape |
3 basic deformations | from open to closed | from closed to open | from open to V-shape |
3 basic motion directions | from left to right | from lower left to upper right | from upper left to lower right |
Recognition results:
In the training stage, raw features are first extracted from each training gesture sample and discretized. The discrete feature vector sequences of samples of the same gesture class are used to train one left-to-right HMM; after training, the HMMs corresponding to all gesture classes constitute the gesture model database. In the verification stage, after a gesture instance of unknown class is input, its discrete feature vector sequence is obtained first; the matching degree between the gesture and each model in the gesture model database is then evaluated, and finally the best-matching model is selected, with its corresponding gesture class taken as the recognition result. In the tests, the number of states of each HMM was set to 5.
The table below summarizes the accuracy of the dynamic gesture recognition system of the invention when recognizing gestures of unknown class. The test results show that the average recognition rate of the system reaches 88.3%.
The computational complexity of each module of the dynamic gesture recognition system of the invention is relatively low, and the response speed of the identification module in particular is fast, so the system can be applied to real-time dynamic gesture recognition. In the tests above, once the classifier training module has completed its operation, the system can recognize the 9 classes of dynamic gestures in the database in real time, with satisfactory recognition accuracy and response speed.
The above are only preferred embodiments of the present invention and are not intended to limit the invention; for those skilled in the art, the invention may be modified and varied in many ways. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (6)
1. A vision-based gesture video image identification system, the gesture video image identification system comprising a gesture edge image extraction module, a gesture skin-color histogram extraction module, and an image synthesis processing module;
the gesture edge image extraction module is used to extract the edges of the hand image from the hand in the video image using the Canny edge detection operator, and to further extract the hand contour using a Laplacian contour extraction method, obtaining the gesture edge map G_e;
the gesture skin-color histogram extraction module uses a hand region binarization method based on the hand skin-color histogram to extract the skin-color histogram of the hand region; after the skin-color histogram is obtained, salt-and-pepper noise in the hand region image is removed by image smoothing and denoising, and tiny holes and narrow gaps in the hand region are filled using morphological processing so as to improve the quality of the hand region in the extracted skin-color histogram, obtaining the gesture skin-color histogram G_h;
the image synthesis processing module processes the gesture edge map G_e and the gesture skin-color histogram G_h jointly to obtain a more accurate refined hand region map.
2. The vision-based gesture video image identification system according to claim 1, characterized in that the image synthesis processing module processes the gesture edge map G_e and the gesture skin-color histogram G_h jointly to obtain a more accurate refined hand region map, specifically:
Step (a): traverse all rows of the original image starting from row 1, denoting the current row index i, 1 ≤ i ≤ height, and execute the operations of Steps (b)-(c) for each row of the original image, where the original image size is height × width;
Step (b): for the i-th row of the original image, traverse all pixel positions of the row from left to right, denoting the current column index j, 1 ≤ j ≤ width; check whether an edge exists at pixel (i, j) of the edge image G_e, and store the edge point coordinates in the array EdgePoint in traversal order; let the number of edge points contained in the i-th row be K_i;
Step (c): for the i-th row of the original image, the K_i edge points in the array EdgePoint define K_i − 1 intervals; for the k-th interval, 1 ≤ k ≤ K_i − 1, judge whether the entire interval belongs to the hand region: first compute, from the hand region binary image G_h, the number N_i of pixels in the k-th interval that belong to the hand region, and compute the percentage P_i of these pixels among the total pixels of the interval; when P_i exceeds a preset threshold T_P, the k-th interval is judged to belong to the hand region, and all pixels in the interval are marked as hand region pixels; traverse all K_i − 1 intervals and execute the same operation;
Step (d): execute the same operations of Steps (b)-(c) for each column of the original image, and combine the horizontal and vertical refinement results to obtain the final refined hand region.
3. The vision-based gesture video image identification system according to claim 2, wherein the gesture video image identification system further comprises a gesture feature extraction module, a classifier training module, and an identification module;
gesture feature extraction module: used to perform feature value serialization on the above refined hand region map, specifically: simple shape descriptors are used to represent the hand shape, the descriptors including the convexity (convexity), principal axis ratio (ratio of principal axes), and circular variance (circular variance) of the hand contour, while a code sequence of the hand motion direction (orientation) is used to represent the hand motion trajectory, building a dynamic direction code sequence;
classifier training module: used to construct the classifier of dynamic gestures using hidden Markov models (Hidden Markov Model, HMM); each dynamic gesture class is modeled by one HMM, and the output of the classifier training module is a dynamic gesture database containing a series of trained HMMs, each HMM corresponding to one dynamic gesture class;
identification module: used, when a new gesture of unknown class is input, to compute the matching degree between the new gesture and each HMM in the dynamic gesture database, and to select the dynamic gesture class represented by the best-matching model as the recognition result.
4. A vision-based gesture video image identification method, the gesture video image identification method executing the following steps:
(1) extracting the edges of the hand image from the hand in the video image using the Canny edge detection operator, and further extracting the hand contour using a Laplacian contour extraction method, obtaining the gesture edge map G_e;
(2) extracting the skin-color histogram of the hand region using a hand region binarization method based on the hand skin-color histogram; after the skin-color histogram is obtained, removing salt-and-pepper noise in the hand region image by image smoothing and denoising, and filling tiny holes and narrow gaps in the hand region using morphological processing so as to improve the quality of the hand region in the extracted skin-color histogram, obtaining the gesture skin-color histogram G_h;
(3) processing the gesture edge map G_e and the gesture skin-color histogram G_h jointly to obtain a more accurate refined hand region map.
5. The vision-based gesture video image identification method according to claim 4, characterized in that the processing of the gesture edge map G_e and the gesture skin-color histogram G_h jointly to obtain a more accurate refined hand region map is specifically:
Step (a): traverse all rows of the original image starting from row 1, denoting the current row index i, 1 ≤ i ≤ height, and execute the operations of Steps (b)-(c) for each row of the original image, where the original image size is height × width;
Step (b): for the i-th row of the original image, traverse all pixel positions of the row from left to right, denoting the current column index j, 1 ≤ j ≤ width; check whether an edge exists at pixel (i, j) of the edge image G_e, and store the edge point coordinates in the array EdgePoint in traversal order; let the number of edge points contained in the i-th row be K_i;
Step (c): for the i-th row of the original image, the K_i edge points in the array EdgePoint define K_i − 1 intervals; for the k-th interval, 1 ≤ k ≤ K_i − 1, judge whether the entire interval belongs to the hand region: first compute, from the hand region binary image G_h, the number N_i of pixels in the k-th interval that belong to the hand region, and compute the percentage P_i of these pixels among the total pixels of the interval; when P_i exceeds a preset threshold T_P, the k-th interval is judged to belong to the hand region, and all pixels in the interval are marked as hand region pixels; traverse all K_i − 1 intervals and execute the same operation;
Step (d): execute the same operations of Steps (b)-(c) for each column of the original image, and combine the horizontal and vertical refinement results to obtain the final refined hand region.
6. The vision-based gesture video image identification method according to claim 5, wherein the gesture video image identification method further comprises a gesture feature extraction step, a classifier training step, and a gesture identification step;
gesture feature extraction step: performing feature value serialization on the above refined hand region map, specifically: simple shape descriptors are used to represent the hand shape, the descriptors including the convexity (convexity), principal axis ratio (ratio of principal axes), and circular variance (circular variance) of the hand contour, while a code sequence of the hand motion direction (orientation) is used to represent the hand motion trajectory, building a dynamic direction code sequence;
classifier training step: constructing the classifier of dynamic gestures using hidden Markov models (Hidden Markov Model, HMM); each dynamic gesture class is modeled by one HMM, and the output of the classifier training step is a dynamic gesture database containing a series of trained HMMs, each HMM corresponding to one dynamic gesture class;
gesture identification step: when a new gesture of unknown class is input, the gesture recognition system computes the matching degree between the new gesture and each HMM in the dynamic gesture database, and selects the dynamic gesture class represented by the best-matching model as the recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810462581.9A CN108647654A (en) | 2018-05-15 | 2018-05-15 | The gesture video image identification system and method for view-based access control model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108647654A true CN108647654A (en) | 2018-10-12 |
Family
ID=63755695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810462581.9A Withdrawn CN108647654A (en) | 2018-05-15 | 2018-05-15 | The gesture video image identification system and method for view-based access control model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108647654A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111435558A (en) * | 2018-12-26 | 2020-07-21 | 杭州萤石软件有限公司 | Identity authentication method and device based on biological characteristic multi-mode image |
CN109376730B (en) * | 2018-12-29 | 2021-07-16 | 龙岩学院 | Gesture recognition method and device |
CN109376730A (en) * | 2018-12-29 | 2019-02-22 | 龙岩学院 | A kind of gesture identification method and device |
CN111435429B (en) * | 2019-01-15 | 2024-03-01 | 北京伟景智能科技有限公司 | Gesture recognition method and system based on binocular stereo data dynamic cognition |
CN111435429A (en) * | 2019-01-15 | 2020-07-21 | 北京伟景智能科技有限公司 | Gesture recognition method and system based on binocular stereo data dynamic cognition |
CN110147764A (en) * | 2019-05-17 | 2019-08-20 | 天津科技大学 | A kind of static gesture identification method based on machine learning |
CN110751082B (en) * | 2019-10-17 | 2023-12-12 | 烟台艾易新能源有限公司 | Gesture instruction recognition method for intelligent home entertainment system |
CN110751082A (en) * | 2019-10-17 | 2020-02-04 | 烟台艾易新能源有限公司 | Gesture instruction identification method for intelligent home entertainment system |
CN111523435A (en) * | 2020-04-20 | 2020-08-11 | 安徽中科首脑智能医疗研究院有限公司 | Finger detection method, system and storage medium based on target detection SSD |
CN111695408A (en) * | 2020-04-23 | 2020-09-22 | 西安电子科技大学 | Intelligent gesture information recognition system and method and information data processing terminal |
CN111680618A (en) * | 2020-06-04 | 2020-09-18 | 西安邮电大学 | Dynamic gesture recognition method based on video data characteristics, storage medium and device |
CN111680618B (en) * | 2020-06-04 | 2023-04-18 | 西安邮电大学 | Dynamic gesture recognition method based on video data characteristics, storage medium and device |
CN112019892A (en) * | 2020-07-23 | 2020-12-01 | 深圳市玩瞳科技有限公司 | Behavior identification method, device and system for separating client and server |
CN111897433A (en) * | 2020-08-04 | 2020-11-06 | 吉林大学 | Method for realizing dynamic gesture recognition and control in integrated imaging display system |
CN112989925A (en) * | 2021-02-02 | 2021-06-18 | 豪威芯仑传感器(上海)有限公司 | Method and system for identifying hand sliding direction |
CN112949512A (en) * | 2021-03-08 | 2021-06-11 | 豪威芯仑传感器(上海)有限公司 | Dynamic gesture recognition method, gesture interaction method and interaction system |
CN114724243A (en) * | 2022-03-29 | 2022-07-08 | 赵新博 | Bionic action recognition system based on artificial intelligence |
CN115111964A (en) * | 2022-06-02 | 2022-09-27 | 中国人民解放军东部战区总医院 | MR holographic intelligent helmet for individual training |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108647654A (en) | The gesture video image identification system and method for view-based access control model | |
CN108921011A (en) | A kind of dynamic hand gesture recognition system and method based on hidden Markov model | |
CN107168527B (en) | The first visual angle gesture identification and exchange method based on region convolutional neural networks | |
CN104463250B (en) | A kind of Sign Language Recognition interpretation method based on Davinci technology | |
Zhou et al. | Tonguenet: accurate localization and segmentation for tongue images using deep neural networks | |
CN101763515B (en) | Real-time gesture interaction method based on computer vision | |
CN108595014A (en) | A kind of real-time dynamic hand gesture recognition system and method for view-based access control model | |
CN108509839A (en) | One kind being based on the efficient gestures detection recognition methods of region convolutional neural networks | |
CN109086660A (en) | Training method, equipment and the storage medium of multi-task learning depth network | |
CN111460976B (en) | Data-driven real-time hand motion assessment method based on RGB video | |
CN107657233A (en) | Static sign language real-time identification method based on modified single multi-target detection device | |
KR20130013122A (en) | Apparatus and method for detecting object pose | |
CN103971102A (en) | Static gesture recognition method based on finger contour and decision-making trees | |
EP3908964A1 (en) | Detecting pose using floating keypoint(s) | |
CN109033953A (en) | Training method, equipment and the storage medium of multi-task learning depth network | |
CN105975934A (en) | Dynamic gesture identification method and system for augmented reality auxiliary maintenance | |
CN109598234A (en) | Critical point detection method and apparatus | |
CN109558855B (en) | A kind of space gesture recognition methods combined based on palm contour feature with stencil matching method | |
CN108460790A (en) | A kind of visual tracking method based on consistency fallout predictor model | |
CN109101869A (en) | Test method, equipment and the storage medium of multi-task learning depth network | |
CN106599785A (en) | Method and device for building human body 3D feature identity information database | |
CN110210426A (en) | Method for estimating hand posture from single color image based on attention mechanism | |
CN103985143A (en) | Discriminative online target tracking method based on videos in dictionary learning | |
CN109034012A (en) | First person gesture identification method based on dynamic image and video sequence | |
CN106408579A (en) | Video based clenched finger tip tracking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20181012 |
|
WW01 | Invention patent application withdrawn after publication |