CN107633205B - lip motion analysis method, device and storage medium - Google Patents
- Publication number: CN107633205B (application CN201710708364.9A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/00: Pattern recognition (G: Physics; G06: Computing; G06F: Electric digital data processing)
Abstract
The invention discloses a lip motion analysis method, device and storage medium. The method comprises: acquiring a real-time image captured by a photographic device and extracting a real-time facial image from it; inputting the real-time facial image into a pre-trained lip averaging model to identify t lip feature points representing the lip position in the real-time facial image; determining a lip region from the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether it is a human lip region; and, if so, calculating the direction and distance of lip motion in the real-time facial image from the x and y coordinates of the t lip feature points. The invention calculates the motion information of the lips in the real-time facial image from the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of lip motion.
Description
Technical field

The present invention relates to the field of computer vision processing, and more particularly to a lip motion analysis method, device and computer-readable storage medium.
Background art

Lip motion capture is a biometric technology that identifies a user's lip movements based on facial feature information. It is now widely applied, playing an important role in fields such as access control, attendance and identity verification, and bringing convenience to people's lives. The common approach in existing products is deep learning: a classification model of lip features is trained, and that model is then used to judge the state of the lips.

However, when lip features are trained by deep learning, the number of recognizable lip states depends entirely on the types of lip samples. For example, to judge "mouth open" and "mouth closed", large numbers of open-mouth and closed-mouth samples must be collected; if "pouting" is later to be judged as well, large numbers of pouting samples must be collected and the model retrained. This is not only time-consuming but also prevents real-time capture. In addition, judging lip features with such a classification model cannot determine whether an identified lip region is actually a human lip region.
Summary of the invention

The present invention provides a lip motion analysis method, device and computer-readable storage medium, whose main purpose is to calculate the motion information of the lips in a real-time facial image from the coordinates of lip feature points, thereby realizing analysis of the lip region and real-time capture of lip motion.

To achieve the above object, the present invention provides an electronic device comprising a memory, a processor and a photographic device. The memory stores a lip motion analysis program which, when executed by the processor, implements the following steps:

Real-time facial image acquisition step: acquire a real-time image captured by the photographic device, and extract a real-time facial image from it using a face recognition algorithm;

Feature point recognition step: input the real-time facial image into a pre-trained lip averaging model, and use the model to identify t lip feature points representing the lip position in the real-time facial image;

Lip region identification step: determine a lip region from the t lip feature points, input the lip region into a pre-trained lip classification model, and judge whether it is a human lip region; and

Lip motion judgment step: if the lip region is a human lip region, calculate the direction and distance of lip motion in the real-time facial image from the x and y coordinates of the t lip feature points.

Optionally, when the lip motion analysis program is executed by the processor, the following step is also implemented:

Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time facial image acquisition step.

Optionally, the training of the lip averaging model comprises:

establishing a first sample library of n facial images, and marking t feature points at the lip position of each facial image in the first sample library, the t feature points being evenly distributed over the upper and lower lips and the left and right lip corners; and

training a facial feature recognition model with the facial images marked with lip feature points to obtain the lip averaging model of the face.

Optionally, the training of the lip classification model comprises:

collecting m lip positive sample images and k lip negative sample images to form a second sample library;

extracting the local features of each lip positive sample image and each lip negative sample image; and

training a support vector machine (SVM) classifier with the lip positive sample images, the lip negative sample images and their local features to obtain the lip classification model of the face.

Optionally, the lip motion judgment step comprises:

calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time facial image to judge the degree of mouth opening;

connecting the left outer lip-corner feature point with the nearest feature points on the outer contours of the upper and lower lips to form vectors V1 and V2, and calculating the angle between V1 and V2 to obtain the degree of leftward lip-corner slant; and

connecting the right outer lip-corner feature point with the nearest feature points on the outer contours of the upper and lower lips to form vectors V3 and V4, and calculating the angle between V3 and V4 to obtain the degree of rightward lip-corner slant.
In addition, to achieve the above object, the present invention also provides a lip motion analysis method, comprising:

a real-time facial image acquisition step: acquiring a real-time image captured by a photographic device, and extracting a real-time facial image from it using a face recognition algorithm;

a feature point recognition step: inputting the real-time facial image into a pre-trained lip averaging model, and using the model to identify t lip feature points representing the lip position in the real-time facial image;

a lip region identification step: determining a lip region from the t lip feature points, inputting the lip region into a pre-trained lip classification model, and judging whether it is a human lip region; and

a lip motion judgment step: if the lip region is a human lip region, calculating the direction and distance of lip motion in the real-time facial image from the x and y coordinates of the t lip feature points.

Optionally, the lip motion analysis method further comprises:

a prompt step: when the lip classification model judges that the lip region is not a human lip region, prompting that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and returning to the real-time facial image acquisition step.

Optionally, the training of the lip averaging model, the training of the lip classification model, and the refinement of the lip motion judgment step are as described above for the electronic device.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a lip motion analysis program which, when executed by a processor, implements any of the steps of the lip motion analysis method described above.

The lip motion analysis method, device and computer-readable storage medium proposed by the present invention identify lip feature points in a real-time facial image, judge whether the region formed by the lip feature points is a human lip region and, if so, calculate the motion information of the lips from the coordinates of the lip feature points. Samples of the various lip movements do not need to be collected for deep learning, and both analysis of the lip region and real-time capture of lip motion can be realized.
Brief description of the drawings

Fig. 1 is a schematic diagram of a preferred embodiment of the electronic device of the present invention;

Fig. 2 is a functional block diagram of the lip motion analysis program in Fig. 1;

Fig. 3 is a flow chart of a preferred embodiment of the lip motion analysis method of the present invention;

Fig. 4 is a detailed flow diagram of step S40 of the lip motion analysis method of the present invention.

The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiments

It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.

The present invention provides an electronic device 1. Referring to Fig. 1, a schematic diagram of a preferred embodiment of the electronic device 1 of the present invention is shown.

In the present embodiment, the electronic device 1 may be a terminal device with computing capability, such as a server, smart phone, tablet computer, portable computer or desktop computer.
The electronic device 1 comprises a processor 12, a memory 11, a photographic device 13, a network interface 14 and a communication bus 15. The photographic device 13 is installed in a particular place, such as an office or a monitored area; it captures real-time images of targets entering that place and transmits the captured real-time images to the processor 12 through the network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The communication bus 15 realizes connection and communication between these components.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card or card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk. In other embodiments, the readable storage medium may also be an external memory of the electronic device 1, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card or flash card equipped on the electronic device 1.

In the present embodiment, the readable storage medium of the memory 11 is generally used to store the lip motion analysis program 10 installed on the electronic device 1, the facial image sample library, the human lip sample library, and the constructed and trained lip averaging model and lip classification model. The memory 11 may also be used to temporarily store data that has been output or will be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), microprocessor or other data processing chip, which runs the program code or processes the data stored in the memory 11, for example executing the lip motion analysis program 10.
Fig. 1 shows only the electronic device 1 with components 11-15 and the lip motion analysis program 10; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface, which may comprise an input unit such as a keyboard, a speech input device with speech recognition capability such as a microphone, and a speech output device such as a loudspeaker or headphones. Optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may also include a display, which may also be called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display is used to show the information processed in the electronic device 1 and to present a visual user interface.
Optionally, the electronic device 1 further includes a touch sensor. The area provided by the touch sensor for the user's touch operations is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor or the like, and may include not only a contact touch sensor but also a proximity touch sensor. Moreover, the touch sensor may be a single sensor or multiple sensors arranged, for example, in an array.
In addition, the area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, on which the device detects user-triggered touch operations.

Optionally, the electronic device 1 may also include an RF (Radio Frequency) circuit, sensors, an audio circuit and the like, which are not described here.
In the device embodiment shown in Fig. 1, the memory 11, as a computer storage medium, may include an operating system and the lip motion analysis program 10; the processor 12 implements the following steps when executing the lip motion analysis program 10 stored in the memory 11:

acquiring a real-time image captured by the photographic device 13; extracting a real-time facial image from the real-time image using a face recognition algorithm; calling the lip averaging model and the lip classification model from the memory 11; inputting the real-time facial image into the lip averaging model to identify the lip feature points in it; inputting the lip region determined by the lip feature points into the lip classification model and judging whether the region is a human lip region; if so, calculating the motion information of the lips in the real-time facial image from the coordinates of the lip feature points, otherwise returning to the real-time facial image acquisition step.
In other embodiments, the lip motion analysis program 10 may also be divided into one or more modules stored in the memory 11 and executed by the processor 12 to carry out the present invention. A module in the present invention refers to a series of computer program instruction segments capable of completing a specific function.
Referring to Fig. 2, a functional block diagram of the lip motion analysis program 10 in Fig. 1 is shown. The lip motion analysis program 10 may be divided into an acquisition module 110, an identification module 120, a judgment module 130, a calculation module 140 and a prompt module 150.
The acquisition module 110 acquires the real-time image captured by the photographic device 13 and extracts a real-time facial image from it using a face recognition algorithm. When the photographic device 13 captures a real-time image, it sends the image to the processor 12; after the processor 12 receives the real-time image, the acquisition module 110 extracts the real-time facial image using the face recognition algorithm.

Specifically, the face recognition algorithm used to extract the real-time facial image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, etc.
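As a rough sketch, this acquisition step could look as follows in Python, assuming OpenCV's Haar cascade detector stands in for the unspecified face recognition algorithm; all helper names here are invented for illustration:

```python
import numpy as np

def largest_face(rects):
    """Pick the largest detected face rectangle (x, y, w, h); None if empty."""
    if len(rects) == 0:
        return None
    return max(rects, key=lambda r: r[2] * r[3])

def crop_face(frame, rect):
    """Crop a face region out of the real-time image."""
    x, y, w, h = rect
    return frame[y:y + h, x:x + w]

def extract_realtime_face(frame):
    """Detect and crop one face from a BGR frame (assumes opencv-python)."""
    import cv2  # imported lazily: only this step needs OpenCV
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    rect = largest_face(det.detectMultiScale(gray, 1.1, 5))
    return None if rect is None else crop_face(frame, rect)
```

Keeping only the largest detection is one simple policy for a single-subject scene; any of the algorithms listed above could replace the cascade.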
The identification module 120 inputs the real-time facial image into the pre-trained lip averaging model and uses the model to identify the t lip feature points representing the lip position in the real-time facial image.

Suppose the lip averaging model has 20 evenly distributed lip feature points. After calling the trained lip averaging model from the memory 11, the identification module 120 aligns the real-time facial image with the lip averaging model, and then uses a feature extraction algorithm to search the real-time facial image for 20 lip feature points matching the 20 lip feature points of the model. The lip averaging model of the face is constructed and trained in advance; its specific embodiment is described in the lip motion analysis method below.

Suppose the 20 lip feature points identified from the real-time facial image are still denoted P1 to P20; the coordinates of the 20 lip feature points are (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
As shown in Fig. 2, the upper and lower lips each have 8 feature points (denoted P1 to P8 and P9 to P16 respectively), and the left and right lip corners each have 2 feature points (denoted P17 to P18 and P19 to P20 respectively). Of the 8 upper-lip feature points, 5 lie on the outer contour of the upper lip (P1 to P5) and 3 on the inner contour (P6 to P8, with P7 being the inner-center feature point of the upper lip). Of the 8 lower-lip feature points, 5 lie on the outer contour of the lower lip (P9 to P13) and 3 on the inner contour (P14 to P16, with P15 being the inner-center feature point of the lower lip). Of the 2 feature points at each lip corner, 1 lies on the outer contour of the lip (P18 and P20, hereinafter called the outer lip-corner feature points) and 1 on the inner contour (P17 and P19, hereinafter called the inner lip-corner feature points).
In the present embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. The SIFT algorithm extracts the local feature of each lip feature point from the lip averaging model of the face, selects one lip feature point as a reference feature point, and searches the real-time facial image for a feature point whose local feature is the same as or similar to that of the reference feature point (for example, the difference between the local features of the two feature points is within a preset range); this continues until all lip feature points have been found in the real-time facial image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, etc.
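The matching loop just described can be sketched with plain NumPy, treating the local descriptors as abstract vectors (the function name, the L2 metric and the threshold are illustrative assumptions, not the patent's implementation; SIFT, SURF or LBP descriptors could all be plugged in):

```python
import numpy as np

def match_feature_points(model_desc, cand_desc, cand_pts, max_dist=0.5):
    """For each model-point descriptor, return the candidate point whose
    local descriptor is nearest (L2), or None if nothing is within max_dist."""
    matched = []
    for d in model_desc:
        dists = np.linalg.norm(cand_desc - d, axis=1)  # distance to every candidate
        best = int(np.argmin(dists))
        matched.append(cand_pts[best] if dists[best] <= max_dist else None)
    return matched
```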
The judgment module 130 determines a lip region from the t lip feature points, inputs the lip region into the pre-trained lip classification model, and judges whether it is a human lip region. After the identification module 120 recognizes the 20 lip feature points in the real-time facial image, a lip region can be determined from the 20 lip feature points; the determined lip region is then input into the trained lip classification model, and whether it is a human lip region is judged from the model's result. The lip classification model is constructed and trained in advance; its specific embodiment is described in the lip motion analysis method below.
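One simple way to determine a lip region from the feature points, sketched here as an assumption (the patent does not fix the construction), is the padded bounding box of the t points:

```python
import numpy as np

def lip_region(points, margin=5):
    """Bounding box (x_min, y_min, x_max, y_max) of the lip feature points,
    padded by a small margin before it is fed to the classifier."""
    pts = np.asarray(points)
    x_min, y_min = pts.min(axis=0) - margin
    x_max, y_max = pts.max(axis=0) + margin
    return int(x_min), int(y_min), int(x_max), int(y_max)

def crop_lip(image, points, margin=5):
    """Crop the lip region out of the facial image (clamped to the image)."""
    x0, y0, x1, y1 = lip_region(points, margin)
    h, w = image.shape[:2]
    return image[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)]
```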
If the lip region is a human lip region, the calculation module 140 calculates the direction and distance of lip motion in the real-time facial image from the x and y coordinates of the t lip feature points.

Specifically, the calculation module 140 is used for:

calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time facial image to judge the degree of mouth opening;

connecting the left outer lip-corner feature point with the nearest feature points on the outer contours of the upper and lower lips to form vectors V1 and V2, and calculating the angle between V1 and V2 to obtain the degree of leftward lip-corner slant; and

connecting the right outer lip-corner feature point with the nearest feature points on the outer contours of the upper and lower lips to form vectors V3 and V4, and calculating the angle between V3 and V4 to obtain the degree of rightward lip-corner slant.
In the real-time facial image, the coordinate of the inner-center feature point P7 of the upper lip is (x7, y7) and that of the inner-center feature point P15 of the lower lip is (x15, y15). When the judgment module 130 has judged that the lip region is a human lip region, the distance between the two points is:

d = sqrt((x7 - x15)^2 + (y7 - y15)^2)

If d = 0, points P7 and P15 coincide, that is, the lips are in a closed state; if d > 0, the degree of mouth opening is judged from the size of d: the larger d is, the wider the mouth is open.
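The opening-degree computation is ordinary Euclidean distance between the two inner-center points; a minimal sketch:

```python
import math

def mouth_opening(p7, p15):
    """Distance d between the upper-lip inner-center point P7 and the
    lower-lip inner-center point P15; d == 0 means the lips are closed."""
    return math.hypot(p7[0] - p15[0], p7[1] - p15[1])
```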
The coordinate of the left outer lip-corner feature point P18 is (x18, y18), and the coordinates of the feature points P1 and P9 nearest to P18 on the outer contours of the upper and lower lips are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and with P9 forms the vectors V1 = (x1 - x18, y1 - y18) and V2 = (x9 - x18, y9 - y18), and the angle α between them is calculated as:

α = arccos((V1 · V2) / (|V1| |V2|))

where α is the angle between V1 and V2. The degree of leftward lip-corner slant can be judged from the size of this angle: the smaller the angle, the greater the slant.
Similarly, the coordinate of the right outer lip-corner feature point P20 is (x20, y20), and the coordinates of the feature points P5 and P13 nearest to P20 on the outer contours of the upper and lower lips are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and with P13 forms the vectors V3 = (x5 - x20, y5 - y20) and V4 = (x13 - x20, y13 - y20), and the angle β between them is calculated as:

β = arccos((V3 · V4) / (|V3| |V4|))

where β is the angle between V3 and V4. The degree of rightward lip-corner slant can be judged from the size of this angle: the smaller the angle, the greater the slant.
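The corner-angle computation follows directly from the arccos-of-dot-product formula; a minimal sketch (the clamping of the cosine is a numerical safeguard added here, and the points are assumed distinct):

```python
import math

def corner_angle(corner, upper_pt, lower_pt):
    """Angle in degrees at an outer lip-corner point between the vectors to
    the nearest upper- and lower-lip outer-contour points (e.g. P18 to P1
    and P18 to P9); a smaller angle means a more slanted lip corner."""
    v1 = (upper_pt[0] - corner[0], upper_pt[1] - corner[1])
    v2 = (lower_pt[0] - corner[0], lower_pt[1] - corner[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```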
When the lip classification model judges that the lip region is not a human lip region, the prompt module 150 prompts that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and the flow returns to the real-time image acquisition step to capture the next real-time image. That is, if, after the lip region determined by the 20 lip feature points is input into the lip classification model, the model's result indicates that it is not a human lip region, the prompt module 150 prompts that no human lip region was recognized and that the lip motion judgment step cannot be carried out; meanwhile, a new real-time image captured by the photographic device is acquired and the subsequent steps are performed.
The electronic device 1 proposed in this embodiment extracts a real-time facial image from the real-time image, identifies the lip feature points in the real-time facial image with the lip averaging model, analyzes the lip region determined by the lip feature points with the lip classification model and, if the lip region is a human lip region, calculates the motion information of the lips in the real-time facial image from the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of lip motion.
In addition, the present invention also provides a lip motion analysis method. Referring to Fig. 3, a flow chart of a preferred embodiment of the lip motion analysis method of the present invention is shown. The method may be executed by a device, which may be realized by software and/or hardware.

In the present embodiment, the lip motion analysis method includes steps S10 to S50.
Step S10: acquire a real-time image captured by the photographic device, and extract a real-time facial image from it using a face recognition algorithm. When the photographic device captures a real-time image, it sends the image to the processor; after the processor receives the real-time image, the real-time facial image is extracted using the face recognition algorithm.

Specifically, the face recognition algorithm used to extract the real-time facial image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, etc.
Step S20: input the real-time facial image into the pre-trained lip averaging model, and use the model to identify the t lip feature points representing the lip position in the real-time facial image.

A first sample library of n facial images is established, and t feature points are manually marked at the lip position of each facial image in the first sample library; the t feature points are evenly distributed over the upper and lower lips and the left and right lip corners.

A facial feature recognition model is trained with the facial images marked with lip feature points to obtain the lip averaging model of the face. The facial feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, which is formulated as follows:

Ŝ^(t+1) = Ŝ^(t) + τ_t(I, Ŝ^(t))

where t is the cascade index and τ_t(·, ·) is the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees.

Here Ŝ^(t) is the shape estimate of the current model. Each regressor τ_t(·, ·) predicts an increment from the input image I and Ŝ^(t), and this increment is added to the current shape estimate to improve the current model; the regressor at every stage predicts from the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is a sample image and S is the shape feature vector composed of the feature points in the sample image.
During model training, with n facial images in the sample library and t = 20 (i.e., 20 feature points per sample image), a subset of the feature points of all sample images (for example, 15 of the 20 feature points of each sample image, taken at random) is used to train the first regression tree; the residual between the prediction of the first regression tree and the true values of the subset (the weighted average of the 15 feature points taken from each sample image) is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values of the subset is close to 0. All the regression trees of the ERT algorithm are thereby obtained, the lip averaging model of the face is derived from these regression trees, and the model file and the sample library are saved to the memory. Because 20 lip feature points are marked in the training sample images, the trained lip averaging model can be used to identify 20 lip feature points in a facial image.
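The cascade update Ŝ^(t+1) = Ŝ^(t) + τ_t(I, Ŝ^(t)) can be illustrated with a deliberately small numerical toy, in which linear least-squares regressors fitted to residuals (with shrinkage, in the spirit of boosting) stand in for the regression trees; everything here is invented for illustration, since real ERT regresses shapes from image-indexed pixel features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each "image" is a feature vector, and the true shape S is a
# linear map of it (a pure illustration, not the patent's training data).
X = rng.normal(size=(200, 6))            # stand-in for image features of I
S_true = X @ rng.normal(size=(6, 4))     # true shape vectors S

S_hat = np.zeros_like(S_true)            # initial shape estimate S^(0)
shrinkage = 0.5                          # each stage is a deliberately weak learner
residual_norms = []
for stage in range(5):                   # cascade of 5 regressors tau_t
    residual = S_true - S_hat            # what the current model still misses
    W_t, *_ = np.linalg.lstsq(X, residual, rcond=None)  # fit tau_t to the residual
    S_hat = S_hat + shrinkage * (X @ W_t)  # S^(t+1) = S^(t) + tau_t(I, S^(t))
    residual_norms.append(float(np.linalg.norm(S_true - S_hat)))
```

Each stage corrects part of what the previous stages missed, so the residual norm shrinks monotonically along the cascade, mirroring the "train each tree on the residual of the previous one" procedure above.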
After calling trained lip averaging model in memory, real-time face image and lip averaging model are carried out
Then alignment is searched for and 20 lip features of the lip averaging model using feature extraction algorithm in the real-time face image
The matched 20 lip features point of point.Assuming that the 20 lip feature points identified from the real-time face image are still denoted as P1
The coordinate of~P20,20 lip features point are respectively as follows: (x1、y1)、(x2、y2)、(x3、y3)、…、(x20、y20)。
As shown in Fig. 2, the upper and lower lips each have 8 feature points (denoted P1–P8 and P9–P16 respectively), and the left and right lip corners each have 2 feature points (denoted P17–P18 and P19–P20 respectively). Of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip (P1–P5) and 3 on its inner contour (P6–P8, where P7 is the upper-lip inner-center feature point). Of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip (P9–P13) and 3 on its inner contour (P14–P16, where P15 is the lower-lip inner-center feature point). Of the 2 feature points at each lip corner, 1 lies on the outer contour of the lips (P18 and P20, hereinafter the outer lip-corner feature points) and 1 on the inner contour (P17 and P19, hereinafter the inner lip-corner feature points).
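The P1–P20 layout described above can be transcribed directly into code; the group names below are illustrative labels of this sketch, not identifiers from the patent.

```python
# Index layout of the 20 lip feature points, transcribed from the description.
# Points are 1-based (P1..P20), as in Fig. 2 of the patent.
LIP_POINTS = {
    "upper_outer":  [1, 2, 3, 4, 5],      # outer contour of the upper lip
    "upper_inner":  [6, 7, 8],            # inner contour; P7 = upper-lip inner center
    "lower_outer":  [9, 10, 11, 12, 13],  # outer contour of the lower lip
    "lower_inner":  [14, 15, 16],         # inner contour; P15 = lower-lip inner center
    "left_corner":  [17, 18],             # P17 inner, P18 outer lip-corner point
    "right_corner": [19, 20],             # P19 inner, P20 outer lip-corner point
}

UPPER_CENTER, LOWER_CENTER = 7, 15        # used for the mouth-openness distance
OUTER_CORNERS = (18, 20)                  # used for the lip-corner angle calculations
```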
Specifically, the feature extraction algorithm may also be the SIFT algorithm, SURF algorithm, LBP algorithm, HOG algorithm, etc.
Step S30: determine the lip region according to the t lip feature points, input the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region.
Collect m lip positive sample images and k lip negative sample images to form the second sample database. A lip positive sample image is an image containing a human lip; lip portions can be cropped from the face image sample database to serve as lip positive sample images. A lip negative sample image is one in which the lip region of a person is incomplete, or in which the lip in the image is not human (e.g. an animal's). Together, the lip positive sample images and the negative sample images form the second sample database.
Extract the local features of each lip positive sample image and each lip negative sample image. A feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) features of the lip sample images. Since color information contributes little in lip sample images, each image is usually converted to grayscale first and the whole image is normalized. The gradients of the image along the horizontal and vertical directions are then computed, and from them the gradient orientation value at each pixel location is calculated, so as to capture contour, silhouette, and some texture information while further weakening the influence of illumination. The whole image is then divided into cells, and a gradient orientation histogram is constructed for each cell to count and quantize the local image gradient information, yielding a feature description vector for each local image region. The cells are then grouped into larger blocks; because changes in local illumination and in foreground-background contrast make the range of gradient magnitudes very large, the gradient magnitudes must be normalized within each block, which further compresses illumination, shadows, and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature description vector.
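As an illustrative sketch of the HOG pipeline just described (grayscale gradients → per-cell orientation histograms → block normalization → concatenation), the following NumPy function uses common defaults (8×8-pixel cells, 2×2-cell blocks, 9 orientation bins) that the patent does not specify.

```python
import numpy as np

def hog(gray, cell=8, block=2, bins=9):
    """Minimal HOG: per-pixel gradients, per-cell orientation histograms,
    L2-normalized overlapping blocks, concatenated into one feature vector."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)                   # vertical / horizontal gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation in [0, 180)
    h, w = gray.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch):                          # per-cell orientation histogram
        for j in range(cw):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = (a / (180 / bins)).astype(int) % bins
            for b in range(bins):                # magnitude-weighted vote per bin
                hist[i, j, b] = m[idx == b].sum()
    feats = []
    for i in range(ch - block + 1):              # overlapping blocks of cells
        for j in range(cw - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))   # block normalization
    return np.concatenate(feats)
```

For a 32×32 crop with these defaults this yields 3×3 blocks of 2×2×9 values, i.e. a 324-dimensional descriptor.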
The lip positive sample images, lip negative sample images, and the extracted HOG features are used to train a Support Vector Machine (SVM) classifier, yielding the lip classification model of the face.
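A linear SVM of the kind used for the lip classification model can be sketched without a library by subgradient descent on the regularized hinge loss (Pegasos-style); the hyperparameters below are assumptions of this sketch, not values from the patent.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Subgradient descent on the regularized hinge loss.
    X: (n, d) feature vectors; y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # inside margin: hinge subgradient step
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                # outside margin: only weight decay
                w = (1 - lr * lam) * w
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)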
After the 20 lip feature points are identified in the real-time face image, a lip region can be determined from those 20 points. The determined lip region is then input into the trained lip classification model, and whether it is a human lip region is judged from the model's result.
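One simple way to determine a lip region from the feature points is a padded bounding box; the patent does not specify the construction, so the box logic and the margin below are assumptions of this sketch.

```python
def lip_bounding_box(points, margin=0.15):
    """points: iterable of (x, y) pairs, e.g. the coordinates of P1..P20.
    Returns (x0, y0, x1, y1) of the bounding box of the points, padded on
    each side by `margin` times the box width/height."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    mx, my = (x1 - x0) * margin, (y1 - y0) * margin
    return (x0 - mx, y0 - my, x1 + mx, y1 + my)
```

The resulting box can be used to crop the lip region that is fed to the classification model.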
Step S40: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in that image.
Referring to Fig. 4, which is a refined flow diagram of step S40 of the lip motion analysis method of the present invention. Specifically, step S40 includes:
Step S41: calculate the distance between the upper-lip inner-center feature point and the lower-lip inner-center feature point in the real-time face image to judge the opening degree of the lips;
Step S42: connect the left outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors; calculate the angle between the two vectors to obtain the degree of the left lip-corner curl; and
Step S43: connect the right outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors; calculate the angle between the two vectors to obtain the degree of the right lip-corner curl.
In the real-time face image, the coordinate of the upper-lip inner-center feature point P7 is (x7, y7) and the coordinate of the lower-lip inner-center feature point P15 is (x15, y15). Given that the lip region is a human lip region, the distance between the two points is:

d = √((x7 − x15)² + (y7 − y15)²)

If d = 0, points P7 and P15 coincide, i.e. the lips are closed. If d > 0, the opening degree of the lips is judged from the magnitude of d: the larger d is, the wider the lips are open.
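The openness judgment follows directly from the distance formula above; the tolerance parameter in this sketch is an assumption (defaulting to the d = 0 closed condition in the text) that can absorb landmark jitter in practice.

```python
from math import hypot

def mouth_openness(p7, p15, tol=0.0):
    """Distance d between the upper (P7) and lower (P15) inner-center points.
    Returns (d, is_open); the lips count as closed when d <= tol."""
    d = hypot(p7[0] - p15[0], p7[1] - p15[1])
    return d, d > tol
```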
The coordinate of the left outer lip-corner feature point P18 is (x18, y18), and the coordinates of the feature points nearest to P18 on the outer contours of the upper and lower lips, P1 and P9, are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and with P9 forms the vectors V1 = (x1 − x18, y1 − y18) and V2 = (x9 − x18, y9 − y18). The angle α between V1 and V2 is calculated as follows:

cos α = (V1 · V2) / (|V1| · |V2|)

where α denotes the angle between V1 and V2. By calculating the size of this angle, the degree of the left lip-corner curl can be judged: the smaller the angle, the greater the curl.
Similarly, the coordinate of the right outer lip-corner feature point P20 is (x20, y20), and the coordinates of the feature points nearest to P20 on the outer contours of the upper and lower lips, P5 and P13, are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and with P13 forms the vectors V3 = (x5 − x20, y5 − y20) and V4 = (x13 − x20, y13 − y20). The angle β between V3 and V4 is calculated as follows:

cos β = (V3 · V4) / (|V3| · |V4|)

where β denotes the angle between V3 and V4. By calculating the size of this angle, the degree of the right lip-corner curl can be judged: the smaller the angle, the greater the curl.
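Both corner angles use the same dot-product formula; the sketch below computes the angle at a corner point given its two nearest outer-contour neighbors (e.g. P18 with P1 and P9, or P20 with P5 and P13).

```python
from math import acos, degrees, hypot

def corner_angle(corner, up, low):
    """Angle at `corner` between the vectors corner->up and corner->low, in degrees."""
    ax, ay = up[0] - corner[0], up[1] - corner[1]
    bx, by = low[0] - corner[0], low[1] - corner[1]
    dot = ax * bx + ay * by
    na, nb = hypot(ax, ay), hypot(bx, by)
    c = max(-1.0, min(1.0, dot / (na * nb)))   # clamp for floating-point safety
    return degrees(acos(c))
```

Per the text, a smaller angle indicates a stronger lip-corner curl.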
Step S50: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region has been detected in the current real-time image and that lip motion cannot be judged; the flow then returns to the real-time image capture step to capture the next real-time image. That is, after the lip region determined by the 20 lip feature points is input into the lip classification model, if the model result judges that it is not a human lip region, the system prompts that no human lip region was identified and the next lip motion judgment step cannot be performed; meanwhile, a new real-time image captured by the camera is acquired and the subsequent steps are carried out.
In the lip motion analysis method proposed by this embodiment, the lip averaging model is used to identify the lip feature points in the real-time face image, and the lip classification model is used to analyze the lip region determined by those feature points. If the lip region is a human lip region, the motion information of the lips in the real-time face image is calculated from the coordinates of the lip feature points, realizing the analysis of the lip region and the real-time capture of lip motion.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium. The computer-readable storage medium contains a lip motion analysis program which, when executed by a processor, realizes the following operations:
Model construction step: construct and train a facial feature recognition model to obtain the lip averaging model of the face, and train an SVM with lip sample images to obtain the lip classification model;
Real-time face image acquisition step: acquire a real-time image captured by the camera and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: input the real-time face image into the pre-trained lip averaging model, and use the lip averaging model to identify the t lip feature points representing the lip positions in the real-time face image;
Lip region recognition step: determine a lip region according to the t lip feature points, input the lip region into the pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in that image.
Optionally, when the lip motion analysis program is executed by the processor, the following operation is also realized:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
Optionally, the lip motion judgment step includes:
calculating the distance between the upper-lip inner-center feature point and the lower-lip inner-center feature point in the real-time face image to judge the opening degree of the lips;
connecting the left outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors, and calculating the angle between them to obtain the degree of the left lip-corner curl; and
connecting the right outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors, and calculating the angle between them to obtain the degree of the right lip-corner curl.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the specific embodiments of the lip motion analysis method described above, and are not repeated here.
It should be noted that, in this document, the terms "include" and "comprise", or any other variant thereof, are intended to cover non-exclusive inclusion, so that a process, device, article, or method that includes a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, device, article, or method. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, device, article, or method that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments. Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention and does not limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (7)
1. An electronic device, characterized in that the device includes a memory, a processor, and a camera, the memory containing a lip motion analysis program which, when executed by the processor, realizes the following steps:
Real-time face image acquisition step: acquire a real-time image captured by the camera and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: input the real-time face image into a pre-trained lip averaging model, and use the lip averaging model to identify the t lip feature points representing the lip positions in the real-time face image;
The training step of the lip averaging model includes: establishing a first sample database with n face images, and marking t feature points at the lip position in every face image in the first sample database; training a facial feature recognition model with the face images labeled with lip feature points to obtain the lip averaging model of the face, the facial feature recognition model being an ERT algorithm, which is formulated as follows:

Ŝ(t+1) = Ŝ(t) + τt(I, Ŝ(t))

where t indicates the cascade serial number, τt(·) indicates the regressor of the current stage, and Ŝ(t) is the shape estimate of the current model; each regressor τt(·) predicts an increment from the input image I and Ŝ(t). During model training, a subset of the feature points of all sample images is used to train the first regression tree, the residual between the prediction of the first regression tree and the true values of that subset is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values of the subset is close to 0, yielding all the regression trees of the ERT algorithm, from which the lip averaging model of the face is obtained;
Lip region recognition step: determine a lip region according to the t lip feature points, input the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in that image;
The lip averaging model has 20 lip feature points, wherein:
the upper and lower lips each have 8 feature points, and the left and right lip corners each have 2 feature points;
of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip and 3 on its inner contour, the feature point at the middle being the upper-lip inner-center feature point;
of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip and 3 on its inner contour, the feature point at the middle being the lower-lip inner-center feature point;
of the 2 feature points at each lip corner, 1 lies on the outer contour of the lips and is referred to as an outer lip-corner feature point, and 1 lies on the inner contour and is referred to as an inner lip-corner feature point;
The lip motion judgment step includes:
calculating the distance between the upper-lip inner-center feature point and the lower-lip inner-center feature point in the real-time face image to judge the opening degree of the lips;
connecting the left outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors, and calculating the angle between them to obtain the degree of the left lip-corner curl; and
connecting the right outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors, and calculating the angle between them to obtain the degree of the right lip-corner curl.
2. The electronic device according to claim 1, characterized in that, when the lip motion analysis program is executed by the processor, the following step is also realized:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
3. The electronic device according to claim 1 or 2, characterized in that the training step of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample database;
extracting the local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine classifier with the lip positive sample images, lip negative sample images, and their local features to obtain the lip classification model of the face.
4. A lip motion analysis method, characterized in that the method includes:
Real-time face image acquisition step: acquire a real-time image captured by a camera and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: input the real-time face image into a pre-trained lip averaging model, and use the lip averaging model to identify the t lip feature points representing the lip positions in the real-time face image; the training step of the lip averaging model includes: establishing a first sample database with n face images, and marking t feature points at the lip position in every face image in the first sample database; training a facial feature recognition model with the face images labeled with lip feature points to obtain the lip averaging model of the face, the facial feature recognition model being an ERT algorithm, which is formulated as follows:

Ŝ(t+1) = Ŝ(t) + τt(I, Ŝ(t))

where t indicates the cascade serial number, τt(·) indicates the regressor of the current stage, and Ŝ(t) is the shape estimate of the current model; each regressor τt(·) predicts an increment from the input image I and Ŝ(t). During model training, a subset of the feature points of all sample images is used to train the first regression tree, the residual between the prediction of the first regression tree and the true values of that subset is used to train the second tree, and so on, until the residual between the prediction of the Nth tree and the true values of the subset is close to 0, yielding all the regression trees of the ERT algorithm, from which the lip averaging model of the face is obtained;
Lip region recognition step: determine a lip region according to the t lip feature points, input the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image according to the x and y coordinates of the t lip feature points in that image;
The lip averaging model has 20 lip feature points, wherein:
the upper and lower lips each have 8 feature points, and the left and right lip corners each have 2 feature points;
of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip and 3 on its inner contour, the feature point at the middle being the upper-lip inner-center feature point;
of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip and 3 on its inner contour, the feature point at the middle being the lower-lip inner-center feature point;
of the 2 feature points at each lip corner, 1 lies on the outer contour of the lips and is referred to as an outer lip-corner feature point, and 1 lies on the inner contour and is referred to as an inner lip-corner feature point;
The lip motion judgment step includes:
calculating the distance between the upper-lip inner-center feature point and the lower-lip inner-center feature point in the real-time face image to judge the opening degree of the lips;
connecting the left outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors, and calculating the angle between them to obtain the degree of the left lip-corner curl; and
connecting the right outer lip-corner feature point to the feature points nearest to it on the outer contours of the upper and lower lips, forming two vectors, and calculating the angle between them to obtain the degree of the right lip-corner curl.
5. The lip motion analysis method according to claim 4, characterized in that the method further includes:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region has been detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
6. The lip motion analysis method according to claim 4 or 5, characterized in that the training step of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample database;
extracting the local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine classifier with the lip positive sample images, lip negative sample images, and their local features to obtain the lip classification model of the face.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium contains a lip motion analysis program which, when executed by a processor, realizes the steps of the lip motion analysis method according to any one of claims 4 to 6.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710708364.9A CN107633205B (en) | 2017-08-17 | 2017-08-17 | lip motion analysis method, device and storage medium |
PCT/CN2017/108749 WO2019033570A1 (en) | 2017-08-17 | 2017-10-31 | Lip movement analysis method, apparatus and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710708364.9A CN107633205B (en) | 2017-08-17 | 2017-08-17 | lip motion analysis method, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107633205A CN107633205A (en) | 2018-01-26 |
CN107633205B true CN107633205B (en) | 2019-01-18 |
Family
ID=61099627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710708364.9A Active CN107633205B (en) | 2017-08-17 | 2017-08-17 | lip motion analysis method, device and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107633205B (en) |
WO (1) | WO2019033570A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108710836B (en) * | 2018-05-04 | 2020-10-09 | 南京邮电大学 | Lip detection and reading method based on cascade feature extraction |
CN108763897A (en) * | 2018-05-22 | 2018-11-06 | 平安科技(深圳)有限公司 | Method of calibration, terminal device and the medium of identity legitimacy |
CN108874145B (en) * | 2018-07-04 | 2022-03-18 | 深圳美图创新科技有限公司 | Image processing method, computing device and storage medium |
CN110223322B (en) * | 2019-05-31 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Image recognition method and device, computer equipment and storage medium |
CN110738126A (en) * | 2019-09-19 | 2020-01-31 | 平安科技(深圳)有限公司 | Lip shearing method, device and equipment based on coordinate transformation and storage medium |
CN111241922B (en) * | 2019-12-28 | 2024-04-26 | 深圳市优必选科技股份有限公司 | Robot, control method thereof and computer readable storage medium |
CA3177529A1 (en) * | 2020-05-05 | 2021-11-11 | Ravindra Kumar Tarigoppula | System and method for controlling viewing of multimedia based on behavioural aspects of a user |
CN111259875B (en) * | 2020-05-06 | 2020-07-31 | 中国人民解放军国防科技大学 | Lip reading method based on self-adaptive semantic space-time diagram convolutional network |
CN113095146A (en) * | 2021-03-16 | 2021-07-09 | 深圳市雄帝科技股份有限公司 | Mouth state classification method, device, equipment and medium based on deep learning |
CN116405635A (en) * | 2023-06-02 | 2023-07-07 | 山东正中信息技术股份有限公司 | Multi-mode conference recording method and system based on edge calculation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101702199A (en) * | 2009-11-13 | 2010-05-05 | 深圳华为通信技术有限公司 | Smiling face detection method and device and mobile terminal |
CN104951730A (en) * | 2014-03-26 | 2015-09-30 | 联想(北京)有限公司 | Lip movement detection method, lip movement detection device and electronic equipment |
CN105975935A (en) * | 2016-05-04 | 2016-09-28 | 腾讯科技(深圳)有限公司 | Face image processing method and apparatus |
CN106529379A (en) * | 2015-09-15 | 2017-03-22 | 阿里巴巴集团控股有限公司 | Method and device for recognizing living body |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007094906A (en) * | 2005-09-29 | 2007-04-12 | Toshiba Corp | Characteristic point detection device and method |
CN104616438B (en) * | 2015-03-02 | 2016-09-07 | 重庆市科学技术研究院 | A kind of motion detection method of yawning for fatigue driving detection |
CN105139503A (en) * | 2015-10-12 | 2015-12-09 | 北京航空航天大学 | Lip moving mouth shape recognition access control system and recognition method |
CN106997451A (en) * | 2016-01-26 | 2017-08-01 | 北方工业大学 | Lip contour positioning method |
CN106250815B (en) * | 2016-07-05 | 2019-09-20 | 上海引波信息技术有限公司 | A kind of quick expression recognition method based on mouth feature |
CN106485214A (en) * | 2016-09-28 | 2017-03-08 | 天津工业大学 | A kind of eyes based on convolutional neural networks and mouth state identification method |
2017-08-17: CN CN201710708364.9A patent/CN107633205B/en active Active
2017-10-31: WO PCT/CN2017/108749 patent/WO2019033570A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
Research on Image-based Lip Feature Extraction and Mouth Shape Classification; Yang Hengxiang; China Master's Theses Full-text Database; 2015-05-15; I138-887
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107633204B (en) | Face occlusion detection method, apparatus and storage medium | |
CN107633205B (en) | lip motion analysis method, device and storage medium | |
CN107679448B (en) | Eyeball action-analysing method, device and storage medium | |
CN107633207B (en) | AU characteristic recognition methods, device and storage medium | |
CN109961009B (en) | Pedestrian detection method, system, device and storage medium based on deep learning | |
CN107679449B (en) | Lip motion method for catching, device and storage medium | |
US10635946B2 (en) | Eyeglass positioning method, apparatus and storage medium | |
Xu et al. | Human re-identification by matching compositional template with cluster sampling | |
CN107633206B (en) | Eyeball motion capture method, device and storage medium | |
CN112052186B (en) | Target detection method, device, equipment and storage medium | |
CN107679447A (en) | Facial characteristics point detecting method, device and storage medium | |
WO2019033573A1 (en) | Facial emotion identification method, apparatus and storage medium | |
Patruno et al. | People re-identification using skeleton standard posture and color descriptors from RGB-D data | |
US9575566B2 (en) | Technologies for robust two-dimensional gesture recognition | |
US20160092726A1 (en) | Using gestures to train hand detection in ego-centric video | |
US20170103284A1 (en) | Selecting a set of exemplar images for use in an automated image object recognition system | |
Dantone et al. | Augmented faces | |
CN110135421A (en) | Licence plate recognition method, device, computer equipment and computer readable storage medium | |
CN113298158A (en) | Data detection method, device, equipment and storage medium | |
Lahiani et al. | Hand pose estimation system based on Viola-Jones algorithm for android devices | |
CN110175500B (en) | Finger vein comparison method, device, computer equipment and storage medium | |
CN115223239A (en) | Gesture recognition method and system, computer equipment and readable storage medium | |
Sudha et al. | A fast and robust emotion recognition system for real-world mobile phone data | |
Medjram et al. | Automatic Hand Detection in Color Images based on skin region verification | |
CN111178310A (en) | Palm feature recognition method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1246925 Country of ref document: HK |
GR01 | Patent grant | ||