CN105447432B - A face anti-spoofing method based on local motion patterns - Google Patents

A face anti-spoofing method based on local motion patterns

Info

Publication number
CN105447432B
CN105447432B CN201410428040.6A
Authority
CN
China
Prior art keywords
face
motion
region
key point
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410428040.6A
Other languages
Chinese (zh)
Other versions
CN105447432A (en)
Inventor
杨健伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yang Jianwei
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201410428040.6A
Publication of CN105447432A
Application granted
Publication of CN105447432B

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention relates to a face anti-spoofing method based on local motion patterns, comprising the following steps: 1) detect the face image region captured by the camera and locate the face key points; 2) compute statistics of the motion information of the face and non-face areas within the local region around each face key point; 3) from the local motion information obtained at all key points, compute the local motion pattern of the face; 4) based on the local motion pattern of the face, judge whether the face is genuine or forged using a pre-trained pattern classifier. The benefit of the invention is that it can be integrated effectively with practical face recognition systems, and can quickly and effectively distinguish real faces from forged faces with essentially no user interaction.

Description

A face anti-spoofing method based on local motion patterns
Technical field
The present invention relates to computer vision and pattern recognition, in particular to research on face anti-spoofing in the field of biometric recognition, and more particularly to a face anti-spoofing method based on local motion patterns.
Background technique
Currently, biometric recognition technology is widely applied in many aspects of daily life. Face biometric recognition, having the advantages of being easy to use, user-friendly and contactless, has developed rapidly in recent years, with progress across every related research field, including face detection, facial feature extraction, classifier design and hardware manufacturing. However, face-based biometric recognition still faces challenges in application, the most prominent being the security of the recognition system. As devices for identity verification, such systems can readily be fooled into accepting an impostor as a legitimate user: most current face recognition systems cannot distinguish a real face from a photograph, so an attacker who has obtained a photo of a legitimate user can easily deceive them, and today's well-developed social networks make this attack pattern exceptionally easy to mount. In addition, recorded videos and forged masks may also be used to attack face recognition systems.
Face anti-spoofing, also known as face liveness detection, has gradually attracted attention from both academia and industry. Its main purpose is to distinguish real faces from the forged face images described above and to detect forged-face attacks on face recognition systems, thereby improving their security. According to the cues used, face anti-spoofing methods can be divided into three classes:
1. Face anti-spoofing based on skin reflectance: starting from the reflectance properties of facial skin, some researchers perform face anti-spoofing using multispectral acquisition, exploiting the fact that real skin and the "skin" of a forged face differ in reflectance under different spectra; the research focus of such methods is to find the spectrum under which the difference between real and fake facial skin is largest. However, such methods have the following clear disadvantages: 1) they have been tested only on very small amounts of data, so their performance cannot be fully assessed; 2) the chosen spectral bands cannot be sensed by ordinary cameras, so special sensing devices must be deployed, increasing hardware cost; 3) the additional sensing devices require dedicated signal-conversion circuits, creating compatibility problems with existing systems.
2. Face anti-spoofing based on texture differences: micro-texture-based methods rest on an assumption: a forged face captured by a device exhibits loss of detail, or differences in detail, compared with a real face captured by the same device, and these differences in detail cause differences in image micro-texture. The assumption holds in most cases, since forged faces are produced from pictures of real faces. Taking a printed photo as an example, the attacker first prints the photo on paper and then places it in front of the face recognition system; at least two links in this process introduce differences: first, the printing itself, since a printer cannot reproduce the photo content without distortion; second, the re-imaging of the printed photo, since the capture device cannot perfectly record its content. In addition, differences between a real face and a printed face in surface shape, in local specular highlights and so on all contribute to their differences in micro-texture.
3. Motion-based face anti-spoofing: these methods aim to detect physiological responses of the face in order to decide whether the captured subject is a real person. Considering that a real face moves more independently than a forged one, such methods require the user to perform specified actions as the basis for the decision; common interactions include blinking, head shaking and mouth movements. Besides detection methods based on local motion, there is also a class of methods that judges from the motion of the whole head; these methods work because the three-dimensional structures of a photo and a real face differ markedly, so the resulting head motion patterns also differ to some extent. To further improve anti-spoofing performance, a multi-modal face anti-spoofing method has been proposed: the user is asked to read specified text, and the system then judges the authenticity of the face by analyzing whether the user's lip motion matches the corresponding speech content. However, because such human-interaction-based methods require specific actions, they demand too much of the user and make for a poor experience; the long verification time is another major drawback.
Among the three classes above, motion-based face anti-spoofing has the advantage of being unaffected by illumination conditions and image quality. However, these methods do not accurately localize the individual regions of the face when extracting motion features, and therefore cannot accurately describe the actual motion state of the captured face. For example, some methods simply divide the captured image into a rectangular face region and a background region and judge the authenticity of the face by comparing the motion of the two; yet the face region delimited by the rectangle contains a large amount of background, so a real face may very well be misclassified as a forgery, and in this situation a forged face that is folded or twisted can also easily fool the anti-spoofing system. Therefore, how to precisely localize the face and non-face regions, and how to find the most discriminative local areas from which to extract strongly discriminative local motion pattern information, is the key to whether face anti-spoofing systems can be applied in practice.
Summary of the invention
The object of the present invention is to provide a face anti-spoofing method based on local motion patterns, so as to overcome the above shortcomings of the prior art.
The object of the present invention is achieved through the following technical solution:
A face anti-spoofing method based on local motion patterns, comprising:
analyzing a pre-captured video image to determine the face region, and analyzing the face region to determine each face key point within the face region;
obtaining, from the video frames corresponding to the video image, the motion direction and magnitude of the pixels in the video image;
analyzing the face key points according to the obtained pixel motion directions and magnitudes, determining the motion direction and magnitude within the local region around each face key point, and determining from this information the relationships between the motion directions and between the magnitudes of the local regions, so as to obtain the local motion pattern of the face;
classifying the obtained local motion pattern of the face with a pre-trained pattern classifier, and verifying, according to the classification result, whether the face in the video image is genuine.
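The four steps above can be sketched as a processing pipeline. The sketch below is purely illustrative, not the patented implementation: the detector, key point locator, optical flow routine, pattern extractor and classifier are all stand-in parameters supplied by the caller.

```python
# Illustrative outline of the four-step method (all components are stand-ins).
def verify_face(frame_prev, frame_curr, detect_face, locate_keypoints,
                dense_flow, extract_pattern, classifier):
    """Return True if the face in frame_curr is judged genuine,
    False if forged, or None when no face is detected."""
    face_box = detect_face(frame_curr)                 # step 1a: face region
    if face_box is None:
        return None                                    # no face: grab next frame
    keypoints = locate_keypoints(frame_curr, face_box) # step 1b: key points
    flow = dense_flow(frame_prev, frame_curr)          # step 2: per-pixel motion
    pattern = extract_pattern(flow, keypoints)         # step 3: local motion pattern
    return classifier(pattern)                         # step 4: genuine vs. forged
```

The point of the skeleton is the data dependency: the classifier never sees raw pixels, only the local motion pattern computed around the detected key points.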
Further, the face region is obtained by a face detector or specified manually.
Further, analyzing the face region to determine each face key point within the face region comprises:
determining the position of each face key point in the face region according to the position of the face region and the predefined initial positions of the face key points;
extracting, according to the position of each face key point in the face region, the image features of the video image corresponding to the positions of the face key points;
updating, according to the image features and through a pre-trained algorithm model, the positions of the face key points corresponding to the face region in the video image;
terminating the above process once a preset condition is met.
Further, analyzing the face key points according to the obtained pixel motion directions and magnitudes, and determining the motion direction and magnitude within the local region around each face key point, comprises:
accurately segmenting the head region in the video image according to the positions of the accurate face key points, and determining the image mask of the head region in the video image;
extracting, according to the image mask and the obtained pixel motion directions and magnitudes, the motion direction and magnitude of the head region and of the non-head region within the local region around each accurate face key point.
Further, accurately segmenting the head region in the video image according to the positions of the face key points, and determining the corresponding image mask, comprises:
determining, according to the positions of the accurate face key points, the face envelope corresponding to those positions, and taking the region enclosed by the face envelope as the face region of the video image;
mirroring the face envelope about the line connecting its two endpoints, and combining the face envelope with its mirror image to obtain a closed curve, the region enclosed by which is taken as the head region of the video image;
determining the image masks of the face region and of the head region of the video image according to their respective positions.
Provided that precise contours of the face and head can be obtained, the number of key points and their positions can be chosen arbitrarily.
Further, extracting, according to the image mask and the obtained pixel motion directions and magnitudes, the motion direction and magnitude of the head region and of the non-head region within the local region around each accurate face key point comprises:
determining the local region corresponding to each accurate face key point according to a preconfigured local-region size parameter;
marking, according to the image mask, the pixels of the local region falling inside the head region as the foreground area, and the pixels of the local region falling outside the head region as the background area;
computing, according to the obtained pixel motion directions and magnitudes, the motion direction and magnitude statistics of the foreground and background areas of the local region.
Further, calculating the relationships between the motion directions and magnitudes of different regions according to the motion directions and magnitudes of the local regions around the face key points, so as to obtain the local motion pattern of the face, comprises:
calculating, according to the motion directions and magnitudes of the foreground and background of the local region around each face key point, the relationships of motion direction and magnitude between local foreground regions, between local background regions, and between local foreground and background regions;
determining the local motion pattern of the face according to these relationships.
Further, calculating the relationships of motion direction and magnitude between local foreground regions, between local background regions, and between local foreground and background regions comprises:
quantizing the motion directions into several bins, based on the motion directions and magnitudes of the foreground and background areas of the local regions, and accumulating the motion magnitudes of the pixels in each local region to obtain motion-information histograms;
determining, from the motion-information histograms, the correlation coefficient between the histograms of any two local regions and the ratio between their motion magnitudes.
Further, determining the local motion pattern of the face according to the relationships of motion direction and magnitude between local foreground regions, between local background regions, and between local foreground and background regions comprises:
combining the correlation coefficients and the motion-magnitude ratios between all local regions to obtain the local motion pattern of the face.
The benefits of the invention: the face anti-spoofing method provided by the invention performs accurate face and head region localization and extracts highly discriminative local motion patterns of the face, so it can quickly and effectively distinguish genuine face images from forged ones. It remedies the inability of existing methods to accurately extract face and head motion information, while using local motion pattern information that characterizes the motion state of the face more effectively. The method is essentially unaffected by the capture environment and the quality of the capture device, and likewise essentially unaffected by how realistic a forged photo is or how much a forged face is deformed; it can effectively distinguish a real face in front of the camera from a forged one.
Detailed description of the invention
To explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a face anti-spoofing method based on local motion patterns provided by an embodiment of the present invention;
Fig. 2 is a flowchart of face key point localization with a cascaded boosted regression model, as used in the face anti-spoofing method based on local motion patterns provided by an embodiment of the present invention;
Fig. 3 is a flowchart of the face local motion pattern extraction procedure of the face anti-spoofing method based on local motion patterns provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention fall within the scope of protection of the present invention.
The face anti-spoofing method based on local motion patterns described in this embodiment of the present invention, as shown in the flowchart of Fig. 1, comprises the following steps:
Step 1: analyze the pre-captured video image to determine the face region, then analyze the face region to determine each face key point within it; the face region is obtained by a face detector or specified manually.
Face key points (face landmarks) mainly comprise positions with a definite semantic meaning on the cheeks, eyes, eyebrows, nose and mouth. After a face detection algorithm has obtained the position of the face region in the image, key point localization can be performed with different types of methods. Current face key point localization methods fall into several classes, the most common including the Active Shape Model (ASM), the Active Appearance Model (AAM), the Constrained Local Model (CLM) and the Cascaded Boosted Shape Regression Model. As shown in Fig. 2, this application chooses the cheek key points as the face key points, and illustrates the basic flow of face key point localization with the cascaded boosted regression model:
Step 1-1: initialize the positions of the face key points based on the position of the face region; analyze the video image to determine the position of the face region, and from that position place each face key point on the face region according to the predefined key point positions. Typically, the shape of a frontal face is used for initialization.
Step 1-2: analyze the position of each face key point on the face region, and extract the image features of the video image corresponding to the positions of the face key points.
Step 1-3: determine, from the image features and through a pre-trained image regression model, the positions of the accurate face key points corresponding to the face region in the video image.
Step 1-4: return to step 1-2 for the next round of regression, until a termination condition is met.
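The loop of steps 1-1 to 1-4 can be illustrated schematically. This is a minimal sketch of cascaded shape regression under the assumption that the per-stage feature extractors and regressors have been learned offline; the stand-in functions here are not the trained models of the embodiment.

```python
import numpy as np

# Schematic cascaded shape regression: each stage extracts features at the
# current key point estimates (shape-indexed features) and applies a learned
# regressor to refine the shape additively.
def cascaded_regression(image, init_shape, stages):
    """init_shape: (K, 2) initial key point positions, e.g. a mean frontal
    face shape placed inside the detected face box (step 1-1);
    stages: list of (extract_features, regress) pairs learned offline."""
    shape = init_shape.astype(float)
    for extract_features, regress in stages:    # step 1-4: iterate the cascade
        feats = extract_features(image, shape)  # step 1-2: features at key points
        shape = shape + regress(feats)          # step 1-3: additive refinement
    return shape
```

A fixed number of stages is a common termination condition; the embodiment only requires that some preset condition ends the loop.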
In step 1, face detection must first be performed on the currently captured image; if no face image is detected, the next frame is captured; if several face images are detected, the face with the largest detection-box area is chosen for anti-spoofing analysis.
Based on the above face key point localization method, the positions {p_k}, k = 1, …, K, of K face key points can be obtained, where the position of the k-th key point is denoted p_k = (x_k, y_k). Specifically, this embodiment successively chooses 17 key points along the cheeks.
Step 2: from the video frames corresponding to the video image acquired in step 1, extract the motion direction and magnitude of the pixels in the current image.
The motion information of an image refers to the change in position of its pixels relative to the previous frame or frames captured by the camera, expressed as a motion direction and a motion magnitude. At present, the motion information of image pixels is mainly obtained through optical flow, a concept first proposed by Gibson in 1950; it can describe a variety of motion patterns caused by the motion of foreground objects in the scene, the motion of the camera, or both together. Many optical flow algorithms exist, such as the Lucas-Kanade algorithm, the Horn-Schunck algorithm, and the polynomial-expansion method proposed by Gunnar Farneback; the first extracts sparse optical flow, while the latter two compute dense optical flow.
This application uses the Gunnar Farneback algorithm. Given the current frame and the previous frame, the algorithm computes the motion pattern of every pixel in the current frame; for the i-th pixel, the motion pattern is expressed as (u_i, v_i), where u_i denotes the motion magnitude along the x axis of the image coordinate system and v_i denotes the motion magnitude along the y axis.
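Before histogramming, each pixel's flow components (u_i, v_i) are converted to a direction and a magnitude. A minimal NumPy sketch of that conversion follows; the dense Farneback flow itself is assumed to come from an existing implementation and is not shown.

```python
import numpy as np

def flow_to_polar(u, v):
    """Convert per-pixel flow components (u, v) to a direction in
    [0, 360) degrees and a magnitude, the representation quantized
    later in step 3-2-4."""
    magnitude = np.hypot(u, v)                      # Euclidean flow magnitude
    direction = np.degrees(np.arctan2(v, u)) % 360.0  # angle wrapped to [0, 360)
    return direction, magnitude
```

Whether the y axis points up or down only relabels the bins consistently, so it does not affect the pairwise histogram comparisons used later.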
Step 3: analyze the face key points using the pixel motion directions and magnitudes computed in step 2 through optical flow; determine the motion direction and magnitude of each face key point within its local region, and from the motion directions and magnitudes at the face key points determine the relationships between them, so as to obtain the local motion pattern of the face. The motion information of the local regions around the 17 face key points of step 1 is extracted; as shown in Fig. 3, to achieve a more accurate anti-spoofing function, the motion pattern extraction proceeds as follows:
Step 3-1: accurately segment the face region and the head region in the video image according to the positions of the accurate face key points, and determine the image masks of the face region and the head region in the video image. The specific steps for obtaining the image masks are as follows:
Step 3-1-1: determine, according to the positions of the accurate face key points, the face envelope corresponding to those positions, and take the region enclosed by the face envelope as the face region of the video image.
Step 3-1-2: mirror the face envelope obtained in step 3-1-1 about the line connecting its two endpoints, and combine the face envelope with its mirror image to obtain a closed curve; take the region enclosed by this curve as the head region of the video image.
Step 3-1-3: determine the image masks of the face region and of the head region of the video image according to their respective positions.
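The mirroring of step 3-1-2 is a reflection of the cheek envelope about the line through its two endpoints. A minimal sketch, under the assumption that the envelope is given as point coordinates; the function name and the test coordinates are illustrative, not the embodiment's code.

```python
import numpy as np

# Reflect the cheek-envelope points about the line through its two
# endpoints; the envelope plus its mirror image then closes into the
# head contour from which the head mask is rasterized.
def reflect_points(points, p0, p1):
    """Reflect (N, 2) points about the infinite line through p0 and p1."""
    p0 = np.asarray(p0, float)
    d = np.asarray(p1, float) - p0
    d = d / np.linalg.norm(d)          # unit direction of the mirror axis
    rel = np.asarray(points, float) - p0
    proj = rel @ d                     # scalar projection onto the axis
    foot = np.outer(proj, d)           # foot of the perpendicular
    return p0 + 2.0 * foot - rel       # mirror image across the axis
```

Concatenating the envelope with its reflected copy (reversed) gives the closed head curve of step 3-1-2.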
Step 3-2: based on the image masks obtained in step 3-1 and the obtained pixel motion directions and magnitudes, extract the motion direction and magnitude of the foreground and background areas of the local region around each accurate face key point. Taking key point k as an example, the specific steps are as follows:
Step 3-2-1: determine the local region corresponding to each accurate face key point according to the preconfigured local-region size parameter. Let the face region have width W and height H; a local rectangle R_k of width 0.2 × W and height 0.2 × H is selected, centered on face key point k.
Step 3-2-2: compute the optical flow direction and magnitude of all pixels in the rectangular region, expressed as (u_i, v_i).
Step 3-2-3: within the local region covered by the rectangle R_k, determine which pixels fall inside and outside the face-and-head region, and define them as the foreground and background areas, with pixel sets denoted F_k and B_k respectively.
Step 3-2-4: compute the motion statistics of F_k and B_k separately. First, uniformly quantize the optical flow direction (0° to 360°) into 18 bins; then accumulate the sum of the optical flow magnitudes of the pixels falling into each bin.
This yields two 18-dimensional histograms, denoted H_k^F and H_k^B.
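The histogram of step 3-2-4 can be sketched directly: flow directions are quantized into 18 bins of 20° each, and every pixel's flow magnitude is accumulated into its bin. A minimal NumPy illustration; the direction and magnitude inputs would come from the optical flow of step 2, restricted to the pixels of F_k or B_k.

```python
import numpy as np

# Magnitude-weighted orientation histogram of step 3-2-4: quantize the flow
# directions (in degrees) into n_bins uniform bins and accumulate the flow
# magnitude of the pixels falling into each bin.
def flow_histogram(directions, magnitudes, n_bins=18):
    bins = (np.asarray(directions) // (360.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, np.asarray(magnitudes, float))  # sum magnitudes per bin
    return hist
```

Called once on the foreground pixels and once on the background pixels of each local rectangle, this produces the histograms H_k^F and H_k^B.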
With the above method, the motion information — motion direction and magnitude — at the 17 face key points is obtained; this motion information is then used to extract the local motion pattern.
Step 3-3: based on the motion directions and magnitudes of the foreground and background at the key points extracted in step 3-2, calculate the relationships of motion direction and magnitude between local foreground regions, between local background regions, and between local foreground and background regions, so as to obtain the local motion pattern of the current face. The specific steps for obtaining the local motion pattern of the face are as follows:
Step 3-3-1: compute the correlation coefficient between any two of the histograms extracted from the local foreground or background areas around the key points.
Step 3-3-2: compute the ratio between the motion magnitudes of any two of the local foreground or background areas around the key points.
Step 3-3-3: combine all the correlation coefficients computed in step 3-3-1 with all the motion-magnitude ratios computed in step 3-3-2 to form the local motion pattern of the current face.
Step 3-2 has produced 34 histograms of 18 dimensions each (a foreground histogram and a background histogram at each of the 17 key points) representing the local motion information at the key points. The present invention then quantitatively expresses the local motion pattern of the face by computing the pairwise correlation coefficients and magnitude ratios of the 34 histograms. In step 3-3-1, given any two histograms represented as vectors a = (a_1, …, a_18) and b = (b_1, …, b_18), the correlation coefficient is computed as:
r(a, b) = Σ_i (a_i − ā)(b_i − b̄) / √( Σ_i (a_i − ā)² · Σ_i (b_i − b̄)² )   (1)
where ā and b̄ are the means of a and b respectively. From this formula, 34 × 33 / 2 = 561 correlation coefficients are obtained. Likewise, in step 3-3-2, the 561 ratios of the histogram magnitudes are computed, where the magnitude of a histogram is the average optical flow magnitude of its pixels, i.e. the sum of the optical flow magnitudes of all pixels in the region divided by the number of pixels in the region. The correlation coefficients and magnitude ratios together form a 1122-dimensional feature representing the local motion pattern of the face.
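The assembly of the 1122-dimensional feature follows directly from formula (1): for each of the 561 unordered pairs among the 34 histograms, one correlation coefficient and one magnitude ratio are emitted. A minimal NumPy illustration; the small `eps` guard against zero denominators is an assumption of this sketch, not specified by the patent.

```python
import numpy as np

# Assemble the 1122-dimensional local motion pattern from the 34 per-region
# histograms (17 key points x {foreground, background}): 561 pairwise
# correlation coefficients (formula (1)) plus 561 magnitude ratios.
def local_motion_pattern(histograms, pixel_counts, eps=1e-8):
    """histograms: (34, 18) array; pixel_counts: (34,) pixel count per region."""
    H = np.asarray(histograms, float)
    # magnitude of a histogram = average flow magnitude (bin sum / pixel count)
    amp = H.sum(axis=1) / np.asarray(pixel_counts, float)
    feats = []
    for i in range(len(H)):
        for j in range(i + 1, len(H)):
            a, b = H[i] - H[i].mean(), H[j] - H[j].mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
            feats.append((a * b).sum() / denom)   # correlation coefficient r(a, b)
            feats.append(amp[i] / (amp[j] + eps)) # magnitude ratio
    return np.array(feats)
```

The feature ordering within the vector (correlations interleaved with ratios here) is a choice of this sketch; any fixed ordering works as long as training and testing agree.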
Step 4: after the local motion pattern of the face has been obtained in step 3, classify it with the pre-trained pattern classifier and, according to the classification result, verify whether the face in the video image is genuine.
A pattern classifier determines whether the currently captured face image is genuine: once the local motion pattern, i.e. the 1122-dimensional feature vector, has been extracted from the current face image, a pre-trained Support Vector Machine (SVM) classification model can be used to judge whether the current input image is genuine.
In step 4, the support vector classification model used must be trained in advance. To this end, video sequences of 20 real faces and 20 forged faces are captured with a camera, each sequence lasting 30 s. When capturing the real-face sequences, the subject's head and face are required to move slightly, for example by shaking the head, nodding, smiling or speaking. The forged-face sequences fall into two classes: sequences captured from printed photos, and sequences captured from the display of a tablet computer. During capture, the forged face may remain static or may undergo any form of motion or twisting deformation.
After the above video sequences have been obtained, the local motion pattern features of the face regions are extracted from them through steps 1, 2 and 3, and a two-class classifier is obtained by training a linear SVM.
The present invention is not limited to the above preferred embodiment; anyone may arrive at products of various other forms under its inspiration. However, any variation in shape or structure of such a product that shares a technical scheme identical or similar to that of the present application falls within the scope of protection of the present invention.

Claims (7)

1. A face anti-counterfeiting method based on local motion patterns, characterized by comprising:
analyzing a pre-captured video image to determine a face region, and analyzing the face region to determine each face key point in the face region, wherein:
analyzing the face region and determining each face key point in the face region comprises:
determining the position of each face key point in the face region according to the position of the face region and predefined initial position information of the face key points;
extracting, according to the position of each face key point in the face region, video image features corresponding to the positions of the face key points on the video image;
updating, according to the video image features and through a preconfigured algorithm model, the positions of the face key points corresponding to the face region on the video image, thereby determining accurate positions of the face key points corresponding to the face region on the video image;
terminating the above process once a preset condition is satisfied;
obtaining the motion direction and amplitude information of the pixels in the video image according to the video frames corresponding to the video image;
analyzing the face key points according to the obtained motion direction and amplitude information of the pixels, determining the motion direction and amplitude information in the local region where each face key point is located, and determining from this information the relationships between the motion directions and between the amplitudes of the local regions, thereby obtaining the local motion pattern of the face, wherein:
analyzing the face key points according to the obtained motion direction and amplitude information of the pixels and determining the motion direction and amplitude information in the local region where each face key point is located comprises: accurately segmenting the head region in the video image according to the accurate positions of the face key points, and determining the image mask corresponding to the head region in the video image; and extracting, according to the image mask and the obtained motion direction and amplitude information of the pixels, the motion direction and amplitude information of the head region and of the non-head region within the local region where each accurate face key point is located;
classifying the obtained local motion pattern of the face with a preconfigured pattern classifier, and verifying the authenticity of the face in the video image according to the classification result.
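The per-pixel "motion direction and amplitude information" of claim 1 is, in an optical-flow setting, simply the polar form of the flow field. A minimal illustrative sketch, assuming a dense flow field (dx, dy) between consecutive video frames has already been computed by some optical-flow method (not shown here, and not prescribed by the claim):

```python
import numpy as np

def flow_to_polar(flow_x, flow_y):
    """Convert per-pixel flow components (dx, dy) into motion
    amplitude and motion direction in [0, 2*pi)."""
    amplitude = np.hypot(flow_x, flow_y)
    direction = np.mod(np.arctan2(flow_y, flow_x), 2 * np.pi)
    return amplitude, direction
```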
2. The face anti-counterfeiting method based on local motion patterns according to claim 1, wherein the face region is obtained by a face detector or is specified manually.
3. The face anti-counterfeiting method based on local motion patterns according to claim 1, wherein accurately segmenting the head region in the video image according to the positions of the face key points and determining the corresponding image mask comprises:
determining, according to the accurate positions of the face key points, a face envelope corresponding to those positions, and taking the region enclosed by the face envelope as the face region of the video image;
mirroring the face envelope across the line connecting its two ends in the video image, and combining the face envelope with its mirror image to obtain a closed curve, the region enclosed by the curve being taken as the head region of the video image;
determining the image masks corresponding to the face region and the head region of the video image according to the positions of the face region and the head region of the video image.
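The mirroring step of claim 3 amounts to reflecting each envelope point across the chord through the envelope's two endpoints. A minimal sketch, assuming the envelope is given as an (n, 2) array of points ordered from one end to the other (the function name is illustrative, not the patent's):

```python
import numpy as np

def mirror_across_chord(points):
    """Reflect a facial-envelope polyline across the straight line
    through its first and last points, and return the closed curve:
    the original points followed by the reflected interior points
    in reverse order."""
    p0, p1 = points[0], points[-1]
    d = p1 - p0
    d = d / np.linalg.norm(d)          # unit vector along the chord
    rel = points - p0
    # reflection of rel across the chord direction: 2*(rel.d)d - rel
    mirrored = 2 * np.outer(rel @ d, d) - rel + p0
    # append mirrored interior points (endpoints map to themselves)
    closed = np.vstack([points, mirrored[-2:0:-1]])
    return closed
```

Rasterizing the resulting closed curve into a binary head mask (e.g. by polygon filling) would complete the claimed step; that part is omitted here.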
4. The face anti-counterfeiting method based on local motion patterns according to claim 1, wherein extracting, according to the image mask and the obtained motion direction and amplitude information of the pixels, the motion direction and amplitude information of the head region and of the non-head region within the local region where each accurate face key point is located comprises:
determining the local region corresponding to each accurate face key point according to preconfigured local region size parameters;
marking, according to the image mask, the pixels of the local region that fall inside the head region as the foreground region, and the pixels of the local region that fall outside the head region as the background region;
counting, from the obtained motion direction and amplitude information of the pixels, the motion direction and amplitude information of the foreground region and of the background region within the local region.
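The foreground/background calibration of claim 4 reduces to boolean indexing of a square window with the head mask. An illustrative sketch with assumed array shapes (H x W direction and amplitude maps, a boolean head mask, a key point `center`, and a window half-size parameter):

```python
import numpy as np

def split_local_motion(amplitude, direction, head_mask, center, half):
    """Split the pixels of the square local region around one key
    point into foreground (inside the head mask) and background
    (outside), returning each region's directions and amplitudes."""
    r, c = center
    sl = (slice(max(r - half, 0), r + half + 1),
          slice(max(c - half, 0), c + half + 1))
    fg = head_mask[sl]
    return ((direction[sl][fg], amplitude[sl][fg]),     # foreground
            (direction[sl][~fg], amplitude[sl][~fg]))   # background
```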
5. The face anti-counterfeiting method based on local motion patterns according to claim 4, wherein calculating, from the motion direction and amplitude information of the local regions where the face key points are located, the relationships between the motion directions and amplitudes of the different regions so as to obtain the local motion pattern of the face comprises:
calculating, from the motion direction and amplitude information of the foreground and background of each face key point in the local region where it is located, the relationships of the motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions;
determining the local motion pattern of the face from the calculated relationships of the motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions.
6. The face anti-counterfeiting method based on local motion patterns according to claim 5, wherein calculating, from the motion direction and amplitude information of the foreground and background of each face key point in the local region where it is located, the relationships of the motion direction and amplitude information between local foreground regions, between local background regions, and between local foreground and background regions comprises:
quantizing the motion directions into several sectors based on the motion direction and amplitude information of the foreground and background regions in the local regions, and obtaining a motion information histogram that accumulates the motion amplitudes of the pixels in each local region;
determining, from the motion information histograms, the correlation coefficient between the motion information histograms of any two of the local regions and the ratio between their motion amplitudes.
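The quantization and comparison of claim 6 can be sketched as an amplitude-weighted direction histogram, followed by a correlation coefficient and an amplitude ratio between two regions. The sector count `n_bins` and the use of the Pearson correlation are assumptions for illustration; the claim does not fix either choice:

```python
import numpy as np

def motion_histogram(direction, amplitude, n_bins=8):
    """Quantize directions in [0, 2*pi) into n_bins sectors and
    accumulate each pixel's motion amplitude into its sector."""
    bins = (direction / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, amplitude)
    return hist

def compare_regions(h1, h2, eps=1e-8):
    """Correlation coefficient between two local motion histograms
    and the ratio of their total motion amplitudes."""
    corr = np.corrcoef(h1, h2)[0, 1]
    ratio = h1.sum() / (h2.sum() + eps)
    return corr, ratio
```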
7. The face anti-counterfeiting method based on local motion patterns according to claim 6, wherein determining the local motion pattern of the face from the calculated relationships of the motion direction and amplitude between local foreground regions, between local background regions, and between local foreground and background regions comprises:
combining the correlation coefficients and the motion amplitude ratios between all the local regions, thereby determining the local motion pattern of the face.
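The combination step of claim 7 can be sketched as concatenating, over all pairs of local regions, the histogram correlation and the amplitude ratio into a single feature vector (which would then feed the classifier of claim 1). The pairwise enumeration and the 1e-8 stabilizer are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

def local_motion_pattern(histograms):
    """Combine pairwise histogram correlations and amplitude ratios
    over all local regions into one feature vector."""
    feats = []
    for i, j in combinations(range(len(histograms)), 2):
        h1, h2 = histograms[i], histograms[j]
        feats.append(np.corrcoef(h1, h2)[0, 1])       # direction relationship
        feats.append(h1.sum() / (h2.sum() + 1e-8))    # amplitude relationship
    return np.asarray(feats)
```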
CN201410428040.6A 2014-08-27 2014-08-27 A kind of face method for anti-counterfeit based on local motion mode Expired - Fee Related CN105447432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410428040.6A CN105447432B (en) 2014-08-27 2014-08-27 A kind of face method for anti-counterfeit based on local motion mode


Publications (2)

Publication Number Publication Date
CN105447432A CN105447432A (en) 2016-03-30
CN105447432B true CN105447432B (en) 2019-09-13

Family

ID=55557594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410428040.6A Expired - Fee Related CN105447432B (en) 2014-08-27 2014-08-27 A kind of face method for anti-counterfeit based on local motion mode

Country Status (1)

Country Link
CN (1) CN105447432B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228137A (en) * 2016-07-26 2016-12-14 广州市维安科技股份有限公司 A kind of ATM abnormal human face detection based on key point location
CN108229325A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Method for detecting human face and system, electronic equipment, program and medium
CN107358155A (en) * 2017-06-02 2017-11-17 广州视源电子科技股份有限公司 Method and device for detecting ghost face action and method and system for recognizing living body
CN107688781A (en) * 2017-08-22 2018-02-13 北京小米移动软件有限公司 Face identification method and device
CN107643826A (en) * 2017-08-28 2018-01-30 天津大学 A kind of unmanned plane man-machine interaction method based on computer vision and deep learning
CN108537131B (en) * 2018-03-15 2022-04-15 中山大学 Face recognition living body detection method based on face characteristic points and optical flow field
CN108846321B (en) * 2018-05-25 2022-05-03 北京小米移动软件有限公司 Method and device for identifying human face prosthesis and electronic equipment
CN109583391B (en) * 2018-12-04 2021-07-16 北京字节跳动网络技术有限公司 Key point detection method, device, equipment and readable medium
CN109766785B (en) * 2018-12-21 2023-09-01 ***股份有限公司 Living body detection method and device for human face
CN110223322B (en) * 2019-05-31 2021-12-14 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN111460419B (en) * 2020-03-31 2020-11-27 深圳市微网力合信息技术有限公司 Internet of things artificial intelligence face verification method and Internet of things cloud server
CN111626101A (en) * 2020-04-13 2020-09-04 惠州市德赛西威汽车电子股份有限公司 Smoking monitoring method and system based on ADAS
CN112287909B (en) * 2020-12-24 2021-09-07 四川新网银行股份有限公司 Double-random in-vivo detection method for randomly generating detection points and interactive elements

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169827A (en) * 2007-12-03 2008-04-30 北京中星微电子有限公司 Method and device for tracking characteristic point of image
CN102750518A (en) * 2012-05-30 2012-10-24 深圳光启创新技术有限公司 Face verification system and method based on visible light communications

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5795979B2 (en) * 2012-03-15 2015-10-14 株式会社東芝 Person image processing apparatus and person image processing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169827A (en) * 2007-12-03 2008-04-30 北京中星微电子有限公司 Method and device for tracking characteristic point of image
CN102750518A (en) * 2012-05-30 2012-10-24 深圳光启创新技术有限公司 Face verification system and method based on visible light communications

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Dynamic facial expression recognition based on optical flow; Yu Mianshui et al.; Microelectronics & Computer; 2005-12-31; Vol. 22, No. 7; pp. 113-115, 119 *


Similar Documents

Publication Publication Date Title
CN105447432B (en) A kind of face method for anti-counterfeit based on local motion mode
Agarwal et al. Protecting World Leaders Against Deep Fakes.
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
Patel et al. Live face video vs. spoof face video: Use of moiré patterns to detect replay video attacks
Yang et al. Learn convolutional neural network for face anti-spoofing
Patel et al. Secure face unlock: Spoof detection on smartphones
Bharadwaj et al. Computationally efficient face spoofing detection with motion magnification
CN104933414B (en) A kind of living body faces detection method based on WLD-TOP
Shan Smile detection by boosting pixel differences
CN110458063B (en) Human face living body detection method for preventing video and photo cheating
CN108416291B (en) Face detection and recognition method, device and system
CN105243376A (en) Living body detection method and device
CN108021892A (en) A kind of human face in-vivo detection method based on extremely short video
Rehman et al. Enhancing deep discriminative feature maps via perturbation for face presentation attack detection
JP5879188B2 (en) Facial expression analysis apparatus and facial expression analysis program
Azzopardi et al. Fast gender recognition in videos using a novel descriptor based on the gradient magnitudes of facial landmarks
Huang et al. Deepfake mnist+: a deepfake facial animation dataset
CN104008364A (en) Face recognition method
CN108280421A (en) Human bodys' response method based on multiple features Depth Motion figure
Xu et al. Action recognition by saliency-based dense sampling
CN113468954B (en) Face counterfeiting detection method based on local area features under multiple channels
Putro et al. Adult image classifiers based on face detection using Viola-Jones method
Liu Face liveness detection using analysis of Fourier spectra based on hair
Gürel Development of a face recognition system
JP2009098901A (en) Method, device and program for detecting facial expression

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160531

Address after: No. 127 Pearl Street, Fushan Road, Lengshuitan District, Yongzhou City, Hunan Province, 425000

Applicant after: Yang Jianwei

Address before: Floor B1, Block A, Wan Lin Building, No. 88 Nongda South Road, Haidian District, Beijing, 100084

Applicant before: QIANSOU INC.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190913

Termination date: 20210827