CN109117755A - Face liveness detection method, system, and device - Google Patents

Face liveness detection method, system, and device

Info

Publication number
CN109117755A
CN109117755A (application CN201810828283.7A); granted as CN109117755B
Authority
CN
China
Prior art keywords
face
depth map
living body
image
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810828283.7A
Other languages
Chinese (zh)
Other versions
CN109117755B (en)
Inventor
马立磊
董远
白洪亮
熊风烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Feisou Technology Co ltd
Original Assignee
Beijing Feisou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Feisou Technology Co., Ltd.
Priority to CN201810828283.7A
Publication of CN109117755A
Application granted
Publication of CN109117755B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive


Abstract

Embodiments of the present invention provide a face liveness detection method, system, and device. A face depth map of the face to be identified is obtained; whether the face to be identified is live is judged based on the face depth map, and if the face depth map satisfies a preset condition, the face to be identified is judged to be live. The method exploits the difference between the depth predicted for a re-shot photo or video image and the depth predicted for an image of a real person, combined with the prediction of a deep-learning model, to resist photo and video attacks on a face recognition system. Building on the ability of convolutional neural networks to extract deep image features, a convolutional neural network learns an end-to-end mapping from a face image to a face depth image, and the learning of the network is supervised with the depth maps computed for real faces and the all-zero depth maps assigned by default to attack faces. Detection is fast and widely applicable, suitable for both mobile phones and computers, and the deep-learning framework directly expresses the essential difference between real faces and attack faces.

Description

Face liveness detection method, system, and device
Technical field
The present invention relates to the field of biometric identification technology, and more particularly to a face liveness detection method, system, and device.
Background technique
The human body has many unique features, such as the face, fingerprints, the iris, and the ears; these are collectively referred to as biological features. Biometric identification technology is widely used in every field of life. Among these technologies, face recognition is the most widely applied because its features are convenient and hygienic to capture; for example, face recognition is applied in the security and access control fields. As the fields of application of face recognition expand, more and more methods of attacking face recognition have appeared.
Common attack methods include using media such as face photos, videos, and 3D mask models to simulate a face in front of a face recognition device. As can be seen, most attacks on face recognition in the prior art use non-live media. Therefore, performing liveness detection on the face to be identified, so as to resist attacks on recognition, is an urgent problem to be solved.
In the prior art, methods for face liveness detection fall broadly into two classes: static detection methods and dynamic detection methods. Dynamic detection methods mostly use an instruction-based interactive mode, such as blinking, shaking the head, or opening the mouth, to judge whether the participant is a real person by detecting the action; they suffer from slow speed and poor participant cooperation, and their recognition accuracy is low when a video is used as the attack medium. Static liveness detection, on the other hand, has low accuracy and is easily broken, making it difficult to apply in the market.
Summary of the invention
The present invention provides a face liveness detection method, system, and device that overcome the above problems or at least partially solve them.
According to the first aspect of the present invention, a face liveness detection method is provided, comprising:
obtaining the face depth map of a face to be identified;
judging whether the face to be identified is live based on the face depth map; if the face depth map satisfies a preset condition, the face to be identified is judged to be live.
According to the second aspect of the present invention, a face liveness detection system is provided, comprising a face image acquisition module, a face depth map prediction module, and a judgment module;
the face image acquisition module is used to obtain the face image of the face to be identified;
the face depth map prediction module is used to obtain the face depth map according to a depth image prediction model;
the judgment module is used to judge whether the face to be identified is live based on the face depth map; if the face depth map satisfies the preset condition, the face to be identified is judged to be live.
According to the third aspect of the present invention, a face liveness detection device is provided, comprising:
at least one processor; and
at least one memory communicatively connected with the processor, wherein:
the memory stores program instructions executable by the processor, and the processor calls the program instructions to perform the face liveness detection method described above.
According to the fourth aspect of the present invention, a non-transitory computer-readable storage medium is provided; the non-transitory computer-readable storage medium stores computer instructions that cause a computer to perform the face liveness detection method described above.
The present invention proposes a face liveness detection method, system, and device that exploit the difference between the depth predicted for a re-shot photo or video image and the depth predicted for an image of a real person, combined with the prediction of a deep-learning model, to resist photo and video attacks on a face recognition system. Building on the ability of convolutional neural networks to extract deep image features, a convolutional neural network learns an end-to-end mapping from a face image to a face depth image, supervised by the depth maps computed for real faces and the all-zero depth maps assigned by default to attack faces. Detection is fast and widely applicable, suitable for both mobile phones and computers, and the deep-learning framework directly expresses the essential difference between real faces and attack faces.
Detailed description of the invention
Fig. 1 is a schematic diagram of the face liveness detection method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of face liveness detection according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the face liveness detection device according to an embodiment of the present invention.
Specific embodiment
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following examples are intended to illustrate the present invention, not to limit its scope.
Liveness detection refers to methods for distinguishing photos and videos from real people in a face recognition system. Face liveness detection on the market today mostly uses an instruction-based interactive mode, such as blinking, shaking the head, or opening the mouth, to judge whether the participant is a real person by detecting the action; this suffers from slow speed and poor participant cooperation. Static liveness detection, meanwhile, has low accuracy and is easily broken, making it difficult to apply in the market.
Methods in the prior art based on interactive actions are slow to detect, hard for users to cooperate with, and poorly interactive; in addition, their discrimination of pre-recorded video attacks is low. Other existing schemes also have drawbacks: liveness detection based on 3D image modeling is computationally intensive and needs a 3D camera, so its hardware requirements are high; liveness detection using an infrared camera is costly and needs an infrared detector, so its hardware requirements are also high; and static liveness detection has low accuracy and is vulnerable.
To remedy the shortcomings of the prior art, this embodiment, as shown in Fig. 1, provides a face liveness detection method, characterized by comprising:
obtaining the face depth map of a face to be identified;
judging whether the face to be identified is live based on the face depth map; if the face depth map satisfies a preset condition, the face to be identified is judged to be live.
In this embodiment, because a real face is uneven, each position of the face lies at a different distance from the image acquisition device, so the depth of each point of the face image can serve as important evidence for the liveness judgment: a photo or video attack face is a plane, while a real face is a concave-convex curved surface.
In this embodiment, the difference between the depth predicted for a re-shot photo or video image and the depth predicted for an image of a real person, combined with the prediction of the deep-learning model, resists photo and video attacks on the face recognition system. Detection is fast and widely applicable; both mobile phones and computers are suitable; and the deep-learning framework directly expresses the essential difference between real faces and attack faces.
On the basis of the above embodiments, before obtaining the face depth map of the face to be identified, the method further includes:
taking live face images and attack face images as input face image samples, and inputting the live face depth maps computed from the real faces and the all-zero depth maps assigned by default to the attack face images into a convolutional neural network as supervision for training, to obtain a depth image prediction model.
In this embodiment, a real face has an uneven structure while a photo or video attack face is planar, so the face depth differs between the two forms; we cannot judge the authenticity of a photographed face by eye alone, but a convolutional neural network can extract deep image features. We therefore choose a convolutional neural network to learn an end-to-end mapping from a face image to a face depth image, obtaining a depth image prediction model. Through this model, the depth images of live faces, attack photo faces, and attack video faces can be obtained directly, and liveness can then be identified from the differences between the respective depth images. This solves the problem that existing static liveness identification methods and interactive operation methods cannot identify both photo attacks and video attacks. Moreover, this embodiment only needs one camera and processing hardware, so the hardware cost is low.
On the basis of the above embodiments, after taking the live face images and attack face images as input face image samples, the method further includes:
obtaining the live face depth map corresponding to each live face image: matching the live face image onto a 3D face model based on a dense face alignment method, and processing the 3D face model with the Z-buffer method to obtain the live face depth map.
This embodiment proposes a dense 3D face alignment algorithm for large-pose face images, which matches a face image onto an optimal 3D face model; these 3D face models contain thousands of feature points, thereby achieving dense face alignment.
A CNN model is trained to estimate the 3D face shape from the face image, and the corresponding 3D face model is fitted from that shape; this not only detects facial feature points but also matches the facial contour and SIFT feature points. Specifically, two additional constraints are added: first, the predicted 3D facial contour must match the facial contour detected in the 2D image; second, key SIFT feature points matched between different images of the same face must correspond to the same feature point of the 3D model.
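The fitting objective with its two added constraints can be sketched as a combined loss. A minimal NumPy sketch, assuming squared-error terms and hand-picked weights; the function name, array shapes, and weights are illustrative, not from the patent:

```python
import numpy as np

def alignment_loss(pred_pts, gt_landmarks, lm_idx, contour_pred, contour_gt,
                   sift_a, sift_b, w_contour=1.0, w_sift=1.0):
    """Landmark term plus the two extra constraints: (1) the projected 3D
    contour must match the 2D contour detected in the image; (2) SIFT points
    matched across two images of one face must hit the same 3D model point."""
    lm = np.mean(np.sum((pred_pts[lm_idx] - gt_landmarks) ** 2, axis=1))
    contour = np.mean(np.sum((contour_pred - contour_gt) ** 2, axis=1))
    sift = np.mean(np.sum((sift_a - sift_b) ** 2, axis=1))
    return float(lm + w_contour * contour + w_sift * sift)
```

Minimizing such a combined objective over the model parameters is one plausible reading of the two constraints; the patent itself does not give the loss form.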
Specifically, the 3D shape of the face is represented by a matrix S. Following the 3DMM representation, it can be expressed as formula (1), reconstructed here from the description:
S = S_mean + sum_{i=1..199} p_id(i) * S_id(i) + sum_{j=1..29} p_exp(j) * S_exp(j)   (1)
In the formula above, S is composed of three parts: the shape mean S_mean, the 199 shape basis functions S_id for individual (identity) differences, and the 29 shape basis functions S_exp for expression differences. Each basis function is a 53,215-dimensional vector, and p denotes the basis-function weights.
After the three-dimensional shape S is obtained, a projection matrix can be used to obtain the corresponding dense face shape A; reconstructed from the description:
A = f * R * S + t_3d   (2)
where the scale factor f, the rotation R, and the translation t_3d together make up the six degrees of freedom of the projection, which can simulate scale, rotation, and linear transformation. After A is obtained, it can be projected onto the 2D plane by the orthographic projection matrix Pr; performing keypoint detection on the face image then yields the N key points U of the face:
U = Pr * A   (3)
In the 2D plane, the Z-coordinate transform coefficient m_12 is set to 0; from the properties of the projection matrix, the projection can then be written with eight parameters m_1 to m_8 (reconstructed from the description):
U = [ m_1 m_2 m_3 m_4 ; m_5 m_6 m_7 m_8 ] * [S ; 1]   (4)
Thus the dense feature points of any 2D face image are determined by the 8-dimensional parameter m and the 228-dimensional coefficient p, and the dense 3D face alignment model is converted into the problem of computing the parameters m and p. A pretrained model needs only the face image to obtain m_1 to m_8 and the 199 + 29 p parameters.
This yields the 3D face model S. Next, the face depth map is obtained with the Z-buffer algorithm, and this depth map is then used as the real-face depth map to supervise the learning of the convolutional neural network.
In this embodiment, the main purpose of the Z-buffer algorithm is to remove the hidden surfaces of the 3D face model, that is, hidden surface elimination (equivalently, visible surface determination). In 3D rendering, whenever there are two or more triangular faces, one triangle may occlude another; this is a natural phenomenon, since a near object always occludes a far one.
The Z-buffer (depth buffer) is a technique that performs this hidden surface elimination when objects are shaded, so that the occluded parts behind an object are not shown. For each pixel in the 3D environment, a stored value defines the pixel's depth at display time (i.e., its Z-axis coordinate). The more bits the Z-buffer uses, the more precisely depth can be represented. Current 3D accelerator cards generally support a 16-bit Z-buffer, and some newer high-end cards support up to 32 bits. For a complex 3D model containing many objects, having more bits to express the sense of depth matters considerably.
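The hidden-surface removal described above can be sketched as a small software z-buffer. A minimal sketch, assuming the mesh comes as vertex and triangle-index arrays and that a larger z value means closer to the camera; this is an illustration, not the patent's implementation:

```python
import numpy as np

def _barycentric(tri2d, x, y):
    """Barycentric coordinates of (x, y) in a 2D triangle; None if outside."""
    (ax, ay), (bx, by), (cx, cy) = tri2d
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if abs(det) < 1e-12:
        return None
    l1 = ((by - cy) * (x - cx) + (cx - bx) * (y - cy)) / det
    l2 = ((cy - ay) * (x - cx) + (ax - cx) * (y - cy)) / det
    l3 = 1.0 - l1 - l2
    if min(l1, l2, l3) < 0:
        return None
    return np.array([l1, l2, l3])

def zbuffer_depth(vertices, faces, h, w):
    """Rasterize a mesh into an h x w depth map, keeping the nearest surface
    at each pixel (hidden surface elimination); background stays 0."""
    depth = np.zeros((h, w), dtype=np.float32)
    zbuf = np.full((h, w), -np.inf)            # nearest z seen so far
    for tri in faces:
        pts = np.asarray(vertices)[list(tri)]  # 3 vertices: (x, y, z)
        x0, x1 = int(pts[:, 0].min()), int(np.ceil(pts[:, 0].max()))
        y0, y1 = int(pts[:, 1].min()), int(np.ceil(pts[:, 1].max()))
        for y in range(max(y0, 0), min(y1 + 1, h)):
            for x in range(max(x0, 0), min(x1 + 1, w)):
                bary = _barycentric(pts[:, :2], x, y)
                if bary is None:
                    continue                   # pixel outside this triangle
                z = float(bary @ pts[:, 2])    # interpolated depth
                if z > zbuf[y, x]:             # closer surface wins
                    zbuf[y, x] = z
                    depth[y, x] = z
    return depth
```

A real renderer would vectorize this per-pixel loop, but the comparison `z > zbuf[y, x]` is exactly the hidden-surface test the text describes.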
On the basis of the above embodiments, being input into the convolutional neural network for training specifically includes:
taking the live face image as input with the corresponding live face depth map as output, and taking the attack face image as input with the all-zero depth map as output, and training the convolutional neural network.
In this embodiment, a convolutional neural network is chosen to learn an end-to-end mapping from a face image to a face depth image, and the learning of the network is supervised with the depth maps computed for real faces and the all-zero depth maps assigned by default to attack faces. Detection is fast and widely applicable; both mobile phones and computers are suitable; and the deep-learning framework directly expresses the essential difference between real faces and attack faces.
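The supervision scheme above (live face mapped to its rendered depth map, attack face mapped to an all-zero map) can be sketched as target construction plus a pixel-wise loss. A minimal NumPy sketch; the function names and the choice of an L2 pixel loss are assumptions, since the patent does not specify the loss:

```python
import numpy as np

def make_targets(labels, rendered_depths, shape=(32, 32)):
    """Per-sample supervision: a live face (label 1) is supervised by its
    Z-buffer depth map; an attack face (label 0) by an all-zero depth map."""
    targets = []
    for label, d in zip(labels, rendered_depths):
        targets.append(d if label == 1 else np.zeros(shape, dtype=np.float32))
    return np.stack(targets)

def depth_loss(pred, target):
    """Pixel-wise L2 loss used to supervise the depth-regression CNN."""
    return float(np.mean((pred - target) ** 2))
```

In an actual training loop, `depth_loss` would be backpropagated through the CNN; here it only illustrates how the two kinds of targets enter one objective.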
On the basis of the above embodiments, obtaining the face depth map of the face to be identified specifically includes:
obtaining the face image of the face to be identified, detecting the position of the face box in the face image, cropping the face image based on the position of the face box, and inputting the cropped face image into the depth image prediction model to obtain the face depth map of the face to be identified.
In this embodiment, a real face has an uneven structure while a photo or video attack face is planar, so the face depth differs between the two forms, and we cannot judge the authenticity of a photographed face by eye alone. The depth image prediction model trained with the convolutional neural network, however, can extract deep image features and thereby identify whether the face to be detected is live. The method of this embodiment is fast and widely applicable; both mobile phones and computers are suitable; and the deep-learning framework directly expresses the essential difference between real faces and attack faces.
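The crop-then-predict step can be sketched as a short pipeline. A minimal sketch in which `detect_face` and `depth_model` are hypothetical stand-ins for the trained face detector and the depth image prediction model:

```python
import numpy as np

def predict_face_depth(image, detect_face, depth_model):
    """Inference sketch: detect the face box, crop the image to it, and run
    the crop through the depth prediction model."""
    x, y, w, h = detect_face(image)          # face box position (x, y, w, h)
    crop = image[y:y + h, x:x + w]           # cut the image to the box
    return depth_model(crop)                 # predicted face depth map
```

Passing the detector and model in as callables keeps the sketch independent of any particular detection or CNN library.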
On the basis of the above embodiments, judging whether the face to be identified is live based on the face depth map specifically includes:
performing binarization on the face depth map and taking the 2-norm of the binarized face depth map; if the 2-norm is greater than a set threshold, the face to be detected is judged to be live.
Binarization of an image sets the gray level of each point of the image to 0 or 255, so that the whole image presents a clear black-and-white effect. With an appropriate threshold, a gray-scale image with 256 brightness levels can be reduced to a binary image that still reflects the overall and local features of the image. In digital image processing, binary images occupy a very important position; in practical image processing, many systems are built around binary image processing. To process and analyze a binary image, the gray-scale image is first binarized; this benefits further processing because the set properties of the image then depend only on the positions of points whose pixel value is 0 or 255, multi-level pixel values are no longer involved, processing becomes simple, and the amount of data to process is small.
In this embodiment, to obtain an ideal binary image, non-overlapping regions are defined by closed, connected boundaries. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the object and are represented by gray value 255; the remaining pixels are excluded from the object region and represented by gray value 0, indicating background or an exceptional object region. If a specific object has a uniform interior gray value and lies on a homogeneous background of another gray level, a good segmentation can be obtained by thresholding. If the object differs from the background not in gray value but in some other property (such as texture), that distinguishing property can first be converted into a gray-level difference, and the image can then be segmented by threshold selection. Dynamically adjusting the threshold for binarization allows the segmentation result to be observed dynamically.
A depth image provides the three-dimensional structure of the scene; for recognition this provides stronger discriminative information than color and texture, and depth images are unaffected by illumination changes. In this embodiment, when there are too many parameters, the model complexity rises and overfitting occurs easily, that is, the training error can be very small. But a small training error is not the final goal of this embodiment; the goal is a small test error, that is, the model should predict new samples accurately.
In this embodiment, the 2-norm prevents overfitting of the model prediction result and improves the generalization ability of the model, providing a reliable basis for the final judgment.
As shown in Fig. 2, on the basis of the above embodiments, the detection flow of this embodiment is: first obtain a photo; detect the face using face detection to obtain the position of the face box; next, crop the picture according to the detected face box; input the cropped picture into the depth map prediction model trained with the convolutional neural network to obtain the predicted face depth map; then binarize the depth map and take the 2-norm of the binarized depth map; a threshold set on this norm serves as the basis for liveness classification, and if the norm is greater than the threshold, the face is judged to be live and passes our liveness detection system.
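The final binarize-then-norm decision in this flow can be sketched directly. A minimal sketch, assuming depth maps scaled to gray levels 0 to 255 and a binarization cut at 127; both values are assumptions:

```python
import numpy as np

def is_live(depth_map, threshold):
    """Binarize the predicted depth map to {0, 255}, take its 2-norm, and
    judge the face live when the norm exceeds the set threshold."""
    binary = np.where(np.asarray(depth_map) >= 127, 255.0, 0.0)
    return float(np.linalg.norm(binary)) > threshold
```

An attack face supervised toward the all-zero map binarizes to (near) all zeros, so its norm stays small; a live face's uneven depth survives binarization and pushes the norm over the threshold.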
On the basis of the above embodiments, before judging whether the face to be identified is live based on the face depth map, the method further includes:
obtaining the live face depth maps corresponding to several live face images, taking the 2-norm of each face depth map, and setting the preset condition, i.e., the threshold, based on the range of the resulting 2-norms.
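Setting the threshold from the norms of known-live depth maps can be sketched as a small calibration step. A minimal sketch; the margin factor and the use of the smallest observed norm are assumed heuristics, not specified by the patent:

```python
import numpy as np

def calibrate_threshold(live_depth_maps, margin=0.9):
    """Compute the 2-norm of each binarized live-face depth map and place the
    threshold just below the smallest norm, so all calibration samples pass."""
    norms = [float(np.linalg.norm(np.where(np.asarray(d) >= 127, 255.0, 0.0)))
             for d in live_depth_maps]
    return margin * min(norms)
```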
This embodiment also provides a face liveness detection system using the face liveness detection method of the above embodiments, comprising a face image acquisition module, a face depth map prediction module, and a judgment module;
the face image acquisition module is used to obtain the face image of the face to be identified;
the face depth map prediction module is used to obtain the face depth map according to the depth image prediction model;
the judgment module is used to judge whether the face to be identified is live based on the face depth map; if the face depth map satisfies the preset condition, the face to be identified is judged to be live.
Fig. 3 is a structural block diagram showing the face liveness detection device of the embodiment of the present application.
Referring to Fig. 3, the face liveness detection device comprises: a processor 810, a memory 830, a communication interface 820, and a bus 840;
wherein,
the processor 810, the memory 830, and the communication interface 820 communicate with one another through the bus 840;
the communication interface 820 is used for information transmission between the test device and the communication device of the display device;
the processor 810 is used to call the program instructions in the memory 830 to perform the face liveness detection method provided by the above method embodiments, for example:
obtaining the face depth map of the face to be identified;
judging whether the face to be identified is live based on the face depth map; if the face depth map satisfies the preset condition, the face to be identified is judged to be live.
This embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium. The computer program includes program instructions which, when executed by a computer, enable the computer to perform the face liveness detection method described above, for example:
obtaining the face depth map of the face to be identified;
judging whether the face to be identified is live based on the face depth map; if the face depth map satisfies the preset condition, the face to be identified is judged to be live.
This embodiment also provides a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the face liveness detection method described above, for example:
obtaining the face depth map of the face to be identified;
judging whether the face to be identified is live based on the face depth map; if the face depth map satisfies the preset condition, the face to be identified is judged to be live.
In conclusion the embodiment of the present invention proposes a kind of human face in-vivo detection method, system and equipment, photo or view are utilized The difference of the face depth prediction of the image of the secondary shooting image and true man of frequency, in conjunction with the prediction knot of the model of deep learning Fruit resists the attack of photo and video to face identification system, can extract picture profound level based on convolutional neural networks This advantage of feature learns a mapping of the facial image to face depth image end to end, benefit by convolutional neural networks The study of network is supervised with the calculated depth map of real human face and the depth zero figure of attack face default, detection speed is fast, Applied widely, mobile phone terminal and computer end can be applicable in, the frame based on deep learning, intuitively express real human face and Attack the essential distinction of face.
The embodiments such as the test equipment of display device described above are only schematical, wherein described as separation The unit of part description may or may not be physically separated, component shown as a unit can be or It can not be physical unit, it can it is in one place, or may be distributed over multiple network units.It can be according to reality Border needs to select some or all of the modules therein to achieve the purpose of the solution of this embodiment.Those of ordinary skill in the art Without paying creative labor, it can understand and implement.
Through the above description of the embodiments, those skilled in the art can be understood that each embodiment can It realizes by means of software and necessary general hardware platform, naturally it is also possible to pass through hardware.Based on this understanding, on Stating technical solution, substantially the part that contributes to existing technology can be embodied in the form of software products in other words, should Computer software product may be stored in a computer readable storage medium, such as ROM/RAM, magnetic disk, CD, including several fingers It enables and using so that a computer equipment (can be personal computer, server or the network equipment etc.) executes each implementation Method described in certain parts of example or embodiment.
Finally, it should be noted that the above various embodiments is only to illustrate the technical solution of the embodiment of the present invention, rather than it is right It is limited;Although the embodiment of the present invention is described in detail referring to foregoing embodiments, the ordinary skill of this field Personnel are it is understood that it is still possible to modify the technical solutions described in the foregoing embodiments, or to part Or all technical features are equivalently replaced;And these are modified or replaceed, it does not separate the essence of the corresponding technical solution The range of each embodiment technical solution of the embodiment of the present invention.

Claims (10)

1. a kind of human face in-vivo detection method characterized by comprising
Obtain the face depth map of face to be identified;
Judge whether face to be identified is living body based on the face depth map, if the face depth map meets preset condition, Then judge face to be identified for living body.
2. The face liveness detection method according to claim 1, characterized in that before obtaining the face depth map of the face to be identified, the method further comprises:
taking living face images and attack face images as input face image samples, and inputting them into a convolutional neural network for training, with the face depth maps computed from real faces and the all-zero depth map assigned by default to attack face images serving as supervision, to obtain a depth image prediction model.
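The supervision scheme claim 2 describes can be sketched minimally as follows, assuming depth maps are numpy arrays; the function name is illustrative, not from the patent. Live samples are supervised with their computed depth map, attack samples with an all-zero map of the same shape.

```python
import numpy as np

def make_supervision_target(depth_map: np.ndarray, is_live: bool) -> np.ndarray:
    """Build the regression target used to supervise the depth-prediction CNN.

    Live faces are supervised with their computed face depth map; attack
    (spoof) faces default to an all-zero depth map, as the claim specifies.
    """
    if is_live:
        return depth_map
    return np.zeros_like(depth_map)
```

Pairing each input image with such a target turns liveness into a pixel-wise regression problem rather than a plain binary classification.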
3. The face liveness detection method according to claim 2, characterized in that after taking living face images and attack face images as input face image samples, the method further comprises:
obtaining the living face depth map corresponding to each living face image: fitting the living face image onto a 3D face model based on a dense face alignment method, and processing the 3D face model by a Z-Buffer method to obtain the living face depth map.
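The Z-Buffer step of claim 3 can be illustrated with a toy rasterization pass over vertices that have already been projected to image coordinates; the dense 3D face alignment itself is outside this sketch, and the convention that larger z means nearer to the camera is an assumption.

```python
import numpy as np

def z_buffer_depth(vertices, size):
    """Rasterize fitted 3D face vertices (x, y, z) into a depth map,
    keeping the largest z (assumed nearest surface point) seen at each
    pixel -- the classic Z-buffer visibility test claim 3 refers to."""
    depth = np.zeros((size, size), dtype=np.float32)
    for x, y, z in vertices:
        u, v = int(round(x)), int(round(y))
        if 0 <= u < size and 0 <= v < size:
            # Only the vertex closest to the camera survives per pixel.
            depth[v, u] = max(depth[v, u], z)
    return depth
```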
4. The face liveness detection method according to claim 3, characterized in that inputting into the convolutional neural network for training specifically comprises:
training the convolutional neural network with each living face image as input and its corresponding living face depth map as output, and with each attack face image as input and the all-zero depth map as output.
5. The face liveness detection method according to claim 2, characterized in that obtaining the face depth map of the face to be identified specifically comprises:
obtaining a face image of the face to be identified, detecting the position of the face bounding box in the face image, cropping the face image based on the position of the face bounding box, and inputting the cropped face image into the depth image prediction model to obtain the face depth map of the face to be identified.
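The cropping step in claim 5's pipeline can be sketched as plain array slicing; the face detector that produces the box and the depth prediction model that consumes the crop are placeholders, since the patent does not name particular implementations.

```python
import numpy as np

def crop_face(image: np.ndarray, box):
    """Crop the detected face region from an image.

    `box` is (left, top, right, bottom) in pixel coordinates, as a face
    detector might return it; the crop would then be resized and fed to
    the depth image prediction model of claim 2.
    """
    left, top, right, bottom = box
    return image[top:bottom, left:right]
```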
6. The face liveness detection method according to claim 1, characterized in that judging whether the face to be identified is a living body based on the face depth map specifically comprises:
binarizing the face depth map, and computing the two-norm of the binarized face depth map; if the two-norm is greater than a set threshold, judging that the face to be identified is a living body.
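Claim 6's decision rule is a one-liner once the depth map is predicted; in this sketch the two-norm of the binarized map is taken as the Frobenius norm of the 0/1 matrix, and the binarization cutoff is an assumed value, not one stated in the patent.

```python
import numpy as np

def is_live(depth_map: np.ndarray, norm_threshold: float,
            bin_threshold: float = 0.5) -> bool:
    """Decide liveness per claim 6: binarize the predicted depth map,
    take its two-norm, and compare against a calibrated threshold."""
    binary = (depth_map > bin_threshold).astype(np.float32)
    return float(np.linalg.norm(binary)) > norm_threshold
```

The rule works because an attack image is supervised toward zero depth, so its binarized map is nearly all zeros and the norm stays small, while a real face produces many nonzero depth pixels and a large norm.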
7. The face liveness detection method according to claim 6, characterized in that before judging whether the face to be identified is a living body based on the face depth map, the method further comprises:
obtaining the living face depth maps corresponding to several living face images, computing the two-norm of each face depth map, and setting the preset condition, i.e. the set threshold, based on the range of the computed two-norms.
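One plausible reading of claim 7's calibration, sketched under stated assumptions: place the threshold slightly below the smallest two-norm observed on known-live depth maps. The 0.9 margin and the binarization cutoff are illustrative choices, not values from the patent.

```python
import numpy as np

def calibrate_threshold(live_depth_maps, bin_threshold=0.5, margin=0.9):
    """Derive the decision threshold of claim 6 from the two-norms of
    depth maps known to come from living faces (claim 7)."""
    norms = [float(np.linalg.norm((d > bin_threshold).astype(np.float32)))
             for d in live_depth_maps]
    # Anything whose norm falls clearly below every live sample is
    # treated as an attack; the margin leaves headroom for noise.
    return margin * min(norms)
```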
8. A face liveness detection system, characterized by comprising a face image obtaining module, a face depth map prediction module, and a judgment module;
the face image obtaining module is configured to obtain a face image of a face to be identified;
the face depth map prediction module is configured to obtain a face depth map according to a depth image prediction model;
the judgment module is configured to judge whether the face to be identified is a living body based on the face depth map; if the face depth map satisfies a preset condition, the face to be identified is judged to be a living body.
9. A face liveness detection device, characterized by comprising:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor, by calling the program instructions, is able to execute the method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause a computer to execute the method according to any one of claims 1 to 7.
CN201810828283.7A 2018-07-25 2018-07-25 Face living body detection method, system and equipment Active CN109117755B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810828283.7A CN109117755B (en) 2018-07-25 2018-07-25 Face living body detection method, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810828283.7A CN109117755B (en) 2018-07-25 2018-07-25 Face living body detection method, system and equipment

Publications (2)

Publication Number Publication Date
CN109117755A true CN109117755A (en) 2019-01-01
CN109117755B CN109117755B (en) 2021-04-30

Family

ID=64863421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810828283.7A Active CN109117755B (en) 2018-07-25 2018-07-25 Face living body detection method, system and equipment

Country Status (1)

Country Link
CN (1) CN109117755B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222573A (en) * 2019-05-07 2019-09-10 平安科技(深圳)有限公司 Face identification method, device, computer equipment and storage medium
CN110688967A (en) * 2019-09-30 2020-01-14 上海依图信息技术有限公司 System and method for static human face living body detection
CN111507131A (en) * 2019-01-31 2020-08-07 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
WO2020164266A1 (en) * 2019-02-13 2020-08-20 平安科技(深圳)有限公司 Living body detection method and system, and terminal device
CN112784661A (en) * 2019-11-01 2021-05-11 宏碁股份有限公司 Real face recognition method and real face recognition device
CN113158838A (en) * 2021-03-29 2021-07-23 华南理工大学 Face representation attack detection method based on full-size depth map supervision
CN113312965A (en) * 2021-04-14 2021-08-27 重庆邮电大学 Method and system for detecting unknown face spoofing attack living body
CN113505717A (en) * 2021-07-17 2021-10-15 桂林理工大学 Online passing system based on face and facial feature recognition technology
CN113963425A (en) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Testing method and device of human face living body detection system and storage medium
CN114092994A (en) * 2021-10-13 2022-02-25 北京工业大学 Human face living body detection method based on multi-view feature learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105335722A (en) * 2015-10-30 2016-02-17 商汤集团有限公司 Detection system and detection method based on depth image information
CN105430394A (en) * 2015-11-23 2016-03-23 小米科技有限责任公司 Video data compression processing method, apparatus and equipment
US20160371537A1 (en) * 2015-03-26 2016-12-22 Beijing Kuangshi Technology Co., Ltd. Method, system, and computer program product for recognizing face
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN108108676A (en) * 2017-12-12 2018-06-01 北京小米移动软件有限公司 Face identification method, convolutional neural networks generation method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160371537A1 (en) * 2015-03-26 2016-12-22 Beijing Kuangshi Technology Co., Ltd. Method, system, and computer program product for recognizing face
CN105335722A (en) * 2015-10-30 2016-02-17 商汤集团有限公司 Detection system and detection method based on depth image information
CN105430394A (en) * 2015-11-23 2016-03-23 小米科技有限责任公司 Video data compression processing method, apparatus and equipment
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN108108676A (en) * 2017-12-12 2018-06-01 北京小米移动软件有限公司 Face identification method, convolutional neural networks generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAOJIE LIU et al.: "Learning Deep Models for Face Anti-Spoofing: Binary or Auxiliary Supervision", 《IEEE》 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507131A (en) * 2019-01-31 2020-08-07 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN111507131B (en) * 2019-01-31 2023-09-19 北京市商汤科技开发有限公司 Living body detection method and device, electronic equipment and storage medium
WO2020164266A1 (en) * 2019-02-13 2020-08-20 平安科技(深圳)有限公司 Living body detection method and system, and terminal device
CN110222573A (en) * 2019-05-07 2019-09-10 平安科技(深圳)有限公司 Face identification method, device, computer equipment and storage medium
CN110222573B (en) * 2019-05-07 2024-05-28 平安科技(深圳)有限公司 Face recognition method, device, computer equipment and storage medium
CN110688967A (en) * 2019-09-30 2020-01-14 上海依图信息技术有限公司 System and method for static human face living body detection
CN112784661A (en) * 2019-11-01 2021-05-11 宏碁股份有限公司 Real face recognition method and real face recognition device
CN112784661B (en) * 2019-11-01 2024-01-19 宏碁股份有限公司 Real face recognition method and real face recognition device
CN113158838B (en) * 2021-03-29 2023-06-20 华南理工大学 Full-size depth map supervision-based face representation attack detection method
CN113158838A (en) * 2021-03-29 2021-07-23 华南理工大学 Face representation attack detection method based on full-size depth map supervision
CN113312965A (en) * 2021-04-14 2021-08-27 重庆邮电大学 Method and system for detecting unknown face spoofing attack living body
CN113505717A (en) * 2021-07-17 2021-10-15 桂林理工大学 Online passing system based on face and facial feature recognition technology
CN114092994A (en) * 2021-10-13 2022-02-25 北京工业大学 Human face living body detection method based on multi-view feature learning
CN113963425A (en) * 2021-12-22 2022-01-21 北京的卢深视科技有限公司 Testing method and device of human face living body detection system and storage medium

Also Published As

Publication number Publication date
CN109117755B (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN109117755A (en) A kind of human face in-vivo detection method, system and equipment
CN109815893B (en) Color face image illumination domain normalization method based on cyclic generation countermeasure network
CN109583342B (en) Human face living body detection method based on transfer learning
CN111754596B (en) Editing model generation method, device, equipment and medium for editing face image
CN104599287B (en) Method for tracing object and device, object identifying method and device
CN109558832A (en) A kind of human body attitude detection method, device, equipment and storage medium
CN109102547A (en) Robot based on object identification deep learning model grabs position and orientation estimation method
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN109684925B (en) Depth image-based human face living body detection method and device
CN111709409A (en) Face living body detection method, device, equipment and medium
WO2019227479A1 (en) Method and apparatus for generating face rotation image
CN109784148A (en) Biopsy method and device
CN111274916A (en) Face recognition method and face recognition device
CN108182397B (en) Multi-pose multi-scale human face verification method
CN106778474A (en) 3D human body recognition methods and equipment
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN113312965B (en) Face unknown spoofing attack living body detection method and system
CN112052831A (en) Face detection method, device and computer storage medium
CN111611934A (en) Face detection model generation and face detection method, device and equipment
CN106650617A (en) Pedestrian abnormity identification method based on probabilistic latent semantic analysis
WO2024060978A1 (en) Key point detection model training method and apparatus and virtual character driving method and apparatus
CN109977764A (en) Vivo identification method, device, terminal and storage medium based on plane monitoring-network
CN112633217A (en) Human face recognition living body detection method for calculating sight direction based on three-dimensional eyeball model
Sakthimohan et al. Detection and Recognition of Face Using Deep Learning
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant