CN110580454A - Living body detection method and device


Info

Publication number
CN110580454A
CN110580454A
Authority
CN
China
Prior art keywords
detection result
living body
face
determining
detection
Prior art date
Legal status
Pending
Application number
CN201910772406.4A
Other languages
Chinese (zh)
Inventor
户磊
陈智超
王军华
康凯
汪旗
张建生
Current Assignee
Hefei Dilusense Technology Co Ltd
Original Assignee
Beijing Dilu Shenshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dilu Shenshi Technology Co Ltd filed Critical Beijing Dilu Shenshi Technology Co Ltd
Priority to CN201910772406.4A priority Critical patent/CN110580454A/en
Publication of CN110580454A publication Critical patent/CN110580454A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides a living body detection method and device, belonging to the technical field of living body detection. The method comprises the following steps: determining a face region on an infrared image and on a depth image, wherein the infrared image and the depth image are acquired from an object to be detected; performing living body detection on the face region of the infrared image through a first model to obtain a first detection result, and performing living body detection on the face region of the depth image through a second model to obtain a second detection result; and determining whether the object to be detected is a living body according to the first detection result and the second detection result. Because face texture information and shape information are combined, and no active cooperation of the detected subject is needed, the accuracy and scene applicability of the living body detection method are greatly improved.

Description

Living body detection method and device
Technical Field
The invention relates to the technical field of living body detection, and in particular to a living body detection method and device.
Background
Human faces are used for identity verification in more and more scenarios. This brings convenience but also introduces a new problem: how to prevent spoofing with a fake face (a prosthesis). Living body detection judges, during face-based identity verification, whether the face presented belongs to a real living person, thereby effectively preventing attacks on a face recognition system with videos, printed paper and masks, and ensuring the accuracy and security of identity verification. In the related art, living body detection methods mainly fall into three categories. The first is detection with random commands: the object to be detected performs random actions as instructed, such as shaking the head, blinking or pouting, or reads out a string of random digits, and liveness is judged by checking whether the detected action matches the command. The second is detection based on optical flow and rPPG features: this approach captures the subtle motion of a human face by reading a short video sequence, extracts motion information and an rPPG value, and compares them with preset values to judge liveness. The third is detection based on multiple spectra, which uses special equipment to generate near-infrared light of different wavebands and judges liveness from the reflection characteristics. The drawback of the first approach is that it requires the person's cooperation; the drawback of the second is that it takes a long time and is sensitive to illumination; the drawback of the third is that it requires specially customized equipment.
Disclosure of Invention
In order to solve the above problems, embodiments of the present invention provide a living body detection method and apparatus that overcome, or at least partially solve, the above problems.
according to a first aspect of embodiments of the present invention, there is provided a method of detecting a living body, including:
Determining a face area on an infrared image and a depth image, wherein the infrared image and the depth image are acquired based on an object to be detected;
Performing living body detection on the face area on the infrared image through the first model to obtain a first detection result, and performing living body detection on the face area on the depth image through the second model to obtain a second detection result;
And determining whether the object to be detected is a living body according to the first detection result and the second detection result.
According to a second aspect of embodiments of the present invention, there is provided a living body detection apparatus, including:
the first determination module is used for determining a face area on an infrared image and a depth image, wherein the infrared image and the depth image are acquired based on an object to be detected;
The detection module is used for performing living body detection on the face area on the infrared image through the first model to obtain a first detection result, and performing living body detection on the face area on the depth image through the second model to obtain a second detection result;
And the second determining module is used for determining whether the object to be detected is a living body according to the first detection result and the second detection result.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including:
at least one processor; and
At least one memory communicatively coupled to the processor, wherein:
The memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the living body detection method provided by any of the various possible implementations of the first aspect.
According to a fourth aspect of the present invention, there is provided a non-transitory computer readable storage medium storing computer instructions that cause a computer to perform the living body detection method provided by any of the various possible implementations of the first aspect.
The living body detection method and device provided by the embodiments of the present invention determine the face regions on the infrared image and the depth image, perform living body detection on the face region of the infrared image through the first model to obtain a first detection result, perform living body detection on the face region of the depth image through the second model to obtain a second detection result, and determine whether the object to be detected is a living body according to the two results. Face texture information and shape information are thus combined, and no active cooperation of the detected subject is needed, so the accuracy and scene applicability of the living body detection method are greatly improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory, and are not restrictive of embodiments of the invention.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a method for detecting a living body according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present invention;
Fig. 3 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In view of the problems in the related art, the embodiments of the present invention provide a method for detecting a living body. Referring to fig. 1, the method includes: 101. determining a face area on an infrared image and a depth image, wherein the infrared image and the depth image are acquired based on an object to be detected; 102. performing living body detection on the face area on the infrared image through the first model to obtain a first detection result, and performing living body detection on the face area on the depth image through the second model to obtain a second detection result; 103. and determining whether the object to be detected is a living body according to the first detection result and the second detection result.
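Steps 101 to 103 can be sketched as a short pipeline. This is only an illustrative skeleton, not the patented implementation; `find_face`, `first_model` and `second_model` are hypothetical callables standing in for the components described in the embodiments below:

```python
def detect_liveness(ir_image, depth_image, find_face, first_model, second_model):
    """Sketch of steps 101-103: locate the face, run both models, fuse."""
    ir_face, depth_face = find_face(ir_image, depth_image)   # step 101
    first_result = first_model(ir_face)                      # step 102: IR texture model
    second_result = second_model(depth_face)                 # step 102: depth shape model
    # step 103: the object is a living body only if both models say "real person"
    return first_result and second_result
```

The AND fusion in the last line anticipates the decision rule given later: both detection results must indicate a real person.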
The first detection result and the second detection result each indicate whether the object to be detected is a real person or a non-real person, where a non-real person means that the object to be detected may be merely a mask, a flat paper sheet, or the like.
The method provided by the embodiment of the invention determines the face regions on the infrared image and the depth image, performs living body detection on the face region of the infrared image through the first model to obtain a first detection result, performs living body detection on the face region of the depth image through the second model to obtain a second detection result, and determines whether the object to be detected is a living body according to the two results. Face texture information and shape information are thus combined, and no active cooperation of the detected subject is needed, so the accuracy and scene applicability of the living body detection method are greatly improved.
Based on the content of the foregoing embodiment, as an alternative embodiment, the embodiment of the present invention does not specifically limit the manner of determining the face regions on the infrared image and the depth image, and includes but is not limited to: and performing face detection on the infrared image, determining a face area on the infrared image according to the detected face frame, and determining a face area on the depth image according to the face frame.
Specifically, all faces can be detected in the infrared image with a trained face detector, and the largest face is taken as the target face, yielding its face frame and key points. The number of key points may be 68, which is not specifically limited in the embodiment of the present invention. The face frame can be determined by four values, i.e., (Top, Bottom, Left, Right), and the area inside the face frame is the face region. The face region of the depth map can then be determined from the face region on the infrared map.
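Selecting the largest detected face as the target amounts to comparing frame areas. A minimal sketch, assuming frames are given as (Top, Bottom, Left, Right) tuples in pixels:

```python
def largest_face(frames):
    """Return the face frame with the largest area.

    Each frame is a hypothetical (top, bottom, left, right) tuple in pixels.
    """
    def area(frame):
        top, bottom, left, right = frame
        return max(0, bottom - top) * max(0, right - left)
    return max(frames, key=area)
```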
Based on the content of the foregoing embodiment, as an alternative embodiment, the embodiment of the present invention does not specifically limit the manner of determining the face region on the depth map according to the face frame, which includes but is not limited to: if the infrared image and the depth image are not aligned, converting the pixels in the depth image into the infrared camera coordinate system according to the extrinsic and intrinsic parameters of the infrared camera and the depth camera and the depth value of each pixel in the depth image, and projecting them onto the infrared image; then determining a face frame for the depth map from the projected points closest to the face frame, and determining the face region on the depth map from that face frame.
Specifically, if the depth map and the infrared map are aligned, the width and height of the face frame on the infrared map can each be enlarged by a factor of 1.5, and the face region on the depth map is determined directly through the alignment between the two maps. If the depth map and the infrared map are not aligned, the extrinsic and intrinsic parameters of the infrared camera and the depth camera, together with the depth value of each pixel in the depth map, can be used to convert each pixel into the infrared camera coordinate system and project it onto the infrared map using the infrared camera intrinsics; the 4 projected points nearest to the vertices of the face frame on the infrared map are then taken as the vertices of the face frame in the depth map.
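The depth-to-infrared projection described above follows the standard pinhole camera model: back-project a depth pixel with the depth camera intrinsics, transform with the extrinsics, and project with the infrared intrinsics. A sketch under the assumption of 3x3 intrinsic matrices and an extrinsic rotation R and translation t (none of these symbols are fixed by the patent):

```python
import numpy as np

def depth_pixel_to_ir(u, v, z, K_depth, R, t, K_ir):
    """Project one depth pixel (u, v) with depth z into the infrared image.

    K_depth, K_ir: 3x3 intrinsic matrices; R, t: assumed extrinsics mapping
    the depth camera frame to the infrared camera frame.
    """
    # back-project to 3-D in the depth camera frame
    p = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # transform into the infrared camera frame
    p_ir = R @ p + t
    # project with the infrared intrinsics and dehomogenize
    uvw = K_ir @ p_ir
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With identity intrinsics and extrinsics a pixel maps to itself, which is a quick sanity check.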
Based on the content of the above embodiment, as an alternative embodiment, the first model is an SVM classifier; accordingly, the embodiment of the present invention does not specifically limit the manner of obtaining the first detection result by performing living body detection on the face region on the infrared image through the first model, and includes but is not limited to: inputting the feature vectors corresponding to the face regions on the infrared image into an SVM classifier, if the output result of the SVM classifier is larger than a preset threshold value, determining that the first detection result is a real person, and if the output result of the SVM classifier is not larger than the preset threshold value, determining that the first detection result is a non-real person.
The preset threshold may be 0, which is not specifically limited in the embodiment of the present invention. In the training process of the SVM classifier, a camera can be used to collect a large number of infrared pictures of real people and of masks; face detection, alignment and cropping are performed on these pictures, and the cropped pictures are randomly mirrored to augment the sample data. LBP features are extracted and their histograms computed; the label is set to 0 for live data of a real person and to 1 for mask data. With the sample vectors composed of LBP features as input and the corresponding labels as output, the SVM classifier can be trained iteratively; the stopping condition can be set to 100000 iterations or an error smaller than 0.00001. In addition, the SVM classifier may use a linear kernel function, which is not specifically limited in the embodiment of the present invention.
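Assuming scikit-learn is available, the SVM training described above can be sketched as follows; the toy two-dimensional vectors merely stand in for real LBP histogram features:

```python
import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is installed

# Toy stand-ins for LBP feature vectors: label 0 = real person, 1 = mask
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear")  # the description suggests a linear kernel
clf.fit(X, y)

# decision_function > 0 leans toward label 1 (mask), so comparing against
# the preset threshold of 0 is a sign test on this score
score = clf.decision_function([[0.85, 0.15]])[0]
```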
Based on the content of the foregoing embodiment, as an optional embodiment, before the feature vector corresponding to the face region on the infrared image is input to the SVM classifier, that feature vector may first be obtained. The embodiment of the present invention does not specifically limit the manner of obtaining the feature vector, which includes but is not limited to: acquiring the global LBP features and the local LBP features of the face region on the infrared image; and computing the histograms corresponding to the global and local LBP features to obtain a feature vector containing both global and local information.
Specifically, before extracting LBP features from the face region on the infrared image, the 68 face key points predicted by the infrared face detection algorithm may be used to align, scale and crop the face region image, so that the cropped region eliminates the influence of head pose. When calculating and counting the LBP features, the uniform-pattern ("equivalent") model may be employed to reduce the dimensionality of the LBP features. When counting the uniform LBP features, the infrared image can be divided into smaller blocks to compute local LBP histograms, and the image taken as a whole can be used to compute the global LBP histogram. The block-wise and global LBP histograms are then concatenated to form a feature vector containing both global and local information.
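Assuming scikit-image is available, the global-plus-local uniform LBP feature vector can be sketched like this; the 4x4 grid and the P=8, R=1 parameters are illustrative choices, not values fixed by the patent:

```python
import numpy as np
from skimage.feature import local_binary_pattern  # assumes scikit-image is installed

def lbp_feature_vector(face, P=8, R=1, grid=(4, 4)):
    """Concatenate one global and grid[0]*grid[1] local uniform-LBP histograms."""
    # 'nri_uniform' is the non-rotation-invariant uniform ("equivalent") mapping
    lbp = local_binary_pattern(face, P, R, method="nri_uniform")
    n_bins = P * (P - 1) + 3  # 59 bins for P = 8
    hists = [np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)[0]]
    bh, bw = face.shape[0] // grid[0], face.shape[1] // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hists.append(np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)[0])
    return np.concatenate(hists)
```

For P=8 the uniform mapping has 59 bins, so the global histogram plus a 4x4 grid of local ones yields a 17 x 59 = 1003-dimensional feature vector.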
Based on the content of the above embodiment, as an alternative embodiment, the second model is a convolutional neural network model. Accordingly, the embodiment of the present invention does not specifically limit the manner of performing living body detection on the face region of the depth map through the second model to obtain the second detection result, which includes but is not limited to: performing point cloud conversion, filtering/denoising and normalization on the face region of the depth map; inputting the processed result into the convolutional neural network model, which outputs a first probability that the object to be detected is a real person and a second probability that it is a non-real person; if the first probability is greater than the second probability, the second detection result is a real person, otherwise a non-real person.
Specifically, the depth points of the face region can be converted into the camera coordinate system according to the depth camera intrinsics to obtain a point cloud in space. Because the camera coordinate system is inconsistent with the commonly used right-handed coordinate system, the final point cloud can be transformed into a right-handed coordinate system. The conversion is as follows: denote the depth value of the pixel at coordinate (m, n) on the depth map as z; the pixel is converted from image coordinates (m, n) to point cloud coordinates (wx, wy, wz). By setting a filtering range, pixels whose depth values fall outside the range are filtered out. The center of the point cloud can then be computed along the camera coordinate axes, and the center coordinate subtracted from each point, moving the cloud to the origin of the coordinate system for normalization. The normalized point cloud is re-projected onto a two-dimensional plane of fixed size, each projected pixel is represented by three channels recording wx, wy and wz respectively, and holes left by the projection are filled by triangulation-based interpolation.
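The back-projection, depth filtering and centering steps can be sketched in NumPy; the intrinsics and the filtering range below are illustrative values, not ones given by the patent:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, z_min=0.2, z_max=1.5):
    """Back-project a depth map to a centered point cloud of (wx, wy, wz).

    fx, fy, cx, cy are depth-camera intrinsics; z_min/z_max form an
    illustrative filtering range in the same units as the depth values.
    """
    rows, cols = depth.shape
    m, n = np.meshgrid(np.arange(cols), np.arange(rows))  # image coords (m, n)
    z = depth.astype(np.float64)
    valid = (z > z_min) & (z < z_max)  # drop points outside the filtering range
    wx = (m[valid] - cx) * z[valid] / fx
    wy = (n[valid] - cy) * z[valid] / fy
    points = np.stack([wx, wy, z[valid]], axis=1)
    # normalization: subtract the cloud center so it sits at the origin
    return points - points.mean(axis=0)
```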
The projected image is center-cropped to obtain a point cloud picture for classification, which is input into the trained 3D convolutional neural network model; a softmax classifier yields the output results R_livedepth and R_period, which respectively represent the first probability that the object to be detected is a real person and the second probability that it is a non-real person. The two are compared to determine the second detection result.
Of course, in an actual implementation, the first probability R_livedepth may be compared directly with a preset initial threshold Thrdepth: if R_livedepth > Thrdepth, the object to be detected is a real person; otherwise it is a non-real person. Thrdepth may be 0.5.
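This simplified threshold test is a one-liner; the value 0.5 comes from the description above, while the function and constant names are our own:

```python
THR_DEPTH = 0.5  # the initial threshold Thrdepth suggested in the description

def depth_decision(r_livedepth: float) -> str:
    """Classify from the first probability alone, as in the simplified variant."""
    return "real person" if r_livedepth > THR_DEPTH else "non-real person"
```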
In addition, the 3D convolutional neural network model may be trained by the following process:
A large amount of data of real people and of flat paper can be collected; random rotations and facial expressions are required when collecting live samples, and the paper samples are randomly bent and folded when collected. Face detection is then performed on the infrared image, and the face region on the depth image is determined from the face frame on the infrared image.
Point cloud conversion, filtering, normalization, projection and interpolation are performed on the face region of the depth map to obtain a point cloud picture of fixed size. The point cloud picture is randomly rotated in roll, pitch and yaw about the coordinate axes, with the range limited to within 45 degrees, and the point cloud is randomly cropped. For live data of a real person the label is set to 0, and for flat paper data the label is set to 1. The processed data and labels are fed into the convolutional neural network, with softmax as the classifier, and the cross-entropy loss function is optimized during training.
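The softmax classifier and the cross-entropy loss it is trained with have the standard form below; a minimal NumPy sketch for the two-class (real person vs. paper) case:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(logits, label):
    """Cross-entropy loss of one sample whose true class index is `label`."""
    return -np.log(softmax(logits)[label])
```

Equal logits give each class probability 0.5, in which case the loss equals log 2.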
Based on the content of the foregoing embodiment, as an alternative embodiment, the embodiment of the present invention does not specifically limit the manner of determining whether the object to be detected is a living body according to the first detection result and the second detection result, which includes but is not limited to: if the first detection result and the second detection result both indicate a real person, determining that the object to be detected is a living body.
In the method provided by the embodiment of the invention, because mask data are not easy to acquire, a traditional machine learning approach gives the live-versus-mask classifier better generalization ability; flat paper data, by contrast, are easy to obtain in quantity, so the live-versus-paper classifier is trained with a deep learning method to obtain better classification performance. Combining the two improves the accuracy and practicality of the living body detection method.
Based on the content of the above embodiments, embodiments of the present invention provide a living body detection apparatus for performing the living body detection method provided in the above method embodiments. Referring to fig. 2, the apparatus includes:
a first determining module 201, configured to determine a face region on an infrared image and a depth image, where the infrared image and the depth image are acquired based on an object to be detected;
The detection module 202 is configured to perform living body detection on the face area on the infrared image through the first model to obtain a first detection result, and perform living body detection on the face area on the depth image through the second model to obtain a second detection result;
and the second determining module 203 is configured to determine whether the object to be detected is a living body according to the first detection result and the second detection result.
As an alternative embodiment, the first determining module 201 is configured to perform face detection on the infrared image, determine a face region on the infrared image according to a detected face frame, and determine a face region on the depth map according to the face frame.
As an alternative embodiment, the first determining module 201 is configured to, when the infrared map and the depth map are not aligned, convert the pixels in the depth map into the infrared camera coordinate system according to the extrinsic and intrinsic parameters of the infrared camera and the depth camera and the depth value of each pixel in the depth map, and project the converted pixels onto the infrared map; and to determine a face frame for the depth map from the projected points closest to the face frame, and the face region on the depth map from that face frame.
As an alternative embodiment, the first model is an SVM classifier; correspondingly, the detection module 202 is configured to input the feature vector corresponding to the face region on the infrared image into the SVM classifier, determine that the first detection result is a real person if the output of the SVM classifier is greater than a preset threshold, and a non-real person otherwise.
As an alternative embodiment, the apparatus further comprises:
The acquisition module is used for acquiring the whole LBP characteristics and the local LBP characteristics of the face area on the infrared image;
and the statistical module is used for counting the histograms corresponding to the global LBP characteristics and the local LBP characteristics to obtain a characteristic vector containing global and local information.
As an alternative embodiment, the second model is a convolutional neural network model; correspondingly, the detection module 202 is configured to perform point cloud conversion, filtering denoising and normalization processing on the face region of the depth map; and inputting the processed result into a convolutional neural network model, outputting a first probability that the object to be detected is a real person and a second probability that the object to be detected is an unreal person, if the first probability is greater than the second probability, determining that the second detection result is the real person, and if the first probability is not greater than the second probability, determining that the second detection result is the unreal person.
As an alternative embodiment, the second determining module 203 is configured to determine that the object to be detected is a living body when the first detection result and the second detection result both indicate a real person.
The device provided by the embodiment of the invention determines the face regions on the infrared image and the depth image, performs living body detection on the face region of the infrared image through the first model to obtain a first detection result, performs living body detection on the face region of the depth image through the second model to obtain a second detection result, and determines whether the object to be detected is a living body according to the two results. Face texture information and shape information are thus combined, and no active cooperation of the detected subject is needed, so the accuracy and scene applicability of living body detection are greatly improved.
Fig. 3 illustrates the physical structure of an electronic device. As shown in fig. 3, the device may include: a processor 310, a communication interface 320, a memory 330 and a communication bus 340, wherein the processor 310, the communication interface 320 and the memory 330 communicate with each other via the communication bus 340. The processor 310 may call logic instructions in the memory 330 to perform the following method: determining a face region on an infrared image and a depth image, wherein the infrared image and the depth image are acquired from an object to be detected; performing living body detection on the face region of the infrared image through the first model to obtain a first detection result, and performing living body detection on the face region of the depth image through the second model to obtain a second detection result; and determining whether the object to be detected is a living body according to the first detection result and the second detection result.
In addition, the logic instructions in the memory 330 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product that is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a U disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
Embodiments of the present invention further provide a non-transitory computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program performs the method provided in the foregoing embodiments, including: determining a face region on an infrared image and a depth image, wherein the infrared image and the depth image are acquired from an object to be detected; performing living body detection on the face region of the infrared image through the first model to obtain a first detection result, and performing living body detection on the face region of the depth image through the second model to obtain a second detection result; and determining whether the object to be detected is a living body according to the first detection result and the second detection result.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A living body detection method, comprising:
determining a face region on an infrared image and a depth image, wherein the infrared image and the depth image are acquired from an object to be detected;
performing living body detection on the face region of the infrared image through a first model to obtain a first detection result, and performing living body detection on the face region of the depth image through a second model to obtain a second detection result;
and determining whether the object to be detected is a living body according to the first detection result and the second detection result.
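For illustration only (not part of the claims), the way the two detection results combine under the decision rules of claims 4, 6, and 7 below can be sketched in Python; the function name, parameter names, and the default threshold value are assumptions:

```python
def is_live(svm_score: float, real_prob: float, fake_prob: float,
            svm_threshold: float = 0.0) -> bool:
    """Fuse the two per-modality detection results.

    svm_score: output of the first model (SVM on the infrared face region)
    real_prob: first probability from the second model (CNN on the depth face)
    fake_prob: second probability from the second model
    The object is judged a living body only when both models say "real person".
    """
    first_is_real = svm_score > svm_threshold   # first detection result (claim 4)
    second_is_real = real_prob > fake_prob      # second detection result (claim 6)
    return first_is_real and second_is_real     # fusion rule (claim 7)
```

Requiring both branches to agree trades a slightly higher rejection rate of genuine users for much stronger resistance to single-modality spoofing (e.g. a printed photo fools texture but not shape).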
2. The method of claim 1, wherein determining the face regions on the infrared map and the depth map comprises:
performing face detection on the infrared image, determining the face region on the infrared image according to a detected face frame, and determining the face region on the depth image according to the face frame.
3. The method of claim 2, wherein determining the face region on the depth map according to the face frame comprises:
if the infrared image and the depth image are not aligned, converting the pixels of the depth image into the infrared camera coordinate system according to the extrinsic parameters and the intrinsic parameters of the infrared camera and the depth information of each pixel of the depth image, and projecting the pixels onto the infrared image;
and determining the face frame of the depth map according to the projected pixel points closest to the face frame, and determining the face region on the depth map according to the face frame of the depth map.
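For illustration only, the projection step of claim 3 (back-projecting each depth pixel to 3D, transforming it into the infrared camera frame, and projecting it with the infrared camera intrinsics) can be sketched with numpy. The use of a separate depth-camera intrinsic matrix for back-projection, the pinhole-model conventions, and the function name are assumptions, since the claim does not spell them out:

```python
import numpy as np

def project_depth_to_ir(depth_map, K_depth, K_ir, R, t):
    """Map every depth pixel onto the infrared image.

    K_depth, K_ir: 3x3 intrinsic matrices; R (3x3), t (3,): extrinsics
    taking depth-camera coordinates to infrared-camera coordinates.
    Returns per-pixel (u, v) coordinates on the infrared image.
    Assumes all depth values are valid (non-zero).
    """
    h, w = depth_map.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map.astype(np.float64)
    # back-project: pixel + depth -> 3D point in depth-camera coordinates
    x = (us - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (vs - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # transform into the infrared camera frame, then project with K_ir
    pts_ir = pts @ R.T + t
    u_ir = pts_ir[:, 0] / pts_ir[:, 2] * K_ir[0, 0] + K_ir[0, 2]
    v_ir = pts_ir[:, 1] / pts_ir[:, 2] * K_ir[1, 1] + K_ir[1, 2]
    return u_ir.reshape(h, w), v_ir.reshape(h, w)
```

With identity extrinsics and identical intrinsics the mapping reduces to the identity, which makes the geometry easy to sanity-check.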
4. The method of claim 1, wherein the first model is an SVM classifier; correspondingly, performing living body detection on the face region of the infrared image through the first model to obtain the first detection result comprises:
inputting the feature vector corresponding to the face region on the infrared image into the SVM classifier; if the output of the SVM classifier is greater than a preset threshold, determining that the first detection result is a real person, and if the output is not greater than the preset threshold, determining that the first detection result is not a real person.
5. The method of claim 4, wherein before inputting the feature vector corresponding to the face region on the infrared image into the SVM classifier, the method further comprises:
acquiring the global LBP features and the local LBP features of the face region on the infrared image;
and counting the histograms corresponding to the global LBP features and the local LBP features to obtain the feature vector containing global and local information.
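For illustration only, one common way to build such a global-plus-local LBP feature vector is a basic 8-neighbour LBP with the global histogram concatenated to per-cell histograms; the exact LBP operator, grid size, and normalisation used by the patent are not specified, so those choices here are assumptions:

```python
import numpy as np

def lbp_codes(gray):
    """Basic 8-neighbour LBP: each bit records whether a neighbour >= centre."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_feature_vector(face, grid=(2, 2)):
    """Concatenate a global LBP histogram with per-cell (local) histograms."""
    codes = lbp_codes(face)
    hists = [np.bincount(codes.ravel(), minlength=256)]  # global information
    gh, gw = grid
    for i in range(gh):                                  # local information
        for j in range(gw):
            cell = codes[i * codes.shape[0] // gh:(i + 1) * codes.shape[0] // gh,
                         j * codes.shape[1] // gw:(j + 1) * codes.shape[1] // gw]
            hists.append(np.bincount(cell.ravel(), minlength=256))
    v = np.concatenate(hists).astype(np.float64)
    return v / max(v.sum(), 1.0)  # normalise so image size does not matter
```

The resulting vector (here 256 x 5 = 1280 bins for a 2x2 grid) would then be fed to the SVM classifier of claim 4.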
6. The method of claim 1, wherein the second model is a convolutional neural network model; correspondingly, performing living body detection on the face region of the depth map through the second model to obtain the second detection result comprises:
performing point cloud conversion, filtering and denoising, and normalization on the face region of the depth map;
inputting the processed result into the convolutional neural network model, which outputs a first probability that the object to be detected is a real person and a second probability that the object to be detected is not a real person; if the first probability is greater than the second probability, determining that the second detection result is a real person, and if the first probability is not greater than the second probability, determining that the second detection result is not a real person.
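For illustration only, the preprocessing of the depth face region named in claim 6 (point cloud conversion, filtering and denoising, normalization) might look like the following sketch. The median-based outlier filter and the unit-sphere normalisation are assumptions standing in for steps the patent does not specify:

```python
import numpy as np

def preprocess_depth_face(depth_face, fx, fy, cx, cy):
    """Convert a cropped depth face to a denoised, normalised point cloud.

    fx, fy, cx, cy: pinhole intrinsics of the depth camera.
    Returns an (N, 3) point cloud centred at the origin, scaled into the
    unit sphere, ready to be fed to the convolutional neural network.
    """
    h, w = depth_face.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_face.astype(np.float64)
    valid = z > 0                                # drop missing-depth pixels
    pts = np.stack([(us - cx) * z / fx, (vs - cy) * z / fy, z], axis=-1)
    pts = pts[valid]
    # crude denoising: keep points within 3 sigma of the median depth
    # (an illustrative stand-in for the patent's unspecified filter)
    med, std = np.median(pts[:, 2]), pts[:, 2].std() + 1e-9
    pts = pts[np.abs(pts[:, 2] - med) < 3 * std]
    # normalisation: centre the cloud and scale into the unit sphere
    pts -= pts.mean(axis=0)
    pts /= np.linalg.norm(pts, axis=1).max() + 1e-9
    return pts
```

Normalising to a canonical scale and origin removes the dependence on how far the subject stands from the camera, which is what lets one network generalise across capture distances.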
7. The method of claim 1, wherein determining whether the object to be detected is a living body according to the first detection result and the second detection result comprises:
if both the first detection result and the second detection result are a real person, determining that the object to be detected is a living body.
8. A living body detection device, comprising:
a first determining module, configured to determine a face region on an infrared image and a depth image, wherein the infrared image and the depth image are acquired from an object to be detected;
a detection module, configured to perform living body detection on the face region of the infrared image through a first model to obtain a first detection result, and perform living body detection on the face region of the depth image through a second model to obtain a second detection result;
and a second determining module, configured to determine whether the object to be detected is a living body according to the first detection result and the second detection result.
9. An electronic device, comprising:
At least one processor; and
at least one memory communicatively coupled to the processor, wherein:
The memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 7.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1 to 7.
CN201910772406.4A 2019-08-21 2019-08-21 Living body detection method and device Pending CN110580454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910772406.4A CN110580454A (en) 2019-08-21 2019-08-21 Living body detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910772406.4A CN110580454A (en) 2019-08-21 2019-08-21 Living body detection method and device

Publications (1)

Publication Number Publication Date
CN110580454A true CN110580454A (en) 2019-12-17

Family

ID=68811623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910772406.4A Pending CN110580454A (en) 2019-08-21 2019-08-21 Living body detection method and device

Country Status (1)

Country Link
CN (1) CN110580454A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582197A (en) * 2020-05-07 2020-08-25 贵州省邮电规划设计院有限公司 Living body based on near infrared and 3D camera shooting technology and face recognition system
CN111680574A (en) * 2020-05-18 2020-09-18 北京的卢深视科技有限公司 Face detection method and device, electronic equipment and storage medium
CN113128429A (en) * 2021-04-24 2021-07-16 新疆爱华盈通信息技术有限公司 Stereo vision based living body detection method and related equipment
CN116110111A (en) * 2023-03-23 2023-05-12 平安银行股份有限公司 Face recognition method, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798281A (en) * 2016-09-07 2018-03-13 北京眼神科技有限公司 A kind of human face in-vivo detection method and device based on LBP features
CN108875546A (en) * 2018-04-13 2018-11-23 北京旷视科技有限公司 Face auth method, system and storage medium
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109086691A (en) * 2018-07-16 2018-12-25 阿里巴巴集团控股有限公司 A kind of three-dimensional face biopsy method, face's certification recognition methods and device
CN109101871A (en) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
CN109684924A (en) * 2018-11-21 2019-04-26 深圳奥比中光科技有限公司 Human face in-vivo detection method and equipment
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN109754427A (en) * 2017-11-01 2019-05-14 虹软科技股份有限公司 A kind of method and apparatus for calibration
CN109977929A (en) * 2019-04-28 2019-07-05 北京超维度计算科技有限公司 A kind of face identification system and method based on TOF

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798281A (en) * 2016-09-07 2018-03-13 北京眼神科技有限公司 A kind of human face in-vivo detection method and device based on LBP features
CN109754427A (en) * 2017-11-01 2019-05-14 虹软科技股份有限公司 A kind of method and apparatus for calibration
CN108875546A (en) * 2018-04-13 2018-11-23 北京旷视科技有限公司 Face auth method, system and storage medium
CN109086691A (en) * 2018-07-16 2018-12-25 阿里巴巴集团控股有限公司 A kind of three-dimensional face biopsy method, face's certification recognition methods and device
CN109101871A (en) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109711243A (en) * 2018-11-01 2019-05-03 长沙小钴科技有限公司 A kind of static three-dimensional human face in-vivo detection method based on deep learning
CN109684924A (en) * 2018-11-21 2019-04-26 深圳奥比中光科技有限公司 Human face in-vivo detection method and equipment
CN109977929A (en) * 2019-04-28 2019-07-05 北京超维度计算科技有限公司 A kind of face identification system and method based on TOF

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582197A (en) * 2020-05-07 2020-08-25 贵州省邮电规划设计院有限公司 Living body based on near infrared and 3D camera shooting technology and face recognition system
CN111680574A (en) * 2020-05-18 2020-09-18 北京的卢深视科技有限公司 Face detection method and device, electronic equipment and storage medium
CN111680574B (en) * 2020-05-18 2023-08-04 合肥的卢深视科技有限公司 Face detection method and device, electronic equipment and storage medium
CN113128429A (en) * 2021-04-24 2021-07-16 新疆爱华盈通信息技术有限公司 Stereo vision based living body detection method and related equipment
CN116110111A (en) * 2023-03-23 2023-05-12 平安银行股份有限公司 Face recognition method, electronic equipment and storage medium
CN116110111B (en) * 2023-03-23 2023-09-08 平安银行股份有限公司 Face recognition method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US10783354B2 (en) Facial image processing method and apparatus, and storage medium
US10262190B2 (en) Method, system, and computer program product for recognizing face
CN110580454A (en) Living body detection method and device
JP4950787B2 (en) Image processing apparatus and method
Marin et al. Learning appearance in virtual scenarios for pedestrian detection
WO2020000908A1 (en) Method and device for face liveness detection
Kose et al. Countermeasure for the protection of face recognition systems against mask attacks
JP6798183B2 (en) Image analyzer, image analysis method and program
US20160371539A1 (en) Method and system for extracting characteristic of three-dimensional face image
KR101198322B1 (en) Method and system for recognizing facial expressions
CN108416291B (en) Face detection and recognition method, device and system
US20200380248A1 (en) Human facial detection and recognition system
JP2015207280A (en) target identification method and target identification device
US20220237943A1 (en) Method and apparatus for adjusting cabin environment
CN112232109A (en) Living body face detection method and system
CN106372629A (en) Living body detection method and device
CN109815823B (en) Data processing method and related product
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN110738607A (en) Method, device and equipment for shooting driving license based on artificial intelligence and storage medium
CN113947209A (en) Integrated learning method, system and storage medium based on cloud edge cooperation
Prakash et al. A rotation and scale invariant technique for ear detection in 3D
Chu et al. Spatialized epitome and its applications
Sun et al. Multimodal face spoofing detection via RGB-D images
JP6851246B2 (en) Object detector
CN115690934A (en) Master and student attendance card punching method and device based on batch face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220628

Address after: Room 611-217, R & D center building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei City, Anhui Province

Applicant after: Hefei lushenshi Technology Co.,Ltd.

Address before: Room 3032, gate 6, block B, 768 Creative Industry Park, 5 Xueyuan Road, Haidian District, Beijing 100083

Applicant before: BEIJING DILUSENSE TECHNOLOGY CO.,LTD.

RJ01 Rejection of invention patent application after publication

Application publication date: 20191217
