CN111666901A - Living body face detection method and device, electronic equipment and storage medium

Living body face detection method and device, electronic equipment and storage medium

Info

Publication number
CN111666901A
CN111666901A (application CN202010520464.0A)
Authority
CN
China
Prior art keywords
face
prediction result
target object
image
color image
Prior art date
Legal status
Withdrawn
Application number
CN202010520464.0A
Other languages
Chinese (zh)
Inventor
范馨予
Current Assignee
Alnnovation Beijing Technology Co ltd
Original Assignee
Alnnovation Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Alnnovation Beijing Technology Co ltd filed Critical Alnnovation Beijing Technology Co ltd
Priority to CN202010520464.0A priority Critical patent/CN111666901A/en
Publication of CN111666901A publication Critical patent/CN111666901A/en
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a living body face detection method and apparatus, an electronic device, and a storage medium, which are used to solve the problem of low accuracy in detecting a living body face in an image. The living body face detection method may be applied to an electronic device and comprises the following steps: obtaining an infrared image and a color image acquired for a target object; predicting whether the infrared image includes a living body face by using a pre-trained first image classification model to obtain a first prediction result and an uncertainty feature; fusing the uncertainty feature with a face depth feature extracted from the color image to obtain a fused feature; predicting whether the color image includes a living body face according to the fused feature by using a pre-trained second image classification model to obtain a second prediction result; and determining, according to the first prediction result and the second prediction result, a detection result of whether the face corresponding to the target object is a living body face.

Description

Living body face detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the fields of deep learning, face recognition, and living body detection, and in particular to a living body face detection method and apparatus, an electronic device, and a storage medium.
Background
Living body detection is the process of determining whether an object has real physiological characteristics in certain identity verification scenarios; it is used to verify whether a user is a real living person. Living body detection can effectively resist common attack means such as photos, masks, and screen re-shooting, thereby helping to screen out fraudulent behavior.
In the task of judging whether a face in an image is a living body face, feature analysis methods based on texture, frequency, or the optical flow field are generally used, for example: judging whether the face is a living body face by means of LBP (Local Binary Pattern) features or the optical flow field, or by analyzing the 3D face shape, for example obtaining the 3D face shape with a depth camera, which can effectively resist attacks using planar photos and videos. In practice, however, it has been found that such methods can be defeated with high probability by a high-definition picture or video; that is, the accuracy of these methods in detecting a living body face in an image is not high.
Disclosure of Invention
An object of the embodiments of the present application is to provide a living body face detection method and apparatus, an electronic device, and a storage medium, so as to solve the problem that the accuracy of detecting a living body face in an image is not high.
An embodiment of the present application provides a living body face detection method, applied to an electronic device and comprising: obtaining an infrared image and a color image acquired for a target object; predicting whether the infrared image includes a living body face by using a pre-trained first image classification model to obtain a first prediction result and an uncertainty feature; fusing the uncertainty feature with a face depth feature extracted from the color image to obtain a fused feature; predicting whether the color image includes a living body face according to the fused feature by using a pre-trained second image classification model to obtain a second prediction result; and determining, according to the first prediction result and the second prediction result, a detection result of whether the face corresponding to the target object is a living body face. In this implementation, whether the face corresponding to the target object is a living body face is determined by combining the first prediction result, determined from the infrared image, with the second prediction result, determined from the face depth feature extracted from the color image; the accuracy of detecting a living body face in an image is thereby effectively improved.
Optionally, in this embodiment of the present application, before fusing the uncertainty feature with the face depth feature extracted from the color image, the method further comprises: determining a face region from the color image; and determining the face depth feature according to the face region. In this implementation, determining the face region from the color image first, and then determining the face depth feature from that region, effectively improves the accuracy of the face depth feature.
Optionally, in this embodiment of the present application, determining the face depth feature according to the face region comprises: extracting face features from the face region; performing depth estimation on the face region to obtain a depth feature map; and fusing the face features with the depth feature map by using a pre-trained depth fusion network model to obtain the face depth feature. In this implementation, fusing the face features extracted from the face region with the estimated depth feature map effectively improves the accuracy of determining the face depth feature.
Optionally, in this embodiment of the application, when the confidence of the first prediction result is not greater than a preset threshold, determining the detection result of whether the face corresponding to the target object is a living body face according to the first prediction result and the second prediction result comprises: judging whether the second prediction result indicates that the color image includes a living body face; if so, determining that the face corresponding to the target object is a living body face; and if not, determining that the face corresponding to the target object is not a living body face. In this implementation, falling back on the second prediction result when the first prediction result is uncertain effectively improves the accuracy of determining whether the face corresponding to the target object is a living body face.
Optionally, in this embodiment of the present application, when the confidence of the first prediction result is greater than the preset threshold, determining the detection result of whether the face corresponding to the target object is a living body face according to the first prediction result and the second prediction result further comprises: judging whether the first prediction result indicates that the infrared image includes a living body face; if so, the face corresponding to the target object is a living body face; and if not, the face corresponding to the target object is not a living body face. In this implementation, using the first prediction result directly when it is sufficiently confident effectively improves the efficiency of determining whether the face corresponding to the target object is a living body face.
Optionally, in this embodiment of the present application, obtaining the infrared image and the color image acquired for the target object comprises: receiving the infrared image and the color image sent by a terminal device; after determining the detection result of whether the face corresponding to the target object is a living body face according to the first prediction result and the second prediction result, the method further comprises: sending the detection result to the terminal device. In this implementation, the electronic device receives the infrared image and the color image sent by the terminal device and, after determining the detection result, sends the detection result back to the terminal device; the speed at which the terminal device obtains the detection result is thereby effectively improved.
The embodiment of the application further provides a living body face detection method, applied to a terminal device and comprising: sending an infrared image and a color image, acquired for a target object, to an electronic device, so that the electronic device determines and sends a detection result of whether the target object is a living body face according to the infrared image and the color image; and receiving the detection result sent by the electronic device. In this implementation, the terminal device offloads the detection to the electronic device and simply receives the result; the speed at which the terminal device obtains the detection result is thereby effectively improved.
The embodiment of the present application further provides a living body face detection device, which is applied to an electronic device, and includes: the image acquisition module is used for acquiring an infrared image and a color image which are acquired aiming at a target object; the first prediction module is used for predicting whether the infrared image comprises a living human face by using a pre-trained first image classification model, and obtaining a first prediction result and an uncertainty characteristic; the feature fusion module is used for fusing the uncertainty feature and the face depth feature extracted from the color image to obtain a fusion feature; the second prediction module is used for predicting whether the color image comprises the living human face or not according to the fusion characteristics by using a pre-trained second image classification model to obtain a second prediction result; and the result determining module is used for determining whether the face corresponding to the target object is the detection result of the living body face according to the first prediction result and the second prediction result.
Optionally, in this embodiment of the present application, the living body face detection apparatus further includes: the region determining module is used for determining a face region from the color image; and the characteristic determining module is used for determining the face depth characteristic according to the face area.
Optionally, in an embodiment of the present application, the feature determining module further includes: the face feature extraction module is used for extracting face features from the face region; the depth feature obtaining module is used for carrying out depth estimation on the face region to obtain a depth feature map; and the feature depth fusion module is used for fusing the face features and the depth feature map by using a pre-trained depth fusion network model to obtain the face depth features.
Optionally, in this embodiment of the application, the confidence of the first prediction result is not greater than a preset threshold; the result determining module comprises: a color image judging module, used for judging whether the second prediction result indicates that the color image includes a living body face; if the second prediction result indicates that the color image includes a living body face, determining that the face corresponding to the target object is a living body face; and if the second prediction result indicates that the color image does not include a living body face, determining that the face corresponding to the target object is not a living body face.
Optionally, in this embodiment of the present application, the confidence of the first prediction result is greater than the preset threshold; the result determining module further comprises: an infrared image judging module, used for judging whether the first prediction result indicates that the infrared image includes a living body face; if the first prediction result indicates that the infrared image includes a living body face, the face corresponding to the target object is a living body face; and if the first prediction result indicates that the infrared image does not include a living body face, the face corresponding to the target object is not a living body face.
Optionally, in an embodiment of the present application, the image obtaining module includes: an image receiving module, used for receiving the infrared image and the color image sent by the terminal device; the living body face detection apparatus further includes: a result sending module, used for sending the detection result to the terminal device.
The embodiment of the present application further provides a living body face detection device, which is applied to a terminal device, and includes: the image sending module is used for sending the infrared image and the color image to the electronic equipment so that the electronic equipment can determine and send a detection result of whether the target object is a living human face or not according to the infrared image and the color image, wherein the infrared image and the color image are obtained by collecting the target object; and the result receiving module is used for receiving the detection result sent by the electronic equipment.
An embodiment of the present application further provides an electronic device, including: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as described above.
Embodiments of the present application also provide a storage medium having a computer program stored thereon; when the computer program is executed by a processor, it performs the method as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a living human face detection method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a depth fusion network model provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of a method for obtaining a face depth feature according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a communication method between an electronic device and a terminal device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a living human face detection device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before introducing the living body face detection method provided by the embodiments of the present application, some concepts involved in the embodiments are introduced as follows:
deep Learning (Deep Learning), which refers to Learning the intrinsic rules and representation levels of sample data, wherein information obtained in the Learning process is helpful to the interpretation of data such as characters, images and sounds; the final goal of deep learning is to make a machine capable of human-like analytical learning, and recognizing data such as characters, images and sounds, and the deep learning includes, but is not limited to, extracting features of data such as characters, images and sounds by using a deeper neural network model.
An Artificial Neural Network (ANN), Neural Network (NN) for short or Neural Network-like Network, is a mathematical model or computational model for simulating the structure and function of a biological Neural Network (for example, the central nervous system of an animal, such as the brain) in the field of machine learning and cognitive science, and is used for estimating or approximating functions; the neural network here is computed from a large number of artificial neuron connections.
A Convolutional Neural Network (CNN), which is an artificial Neural network, in which artificial neurons of the artificial Neural network can respond to surrounding units and can perform large-scale image processing; the convolutional neural network may include convolutional and pooling layers.
An image classification model, also called an image classification neural network model, refers to a neural network model for image classification obtained after training a neural network, that is, an image is used as an input of the image classification neural network model to obtain an output of a probability list, and a common image classification neural network model is, for example: CNN and Deep Neural Networks (DNN), among others.
Depth estimation refers to computing, from one or more two-dimensional pictures, a three-dimensional depth estimation map, for example by using a neural network algorithm; depth estimation can therefore be understood as inferring three-dimensional spatial information from two-dimensional planar information.
A depth feature map (deep feature map) refers to a feature matrix representing three-dimensional depth information in a two-dimensional image, where depth information can be understood as three-dimensional spatial information; that is, it is a feature matrix of three-dimensional spatial information extracted from a two-dimensional image.
A depth estimation network model refers to a neural network model for extracting a depth feature map from a two-dimensional image; it may be a neural network model obtained by training a CNN with training data.
Infrared (IR) is electromagnetic radiation with a wavelength between those of microwaves and visible light, from about 760 nanometers (nm) to 1 millimeter (mm); it is invisible light with a wavelength longer than that of red light, corresponding to frequencies in the range of roughly 430 THz to 300 GHz.
A charge-coupled device (CCD) is a detecting element that uses charge to represent signal magnitude and transfers the signal by coupling; each light-sensing unit of the CCD in a camera lens corresponds to one pixel of the image the CCD produces.
An infrared camera is a camera with an infrared filter added between the lens and the CCD; the filter passes infrared light in a certain waveband while absorbing or reflecting visible and ultraviolet light. Most infrared cameras use light-emitting diodes (LEDs) as their infrared illumination source.
A server is a device that provides computing services over a network, for example an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers. In a specific implementation, the server may be a minicomputer or a mainframe: a minicomputer here refers to a closed, dedicated device that mainly provides computing services for a UNIX operating system and typically uses dedicated processors such as Reduced Instruction Set Computing (RISC) or MIPS processors; a mainframe, also known as a large host, refers to a device that provides computing services using a dedicated processor instruction set, operating system, and application software.
It should be noted that the living body face detection method provided in the embodiments of the present application may be executed by an electronic device, where the electronic device may be the server described above or a device terminal capable of executing a computer program, for example: a smartphone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile internet device (MID), a network switch, or a network router.
Before introducing the living body face detection method provided by the embodiments of the present application, the application scenarios to which the method applies are introduced. These scenarios include, but are not limited to, the security field and the payment field, where the method is used to detect whether the face corresponding to a target object is a living body face; cases in which the face is not a living body face include photos, masks, and screen re-shooting. For example, in an access control system in the security field, an attacker holding a photo, a video, or an imitation mask of the home owner cannot enter the house; in the mobile payment field, an attacker holding a photo, a video, or an imitation mask of the payer cannot complete a payment by passing face recognition as the payer.
Please refer to fig. 1, which illustrates a schematic flow chart of a living human face detection method provided in the embodiment of the present application; the living body face detection method can be applied to electronic equipment, and the living body face detection method can comprise the following steps:
step S110: an infrared image and a color image acquired for the target object are obtained.
The target object is the object to be examined for whether its face is a living body face, for example: a real living body face, a face photo, a face video, or a printed three-dimensional face mask.
The infrared image is an image of the target object captured under infrared light; it may be obtained by photographing the target object with an infrared camera.
The color image is an image of the target object containing red, green, and blue channels; it may be obtained by photographing the target object with an ordinary RGB camera, where RGB is short for the three color channels red (R), green (G), and blue (B).
Embodiments of obtaining the infrared image and the color image acquired for the target object in step S110 include: in a first mode, obtaining pre-stored infrared and color images acquired for the target object, for example from a file system or from a database; in a second mode, receiving the infrared image and the color image from another terminal device; in a third mode, obtaining the infrared image and the color image from the internet using software such as a browser, or accessing the internet with another application program to obtain them; in a fourth mode, capturing an image of the target object with an infrared camera to obtain the infrared image, and capturing an image of the target object with an ordinary RGB camera to obtain the color image.
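For the fourth mode, the following is a minimal sketch of acquiring the two images with OpenCV; the device indices 0 and 1 are assumptions that depend on how the RGB and infrared cameras enumerate on the actual hardware:

```python
import cv2

# Hypothetical device indices; on real hardware they depend on how the
# RGB camera and the infrared camera (exposed as a video device) enumerate.
rgb_cam = cv2.VideoCapture(0)   # ordinary RGB camera
ir_cam = cv2.VideoCapture(1)    # infrared camera

ok_rgb, color_image = rgb_cam.read()    # color image of the target object
ok_ir, infrared_image = ir_cam.read()   # infrared image of the target object
if not (ok_rgb and ok_ir):
    raise RuntimeError("failed to capture the color or infrared frame")

rgb_cam.release()
ir_cam.release()
```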
After step S110, step S120 is performed: and predicting whether the infrared image comprises a living human face or not by using a first image classification model trained in advance, and obtaining a first prediction result and an uncertainty characteristic.
The first image classification model is a neural network model for classifying infrared images, the input of the neural network model is the infrared images, and the output of the neural network model is a prediction result of whether the infrared images comprise living human faces or not; the pre-trained first image classification model refers to a first image classification model obtained by training a neural network model in advance by using training data.
The first prediction result is a result of classifying the infrared image using the first image classification model, where the classification result includes two results: the infrared image includes a living human face and the infrared image does not include a living human face.
The uncertainty feature is a feature characterizing the degree to which the classification result of the infrared image cannot be determined during classification; specifically, a confidence value may be used to characterize the uncertainty of the classification result of the infrared image.
An embodiment of step S120 is, for example: if the first image classification model is a trained convolutional neural network model, a face region is detected from the infrared image, and the trained first image classification model predicts whether the face in the face region is a living body face, thereby obtaining the first prediction result and the uncertainty feature; the convolutional neural network model here is, for example: LeNet, AlexNet, VGG, GoogLeNet, or ResNet. The first image classification model may be trained, for example, as follows: obtain a convolutional neural network and training data, where the training data include infrared images and data labels, and train the convolutional neural network with the training data to obtain the first image classification model.
In a specific implementation, the convolutional neural network model may use a normalized exponential function (Softmax) to classify the feature map extracted from the infrared image. The normalized exponential function, also called the Softmax classifier, Softmax layer, or Softmax function, is in fact a gradient-log normalization of a finite discrete probability distribution; in mathematics, notably in probability theory and related fields, it is a generalization of the logistic function. The normalized exponential function can "compress" a K-dimensional vector of arbitrary real numbers into another K-dimensional real vector σ(z), such that each element lies in the range (0, 1) and all elements sum to 1.
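As a concrete illustration of step S120, the following is a minimal PyTorch sketch; the backbone architecture, the input size, and the choice of the penultimate feature vector as the uncertainty feature are illustrative assumptions, not the patented model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IRLivenessNet(nn.Module):
    """Hypothetical first image classification model: classifies an infrared
    face crop as living/non-living and exposes an intermediate feature
    vector that stands in for the uncertainty feature."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, 2)  # classes: [not living, living]

    def forward(self, ir_face):
        feature = self.backbone(ir_face)    # assumed uncertainty feature
        logits = self.classifier(feature)
        probs = F.softmax(logits, dim=1)    # normalized exponential function
        return probs, feature

model = IRLivenessNet().eval()
ir_face = torch.randn(1, 1, 112, 112)       # placeholder infrared face crop
with torch.no_grad():
    probs, uncertainty_feature = model(ir_face)
first_prediction = bool(probs[0, 1] > 0.5)  # does the IR image include a living face?
confidence = probs[0].max().item()          # confidence used later in step S150
```

In this sketch the Softmax output supplies both the first prediction result and the confidence that is compared against the preset threshold in step S150.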
After step S120, step S130 is performed: and fusing the uncertain degree features and the face depth features extracted from the color image to obtain fused features.
The face depth feature refers to a depth feature map, extracted from the color image, that represents face depth information. There are many ways to obtain the face depth feature, one of which is, for example: performing depth estimation on the color image using a pre-trained depth estimation network model to obtain the face depth feature, where the depth estimation network model may specifically be a convolutional neural network model such as LeNet, AlexNet, VGG, ResNet, or GoogLeNet; the specific manner of obtaining the face depth feature is described in detail below.
Please refer to fig. 2, a schematic diagram of the depth fusion network model provided in an embodiment of the present application. An embodiment of step S130 is, for example: fusing the uncertainty feature with the face depth feature extracted from the color image by using a depth fusion network model to obtain the fused feature. The depth fusion network model is also referred to as a Depth Map (DM) network model, or the DM network model or RGB/DM network model for short. The depth fusion network model comprises: a squeeze-and-excitation (S&E) module, a Global Average Pooling (GAP) layer, residual modules (residual blocks), and fully connected layers. The S&E module (also written S&E block) re-weights and fuses the obtained features and can be implemented with convolutional neural network layers; the GAP layer is a neural network layer that performs global average pooling; a residual module is a residual neural network layer used to fuse features of different scales to obtain fused features. The residual modules in the figure may include: a first residual block (res1), a second residual block (res2), and a third residual block (res3); specifically, a residual module may employ ResNet22, ResNet38, ResNet50, ResNet101, or ResNet152, among others. The fully connected layers in the figure may include: a first fully connected layer, a second fully connected layer, and a third fully connected layer.
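The following PyTorch sketch illustrates the building blocks named above (an S&E module, residual blocks, a GAP layer, and a fully connected layer); the channel counts and the way the uncertainty feature is injected after pooling are illustrative assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """S&E module: re-weights the channels of a feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))            # squeeze: global average per channel
        w = self.fc(w)[:, :, None, None]  # excite: per-channel weights
        return x * w

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class DepthFusionNet(nn.Module):
    """Illustrative DM network: fuses a face depth feature map with the
    uncertainty feature from the first model into one fused feature vector."""
    def __init__(self, channels=32, uncertainty_dim=32, fused_dim=64):
        super().__init__()
        self.res = nn.Sequential(ResidualBlock(channels),   # res1
                                 ResidualBlock(channels),   # res2
                                 ResidualBlock(channels))   # res3
        self.se = SqueezeExcite(channels)
        self.gap = nn.AdaptiveAvgPool2d(1)                  # GAP layer
        self.fc = nn.Linear(channels + uncertainty_dim, fused_dim)

    def forward(self, depth_feature_map, uncertainty_feature):
        x = self.se(self.res(depth_feature_map))
        x = self.gap(x).flatten(1)
        return self.fc(torch.cat([x, uncertainty_feature], dim=1))

fusion = DepthFusionNet().eval()
fused_feature = fusion(torch.randn(1, 32, 28, 28), torch.randn(1, 32))  # -> (1, 64)
```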
After step S130, step S140 is performed: and predicting whether the color image comprises the living body face or not according to the fusion characteristics by using a pre-trained second image classification model to obtain a second prediction result.
The second image classification model is a neural network model for classifying the color images according to the fusion features, the input of the neural network model is the fusion features, and the output of the neural network model is a prediction result of whether the color images comprise the living human faces or not; the pre-trained second image classification model is obtained by training the neural network model in advance by using training data.
The second prediction result is a result of classifying the fusion feature using the second image classification model, where the classification result includes two results: the color image includes a live face and the color image does not include a live face.
An embodiment of step S140 is, for example: obtain a convolutional neural network and training data, where the training data include color images and data labels; train the convolutional neural network with the training data to obtain the second image classification model; then classify the fused feature with the trained second image classification model to obtain the classification result of whether the color image includes a living body face, i.e. the second prediction result. The convolutional neural network here is, for example: LeNet, AlexNet, VGG, GoogLeNet, or ResNet.
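Continuing the assumptions above, a minimal sketch of the inference side of step S140 (the fused-feature dimension and the classifier head are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical second image classification model: a small MLP head that
# classifies the fused feature produced by the depth fusion network.
second_model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),                # classes: [not living, living]
).eval()

fused_feature = torch.randn(1, 64)   # placeholder fused feature
with torch.no_grad():
    probs = F.softmax(second_model(fused_feature), dim=1)
second_prediction = bool(probs[0, 1] > 0.5)  # does the color image include a living face?
```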
After step S140, step S150 is performed: and determining whether the face corresponding to the target object is a detection result of the living body face according to the first prediction result and the second prediction result.
The detection result is the result of whether the face corresponding to the target object is a living body face. Specifically: if the target object is a real living body face, the detection result is that the face corresponding to the target object is a living body face; if the target object is a face photo, a face video, or a printed three-dimensional face mask, the detection result is that the face corresponding to the target object is not a living body face.
The above embodiment of step S150 is divided into the following two cases:
in a first case, when the confidence of the first prediction result is not greater than the preset threshold, the implementation of step S150 may include:
the Confidence degree is an interval estimation of a certain overall parameter of a probability sample in a Confidence interval (Confidence interval) of the sample, wherein the Confidence degree refers to the credibility of the first prediction result, and the value interval of the Confidence degree is 0% to 100%.
The preset threshold refers to a preset limit threshold, and the preset threshold may be set according to a specific actual situation, for example: may be set to 20%, 50%, or 90%, etc.
Step S151: judging whether the second prediction result indicates that the color image includes a living body face.
An embodiment of step S151 is, for example: if the confidence of the first prediction result is 50% and the preset threshold is 80%, the confidence of the first prediction result is not greater than the preset threshold, so whether the second prediction result indicates that the color image includes a living body face is judged.
Step S152: if the second prediction result indicates that the color image includes a living body face, determining that the face corresponding to the target object is a living body face.
An embodiment of step S152 is, for example: if the target object is a real living body face and the color image is acquired from it, the obtained second prediction result will be that the color image includes a living body face, and the face corresponding to the target object is determined to be a living body face.
Step S153: if the second prediction result indicates that the color image does not include a living body face, determining that the face corresponding to the target object is not a living body face.
An embodiment of step S153 is, for example: if the target object is a photo, a video, or a mask of a face and the color image is acquired from it, the obtained second prediction result will be that the color image does not include a living body face, and the face corresponding to the target object is determined not to be a living body face.
In this implementation, whether the second prediction result indicates that the color image includes a living body face is judged; if so, the face corresponding to the target object is determined to be a living body face, and if not, it is determined not to be a living body face; the accuracy of determining whether the face corresponding to the target object is a living body face is thereby effectively improved.
In the second case, when the confidence of the first prediction result is greater than the preset threshold, the implementation of step S150 may include:
step S154: and judging whether the first prediction result is that the infrared image comprises a living human face.
The implementation principle and implementation manner of this step are similar or similar to those of step S151, and therefore, the implementation manner and implementation principle of this step are not described here, and if it is not clear, reference may be made to the description of step S151.
Step S155: and if the first prediction result is that the infrared image comprises a living human face, the human face corresponding to the target object is the living human face.
The implementation principle and implementation manner of this step are similar or similar to those of step S152, and therefore, the implementation manner and implementation principle of this step are not described here, and if it is not clear, reference may be made to the description of step S152.
Step S156: and if the first prediction result is that the living human face is not included in the infrared image, the human face corresponding to the target object is not the living human face.
The implementation principle and implementation manner of this step are similar or analogous to those of step S153, and therefore, the implementation principle and implementation manner of this step are not described here, and if it is not clear, reference may be made to the description of step S153. In the implementation process, whether the first prediction result is an infrared image and comprises a living human face is judged; if so, the face corresponding to the target object is a living face; if not, the face corresponding to the target object is not the living body face; the efficiency of determining whether the face corresponding to the target object is the living body face is effectively improved.
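To make the two cases concrete, here is a minimal sketch of the decision logic of step S150; the threshold value is an illustrative assumption, and the predictions and confidence would come from the two classification models above:

```python
def detect_living_face(first_prediction: bool, first_confidence: float,
                       second_prediction: bool, threshold: float = 0.8) -> bool:
    """Combine the two prediction results as in steps S151-S156: if the
    first (infrared) prediction is confident enough, use it directly;
    otherwise fall back on the second (color/depth) prediction."""
    if first_confidence > threshold:   # second case: trust the infrared branch
        return first_prediction
    return second_prediction           # first case: defer to the color branch

# Example: an uncertain infrared prediction defers to the color prediction.
assert detect_living_face(True, 0.5, False) is False
# Example: a confident infrared prediction is used directly.
assert detect_living_face(True, 0.9, False) is True
```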
In the above implementation, the first prediction result and the uncertainty feature are obtained by predicting, with the pre-trained first image classification model, whether the infrared image includes a living body face; the uncertainty feature is fused with the face depth feature to obtain the fused feature; the pre-trained second image classification model then predicts, according to the fused feature, whether the color image includes a living body face, obtaining the second prediction result; and the detection result of whether the face corresponding to the target object is a living body face is determined according to the first and second prediction results. That is, the first prediction result determined from the infrared image is combined with the second prediction result determined from the face depth feature extracted from the color image, so the accuracy of detecting a living body face in an image is effectively improved.
Please refer to fig. 3, a schematic flow chart of the method for obtaining the face depth feature provided in an embodiment of the present application. The method for obtaining the face depth feature is described below; after the color image acquired for the target object is obtained in step S110, and before step S130, the method may further include the following steps:
step S210: the face region is determined from the color image.
An embodiment of determining the face region from the color image includes: determining the face region from the color image by using a face recognition network model, where the face recognition network model may be a trained convolutional neural network model, for example: LeNet, AlexNet, VGG, GoogLeNet, or ResNet.
Step S220: and determining the depth characteristics of the human face according to the human face area.
There are many ways to determine the face depth feature according to the face region:
In a first mode, the face region is processed to obtain a geometric normal vector for each object in the region, and a depth estimation map is then obtained from the geometric normal vectors.
In a second mode, two convolutional neural network models are used to obtain a global depth estimate and a local depth estimate respectively, and the two estimates are then fused and adjusted; for example: a neural network model is used to extract three-dimensional spatial information from one or more pictures, where the network can be obtained by cutting and splicing a ResNet model.
In a third mode, face features are first extracted, depth estimation is then performed on the face region to obtain a depth feature map, and finally the face features are fused with the depth feature map to obtain the face depth feature; for example: extract face features from the face region; perform depth estimation on the face region to obtain a depth feature map; and fuse the face features with the depth feature map by using a pre-trained depth fusion network model to obtain the face depth feature. The depth fusion network model here is the Depth Map (DM) network model, also referred to as the DM network model or RGB/DM network model, described above. A sketch of this mode follows the next paragraph.
In this implementation, the face region is determined from the color image; face features are extracted from the face region; depth estimation is performed on the face region to obtain a depth feature map; and the face features are fused with the depth feature map by a pre-trained depth fusion network model to obtain the face depth feature; the accuracy of determining the face depth feature is thereby effectively improved.
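A sketch of the third mode as an end-to-end pipeline; the Haar cascade is a readily available stand-in for the face recognition network model, and depth_estimator and fusion_model are hypothetical stand-ins for the pre-trained depth estimation and depth fusion network models assumed above:

```python
import cv2
import torch

def face_depth_feature(color_image, depth_estimator, fusion_model):
    """Step S210: determine the face region; step S220: estimate a depth
    feature map for it and fuse it with the face features."""
    # Face detection (stand-in for the face recognition network model).
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY))
    if len(faces) == 0:
        raise ValueError("no face region found in the color image")
    x, y, w, h = faces[0]
    face = cv2.resize(color_image[y:y + h, x:x + w], (112, 112))

    # Depth estimation on the face region, then fusion of the face features
    # with the depth feature map to obtain the face depth feature.
    face_tensor = (torch.from_numpy(face).permute(2, 0, 1).float() / 255)[None]
    with torch.no_grad():
        depth_feature_map = depth_estimator(face_tensor)
        return fusion_model(face_tensor, depth_feature_map)
```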
Please refer to fig. 4, which is a schematic flow chart of a communication method between an electronic device and a terminal device according to an embodiment of the present application; optionally, in this embodiment of the present application, the electronic device may further communicate with a terminal device, and the specific communication method may include the following steps:
step S310: the terminal device transmits the infrared image and the color image to the electronic device.
An embodiment of step S310 is, for example: the terminal device transmits the infrared image and the color image to the electronic device through the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP, also called the network communication protocol, is a connection-oriented, reliable, byte-stream-based transport layer communication protocol; in the Internet protocol suite, the TCP layer is an intermediate layer located above the IP layer and below the application layer, and reliable, pipe-like connections are often required between the application layers of different hosts. UDP, short for User Datagram Protocol, is a connectionless transport layer protocol in the Open Systems Interconnection (OSI) reference model that provides a simple, transaction-oriented, unreliable message transfer service.
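A minimal sketch of the terminal side of step S310 over TCP; the host, port, and length-prefixed framing are illustrative choices, not mandated by the method:

```python
import socket
import struct

def send_images(infrared_bytes: bytes, color_bytes: bytes,
                host: str = "192.0.2.1", port: int = 9000) -> None:
    """Send the two encoded images (e.g. JPEG bytes) to the electronic
    device, each prefixed with its length so the receiver can frame them."""
    with socket.create_connection((host, port)) as conn:
        for payload in (infrared_bytes, color_bytes):
            conn.sendall(struct.pack("!I", len(payload)))  # 4-byte length prefix
            conn.sendall(payload)
```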
Step S320: the electronic equipment receives the infrared image and the color image sent by the terminal equipment, predicts whether the infrared image comprises a living human face or not by using a pre-trained first image classification model, and obtains a first prediction result and an uncertainty characteristic.
The implementation principle and manner of this step are similar to those of step S120 and are therefore not repeated here; if anything is unclear, reference may be made to the description of step S120.
Step S330: and the electronic equipment fuses the uncertainty characteristic and the face depth characteristic extracted from the color image to obtain a fusion characteristic.
The implementation principle and manner of this step are similar to those of step S130 and are therefore not repeated here; if anything is unclear, reference may be made to the description of step S130.
Step S340: and the electronic equipment predicts whether the color image comprises the living human face or not according to the fusion characteristics by using a pre-trained second image classification model to obtain a second prediction result.
The implementation principle and manner of this step are similar to those of step S140 and are therefore not repeated here; if anything is unclear, reference may be made to the description of step S140.
Step S350: and the electronic equipment determines whether the face corresponding to the target object is the detection result of the living body face according to the first prediction result and the second prediction result.
The implementation principle and manner of this step are similar to those of step S150 and are therefore not repeated here; if anything is unclear, reference may be made to the description of step S150.
Step S360: and the electronic equipment sends the detection result to the terminal equipment.
An embodiment of the electronic device sending the detection result to the terminal device is, for example: the electronic device may send the detection result to one terminal device or to multiple terminal devices, and the result may be sent to a given terminal device over a wireless network, over a wired network, or over the internet via a mix of wired and wireless networks.
Step S370: and the terminal equipment receives the detection result sent by the electronic equipment.
An embodiment of the terminal device receiving the detection result sent by the electronic device includes: the terminal device receives the detection result sent by the electronic device via Asynchronous JavaScript And XML (AJAX), where AJAX is a web development technology for creating interactive, fast, and dynamic web applications that can update part of a web page without reloading the whole page.
In this implementation, the electronic device receives the infrared image and the color image sent by the terminal device and, after determining the detection result of whether the face corresponding to the target object is a living body face according to the first and second prediction results, sends the detection result to the terminal device; the speed at which the terminal device obtains the detection result is thereby effectively improved.
Please refer to fig. 5, which illustrates a schematic diagram of a living human face detection apparatus provided in the embodiment of the present application; the embodiment of the application provides a living body face detection apparatus 400, which is applied to an electronic device and includes:
an image obtaining module 410, configured to obtain an infrared image and a color image acquired for the target object.
The first prediction module 420 is configured to predict whether the infrared image includes a living human face using a first image classification model trained in advance, and obtain a first prediction result and an uncertainty characteristic.
And the feature fusion module 430 is configured to fuse the uncertainty features and the face depth features extracted from the color image to obtain fusion features.
And the second prediction module 440 is configured to predict whether the color image includes a live human face according to the fusion feature by using a second image classification model trained in advance, so as to obtain a second prediction result.
And the result determining module 450 is configured to determine whether the face corresponding to the target object is a detection result of the living body face according to the first prediction result and the second prediction result.
Optionally, in this embodiment of the present application, the living body face detection apparatus further includes:
and the region determining module is used for determining a face region from the color image.
And the characteristic determining module is used for determining the face depth characteristic according to the face area.
Optionally, in an embodiment of the present application, the feature determining module further includes:
and the face feature extraction module is used for extracting face features from the face region.
And the depth feature obtaining module is used for carrying out depth estimation on the face region to obtain a depth feature map.
And the feature depth fusion module is used for fusing the face features and the depth feature map by using a pre-trained depth fusion network model to obtain the face depth features.
Optionally, in this embodiment of the application, the confidence of the first prediction result is not greater than a preset threshold; the result determining module comprises:
And the color image judging module is used for judging whether the second prediction result indicates that the color image includes a living body face.
And if the second prediction result indicates that the color image includes a living body face, determining that the face corresponding to the target object is a living body face.
And if the second prediction result indicates that the color image does not include a living body face, determining that the face corresponding to the target object is not a living body face.
Optionally, in this embodiment of the present application, the confidence of the first prediction result is greater than the preset threshold; the result determining module further comprises:
And the infrared image judging module is used for judging whether the first prediction result indicates that the infrared image includes a living body face.
And if the first prediction result indicates that the infrared image includes a living body face, the face corresponding to the target object is a living body face.
And if the first prediction result indicates that the infrared image does not include a living body face, the face corresponding to the target object is not a living body face.
Optionally, in an embodiment of the present application, the image obtaining module includes:
and the image receiving module is used for receiving the infrared image and the color image sent by the terminal equipment.
The living body face detection apparatus further includes:
and the result sending module is used for sending the detection result to the terminal equipment.
An embodiment of the present application further provides a living body face detection apparatus, applied to a terminal device, including:
an image sending module, configured to send the infrared image and the color image to an electronic device, so that the electronic device determines and sends, according to the infrared image and the color image, a detection result of whether the target object is a living human face, where the infrared image and the color image are acquired by capturing the target object; and
a result receiving module, configured to receive the detection result sent by the electronic device.
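The terminal-side exchange amounts to a simple request/response round trip. The sketch below assumes an HTTP transport, a /liveness endpoint, and a JSON response field named is_live_face; none of these are fixed by the application, which leaves the transport between terminal device and electronic device open.

    import requests  # transport choice is an assumption; the application fixes no protocol

    def request_liveness_check(server_url, ir_image_bytes, color_image_bytes):
        # Terminal device: send both captured images to the electronic device
        # and receive the detection result it determines.
        files = {
            "infrared": ("ir.png", ir_image_bytes, "image/png"),
            "color": ("color.png", color_image_bytes, "image/png"),
        }
        resp = requests.post(f"{server_url}/liveness", files=files, timeout=10)
        resp.raise_for_status()
        return resp.json()["is_live_face"]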
It should be understood that this apparatus corresponds to the living body face detection method embodiments described above and can perform the steps of those method embodiments; for its specific functions, reference may be made to the description above, and a detailed description is omitted here to avoid redundancy. The apparatus includes at least one software functional module that can be stored in a memory in the form of software or firmware, or solidified in the operating system (OS) of the apparatus.
Please refer to Fig. 6, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 provided in the embodiment of the present application includes a processor 510 and a memory 520, where the memory 520 stores machine-readable instructions executable by the processor 510, and the machine-readable instructions, when executed by the processor 510, perform the method described above.
An embodiment of the present application further provides a storage medium 530, on which a computer program is stored; when the computer program is executed by the processor 510, the living body face detection method described above is performed.
The storage medium 530 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or actions, or by a combination of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions.
The above description is merely an optional implementation of the embodiments of the present application, and the scope of the embodiments of the present application is not limited thereto. Any change or substitution that can be readily conceived by a person skilled in the art within the technical scope disclosed in the embodiments of the present application shall fall within the scope of the embodiments of the present application.

Claims (10)

1. A living body face detection method, applied to an electronic device, the method comprising:
acquiring an infrared image and a color image acquired for a target object;
predicting whether the infrared image includes a living human face by using a pre-trained first image classification model, to obtain a first prediction result and an uncertainty feature;
fusing the uncertainty feature with a face depth feature extracted from the color image to obtain a fused feature;
predicting whether the color image includes a living human face according to the fused feature by using a pre-trained second image classification model, to obtain a second prediction result; and
determining, according to the first prediction result and the second prediction result, a detection result of whether the face corresponding to the target object is a living human face.
2. The method according to claim 1, wherein before the fusing the uncertainty feature with the face depth feature extracted from the color image, the method further comprises:
determining a face region from the color image; and
determining the face depth feature according to the face region.
3. The method according to claim 2, wherein the determining the face depth feature according to the face region comprises:
extracting face features from the face region;
performing depth estimation on the face region to obtain a depth feature map; and
fusing the face features with the depth feature map by using a pre-trained depth fusion network model to obtain the face depth feature.
4. The method according to claim 1, wherein the confidence of the first prediction result is not greater than a preset threshold, and the determining, according to the first prediction result and the second prediction result, the detection result of whether the face corresponding to the target object is a living human face comprises:
judging whether the second prediction result indicates that the color image includes a living human face;
if so, determining that the face corresponding to the target object is a living human face; and
if not, determining that the face corresponding to the target object is not a living human face.
5. The method according to claim 4, wherein the confidence of the first prediction result is greater than the preset threshold, and the determining, according to the first prediction result and the second prediction result, the detection result of whether the face corresponding to the target object is a living human face further comprises:
judging whether the first prediction result indicates that the infrared image includes a living human face;
if so, the face corresponding to the target object is a living human face; and
if not, the face corresponding to the target object is not a living human face.
6. The method according to claim 1, wherein the acquiring the infrared image and the color image acquired for the target object comprises:
receiving the infrared image and the color image sent by a terminal device; and
after the determining, according to the first prediction result and the second prediction result, the detection result of whether the face corresponding to the target object is a living human face, the method further comprises:
sending the detection result to the terminal device.
7. A living body face detection method, applied to a terminal device, the method comprising:
sending an infrared image and a color image to an electronic device, so that the electronic device determines and sends, according to the infrared image and the color image, a detection result of whether a target object is a living human face, wherein the infrared image and the color image are acquired by capturing the target object; and
receiving the detection result sent by the electronic device.
8. A living body face detection apparatus, applied to an electronic device, the apparatus comprising:
an image obtaining module, configured to acquire an infrared image and a color image acquired for a target object;
a first prediction module, configured to predict whether the infrared image includes a living human face by using a pre-trained first image classification model, to obtain a first prediction result and an uncertainty feature;
a feature fusion module, configured to fuse the uncertainty feature with a face depth feature extracted from the color image to obtain a fused feature;
a second prediction module, configured to predict whether the color image includes a living human face according to the fused feature by using a pre-trained second image classification model, to obtain a second prediction result; and
a result determining module, configured to determine, according to the first prediction result and the second prediction result, a detection result of whether the face corresponding to the target object is a living human face.
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, perform the method of any one of claims 1 to 6.
10. A storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 7.
CN202010520464.0A 2020-06-09 2020-06-09 Living body face detection method and device, electronic equipment and storage medium Withdrawn CN111666901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010520464.0A CN111666901A (en) 2020-06-09 2020-06-09 Living body face detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111666901A true CN111666901A (en) 2020-09-15

Family

ID=72386573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010520464.0A Withdrawn CN111666901A (en) 2020-06-09 2020-06-09 Living body face detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111666901A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160117544A1 (en) * 2014-10-22 2016-04-28 Hoyos Labs Ip Ltd. Systems and methods for performing iris identification and verification using mobile devices
CN108280418A (en) * 2017-12-12 2018-07-13 北京深醒科技有限公司 The deception recognition methods of face image and device
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN109558840A (en) * 2018-11-29 2019-04-02 中国科学院重庆绿色智能技术研究院 A kind of biopsy method of Fusion Features
CN110516616A (en) * 2019-08-29 2019-11-29 河南中原大数据研究院有限公司 A kind of double authentication face method for anti-counterfeit based on extensive RGB and near-infrared data set
CN110909693A (en) * 2019-11-27 2020-03-24 深圳市华付信息技术有限公司 3D face living body detection method and device, computer equipment and storage medium
CN111079576A (en) * 2019-11-30 2020-04-28 腾讯科技(深圳)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215180A (en) * 2020-10-20 2021-01-12 腾讯科技(深圳)有限公司 Living body detection method and device
CN112215180B (en) * 2020-10-20 2024-05-07 腾讯科技(深圳)有限公司 Living body detection method and device
WO2023273050A1 (en) * 2021-06-30 2023-01-05 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
WO2023273297A1 (en) * 2021-06-30 2023-01-05 平安科技(深圳)有限公司 Multi-modality-based living body detection method and apparatus, electronic device, and storage medium
CN113536582A (en) * 2021-07-22 2021-10-22 中国有色金属工业昆明勘察设计研究院有限公司 Method for obtaining ultimate corrosion durability of karst cave top plate
CN113569707A (en) * 2021-07-23 2021-10-29 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114202805A (en) * 2021-11-24 2022-03-18 北京百度网讯科技有限公司 Living body detection method, living body detection device, electronic apparatus, and storage medium
CN115512428A (en) * 2022-11-15 2022-12-23 华南理工大学 Human face living body distinguishing method, system, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200915