CN115116146A - Living body detection method, device, equipment and system

Info

Publication number
CN115116146A
Authority
CN
China
Prior art keywords: nose, respiratory, living body, breathing, biological characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210620254.8A
Other languages
Chinese (zh)
Inventor
曹佳炯
丁菁汀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210620254.8A
Publication of CN115116146A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present specification provides a living body detection method, apparatus, device, and system. Biometric images of a target detection object are acquired during biometric identification, the nose region and its key points are identified in those images, and the breathing action and breathing frequency of the target detection object are detected from the identified nose region and key points; whether the target detection object is a living object can then be determined quickly and accurately from the detected breathing action and breathing frequency. Because living body detection is completed by detecting the user's breathing in a natural state, no additional cooperation from the user is needed. The scheme is therefore more widely applicable, improves the accuracy and applicability of living body detection, and further improves the security of biometric identification.

Description

Living body detection method, device, equipment and system
Technical Field
The present disclosure relates to computer technologies, and in particular, to a method, an apparatus, a device, and a system for detecting a living body.
Background
With the development of computer and Internet technologies, face recognition and other biometric technologies are being applied more and more widely, for example in face-scan payment, face-scan login, face-scan attendance, and face-scan identity verification for travel. Biometric identification requires collecting a biometric image and then performing identity authentication against that image.
As face recognition becomes more widespread, the social impact of its potential security risks also grows more serious. One of the main risks threatening face recognition security is the living body attack: face recognition is bypassed with media such as photos, screens, or masks, and the victim's account is stolen.
Therefore, how to devise a living body detection scheme that can prevent living body attacks during biometric identification and improve the security of biometric identification is a technical problem to be solved in the field.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a living body detection method, apparatus, device, and system that improve the accuracy of living body detection and thereby the security of biometric identification.
In one aspect, an embodiment of the present specification provides a living body detection method, the method including:
acquiring a specified number of biometric images of a target detection object during biometric identification;
performing nose region detection on the specified number of biometric images, and determining the nose region in each biometric image and the key points corresponding to the nose region;
performing breathing detection on the nose regions in the specified number of biometric images, and determining the breathing category and breathing features of the target detection object;
performing breathing frequency detection according to the breathing features and the key points corresponding to the nose region, and determining the breathing frequency of the target detection object;
determining whether the target detection object is a living object based on the breathing category and the breathing frequency.
In another aspect, an embodiment of the present specification provides a living body detection apparatus, comprising:
an image acquisition module, configured to acquire a specified number of biometric images of a target detection object during biometric identification;
a nose region detection module, configured to perform nose region detection on the biometric images and determine the nose region in each biometric image and the key points corresponding to the nose region;
a breathing action detection module, configured to perform breathing detection on the nose regions in the specified number of biometric images and determine the breathing category and breathing features of the target detection object;
a breathing frequency detection module, configured to perform breathing frequency detection according to the breathing features and the key points corresponding to the nose region and determine the breathing frequency of the target detection object;
and a living body detection module, configured to determine whether the target detection object is a living object based on the breathing category and the breathing frequency.
In another aspect, an embodiment of the present specification provides a living body detection device, including at least one processor and a memory for storing processor-executable instructions; the instructions, when executed by the processor, implement the living body detection method described above.
In another aspect, an embodiment of the present specification provides a living body detection system, including an image acquisition device and an image processing device, wherein the image acquisition device is configured to acquire biometric images of a target detection object for biometric identification, and the image processing device includes at least one processor and a memory for storing processor-executable instructions, the processor implementing the living body detection method described above when executing the instructions.
The living body detection method, apparatus, device, and system provided by this specification acquire biometric images of a target detection object during biometric identification, identify the nose region and its key points in those images, detect the breathing action and breathing frequency of the target detection object based on the identified nose region and key points, and quickly and accurately determine whether the target detection object is a living object based on the detected breathing action and breathing frequency. Because living body detection is completed by detecting the user's breathing in a natural state, no additional cooperation from the user is needed; the scheme is more widely applicable, improves the accuracy and applicability of living body detection, and further improves the security of biometric identification.
Drawings
To explain the embodiments of the present specification or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some of the embodiments described in this specification; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for detecting a living body according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating the principle of liveness detection in an example scenario of the present disclosure;
FIG. 3 is a schematic block diagram of an embodiment of a living body detection apparatus provided in the present specification;
FIG. 4 is a block diagram of the hardware configuration of a living body detection server in one embodiment of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of this specification, not all of them; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the scope of protection of this specification.
With the continuing spread of biometric technology, its security is receiving more and more attention. The main threat to biometric security is the living body attack, for example: during face recognition, disguising with a photo, a screen, a mask, or the like in order to steal the information of a face recognition user. A biometric system such as a face recognition system usually integrates a living body anti-attack algorithm to perform living body detection. A common living body anti-attack algorithm may simply classify face recognition users with a basic living body detection model; such a method generally has weak security capability and poor interception of attack types absent from the training data. Alternatively, some algorithms require the user to perform a series of actions, such as blinking or nodding, and raise the difficulty of an attack through the completion of those actions. However, such methods suit more private, security-sensitive scenes (e.g., banks) and give a poor experience in busier commercial scenes (e.g., vending machines), so they are difficult to apply there.
Some embodiments of the present specification provide a living body detection method, applied mainly to living body detection in the face recognition process, to prevent living body attacks from leading to theft of user information and the like. The embodiments acquire biometric images of a target detection object, identify the nose region in the acquired images, detect micro-motions of the nose region to determine whether a breathing action exists, and further estimate the breathing frequency. By detecting the breathing action and its frequency, a security capability similar to that of interactive living body detection can be reached without additional cooperation from the user. A minimal sketch of this flow is given below.
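To make the flow concrete, the following is a minimal Python sketch of the four stages. The three callables passed in are hypothetical placeholders for the nose detection, breathing action, and breathing frequency models sketched in the embodiments below, and the 10 to 30 interval (presumably breaths per minute) is the example threshold used in the scenario at the end of this description.

```python
def liveness_check(frames, detect_nose_regions, detect_breathing,
                   estimate_breathing_rate, lo=10.0, hi=30.0):
    """frames: biometric images sampled during face recognition.
    The three callables are placeholders for the models described below."""
    nose_regions, keypoints = detect_nose_regions(frames)       # stage 1
    is_breathing, breath_feat = detect_breathing(nose_regions)  # stage 2
    if not is_breathing:
        return False        # no breathing action: judged as an attack
    rate = estimate_breathing_rate(breath_feat, keypoints)      # stage 3
    return lo <= rate <= hi                                     # stage 4
```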
In addition, in the embodiments of the present specification, the acquisition, storage, use, and processing of the data involved in face recognition and living body detection all comply with the relevant provisions of national laws and regulations.
Fig. 1 is a schematic flow chart of an embodiment of the living body detection method provided in an embodiment of the present specification. Although this specification presents the method steps or apparatus structures shown in the following embodiments or figures, more or fewer steps or modules may be included on the basis of conventional or non-inventive effort. For steps or structures without a logically necessary causal relationship, the execution order of the steps or the module structure of the apparatus is not limited to that shown in the embodiments or figures. When applied in practice to a device, server, or end product, the method or module structure may be executed sequentially or in parallel according to the embodiments or figures (for example, in a parallel-processor or multi-threaded environment, or even in an implementation environment with distributed processing and server clustering).
In a specific embodiment, as shown in fig. 1, the living body detection method may be applied to a server, a computer, a tablet computer, a smart phone, a smart wearable device, an in-vehicle device, a smart home device, or another device capable of image processing, and mainly to biometric devices such as face recognition devices. The method may include the following steps:
step 102, acquiring a specified number of biological characteristic images of the target detection object during biological identification.
In specific implementation, when biometric identification such as face recognition is performed, a camera in the biometric device may collect biometric images of the target detection object being identified; the target detection object is the object undergoing biometric identification. The biometric image is determined by the biometric type and may be image information containing a human face, pupil information, fingerprint information, and so on. It may be a picture or a video, as actual needs dictate; the embodiments of this specification are not specifically limited.
In addition, when acquiring the biometric images of the target detection object, image information of the target detection object within a specified time after the start of biometric identification may be acquired to obtain a specified number of biometric images. These images may be acquired continuously, and from the continuously acquired images the changes of the target detection object's nostrils during biometric identification can be analyzed to identify the breathing action and breathing frequency. For example: when a user performs face recognition through an IoT device, about 5 s of face data may be acquired from the start of recognition, either as 5 s of video data or as 5 s of continuously captured photos, and a specified number of images, e.g., 10, selected from it as the biometric images of the target detection object. High-definition biometric images may be acquired, e.g., with a face-region resolution greater than 400 × 400; the face region used by a typical face recognition algorithm is only about 100 × 100 and carries less information, so collecting high-definition biometric images can improve the accuracy of living body detection and biometric identification.
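As an illustration, a brief sketch of this uniform sampling, assuming an OpenCV video capture; the 10-frame count and the 5 s window are the example values from this paragraph:

```python
import cv2
import numpy as np

def sample_biometric_frames(video_path: str, num_frames: int = 10):
    """Uniformly sample `num_frames` images from a ~5 s capture taken
    after biometric identification starts."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    if len(frames) < num_frames:
        return frames  # capture too short; the caller may retry
    idx = np.linspace(0, len(frames) - 1, num_frames).round().astype(int)
    return [frames[i] for i in idx]
```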
By collecting image information within a fixed time window and selecting a specified number of images as biometric images, sampling is kept uniform and comprehensive, providing a more accurate data basis for living body detection and biometric identification and improving the accuracy of both.
Step 104, performing nose region detection on the specified number of biometric images, and determining the nose region in each biometric image and the key points corresponding to the nose region.
In a specific implementation process, after the biometric images of the target detection object are acquired, nose region detection may be performed on them: the nose region in each biometric image and its corresponding key points are identified, the nose region may be framed and marked, and an image of the nose region obtained for subsequent use. Each biometric image corresponds to one nose region and its key points, and the nose regions are consecutive in the order of their biometric images, laying a data basis for the subsequent identification and detection of the breathing action. A suitable key point detection algorithm may be chosen according to actual needs, or an intelligent learning algorithm may be used to train and construct a nose region detection model, which then detects the nose region in the biometric images of the target detection object; the embodiments of this specification do not limit the specific method.
In some embodiments of the present specification, performing nose region detection on the biometric images and determining the nose region and the key points corresponding to the nose region in each biometric image includes:
setting model parameters of a nose region detection model, the model parameters including the network structure and loss function of the nose region detection model;
acquiring living biometric sample images of living sample objects of different age groups and attack biometric sample images of different categories of attack sample objects;
marking the living and attack biometric sample images according to a preset nose region key point template, labelling the nose region and the key points corresponding to the nose region in each sample image;
taking the living and attack biometric sample images as the input of the nose region detection model and the corresponding nose region labels and key point labels as its output, and training the model until its loss function converges, completing the training of the nose region detection model;
and performing nose region detection on the biometric images with the pre-trained nose region detection model, determining the nose region in each biometric image and the key points corresponding to the nose region.
In a specific implementation process, when identifying the nose region and key points, a nose region detection model may be trained and constructed in advance, and the trained model used to identify the nose region in the biometric images of the target detection object. The nose region detection model may be trained as follows:
First, the model parameters of the nose region detection model are set, such as the network structure and the loss function. In some embodiments of this specification, the network structure may be YoloV3 with an additional key point regressor (two 3×3 convolution layers) attached, so as to detect the nose region of the detection object and the corresponding key points. YoloV3 is version 3 of the You Only Look Once (YOLO) object detection model, which identifies the categories and positions of objects in an image. The loss function may be set according to actual needs and is not specifically limited in the embodiments of this specification.
After the model parameters of the nose region detection model are set, training samples can be collected: living biometric sample images of living sample objects of different ages and sexes, i.e., images of real users during biometric identification, and attack biometric sample images of different categories of attack sample objects (such as photos, screens, and masks). The collected sample images are then marked, labelling the nose region and its corresponding key points as the labels for model training. The sample images are used as the model input and the marked nose regions and key points as the model output, and training proceeds until the loss function converges or the number of training rounds meets the requirement, completing the training of the nose region detection model. When living body detection of a target detection object is needed, its biometric images captured during biometric identification are input in turn into the trained model, which identifies the nose region and key points in each image, laying a data basis for subsequent living body detection. Model training may be performed offline in advance, on a server or another client; the embodiments of this specification are not limited in this respect.
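For illustration, a PyTorch-style sketch of how the key point regressor might be attached; the patent specifies only the two 3×3 convolution layers on top of YoloV3, so the channel width and the pooled linear read-out below are assumptions:

```python
import torch.nn as nn

class NoseKeypointHead(nn.Module):
    """Two 3x3 conv layers on a YoloV3 feature map, regressing the 19
    template key points of the nose region as (x, y) pairs."""
    def __init__(self, in_channels: int = 256, num_keypoints: int = 19):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)          # assumed read-out
        self.fc = nn.Linear(in_channels, num_keypoints * 2)

    def forward(self, feat):  # feat: (B, C, H, W) backbone feature map
        x = self.pool(self.convs(feat)).flatten(1)
        return self.fc(x).view(-1, self.num_keypoints, 2)
```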
In some embodiments of the present specification, when training the nose region detection model, the constraints of the model may include coordinate constraints on the nose region and the key points, as well as a shape constraint based on a three-dimensional face statistical model.
In a specific implementation process, the usual supervision signal for nose region identification and detection contains only the frame coordinates of the nose region and the key point coordinates. The embodiments of this specification additionally add the shape constraint of a three-dimensional face statistical model, the 3D Morphable Model (3DMM): the 3DMM nose region of an average face is used as a supervision signal and the deviation of the corresponding key points after affine transformation is computed, effectively regressing two-dimensional face data toward a three-dimensional face shape. This identifies the nose region and related key points more accurately and improves the identification accuracy of the nose region and key points.
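One way the shape constraint could be computed is sketched below: fit a least-squares affine transform from the predicted key points onto the mean-face 3DMM nose key points and penalize the residual. This reflects our reading of "deviation after affine transformation" and is not the patent's exact formulation:

```python
import numpy as np

def shape_constraint_loss(pred_kpts: np.ndarray,
                          mean_nose_kpts: np.ndarray) -> float:
    """pred_kpts, mean_nose_kpts: (N, 2) matching key points; the mean-face
    3DMM nose key points act as the supervision signal. Returns the mean
    squared residual after the best-fit 2D affine alignment."""
    n = pred_kpts.shape[0]
    A = np.hstack([pred_kpts, np.ones((n, 1))])   # homogeneous coordinates
    T, *_ = np.linalg.lstsq(A, mean_nose_kpts, rcond=None)  # (3, 2) affine
    residual = A @ T - mean_nose_kpts
    return float(np.mean(np.sum(residual ** 2, axis=1)))
```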
In some embodiments of the present specification, the nose region key point template includes a nose region frame and nose region key points, the nose region key points comprising base key points and nostril region key points.
In a specific implementation process, when training the nose region detection model and marking the nose region and key points on collected sample images, a nose region key point template may first be set. The template may comprise a nose region frame, understood as the approximate extent of the nose region, and nose region key points, understood as coordinate points of key parts of the nose. In the embodiments of this specification, nostril region key points are additionally added when setting the nose region key points, e.g., a certain number of key points near each of the two nostrils. Based on these nostril key points, the motion of the nostrils can be identified for the subsequent breathing action detection and breathing frequency calculation.
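For illustration, one possible index layout for the resulting 19-point template (matching the scenario example later: 9 base points plus 5 key points per nostril); the exact ordering is an assumption:

```python
# Hypothetical index layout: 9 base nose key points (e.g., dlib's
# 68-point ids 27-35) followed by 5 key points per nostril
# (top, bottom, left, right, center).
BASE_NOSE = list(range(0, 9))
LEFT_NOSTRIL_RIM, LEFT_NOSTRIL_CENTER = list(range(9, 13)), 13
RIGHT_NOSTRIL_RIM, RIGHT_NOSTRIL_CENTER = list(range(14, 18)), 18
```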
Step 106, performing breathing detection on the nose regions in the specified number of biometric images, and determining the breathing category and breathing features of the target detection object.
In a specific implementation process, after the nose regions of the target detection object's biometric images are determined, breathing detection may be performed on the nose region of each image to determine the breathing category and breathing features of the target detection object, for example by comparing the nose regions of consecutive biometric images and computing their change characteristics. The specific breathing detection algorithm may be chosen according to actual needs and is not specifically limited by the embodiments of this specification; for example, an optical flow algorithm may compute the optical flow between the nose region images of consecutive biometric images to determine the breathing category and breathing features. The breathing category includes two classes, breathing and no breathing; a breathing feature is a vector feature used for breathing category identification, e.g., obtained by processing the optical flow data computed for the nose region. Optical flow is a concept from visual motion detection: it describes the motion of an observed target, surface, or edge relative to an observer, and can be used for motion detection, object segmentation, time-to-collision and expansion calculations, motion-compensated coding, or stereo measurement through object surfaces and edges.
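As one concrete choice (the paragraph above leaves the optical flow algorithm open), a dense Farneback sketch with OpenCV:

```python
import cv2

def nose_optical_flows(nose_crops):
    """nose_crops: aligned single-channel nose images in capture order;
    returns one (H, W, 2) flow field per adjacent pair
    (10 crops -> 9 flows)."""
    flows = []
    for prev, curr in zip(nose_crops, nose_crops[1:]):
        flows.append(cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0))
    return flows
```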
In actual use, an intelligent learning algorithm may be used to train a model on optical flows computed between nose region sample images, with labels indicating whether breathing is present, and the trained model used to detect the breathing action in the nose region of the target detection object. Other breathing action detection methods may also be used; the embodiments of this specification are not specifically limited.
In some embodiments of the present specification, performing breathing detection on the nose regions in the biometric images and determining the breathing category and breathing features of the target detection object includes:
setting model parameters of a breathing action detection model, the model parameters including the network structure and loss function of the breathing action detection model;
collecting nose region samples of living sample objects of different ages and of different categories of attack sample objects, and obtaining the breathing category corresponding to each nose region sample;
aligning the nose region samples, and computing the optical flow between each two adjacent frames of nose region samples with an optical flow algorithm to obtain optical flow image samples;
taking the optical flow image samples as the input of the breathing action detection model and the breathing category and breathing features corresponding to the nose region samples as its output, and training the model until its loss function converges, completing the training of the breathing action detection model;
and performing breathing detection on the nose regions in the specified number of biometric images with the pre-trained breathing action detection model, determining the breathing category and breathing features of the target detection object.
In a specific implementation process, a breathing action detection model may be trained and constructed in advance. When living body detection of the target detection object is needed, its biometric images can be collected, the nose region identified with the method of the above embodiments, and the trained breathing action detection model used to identify whether the target detection object has a breathing action. Model training may be performed offline in advance, on a server or another client; the embodiments of this specification are not limited in this respect.
The breathing action detection model may be trained as follows:
The model parameters of the breathing action detection model are set, such as the network structure and the loss function. In one embodiment of this specification, the model structure may use ShuffleNetV2 (a computationally efficient CNN) as the backbone, and the loss function may be a binary classification loss; other network structures and loss functions may also be chosen according to actual needs and are not specifically limited here.
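A sketch of such a classifier using torchvision's ShuffleNetV2; feeding the 9 two-channel optical flow images as a single 18-channel tensor matches the scenario example later, but adapting the stem and head this way is our assumption about the wiring:

```python
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0

def build_breathing_classifier() -> nn.Module:
    model = shufflenet_v2_x1_0(weights=None)
    # Stem: 18 stacked flow channels in place of 3 RGB channels.
    model.conv1[0] = nn.Conv2d(18, 24, kernel_size=3, stride=2,
                               padding=1, bias=False)
    # Head: two classes, breathing vs. no breathing; the features feeding
    # this layer serve as the "breathing features" described in the text.
    model.fc = nn.Linear(model.fc.in_features, 2)
    return model
```

The classifier would be trained with a standard binary objective such as nn.CrossEntropyLoss over the two classes.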
After the model parameters of the breathing action detection model are set, training samples can be collected: nose region samples of living sample objects of different ages and sexes, and nose region samples of different categories of attack sample objects such as photos, masks, and screens. Each nose region sample is marked with the breathing category of its corresponding object: the breathing category of a nose region from a living sample object is breathing, and that of a nose region from an attack sample object is no breathing. In addition, the nose regions identified by the nose region detection model trained in the above embodiments may serve as training samples of the breathing action detection model: the nose regions output during the training of the nose region detection model may be used directly, or biometric images of living objects of different ages and sexes and of different attack categories may be collected, input into the trained nose region detection model, and the identified nose regions used as training samples of the breathing action detection model.
The sizes and positions of the nose regions marked in different biometric images may differ, so in the embodiments of this specification the collected nose region samples may first be aligned to a uniform template, and optical flow then computed over each aligned sample. A living object or an attack object corresponds to a specified number of biometric images, each with one nose region, so each object corresponds to a consecutive series of nose regions. The nose regions of each living object and each attack object are aligned, arranged in order of acquisition time, and the optical flow computed once per adjacent pair of frames, yielding the optical flow image samples for that object. In the embodiments of this specification, optical flow computed on the identified nose region images yields the corresponding optical flow images, from which whether the target detection object breathes can be identified, providing a data basis for subsequent living body detection.
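A sketch of the alignment step, assuming the detected key points are used to estimate a partial affine transform onto the fixed template (the patent does not specify the alignment method):

```python
import cv2
import numpy as np

def align_nose_crop(crop, kpts, template_kpts, out_size=(64, 64)):
    """Warp one nose crop onto the uniform template before optical flow.
    kpts / template_kpts: (19, 2) arrays; out_size is an assumed
    working resolution."""
    M, _ = cv2.estimateAffinePartial2D(kpts.astype(np.float32),
                                       template_kpts.astype(np.float32))
    return cv2.warpAffine(crop, M, out_size)
```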
The optical flow image samples are taken as the input of the breathing action detection model and the breathing category and breathing features corresponding to the nose regions as its output, and the model is trained until the loss function converges or the number of training rounds meets the requirement, completing the training of the breathing action detection model. The breathing features are the vectors obtained by the model in the step before it outputs the breathing category: after processing an optical flow image sample, the model obtains the corresponding vector features, i.e., the breathing features, on which the classification is then based to determine whether the object corresponding to the optical flow image sample has a breathing action.
When performing living body detection on the target detection object, its biometric images during biometric identification are first collected, the biometric features such as the nose region identified with the method of the above embodiments, and the identified nose region input into the trained breathing action detection model, which identifies whether the target detection object has a breathing action, laying a data basis for subsequent living body detection.
In addition, in some embodiments of the present specification, after determining the breathing category and breathing features of the target detection object, the method further comprises:
if the breathing category of the target detection object is determined to be no breathing, directly determining that the target detection object is not a living object.
In a specific implementation process, if the breathing category of the target detection object is identified as no breathing based on the nose regions in its biometric images, the target detection object can be determined to have no breathing action and thus directly determined not to be a living object; the subsequent breathing frequency calculation is unnecessary, reducing the data processing load and improving living body detection efficiency.
Step 108, performing breathing frequency detection according to the breathing features and the key points corresponding to the nose region, and determining the breathing frequency of the target detection object.
In a specific implementation process, after the breathing features, nose region, and corresponding key points of the target detection object are identified with the methods of the above embodiments, its breathing frequency may be predicted from them. For example: the variation in nostril size can be computed from the nose region key points and breathing features of the target detection object, the breathing action of the nose analyzed, and the breathing frequency of the target detection object then calculated.
In some embodiments of the present specification, performing breathing frequency detection according to the breathing features and the key points corresponding to the nose region and determining the breathing frequency of the target detection object includes:
acquiring biometric sample images and corresponding breathing frequency labels of living sample objects of different ages and of different categories of attack sample objects during biometric identification, the breathing frequency labels of the attack sample objects being 0;
obtaining the key point samples and breathing features corresponding to the nose regions of the biometric sample images, and calculating from the key point samples the maximum nostril diameter, the minimum nostril diameter, the difference between the maximum and minimum nostril diameters, and the key point differences between the nose region key points of adjacent frames;
training and constructing a breathing frequency detection model based on the breathing features, the key point samples corresponding to the nose regions, the maximum nostril diameter, the minimum nostril diameter, the nostril diameter difference, the key point differences, and the breathing frequency labels;
and performing breathing frequency detection according to the breathing features and the key points corresponding to the nose region with the pre-trained breathing frequency detection model, determining the breathing frequency of the target detection object.
In a specific implementation process, a breathing frequency detection model may be trained in advance, and during living body detection the trained model used to predict the breathing frequency of the detection object. The breathing frequency detection model may be trained as follows:
The model parameters of the breathing frequency detection model are set, such as the network structure and the loss function. In one embodiment of this specification, the model structure may be a 5-layer MLP (Multilayer Perceptron, a feedforward artificial neural network), and the loss function may be a regression loss regressing the frequency value; other network structures and loss functions may also be chosen according to actual needs and are not specifically limited here.
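A sketch of the 5-layer MLP in PyTorch; the hidden widths are assumptions, and the regression loss could be, e.g., nn.MSELoss on the frequency value:

```python
import torch.nn as nn

def build_rate_regressor(in_dim: int) -> nn.Module:
    """5-layer MLP regressing a single breathing-frequency value."""
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 32), nn.ReLU(),
        nn.Linear(32, 1),
    )
```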
After the model parameters of the breathing frequency detection model are set, training samples can be collected: biometric sample images, with corresponding breathing frequency labels, of living sample objects of different ages and sexes and of different categories of attack sample objects such as photos, masks, and screens during biometric identification, the breathing frequency labels of the attack sample objects being 0. In some embodiments of the present specification, the breathing frequency labels of the living sample objects are acquired as follows:
selecting living objects of different age groups as living sample objects, each living sample object wearing a heart rate counter;
and acquiring the biometric sample images of each living sample object during biometric identification together with the count from its heart rate counter, and using the heart rate count as the breathing frequency label corresponding to that living sample object.
In a specific implementation process, after the living sample objects are selected, each of them performs biometric identification while its biometric sample images are acquired. At the same time, each living sample object wears a finger-clip heart rate counter on the hand during biometric identification, the heart rate count during identification is collected with the counter, and the collected count is used as the breathing frequency label corresponding to that living sample object. Breathing frequency is difficult to collect directly, so the embodiments of this specification collect the user's heart rate with the heart rate counter and use it as the breathing frequency for training the breathing frequency detection model, laying a data basis for the calculation of breathing frequency.
After the biometric sample images of the different sample objects and the corresponding breathing frequency labels are acquired, the key point samples corresponding to the nose regions of the sample images can be obtained by marking the nose regions and key points, and the corresponding breathing features obtained by optical flow computation on the identified nose regions. In some embodiments of this specification, the trained nose region detection model and breathing action detection model of the above embodiments may supply the training samples of the breathing frequency detection model: the nose region key points output during training of the nose region detection model and the breathing features output during training of the breathing action detection model may be used directly as its training samples. Alternatively, biometric images of living objects of different ages and sexes and of different attack categories may be collected and input into the trained nose region detection model, which identifies the corresponding nose regions and key points; the key points it outputs serve as the key point training samples of the breathing frequency detection model, while the nose regions it outputs are input into the trained breathing action detection model, whose output breathing features for each nose region serve as the breathing feature training samples of the breathing frequency detection model.
After the key point samples and breathing feature samples of the breathing frequency detection model are obtained, the maximum nostril diameter, the minimum nostril diameter, and the difference between them for each biometric sample image, as well as the key point differences between the nose region key points of adjacent frames, can be calculated from the key point samples. The breathing features, key point features, computed nostril diameters, and related data are then taken as the input of the breathing frequency detection model and the corresponding breathing frequency labels as its output, and the model is trained until the loss function converges or the number of training rounds meets the preset requirement, completing the training of the breathing frequency detection model.
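For illustration, one way to assemble those inputs, using the nostril index layout sketched earlier; treating a nostril "diameter" as twice the mean distance from its rim points to its center point is our reading, not a formula fixed by the patent:

```python
import numpy as np

def rate_features(kpts_seq: np.ndarray, breath_feat: np.ndarray) -> np.ndarray:
    """kpts_seq: (10, 19, 2) key points over the sampled frames;
    breath_feat: breathing-feature vector from the action model."""
    def nostril_diameter(k):
        left = 2.0 * np.linalg.norm(k[9:13] - k[13], axis=1).mean()
        right = 2.0 * np.linalg.norm(k[14:18] - k[18], axis=1).mean()
        return (left + right) / 2.0

    diam = np.array([nostril_diameter(k) for k in kpts_seq])
    kp_diffs = np.diff(kpts_seq, axis=0)      # 9 adjacent-frame diff sets
    return np.concatenate([
        kpts_seq.reshape(-1),
        kp_diffs.reshape(-1),
        np.array([diam.max(), diam.min(), diam.max() - diam.min()]),
        breath_feat,
    ])
```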
The trained breathing frequency detection model can then predict the breathing frequency of a target detection object from its breathing features and nose region key points. By training the nose region detection model, the breathing action detection model, and the breathing frequency detection model with intelligent learning algorithms, the nose region, breathing action, and breathing frequency of the detection object are detected quickly and accurately during living body detection, laying an accurate data basis for it.
Step 110, determining whether the target detection object is a living object based on the breathing category and the breathing frequency.
In a specific implementation process, after the breathing category and breathing frequency of the target detection object are determined, whether it is a living object can be further determined; a living object identification rule may be preset and the identification performed based on that rule. For example: if the breathing category is no breathing, the object is determined to be a living body attack. In some embodiments of the present specification, determining whether the target detection object is a living object based on the breathing category and the breathing frequency includes:
and if the breathing type is breathing and the breathing frequency is in a preset frequency interval, determining that the target detection object is a living object.
In a specific implementation process, the preset frequency interval may be set according to how the breathing frequency of different users varies during biometric identification; its specific value is not limited by the embodiments of this specification. If the identified breathing category of the target detection object is breathing and the breathing frequency lies within the preset frequency interval, the target detection object can be determined to be a living object. Whether the detection object is a living body attack can thus be judged quickly from the breathing category and the preset frequency interval.
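The resulting decision rule can be stated directly; the 10 to 30 bounds (presumably breaths per minute) are the example interval from the scenario below, and a deployment may preset a different interval:

```python
def is_live(breathing: bool, rate: float,
            lo: float = 10.0, hi: float = 30.0) -> bool:
    """Live only if a breathing action was detected AND the estimated
    breathing frequency falls within the preset interval."""
    return breathing and lo <= rate <= hi
```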
The living body detection method provided in the embodiments of the present specification acquires biometric images of a target detection object during biometric identification, identifies the nose region and its key points in those images, detects the breathing action and breathing frequency of the target detection object based on the identified nose region and key points, and quickly and accurately determines whether the target detection object is a living object based on the detected breathing action and breathing frequency. Because living body detection is completed by detecting the user's breathing in a natural state, no additional cooperation from the user is needed; the scheme is more widely applicable, improves the accuracy and applicability of living body detection, and further improves the security of biometric identification.
Fig. 2 is a schematic diagram of the principle of living body detection during face recognition in a scenario example of this specification. As shown in fig. 2, in some scenario examples living body detection mainly comprises 4 steps:
1. Nose region detection and key point localization: after the user starts face recognition, detect the nose region, which is strongly related to breathing motion, and regress the nose region key points according to a predefined key point structure;
2. Multi-frame breathing action detection: detect the user's breathing action from the nose regions and corresponding key points of multiple frames (about 5 s);
3. Breathing frequency estimation: estimate the breathing frequency from the key points of the first step and the multi-frame features of the second step;
4. Living body judgment: judge living body or attack according to the breathing action identification result and the breathing frequency estimate.
The specific process is as follows:
1) Nose region detection and key point localization:
Defining the nose region key points: to the 9 points defined by dlib, add 10 key points for the two nostril regions (5 per nostril: top, bottom, left, right, and center), and take the resulting 19 key points as the template;
Nose region detection & key point regression model training based on 3DMM supervision (done in advance on the server):
Network structure: YoloV3 with an additional key point regressor (two 3×3 convolution layers);
Input and output: the input is a single face recognition image of the user; the output is the user's nose region in the image and the corresponding key point positions;
Supervision signals: the supervision signals of the general method are the frame coordinates of the nose region and the key point coordinates; the 3DMM shape supervision is additionally added, using the 3DMM nose region of an average face as a supervision signal and computing the deviation of the corresponding key points after affine transformation;
Training method: train with the above network structure and loss function until the model converges;
Nose region detection and key point regression: continuously acquire images for 5 s after the user starts face recognition; uniformly sample 10 frames from the 5 s of images; perform nose region detection and key point regression on the 10 frames of data to obtain 10 nose regions and corresponding key points.
2) Multi-frame breathing action detection:
Data preprocessing: align the detected nose regions to a uniform template; compute optical flow over the 10 aligned nose regions, once per adjacent pair, yielding 9 optical flow images;
Breathing action detection model training (done in advance on the cloud server):
Model structure: ShuffleNetV2 as the backbone;
Model input and output: the input is the 9 optical flow images, 18 channels in total; the output is whether a breathing action is present;
Loss function: a binary classification loss function;
Training method: train with the above model structure and loss function until the model converges;
Breathing action detection: after nose region detection is finished, perform optical flow computation and breathing action classification, obtaining whether breathing is present and the corresponding breathing features (the features of the layer before classification);
3) Breathing frequency estimation:
Perform frequency estimation based on the key points of the first step and the breathing features of the second step;
Key point feature extraction: for the 10 frames of key point information, compute the maximum and minimum nostril diameters of the nose region and their difference; in addition, compute the key point differences between adjacent frames, obtaining 9 difference key point sets;
Breathing frequency estimation model training (done in advance on the cloud):
Model structure: a 5-layer MLP;
Loss function: a regression loss function regressing the frequency value;
Input and output: the inputs are the key points, the key point differences, the maximum and minimum diameters and their difference, and the breathing features of the previous step; the output is the estimated breathing frequency;
Model training: train with the above model structure and loss function until the model converges;
Breathing frequency estimation: after breathing action detection is finished, compute the features with the above method and input them into the trained model to obtain the breathing frequency;
4) Living body judgment: the object is judged to be a living body when both of the following conditions are met, otherwise it is judged to be an attack: the breathing classification result is true; and the estimated breathing frequency is between 10 and 30.
In the embodiments of this specification, the breathing behavior is detected from the optical flow of the nose, the breathing frequency is estimated from the breathing features and the nose key points, and living body detection of the detection object is performed from the breathing action and breathing frequency. A security capability similar to that of interactive living body detection is reached without additional cooperation from the user, improving the accuracy of living body detection and the security of biometric identification.
The method embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. Relevant details can be found in the partial descriptions of the method embodiments.
Based on the living body detection method described above, one or more embodiments of this specification further provide a system for living body detection. The system may include systems (including distributed systems), software (applications), modules, components, servers, clients, etc. that use the methods described in the embodiments of this specification, combined with any necessary hardware. Based on the same innovative concept, the embodiments of this specification provide an apparatus as described in the following embodiments. Since the apparatus solves the problem in a manner similar to the method, the specific implementation of the apparatus may refer to the implementation of the foregoing method, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware or in a combination of software and hardware is also possible and contemplated.
Specifically, fig. 3 is a schematic block diagram of an embodiment of a living body detection apparatus provided in the present specification, which can be applied to a biometric device such as a face-scanning device or a face-scanning payment device. As shown in fig. 3, the living body detection apparatus provided in the present specification may include:
an image acquisition module 31, configured to acquire a specified number of biometric images of a target detection object during biometric identification;
a nose region detection module 32, configured to perform nose region detection on the biometric images, and determine a nose region in each biometric image and a key point corresponding to the nose region;
the respiratory action detection module 33 is configured to perform respiratory detection on the nose regions in the specified number of biometric images, and determine the respiratory type and respiratory characteristics of the target detection object;
the respiratory frequency detection module 34 is configured to perform respiratory frequency detection according to the respiratory characteristics and the key points corresponding to the nose region, and determine the respiratory frequency of the target detection object;
a living body detection module 35, configured to determine whether the target detection object is a living body object based on the breathing category and the breathing frequency.
The living body detection apparatus provided by the embodiments of the present specification determines whether a breathing action exists by detecting subtle movements in the nose area, and further estimates the breathing frequency. By detecting the breathing action and its frequency, a security capability similar to that of interactive liveness detection can be achieved without additional cooperation from the user.
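As a sketch of how the five modules could be wired together, the following hypothetical Python class mirrors the structure of fig. 3; the module interfaces, return types, and the reading of the 10-30 interval are illustrative assumptions rather than the patented implementation:

```python
class LivenessPipeline:
    # Hypothetical composition of modules 32-35; module 31 supplies `images`.
    def __init__(self, nose_detector, breath_classifier, rate_estimator):
        self.nose_detector = nose_detector          # module 32
        self.breath_classifier = breath_classifier  # module 33
        self.rate_estimator = rate_estimator        # module 34

    def detect(self, images):
        regions, keypoints = self.nose_detector(images)
        category, feats = self.breath_classifier(regions)
        rate = self.rate_estimator(feats, keypoints)
        # Module 35: final living body decision.
        return category == "breathing" and 10.0 <= rate <= 30.0
```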
It should be noted that the above-mentioned apparatus may also include other embodiments according to the description of the corresponding method embodiment. The specific implementation manner may refer to the description of the above corresponding method embodiment, and is not described in detail herein.
An embodiment of the present specification further provides a living body detection apparatus, including: at least one processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements the living body detection method of the above embodiments, for example:
acquiring a specified number of biometric images of a target detection object during biometric identification;
performing nose region detection on the biometric images, and determining the nose region in each biometric image and the key points corresponding to the nose region;
performing respiration detection on the nose regions in the specified number of biometric images, and determining the breathing category and breathing characteristics of the target detection object;
performing breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region, and determining the breathing frequency of the target detection object;
determining whether the target detection object is a living object based on the breathing category and the breathing frequency.
An embodiment of the present specification further provides a living body detection system, including: an image acquisition device and an image processing device. The image acquisition device is used to acquire a biometric image of a target detection object for biometric identification; the image processing device includes at least one processor and a memory for storing processor-executable instructions, and when the processor executes the instructions, the living body detection method of the above embodiments is performed on the biometric image acquired by the image acquisition device, for example:
acquiring a specified number of biometric images of a target detection object during biometric identification;
performing nose region detection on the specified number of biometric images, and determining the nose region in each biometric image and the key points corresponding to the nose region;
performing respiration detection on the nose regions in the specified number of biometric images, and determining the breathing category and breathing characteristics of the target detection object;
performing breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region, and determining the breathing frequency of the target detection object;
determining whether the target detection object is a living object based on the breathing category and the breathing frequency.
It should be noted that the above description of the apparatus and system according to the method embodiments may also include other embodiments. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.
The living body detection device provided by the present specification can also be applied to various data analysis and processing systems. The system, server, terminal, or device may be a single server, or may include a server cluster, a system (including a distributed system), software (applications), actual operating devices, logic gate devices, or quantum computers using one or more of the methods or one or more embodiments of the present specification, in combination with the necessary terminal devices implementing hardware. Such a system may comprise at least one processor and a memory storing computer-executable instructions that, when executed by the processor, implement the steps of the method of any one or more of the embodiments described above.
The method embodiments provided by the embodiments of the present specification can be executed in a mobile terminal, a computer terminal, a server, or a similar computing device. Taking execution on a server as an example, fig. 4 is a block diagram of the hardware structure of a living body detection server in one embodiment of the present specification; the computer terminal may be the living body detection server or the living body detection device in the above embodiments. The server 10 shown in fig. 4 may include one or more processors 100 (only one is shown; the processor 100 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a non-volatile memory 200 for storing data, and a transmission module 300 for communication functions. It will be understood by those skilled in the art that the structure shown in fig. 4 is only an illustration and does not limit the structure of the electronic device. For example, the server 10 may include more or fewer components than shown in fig. 4, may include other processing hardware such as a database, a multi-level cache, or a GPU, or may have a configuration different from that shown in fig. 4.
The non-volatile memory 200 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the living body detection method in the embodiments of the present specification; the processor 100 executes various functional applications and data processing by running the software programs and modules stored in the non-volatile memory 200. The memory may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the non-volatile memory 200 may further include memory located remotely from the processor 100, which may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission module 300 is used to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission module 300 includes a network adapter (NIC) that can be connected to other network devices through a base station so as to communicate with the internet. In another example, the transmission module 300 may be a radio frequency (RF) module used to communicate with the internet wirelessly.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The method or apparatus provided in this specification and described in the foregoing embodiments may implement service logic through a computer program recorded on a storage medium; the storage medium can be read and executed by a computer to achieve the effects of the solutions described in the embodiments of this specification, for example:
acquiring a specified number of biometric images of a target detection object during biometric identification;
performing nose region detection on the specified number of biometric images, and determining the nose region in each biometric image and the key points corresponding to the nose region;
performing respiration detection on the nose regions in the specified number of biometric images, and determining the breathing category and breathing characteristics of the target detection object;
performing breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region, and determining the breathing frequency of the target detection object;
determining whether the target detection object is a living object based on the breathing category and the breathing frequency.
The living body detection method or apparatus provided in the embodiments of the present specification may be implemented by a processor executing corresponding program instructions, for example in C++ on a Windows operating system on a PC, on a Linux system, in the Android or iOS programming languages on a smart terminal, or with processing logic based on a quantum computer.
It should be noted that descriptions of the apparatus, the computer storage medium, and the system described above according to the related method embodiments may also include other embodiments, and specific implementations may refer to descriptions of corresponding method embodiments, which are not described in detail herein.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to only the partial description of the method embodiment.
The embodiments of the present specification need not comply exactly with industry communication standards, standard computer resource data updating and data storage rules, or the descriptions of one or more embodiments of the present specification. Implementations modified slightly from certain industry standards, or from the described custom modes or examples, can also achieve the same, equivalent, similar, or other expected results. Embodiments applying such modified or varied data acquisition, storage, judgment, and processing modes may still fall within the scope of alternative implementations of the embodiments of this specification.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present description provide method operational steps as described in the embodiments or flowcharts, more or fewer operational steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When the device or the end product in practice executes, it can execute sequentially or in parallel according to the method shown in the embodiment or the figures (for example, in the environment of parallel processors or multi-thread processing, even in the environment of distributed resource data update). The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. The terms first, second, etc. are used to denote names, but not any particular order.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, when implementing one or more of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of multiple sub-modules or sub-units, etc. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, graphene storage, or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, and the relevant points can be referred to only part of the description of the method embodiments. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is merely exemplary of one or more embodiments of the present disclosure and is not intended to limit the scope of one or more embodiments of the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present specification should be included in the scope of the claims.

Claims (12)

1. A living body detection method, the method comprising:
acquiring a specified number of biological characteristic images of a target detection object during biological identification;
performing nose region detection on the specified number of biological characteristic images, and determining the nose region in each biological characteristic image and the key points corresponding to the nose region;
performing respiration detection on the nose regions in the specified number of biological characteristic images, and determining the breathing category and breathing characteristics of the target detection object;
performing breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region, and determining the breathing frequency of the target detection object;
determining whether the target detection object is a living object based on the breathing category and the breathing frequency.
2. The method of claim 1, wherein after determining the breathing category and breathing characteristics of the target detection object, the method further comprises:
if the breathing category of the target detection object is determined to be no breathing, directly determining that the target detection object is not a living object.
3. The method of claim 1, wherein performing nose region detection on the biological characteristic images and determining the nose region in each biological characteristic image and the key points corresponding to the nose region comprises:
setting model parameters of a nose region detection model, the model parameters comprising a network structure and a loss function of the nose region detection model;
acquiring living body biological characteristic sample images of living body sample objects of different age groups and attack biological characteristic sample images of different categories of attack sample objects;
labeling the living body biological characteristic sample images and the attack biological characteristic sample images according to a preset nose region key point template, obtaining nose region labels and corresponding key point labels in the living body biological characteristic sample images and the attack biological characteristic sample images;
taking the living body biological characteristic sample images and the attack biological characteristic sample images as the input of the nose region detection model and the corresponding nose region labels and key point labels as its output, and performing model training until the loss function of the nose region detection model converges, thereby completing the training of the nose region detection model;
performing nose region detection on the specified number of biological characteristic images by using the pre-trained nose region detection model, and determining the nose region in each biological characteristic image and the key points corresponding to the nose region.
4. The method of claim 3, wherein the constraints of the nose region detection model comprise:
coordinate constraints on the nose region and its key points, and shape constraints imposed by a three-dimensional face statistical algorithm.
5. The method of claim 3, wherein in the nose region key point template, the nose region key points comprise base key points corresponding to the nose region frame and nostril region key points.
6. The method of claim 1, wherein performing respiration detection on the nose regions in the biological characteristic images and determining the breathing category and breathing characteristics of the target detection object comprises:
setting model parameters of a respiratory motion detection model, the model parameters comprising a network structure and a loss function of the respiratory motion detection model;
collecting nose region samples of living body sample objects of different age groups and of different categories of attack sample objects, and obtaining the breathing category corresponding to each nose region sample;
aligning the nose region samples, and calculating the optical flow between each two adjacent frames of nose region samples by using an optical flow algorithm to obtain optical flow image samples;
taking the optical flow image samples as the input of the respiratory motion detection model and the breathing categories and breathing characteristics corresponding to the nose region samples as its output, and performing model training until the loss function of the respiratory motion detection model converges, thereby completing the training of the respiratory motion detection model;
performing respiration detection on the nose regions in the specified number of biological characteristic images by using the pre-trained respiratory motion detection model, and determining the breathing category and breathing characteristics of the target detection object.
7. The method of claim 1, wherein performing breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region and determining the breathing frequency of the target detection object comprises:
acquiring biological characteristic sample images, with corresponding breathing frequency labels, of living body sample objects of different age groups and of different categories of attack sample objects during biological identification, the breathing frequency labels of the attack sample objects being 0;
obtaining key point samples and breathing characteristics corresponding to the nose regions of the biological characteristic sample images, and calculating, from the key point samples, the maximum nostril diameter, the minimum nostril diameter, the nostril diameter difference between them, and the key point differences between the nose region key points of adjacent frames;
training a breathing frequency detection model based on the breathing characteristics, the key point samples corresponding to the nose regions, the maximum nostril diameter, the minimum nostril diameter, the nostril diameter differences, the key point differences, and the breathing frequency labels;
performing breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region by using the pre-trained breathing frequency detection model, and determining the breathing frequency of the target detection object.
8. The method of claim 7, wherein acquiring the breathing frequency label of a living body sample object comprises:
selecting living body objects of different age groups as living body sample objects, each living body sample object wearing a heart rate counter;
acquiring a biological characteristic sample image of the living body sample object during biological identification and the corresponding count of the living body sample object's heart rate counter, and taking that count as the breathing frequency label of the living body sample object.
9. The method of claim 1, wherein determining whether the target detection object is a living object based on the breathing category and the breathing frequency comprises:
if the breathing category is breathing and the breathing frequency lies within a preset frequency interval, determining that the target detection object is a living object.
10. A living body detection apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a specified number of biological characteristic images of a target detection object during biological identification;
a nose region detection module, configured to perform nose region detection on the biological characteristic images and determine the nose region in each biological characteristic image and the key points corresponding to the nose region;
a breathing action detection module, configured to perform respiration detection on the nose regions in the specified number of biological characteristic images and determine the breathing category and breathing characteristics of the target detection object;
a breathing frequency detection module, configured to perform breathing frequency detection according to the breathing characteristics and the key points corresponding to the nose region and determine the breathing frequency of the target detection object;
a living body detection module, configured to determine whether the target detection object is a living object based on the breathing category and the breathing frequency.
11. A living body detection apparatus, comprising: at least one processor and a memory for storing processor-executable instructions, wherein the processor implements the method of any one of claims 1-9 when executing the instructions.
12. A living body detection system, comprising: an image acquisition device for acquiring a biological characteristic image of a target detection object for biological identification, and an image processing device comprising at least one processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the method of any one of claims 1-9 to perform living body detection on the biological characteristic image acquired by the image acquisition device.
CN202210620254.8A 2022-06-02 2022-06-02 Living body detection method, device, equipment and system Pending CN115116146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210620254.8A CN115116146A (en) 2022-06-02 2022-06-02 Living body detection method, device, equipment and system

Publications (1)

Publication Number Publication Date
CN115116146A true CN115116146A (en) 2022-09-27

Family

ID=83326449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210620254.8A Pending CN115116146A (en) 2022-06-02 2022-06-02 Living body detection method, device, equipment and system

Country Status (1)

Country Link
CN (1) CN115116146A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013256179A1 (en) * 2012-05-02 2014-11-27 Aliphcom Physiological characteristic detection based on reflected components of light
US20200229775A1 (en) * 2016-09-06 2020-07-23 Photorithm, Inc. Generating a breathing alert
CN110236547A (en) * 2018-03-09 2019-09-17 浙江清华柔性电子技术研究院 The detection method of respiratory rate and the detection device detected for respiratory rate
CN109446981A (en) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 A kind of face's In vivo detection, identity identifying method and device
US20200302609A1 (en) * 2019-03-19 2020-09-24 Arizona Board Of Regents On Behalf Of Arizona State University Detecting abnormalities in vital signs of subjects of videos
CN111860079A (en) * 2019-04-30 2020-10-30 北京嘀嘀无限科技发展有限公司 Living body image detection method and device and electronic equipment
CN111178233A (en) * 2019-12-26 2020-05-19 北京天元创新科技有限公司 Identity authentication method and device based on living body authentication
CN111110211A (en) * 2020-01-08 2020-05-08 中国农业大学 Sow respiratory frequency and heart rate detection device and method
CN113255400A (en) * 2020-02-10 2021-08-13 深圳市光鉴科技有限公司 Training and recognition method, system, equipment and medium of living body face recognition model
CN111839519A (en) * 2020-05-26 2020-10-30 合肥工业大学 Non-contact respiratory frequency monitoring method and system
US20220031191A1 (en) * 2020-07-30 2022-02-03 National Yunlin University Of Science And Technology Contactless breathing detection method and system thereof
CN113100748A (en) * 2021-03-30 2021-07-13 联想(北京)有限公司 Respiratory frequency determination method and device
CN113326781A (en) * 2021-05-31 2021-08-31 合肥工业大学 Non-contact anxiety recognition method and device based on face video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈鑫强; 陈兴; 黄志成; 苏鹭梅: "Design and implementation of non-contact respiratory frequency detection technology", Journal of Xiamen University of Technology, no. 01, 28 February 2020 (2020-02-28) *

Similar Documents

Publication Publication Date Title
Yuan et al. Fingerprint liveness detection using an improved CNN with image scale equalization
TWI687879B (en) Server, client, user verification method and system
WO2021139324A1 (en) Image recognition method and apparatus, computer-readable storage medium and electronic device
CN111191616A (en) Face shielding detection method, device, equipment and storage medium
CN109766755B (en) Face recognition method and related product
CN109766785B (en) Living body detection method and device for human face
CN108197592B (en) Information acquisition method and device
CN102629320B (en) Ordinal measurement statistical description face recognition method based on feature level
CN108182412A (en) For the method and device of detection image type
CN108229375B (en) Method and device for detecting face image
CN108875509A (en) Biopsy method, device and system and storage medium
CN116311400A (en) Palm print image processing method, electronic device and storage medium
CN113298158A (en) Data detection method, device, equipment and storage medium
CN114140880A (en) Gait recognition method and device
CN111738199A (en) Image information verification method, image information verification device, image information verification computing device and medium
CN115439927A (en) Gait monitoring method, device, equipment and storage medium based on robot
CN111783677B (en) Face recognition method, device, server and computer readable medium
CN111626212B (en) Method and device for identifying object in picture, storage medium and electronic device
CN112633222A (en) Gait recognition method, device, equipment and medium based on confrontation network
CN113706550A (en) Image scene recognition and model training method and device and computer equipment
CN115035608A (en) Living body detection method, device, equipment and system
US20230245495A1 (en) Face recognition systems data collection process
CN115116146A (en) Living body detection method, device, equipment and system
CN115797451A (en) Acupuncture point identification method, device and equipment and readable storage medium
CN110688878A (en) Living body identification detection method, living body identification detection device, living body identification detection medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination