CN113505722A - In-vivo detection method, system and device based on multi-scale feature fusion - Google Patents

In-vivo detection method, system and device based on multi-scale feature fusion

Info

Publication number
CN113505722A
CN113505722A
Authority
CN
China
Prior art keywords
face image
image
feature
features
real
Prior art date
Legal status
Granted
Application number
CN202110835583.XA
Other languages
Chinese (zh)
Other versions
CN113505722B (en)
Inventor
***
孙亚
曾莹
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date: 2021-07-23
Filing date: 2021-07-23
Publication date: 2021-10-15
2021-07-23: Application filed by Sun Yat Sen University
2021-07-23: Priority to CN202110835583.XA
2021-10-15: Publication of CN113505722A
2024-01-02: Application granted
2024-01-02: Publication of CN113505722B
Current legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a living body detection method, system and device based on multi-scale feature fusion, comprising the following steps: acquiring a training image, performing face key point detection on the training image, and cropping it to obtain a cropped training image; performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features; reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features; constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary; and detecting an image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result. The method of the invention improves face living body detection performance, and the living body detection method, system and device based on multi-scale feature fusion can be widely applied in the field of computer vision.

Description

In-vivo detection method, system and device based on multi-scale feature fusion
Technical Field
The invention relates to the field of computer vision, in particular to a living body detection method, a living body detection system and a living body detection device based on multi-scale feature fusion.
Background
As one of the representative technologies of biometric recognition systems, face recognition has gradually matured and been commercialized, and is widely used in fields such as identity authentication and criminal investigation. However, this also raises a series of security issues that cannot be ignored: an attacker can fool a face recognition system with inexpensive photos or videos. Existing face living body detection techniques suffer from low user friendliness and high sensitivity to illumination intensity, among other problems, and are unstable in complex, changing environments; variations in illumination, distance, expression, pose, attack type, camera resolution and the like greatly degrade detection performance.
Disclosure of Invention
In order to solve the above problems, the present invention provides a living body detection method, system and device based on multi-scale feature fusion to improve face living body detection performance.
The first technical scheme adopted by the invention is as follows: a living body detection method based on multi-scale feature fusion, comprising the following steps:
S1, acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image;
S2, performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features;
S3, reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features;
S4, constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary;
and S5, detecting the image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result.
Further, the step of acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image specifically includes:
S11, acquiring a real face image for training and the corresponding attack face image;
S12, performing face key point detection on the real face image and the corresponding attack face image based on a multitask convolutional neural network to obtain the face position;
and S13, cropping the real face image and the corresponding attack face image according to the face position to obtain the cropped training image.
Further, the feature extraction network comprises a neural network and a multi-scale feature fusion module, wherein the neural network is composed of 8 cascaded convolutional layers. The step of performing feature extraction on the cropped training image based on the feature extraction network to obtain real face image features and attack face image features specifically includes:
extracting features from the cropped training images with each of the convolutional layers to obtain the per-layer features of the real face image and the attack face image;
and fusing the per-layer features of the real face image and the attack face image based on the multi-scale feature fusion module to obtain the real face image features and the attack face image features.
Further, the generative adversarial network includes a feature generator and a feature discriminator, and the step of reconstructing the real face image features based on the generative adversarial network to obtain reconstructed real face image features specifically includes:
S31, generating a plurality of reconstructed features from the real face image features with the feature generator;
S32, discriminating the reconstructed features with the feature discriminator;
S33, when a reconstructed feature is judged to be fake, adjusting the parameters of the feature generator and the feature discriminator;
and S34, looping steps S31-S33 until the generated reconstructed features are judged to be real face image features.
Further, the parameters of the feature generator and the feature discriminator are adjusted based on a loss function given by the following formula:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

In the above formula, D denotes the feature discriminator, G denotes the feature generator, D(x) denotes the probability that x is a real picture, $\mathbb{E}_{x \sim p_{data}(x)}$ denotes the expectation over the real data distribution $p_{data}(x)$, and $\mathbb{E}_{z \sim p_z(z)}$ denotes the expectation over the input noise distribution $p_z(z)$.
Further, the expression of the triplet loss function is as follows:

$$L_{triplet} = \sum_{i}^{N} \left[ \left\| f(x_i^a) - f(x_i^p) \right\|_2^2 - \left\| f(x_i^a) - f(x_i^n) \right\|_2^2 + \alpha \right]_+$$

In the above equation, the sum runs over triplets drawn from the input face picture data set, $\|\cdot\|_2^2$ denotes the squared Euclidean distance, $x_i^a$ denotes an anchor in the data set, $x_i^p$ denotes the positive example corresponding to the anchor, $x_i^n$ denotes the negative example corresponding to the anchor, and the margin $\alpha$ is the minimum required separation between the anchor-positive distance and the anchor-negative distance.
Further, before the step of detecting the image to be detected according to the classification boundary using the fully connected layer as the classifier, the method also includes performing image alignment and image cropping on the image to be detected.
The second technical scheme adopted by the invention is as follows: a living body detection system based on multi-scale feature fusion, comprising the following modules:
a preprocessing module for acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image;
a feature extraction module for performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features;
a reconstruction module for reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features;
a constraint module for constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary;
and a detection module for detecting the image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result.
The third technical scheme adopted by the invention is as follows: a living body detection device based on multi-scale feature fusion comprises:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the living body detection method based on multi-scale feature fusion described above.
The method has the following beneficial effects: the generative adversarial network causes the domain information in the face image to be ignored, so that the extracted features are markedly robust to environmental changes such as image resolution, illumination and pose; meanwhile, the feature extraction network extracts features along the spatial dimension that are more conducive to classification, so that features with greater discriminability and flexibility are obtained with the same number of parameters, further improving detection performance.
Drawings
FIG. 1 is a flow chart illustrating the steps of a method for in vivo detection based on multi-scale feature fusion according to the present invention;
FIG. 2 is a schematic diagram of a living body detection process based on multi-scale feature fusion according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration; the order between the steps is not limited, and the execution order of the steps in the embodiments can be adapted according to the understanding of those skilled in the art.
Referring to fig. 1 and 2, the invention provides a living body detection method based on multi-scale feature fusion, comprising the following steps:
S1, acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image;
S2, performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features;
specifically, to ensure that features are extracted in a consistent manner, the feature extractors for the real face and attack face images share network weight parameters;
S3, reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features;
S4, constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary;
specifically, the triplet loss is used to constrain the image features and to separate real faces from attack faces, thereby seeking a better classification boundary;
and S5, detecting the image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result.
Further, as a preferred embodiment of the method, the step of acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image specifically includes:
S11, acquiring a real face image for training and the corresponding attack face image;
S12, performing face key point detection on the real face image and the corresponding attack face image based on a multitask convolutional neural network to obtain the face position;
and S13, cropping the real face image and the corresponding attack face image according to the face position to obtain the cropped training image.
Specifically, the cropped size is 256 × 256 × 3. The cropped image is then randomly divided into image patches that serve as the network input; the local patches of the face image are fed in random order, discarding the spatial ordering of the pixels of the face image, so that the extracted features generalize better across various attack types and external conditions. A minimal preprocessing sketch is given below.
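As an illustration only, the following is a minimal sketch of this preprocessing step in Python/PyTorch. It assumes the third-party facenet-pytorch package for MTCNN face detection; the patch count and size and all helper names are illustrative assumptions, not values specified by the patent.

```python
# Hedged sketch: MTCNN-based face cropping followed by random patch
# sampling. facenet-pytorch is an assumed dependency; patch parameters
# are illustrative, not taken from the patent.
import torch
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(image_size=256, margin=20, post_process=False)  # 256x256x3 crops

def crop_face(path: str) -> torch.Tensor:
    """Detect the face in an image file and return a 3x256x256 crop."""
    img = Image.open(path).convert("RGB")
    face = mtcnn(img)  # (3, 256, 256) tensor, or None if no face found
    if face is None:
        raise ValueError(f"no face detected in {path}")
    return face / 255.0  # scale raw pixel values to [0, 1]

def random_patches(face: torch.Tensor, n: int = 8, size: int = 64) -> torch.Tensor:
    """Sample n local patches at random positions, ignoring spatial order."""
    _, h, w = face.shape
    ys = torch.randint(0, h - size + 1, (n,))
    xs = torch.randint(0, w - size + 1, (n,))
    return torch.stack([face[:, y:y + size, x:x + size] for y, x in zip(ys, xs)])
```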
As a further preferred embodiment of the method, the feature extraction network includes a neural network and a multi-scale feature fusion module, wherein the neural network is composed of 8 cascaded convolutional layers. The step of performing feature extraction on the cropped training image based on the feature extraction network to obtain real face image features and attack face image features specifically includes:
S21, extracting features from the cropped training images with each of the convolutional layers to obtain the per-layer features of the real face image and the attack face image;
and S22, fusing the per-layer features of the real face image and the attack face image based on the multi-scale feature fusion module to obtain the real face image features and the attack face image features.
In particular, the multi-scale feature fusion module fuses features from different convolutional layers along the spatial dimension so as to retain as much effective information as possible. Because features from higher layers carry deeper semantic information while features from lower layers contain finer-grained detail, the features of different layers must be weighted differently, which is done with a spatial attention module. To reduce the number of parameters while preserving the dimensionality of the feature maps, the module applies downsampling and 1 × 1 convolution operations to the feature maps and finally extracts the effective information, as sketched below.
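The sketch below is one plausible PyTorch realization of such a fusion module; the tapped channel counts, fused channel count, output size and class name are assumptions for illustration, not values given in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Fuse feature maps from several conv layers using spatial attention.

    in_channels lists the channel counts of the tapped backbone layers
    (assumed values); out_size is the common spatial size after fusion.
    """
    def __init__(self, in_channels=(64, 128, 256), fused_channels=128, out_size=16):
        super().__init__()
        self.out_size = out_size
        # 1x1 convolutions project every scale to the same channel count
        # with few parameters.
        self.projs = nn.ModuleList(
            nn.Conv2d(c, fused_channels, kernel_size=1) for c in in_channels
        )
        # Per-scale spatial attention: feature map -> 1-channel weight map.
        self.atts = nn.ModuleList(
            nn.Conv2d(fused_channels, 1, kernel_size=1) for _ in in_channels
        )

    def forward(self, feats):
        fused = 0
        for proj, att, f in zip(self.projs, self.atts, feats):
            f = F.adaptive_avg_pool2d(f, self.out_size)  # downsample to common size
            f = proj(f)                                  # unify channel count
            fused = fused + torch.sigmoid(att(f)) * f    # attention-weighted sum
        return fused

# Toy usage: three feature maps of decreasing resolution fuse to (1, 128, 16, 16).
feats = [torch.randn(1, 64, 64, 64),
         torch.randn(1, 128, 32, 32),
         torch.randn(1, 256, 16, 16)]
fused = MultiScaleFusion()(feats)
```

Consistent with the embodiment described above, a single backbone and fusion module would be applied to both the real and the attack face patches, so the two feature extractors share all network weights.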
As a preferred embodiment of the method, the generative adversarial network includes a feature generator and a feature discriminator, and the step of reconstructing the real face image features based on the generative adversarial network to obtain reconstructed real face image features specifically includes:
S31, generating a plurality of reconstructed features from the real face image features with the feature generator;
S32, discriminating the reconstructed features with the feature discriminator;
S33, when a reconstructed feature is judged to be fake, adjusting the parameters of the feature generator and the feature discriminator;
and S34, looping steps S31-S33 until the generated reconstructed features are judged to be real face image features.
Specifically, the generative adversarial network reconstructs the real face, which improves the network's ability to recognize real faces; in other words, it provides generalization across attack types. The generative adversarial network consists of a feature generator G and a feature discriminator D. During training, D receives real data and the fake data generated by G, and its role is to judge whether an input belongs to the real or the fake data. Training continues until the two reach an equilibrium, that is, until the features generated by G are realistic enough to pass as genuine. One possible training step is sketched after the loss formula below.
Further, as a preferred embodiment of the method, the parameters of the feature generator and the feature discriminator are adjusted based on a loss function given by the following formula:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

In the above formula, D denotes the feature discriminator, G denotes the feature generator, D(x) denotes the probability that x is a real picture, $\mathbb{E}_{x \sim p_{data}(x)}$ denotes the expectation over the real data distribution $p_{data}(x)$, and $\mathbb{E}_{z \sim p_z(z)}$ denotes the expectation over the input noise distribution $p_z(z)$.
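The following is a minimal sketch of one alternating update implementing this minimax objective on feature vectors. The MLP architectures, feature and noise dimensions, and hyperparameters are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

# Assumed sizes: 128-dim fused features, 64-dim noise input to the generator.
FEAT_DIM, NOISE_DIM = 128, 64
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(), nn.Linear(256, FEAT_DIM))
D = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real_feats: torch.Tensor) -> tuple:
    """One alternating discriminator/generator update on a feature batch."""
    n = real_feats.size(0)
    z = torch.randn(n, NOISE_DIM)

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    fake_feats = G(z).detach()
    loss_d = bce(D(real_feats), torch.ones(n, 1)) + \
             bce(D(fake_feats), torch.zeros(n, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: push D(G(z)) toward 1 (non-saturating form of the loss).
    loss_g = bce(D(G(z)), torch.ones(n, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```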
As a further preferred embodiment of the method, the expression of the triplet loss function is as follows:

$$L_{triplet} = \sum_{i}^{N} \left[ \left\| f(x_i^a) - f(x_i^p) \right\|_2^2 - \left\| f(x_i^a) - f(x_i^n) \right\|_2^2 + \alpha \right]_+$$

In the above equation, the sum runs over triplets drawn from the input face picture data set, $\|\cdot\|_2^2$ denotes the squared Euclidean distance, $x_i^a$ denotes an anchor in the data set, $x_i^p$ denotes the positive example corresponding to the anchor, $x_i^n$ denotes the negative example corresponding to the anchor, and the margin $\alpha$ is the minimum required separation between the anchor-positive distance and the anchor-negative distance.
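A short sketch of this constraint, matching the squared-Euclidean form above; the margin value and batch shapes are assumptions for illustration.

```python
import torch

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, alpha: float = 0.3) -> torch.Tensor:
    """Squared-Euclidean triplet loss with margin alpha (assumed 0.3)."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)   # anchor-positive distances
    d_an = (anchor - negative).pow(2).sum(dim=1)   # anchor-negative distances
    return torch.clamp(d_ap - d_an + alpha, min=0).mean()

# Toy usage: anchors/positives are (reconstructed) real face features,
# negatives are attack face features.
a, p, n = torch.randn(32, 128), torch.randn(32, 128), torch.randn(32, 128)
loss = triplet_loss(a, p, n)
```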
Further, as a preferred embodiment of the method, before the step of detecting the image to be detected according to the classification boundary using the fully connected layer as the classifier, the method also includes performing image alignment and image cropping on the image to be detected.
Specifically, the cropped images are unified to a size of 256 × 256 × 3, which helps make the relevant information easier to detect and simplifies the data as much as possible. An end-to-end inference sketch follows.
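Putting the pieces together, the sketch below shows one way the detection step could look, with a fully connected layer as the classifier. The 8-layer backbone, its tap points, the class index convention and the reuse of the MultiScaleFusion module sketched earlier are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Illustrative backbone of 8 cascaded conv layers, tapped at three depths."""
    def __init__(self):
        super().__init__()
        chans = [3, 32, 32, 64, 64, 128, 128, 256, 256]
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 3,
                          stride=2 if i % 2 else 1, padding=1),
                nn.ReLU())
            for i in range(8)
        )

    def forward(self, x):
        taps = []
        for i, layer in enumerate(self.layers):
            x = layer(x)
            if i in (3, 5, 7):            # collect multi-scale feature maps
                taps.append(x)
        return taps                       # channel counts (64, 128, 256)

backbone = Backbone()
fusion = MultiScaleFusion()               # fusion module sketched earlier
classifier = nn.Linear(128 * 16 * 16, 2)  # fully connected layer: real vs. attack

@torch.no_grad()
def detect(face: torch.Tensor) -> bool:
    """Classify an aligned 3x256x256 face crop; True means real (assumed index 0)."""
    feats = backbone(face.unsqueeze(0))
    fused = fusion(feats).flatten(1)      # (1, 128*16*16)
    return classifier(fused).argmax(dim=1).item() == 0
```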
A living body detection system based on multi-scale feature fusion comprises:
a preprocessing module for acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image;
a feature extraction module for performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features;
a reconstruction module for reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features;
a constraint module for constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary;
and a detection module for detecting the image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result.
A living body detection device based on multi-scale feature fusion:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the living body detection method based on multi-scale feature fusion described above.
The contents of the above method embodiments are all applicable to this device embodiment; the functions specifically implemented by this embodiment are the same as those of the above method embodiments, and the beneficial effects achieved are also the same.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A living body detection method based on multi-scale feature fusion, characterized by comprising the following steps:
S1, acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image;
S2, performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features;
S3, reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features;
S4, constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary;
and S5, detecting the image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result.
2. The living body detection method based on multi-scale feature fusion according to claim 1, wherein the step of acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image specifically comprises:
S11, acquiring a real face image for training and the corresponding attack face image;
S12, performing face key point detection on the real face image and the corresponding attack face image based on a multitask convolutional neural network to obtain the face position;
and S13, cropping the real face image and the corresponding attack face image according to the face position to obtain the cropped training image.
3. The living body detection method based on multi-scale feature fusion according to claim 2, wherein the feature extraction network comprises a neural network and a multi-scale feature fusion module, the neural network being composed of 8 cascaded convolutional layers, and the step of performing feature extraction on the cropped training image based on the feature extraction network to obtain real face image features and attack face image features specifically comprises:
S21, extracting features from the cropped training images with each of the convolutional layers to obtain the per-layer features of the real face image and the attack face image;
and S22, fusing the per-layer features of the real face image and the attack face image based on the multi-scale feature fusion module to obtain the real face image features and the attack face image features.
4. The living body detection method based on multi-scale feature fusion according to claim 3, wherein the generative adversarial network includes a feature generator and a feature discriminator, and the step of reconstructing the real face image features based on the generative adversarial network to obtain reconstructed real face image features specifically includes:
S31, generating a plurality of reconstructed features from the real face image with the feature generator;
S32, discriminating the reconstructed features with the feature discriminator;
S33, when a reconstructed feature is judged to be fake, adjusting the parameters of the feature generator and the feature discriminator;
and S34, looping steps S31-S33 until the generated reconstructed features are judged to be real face image features.
5. The living body detection method based on multi-scale feature fusion according to claim 4, wherein the parameters of the feature generator and the feature discriminator are adjusted based on a loss function given by the following formula:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

wherein D denotes the feature discriminator, G denotes the feature generator, D(x) denotes the probability that x is a real picture, $\mathbb{E}_{x \sim p_{data}(x)}$ denotes the expectation over the real data distribution $p_{data}(x)$, and $\mathbb{E}_{z \sim p_z(z)}$ denotes the expectation over the input noise distribution $p_z(z)$.
6. The living body detection method based on multi-scale feature fusion according to claim 1, wherein the expression of the triplet loss function is as follows:

$$L_{triplet} = \sum_{i}^{N} \left[ \left\| f(x_i^a) - f(x_i^p) \right\|_2^2 - \left\| f(x_i^a) - f(x_i^n) \right\|_2^2 + \alpha \right]_+$$

wherein the sum runs over triplets drawn from the input face picture data set, $\|\cdot\|_2^2$ denotes the squared Euclidean distance, $x_i^a$ denotes an anchor in the data set, $x_i^p$ denotes the positive example corresponding to the anchor, $x_i^n$ denotes the negative example corresponding to the anchor, and the margin $\alpha$ is the minimum required separation between the anchor-positive distance and the anchor-negative distance.
7. The living body detection method based on multi-scale feature fusion according to claim 1, wherein before the step of detecting the image to be detected according to the classification boundary using the fully connected layer as the classifier, the method further comprises performing image alignment and image cropping on the image to be detected.
8. A living body detection system based on multi-scale feature fusion, characterized by comprising:
a preprocessing module for acquiring a training image, performing face key point detection on the training image, and cropping to obtain a cropped training image;
a feature extraction module for performing feature extraction on the cropped training image based on a feature extraction network to obtain real face image features and attack face image features;
a reconstruction module for reconstructing the real face image features based on a generative adversarial network to obtain reconstructed real face image features;
a constraint module for constraining the reconstructed real face image features and the attack face image features based on a triplet loss function to obtain a classification boundary;
and a detection module for detecting the image to be detected according to the classification boundary, using a fully connected layer as the classifier, to obtain a detection result.
9. A living body detection device based on multi-scale feature fusion is characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the living body detection method based on multi-scale feature fusion according to any one of claims 1-7.
CN202110835583.XA 2021-07-23 2021-07-23 Living body detection method, system and device based on multi-scale feature fusion Active CN113505722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110835583.XA CN113505722B (en) 2021-07-23 2021-07-23 Living body detection method, system and device based on multi-scale feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110835583.XA CN113505722B (en) 2021-07-23 2021-07-23 Living body detection method, system and device based on multi-scale feature fusion

Publications (2)

Publication Number Publication Date
CN113505722A 2021-10-15
CN113505722B CN113505722B (en) 2024-01-02

Family

ID=78014378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110835583.XA Active CN113505722B (en) 2021-07-23 2021-07-23 Living body detection method, system and device based on multi-scale feature fusion

Country Status (1)

Country Link
CN (1) CN113505722B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596615A (en) * 2022-03-04 2022-06-07 湖南中科助英智能科技研究院有限公司 Face living body detection method, device, equipment and medium based on counterstudy

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217195A (en) * 2014-08-07 2014-12-17 中国矿业大学 A stereo imaging and recognition system for hand veins
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method
CN107194376A (en) * 2017-06-21 2017-09-22 北京市威富安防科技有限公司 Mask fraud convolutional neural networks training method and human face in-vivo detection method
CN109523463A (en) * 2018-11-20 2019-03-26 中山大学 A kind of face aging method generating confrontation network based on condition
CN109543640A (en) * 2018-11-29 2019-03-29 中国科学院重庆绿色智能技术研究院 A kind of biopsy method based on image conversion
CN111967331A (en) * 2020-07-20 2020-11-20 华南理工大学 Face representation attack detection method and system based on fusion feature and dictionary learning
CN112052761A (en) * 2020-08-27 2020-12-08 腾讯科技(深圳)有限公司 Method and device for generating confrontation face image
CN112215043A (en) * 2019-07-12 2021-01-12 普天信息技术有限公司 Human face living body detection method
CN113112411A (en) * 2020-01-13 2021-07-13 南京信息工程大学 Human face image semantic restoration method based on multi-scale feature fusion
CN115376184A (en) * 2022-07-20 2022-11-22 新大陆数字技术股份有限公司 IR image in-vivo detection method based on generation countermeasure network

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method
CN104217195A (en) * 2014-08-07 2014-12-17 中国矿业大学 A stereo imaging and recognition system for hand veins
CN107194376A (en) * 2017-06-21 2017-09-22 北京市威富安防科技有限公司 Mask fraud convolutional neural networks training method and human face in-vivo detection method
CN109523463A (en) * 2018-11-20 2019-03-26 中山大学 A kind of face aging method generating confrontation network based on condition
CN109543640A (en) * 2018-11-29 2019-03-29 中国科学院重庆绿色智能技术研究院 A kind of biopsy method based on image conversion
CN112215043A (en) * 2019-07-12 2021-01-12 普天信息技术有限公司 Human face living body detection method
CN113112411A (en) * 2020-01-13 2021-07-13 南京信息工程大学 Human face image semantic restoration method based on multi-scale feature fusion
CN111967331A (en) * 2020-07-20 2020-11-20 华南理工大学 Face representation attack detection method and system based on fusion feature and dictionary learning
CN112052761A (en) * 2020-08-27 2020-12-08 腾讯科技(深圳)有限公司 Method and device for generating confrontation face image
CN115376184A (en) * 2022-07-20 2022-11-22 新大陆数字技术股份有限公司 IR image in-vivo detection method based on generation countermeasure network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596615A (en) * 2022-03-04 2022-06-07 湖南中科助英智能科技研究院有限公司 Face living body detection method, device, equipment and medium based on counterstudy

Also Published As

Publication number Publication date
CN113505722B (en) 2024-01-02

Similar Documents

Publication Publication Date Title
Nguyen et al. Modular convolutional neural network for discriminating between computer-generated images and photographic images
Qiu et al. Finger vein presentation attack detection using total variation decomposition
Ayyappan et al. Criminals and missing children identification using face recognition and web scrapping
CN111274947A (en) Multi-task multi-thread face recognition method, system and storage medium
Parashar et al. Intra-class variations with deep learning-based gait analysis: A comprehensive survey of covariates and methods
Yeh et al. Face liveness detection based on perceptual image quality assessment features with multi-scale analysis
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
CN112001285B (en) Method, device, terminal and medium for processing beauty images
Agarwal et al. Deceiving face presentation attack detection via image transforms
Liu et al. Overview of image inpainting and forensic technology
CN115240280A (en) Construction method of human face living body detection classification model, detection classification method and device
Sharma et al. A survey on face presentation attack detection mechanisms: hitherto and future perspectives
CN116453232A (en) Face living body detection method, training method and device of face living body detection model
CN113505722A (en) In-vivo detection method, system and device based on multi-scale feature fusion
Dosi et al. Seg-dgdnet: Segmentation based disguise guided dropout network for low resolution face recognition
Wang et al. Fighting malicious media data: A survey on tampering detection and deepfake detection
Geradts et al. Interpol review of forensic video analysis, 2019–2022
CN116311434A (en) Face counterfeiting detection method and device, electronic equipment and storage medium
CN114140674B (en) Electronic evidence availability identification method combined with image processing and data mining technology
CN112906508B (en) Face living body detection method based on convolutional neural network
Reddy et al. Facial Recognition Enhancement Using Deep Learning Techniques
CN114663930A (en) Living body detection method and device, terminal equipment and storage medium
Yin et al. Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization
Sun et al. Presentation attacks in palmprint recognition systems
KR101031369B1 (en) Apparatus for identifying face from image and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant