CN112257685A - Face copying recognition method and device, electronic equipment and storage medium - Google Patents

Face copying recognition method and device, electronic equipment and storage medium

Info

Publication number
CN112257685A
CN112257685A (application CN202011420901.8A)
Authority
CN
China
Prior art keywords
image
model
face
copying
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011420901.8A
Other languages
Chinese (zh)
Inventor
王小东
吕文勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu New Hope Finance Information Co Ltd
Original Assignee
Chengdu New Hope Finance Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu New Hope Finance Information Co Ltd filed Critical Chengdu New Hope Finance Information Co Ltd
Priority to CN202011420901.8A
Publication of CN112257685A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The application provides a face copying recognition method and device, an electronic device, and a storage medium, relating to the technical field of image recognition. The method comprises the following steps: preliminarily determining, through a whole-image copying recognition model, whether a complete image is a copied image, the whole-image copying recognition model being obtained based on a deep convolutional neural network; when the complete image is preliminarily determined not to be a copied image, acquiring a face image from the complete image; and secondarily determining, through a face copying recognition model, whether the face image is a copied image, the face copying recognition model being obtained by fusing a depth estimation model, a texture estimation model, and an image classification model. By performing multi-level face copying recognition through the whole-image copying recognition model and the face copying recognition model, the method improves the accuracy and efficiency of recognizing print attacks, mask attacks, screen attacks, and other attack types common in RGB-image-based face verification.

Description

Face copying recognition method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of image processing, in particular to a face copying and identifying method, a face copying and identifying device, electronic equipment and a storage medium.
Background
At present, face recognition has entered every corner of daily life: people pick up a mobile phone and scan their face to pay, scan their face to complete attendance, and scan their face to check into a hotel, all of which brings great convenience. An indispensable technology in face recognition is face liveness detection: the algorithm must determine not only that the face image matches, but also that it is a genuinely present, live face currently captured by the camera. Humans can distinguish living beings from objects by vision alone; a machine, however, must learn to do so. For example, when recognition is performed by scanning a face, once the phone camera opens, the machine must detect whether what it sees is a live face image. If a photo, a face in a video frame, or a face mask made from a face model appears in front of the lens, the machine needs to make the judgment itself and ensure that a face image that is not a currently captured live face cannot pass recognition. This is the significance of face liveness detection.
Face liveness detection mainly works by identifying physiological information of a living body, using vital signs to distinguish a real person from fakes made of non-living materials such as photos, silica gel, and plastic. To ensure that the face image is currently captured from a live person, commonly used liveness detection typically includes several recognition steps. One is blink discrimination: for application systems that can require user cooperation, the user is asked to blink once or twice, and the face recognition system distinguishes a picture from a real face by automatically judging the change in the eyes' open/closed state. Another is mouth opening-and-closing discrimination, which works similarly. However, even with blink checks, an attacker may play a recorded video of another person blinking, and the blink may happen to match; the face therefore still needs to be captured and recognized after the action. There is also silent liveness detection, which requires no user action and judges liveness from a static RGB image of the user. But because current cameras have high definition and resolution, it is difficult to distinguish a real person from a non-living attack using an RGB picture alone: with an infrared or structured-light camera, a real person and a fake attack are easy to distinguish from a single image, but with a common RGB camera this is difficult, and the recognition accuracy is low.
Disclosure of Invention
In view of this, an embodiment of the present application provides a face copying recognition method, a face copying recognition device, an electronic device, and a storage medium, so as to solve the problems in the prior art that the difficulty of recognizing a live face based on a common RGB camera is high and the accuracy is low.
The embodiment of the application provides a face copying and identifying method, which comprises the following steps: preliminarily determining whether the complete image is a copied image or not through a complete image copying and recognizing model, wherein the complete image copying and recognizing model is obtained on the basis of a deep convolution neural network; when the complete image is preliminarily determined not to be the copied image, acquiring a face image from the complete image; and secondarily determining whether the face image is a copied image or not through a face copying recognition model, wherein the face copying recognition model is obtained by fusing a depth estimation model, a texture estimation model and an image classification model.
In the implementation process, the whole image is subjected to simple primary judgment through the whole image copying identification model, the face image judged through the whole image copying identification model is subjected to secondary judgment of the face copying identification model, and image features such as moire fringes and reflection are identified based on the fusion features of the depth estimation model, the texture estimation model and the image classification output, so that whether the current face image is a copied image or not is more accurately distinguished, and the identification capability and accuracy of common printing attacks, mask attacks, screen attacks and the like in face verification based on the RGB camera are improved.
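As a non-limiting illustration, the two-stage decision flow described above can be sketched as follows; the names whole_image_model, face_copy_model, and detect_face are hypothetical placeholders, and the thresholds are illustrative only.

```python
# Minimal sketch of the two-stage copied-image decision flow; the model and
# helper names are hypothetical placeholders, and thresholds are illustrative.

def is_copied_image(full_image, whole_image_model, face_copy_model,
                    detect_face, stage1_threshold=0.00001,
                    stage2_threshold=0.5):
    # Stage 1: coarse whole-image check (frames, paper, screen traces).
    if whole_image_model(full_image) >= stage1_threshold:
        return True  # obviously a copied image; reject early

    # Stage 2: crop the face and run the fused fine-grained model.
    face = detect_face(full_image)
    if face is None:
        return True  # no face found; treat as failed verification
    return face_copy_model(face) >= stage2_threshold
```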
Optionally, before the preliminary determining whether the complete image is a copied image through the complete image copying recognition model, the method further includes: acquiring a first data set; building a network structure by utilizing a deep convolutional neural network, wherein the network structure is used for fusing HSV space and RGB space of an image; and carrying out neural network training based on the network structure and the first data set to obtain the whole image reproduction recognition model.
In the implementation mode, the HSV space and the RGB space of the image are fused by training the whole image copying recognition model, so that the color transmission and the machine calculation applicability can be more balanced, and the recognition accuracy is improved.
Optionally, the acquiring the first data set includes: acquiring first image data of a first number of real persons irradiated by screen light; determining a first marker of the first image data, the first marker comprising one or more of a shooting environment, a lighting intensity, a light type, a screen device, and a face age group; copying the first image data to obtain copied image data; determining a second mark of the copied image data, wherein the second mark comprises one or more of screen shooting copying, printing copying and frame leakage; and taking the marked first image data as a positive sample in the first data set, and taking the marked reproduced image data as a negative sample in the first data set.
In the implementation mode, the subsequent training of the whole image copying recognition model is carried out by adopting images with different shooting environments, illumination intensities, light types, screen devices and human face ages and different images with different screen shooting types, copying, printing and copying and without frame leakage, so that the recognition accuracy of the whole image copying recognition model to the human face image which is obviously a non-living body can be improved.
Optionally, after the step of taking the marked first image data as a positive sample in the first data set and the marked reproduced image data as a negative sample in the first data set, the step of acquiring a first data set further includes: performing data enhancement on the first data set, the data enhancement comprising at least one step of scaling, rotation, exposure, hue, blur, translation, and compression.
In the implementation mode, more training sets can be obtained by performing data enhancement on the first data set for training, so that the identification accuracy of the whole image reproduction identification model obtained by training is improved.
Optionally, before the secondarily determining whether the face image is a copied image by the face copying recognition model, the method further includes: respectively extracting feature information of the face image through the depth estimation model, the texture estimation model and the image classification model; model fusion is carried out on the depth estimation model, the texture estimation model and the image classification model based on the characteristic information; classifying the feature information output by the fused model by adopting SoftMax; training the fused model and the SoftMax to obtain the face copying recognition model.
In the implementation mode, secondary judgment of the face copying recognition model is carried out on the face image judged by the whole image copying recognition model, image features such as moire patterns, reflection and the like are recognized based on the depth estimation model, the texture estimation model and fusion features output by image classification, and then classification is carried out based on the fusion features by adopting SoftMax, so that whether the current face image is the copied image or not is more accurately distinguished, and the living face image detection accuracy based on the RGB image is improved.
Optionally, before the extracting the feature information of the face image through the depth estimation model, the texture estimation model and the image classification model respectively, the method further includes: generating positive samples of a second data set from real three-dimensional image data by using PRNet, and generating negative samples of the second data set from prosthesis attack image data without three-dimensional information; training and obtaining the depth estimation model based on the second data set; generating positive samples of a third data set from real-person images without printing characteristic information, and generating negative samples of the third data set from printing attack images with printing characteristic information; training and obtaining the texture estimation model based on the third data set; taking the marked first image data as positive samples in a fourth data set, and the marked copied image data as negative samples in the fourth data set; and training and obtaining the image classification model based on the fourth data set.
In this implementation, PRNet is introduced to perform model training on samples generated from real three-dimensional image data, samples generated from real images without printing characteristic information, the first image data, the copied image data, and the like, so that the face copying recognition model can more accurately recognize common attacks such as 3D print attacks, headgear attacks, and mask attacks.
Optionally, the training the fused model and SoftMax to obtain the face copying recognition model includes: training the fused model and the SoftMax based on an attention mechanism to obtain the face copying recognition model.
In the implementation mode, the attention mechanism is introduced to enable the model to process the sensitive area more specifically, so that the recognition efficiency and accuracy of the face copying recognition model are improved.
The embodiment of the application further provides a face copying and recognizing device, the device includes: the whole image recognition module is used for preliminarily determining whether the whole image is a copied image or not through a whole image copying recognition model, and the whole image copying recognition model is obtained on the basis of a deep convolution neural network; the matting module is used for acquiring a face image from the complete image when the complete image is preliminarily determined not to be a copied image; and the face recognition module is used for secondarily determining whether the face image is a copied image or not through a face copying recognition model, and the face copying recognition model is obtained by fusing a depth estimation model, a texture estimation model and an image classification model.
In the implementation process, the whole image is subjected to simple primary judgment through the whole image copying identification model, the face image judged through the whole image copying identification model is subjected to secondary judgment of the face copying identification model, and image features such as moire fringes and reflection are identified based on the fusion features of the depth estimation model, the texture estimation model and the image classification output, so that whether the current face image is a copied image or not is more accurately distinguished, and the identification capability and accuracy of common printing attacks, mask attacks, screen attacks and the like in face verification based on the RGB camera are improved.
Optionally, the face duplication recognition apparatus further includes: a first training module for obtaining a first data set; building a network structure by utilizing a deep convolutional neural network, wherein the network structure is used for fusing HSV space and RGB space of an image; and carrying out neural network training based on the network structure and the first data set to obtain the whole image reproduction recognition model.
In the implementation mode, the HSV space and the RGB space of the image are fused by training the whole image copying recognition model, so that the color transmission and the machine calculation applicability can be more balanced, and the recognition accuracy is improved.
Optionally, the first training module is specifically configured to: acquiring first image data of a first number of real persons irradiated by screen light; determining a first marker of the first image data, the first marker comprising one or more of a shooting environment, a lighting intensity, a light type, a screen device, and a face age group; copying the first image data to obtain copied image data; determining a second mark of the copied image data, wherein the second mark comprises one or more of screen shooting copying, printing copying and frame leakage; and taking the marked first image data as a positive sample in the first data set, and taking the marked reproduced image data as a negative sample in the first data set.
In the implementation mode, the subsequent training of the whole image copying recognition model is carried out by adopting images with different shooting environments, illumination intensities, light types, screen devices and human face ages and different images with different screen shooting types, copying, printing and copying and without frame leakage, so that the recognition accuracy of the whole image copying recognition model to the human face image which is obviously a non-living body can be improved.
Optionally, the first training module is specifically configured to: performing data enhancement on the first data set, the data enhancement comprising at least one step of scaling, rotation, exposure, hue, blur, translation, and compression.
In the implementation mode, more training sets can be obtained by performing data enhancement on the first data set for training, so that the identification accuracy of the whole image reproduction identification model obtained by training is improved.
Optionally, the face duplication recognition apparatus further includes: the second training module is used for respectively extracting the characteristic information of the face image through the depth estimation model, the texture estimation model and the image classification model; model fusion is carried out on the depth estimation model, the texture estimation model and the image classification model based on the characteristic information; classifying the feature information output by the fused model by adopting SoftMax; training the fused model and the SoftMax to obtain the face copying recognition model.
In the implementation mode, secondary judgment of the face copying recognition model is carried out on the face image judged by the whole image copying recognition model, image features such as moire patterns, reflection and the like are recognized based on the depth estimation model, the texture estimation model and fusion features output by image classification, and then classification is carried out based on the fusion features by adopting SoftMax, so that whether the current face image is the copied image or not is more accurately distinguished, and the living face image detection accuracy based on the RGB image is improved.
Optionally, the second training module is specifically configured to: generate positive samples of a second data set from real three-dimensional image data by using PRNet, and generate negative samples of the second data set from prosthesis attack image data without three-dimensional information; train and obtain the depth estimation model based on the second data set; generate positive samples of a third data set from real-person images without printing characteristic information, and generate negative samples of the third data set from printing attack images with printing characteristic information; train and obtain the texture estimation model based on the third data set; take the marked first image data as positive samples in a fourth data set, and the marked copied image data as negative samples in the fourth data set; and train and obtain the image classification model based on the fourth data set.
In this implementation, PRNet is introduced to perform model training on samples generated from real three-dimensional image data, samples generated from real images without printing characteristic information, the first image data, the copied image data, and the like, so that the face copying recognition model can more accurately recognize common attacks such as 3D print attacks, headgear attacks, and mask attacks.
Optionally, the second training module is specifically configured to: training the fused model and the SoftMax based on an attention mechanism to obtain the face copying recognition model.
In the implementation mode, the attention mechanism is introduced to enable the model to process the sensitive area more specifically, so that the recognition efficiency and accuracy of the face copying recognition model are improved.
An embodiment of the present application further provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores program instructions, and the processor executes steps in any one of the above implementation manners when reading and executing the program instructions.
The embodiment of the present application further provides a readable storage medium, in which computer program instructions are stored, and the computer program instructions are read by a processor and executed to perform the steps in any of the above implementation manners.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flow chart of a face copying and recognizing method provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a training step of the entire image reproduction recognition model according to the embodiment of the present application.
Fig. 3 is a schematic flowchart of a first data set obtaining step according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a deep convolutional neural network according to an embodiment of the present disclosure.
Fig. 5 is a schematic flow chart illustrating a training procedure of a face duplication recognition model according to an embodiment of the present application.
Fig. 6 is a schematic block diagram of a face copying and recognizing device according to an embodiment of the present disclosure.
Icon: 20-a face copying recognition device; 21-whole image identification module; 22-matting module; 23-face recognition module.
Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The applicant has found through research that existing RGB-based schemes for recognizing whether a face image is copied are not fast enough, and their precision can hardly meet industry requirements; many algorithms perform no better than random guessing. Most algorithms judge living versus non-living bodies using only the face information after the face image has been cropped out; for a high-definition camera the misjudgment rate is high, because face information alone cannot exploit cues such as the frames and paper carried by the attack.
In order to solve the above problems, an embodiment of the present application provides a face copying recognition method, please refer to fig. 1, where fig. 1 is a schematic flow diagram of the face copying recognition method provided in the embodiment of the present application, and the specific steps of the face copying recognition method may be as follows:
step S12: and preliminarily determining whether the complete image is a copied image or not through the whole image copying and recognizing model.
In most cases a copied image carries frames, moire patterns, and other traces of a computer, mobile phone, or printing paper, and can be regarded as a copy with high probability; however, in cases such as a person standing in front of a computer, the image is difficult to identify with a traditional algorithm. Therefore, to improve the accuracy of whole-image face copying recognition, the present embodiment proposes step S12, which performs face copying recognition on the whole image based on deep learning.
The whole-image copying recognition model in this embodiment can be obtained by building a network structure based on a deep convolutional neural network. Convolutional Neural Networks (CNN) are a class of feedforward neural networks that contain convolution computations and have a deep structure, and are one of the representative algorithms of deep learning. A convolutional neural network has feature-learning capability and can perform shift-invariant classification of input information according to its hierarchical structure. Convolutional neural networks are constructed by imitating the visual perception mechanism of living beings and can perform both supervised and unsupervised learning. Because convolution kernel parameters are shared within hidden layers and inter-layer connections are sparse, a convolutional neural network can learn grid-like topological features such as pixels and audio with a small amount of computation, with stable performance and without additional feature engineering requirements on the data.
Optionally, the whole image reproduction recognition model in this embodiment may be obtained based on a deep convolutional neural network, a specific training process of the whole image reproduction recognition model may refer to fig. 2, and fig. 2 is a schematic flow chart of a training step of the whole image reproduction recognition model provided in this embodiment, where the training step may specifically be as follows:
step S111: a first data set is acquired.
Specifically, in this embodiment the whole-image copying recognition model needs to recognize images with relatively obvious copied frames as well as images where the frame is only partially visible, so the first data set needs to contain face images acquired under a variety of conditions for model training. Refer to fig. 3, which is a schematic flow diagram of the first data set acquisition step provided in this embodiment of the present application.
Step S1111: first image data of a first number of real persons illuminated by screen light is acquired.
Optionally, the first image data of a real person irradiated by screen light in this embodiment may be image data of a real person standing within the illumination range of a common screen, such as a computer, mobile phone, tablet computer, or e-book reader, i.e., a real person photographed in the presence of screen light.
Step S1112: a first marker of the first image data is determined, the first marker including one or more of a shooting environment, a lighting intensity, a light type, a screen device, and a face age group.
Specifically, the shooting environment in this embodiment may include indoor, outdoor, and in-vehicle; the illumination intensity may include strong light, natural light, weak light, and the like; the light type may include front light, side light, backlight, and the like; and the screen device may include Android devices, iOS devices, and various specific device models.
Step S1113: copying the first image data to obtain copied image data.
Optionally, in this embodiment, the first image data may be subjected to screen shooting type copying, print copying, frame missing copying, and the like through various mobile phone type, computer type, or tablet computer type devices. Wherein, printing reproduction can include normal color printing reproduction, color printing deformation reproduction, color printing cutout reproduction, and the like.
Step S1114: determining a second mark of the copied image data, wherein the second mark comprises one or more of screen-shot copying, print copying, and frame leakage or no frame leakage.
It should be understood that, before the training data is used for model training, the training data needs to be labeled to enable the model to be classified and learned, and in this embodiment, a relatively common image type is selected as the first label and the second label, so that the recognition accuracy of the entire image reproduction recognition model obtained through training can be improved.
Step S1115: taking the marked first image data as positive samples in the first data set, and the marked copied image data as negative samples in the first data set.
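As a non-limiting illustration of the sample records produced by steps S1111 to S1115, the field names and values below are assumptions that follow the marker vocabulary described above.

```python
# Illustrative record layout for first-data-set samples; all field names and
# values are assumptions following the markers described in the text.
positive_sample = {
    "image_path": "real/0001.jpg",
    "label": 1,                       # positive: real person under screen light
    "environment": "indoor",          # shooting environment
    "illumination": "natural light",  # illumination intensity
    "light_type": "front light",
    "screen_device": "android phone",
    "age_group": "20-30",             # face age group
}
negative_sample = {
    "image_path": "copy/0001.jpg",
    "label": 0,                       # negative: copied image
    "copy_type": "screen shooting",   # or "printing"
    "frame_leakage": True,            # frame visible in the copy
}
```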
Optionally, after the first data set is obtained, data enhancement may be performed on the data in the first data set, mainly scaling, rotation, exposure, hue, blur, translation, and/or compression, so that the data are richer and the model training effect is further improved.
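The operations named above map naturally onto common augmentation libraries; the sketch below uses albumentations (parameter names per its 1.x API), and all transform choices and ranges are illustrative assumptions rather than the patent's settings.

```python
import albumentations as A

# A possible augmentation pipeline covering the operations named above:
# translation/scaling/rotation, exposure, hue, blur and JPEG compression.
augment = A.Compose([
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2,
                       rotate_limit=15, p=0.5),       # translation/scale/rotation
    A.RandomBrightnessContrast(p=0.3),                # exposure
    A.HueSaturationValue(hue_shift_limit=10, p=0.3),  # hue
    A.GaussianBlur(blur_limit=(3, 5), p=0.2),         # blur
    A.ImageCompression(quality_lower=40, quality_upper=90, p=0.3),  # compression
])

# Usage: augmented = augment(image=img)["image"]  # img is an HxWx3 uint8 array
```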
Step S112: constructing a network structure using the deep convolutional neural network, wherein the network structure is used for fusing the HSV space and the RGB space of the image.
RGB (Red, Green, Blue) is an industry color standard in which various colors are obtained by varying the three color channels of red (R), green (G), and blue (B) and superimposing them on each other. RGB represents the colors of the red, green, and blue channels; this standard covers almost all colors perceivable by human vision and is one of the most widely used color systems. The RGB color space is suitable for display systems but not for image processing.
HSV (Hue, Saturation, Value) is a color space created by A. R. Smith in 1978 based on the intuitive properties of color, also known as the Hexcone Model, in which the parameters of a color are hue (H), saturation (S), and value (V). The HSV color space is closer to the human perceptual experience of color than RGB and is therefore used more frequently in image processing: it expresses the hue, vividness, and brightness of a color intuitively and makes color comparison convenient. In the HSV color space it is also easier to track objects of a certain color, so it is often used to segment objects of a given color.
Therefore, the embodiment fuses the RGB and the HSV, so that the whole image copying and recognizing model can analyze and recognize various color spaces, comprehensive color characteristic information is obtained, and the recognition accuracy of the model is improved.
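As a non-limiting sketch of how one image can be presented to the two color-space branches, the following uses OpenCV's standard conversions; the input resolution and normalization are assumptions.

```python
import cv2
import numpy as np

def prepare_dual_space_inputs(bgr_image, size=(224, 224)):
    """Return the RGB and HSV views of one image, as a model with one input
    branch per color space would consume them. The 224x224 input size is an
    illustrative assumption."""
    bgr = cv2.resize(bgr_image, size)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Normalize to [0, 1] for the network.
    return rgb.astype(np.float32) / 255.0, hsv.astype(np.float32) / 255.0
```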
Step S113: performing neural network training based on the network structure and the first data set to obtain the whole-image copying recognition model.
The network structure of the deep convolutional neural network may include, connected in sequence: input layers (hsv_InputLayer and yuv_InputLayer), convolutional backbones (vgg_hsv: Model and vgg_yuv: Model), a connection layer (Concatenate), a flattening layer (Flatten: flatten), a first fully-connected layer (Dense: dense), a first training-acceleration layer (batch_normalization), a first activation layer (Activation), a first discarding layer (Dropout: dropout), a second fully-connected layer (Dense_1: dense), a second training-acceleration layer (batch_normalization_1: BatchNormalization), a second activation layer (Activation_1), and an output layer with discarding (Dropout_1: dropout). A schematic diagram of the network structure is provided in fig. 4.
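A minimal Keras sketch of the two-branch structure just described is given below; the per-branch backbones, layer widths, and dropout rates are illustrative assumptions and do not reproduce the patent's exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_whole_image_model(input_shape=(224, 224, 3)):
    """Sketch of the two-branch whole-image model. The text names VGG-style
    backbones; here each branch is a small conv stack, and all widths are
    illustrative assumptions."""
    def branch(prefix):
        inp = layers.Input(shape=input_shape, name=prefix + "_input")
        x = inp
        for filters in (32, 64, 128):   # per-branch convolutional backbone
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D()(x)
        return inp, x

    hsv_in, hsv_feat = branch("hsv")
    rgb_in, rgb_feat = branch("rgb")

    x = layers.Concatenate()([hsv_feat, rgb_feat])  # connection layer
    x = layers.Flatten()(x)                         # flattening layer
    x = layers.Dense(256)(x)                        # first fully-connected layer
    x = layers.BatchNormalization()(x)              # first training-acceleration layer
    x = layers.Activation("relu")(x)                # first activation layer
    x = layers.Dropout(0.5)(x)                      # first discarding layer
    x = layers.Dense(64)(x)                         # second fully-connected layer
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # output layer
    return Model([hsv_in, rgb_in], out)
```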
It should be understood that the design loss function during model training can adopt the design mode commonly used by the deep convolutional neural network.
Optionally, since the whole-image copying recognition model only filters out images that are obviously copied, the threshold can be set relatively low when the model is deployed; for example, when the threshold is set according to the rejection rate, a rejection rate of 4/1000 may correspond to a threshold of 0.00001.
Step S14: when the complete image is preliminarily determined not to be a copied image, acquiring the face image from the complete image.
Optionally, in this embodiment a face detection model trained with YOLOv5 may be used to extract the face image. During training, the model data are labeled carefully: large faces, small faces, blurred faces, and side faces at various angles in the images are all labeled accurately, enriching the face image database; as a result, the algorithm can detect blurred, very large, and very small faces while maintaining the speed of face detection.
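As a non-limiting sketch, a custom YOLOv5 checkpoint can be loaded through torch.hub and used to crop the largest detected face; the checkpoint path face_yolov5.pt is a hypothetical placeholder for a face detector trained as described.

```python
import torch

# Load a custom-trained YOLOv5 face detector through torch.hub; the
# "face_yolov5.pt" path is a hypothetical placeholder.
model = torch.hub.load("ultralytics/yolov5", "custom", path="face_yolov5.pt")

def crop_largest_face(image):
    """Return the largest detected face crop from an HxWx3 RGB numpy array,
    or None if no face is found."""
    det = model(image).xyxy[0]  # rows of (x1, y1, x2, y2, conf, class)
    if det.shape[0] == 0:
        return None
    areas = (det[:, 2] - det[:, 0]) * (det[:, 3] - det[:, 1])
    x1, y1, x2, y2 = det[areas.argmax(), :4].int().tolist()
    return image[y1:y2, x1:x2]
```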
Step S16: secondarily determining whether the face image is a copied image through the face copying recognition model.
It should be understood that, before using the face duplication recognition model, the face duplication recognition model needs to be trained, please refer to fig. 5, where fig. 5 is a schematic flow chart of a training step of the face duplication recognition model provided in the embodiment of the present application, and the training step of the face duplication recognition model may specifically be as follows:
step S151: and respectively extracting the characteristic information of the face image through a depth estimation model, a texture estimation model and an image classification model.
Meanwhile, before they are used, the depth estimation model, the texture estimation model, and the image classification model also need to be trained.
The training steps of the depth estimation model may be as follows: generating positive samples of a second data set from real three-dimensional image data by using PRNet, and generating negative samples of the second data set from prosthesis attack image data without three-dimensional information; and training based on the second data set to obtain the depth estimation model.
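The following sketch illustrates one plausible way to form the depth-estimation training labels; prnet_depth stands in for a PRNet inference call and is a hypothetical placeholder, as are the map size and normalization.

```python
import numpy as np

# Hypothetical sketch of depth-label construction: real faces get a
# PRNet-derived dense depth map, prosthesis attacks (no 3D structure) get an
# all-zero map. `prnet_depth` is NOT a real library function; it stands in
# for PRNet inference and is assumed to return a `size`-shaped depth map.

def make_depth_label(face_image, is_real, prnet_depth, size=(32, 32)):
    if is_real:
        depth = prnet_depth(face_image)       # 3D face -> dense depth map
        depth = depth / (depth.max() + 1e-8)  # normalize to [0, 1]
        return depth.astype(np.float32)
    return np.zeros(size, dtype=np.float32)   # flat prosthesis: no depth
```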
The PRNet algorithm performs, end to end, the joint task of 3D face reconstruction and dense face alignment from a single RGB face image.
The training steps of the texture estimation model may be as follows: generating positive samples of the third data set from real-person images without printing characteristic information, and generating negative samples of the third data set from printing attack images with printing characteristic information; and training based on the third data set to obtain the texture estimation model.
Optionally, the printing characteristics in this embodiment may refer to normal color-print copying, deformed color-print copying, cut-out color-print copying, and the like. Sample training of the texture estimation model mainly exploits characteristic information, such as paper texture, light reflection, moire patterns, and wrinkles, that is absent from real-person images but present in copied-image attacks. The model structure is similar to that of fig. 4 and can effectively prevent print attacks; as with the whole-image copying recognition model, positive and negative samples are trained using the HSV and RGB color spaces.
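As a simple, non-authoritative illustration of why such texture cues are separable (not the patent's texture model), the high-frequency residual of a face crop can be measured as follows:

```python
import cv2
import numpy as np

def high_frequency_energy(bgr_face):
    """Mean absolute Laplacian response of a face crop. Paper grain, moire
    and glare concentrate energy in the high-frequency residual, so this
    statistic tends to differ between live and recaptured faces; it is only
    an illustrative cue, not the patent's model."""
    gray = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY)
    residual = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)
    return float(np.mean(np.abs(residual)))
```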
The training steps of the image classification model may be as follows: taking the marked first image data as positive samples in the fourth data set, and the marked copied image data as negative samples in the fourth data set; and training based on the fourth data set to obtain the image classification model.
The image classification model is used to recognize attack types other than those targeted by the depth estimation model and the texture estimation model, such as mask attack images and 3D model attack images, whose image characteristics differ from those of a real person. The structure of the image classification model is based on ResNet50, and its training data are prepared similarly to the samples of the whole-image copying recognition model.
Step S152: performing model fusion on the depth estimation model, the texture estimation model, and the image classification model based on the feature information.
The model fusion is output feature fusion, and the model fusion in this embodiment may adopt a model fusion mode common in the prior art.
Step S153: classifying the feature information output by the fused model using SoftMax.
The SoftMax logistic regression model is a generalization of the logistic regression model to multi-class problems, in which the class label y can take more than two values; it can therefore be used for classification.
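A minimal sketch of the output-feature fusion of step S152 and the SoftMax classification of step S153 is given below; the shared-input arrangement and the assumption that each sub-model emits a 1-D feature vector are illustrative, not the patent's exact construction.

```python
from tensorflow.keras import Input, Model, layers

def build_face_copy_model(depth_model, texture_model, cls_model,
                          input_shape=(224, 224, 3)):
    """Fuse the output features of the three sub-models and classify the
    fused vector with a two-way softmax (real vs. copied). Each sub-model
    is assumed to map the shared face input to a 1-D feature vector."""
    face = Input(shape=input_shape, name="face_image")
    fused = layers.Concatenate(name="fused_features")([
        depth_model(face),    # depth cues (3D structure vs. flat prosthesis)
        texture_model(face),  # texture cues (paper grain, moire, glare)
        cls_model(face),      # generic attack-type features
    ])
    out = layers.Dense(2, activation="softmax", name="real_vs_copy")(fused)
    return Model(face, out)
```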
Step S154: training the fused model and SoftMax to obtain the face copying recognition model.
Optionally, in this embodiment an attention mechanism may be introduced during model training; the attention mechanism can be intuitively interpreted through the human visual mechanism. For example, the human visual system tends to focus on the information in an image that assists judgment and to ignore irrelevant information. Likewise, in problems involving language or vision, some parts of the input may be more helpful to the decision than others. Attention models capture this notion of relevance by allowing the model to dynamically focus on the parts of the input that contribute to performing the task at hand. By introducing an attention mechanism, this embodiment improves the interpretability and performance of the neural network, thereby improving the recognition efficiency and accuracy of the face copying recognition model.
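The patent does not specify the form of the attention mechanism; as one common possibility, a squeeze-and-excitation style channel attention block can be sketched as follows.

```python
from tensorflow.keras import layers

def channel_attention(feature_map, reduction=8):
    """Squeeze-and-excitation style channel attention: one common way to let
    the network focus on sensitive channels/regions. The patent does not
    specify the attention form, so this block is an illustrative assumption."""
    channels = feature_map.shape[-1]
    w = layers.GlobalAveragePooling2D()(feature_map)            # squeeze
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)         # excitation
    w = layers.Reshape((1, 1, channels))(w)
    return layers.Multiply()([feature_map, w])                  # reweight channels
```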
The face copying recognition method provided by the embodiment of the present application first performs copying recognition on the whole image, using the frames, reflection information, and other cues carried by a copied image itself, to identify images with obviously copied frames. For images with no frame or with ambiguous frame information, the face image is further cropped out and copying recognition is performed on it: the cropped face image undergoes multi-dimensional transformation and multi-model feature fusion, and an attention mechanism is added so that the algorithm pays more attention to image features such as moire patterns and reflections, thereby accurately distinguishing whether the current face image is a copied image. This solves the problem that real persons and attacks are difficult to distinguish in images acquired by an RGB camera, enabling real application and deployment in the industry.
In order to cooperate with the above-mentioned face copying recognition method provided in this embodiment, the embodiment of the present application further provides a face copying recognition device 20.
Referring to fig. 6, fig. 6 is a schematic block diagram of a face copying and recognizing device according to an embodiment of the present disclosure.
The face copying recognition device 20 includes:
the whole image recognition module 21 is configured to preliminarily determine whether the whole image is a copied image or not through a whole image copying recognition model, where the whole image copying recognition model is obtained based on a deep convolutional neural network;
a matting module 22 for obtaining a face image from the complete image when it is preliminarily determined that the complete image is not a copied image;
and the face recognition module 23 is configured to secondarily determine whether the face image is a copied image or not through a face copying recognition model, and the face copying recognition model is obtained by fusing a depth estimation model, a texture estimation model and an image classification model.
Optionally, the face duplication recognition apparatus 20 further includes: a first training module for obtaining a first data set; building a network structure by utilizing a deep convolutional neural network, wherein the network structure is used for fusing HSV space and RGB space of an image; and carrying out neural network training based on the network structure and the first data set to obtain a whole image reproduction recognition model.
Optionally, the first training module is specifically configured to: acquiring first image data of a first number of real persons irradiated by screen light; determining a first marker of the first image data, wherein the first marker comprises one or more of shooting environment, illumination intensity, light type, screen equipment and age bracket of the human face; copying the first image data to obtain copied image data; determining a second mark of the copied image data, wherein the second mark comprises one or more of screen shooting copying, printing copying and frame leakage and no frame leakage; and taking the marked first image data as a positive sample in the first data set, and taking the marked reproduced image data as a negative sample in the first data set.
Optionally, the first training module is specifically configured to: data enhancement is performed on the first data set, the data enhancement including at least one step of scaling, rotation, exposure, hue, blur, translation, and compression.
Optionally, the face duplication recognition apparatus 20 further includes: the second training module is used for respectively extracting the characteristic information of the face image through the depth estimation model, the texture estimation model and the image classification model; model fusion is carried out on the depth estimation model, the texture estimation model and the image classification model based on the characteristic information; classifying the feature information output by the fused model by adopting SoftMax; and training the fused model and SoftMax to obtain a face copying recognition model.
Optionally, the second training module is specifically configured to: generate positive samples of a second data set from real three-dimensional image data by using PRNet, and generate negative samples of the second data set from prosthesis attack image data without three-dimensional information; train based on the second data set to obtain the depth estimation model; generate positive samples of a third data set from real-person images without printing characteristic information, and generate negative samples of the third data set from printing attack images with printing characteristic information; train based on the third data set to obtain the texture estimation model; take the marked first image data as positive samples in a fourth data set, and the marked copied image data as negative samples in the fourth data set; and train and obtain the image classification model based on the fourth data set.
Optionally, the second training module is specifically configured to: training the fused model and SoftMax based on an attention mechanism to obtain a face copying recognition model.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores program instructions, and when the processor reads and runs the program instructions, the processor executes the steps in any one of the methods of face copying and recognizing provided by the present embodiment.
It should be understood that the electronic device may be a Personal Computer (PC), a tablet PC, a smart phone, a Personal Digital Assistant (PDA), or other electronic device having a logical computing function.
The embodiment of the application also provides a readable storage medium, wherein a computer program instruction is stored in the readable storage medium, and the computer program instruction is read by a processor and executed when the computer program instruction is executed by the processor, so that the steps in the face copying and recognizing method are executed.
In summary, the embodiment of the present application provides a face copying recognition method, a face copying recognition device, an electronic device, and a storage medium, where the method includes: preliminarily determining whether the complete image is a copied image or not through a complete image copying and recognizing model, wherein the complete image copying and recognizing model is obtained on the basis of a deep convolution neural network; when the complete image is preliminarily determined not to be the copied image, acquiring a face image from the complete image; and secondarily determining whether the face image is a copied image or not through a face copying recognition model, wherein the face copying recognition model is obtained by fusing a depth estimation model, a texture estimation model and an image classification model.
In the implementation process, the whole image is subjected to simple primary judgment through the whole image copying identification model, the face image judged through the whole image copying identification model is subjected to secondary judgment of the face copying identification model, and image features such as moire fringes and reflection are identified based on the fusion features of the depth estimation model, the texture estimation model and the image classification output, so that whether the current face image is a copied image or not is more accurately distinguished, and the identification capability and accuracy of common printing attacks, mask attacks, screen attacks and the like in face verification based on the RGB camera are improved.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. The apparatus embodiments described above are merely illustrative, and for example, the block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices according to various embodiments of the present application. In this regard, each block in the block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams, and combinations of blocks in the block diagrams, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Therefore, the present embodiment further provides a readable storage medium, in which computer program instructions are stored; when the computer program instructions are read and executed by a processor, the steps of any of the above methods are performed. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A face copying and recognizing method is characterized by comprising the following steps:
preliminarily determining whether the complete image is a copied image or not through a complete image copying and recognizing model, wherein the complete image copying and recognizing model is obtained on the basis of a deep convolution neural network;
when the complete image is preliminarily determined not to be the copied image, acquiring a face image from the complete image;
and secondarily determining whether the face image is a copied image or not through a face copying recognition model, wherein the face copying recognition model is obtained by fusing a depth estimation model, a texture estimation model and an image classification model.
2. The method of claim 1, wherein before the preliminary determining whether the full image is a copied image by the full-image-copying recognition model, the method further comprises:
acquiring a first data set;
building a network structure by utilizing a deep convolutional neural network, wherein the network structure is used for fusing HSV space and RGB space of an image;
and carrying out neural network training based on the network structure and the first data set to obtain the whole image reproduction recognition model.
3. The method of claim 2, wherein the acquiring a first data set comprises:
acquiring first image data of a first number of real persons illuminated by screen light;
determining a first mark of the first image data, the first mark comprising one or more of a shooting environment, a light intensity, a light type, a screen device, and a face age group;
copying the first image data to obtain copied image data;
determining a second mark of the copied image data, wherein the second mark comprises one or more of screen copying, print copying, and frame leakage;
and taking the marked first image data as positive samples in the first data set, and taking the marked copied image data as negative samples in the first data set.
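To make the labelling scheme of claim 3 concrete, each sample and its marks might be stored roughly as follows; the field names, dict representation, and example values are hypothetical, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class FirstDataSetSample:
    """Illustrative record for the first data set of claim 3."""
    image_path: str
    is_copied: bool  # False -> positive sample (real person), True -> negative sample
    first_mark: dict = field(default_factory=dict)   # e.g. shooting environment, light intensity,
                                                     # light type, screen device, face age group
    second_mark: dict = field(default_factory=dict)  # e.g. screen copying, print copying, frame leakage

# Example usage (values invented for illustration):
positive = FirstDataSetSample("real_001.jpg", is_copied=False,
                              first_mark={"light_type": "screen", "age_group": "20-30"})
negative = FirstDataSetSample("copy_001.jpg", is_copied=True,
                              second_mark={"type": "screen copying", "frame_leakage": True})
```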
4. The method of claim 3, wherein, after the taking the marked first image data as positive samples in the first data set and the taking the marked copied image data as negative samples in the first data set, the acquiring a first data set further comprises:
performing data enhancement on the first data set, the data enhancement comprising at least one of scaling, rotation, exposure, hue, blurring, translation, and compression.
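A possible implementation of the data enhancement in claim 4, sketched with torchvision and PIL; the parameter ranges and the JPEG round-trip used to simulate compression are illustrative assumptions:

```python
import io
import random
from PIL import Image
from torchvision import transforms

# Scaling, rotation, translation, exposure (brightness), hue and blur,
# with illustrative parameter ranges.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.ColorJitter(brightness=0.3, hue=0.05),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
])

def jpeg_compress(img: Image.Image) -> Image.Image:
    """Simulate compression artifacts via a JPEG round-trip at random quality."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=random.randint(40, 90))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Hypothetical usage on one sample file:
enhanced = jpeg_compress(augment(Image.open("real_001.jpg")))
```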
5. The method of claim 3, wherein, before the secondarily determining whether the face image is a copied image through the face copying recognition model, the method further comprises:
extracting feature information of the face image through the depth estimation model, the texture estimation model, and the image classification model, respectively;
performing model fusion on the depth estimation model, the texture estimation model, and the image classification model based on the feature information;
classifying the feature information output by the fused model by using SoftMax;
and training the fused model and the SoftMax to obtain the face copying recognition model.
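One way to realise the fusion and SoftMax classification of claim 5 in PyTorch is sketched below; the three branch backbones are passed in as feature extractors, and the 128-dimensional feature size and concatenation-based fusion are assumptions:

```python
import torch
import torch.nn as nn

class FusedFaceCopyNet(nn.Module):
    """Sketch of claim 5: extract feature information with the depth
    estimation, texture estimation and image classification branches,
    fuse by concatenation, and classify with SoftMax."""
    def __init__(self, depth_model, texture_model, cls_model, feat_dim=128):
        super().__init__()
        self.branches = nn.ModuleList([depth_model, texture_model, cls_model])
        self.head = nn.Linear(3 * feat_dim, 2)

    def forward(self, face_image):
        # Each branch maps the face image to a (batch, feat_dim) feature vector.
        features = torch.cat([branch(face_image) for branch in self.branches], dim=1)
        return torch.softmax(self.head(features), dim=1)  # genuine vs. copied
```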
6. The method according to claim 5, wherein, before the respectively extracting feature information of the face image through the depth estimation model, the texture estimation model, and the image classification model, the method further comprises:
generating positive samples of a second data set from real three-dimensional image data by using PRNet, and generating negative samples of the second data set from prosthesis-attack image data without three-dimensional information;
training and obtaining the depth estimation model based on the second data set;
generating positive samples of a third data set from real-person images without printing feature information, and generating negative samples of the third data set from print-attack images with printing feature information;
training and obtaining the texture estimation model based on the third data set;
taking the marked first image data as positive samples in a fourth data set, and taking the marked copied image data as negative samples in the fourth data set;
and training and obtaining the image classification model based on the fourth data set.
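Since claim 6 trains each branch on its own data set (the second, third, and fourth data sets), a single generic loop suffices as an illustration; the optimiser, batch size, and epoch count below are assumptions:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

def train_branch(model: nn.Module, dataset: Dataset, epochs: int = 10,
                 lr: float = 1e-3, device: str = "cpu") -> nn.Module:
    """Generic training loop reusable for the depth estimation, texture
    estimation and image classification models of claim 6."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:  # labels: 1 = positive sample, 0 = negative sample
            optimizer.zero_grad()
            loss = loss_fn(model(images.to(device)), labels.to(device))
            loss.backward()
            optimizer.step()
    return model
```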
7. The method of claim 5, wherein the training the fused model and the SoftMax to obtain the face copying recognition model comprises:
training the fused model and the SoftMax based on an attention mechanism to obtain the face copying recognition model.
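Claim 7 leaves the attention mechanism unspecified; one simple reading is a learned per-branch weighting over the three feature vectors before classification, sketched below with illustrative dimensions:

```python
import torch
import torch.nn as nn

class AttentionFusionHead(nn.Module):
    """Illustrative attention-based fusion for claim 7: learn how much to
    trust each branch (depth, texture, image classification) per sample."""
    def __init__(self, feat_dim: int = 128, num_branches: int = 3):
        super().__init__()
        self.attention = nn.Linear(num_branches * feat_dim, num_branches)
        self.classifier = nn.Linear(feat_dim, 2)

    def forward(self, branch_features):  # list of (batch, feat_dim) tensors
        stacked = torch.stack(branch_features, dim=1)                        # (B, 3, D)
        weights = torch.softmax(self.attention(stacked.flatten(1)), dim=1)   # (B, 3)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)                 # (B, D)
        return torch.softmax(self.classifier(fused), dim=1)                  # genuine vs. copied
```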
8. A face copying recognition apparatus, characterized in that the apparatus comprises:
a complete-image recognition module, configured to preliminarily determine, through a complete-image copying recognition model, whether a complete image is a copied image, wherein the complete-image copying recognition model is obtained based on a deep convolutional neural network;
a matting module, configured to acquire a face image from the complete image when the complete image is preliminarily determined not to be a copied image;
and a face recognition module, configured to secondarily determine, through a face copying recognition model, whether the face image is a copied image, wherein the face copying recognition model is obtained by fusing a depth estimation model, a texture estimation model, and an image classification model.
9. An electronic device, comprising a memory and a processor, wherein the memory stores program instructions which, when executed by the processor, perform the steps of the method of any one of claims 1-7.
10. A readable storage medium having stored thereon computer program instructions which, when executed by a processor, perform the steps of the method of any one of claims 1-7.
CN202011420901.8A 2020-12-08 2020-12-08 Face copying recognition method and device, electronic equipment and storage medium Pending CN112257685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011420901.8A CN112257685A (en) 2020-12-08 2020-12-08 Face copying recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011420901.8A CN112257685A (en) 2020-12-08 2020-12-08 Face copying recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112257685A 2021-01-22

Family

ID=74225018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011420901.8A Pending CN112257685A (en) 2020-12-08 2020-12-08 Face copying recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112257685A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389554A (en) * 2015-11-06 2016-03-09 北京汉王智远科技有限公司 Face-identification-based living body determination method and equipment
WO2018058554A1 (en) * 2016-09-30 2018-04-05 Intel Corporation Face anti-spoofing using spatial and temporal convolutional neural network analysis
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
WO2018226990A1 (en) * 2017-06-07 2018-12-13 Alibaba Group Holding Limited Face liveness detection method and apparatus, and electronic device
WO2020115154A1 (en) * 2018-12-04 2020-06-11 Yoti Holding Limited Anti-spoofing
CN109784148A (en) * 2018-12-06 2019-05-21 北京飞搜科技有限公司 Biopsy method and device
CN110135259A (en) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 Silent formula living body image identification method, device, computer equipment and storage medium
CN111160216A (en) * 2019-12-25 2020-05-15 开放智能机器(上海)有限公司 Multi-feature multi-model living human face recognition method
CN111310724A (en) * 2020-03-12 2020-06-19 苏州科达科技股份有限公司 In-vivo detection method and device based on deep learning, storage medium and equipment
CN112001240A (en) * 2020-07-15 2020-11-27 浙江大华技术股份有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN111860394A (en) * 2020-07-28 2020-10-30 成都新希望金融信息有限公司 Gesture estimation and gesture detection-based action living body recognition method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANWEI YANG et al.: "Learn Convolutional Neural Network for Face Anti-Spoofing", arXiv *
MEIGUI ZHANG et al.: "A Survey on Face Anti-Spoofing Algorithms", Journal of Information Hiding and Privacy Protection *
LU Ziqian et al.: "A survey of face anti-spoofing liveness detection", Journal of Cyber Security *
DENG Xiong et al.: "Face liveness detection algorithm based on deep learning and feature fusion", Journal of Computer Applications *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033530A (en) * 2021-05-31 2021-06-25 成都新希望金融信息有限公司 Certificate copying detection method and device, electronic equipment and readable storage medium
CN113033530B (en) * 2021-05-31 2022-02-22 成都新希望金融信息有限公司 Certificate copying detection method and device, electronic equipment and readable storage medium
CN113837310A (en) * 2021-09-30 2021-12-24 四川新网银行股份有限公司 Multi-scale fusion certificate copying and identifying method and device, electronic equipment and medium
CN113837310B (en) * 2021-09-30 2023-05-23 四川新网银行股份有限公司 Multi-scale fused certificate flap recognition method and device, electronic equipment and medium
CN114495192A (en) * 2021-12-09 2022-05-13 成都臻识科技发展有限公司 Multi-model-based face anti-counterfeiting method, storage medium and detection equipment
CN116071835A (en) * 2023-04-07 2023-05-05 平安银行股份有限公司 Face recognition attack post screening method and device and electronic equipment

Similar Documents

Publication Publication Date Title
Matern et al. Exploiting visual artifacts to expose deepfakes and face manipulations
CN112257685A (en) Face copying recognition method and device, electronic equipment and storage medium
Huh et al. Fighting fake news: Image splice detection via learned self-consistency
Yao et al. Oscar: On-site composition and aesthetics feedback through exemplars for photographers
US8995725B2 (en) On-site composition and aesthetics feedback through exemplars for photographers
CN105608700B (en) Photo screening method and system
CN103198316A (en) Method, apparatus and system for identifying distracting elements in an image
Leuner A replication study: Machine learning models are capable of predicting sexual orientation from facial images
CN111008971B (en) Aesthetic quality evaluation method of group photo image and real-time shooting guidance system
CN108604040A (en) By expose management effectively using capturing the image used in face recognition
CN111259757B (en) Living body identification method, device and equipment based on image
CN110363111B (en) Face living body detection method, device and storage medium based on lens distortion principle
Saxena et al. Gender and age detection using deep learning
WO2022156214A1 (en) Liveness detection method and apparatus
JPH11306348A (en) Method and device for object detection
Abraham Digital image forgery detection approaches: A review and analysis
CN115995103A (en) Face living body detection method, device, computer readable storage medium and equipment
Gomes Model learning in iconic vision
Bruce et al. Learning new faces
CN112507985A (en) Face image screening method and device, electronic equipment and storage medium
Nallapati et al. Identification of Deepfakes using Strategic Models and Architectures
CN113963391A (en) Silent in-vivo detection method and system based on binocular camera
Bennur et al. Face Mask Detection and Face Recognition of Unmasked People in Organizations
JPH11283036A (en) Object detector and object detection method
Jang et al. Skin region segmentation using an image-adapted colour model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210122)