CN113139517B - Face living body model training method, face living body model detection method, storage medium and face living body model detection system - Google Patents


Info

Publication number
CN113139517B
Authority
CN
China
Prior art keywords
living body
near infrared
face
image
visible light
Prior art date
Legal status
Active
Application number
CN202110528863.6A
Other languages
Chinese (zh)
Other versions
CN113139517A (en)
Inventor
马琳
章烈剽
柯文辉
Current Assignee
Grg Tally Vision IT Co ltd
Guangdian Yuntong Group Co ltd
Original Assignee
Grg Tally Vision IT Co ltd
GRG Banking Equipment Co Ltd
Priority date
Filing date
Publication date
Application filed by Grg Tally Vision IT Co ltd, GRG Banking Equipment Co Ltd
Priority to CN202110528863.6A
Publication of CN113139517A
Application granted
Publication of CN113139517B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application provides a face living body model training method, a detection method, a storage medium and a detection system. A visible light camera collects a photo, and the mean of the variances of the three channel differences of the photo is calculated; because the near infrared living body detection model has poor resistance to black-and-white and near-infrared photo attacks, this check judges whether the photo is a black-and-white or near-infrared photo, thereby preventing such photo attacks. Exploiting the fact that near infrared imaging is not affected by illumination conditions, only the near infrared camera is used to collect training samples, and a near infrared living body detection model is trained; no visible light living body detection model is needed. The method and system solve the problem of face living body detection under different light such as dim light, backlight, strong light and uneven light-and-shadow, and because only one near infrared face living body detection model needs to be trained, the algorithm is faster and detection efficiency is improved.

Description

Face living body model training method, face living body model detection method, storage medium and face living body model detection system
Technical Field
The application belongs to the field of intelligent living body detection in artificial intelligence, relates to face recognition technology, and in particular to a face living body model training method, a detection method, a storage medium and a detection system.
Background
Face recognition technology is applied in more and more scenes, and the security of face recognition is receiving more and more attention. One key link is living body detection of the face, namely judging whether the current face image is a real living body or a non-live face image, such as a photo containing a face, a video, a face mask or a face head model. Face living body detection is an important step before face recognition; it can effectively prevent non-living-body attacks and ensures the security of a face recognition system.
Patent CN 107358157B discloses a face living body detection method, an apparatus and an electronic device, in which a first deep learning model is trained on the global face image and a second deep learning model is trained on cropped face images; the two models are then used together for face living body detection.
Patent CN 107862299A discloses a living body face detection method based on near infrared and visible light binocular cameras, which uses LBP features of near infrared and visible light to train a living body detection model for preventing video and photo attacks.
However, both of the above patent technologies perform living body detection using visible light together with near infrared light. The detection works well under good illumination, but visible-light living body detection is easily affected by light: under illumination conditions such as dim light, backlight and strong light its accuracy drops, the practical experience is poor, and a real human living body is easily judged to be a non-living body. In addition, judging with the visible light and near infrared modalities simultaneously requires a large amount of computation and is slow.
Disclosure of Invention
In order to overcome the defects in the prior art, the application aims to provide a human face living body model training method, a detection method, a storage medium and a detection system, which can solve the problems.
Design principle:
a. The visible light camera collects the photo, and the mean of the variances of the three channel differences of the photo is calculated. Because the near infrared living body detection model has poor resistance to black-and-white and near-infrared photo attacks, this check is used to judge whether the photo is a black-and-white or near-infrared photo, thereby preventing such photo attacks.
b. By utilizing the characteristic that near infrared imaging is not influenced by illumination conditions, only a near infrared camera is used for collecting training samples, and a near infrared living body detection model is trained. There is no need to train a visible light living body detection model.
The design scheme is as follows:
a training method of a near infrared light human face living body detection model comprises the following steps:
s11, collecting a human face living body detection training sample by using a near infrared camera in a binocular camera, wherein the collected sample comprises a real human living body sample and a non-living body sample under different illumination conditions;
s12, sorting the collected samples, detecting an original image by using a face detector, cutting the detected face, and taking the cut face picture as a training sample; the face photo of the stored living body data is taken as a positive sample, and the face of the stored non-living body data is taken as a negative sample;
s13, resampling the sample to obtain a resampling diagram;
s14, carrying out convolution calculation on the resampling map, carrying out batch normalization, activating the ReLU, and carrying out maximum pooling calculation;
s15, inputting the result after the maximum pooling calculation into a dense block for calculation;
s16, inputting a result of dense block calculation into a transition block for calculation;
s17, performing 1x1 convolution operation on a result obtained by calculating the transition block, and performing sigmoid activation to obtain a feature map;
s18, linearizing the feature map, and performing sigmoid activation to obtain binary output;
S19, training with the feature map calculated in step S17 and the binary output calculated in step S18, using the binary cross entropy (BCE) classification loss as the loss function, where the loss function is: Loss = 0.5 × Loss_map + 0.5 × Loss_binary.
The application also provides a human face living body detection method adapting to different light rays, which comprises the following steps:
s21, acquiring images by using a binocular camera, and respectively acquiring visible light images and near infrared light images;
S22, carrying out face detection on the collected visible light image and near infrared light image; if no face is detected, detection continues, and if a face is detected, the method proceeds to the next step;
S23, a: calculating the variances of the three channel differences of the visible light face image: for the face cropped from the visible light image, respectively calculate the variance of channel 1 minus channel 2, the variance of channel 2 minus channel 3, and the variance of channel 3 minus channel 1;
b: resampling the near infrared face image to accord with a near infrared face living body detection model;
S24, a: calculating the mean of the variances of the visible light face image and comparing it with a set variance threshold; if the mean is smaller than the threshold, the photo is judged to be a black-and-white or grayscale photo, and if it is larger than the threshold, the photo is judged to be a color photo;
b: leading a resampling image of the near infrared face image into a trained near infrared face living body detection model for calculation, judging and outputting whether the detection is living body;
and S25, performing logical AND operation on the visible light image judgment result and the near infrared light image judgment result, wherein when the visible light image is judged to be a color photo and the near infrared light image is judged to be a living body, the detection target is a living body, and otherwise, the detection target is a non-living body.
The present application also provides a computer-readable storage medium having stored thereon computer instructions that, when executed, perform the aforementioned living detection method.
The application also provides a face living body detection system adapting to different light. The system comprises an optical image collection device and a computer connected by telecommunication. The optical image collection device is a binocular camera comprising a visible light lens and a near infrared light lens, so that a visible light image and a near infrared image are collected at the same time; the two collected images are transmitted to the computer, which receives the image data of the optical image collection device and runs the aforementioned living body detection method to judge whether the detection target is a living body. The computer includes:
the visible light image pre-judging unit is used for carrying out variance calculation on three channel difference values of the received visible light image, calculating the face intercepted by the visible light image, and respectively calculating the variance of the difference value of the face picture channel 1 minus the channel 2, the variance of the difference value of the channel 2 minus the channel 3 and the variance of the difference value of the channel 3 minus the channel 1; comparing the calculated variance with a set variance threshold value to judge whether the photo is a color photo or not;
the near infrared image pre-judging unit receives the near infrared image, resamples the near infrared face image, then guides the resampled image into a trained near infrared face living body detection model for calculation, and pre-judges whether the near infrared image is detected as a living body or not;
a living body detection comprehensive judging unit which receives the pre-judging results of the visible light image pre-judging unit and the near infrared image pre-judging unit and performs logic AND operation to judge whether the detection target is a living body;
and the result output unit receives the judging result of the living body detection comprehensive judging unit and visually displays the real-time judging result.
Compared with the prior art, the application has the following beneficial effects: the scheme exploits the fact that near infrared imaging is relatively insensitive to illumination and performs living body detection with near infrared light only, while the visible light image is used to calculate the mean of the variances of the three channel differences so as to judge whether the image is a color photo. By combining the characteristics of visible light and near infrared imaging, the application provides a living body detection method that adapts to various illumination conditions, and because only one near infrared living body detection model is used, the detection efficiency is improved.
Drawings
FIG. 1 is a schematic flow chart of a training method of a near infrared human face living body detection model;
fig. 2 is a near infrared image and a visible light image of a normal light sample;
FIG. 3 is a near infrared image and a visible image of a dark sample;
FIG. 4 is a near infrared image and a visible image of a strong light sample;
FIG. 5 is a near infrared image and a visible image of a backlight sample;
FIG. 6 is a near infrared image and a visible image of a sample under uneven light-and-shadow ("yin-yang light") conditions;
FIG. 7 is a schematic view of a face picture taken;
FIG. 8 is a flow chart of a face living body detection method adapting to different light rays;
fig. 9 is a schematic diagram of a human face living body detection system adapted to different lights.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be appreciated that "system," "apparatus," "unit," and/or "module" as used in this specification is a method for distinguishing between different components, elements, parts, portions, or assemblies at different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions.
As used in this specification and the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
A flowchart is used in this specification to describe the operations performed by the system according to embodiments of the present specification. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
First embodiment
Referring to fig. 1, the training method of the near infrared light human face living body detection model comprises the following steps:
S11, collecting face living body detection training samples by using the near infrared camera of a binocular camera, wherein the collected samples comprise real human living body samples and non-living body samples under different illumination conditions; the different illumination conditions include normal light, dim light, strong light, backlight and uneven light-and-shadow ("yin-yang light"). Referring specifically to figs. 2-6: fig. 2a is a normal-light sample of the near infrared image and fig. 2b a normal-light sample of the visible light image; fig. 3a is a dim-light sample of the near infrared image and fig. 3b a dim-light sample of the visible image; fig. 4a is a strong-light sample of the near infrared image and fig. 4b a strong-light sample of the visible image; fig. 5a is a backlight sample of the near infrared image and fig. 5b a backlight sample of the visible image; fig. 6a is an uneven-light sample of the near infrared image and fig. 6b an uneven-light sample of the visible image.
S12, sorting the collected samples, detecting an original image by using a face detector, and cutting the detected face, wherein the cut face picture is used as a training sample, as shown in FIG. 7; the face photo of the stored living body data is taken as a positive sample, and the face of the stored non-living body data is taken as a negative sample;
S13, resampling the sample to obtain a resampling map; the resampling map size is 112 × 112 px. Of course, this size is only exemplary; other picture sizes commonly used in the art fall within the protection scope of the resampling map size, as long as subsequent processing is convenient.
S14, carrying out convolution calculation on the resampling map, carrying out batch normalization, activating the ReLU, and carrying out maximum pooling calculation;
s15, inputting the result after the maximum pooling calculation into a dense block for calculation;
s16, inputting a result of dense block calculation into a transition block for calculation;
S17, performing a 1x1 convolution operation on the result calculated by the transition block and applying sigmoid activation to obtain a feature map; the feature map size is 7 × 7 px. This size is merely exemplary; other sizes commonly used in the art fall within the protection scope of the feature map size, as long as subsequent processing is convenient.
S18, linearizing the feature map, and performing sigmoid activation to obtain binary output;
s19, training by using the feature map calculated in the step S7 and the binary output calculated in the step S8 and using a cross entropy loss function BCE for classification as a loss function, wherein the loss function is:
Loss = 0.5 × Loss_map + 0.5 × Loss_binary    (Formula 1)
where
Loss_map = -(y_7x7 · log(p) + (1 - y_7x7) · log(1 - p))    (Formula 2)
in which y_7x7 is the 7×7 label matrix, with a value of 0 denoting an attack and 1 a real living body, and p is the predicted probability at each position;
Loss_binary = -(y · log(p) + (1 - y) · log(1 - p))    (Formula 3)
where y is a scalar label, with a value of 0 denoting an attack and 1 a real living body, and p is the predicted probability.
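As an illustration of Formulas 1-3, the combined loss can be sketched in a few lines of NumPy. The function names and the averaging over the 7×7 map positions are our assumptions for illustration, not taken from the patent text:

```python
import numpy as np

def bce(y, p, eps=1e-7):
    """Element-wise binary cross entropy, averaged over all elements (Formulas 2 and 3)."""
    p = np.clip(np.asarray(p, dtype=np.float64), eps, 1 - eps)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def liveness_loss(y_map, p_map, y_bin, p_bin):
    """Combined loss of Formula 1: 0.5 * Loss_map + 0.5 * Loss_binary.

    y_map : 7x7 ground-truth matrix (0 = attack, 1 = real living body)
    p_map : 7x7 predicted probabilities from the sigmoid feature map
    y_bin : scalar label; p_bin : scalar predicted probability
    """
    return 0.5 * bce(y_map, p_map) + 0.5 * bce(y_bin, p_bin)
```

For a near-perfect prediction on a real living body (all map probabilities 0.99), the loss reduces to -log(0.99), and it grows as the prediction degrades, matching the usual BCE behaviour.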
Second embodiment
A method for detecting human face living body adapting to different light rays, referring to fig. 8, the method comprises the following steps:
s21, acquiring images by using a binocular camera, and respectively acquiring visible light images and near infrared light images;
S22, carrying out face detection on the collected visible light image and near infrared light image; if no face is detected, detection continues, and if a face is detected, the method proceeds to the next step;
S23, a: calculating the variances of the three channel differences of the visible light face image: for the face cropped from the visible light image, respectively calculate the variance of channel 1 minus channel 2, the variance of channel 2 minus channel 3, and the variance of channel 3 minus channel 1;
b: resampling the near infrared face image to accord with a near infrared face living body detection model;
S24, a: calculating the mean of the variances of the visible light face image and comparing it with a set variance threshold; if the mean is smaller than the threshold, the photo is judged to be a black-and-white or grayscale photo, and if it is larger than the threshold, the photo is judged to be a color photo;
b: leading a resampling image of the near infrared face image into a trained near infrared face living body detection model for calculation, judging and outputting whether the detection is living body; specifically, the model is obtained by training according to the training method.
And S25, performing logical AND operation on the visible light image judgment result and the near infrared light image judgment result, wherein when the visible light image is judged to be a color photo and the near infrared light image is judged to be a living body, the detection target is a living body, and otherwise, the detection target is a non-living body.
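The color-photo check of steps S23a/S24a and the fusion of step S25 can be sketched as follows. The concrete threshold value and the function names are illustrative assumptions; the patent leaves the variance threshold to be set by the implementer:

```python
import numpy as np

def channel_diff_variance_mean(face_img):
    """Mean of the variances of the three pairwise channel differences (steps S23a/S24a).

    face_img: H x W x 3 array holding the face cropped from the visible light image.
    """
    c1, c2, c3 = (face_img[..., i].astype(np.float64) for i in range(3))
    variances = [np.var(c1 - c2), np.var(c2 - c3), np.var(c3 - c1)]
    return float(np.mean(variances))

def is_color_photo(face_img, var_threshold=10.0):
    # var_threshold is an illustrative value; a black-and-white or near-infrared
    # photo has nearly identical channels, so the mean variance stays near zero.
    return channel_diff_variance_mean(face_img) > var_threshold

def fuse(visible_is_color, nir_is_live):
    """Step S25: logical AND of the visible-light and near-infrared judgements."""
    return visible_is_color and nir_is_live
```

A grayscale photo has all three channels equal, so every pairwise difference is zero and the mean variance is exactly 0, which falls below any positive threshold; a genuinely colored face produces non-trivial channel differences and passes the check.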
Third embodiment
A computer readable storage medium having stored thereon computer instructions that, when executed, perform the aforementioned in-vivo detection method.
The method is described in detail in the foregoing section, and will not be described in detail here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored on a computer readable storage medium. Computer readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transmission media) such as modulated data signals and carrier waves.
Computer program code necessary for the operation of portions of the present application may be written in any one or more programming languages, including object oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or a service such as software as a service (SaaS) in a cloud computing environment may be used.
Fourth embodiment
Referring to fig. 9, the system comprises an optical image acquisition device 1 and a computer 2 which are connected by telecommunication, wherein the optical image acquisition device 1 is a binocular camera comprising a visible light lens and a near infrared light lens for simultaneously acquiring visible light images and near infrared images, and the two acquired images are transmitted to the computer 2, and the computer 2 receives the image data of the optical image acquisition device 1 and operates the living body detection method according to the second embodiment to judge whether a detection target is a living body.
The computer 2 includes a visible light image pre-judging unit, a near infrared image pre-judging unit, a living body detection comprehensive judging unit and a result output unit, which are specifically described below.
The visible light image pre-judging unit is used for carrying out variance calculation on three channel difference values of the received visible light image, calculating the face intercepted by the visible light image, and respectively calculating the variance of the difference value of the face picture channel 1 minus the channel 2, the variance of the difference value of the channel 2 minus the channel 3 and the variance of the difference value of the channel 3 minus the channel 1; and comparing the calculated variance with a set variance threshold to judge whether the photo is a color photo or not.
The near infrared image pre-judging unit receives the near infrared image, resamples the near infrared face image, then guides the resampled image into the trained near infrared face living body detection model for calculation, and pre-judges whether the near infrared image is detected as a living body or not.
And the living body detection comprehensive judgment unit receives the pre-judgment results of the visible light image pre-judgment unit and the near infrared image pre-judgment unit and performs logic AND operation to judge whether the detection target is a living body.
And the result output unit receives the judging result of the living body detection comprehensive judging unit and visually displays the real-time judging result.
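The four units of the fourth embodiment can be wired together as in the following sketch of Fig. 9. The class and attribute names are hypothetical, chosen only to mirror the unit structure described above; the two pre-judging units are passed in as callables so any color-check and liveness-model implementation can be plugged in:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LivenessSystem:
    # Visible light image pre-judging unit: returns True when the face is a color photo.
    visible_prejudge: Callable[[object], bool]
    # Near infrared image pre-judging unit: returns True when the model judges a living body.
    nir_prejudge: Callable[[object], bool]
    # Result output unit: displays/records each final judgement.
    output: Callable[[bool], None] = print

    def judge(self, visible_face, nir_face) -> bool:
        # Living body detection comprehensive judging unit: logical AND
        # of the two pre-judgements (step S25).
        is_live = bool(self.visible_prejudge(visible_face)) and bool(self.nir_prejudge(nir_face))
        self.output(is_live)
        return is_live
```

The detection target is judged live only when both units agree, matching the comprehensive judging unit's logical AND; swapping `output` for a GUI callback would give the visual display described for the result output unit.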
By the above method and system, the problem of face living body detection under different light such as dim light, backlight, strong light and uneven light-and-shadow is solved; since the scheme only needs to train one near infrared face living body detection model, the algorithm is faster and the detection efficiency is improved.
It should be noted that, the advantages that may be generated by different embodiments may be different, and in different embodiments, the advantages that may be generated may be any one or a combination of several of the above, or any other possible advantages that may be obtained.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (6)

1. The training method for the near infrared light human face living body detection model is characterized by comprising the following steps of:
s11, collecting a human face living body detection training sample by using a near infrared camera in a binocular camera, wherein the collected sample comprises a real human living body sample and a non-living body sample under different illumination conditions;
s12, sorting the collected samples, detecting an original image by using a face detector, cutting the detected face, and taking the cut face picture as a training sample; the face photo of the stored living body data is taken as a positive sample, and the face of the stored non-living body data is taken as a negative sample;
s13, resampling the sample to obtain a resampling diagram;
s14, carrying out convolution calculation on the resampling map, carrying out batch normalization, activating the ReLU, and carrying out maximum pooling calculation;
s15, inputting the result after the maximum pooling calculation into a dense block for calculation;
s16, inputting a result of dense block calculation into a transition block for calculation;
s17, performing 1x1 convolution operation on a result obtained by calculating the transition block, and performing sigmoid activation to obtain a feature map;
s18, linearizing the feature map, and performing sigmoid activation to obtain binary output;
S19, training with the feature map calculated in step S17 and the binary output calculated in step S18, using the binary cross entropy (BCE) classification loss as the loss function:
Loss = 0.5 × Loss_map + 0.5 × Loss_binary    (Formula 1)
where
Loss_map = -(y_7x7 · log(p) + (1 - y_7x7) · log(1 - p))    (Formula 2)
in which y_7x7 is the 7×7 label matrix, with a value of 0 denoting an attack and 1 a real living body, and p is the predicted probability at each position;
Loss_binary = -(y · log(p) + (1 - y) · log(1 - p))    (Formula 3)
where y is a scalar label, with a value of 0 denoting an attack and 1 a real living body, and p is the predicted probability.
2. Training method according to claim 1, characterized in that: the different illumination conditions in step S11 include normal light, dim light, strong light, backlight and uneven light-and-shadow ("yin-yang light").
3. The training method according to claim 1, characterized in that the resampled map in step S13 is 112 x 112 px and the feature map in step S17 is 7 x 7 px.
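Claims 1 and 3 together pin down the spatial sizes: a 112 x 112 input must reach a 7 x 7 feature map, i.e. a total downsampling factor of 16. One plausible reading, which is an assumption and not stated in the claims, is a stride-2 stem convolution, a stride-2 max pool, and two stride-2 poolings in the dense/transition stage; the helper below only traces those sizes.

```python
def downsampled(size, stride):
    # Spatial size after a stride-`stride` layer with 'same'-style
    # padding: ceiling division.
    return -(-size // stride)

size = 112
for layer, stride in [("stem conv", 2), ("max pool", 2),
                      ("transition 1", 2), ("transition 2", 2)]:
    size = downsampled(size, stride)
    print(f"after {layer}: {size}x{size}")
# 112 -> 56 -> 28 -> 14 -> 7, matching the 7x7 feature map of step S17
```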
4. A face living body detection method adapting to different lighting, characterized by comprising the following steps:
S21, acquiring images with a binocular camera to obtain a visible light image and a near infrared image respectively;
S22, performing face detection on the acquired visible light image and near infrared image; if no face is detected, continuing detection; if a face is detected, proceeding to the next step;
S23, a: calculating the variances of the differences between the three channels of the visible light face image: for the face cropped from the visible light image, calculating the variance of channel 1 minus channel 2, the variance of channel 2 minus channel 3, and the variance of channel 3 minus channel 1, respectively;
b: resampling the near infrared face image to fit the near infrared face living body detection model;
S24, a: calculating the mean of the three variances of the visible light face image and comparing it with a set variance threshold; if the mean is smaller than the threshold, judging the photo to be black-and-white or grayscale; if larger, judging it to be a color photo;
b: feeding the resampled near infrared face image into the near infrared face living body detection model trained by the training method of any one of claims 1 to 3, and judging and outputting whether a living body is detected;
S25, performing a logical AND on the visible light judgment and the near infrared judgment: the detection target is a living body only when the visible light image is judged to be a color photo and the near infrared image is judged to be a living body; otherwise it is a non-living body.
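The visible-light branch of steps S23 to S25 can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names, the threshold value of 10.0, and the stub `nir_is_live` flag (standing in for the near infrared model's output) are hypothetical, and the claims do not specify a threshold.

```python
import numpy as np

def is_color_photo(face_img, var_threshold=10.0):
    # Steps S23a/S24a: variances of the pairwise channel differences.
    # A replayed grayscale or printed photo has near-identical channels,
    # so the differences are almost constant and their variances are tiny.
    c1 = face_img[..., 0].astype(float)
    c2 = face_img[..., 1].astype(float)
    c3 = face_img[..., 2].astype(float)
    variances = [np.var(c1 - c2), np.var(c2 - c3), np.var(c3 - c1)]
    return float(np.mean(variances)) > var_threshold

def detect_liveness(face_img, nir_is_live):
    # Step S25: logical AND of the visible-light and near-infrared decisions
    return is_color_photo(face_img) and nir_is_live

# A grayscale image (identical channels) is rejected even if the NIR
# branch reports a living body.
gray = np.repeat(np.random.randint(0, 256, (112, 112, 1)), 3, axis=2)
result = detect_liveness(gray, True)  # False
```

The AND fusion means each branch only has to veto one attack class: the variance check catches monochrome prints and screens, while the near infrared model catches color reproductions that lack a living face's infrared reflectance.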
5. A computer-readable storage medium having computer instructions stored thereon, characterized in that the computer instructions, when executed, perform the living body detection method of claim 4.
6. A face living body detection system adapting to different lighting, characterized in that the system comprises an optical image acquisition device (1) and a computer (2) that are communicatively connected; the optical image acquisition device (1) is a binocular camera comprising a visible light lens and a near infrared lens, so as to simultaneously acquire a visible light image and a near infrared image and transmit both images to the computer (2); the computer (2) receives the image data from the optical image acquisition device (1) and runs the living body detection method of claim 4 to judge whether the detection target is a living body; the computer (2) comprises:
a visible light image pre-judging unit, which calculates the variances of the three channel differences of the received visible light image: for the face cropped from the visible light image, it calculates the variance of channel 1 minus channel 2, the variance of channel 2 minus channel 3, and the variance of channel 3 minus channel 1, and compares the mean of the calculated variances with a set variance threshold to judge whether the photo is a color photo;
a near infrared image pre-judging unit, which receives the near infrared image, resamples the near infrared face image, feeds the resampled image into the trained near infrared face living body detection model, and pre-judges whether a living body is detected;
a living body detection comprehensive judging unit, which receives the pre-judgments of the two pre-judging units and performs a logical AND to judge whether the detection target is a living body; and
a result output unit, which receives the judgment of the living body detection comprehensive judging unit and visually displays the real-time judgment result.
CN202110528863.6A 2021-05-14 2021-05-14 Face living body model training method, face living body model detection method, storage medium and face living body model detection system Active CN113139517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110528863.6A CN113139517B (en) 2021-05-14 2021-05-14 Face living body model training method, face living body model detection method, storage medium and face living body model detection system

Publications (2)

Publication Number Publication Date
CN113139517A CN113139517A (en) 2021-07-20
CN113139517B true CN113139517B (en) 2023-10-27

Family

ID=76816996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110528863.6A Active CN113139517B (en) 2021-05-14 2021-05-14 Face living body model training method, face living body model detection method, storage medium and face living body model detection system

Country Status (1)

Country Link
CN (1) CN113139517B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862299A (en) * 2017-11-28 2018-03-30 电子科技大学 A kind of living body faces detection method based on near-infrared Yu visible ray binocular camera
CN108764298A (en) * 2018-04-29 2018-11-06 天津大学 Electric power image-context based on single classifier influences recognition methods
CN108898112A (en) * 2018-07-03 2018-11-27 东北大学 A kind of near-infrared human face in-vivo detection method and system
CN109272048A (en) * 2018-09-30 2019-01-25 北京工业大学 A kind of mode identification method based on depth convolutional neural networks
CN109346159A (en) * 2018-11-13 2019-02-15 平安科技(深圳)有限公司 Case image classification method, device, computer equipment and storage medium
CN110516576A (en) * 2019-08-20 2019-11-29 西安电子科技大学 Near-infrared living body faces recognition methods based on deep neural network
CN111222380A (en) * 2018-11-27 2020-06-02 杭州海康威视数字技术股份有限公司 Living body detection method and device and recognition model training method thereof
CN111339369A (en) * 2020-02-25 2020-06-26 佛山科学技术学院 Video retrieval method, system, computer equipment and storage medium based on depth features
CN111398291A (en) * 2020-03-31 2020-07-10 南通远景电工器材有限公司 Flat enameled electromagnetic wire surface flaw detection method based on deep learning
CN111931594A (en) * 2020-07-16 2020-11-13 广州广电卓识智能科技有限公司 Face recognition living body detection method and device, computer equipment and storage medium
CN112183454A (en) * 2020-10-15 2021-01-05 北京紫光展锐通信技术有限公司 Image detection method and device, storage medium and terminal
CN112507922A (en) * 2020-12-16 2021-03-16 平安银行股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN112766365A (en) * 2021-01-18 2021-05-07 南京多金网络科技有限公司 Training method of neural network for intelligent shadow bending detection


Similar Documents

Publication Publication Date Title
US11457138B2 (en) Method and device for image processing, method for training object detection model
US10896323B2 (en) Method and device for image processing, computer readable storage medium, and electronic device
CN109636754B (en) Extremely-low-illumination image enhancement method based on generation countermeasure network
US20200410212A1 (en) Fast side-face interference resistant face detection method
WO2021022983A1 (en) Image processing method and apparatus, electronic device and computer-readable storage medium
US11700457B2 (en) Flicker mitigation via image signal processing
CN108734684B (en) Image background subtraction for dynamic illumination scene
CN112424795B (en) Face anti-counterfeiting method, processor chip and electronic equipment
CN104182721A (en) Image processing system and image processing method capable of improving face identification rate
US20210350129A1 (en) Using neural networks for object detection in a scene having a wide range of light intensities
US8457423B2 (en) Object-based optical character recognition pre-processing algorithm
CN113065374A (en) Two-dimensional code identification method, device and equipment
CN114463389B (en) Moving target detection method and detection system
CN113139924A (en) Image enhancement method, electronic device and storage medium
CN113139517B (en) Face living body model training method, face living body model detection method, storage medium and face living body model detection system
CN110688926B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN108881740B (en) Image method and device, electronic equipment and computer readable storage medium
CN116152191A (en) Display screen crack defect detection method, device and equipment based on deep learning
CN112949423B (en) Object recognition method, object recognition device and robot
Wang et al. An image edge detection algorithm based on multi-feature fusion
CN114842399B (en) Video detection method, training method and device for video detection model
CN116597527B (en) Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN115762178B (en) Intelligent electronic police violation detection system and method
CN117671472B (en) Underwater multi-target group identification method based on dynamic visual sensor
KR102521524B1 (en) Image processing apparatus for super resolution considering a characteristics of object and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: No. 001-030, Yuntong space, office building, No. 9, Kelin Road, Science City, Guangzhou hi tech Industrial Development Zone, Guangzhou, Guangdong 510000
Patentee after: GRG TALLY-VISION I.T. Co.,Ltd.; Guangdian Yuntong Group Co.,Ltd. (China)
Address before: No. 001-030, Yuntong space, office building, No. 9, Kelin Road, Science City, Guangzhou hi tech Industrial Development Zone, Guangzhou, Guangdong 510000
Patentee before: GRG TALLY-VISION I.T. Co.,Ltd.; GRG BANKING EQUIPMENT Co.,Ltd. (China)
TR01 Transfer of patent right
Effective date of registration: 20240123
Address after: No. 001-030, Yuntong space, office building, No. 9, Kelin Road, Science City, Guangzhou hi tech Industrial Development Zone, Guangzhou, Guangdong 510000
Patentee after: GRG TALLY-VISION I.T. Co.,Ltd. (China)
Patentee before: GRG TALLY-VISION I.T. Co.,Ltd.; Guangdian Yuntong Group Co.,Ltd. (China)