CN111382646B - Living body identification method, storage medium and terminal equipment - Google Patents

Living body identification method, storage medium and terminal equipment

Info

Publication number
CN111382646B
Authority
CN
China
Prior art keywords
identified
image data
living body
image
signal
Prior art date
Legal status
Active
Application number
CN201811647362.4A
Other languages
Chinese (zh)
Other versions
CN111382646A (en)
Inventor
蒋佳 (Jiang Jia)
Current Assignee
TCL Technology Group Co Ltd
Original Assignee
TCL Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by TCL Technology Group Co Ltd filed Critical TCL Technology Group Co Ltd
Priority to CN201811647362.4A priority Critical patent/CN111382646B/en
Publication of CN111382646A publication Critical patent/CN111382646A/en
Application granted granted Critical
Publication of CN111382646B publication Critical patent/CN111382646B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/117 - Identification of persons
    • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 - Details of waveform analysis
    • A61B 5/7253 - Details of waveform analysis characterised by using transforms
    • A61B 5/7257 - Details of waveform analysis characterised by using transforms using Fourier transforms
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 - Details of waveform analysis
    • A61B 5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/14 - Vascular patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/15 - Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a living body identification method, a storage medium and terminal equipment. The method comprises the following steps: providing image data to be identified, where the image data comprises N consecutive frames of images to be identified and N is a positive integer greater than 1; acquiring, from the RGB signals of each frame, the chrominance signal corresponding to that frame; performing a Fourier transform on the chrominance signals to obtain N frequency-domain sequences, analysing the N frequency-domain sequences with a recurrent neural network, and iterating to obtain heartbeat data; judging whether the heartbeat data exceeds a preset heartbeat threshold range and, if so, judging that the image data to be identified does not contain a living body; if the heartbeat data does not exceed the preset heartbeat threshold range, judging whether the image data to be identified contains moiré; if so, judging that the image data to be identified does not contain a living body, otherwise judging that it contains a living body. The invention solves the problem that living body identification methods in the prior art are not accurate enough.

Description

Living body identification method, storage medium and terminal equipment
Technical Field
The invention relates to the technical field of security, and in particular to a living body identification method based on an ordinary camera, a storage medium and a terminal device.
Background
At present, a large number of financial and security systems use face photographs captured by webcams for remote monitoring, which not only improves the accuracy of face identification but also saves a great deal of manpower and material resources. However, fraud in which false pictures are uploaded, or face photographs and videos are re-shot from a screen, has also attracted widespread social attention. Face liveness detection has therefore become an important security authentication measure and is widely applied in fields such as finance and security. Liveness detection techniques have received growing attention in recent years because of their broad and promising applications. However, existing face liveness detection imposes high equipment requirements and is neither fast nor accurate enough.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a living body identification method, a storage medium and a terminal device, which aim to solve the problem that the living body identification method in the prior art is not accurate enough.
The technical scheme of the invention is as follows:
a living body identification method, comprising the steps of:
providing image data to be identified, wherein the image data comprises N continuous frames of images to be identified, and N is a positive integer greater than 1;
acquiring a chrominance signal corresponding to each frame of image to be identified through RGB signals of the image;
carrying out a Fourier transform on the chrominance signals to obtain N frequency-domain sequences, analysing the N frequency-domain sequences with a recurrent neural network, and iterating to obtain heartbeat data;
judging whether the heartbeat data exceeds a preset heartbeat threshold value range, if so, judging that the image data to be identified does not contain a living body;
if the heartbeat data is judged not to exceed the preset heartbeat threshold range, judging whether the image data to be identified contains moiré; if so, judging that the image data to be identified does not contain a living body, otherwise judging that the image data to be identified contains a living body.
The living body identification method, wherein the step of acquiring the chrominance signal corresponding to each frame of image to be identified through the RGB signal of the image specifically comprises the following steps:
filtering the R-channel, G-channel and B-channel signals of the image respectively to obtain the R_f, G_f and B_f signals;
performing an orthogonal chrominance calculation on the R_f, G_f and B_f signals to obtain X_f = 3R_f - 2G_f and Y_f = 1.5R_f + G_f - 1.5B_f;
calculating the chrominance signal rPPG = 3×(1 - Rate/2)×R_f - 2×(1 + Rate/2)×G_f + 3×Rate/2×B_f, where Rate = X_f/Y_f.
The living body identification method, wherein the filtering is performed by a band-pass filter.
In the living body identification method, the recurrent neural network is trained in advance.
The living body identification method, wherein the step of judging whether the image data to be identified contains moiré and, if so, judging that the image data to be identified does not contain a living body, otherwise judging that it contains a living body, specifically comprises the following steps:
converting M of the N frames of images to be identified from RGB into grayscale images, performing a Fourier transform to obtain M frequency-domain sequences, feeding the M frequency-domain sequences to a recurrent neural network for aggregation analysis, and deriving a moiré output value;
judging whether the moiré output value is larger than a moiré threshold; if so, judging that the image data to be identified does not contain a living body, otherwise judging that it contains a living body.
The living body identification method, before the step of acquiring the chrominance signal corresponding to each frame of image to be identified through the RGB signals of the image, further comprises the steps of: each frame of image is compressed in advance.
In the living body identification method, the shooting rate of N frames of face images is greater than or equal to 6 frames/second, and the acquisition time is greater than or equal to 1 second.
The living body identification method, wherein after the step of providing the image data to be identified, further comprises the steps of: judging whether the image data to be identified is consistent with the preset data or not, and if not, judging that the image data to be identified is not matched.
A storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded by a processor and to perform a method of living body identification as described above.
A terminal device, comprising:
the camera is used for providing image data to be identified;
a processor adapted to implement instructions; and
a storage medium adapted to store a plurality of instructions adapted to be loaded by a processor and to perform a method of living body identification as described above.
The beneficial effects are that: according to the invention, multiple frames of images to be identified are acquired and the corresponding chrominance signals are obtained; the chrominance signals are Fourier-transformed and fed into a recurrent neural network, which analyses them and iterates to obtain heartbeat data. If the heartbeat data exceeds a preset heartbeat threshold range, it can be directly judged that the image data to be identified does not contain a living body; if the heartbeat data lies within the heartbeat threshold range, whether the image data to be identified contains a living body is further judged according to whether moiré is present in the image data to be identified. This solves the problem that living body identification methods in the prior art are not accurate enough.
Drawings
Fig. 1 is a flowchart of a method for identifying a living body according to a preferred embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a terminal device according to the present invention.
Detailed Description
The invention provides a living body identification method, a storage medium and a terminal device. To make the purposes, technical solutions and effects of the invention clearer and more definite, the invention is described in further detail below. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
A living body identification method, as shown in fig. 1, comprising the steps of:
s1, providing image data to be identified, wherein the image data comprises N continuous frames of images to be identified, and N is a positive integer greater than 1;
s2, acquiring a chrominance signal corresponding to each frame of image to be identified through RGB signals of the image;
s3, carrying out Fourier transformation on the chrominance signals to obtain N frequency domain sequences, carrying out cyclic neural network analysis on the N frequency domain sequences through a network, and iterating out heartbeat data;
s4, judging whether the heartbeat data exceeds a preset heartbeat threshold range, if so, judging that the image data to be identified does not contain a living body;
and S5, if the heartbeat data is judged not to exceed the preset heartbeat threshold range, judging whether the image data to be identified contains moiré; if so, judging that the image data to be identified does not contain a living body, otherwise judging that the image data to be identified contains a living body.
The living body identification method can be implemented with an ordinary camera. N consecutive frames of images to be identified are obtained by continuous shooting, a corresponding chrominance signal is obtained from each frame of image to be identified, and the frequency-domain sequences obtained by Fourier-transforming the N chrominance signals are fed into a recurrent neural network for analysis, which iterates to obtain heartbeat data. If the heartbeat data exceeds a preset heartbeat threshold range, it can be directly judged that the image data to be identified does not contain a living body; if the heartbeat data lies within the heartbeat threshold range, whether the image data to be identified contains a living body is further judged according to whether moiré is present in the image data to be identified. The living body identification method therefore has the characteristics of high accuracy, low cost, convenience and speed.
Specifically, in step S1, when a face is tracked in the camera view, the camera is controlled to continuously capture N frames of face images as the images to be identified, so that the heartbeat data of the current object to be detected can be calculated from the N consecutive frames of images to be identified.
In the invention, a chrominance signal that reflects the blood-flow state of the human body is obtained from the face images and then converted into a frequency-domain signal by a Fourier transform, so that the corresponding dominant frequency signal can be obtained. The chrominance signal (rPPG: remote photoplethysmography) is obtained by tracking the reflection of visible light from human tissue such as the skin. Biologically, the reflectance of facial skin to visible light changes over the cardiac cycle, as blood is pumped out of the heart and flows back to it. Although this change is not noticeable to the naked eye, its frequency can be recovered from the rPPG signal. A frequency signal cannot be reliably extracted from a single frame, and lighting changes during capture further destabilise it; multiple consecutive face images are therefore captured to reduce, as far as possible, the fluctuation of the frequency signal caused by lighting changes.
Preferably, in the living body recognition method, the capture rate of the N frames of face images is greater than or equal to 6 frames/second and the acquisition time is greater than or equal to 1 second; if the acquisition time is too short, a complete cycle of blood flowing out of the heart and returning to it may not be captured. According to the Nyquist sampling theorem, since the upper limit of the human heart rate is generally 160 beats/min, i.e. less than 3 Hz, the frequency can in theory be recovered as long as the sampling rate along the time axis reaches 6 frames/second. In the present invention, 12 frames/second is used.
If the input is a re-shot (flipped) photograph, the heart rate of a real person cannot be detected, because the person in the photograph is static and non-living, and there is no change in the skin's reflection of light caused by systole and diastole.
Before step S2, each frame of image may be compressed in advance to reduce the amount of computation and improve processing efficiency. Specifically, the picture may be scaled down; for example, a 1080p input image may be scaled to 500×500.
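As a small illustration only (OpenCV is an assumption of this sketch, not part of the method), the pre-scaling step could look like the following:

    import cv2

    def downscale(frame, size=(500, 500)):
        """Scale a frame down (e.g. from 1080p) to reduce the computation of the later steps."""
        return cv2.resize(frame, size, interpolation=cv2.INTER_AREA)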
Specifically, in step S2, the R-channel, G-channel and B-channel signals of the provided image to be identified are filtered separately to obtain the corresponding R_f, G_f and B_f signals. An orthogonal chrominance calculation is then performed on R_f, G_f and B_f to obtain X_f = 3R_f - 2G_f and Y_f = 1.5R_f + G_f - 1.5B_f, and the chrominance signal is calculated as rPPG = 3×(1 - Rate/2)×R_f - 2×(1 + Rate/2)×G_f + 3×Rate/2×B_f, where Rate = X_f/Y_f, thereby obtaining the chrominance signal corresponding to the current frame of the image to be identified.
Filtering the R-channel, G-channel and B-channel signals of the image to be identified separately reduces the noise introduced by the environment and the camera; preferably, a band-pass filter is used for the filtering. In addition, taking the R-channel, G-channel and B-channel signals into account simultaneously further improves the accuracy of the resulting chrominance signal, because a human face can hardly be kept absolutely still and is usually accompanied by slight jitter, so the values acquired from each frame carry pixel-level deviations.
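For illustration only, the per-frame chrominance computation described above can be sketched as follows; the use of NumPy/SciPy, the 0.65-2.7 Hz band-pass limits (roughly the 40-160 beats/min range), and the assumption that the mean R, G and B values of the tracked face region have already been collected for each frame are choices made for this sketch, not requirements of the method.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def chrominance_signal(r_mean, g_mean, b_mean, fps, low_hz=0.65, high_hz=2.7):
        """Compute the rPPG chrominance signal from per-frame mean R, G, B values.

        r_mean, g_mean, b_mean: 1-D arrays of length N (one mean value per frame
        of the tracked face region); fps: capture rate (>= 6 frames/second).
        Assumes a few seconds of frames so the filter has enough samples to work with.
        """
        nyq = fps / 2.0
        # Band-pass filtering suppresses noise introduced by the environment and the camera.
        b_coef, a_coef = butter(2, [low_hz / nyq, high_hz / nyq], btype="band")
        rf = filtfilt(b_coef, a_coef, r_mean)
        gf = filtfilt(b_coef, a_coef, g_mean)
        bf = filtfilt(b_coef, a_coef, b_mean)

        # Orthogonal chrominance components, per the formulas above.
        xf = 3 * rf - 2 * gf
        yf = 1.5 * rf + gf - 1.5 * bf

        # Final chrominance signal with Rate = X_f / Y_f (taken element-wise here;
        # Y_f is assumed to be non-zero over the window).
        rate = xf / yf
        return 3 * (1 - rate / 2) * rf - 2 * (1 + rate / 2) * gf + 3 * (rate / 2) * bf

In a full pipeline, the resulting rPPG values for the N frames would then be Fourier-transformed and passed to the recurrent network of step S3.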
In step S3, the N frequency-domain sequences obtained by Fourier-transforming the chrominance signals of step S2 are analysed as a multi-frame frequency-domain signal by a recurrent neural network (RNN), and the heartbeat data is obtained by iteration. The RNN is a sequence-classification network: it examines the rPPG value output for each frame, frame by frame, takes these rPPG values as the network input, and finally produces a frequency prediction output value Beta (i.e. the heartbeat data obtained from the chrominance signal).
Specifically, the RNN can be obtained by taking an existing recurrent neural network, adding a loss function defined on the target result value, and then optimising and training it. The error function is obtained by computing the Euclidean distance between Beta and the true heartbeat frequency value GTf, averaged over the N outputs (the factor 1/N), where Nf denotes the sequence length used in computing the output, i.e. the number of image frames, and Wr denotes the parameters of the RNN.
when training the RNN, detecting the error between Beta and the detected real heartbeat value GTf through the error function, and if the error exceeds a preset error value, namely the error is overlarge, controlling the RNN to modify the gradient value and optimizing the parameter Wr until the error is within a reasonable range, thereby completing the training process.
In step S4, it is judged whether the heartbeat data obtained in step S3 exceeds a preset heartbeat threshold range; if so, the image data to be identified does not contain a living body. The preset heartbeat threshold range is 40-160 beats per minute. Photo fraud can be detected from the heartbeat data because the normal human heart rate lies between 40 and 160 beats per minute; if the detected rate is below 40, the input is considered a re-shot photograph. Strictly speaking, for a photograph the obtained heartbeat value is close or equal to 0.
The object to be detected may, however, be a video supplied by an attacker. Because a video contains multiple frames, the chrominance-signal analysis of a video can still yield a heart-rate value in the normal human range. So if the heartbeat data obtained in step S4 lies within the preset heartbeat threshold range, the image data to be identified cannot be judged by this criterion alone, and a further judgement is required as to whether moiré is present in the image data to be identified. That is, as in step S5, if the heartbeat data does not exceed the preset heartbeat threshold range, it is judged whether the image data to be identified contains moiré; if so, the image data to be identified does not contain a living body, otherwise it is judged to contain a living body.
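Putting the two checks together, the decision logic of steps S4 and S5 can be sketched as follows; the 40-160 beats/min window comes from the description above, while the moiré threshold value and the function boundaries are illustrative only.

    HEART_RATE_RANGE = (40, 160)   # beats per minute, per the description above
    MOIRE_THRESHOLD = 0.5          # illustrative value; the method only requires "a preset threshold"

    def is_live(heart_rate, moire_score):
        """Two-stage liveness decision: heartbeat plausibility first, then the moire check."""
        low, high = HEART_RATE_RANGE
        if not (low <= heart_rate <= high):
            return False    # S4: implausible heart rate -> re-shot photo, no living body
        if moire_score > MOIRE_THRESHOLD:
            return False    # S5: moire detected -> screen re-shoot (video attack), no living body
        return True         # plausible heartbeat and no moire -> living body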
The moiré arises as follows: the camera's photosensitive element has a spatial frequency, and the photographed object (such as an image on a liquid-crystal screen) is itself presented by the regularly arranged liquid-crystal pixels of that screen, which impose their own spatial distribution. When the spatial frequency of the camera's photosensitive element is close to the spatial distribution of the image displayed on the liquid-crystal screen, a beat pattern is produced and moiré appears. Of course, an attacker who knows how moiré is produced may reduce it as much as possible by changing the focal length or rotating the re-shooting device (i.e. by changing the angle). In that case an RNN can again be used to aggregate the multi-frame frequency-domain signal: specifically, M of the N face-image frames of the object to be detected are converted from RGB into grayscale images and Fourier-transformed to obtain M frequency-domain sequences, and the M frequency-domain sequences are fed to a recurrent neural network for aggregation analysis, which produces a combined moiré output value. A moiré threshold is preset; when the RNN moiré output value is greater than the moiré threshold, it is judged that the image data to be identified does not contain a living body and a video re-shoot warning is given, otherwise it is judged that the image data to be identified contains a living body.
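For illustration, the moiré branch can be sketched in the same spirit: convert M frames to grayscale, take the magnitude of their 2-D Fourier transforms, and let a trained recurrent scorer aggregate the resulting sequence into a single moiré output value. The radial binning and the scorer callable below are assumptions made only to keep the sketch self-contained.

    import numpy as np

    def grayscale_spectrum(frame_rgb, bins=32):
        """One frame: RGB -> grayscale -> 2-D FFT magnitude -> compact radial profile."""
        gray = frame_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])     # luminance conversion
        mag = np.abs(np.fft.fftshift(np.fft.fft2(gray)))                # frequency-domain magnitude
        h, w = mag.shape
        yy, xx = np.mgrid[0:h, 0:w]
        radius = np.hypot(yy - h / 2, xx - w / 2)
        # Average the magnitude over concentric rings; moire from screen re-shoots
        # tends to add energy at mid and high spatial frequencies.
        edges = np.linspace(0, radius.max() + 1e-6, bins + 1)
        profile = [mag[(radius >= lo) & (radius < hi)].mean()
                   for lo, hi in zip(edges[:-1], edges[1:])]
        return np.log1p(np.array(profile))

    def moire_output_value(frames, scorer):
        """Stack M per-frame spectra into a sequence and let a trained RNN scorer aggregate them."""
        seq = np.stack([grayscale_spectrum(f) for f in frames])         # shape (M, bins)
        return float(scorer(seq))                                       # scalar moire output value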
Similarly, for the moiré estimation signal EstM and the ground-truth signal GTM of the j-th frame, an error function can be derived for this RNN in the same way, where Mj is the moiré mean value of the j-th frame of the face image and Wm denotes the parameters of the RNN.
Likewise, the RNN used here may be obtained by training an existing recurrent neural network; the error function is used to drive the RNN to adjust along the gradient and optimise its parameters until the error falls within an acceptable range, completing the training process.
In general, the invention uses multi-frame frequency-domain signals to detect the moiré that readily appears in video attacks, which effectively improves the probability of detecting such moiré.
In addition, since living body recognition is intended to increase the accuracy of face recognition and to prevent an unlawful person from attacking with a re-shot photo or video, after the step of providing the image data to be recognised, which comprises N consecutively captured frames, it is possible to directly judge whether the current face data in the image data to be recognised is consistent with the preset face data (i.e. whether the face image of the object to be identified matches the preset face data); if not, the image data to be recognised is judged not to match, the procedure ends, and the subsequent operations are omitted. Of course, this check may instead be performed after step S4, i.e. once it has been determined that the image data to be recognised contains a living body, it may be further judged whether the data in the image data to be recognised is consistent with the preset data.
The living body identification method can be implemented with an ordinary camera, and the algorithm is low-cost, convenient and fast. It works well against common photo and video fraud, and also gives a good identification result against 3D silicone-mask fraud.
Based on the above method, the present invention also provides a storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded by a processor and to perform the living body identification method as described above.
Based on the above method, the present invention further provides a terminal device which, as shown in Fig. 2, comprises:
a camera 100 for providing image data to be recognized;
a processor 200 adapted to implement instructions; and
a storage medium 300 adapted to store a plurality of instructions adapted to be loaded by a processor and to perform a method of living body identification as described above, the method of living body identification comprising the steps of:
providing image data to be identified, wherein the image data comprises N continuous frames of images to be identified;
acquiring a chrominance signal corresponding to each frame of image to be identified through RGB signals of the image;
carrying out a Fourier transform on the chrominance signals to obtain N frequency-domain sequences, analysing the N frequency-domain sequences with a recurrent neural network, and iterating to obtain heartbeat data;
judging whether the heartbeat data exceeds a preset heartbeat threshold value range, if so, judging that the image data to be identified does not contain a living body;
if the heartbeat data is judged not to exceed the preset heartbeat threshold range, judging whether the image data to be identified contains moiré; if so, judging that the image data to be identified does not contain a living body, otherwise judging that the image data to be identified contains a living body.
The technical details and advantages of the above terminal device and storage medium have been described in detail in connection with the above method and are not repeated here.
In summary, in the living body identification method provided by the invention, multiple frames of images of the object to be detected are acquired and the corresponding chrominance signals are obtained; the chrominance signals are Fourier-transformed and fed into a recurrent neural network, which analyses them and iterates to obtain heartbeat data. If the heartbeat data exceeds the preset heartbeat threshold range, it can be directly judged that the image data to be identified does not contain a living body; if the heartbeat data lies within the heartbeat threshold range, whether the image data to be identified contains a living body is further judged according to whether moiré is present in the images to be identified. This effectively reduces the influence of noise and of the instability of single-frame signals, and greatly improves the robustness and accuracy of the resulting liveness detection. In addition, because the computational load of the designed RNN is small, an inference result can be obtained quickly, achieving real-time detection.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (9)

1. A living body identification method, characterized by comprising the steps of:
providing image data to be identified, wherein the image data comprises N continuous frames of images to be identified, and N is a positive integer greater than 1;
acquiring a chrominance signal corresponding to each frame of image to be identified through RGB signals of the image;
the step of obtaining the chrominance signal corresponding to each frame of image to be identified through the RGB signals of the image specifically comprises the following steps:
filtering the R-channel, G-channel and B-channel signals of the image respectively to obtain the R_f, G_f and B_f signals;
performing an orthogonal chrominance calculation on the R_f, G_f and B_f signals to obtain X_f = 3R_f - 2G_f and Y_f = 1.5R_f + G_f - 1.5B_f;
calculating the chrominance signal rPPG = 3×(1 - Rate/2)×R_f - 2×(1 + Rate/2)×G_f + 3×Rate/2×B_f, where Rate = X_f/Y_f;
carrying out a Fourier transform on the chrominance signals to obtain N frequency-domain sequences, analysing the N frequency-domain sequences with a recurrent neural network, the network examining the rPPG value output for each frame of image frame by frame and taking the rPPG values as the input of the network, and iterating to obtain heartbeat data;
judging whether the heartbeat data exceeds a preset heartbeat threshold value range, if so, judging that the image data to be identified does not contain a living body;
if the heartbeat data is judged not to exceed the preset heartbeat threshold range, judging whether the image data to be identified contains moiré; if so, judging that the image data to be identified does not contain a living body, otherwise judging that the image data to be identified contains a living body.
2. The living body identification method according to claim 1, wherein the filtering is performed by a band-pass filter.
3. The living body identification method according to claim 1, wherein the recurrent neural network is previously trained.
4. The living body recognition method according to claim 1, wherein the step of determining whether there is moiré in the image data to be recognized and, if so, determining that the image data to be recognized does not contain a living body, otherwise determining that it contains a living body, specifically comprises:
converting M of the N frames of images to be identified from RGB into grayscale images, performing a Fourier transform to obtain M frequency-domain sequences, feeding the M frequency-domain sequences to a recurrent neural network for aggregation analysis, and deriving a moiré output value;
judging whether the moiré output value is larger than a moiré threshold; if so, judging that the image data to be identified does not contain a living body, otherwise judging that it contains a living body.
5. The living body recognition method according to claim 1, characterized by further comprising, before the step of acquiring a chromaticity signal corresponding to an image to be recognized for each frame from RGB signals of the image, the steps of: each frame of image is compressed in advance.
6. The living body recognition method according to claim 1, wherein a shooting rate of N frames of images to be recognized is 6 frames/sec or more, and an acquisition time is 1 sec or more.
7. The living body recognition method according to claim 1, characterized by further comprising, after the step of providing the image data to be recognized, the step of: judging whether the image data to be identified is consistent with the preset data or not, and if not, judging that the image data to be identified is not matched.
8. A storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the method of identifying a living organism according to any of claims 1 to 7.
9. A terminal device, comprising:
the camera is used for providing image data to be identified;
a processor adapted to implement instructions; and
a storage medium adapted to store a plurality of instructions adapted to be loaded by a processor and to perform the living body identification method according to any one of claims 1 to 7.
CN201811647362.4A 2018-12-29 2018-12-29 Living body identification method, storage medium and terminal equipment Active CN111382646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811647362.4A CN111382646B (en) 2018-12-29 2018-12-29 Living body identification method, storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811647362.4A CN111382646B (en) 2018-12-29 2018-12-29 Living body identification method, storage medium and terminal equipment

Publications (2)

Publication Number Publication Date
CN111382646A (en) 2020-07-07
CN111382646B true CN111382646B (en) 2023-09-05

Family

ID=71214978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811647362.4A Active CN111382646B (en) 2018-12-29 2018-12-29 Living body identification method, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN111382646B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329606B (en) * 2020-11-03 2023-08-15 济南博观智能科技有限公司 Living body detection method, living body detection device, electronic equipment and readable storage medium
CN113158773B (en) * 2021-03-05 2024-03-22 普联技术有限公司 Training method and training device for living body detection model


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016197298A1 (en) * 2015-06-08 2016-12-15 北京旷视科技有限公司 Living body detection method, living body detection system and computer program product
CN105989357A (en) * 2016-01-18 2016-10-05 合肥工业大学 Human face video processing-based heart rate detection method
CN106599872A (en) * 2016-12-23 2017-04-26 北京旷视科技有限公司 Method and equipment for verifying living face images
CN106845395A (en) * 2017-01-19 2017-06-13 北京飞搜科技有限公司 A kind of method that In vivo detection is carried out based on recognition of face
CN108108676A (en) * 2017-12-12 2018-06-01 北京小米移动软件有限公司 Face identification method, convolutional neural networks generation method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-cue fusion face liveness detection based on a fine-tuning strategy; Hu Fei; Wen Chang; Xie Kai; He Jianbiao; Computer Engineering (No. 05); full text *

Also Published As

Publication number Publication date
CN111382646A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
JP6863408B2 (en) Information processing equipment, information processing methods and programs
US20230274577A1 (en) Device and method with image matching
CN108764071B (en) Real face detection method and device based on infrared and visible light images
US7912252B2 (en) Time-of-flight sensor-assisted iris capture system and method
EP3192008B1 (en) Systems and methods for liveness analysis
WO2018040307A1 (en) Vivo detection method and device based on infrared visible binocular image
US8203602B2 (en) Depth-aware blur kernel estimation method for iris deblurring
CN104254876B (en) Apparatus and method for extracting information from the characteristic signal of remote detection
US8374389B2 (en) Iris deblurring method based on global and local iris image statistics
JP6822482B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
CN108596041A (en) A kind of human face in-vivo detection method based on video
CN107292269B (en) Face image false distinguishing method based on perspective distortion characteristic, storage and processing equipment
WO2018232717A1 (en) Method, storage and processing device for identifying authenticity of human face image based on perspective distortion characteristics
JP2013065119A (en) Face authentication device and face authentication method
CN112487921A (en) Face image preprocessing method and system for living body detection
CN109146906A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN111382646B (en) Living body identification method, storage medium and terminal equipment
CN109327626A (en) Image-pickup method, device, electronic equipment and computer readable storage medium
JP2013228930A (en) Subject area detection device, method and program for controlling the same, imaging device and display device
CN110349163A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN108629333A (en) A kind of face image processing process of low-light (level), device, equipment and readable medium
CN109064458A (en) A kind of incorrect sitting-pose intelligent prompt method and device
JP2009258770A (en) Image processing method, image processor, image processing program, and imaging device
CN108062741B (en) Binocular image processing method, imaging device and electronic equipment
JP2021061458A (en) Information processor, information processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

GR01 Patent grant