CN113128255A - Living body detection method, device, chip, and computer-readable storage medium - Google Patents


Info

Publication number
CN113128255A
CN113128255A (application CN201911392538.0A)
Authority
CN
China
Prior art keywords
image
face
visible light
light image
living body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911392538.0A
Other languages
Chinese (zh)
Inventor
曹立
周誉昇
何�轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yitu Network Science and Technology Co Ltd
Original Assignee
Shanghai Yitu Network Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yitu Network Science and Technology Co Ltd filed Critical Shanghai Yitu Network Science and Technology Co Ltd
Priority to CN201911392538.0A priority Critical patent/CN113128255A/en
Publication of CN113128255A publication Critical patent/CN113128255A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a living body detection method, a device, a chip, and a computer-readable storage medium. The living body detection method includes: acquiring a visible light image and an infrared light image; performing face detection on the visible light image; when a visible light face image whose quality meets a preset condition is detected, merging the visible light image and the infrared light image to obtain a merged image; performing living body detection on the face in the merged image through a neural-network-based living body classification model; and performing living body detection on the face in the merged image through a group of anti-attack models. By combining infrared and visible image data sources with multiple living body detection models, the method provided by the embodiments of the invention effectively resists a variety of attacks, improving both the safety of the liveness decision and the user experience.

Description

Living body detection method, device, chip, and computer-readable storage medium
Technical Field
The invention relates to the field of face recognition, and in particular to a living body detection method, a device, a chip, and a computer-readable storage medium.
Background
Face recognition technology has broad development prospects and economic benefits in public security investigation, access control systems, target tracking, and other civil security systems. However, face recognition is frequently subjected to attacks that use photos, masks, or even videos as spoofing means; it is therefore necessary to determine whether an acquired image shows a live face, that is, to perform living body detection.
In the prior art, various methods are used for living body detection. For example, 3D structured light can be used, but structured-light hardware is expensive and cannot be widely deployed. Other approaches rely on interaction with the user, such as asking the user to keep the eyes open so eyeball movement can be detected, to blink, or to face the camera at a given angle. All such methods require the user to cooperate or remain in place for some time, which lengthens the detection process, makes them unsuitable for scenarios that demand rapid living body detection, and degrades the user experience.
Disclosure of Invention
In order to solve the problems in the prior art, at least one embodiment of the present invention provides a living body detection method, an apparatus, a chip, and a computer-readable storage medium, which improve the safety of the liveness decision while also improving the user experience.
In a first aspect, an embodiment of the present invention provides a living body detection method, including: acquiring a visible light image and an infrared light image; performing face detection on the visible light image; when a visible light face image whose quality meets a preset condition is detected, merging the visible light image and the infrared light image to obtain a merged image; performing living body detection on the face in the merged image; and performing anti-attack living body detection on the face in the merged image.
In some embodiments, merging the visible light image and the infrared light image includes: extracting first face feature data from the infrared light image; extracting second face feature data from the visible light image; applying an affine transformation to the first face feature data to eliminate the parallax between the infrared light image and the visible light image; and merging the parallax-corrected infrared light image with the visible light image.
In some embodiments, the living body detection method further includes: performing organ detection on the detected visible light face and determining that the facial organs are complete; and/or performing pose detection on the detected visible light face and determining that its expression and pose meet preset requirements.
In some embodiments, the anti-attack living body detection of the face in the merged image includes at least one of, or a combination of, the following: performing photo-attack-prevention living body detection on the face in the merged image; performing mask-attack-prevention living body detection on the face in the merged image; and performing screen-attack-prevention living body detection on the face in the merged image.
In some embodiments, the living body detection of the face in the merged image includes performing living body detection on the face across multiple frames of merged images.
In a second aspect, an embodiment of the present invention further provides a living body detection apparatus, including: a visible light image acquisition unit for acquiring a visible light image; an infrared light image acquisition unit for acquiring an infrared light image; the human face detection unit is used for carrying out human face detection on the visible light image; the image merging unit is used for merging the visible light image and the infrared light image to obtain a merged image when the visible light face image with the quality meeting the preset condition is detected; the living body detection unit is used for carrying out living body detection on the human face in the combined image obtained by the image combination unit; and carrying out anti-attack living body detection on the human face in the combined image.
In some embodiments, the image merging unit includes: the first characteristic data extraction subunit is used for extracting first face characteristic data in the infrared light image; the second characteristic data extraction subunit is used for extracting second face characteristic data in the visible light image; the affine transformation subunit is used for carrying out affine transformation on the first face feature data to eliminate the parallax of the infrared light image and the visible light image; and the image merging subunit is used for merging the infrared light image and the visible light image with the parallax eliminated.
In some embodiments, the living body detecting apparatus further comprises a human face organ detecting unit and/or a posture detecting unit, wherein the human face organ detecting unit is configured to perform organ detection on the detected visible light human face and determine that an organ of the visible light human face is complete; the gesture detection unit is used for detecting the gesture of the detected visible light face and determining that the expression gesture of the visible light face meets the preset requirement.
In some embodiments, the living body detection unit includes a face living body detection subunit and an anti-attack living body detection subunit, where the anti-attack living body detection subunit includes at least one of, or a combination of, the following: a photo-attack-prevention living body detection subunit, configured to perform photo-attack-prevention living body detection on the face in the merged image; a mask-attack-prevention living body detection subunit, configured to perform mask-attack-prevention living body detection on the face in the merged image; and a screen-attack-prevention living body detection subunit, configured to perform screen-attack-prevention living body detection on the face in the merged image.
In some embodiments, the live body detection unit performs live body detection on a face in a plurality of frames of the combined image.
In a third aspect, an embodiment of the present invention further provides a living body detection apparatus, including: at least one processor; a memory coupled with the at least one processor, the memory storing executable instructions, wherein the executable instructions, when executed by the at least one processor, cause performance of the method of any of the first aspects above.
In a fourth aspect, an embodiment of the present invention further provides a chip, configured to perform the method in the first aspect. Specifically, the chip includes: a processor for calling and running the computer program from the memory so that the device on which the chip is installed is used for executing the method of the first aspect.
In a fifth aspect, the present invention also provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method according to any one of the above first aspects.
In a sixth aspect, an embodiment of the present invention further provides a computer program product, which includes computer program instructions, and the computer program instructions make a computer execute the method in the first aspect.
Therefore, the living body detection method provided by the embodiments of the present invention effectively resists a variety of attacks by combining infrared and visible image data sources with multiple living body detection models, improving both the safety of the liveness decision and the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a flowchart of an embodiment of the living body detection method of the present invention;
FIG. 2 is a flowchart of another embodiment of the living body detection method of the present invention;
FIG. 3 is a schematic view of an embodiment of the living body detection apparatus of the present invention;
FIG. 4 is a schematic view of another embodiment of the living body detection apparatus of the present invention.
Detailed description of the preferred embodiments
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The inventor found that, in the prior art, when faces in a video are processed for de-identification, the pose and expression characteristics of the face (such as joy, anger, sadness, or raising or lowering the head) cannot be preserved, which destroys the value of many commercial applications. The embodiments of the present invention provide the following scheme:
Fig. 1 is a flowchart of an embodiment of the living body detection method of the present invention. As shown in Fig. 1, the living body detection method according to the first aspect of the invention includes:
Step 101: acquire a visible light image and an infrared light image. The visible light image is captured by a visible light imaging unit, yielding RGB (Red, Green, Blue) color image data; the infrared light image is captured by an infrared imaging unit, yielding IR (infrared) image data. The two kinds of image data can be acquired simultaneously, for example with a dual camera that produces both image types or with two separate cameras; this embodiment does not limit the choice.
Step 102: perform face detection on the visible light image. Infrared image data is little affected by ambient light conditions such as strong light, weak light, or backlight, and changes little between day and night, so features extracted from the infrared image are comparatively stable, and the face data in the infrared image can generally be assumed to meet the quality requirement. Face detection is therefore performed only on the visible light image in this step.
Step 103: when a visible light face image whose quality meets the preset condition is detected, merge the visible light image and the infrared light image to obtain a merged image.
Because the face data in the infrared light image can generally be assumed to meet the quality requirement, the merge is triggered as soon as a visible light face image of sufficient quality is detected.
Specifically, the image arrays of the simultaneously acquired RGB color image and IR infrared image are superimposed to obtain the merged image: at each pixel position, the RGB channel values and the IR channel value are stacked to form a new multi-channel pixel value, and the resulting array constitutes the merged image.
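The channel-stacking merge described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not part of the patent; the function name `merge_rgb_ir` and the 4-channel (R, G, B, IR) layout are assumptions.

```python
import numpy as np

def merge_rgb_ir(rgb: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Stack an aligned RGB image (H, W, 3) and IR image (H, W) into one
    4-channel array (H, W, 4), mirroring the per-pixel superposition step."""
    if ir.ndim == 2:
        ir = ir[:, :, np.newaxis]  # promote IR to (H, W, 1)
    if rgb.shape[:2] != ir.shape[:2]:
        raise ValueError("images must share the same resolution after alignment")
    return np.concatenate([rgb, ir], axis=2)
```

A downstream liveness classifier would then consume the 4-channel array as a single input tensor.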
Further, the images may be merged after the parallax between the infrared light image and the visible light image has been eliminated. Because the infrared image acquisition unit and the visible light image acquisition unit differ in position, angle, and distance, images they capture of the same object exhibit parallax. Any prior-art technique may be used to eliminate it. For parallax caused by differing angles, for example, elimination may include: extracting first face feature data from the infrared light image; extracting second face feature data from the visible light image; and applying an affine transformation to the first face feature data so that the two images are aligned transversely (e.g., along the line connecting the eyes) and/or longitudinally (e.g., along the axis of the nose), thereby eliminating the angular parallax. To make this alignment possible, additional feature points may be extracted along the eye line and/or the nose axis. In practice, the parallax can be eliminated by taking the visible light image as the reference and applying the affine transformation to the infrared image feature data.
In this embodiment, the parallax between the infrared and visible images is eliminated by extracting key-point data and applying an affine transformation. Because the two images need not be rectified separately, the facial expression in the captured images is preserved, and the merged image retains more information.
For parallax caused by distance, the imaging distance of either the infrared or the visible image may be taken as the standard, or a preset distance may be used, and the imaging distances of the two images adjusted to match. Other prior-art parallax elimination methods may also be used; this application is not limited in this respect. After the parallax is eliminated, the corrected infrared light image is merged with the visible light image.
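The affine alignment described above can be sketched with an ordinary least-squares fit from IR-image landmarks to visible-image landmarks. This is an illustrative Python/NumPy sketch under stated assumptions, not the patent's prescribed implementation; `fit_affine` and `apply_affine` are hypothetical names, and at least three non-collinear landmark pairs are assumed.

```python
import numpy as np

def fit_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine transform mapping src landmarks (e.g. IR-image
    keypoints) onto dst landmarks (visible-image keypoints).
    Requires at least 3 non-collinear point pairs."""
    n = len(src_pts)
    # Design matrix [x, y, 1]; solving for both output coordinates at once.
    X = np.hstack([src_pts, np.ones((n, 1))])        # (n, 3)
    A, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)  # (3, 2)
    return A.T                                       # (2, 3) affine matrix

def apply_affine(A: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 2x3 affine matrix to an (n, 2) array of points."""
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ A.T
```

The fitted matrix would then be used to warp the full IR image onto the visible-image coordinate frame before the channel-stacking merge.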
Step 104: detect the face in the merged image and perform living body detection on it. Any existing method may be used in this step; for example, a neural-network-based living body classification model determines whether the face is live (i.e., a real face).
Step 105: perform anti-attack living body detection on the face in the merged image.
If the result of step 104 is a real face, anti-attack living body detection is further performed on the merged image, targeting, for example, screen attacks, photo attacks, and mask attacks. The anti-attack living body detection on the face in the merged image includes at least one of, or a combination of, the following: photo-attack-prevention detection, mask-attack-prevention detection, and screen-attack-prevention detection. In general, the face fails the overall check if it fails any single detection; only when every applied detection judges the merged image to be live is the face finally accepted as a live face.
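The "fail any, fail all" decision rule over the anti-attack model group can be sketched as follows. This is an illustrative Python sketch; the model callables and their names are hypothetical stand-ins for the patent's photo/mask/screen anti-attack models.

```python
from typing import Any, Callable, Dict

# Each hypothetical model callable returns True when it judges "live".
AttackModels = Dict[str, Callable[[Any], bool]]

def anti_attack_check(merged_image: Any, models: AttackModels) -> bool:
    """A face passes only if every anti-attack model in the group
    (e.g. photo / mask / screen) judges it live; failing any one fails all."""
    return all(model(merged_image) for model in models.values())
```

With this conjunctive rule, adding another anti-attack model can only tighten the check, never loosen it.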
The living body detection method provided by the embodiments of the present invention effectively resists a variety of attacks through its multiple infrared and visible image data sources and multiple living body detection methods, improving both the safety of the liveness decision and the user experience.
Optionally, the living body detection method according to the embodiments of the present invention further includes: performing organ detection on the detected visible light face and determining that the facial organs are complete; and/or performing pose detection on the detected visible light face and determining that its expression and pose meet preset requirements. This step further verifies the quality of the detected visible light face image, judging for example whether the eyes are open and whether any organ is occluded (as it would be by a mask). Faces with incomplete organs or unsatisfactory pose and expression can thus be rejected, further improving the quality of the face images obtained.
Optionally, the living body detection method according to the embodiments of the present invention may perform living body detection on the faces in multiple frames of merged images, for example frames taken consecutively from a video stream or sampled from it at preset intervals. After each infrared/visible pair is merged, living body detection is run on the sequence of merged images, and the face in that time window is judged to be a live face only when a certain number of consecutively detected frames pass. Multi-frame detection further improves the accuracy of the living body detection and raises safety.
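The consecutive-frame criterion described above can be sketched as a simple run-length check. This is an illustrative Python sketch; the threshold of 3 consecutive frames is an assumed tunable, not a value stated in the patent.

```python
from typing import Iterable

def multiframe_live(frame_results: Iterable[bool],
                    required_consecutive: int = 3) -> bool:
    """Declare the face live only when at least `required_consecutive`
    consecutive per-frame detections pass. Any failing frame resets the run."""
    run = 0
    for passed in frame_results:
        run = run + 1 if passed else 0
        if run >= required_consecutive:
            return True
    return False
```

Requiring a consecutive run (rather than a total count) means a single spoofed or noisy frame restarts the decision window.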
FIG. 2 is a flowchart of another embodiment of the living body detection method of the present invention. As shown in FIG. 2, the method includes:
step 201, acquiring a visible light image and an infrared light image;
step 202, carrying out face detection on the visible light image;
step 203, organ detection is carried out on the detected visible light face, and the completeness of the organs of the visible light face is determined;
step 204, detecting the posture of the detected visible light face, and determining that the expression posture of the visible light face meets the preset requirement;
step 205, extracting first face feature data in the infrared light image;
step 206, extracting second face feature data in the visible light image;
step 207, performing affine transformation on the first face feature data to eliminate the parallax of the infrared light image and the visible light image;
step 208, merging the parallax-corrected infrared light image and the visible light image;
step 209, performing living body detection on the face in the merged image through a neural-network-based living body classification model;
Step 210: perform living body detection on the face in the merged image through the anti-attack model group. Specifically, living body detection can be performed on the face in the merged image through a photo-attack-prevention model, a mask-attack-prevention model, and a screen-attack-prevention model.
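Steps 209 and 210 together form a two-stage decision, which can be sketched as follows. This is an illustrative Python sketch; `liveness_pipeline`, the classifier callable, and the attack-model callables are hypothetical names, and the short-circuit order (classifier first) mirrors the step sequence above.

```python
from typing import Any, Callable, Iterable

def liveness_pipeline(merged_image: Any,
                      classifier: Callable[[Any], bool],
                      attack_models: Iterable[Callable[[Any], bool]]) -> bool:
    """Two-stage decision: first the neural-network liveness classifier
    (step 209), then the anti-attack model group (step 210).
    Short-circuits on the first failure."""
    if not classifier(merged_image):
        return False
    return all(model(merged_image) for model in attack_models)
```

Running the cheap classifier first lets obvious non-faces be rejected before the full anti-attack group is evaluated.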
According to the embodiment of the invention, the accuracy of the in-vivo detection can be further improved and the safety can be improved by detecting a plurality of image data sources, a plurality of anti-attack models and a plurality of frame images.
Fig. 3 is a schematic view of an embodiment of the living body detection apparatus of the present invention. In a second aspect, as shown in Fig. 3, this embodiment provides a living body detection apparatus, including:
a visible light image acquisition unit 301 for acquiring a visible light image;
an infrared light image acquisition unit 302 for acquiring an infrared light image;
a face detection unit 303, configured to perform face detection on the visible light image;
an image merging unit 304, configured to merge the visible light image and the infrared light image to obtain a merged image when a visible light face image with a quality meeting a preset condition is detected;
a living body detection unit 305 for performing living body detection on the human face in the combined image obtained by the image combining unit; and carrying out anti-attack living body detection on the human face in the combined image.
Optionally, the image merging unit 304 includes: the first characteristic data extraction subunit is used for extracting first face characteristic data in the infrared light image; the second characteristic data extraction subunit is used for extracting second face characteristic data in the visible light image; the affine transformation subunit is used for carrying out affine transformation on the first face feature data to eliminate the parallax of the infrared light image and the visible light image; and the image merging subunit is used for merging the infrared light image and the visible light image with the parallax eliminated.
Optionally, the living body detecting device further includes a human face organ detecting unit and/or a posture detecting unit, wherein the human face organ detecting unit is configured to perform organ detection on the detected visible light human face and determine that an organ of the visible light human face is complete; the gesture detection unit is used for detecting the gesture of the detected visible light face and determining that the expression gesture of the visible light face meets the preset requirement.
Optionally, the living body detection unit comprises a face living body detection subunit and an anti-attack living body detection subunit, where the anti-attack living body detection subunit includes at least one of, or a combination of, the following: a photo-attack-prevention living body detection subunit, configured to perform photo-attack-prevention living body detection on the face in the merged image; a mask-attack-prevention living body detection subunit, configured to perform mask-attack-prevention living body detection on the face in the merged image; and a screen-attack-prevention living body detection subunit, configured to perform screen-attack-prevention living body detection on the face in the merged image.
FIG. 4 is a schematic view of another embodiment of the living body detection apparatus of the present invention. As shown in FIG. 4, the apparatus includes:
a visible light image acquisition unit 301 for acquiring a visible light image;
an infrared light image acquisition unit 302 for acquiring an infrared light image;
a face detection unit 303, configured to perform face detection on the visible light image;
the human face organ detection unit 306 is used for performing organ detection on the detected visible light human face and determining that the organs of the visible light human face are complete;
a pose detection unit 307, configured to perform pose detection on the detected visible light face, and determine that an expression pose of the visible light face meets a preset requirement;
an image merging unit 304, configured to merge the visible light image and the infrared light image to obtain a merged image when a visible light face image with a quality meeting a preset condition is detected;
wherein, image merging unit includes:
a first feature data extraction subunit 3041 configured to extract first face feature data in the infrared light image;
a second feature data extracting subunit 3042, configured to extract second face feature data in the visible light image;
an affine transformation subunit 3043, configured to perform affine transformation on the first face feature data to eliminate parallax between the infrared light image and the visible light image;
an image merging subunit 3044 configured to merge the infrared light image and the visible light image from which the parallax is removed.
A living body detection unit 305 for performing living body detection on the human face in the combined image obtained by the image combining unit; and carrying out anti-attack living body detection on the human face in the combined image.
Wherein, the living body detection unit comprises a human face living body detection subunit (such as the living body classification model subunit 3051) and an attack prevention living body detection subunit (specifically exemplified by the attack prevention model group 3052).
Wherein the attack-prevention living body detection subunit comprises at least one or a combination of the following components:
a photo-attack-prevention living body detection subunit (exemplified by the photo-attack-prevention model 30521), configured to perform photo-attack-prevention living body detection on the face in the merged image;
a mask-attack-prevention living body detection subunit (exemplified by the mask-attack-prevention model 30522), configured to perform mask-attack-prevention living body detection on the face in the merged image;
and a screen-attack-prevention living body detection subunit (exemplified by the screen-attack-prevention model 30523), configured to perform screen-attack-prevention living body detection on the face in the merged image.
The technical details of the living body detection apparatus above are similar to those of the living body detection method; the technical effects achieved in the apparatus embodiment can likewise be achieved in the method embodiment, and, to reduce repetition, are not described again here. Correspondingly, the technical details mentioned in the method embodiment also apply to the apparatus embodiment.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments.
In a third aspect, the present invention also provides a living body detection apparatus comprising:
at least one processor; a memory coupled to the at least one processor, the memory storing executable instructions, wherein the executable instructions, when executed by the at least one processor, cause the method of the first aspect of the invention to be carried out.
The present embodiment provides a living body detection apparatus including: at least one processor; a memory coupled to the at least one processor. The processor and the memory may be provided separately or may be integrated together.
For example, the memory may include random access memory, flash memory, read-only memory, programmable read-only memory, non-volatile memory, registers, and the like. The processor may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or the like. The memory may store executable instructions, and the processor may execute the executable instructions stored in the memory to implement the various processes described herein.
It will be appreciated that the memory in this embodiment can be either volatile memory or non-volatile memory, or can include both. The non-volatile memory may be a ROM (read-only memory), a PROM (programmable read-only memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), or a flash memory. The volatile memory may be a RAM (random access memory), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as SRAM (static RAM), DRAM (dynamic RAM), SDRAM (synchronous DRAM), DDR SDRAM (double data rate synchronous DRAM), ESDRAM (enhanced synchronous DRAM), SLDRAM (synchlink DRAM), and DRRAM (direct Rambus RAM). The memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory stores the following elements, upgrade packages, executable units, or data structures, or a subset or an extended set thereof: an operating system and application programs.
The operating system includes various system programs, such as a framework layer, a core library layer, and a driver layer, used for implementing various basic services and processing hardware-based tasks. The application programs are used for implementing various application services. A program implementing the method of the embodiment of the present invention may be included in the application programs.
In an embodiment of the present invention, the processor is configured to execute the method steps provided in the first aspect by calling a program or instructions stored in the memory, specifically, a program or instructions stored in the application programs.
In a fourth aspect, an embodiment of the present invention further provides a chip configured to perform the method of the first aspect. Specifically, the chip includes a processor for calling and running a computer program from a memory, so that the device in which the chip is installed executes the method of the first aspect.
Furthermore, in a fifth aspect, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of the present invention.
For example, the machine-readable storage medium may include, but is not limited to, various known types of non-volatile memory, as well as types yet to be developed.
In a sixth aspect, an embodiment of the present invention further provides a computer program product, which includes computer program instructions, and the computer program instructions make a computer execute the method in the first aspect.
Those of skill in the art would understand that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the disclosed system, apparatus, and method may be implemented in other ways. For example, the described division of units is only a division by logical function; in actual implementation there may be other division manners. For example, multiple units or components may be combined, or may be integrated into another system. In addition, the coupling between the respective units may be direct or indirect. Furthermore, functional units in the embodiments of the present application may be integrated into one processing unit, or may exist separately and physically.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a machine-readable storage medium. Therefore, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a machine-readable storage medium and may include several instructions to cause an electronic device to perform all or part of the processes of the technical solution described in the embodiments of the present application. The storage medium may include various media that can store program codes, such as ROM, RAM, a removable disk, a hard disk, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, and the scope of the present application is not limited thereto. Those skilled in the art can make changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions should be within the protective scope of the present application.

Claims (10)

1. A living body detection method, comprising:
acquiring a visible light image and an infrared light image;
carrying out face detection on the visible light image;
when a visible light face image with the quality meeting a preset condition is detected, combining the visible light image and the infrared light image to obtain a combined image;
detecting the face in the merged image, and carrying out living body detection on the face in the merged image;
and carrying out anti-attack living body detection on the human face in the combined image.
2. The method of claim 1, wherein said combining the visible light image and the infrared light image comprises:
extracting first face characteristic data in the infrared light image;
extracting second face feature data in the visible light image;
carrying out affine transformation on the first face feature data to eliminate the parallax of the infrared light image and the visible light image;
and merging the infrared light image and the visible light image with the parallax eliminated.
3. The method of claim 1, further comprising:
performing organ detection on the detected visible light face, and determining that the organs of the visible light face are complete; and/or
performing pose detection on the detected visible light face, and determining that the expression and pose of the visible light face meet a preset requirement.
4. The method according to claim 1, wherein the anti-attack live body detection of the human face in the merged image comprises at least one or a combination of the following:
performing living body detection for preventing photo attack on the human face in the merged image;
performing mask attack prevention living body detection on the human face in the merged image;
and carrying out screen attack prevention living body detection on the human face in the combined image.
5. The method of claim 1, wherein the live body detection of the human face in the combined image comprises: and performing living body detection on the human face in the multi-frame combined image.
6. A living body detection device, comprising:
a visible light image acquisition unit for acquiring a visible light image;
an infrared light image acquisition unit for acquiring an infrared light image;
the human face detection unit is used for carrying out human face detection on the visible light image;
the image merging unit is used for merging the visible light image and the infrared light image to obtain a merged image when the visible light face image with the quality meeting the preset condition is detected;
the living body detection unit is used for carrying out living body detection on the human face in the combined image obtained by the image combination unit; and carrying out anti-attack living body detection on the human face in the combined image.
7. The apparatus of claim 6, wherein the image merging unit comprises:
the first characteristic data extraction subunit is used for extracting first face characteristic data in the infrared light image;
the second characteristic data extraction subunit is used for extracting second face characteristic data in the visible light image;
the affine transformation subunit is used for carrying out affine transformation on the first face feature data to eliminate the parallax of the infrared light image and the visible light image;
and the image merging subunit is used for merging the infrared light image and the visible light image with the parallax eliminated.
8. A living body detection apparatus comprising:
at least one processor;
a memory coupled with the at least one processor, the memory storing executable instructions, wherein the executable instructions, when executed by the at least one processor, cause the method of any of claims 1-5 to be implemented.
9. A chip, comprising: a processor for calling and running a computer program from a memory, so that the device in which the chip is installed performs the method of any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, realizes the steps of the method according to any one of the claims 1 to 5.
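As a rough illustration of the parallax-elimination and merging steps recited in claims 2 and 7, the affine transformation can be read as estimating a 2x3 transform between corresponding face landmarks extracted from the infrared and visible light images, mapping one set onto the other, and then merging the aligned images. The NumPy sketch below estimates the transform by least squares and merges by channel concatenation; the landmark representation and the concatenation merge rule are assumptions for illustration, not details fixed by the claims.

```python
import numpy as np


def estimate_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding face landmarks, N >= 3.
    """
    n = src_pts.shape[0]
    A = np.hstack([src_pts, np.ones((n, 1))])          # homogeneous coords, (N, 3)
    M_t, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # solve A @ M_t ≈ dst_pts
    return M_t.T                                       # (2, 3) affine matrix


def apply_affine(pts: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply a 2x3 affine transform to (N, 2) points."""
    n = pts.shape[0]
    return np.hstack([pts, np.ones((n, 1))]) @ M.T


def merge_images(visible: np.ndarray, infrared_aligned: np.ndarray) -> np.ndarray:
    """Merge the aligned images channel-wise (an assumed merge rule)."""
    return np.concatenate([visible, infrared_aligned], axis=-1)
```

In a full pipeline, the estimated transform would be used to warp the infrared image itself (e.g. with an image-warping routine) before merging; only the landmark mapping is shown here.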
CN201911392538.0A 2019-12-30 2019-12-30 Living body detection method, device, chip, and computer-readable storage medium Pending CN113128255A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911392538.0A CN113128255A (en) 2019-12-30 2019-12-30 Living body detection method, device, chip, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN113128255A (en) 2021-07-16

Family

ID=76767470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911392538.0A Pending CN113128255A (en) 2019-12-30 2019-12-30 Living body detection device method, device, chip, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113128255A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1353292A1 (en) * 2002-04-12 2003-10-15 STMicroelectronics Limited Biometric sensor apparatus and methods
CN106358006A (en) * 2016-01-15 2017-01-25 华中科技大学 Video correction method and video correction device
CN106599872A (en) * 2016-12-23 2017-04-26 北京旷视科技有限公司 Method and equipment for verifying living face images
CN108197586A (en) * 2017-12-12 2018-06-22 北京深醒科技有限公司 Recognition algorithms and device
CN110119719A (en) * 2019-05-15 2019-08-13 深圳前海微众银行股份有限公司 Biopsy method, device, equipment and computer readable storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination