WO2016197297A1 - Living body detection method, living body detection system and computer program product - Google Patents


Info

Publication number
WO2016197297A1
Authority
WO
WIPO (PCT)
Prior art keywords
living body
matrix data
predetermined
detected
light source
Prior art date
Application number
PCT/CN2015/080963
Other languages
English (en)
Chinese (zh)
Inventor
范浩强
Original Assignee
北京旷视科技有限公司
北京小孔科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京旷视科技有限公司 and 北京小孔科技有限公司
Priority to PCT/CN2015/080963 (WO2016197297A1)
Priority to CN201580000335.6A (CN105637532B)
Priority to US15/580,210 (US10614291B2)
Publication of WO2016197297A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths

Definitions

  • the present disclosure relates to the field of living body detection and, more particularly, to a living body detecting method, a living body detecting system, and a computer program product capable of performing living body detection on a human body.
  • face recognition systems are increasingly used in security, finance, and other fields that require identity authentication, such as remote bank account opening, access control systems, and remote transaction verification.
  • in these applications, it must first be verified that the person to be verified is a legitimate living person. That is to say, the face recognition system needs to be able to prevent an attacker from using a photo, a 3D face model, or a mask to mount an attack.
  • the technique that solves the above problem is usually called living body detection; its purpose is to judge whether the acquired biometrics come from a living, on-the-spot, real person.
  • existing living body detection technology either relies on special hardware devices (such as infrared cameras or depth cameras) or can only prevent simple static photo attacks.
  • moreover, existing living body detection systems are mostly cooperative, that is, the test subject needs to perform corresponding actions according to system instructions or stay in place for a certain period of time, which degrades both the user experience and the efficiency of living body detection.
  • the present disclosure has been made in view of the above problems.
  • the present disclosure provides a living body detecting method, a living body detecting system, and a computer program product that exploit the following principle: human skin produces strong subsurface scattering of light and therefore forms a large spot when illuminated, whereas photographs, screens, masks, and similar articles produce comparatively weak subsurface scattering and form only a small spot. This enables non-cooperative living body detection that effectively distinguishes normal users from photo, video, and mask attackers without requiring special cooperation from the user, increasing both the security and the ease of use of the living body detection system.
  • a living body detecting method comprising: illuminating a face of an object to be detected using a laser light source; capturing an image of the face of the object to be detected illuminated by the laser light source; calculating a spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and if the spot area is greater than the first predetermined area threshold, determining that the object to be detected is a living body.
  • the living body detecting method, wherein calculating the spot area of the image of the face of the object to be detected includes: acquiring image matrix data of the image of the face of the object to be detected; performing a binarization conversion on the image matrix data based on a first predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value and converting pixel points having gray values smaller than the first predetermined threshold into second-type pixel points having a second gray value, to obtain first binarized image matrix data, the first gray value being greater than the second gray value; determining the maximum number of mutually connected first-type pixel points in the first binarized image matrix data; and calculating the area corresponding to the maximum number of mutually connected first-type pixel points as the spot area.
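As an illustrative sketch of the binarization conversion described above (plain Python; the concrete gray values 255 and 0, the threshold value, and the toy image are assumptions, not part of the disclosure):

```python
from typing import List

def binarize(image: List[List[int]], t1: int,
             first_gray: int = 255, second_gray: int = 0) -> List[List[int]]:
    """Convert image matrix data into first binarized image matrix data:
    pixels with gray value >= t1 become first-type pixels (first_gray),
    all other pixels become second-type pixels (second_gray)."""
    return [[first_gray if v >= t1 else second_gray for v in row]
            for row in image]

# Toy 4x4 image: the bright 2x2 patch stands in for a laser spot.
img = [[10,  20,  30, 10],
       [15, 200, 220, 12],
       [18, 210, 230, 11],
       [10,  14,  16, 10]]
binary = binarize(img, t1=128)
```

The connected-component step of the claim would then operate on `binary`, counting the pixels of the largest connected group of first-type pixels.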
  • the living body detecting method, wherein the laser light source is a light source that generates a single spot, and the position of the laser light source relative to the object to be detected is fixed.
  • a living body detecting method wherein the laser light source is a light source that generates a plurality of spots and the position of the laser light source relative to the object to be detected changes, wherein capturing the image of the face of the object to be detected illuminated by the laser light source includes: capturing an image of the face of the object to be detected illuminated by the laser light source, and determining an area image of that image corresponding to a predetermined area of the object to be detected as the image of the face of the object to be detected.
  • the living body detecting method, wherein the laser light source is a laser light source whose light emission direction is adjustable and the position of the laser light source relative to the object to be detected changes, wherein acquiring the image matrix data of the image of the face of the object to be detected includes: acquiring preliminary image matrix data of the face of the object to be detected illuminated by the laser light source; performing the binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, converting pixel points of the preliminary image matrix data having gray values greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value and converting pixel points having gray values smaller than the first predetermined threshold into the second-type pixel points having the second gray value, to obtain binarized preliminary image matrix data; and determining the maximum number of mutually connected first-type pixel points in the binarized preliminary image matrix data and calculating a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points.
  • the living body detecting method further includes: performing the binarization conversion on the image matrix data based on a second predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the second predetermined threshold into the first-type pixel points having the first gray value and converting pixel points having gray values smaller than the second predetermined threshold into the second-type pixel points having the second gray value, to obtain second binarized image matrix data; and stopping the illumination if the number of first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold.
  • the living body detecting method further includes: performing the binarization conversion on the image matrix data based on a third predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the third predetermined threshold into the first-type pixel points having the first gray value and converting pixel points having gray values smaller than the third predetermined threshold into the second-type pixel points having the second gray value, to obtain third binarized image matrix data; calculating a third center of gravity position corresponding to the positions of the first-type pixel points in the third binarized image matrix data; and stopping the illumination if the third center of gravity position is outside a first predetermined region.
  • the living body detecting method further includes: determining a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected; calculating a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and stopping the illumination if the first center of gravity position is within the predetermined pixel point region.
  • the living body detecting method further includes: comparing the spot area with a second predetermined area threshold, and stopping the illumination if the spot area is larger than the second predetermined area threshold.
  • the living body detecting method further includes: determining a predetermined pixel point of the image matrix data corresponding to a predetermined point of the face of the object to be detected; calculating a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and calculating a distance between the first center of gravity position and the predetermined pixel point, stopping the illumination if the distance is less than a predetermined distance threshold.
  • the living body detecting method further includes: calculating a plurality of spot areas corresponding to the mutually connected first-type pixel points; and stopping the illumination if one of the plurality of spot areas is greater than a second predetermined area threshold or each of the plurality of spot areas is smaller than a third predetermined area threshold.
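The center-of-gravity based safety checks described in the claims above might be sketched as follows (hypothetical Python; the function names, the protected point, and all numeric values in the usage are illustrative assumptions, not taken from the disclosure):

```python
import math
from typing import List, Tuple

def center_of_gravity(binary: List[List[int]],
                      first_gray: int = 255) -> Tuple[float, float]:
    """Center of gravity (row, col) of all first-type pixels."""
    pts = [(r, c) for r, row in enumerate(binary)
           for c, v in enumerate(row) if v == first_gray]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def must_stop_illumination(binary: List[List[int]],
                           protected_point: Tuple[float, float],
                           min_distance: float,
                           spot_area: float,
                           second_area_threshold: float) -> bool:
    """Stop the laser if the spot's center of gravity drifts closer than
    min_distance to a protected point (e.g. near an eye), or if the spot
    area exceeds the second predetermined area threshold."""
    cy, cx = center_of_gravity(binary)
    distance = math.hypot(cy - protected_point[0], cx - protected_point[1])
    return distance < min_distance or spot_area > second_area_threshold
```

Such a check would run on every captured frame before the liveness decision, so the laser is shut off as soon as an unsafe condition appears.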
  • a living body detecting system including: a laser light source unit for emitting illumination light to illuminate a face of an object to be detected; an image capturing unit for capturing an image of the face of the object to be detected illuminated by the laser light source unit; and a living body detecting unit that determines whether the object to be detected is a living body, wherein the living body detecting unit calculates a spot area of the image of the face of the object to be detected and compares the spot area with a first predetermined area threshold, and if the spot area is greater than the first predetermined area threshold, determines that the object to be detected is a living body.
  • in the living body detecting system, the living body detecting unit acquires image matrix data of the image of the face of the object to be detected; performs a binarization conversion on the image matrix data based on a first predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value and converting pixel points having gray values smaller than the first predetermined threshold into second-type pixel points having a second gray value, to obtain first binarized image matrix data, the first gray value being greater than the second gray value; determines the maximum number of mutually connected first-type pixel points in the first binarized image matrix data; and calculates the area corresponding to the maximum number of mutually connected first-type pixel points as the spot area.
  • a living body detecting system wherein the laser light source unit is a light source unit that generates a single spot, and the position of the laser light source unit relative to the object to be detected is fixed.
  • the living body detecting system, wherein the laser light source unit is a light source unit that generates a plurality of spots and the position of the laser light source unit relative to the object to be detected changes,
  • the image capturing unit captures an image of the face of the object to be detected illuminated by the laser light source unit, and the living body detecting unit determines an area image of that image corresponding to a predetermined area of the object to be detected as the image of the face of the object to be detected.
  • the living body detecting system acquires preliminary image matrix data of the face of the object to be detected illuminated by the laser light source unit; performs the binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, converting pixel points of the preliminary image matrix data having gray values greater than or equal to the first predetermined threshold into the first-type pixel points having the first gray value and converting pixel points having gray values smaller than the first predetermined threshold into the second-type pixel points having the second gray value, to obtain binarized preliminary image matrix data; and determines the maximum number of mutually connected first-type pixel points in the binarized preliminary image matrix data and calculates a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to perform the binarization conversion on the image matrix data based on a second predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the second predetermined threshold into the first-type pixel points having the first gray value and converting pixel points having gray values smaller than the second predetermined threshold into the second-type pixel points having the second gray value, to obtain second binarized image matrix data; and to control the laser light source unit to stop the illumination when the number of first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to perform the binarization conversion on the image matrix data based on a third predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the third predetermined threshold into the first-type pixel points having the first gray value and converting pixel points having gray values smaller than the third predetermined threshold into the second-type pixel points having the second gray value, to obtain third binarized image matrix data; to calculate a third center of gravity position corresponding to the first-type pixel points in the third binarized image matrix data; and to control the laser light source unit to stop the illumination if the third center of gravity position is outside a first predetermined region.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: determine a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected; calculate a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and control the laser light source unit to stop the illumination if the first center of gravity position is within the predetermined pixel point region.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to compare the spot area with a second predetermined area threshold and, if the spot area is larger than the second predetermined area threshold, control the laser light source unit to stop the illumination.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: determine a predetermined pixel point of the image matrix data corresponding to a predetermined point of the face of the object to be detected; calculate a first center of gravity position corresponding to the maximum number of mutually connected first-type pixel points; and calculate a distance between the first center of gravity position and the predetermined pixel point, controlling the laser light source unit to stop the illumination if the distance is less than a predetermined distance threshold.
  • the living body detecting system according to another embodiment of the present disclosure, wherein the living body detecting unit is further configured to: calculate a plurality of spot areas corresponding to the mutually connected first-type pixel points; and control the laser light source unit to stop the illumination when one of the plurality of spot areas is greater than the second predetermined area threshold or each of the plurality of spot areas is smaller than the third predetermined area threshold.
  • a computer program product comprising a computer-readable storage medium on which computer program instructions are stored, the computer program instructions, when executed by a computer, performing the steps of: acquiring an image of a face of an object to be detected illuminated by a laser light source; calculating a spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and if the spot area is greater than the first predetermined area threshold, determining that the object to be detected is a living body.
  • FIG. 1 is a flow chart illustrating a living body detecting method according to an embodiment of the present invention.
  • FIG. 2 is a functional block diagram illustrating a living body detection system in accordance with an embodiment of the present invention.
  • FIG. 3 is a schematic diagram further illustrating a first example living body detection system in accordance with an embodiment of the present invention.
  • FIG. 4 is a flow chart further illustrating a first example living body detection method in accordance with an embodiment of the present invention.
  • FIG. 5 is a schematic diagram further illustrating a second example living body detection system in accordance with an embodiment of the present invention.
  • FIG. 6 is a flow chart further illustrating a second example living body detecting method according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram further illustrating a third example living body detection system in accordance with an embodiment of the present invention.
  • FIG. 8 is a flow chart further illustrating a third example living body detecting method according to an embodiment of the present invention.
  • FIG. 9 is a schematic block diagram illustrating a living body detecting system according to an embodiment of the present invention.
  • as shown in FIG. 1, a living body detecting method according to an embodiment of the present invention includes the following steps.
  • in step S101, the face of the object to be detected is illuminated using a laser light source.
  • the laser source may be a source that produces a single spot, or a source that produces a plurality of spots.
  • the laser source is a laser source that can adjust the direction of light emission. Thereafter, the processing proceeds to step S102.
  • in step S102, an image of the face of the object to be detected illuminated by the laser light source is captured.
  • the image of the face of the object to be detected will include a spot formed by subsurface scattering.
  • in step S103, the spot area S of the image of the face of the object to be detected is calculated.
  • the specific processing for calculating the spot area will be described in detail as follows.
  • the spot area may represent the area of the spot, or it may represent the thickness of a light strip. Thereafter, the processing proceeds to step S104.
  • in step S104, it is determined whether the spot area S calculated in step S103 is greater than the first predetermined area threshold T1.
  • the first predetermined area threshold T1 is determined in advance by statistical learning methods such as deep learning or support vector machines, using a large number of real face images as positive samples and images of photos, video playback, paper masks, and 3D models as negative samples.
  • if a positive result is obtained in step S104, that is, the spot area S is larger than the first predetermined area threshold T1, the processing proceeds to step S105.
  • in step S105, it is determined that the object to be detected is a living body.
  • if a negative result is obtained in step S104, that is, the spot area S is not larger than the first predetermined area threshold T1, the processing proceeds to step S106. In step S106, it is determined that the object to be detected is not a living body.
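The decision in steps S104 through S106 reduces to a single comparison; a minimal sketch (the numeric values in the usage are illustrative only, not taken from the disclosure):

```python
def is_living_body(spot_area: float, t1: float) -> bool:
    """Steps S104-S106: an object whose spot area S exceeds the first
    predetermined area threshold T1 is judged to be a living body;
    otherwise it is judged not to be one."""
    return spot_area > t1

# Usage with hypothetical values: a skin-like large spot vs. a photo-like small one.
assert is_living_body(120.0, 80.0)      # large spot -> living body
assert not is_living_body(12.0, 80.0)   # small spot -> attack material
```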
  • this judgment relies on the principle that human skin produces subsurface scattering of incident light and thus generates a large spot after receiving the light, whereas articles such as photographs, screens, and masks produce comparatively weak subsurface scattering and form only smaller spots.
  • the predetermined area threshold should therefore be smaller than the area of the spot formed by subsurface scattering in human skin, and larger than the area of the spot formed by subsurface scattering in articles such as photos, screens, and masks.
  • the actual specific value of the predetermined area threshold may be set according to actual conditions, and is not limited herein.
  • by judging the magnitude relationship between the obtained spot area and the predetermined area threshold, an object to be detected whose spot area is larger than the predetermined area threshold is determined to be a living body.
  • in the living body detecting method, since a laser light source is used and the user's movement is not restricted, a safety control mechanism needs to be provided to avoid special cases such as the laser light source illuminating the eyes of the object to be detected, or the object to be detected leaving the detection area.
  • the living body detection is performed by detecting the size of the subsurface scattering spot after the object to be detected is irradiated by the laser light source, thereby effectively preventing photos, 3D face models, and mask attacks.
  • the living body detecting system 20 includes a laser light source unit 21, an image capturing unit 22, and a living body detecting unit 23.
  • the laser light source unit 21, the image capturing unit 22, and the living body detecting unit 23 may be configured by, for example, hardware, software, firmware, and any feasible combination thereof.
  • the laser light source unit 21 is configured to emit an illumination light to illuminate a face of an object to be detected.
  • the laser source may be a point laser having a power of 5 mW and an output wavelength of 850 nm.
  • the position and angle of the laser light source arrangement ensure that it can illuminate a suitable part of the face of the test subject, such as the lips, cheeks, or nose.
  • the laser light source unit 21 may be a light source that generates a single spot, or a light source that generates a plurality of spots.
  • the laser light source unit 21 is a laser light source that can adjust the direction in which light is emitted.
  • the image capturing unit 22 is for capturing an image of a face of an object to be detected illuminated by the laser light source unit 21.
  • the image capture unit 22 is configured corresponding to the laser light source unit 21.
  • the image capture unit 22 is a CCD imaging module configured with an 850 nm narrow-band filter, with exposure parameters capable of capturing the spot formed by subsurface scattering.
  • the image capturing unit 22 may be physically separated from the subsequent living body detecting unit 23, or physically located at the same position or even inside the same casing.
  • the image capturing unit 22 transmits the acquired image of the face of the object to be detected to the subsequent living body detecting unit 23 via a wired or wireless method.
  • in the case where the image capturing unit 22 and the living body detecting unit 23 are physically located at the same position or even inside the same casing, the image of the face of the object to be detected is sent to the living body detecting unit 23 via an internal bus. Before the image data is transmitted via wired or wireless means or via the internal bus, it can be encoded in a predetermined format and compressed into data packets to reduce the traffic and bandwidth required for transmission.
  • the living body detecting unit 23 is configured to determine whether the object to be detected is a living body. Specifically, the living body detecting unit 23 calculates a spot area S of the image of the face of the object to be detected and compares the spot area S with a first predetermined area threshold T1; if the spot area S is larger than the first predetermined area threshold T1, it determines that the object to be detected is a living body.
  • the living body detecting method and the living body detecting system according to an embodiment of the present invention have been summarized above with reference to FIGS. 1 and 2.
  • first to third exemplary living body detecting methods and living body detecting systems according to embodiments of the present invention will be further described with reference to FIGS. 3 through 8.
  • FIG. 3 is a schematic diagram further illustrating a first example living body detection system in accordance with an embodiment of the present invention.
  • the position of the living body detecting system 20 and the object 30 to be detected is relatively fixed.
  • the living body detecting system 20 shown in FIG. 3 is a face-recognition attendance (punch-card) machine with a relatively close working distance.
  • the laser light source unit 21 in the living body detecting system 20 shown in FIG. 3 is a light source unit that generates a single spot, and the position of the laser light source unit 21 relative to the object to be detected 30 is fixed.
  • the laser light source unit 21 emits illumination light to illuminate the face of the object to be detected, for example illuminating the lips, cheeks, or nose.
  • the illustrated image capturing unit 22 captures an image of the face of the object to be detected illuminated by the laser light source unit 21.
  • the living body detecting unit 23 shown determines whether the object to be detected is a living body.
  • FIG. 4 is a flow chart further illustrating a first example living body detection method in accordance with an embodiment of the present invention. As shown in FIG. 4, a first exemplary living body detecting method according to an embodiment of the present invention is applied to the first exemplary living body detecting system according to an embodiment of the present invention shown in FIG. 3, which includes the following steps.
  • in step S401, the face of the object to be detected is illuminated using a laser light source.
  • the laser source may be a source that produces a single spot.
  • in step S402, an image of the face of the object to be detected illuminated by the laser light source is captured. Thereafter, the processing proceeds to step S403.
  • in step S403, image matrix data of the image of the face of the object to be detected is acquired.
  • the image matrix data of the image of the face of the object to be detected may be represented as I[x, y]. Thereafter, the processing proceeds to step S404.
  • in step S404, a binarization conversion is performed on the image matrix data based on the first predetermined grayscale threshold, converting pixel points of the image matrix data having gray values greater than or equal to the first predetermined threshold into first-type pixel points having a first gray value and converting pixel points having gray values smaller than the first predetermined threshold into second-type pixel points having a second gray value, to obtain first binarized image matrix data, the first gray value being greater than the second gray value.
  • the first binarized image matrix data can be expressed as Ib[x, y] = first gray value if I[x, y] ≥ t1, and Ib[x, y] = second gray value otherwise, where t1 is the first predetermined grayscale threshold. Thereafter, the processing proceeds to step S405.
  • in step S405, the maximum number of mutually connected first-type pixel points in the first binarized image matrix data is determined.
  • for example, a breadth-first search (BFS) algorithm is applied to the first binarized image matrix data Ib to compute its connected components, and the connected component containing the largest number of pixels is selected. Thereafter, the processing proceeds to step S406.
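A minimal sketch of this BFS connected-component step, under the assumptions of 4-connectivity and a first gray value of 255 (neither is fixed by the disclosure):

```python
from collections import deque
from typing import List

def largest_component_area(binary: List[List[int]],
                           first_gray: int = 255) -> int:
    """Breadth-first search over 4-connected first-type pixels; returns the
    pixel count (area) of the largest connected component, i.e. the spot
    area S of step S406."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    best = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] != first_gray or seen[r][c]:
                continue
            # BFS flood fill from an unvisited first-type pixel.
            area, queue = 0, deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                area += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and binary[ny][nx] == first_gray
                            and not seen[ny][nx]):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            best = max(best, area)
    return best
```

The returned pixel count serves as the spot area S that step S407 compares against T1.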
  • in step S406, the area corresponding to the maximum number of mutually connected first-type pixel points is calculated as the spot area S. Thereafter, the processing proceeds to step S407.
  • in step S407, it is determined whether the spot area S calculated in step S406 is greater than the first predetermined area threshold T1.
  • the first predetermined area threshold T1 is determined in advance by statistical learning methods such as deep learning or support vector machines, using a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
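As a toy illustration of how such an area threshold could be fitted offline (the actual training procedure, samples, and learning method are not specified beyond the statistical-learning framing above), one could pick the cutoff that best separates positive and negative sample areas; all numbers below are invented:

```python
# Illustrative sketch only: choose the spot-area cutoff T1 that maximizes
# classification accuracy over measured positive (live face) and negative
# (photo/video/mask) sample areas. The sample values are invented.
import numpy as np

def choose_threshold(pos_areas, neg_areas):
    candidates = np.unique(np.concatenate([pos_areas, neg_areas]))
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        # live samples should exceed the threshold, fakes should not
        acc = (np.sum(pos_areas > t) + np.sum(neg_areas <= t)) / (len(pos_areas) + len(neg_areas))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

pos = np.array([120, 150, 140, 130])   # live skin scatters: larger spots
neg = np.array([40, 55, 60, 70])       # photo/screen: smaller spots
T1 = choose_threshold(pos, neg)
```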
  • step S407 If a positive result is obtained in step S407, that is, the spot area S is larger than the first predetermined area threshold T1, the processing proceeds to step S408.
  • step S408 it is determined that the object to be detected is a living body.
  • step S407 if a negative result is obtained in step S407, that is, the spot area S is not larger than the first predetermined area threshold T1, the processing proceeds to step S409. In step S409, it is determined that the object to be detected is not a living body.
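Steps S406 to S409 reduce to counting the pixels of the largest component and comparing against T1; the threshold value used here is illustrative only:

```python
# Minimal sketch of steps S406-S409: the spot area S is the pixel count of
# the largest connected component; the object is judged live when S > T1.
# T1 = 100 is an illustrative value, not one taken from the patent.
def is_living_body(largest_component_pixels, T1=100):
    S = len(largest_component_pixels)   # spot area in pixels
    return S > T1

# e.g. a 12x12 solid spot (144 pixels) against T1 = 100:
spot = [(y, x) for y in range(12) for x in range(12)]
# is_living_body(spot) -> True
```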
  • a safety control mechanism is set in the first example living body detecting system according to an embodiment of the present invention.
  • step S404 After capturing an image of the face of the object to be detected illuminated by the laser light source, and similarly to the processing of step S404 described above, the binarization conversion is performed on the image matrix data based on a second predetermined grayscale threshold t2: pixel points of the image matrix data whose grayscale value is greater than or equal to the second predetermined threshold t2 are converted into first-type pixel points having the first grayscale value, and pixel points whose grayscale value is smaller than the second predetermined threshold t2 are converted into second-type pixel points having the second grayscale value, so as to obtain second binarized image matrix data. If the number of first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold s1, that is, if an abnormally large number of bright spots is present, the illumination is stopped.
  • step S404 After capturing an image of the face of the object to be detected illuminated by the laser light source, and similarly to the processing of step S404 described above, the binarization conversion is performed on the image matrix data based on a third predetermined grayscale threshold t3: pixel points of the image matrix data whose grayscale value is greater than or equal to the third predetermined threshold t3 are converted into first-type pixel points having the first grayscale value, and pixel points whose grayscale value is smaller than the third predetermined threshold t3 are converted into second-type pixel points having the second grayscale value, so as to obtain third binarized image matrix data.
  • a predetermined pixel point region of the image matrix data corresponding to a predetermined region of the face of the object to be detected is determined; for example, a pre-trained face detector (such as a Haar cascade) is used to obtain the positions of the face and of the left and right eyes. Similarly to the processing of step S405 described above, a first centroid position corresponding to the largest connected component of first-type pixel points is calculated. If the first centroid position lies within the predetermined pixel point region (i.e., the left or right eye region), the illumination is stopped.
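The eye-safety check described above might be sketched as follows, with hypothetical rectangular eye regions standing in for the detector output:

```python
# Sketch of the eye-safety check: compute the centroid of the largest bright
# component and stop illumination if it falls inside an eye region reported by
# a face detector. The rectangular eye boxes below are assumed placeholders.
def centroid(pixels):
    ys = sum(y for y, _ in pixels) / len(pixels)
    xs = sum(x for _, x in pixels) / len(pixels)
    return ys, xs

def must_stop_illumination(spot_pixels, eye_boxes):
    cy, cx = centroid(spot_pixels)
    # stop if the spot centroid lies inside any (top, left, bottom, right) box
    return any(y0 <= cy <= y1 and x0 <= cx <= x1 for (y0, x0, y1, x1) in eye_boxes)

eyes = [(40, 30, 60, 55), (40, 70, 60, 95)]   # (top, left, bottom, right)
spot_near_eye = [(50, 40), (50, 41), (51, 40)]
# must_stop_illumination(spot_near_eye, eyes) -> True
```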
  • a first exemplary living body detecting system is configured with a light source unit that generates a single light spot, is used in scenes where the light source unit is fixed relative to the object to be detected, and exploits the difference in subsurface scattering properties between living skin and other materials to perform living body detection. It can effectively defend against photo, video, and mask attacks, and increases the safety and ease of use of the living body detection system without requiring special user cooperation.
  • FIG. 5 is a schematic diagram further illustrating a second example living body detection system in accordance with an embodiment of the present invention.
  • the position of the living body detecting system 50 relative to the object 30 to be detected is not fixed.
  • the living body detection system 50 shown in FIG. 5 is, for example, an access control system having a longer working distance than the face-recognition attendance machine of FIG. 3.
  • the laser light source 51 is a light source that generates a plurality of dot-shaped light spots, and the relative position of the laser light source 51 and the object 30 to be detected may change.
  • the laser source 51 is configured by a laser having a power of 500 mW and a wavelength of 850 nm together with a grating. Through the grating, the laser projects a plurality of dot-shaped spots that are distributed evenly over the range in which the object 30 to be detected may exist, and the spots do not overlap one another.
  • the illustrated image capturing unit 52 captures an image of the face of the object to be detected illuminated by the laser light source unit 51.
  • the living body detecting unit 53 shown determines whether the object to be detected is a living body.
  • FIG. 6 is a flow chart further illustrating a second example living body detecting method according to an embodiment of the present invention. As shown in FIG. 6, a second exemplary living body detecting method according to an embodiment of the present invention is applied to the second exemplary living body detecting system according to an embodiment of the present invention shown in FIG. 5, which includes the following steps.
  • Steps S601 to S605 illustrated in FIG. 6 are respectively the same as steps S401 to S405 illustrated in FIG. 4 described above, and a repetitive description thereof will be omitted herein.
  • step S606 After determining the largest connected component of first-type pixel points in the first binarized image matrix data obtained by binarizing the image of the face of the object to be detected, it is determined in step S606 whether the largest connected component of first-type pixel points is located in a predetermined region of the face of the object to be detected. Since the laser light source 51 in the second exemplary living body detecting system according to an embodiment of the present invention is a light source that generates a plurality of dot-shaped spots, an affirmative result in step S606 indicates that one of the plurality of spots generated by the laser light source 51 falls into a suitable region such as the lips, cheeks, or nose, and the processing proceeds to step S607.
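The region test of step S606 can be sketched as a simple containment check; the rectangular facial region is a placeholder for whatever region the face detector supplies:

```python
# Sketch of step S606: test whether the largest connected bright component
# lies inside a predetermined facial region (e.g. lips/cheek/nose) before the
# spot area is evaluated. The region rectangle is an assumed placeholder.
def component_in_region(pixels, region):
    y0, x0, y1, x1 = region
    return all(y0 <= y <= y1 and x0 <= x <= x1 for y, x in pixels)

cheek = (80, 40, 120, 90)             # (top, left, bottom, right)
spot = [(100, 60), (100, 61), (101, 60)]
# component_in_region(spot, cheek) -> True; pixels outside would give False
```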
  • step S607 the area corresponding to the largest connected component of first-type pixel points is calculated as the spot area S. Thereafter, the processing proceeds to step S608.
  • step S608 it is determined whether the spot area S calculated in step S607 is greater than the first predetermined area threshold T1.
  • the first predetermined area threshold T1 is determined in advance by statistical learning methods such as deep learning or support vector machines, using a large number of face images as positive samples and photos, video playback, paper masks, and 3D model images as negative samples.
  • step S608 If a positive result is obtained in step S608, that is, the spot area S is larger than the first predetermined area threshold T1, the processing proceeds to step S609. In step S609, it is determined that the object to be detected is a living body.
  • step S608 if a negative result is obtained in step S608, that is, the spot area S is not larger than the first predetermined area threshold T1, the processing proceeds to step S610.
  • step S610 it is determined that the object to be detected is not a living body.
  • step S606 if a negative result is obtained in step S606, indicating that none of the plurality of spots generated by the laser light source 51 falls into a suitable region such as the lips, cheeks, or nose, the process returns to step S602 to continue capturing an image of the face of the object to be detected illuminated by the laser light source.
  • a safety control mechanism is also provided in the second example living body detecting system according to an embodiment of the present invention.
  • step S404 similarly to the processing of step S404 described above, the binarization conversion is performed on the image matrix data based on the second predetermined grayscale threshold t2: pixel points of the image matrix data whose grayscale value is greater than or equal to the second predetermined threshold t2 are converted into first-type pixel points having the first grayscale value, and pixel points whose grayscale value is smaller than the second predetermined threshold t2 are converted into second-type pixel points having the second grayscale value, so as to obtain second binarized image matrix data. If the number of first-type pixel points in the second binarized image matrix data exceeds a first predetermined number threshold s1, that is, if an abnormally large number of bright spots is present, the illumination is stopped.
  • step S607 after the area corresponding to the largest connected component of first-type pixel points is obtained as the spot area S in step S607, the spot area S is compared with a second predetermined area threshold T2, the second predetermined area threshold T2 being greater than the first predetermined area threshold T1. If the spot area S is greater than the second predetermined area threshold T2, that is, if there is a spot of excessive area, the illumination is stopped.
  • a predetermined pixel region of the image matrix data corresponding to a predetermined region of the face of the object to be detected is determined.
  • the position of the spot closest to the left and right eyes is determined, and the distance D from that nearest spot to the eyes is calculated. If the distance D is smaller than a predetermined distance threshold d, that is, if the spot is too close to an eye of the object to be detected, the illumination is stopped.
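The distance check might look like the following sketch, where spot centroids, eye positions, and the threshold d are all illustrative values:

```python
# Sketch of the distance-based safety check: among several spot centroids,
# find the one closest to an eye position and stop illumination if it is
# nearer than a predetermined threshold d. All values are illustrative.
import math

def min_eye_distance(spot_centers, eye_positions):
    return min(math.dist(s, e) for s in spot_centers for e in eye_positions)

spots = [(100.0, 60.0), (55.0, 42.0)]
eyes = [(50.0, 40.0), (50.0, 80.0)]
D = min_eye_distance(spots, eyes)     # distance from (55, 42) to (50, 40)
stop = D < 10.0                       # predetermined distance threshold d
```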
  • a second exemplary living body detecting system is configured with a light source unit that generates a plurality of light spots, is used in scenes where the relative position of the light source unit and the object to be detected is not fixed, and exploits the difference in subsurface scattering properties between living skin and other materials to perform living body detection. It can effectively defend against photo, video, and mask attacks, and increases the safety and ease of use of the living body detection system without requiring special user cooperation.
  • FIG. 7 is a schematic diagram further illustrating a third example living body detection system in accordance with an embodiment of the present invention.
  • the position of the living body detecting system 70 relative to the object 30 to be detected is not fixed, and the mutual position may vary greatly.
  • the living body detection system 70 shown in FIG. 7 is, for example, an access control system or a monitoring system having a longer working distance than the face-recognition attendance machine of FIG. 3 and the access control system of FIG. 5.
  • the laser light source unit 71 is a laser light source that can adjust the direction in which light is emitted.
  • the laser source 71 is configured by a power 20 mW, wavelength 850 nm laser and an exit direction drive unit (not shown).
  • the living body detecting unit 73 can, based on a preliminary image of the face of the object to be detected illuminated by the laser light source unit 71 captured by the image capturing unit 72, use a pre-trained face detector (such as a Haar cascade) to acquire the positions of the face and of a suitable facial portion, as well as the position of the current spot.
  • the living body detecting unit 73 (or, alternatively, a separately configured spot position tracking unit) causes the exit direction driving unit to adjust the light emission direction of the laser light source unit 71 so that the spot position falls on the suitable portion.
  • FIG. 8 is a flowchart further illustrating a third example living body detecting method according to an embodiment of the present invention.
  • a third exemplary living body detecting method according to an embodiment of the present invention is applied to a third exemplary living body detecting system according to an embodiment of the present invention shown in FIG. 7, which includes the following steps.
  • step S801 the face of the object to be detected is illuminated using a laser light source.
  • the laser light source is a laser light source that can adjust a light emission direction. Thereafter, the processing proceeds to step S802.
  • step S802 preliminary image matrix data of a face of the object to be detected illuminated via the laser light source is acquired. Thereafter, the processing proceeds to step S803.
  • step S803 similarly to the processing in step S404, binarization conversion is performed on the preliminary image matrix data based on the first predetermined grayscale threshold: pixel points of the preliminary image matrix data whose grayscale value is greater than or equal to the first predetermined threshold are converted into first-type pixel points having the first grayscale value, and pixel points whose grayscale value is smaller than the first predetermined threshold are converted into second-type pixel points having the second grayscale value, so as to obtain binarized preliminary image matrix data. Thereafter, the processing proceeds to step S804.
  • step S804 the largest connected component of first-type pixel points in the binarized preliminary image matrix data is determined, and a first centroid position of that largest connected component is calculated.
  • the first centroid position of the largest connected component of first-type pixel points in the binarized preliminary image matrix data is the location of the current spot.
  • step S805 a second center of gravity position of the predetermined area corresponding to the face of the object to be detected in the preliminary image is determined.
  • the second center of gravity position of the predetermined area of the face is the position of the appropriate portion of the face.
  • step S806 the light emitting direction of the laser light source is adjusted such that the first center of gravity position coincides with the second center of gravity position. That is, the direction in which the light emitted by the laser light source is emitted is adjusted such that the spot position falls on a suitable portion of the face. Thereafter, the processing proceeds to step S807.
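Step S806 amounts to closing the loop between the two centroids; a proportional correction with an assumed gain is one plausible sketch (the patent does not specify the drive-control law):

```python
# Sketch of step S806: nudge the laser emission direction so that the spot
# centroid (first center of gravity) converges on the target facial position
# (second center of gravity). The proportional gain is an assumed choice.
def adjust_direction(spot_c, target_c, gain=0.5):
    """Return the (dy, dx) correction to apply to the emission direction."""
    return (gain * (target_c[0] - spot_c[0]), gain * (target_c[1] - spot_c[1]))

spot = (120.0, 80.0)      # current spot centroid in the image
target = (100.0, 90.0)    # centroid of the chosen facial region
dy, dx = adjust_direction(spot, target)
# dy == -10.0, dx == 5.0
```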
  • step S807 image matrix data of an image of a face of the object to be detected illuminated by the laser light source that adjusts the light emission direction is acquired.
  • the image matrix data of the image of the face of the object to be detected acquired in step S807 is image matrix data finally used for living body detection.
  • Thereafter, steps S404 to S409 shown in FIG. 4 as described above are performed to carry out the living body detection based on the image matrix data of the image of the face of the object to be detected.
  • a safety control mechanism is also set in the living body detection system.
  • a plurality of spot areas corresponding to the respective connected components of first-type pixel points are calculated. If one of the plurality of spot areas is greater than the second predetermined area threshold T2, or if each of the plurality of spot areas is smaller than a third predetermined area threshold T3, that is, if there is an excessively large spot or only excessively small spots, the illumination is stopped.
  • the average pixel value of the image of the face of the object to be detected may be determined, and if the average pixel value is not within the preset range, that is, if the entire image is too bright or too dark, the illumination is stopped.
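The global-brightness safeguard can be sketched as a mean-value range check; the range bounds are assumptions:

```python
# Sketch of the global-brightness safety check: if the mean pixel value of the
# captured face image drifts outside a preset range, stop the illumination.
# The range bounds (30, 220) are illustrative assumptions.
import numpy as np

def brightness_ok(image: np.ndarray, lo: float = 30.0, hi: float = 220.0) -> bool:
    mean = float(image.mean())
    return lo <= mean <= hi

too_dark = np.full((8, 8), 5, dtype=np.uint8)
normal = np.full((8, 8), 128, dtype=np.uint8)
# brightness_ok(too_dark) -> False ; brightness_ok(normal) -> True
```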
  • a third exemplary living body detecting system is configured with a light source unit that can adjust a light emission direction to track a suitable portion of an object to be detected, in a scene where the relative position of the light source unit and the object to be detected is not fixed and the distance is long.
  • it utilizes the difference in subsurface scattering properties of living skin and other materials for living body detection, which can effectively prevent photos, videos, and area attacks, and increases the safety and ease of use of the living body detection system without special user cooperation.
  • FIG. 9 is a schematic block diagram illustrating a living body detecting system according to an embodiment of the present invention.
  • a living body detecting system 9 according to an embodiment of the present invention includes a processor 91, a memory 92, and computer program instructions 93 stored in the memory 92.
  • when executed by the processor 91, the computer program instructions 93 may implement the functions of the respective functional modules of the living body detection system according to an embodiment of the present invention, and/or may perform the various steps of the living body detection method according to an embodiment of the present invention.
  • the following steps are performed: acquiring video data collected via the video data collecting device; capturing an image of the face of the object to be detected illuminated by the laser light source; calculating a spot area of the image of the face of the object to be detected; and comparing the spot area with a first predetermined area threshold, and, if the spot area is greater than the first predetermined area threshold, determining that the object to be detected is a living body.
  • when the computer program instructions 93 are executed by the processor 91, the step of calculating the spot area of the image of the face of the object to be detected comprises: acquiring image matrix data of the image of the face of the object to be detected; performing binarization conversion on the image matrix data based on the first predetermined grayscale threshold, so that pixel points of the image matrix data whose grayscale value is greater than or equal to the first predetermined threshold are converted into first-type pixel points having a first grayscale value, and pixel points whose grayscale value is smaller than the first predetermined threshold are converted into second-type pixel points having a second grayscale value, so as to obtain first binarized image matrix data, the first grayscale value being greater than the second grayscale value; and determining the largest connected component of first-type pixel points in the first binarized image matrix data, and calculating the area corresponding to that largest connected component as the spot area.
  • when the computer program instructions 93 are executed by the processor 91, the step of acquiring the image of the face of the object to be detected illuminated by the light source comprises: acquiring an image of the object to be detected illuminated via the laser light source, and determining a region image of that image corresponding to a predetermined region of the object to be detected as the image of the face of the object to be detected.
  • when the computer program instructions 93 are executed by the processor 91, the step of acquiring the image matrix data of the image of the face of the object to be detected comprises: acquiring preliminary image matrix data of the face of the object to be detected illuminated by the laser light source; performing binarization conversion on the preliminary image matrix data based on the first predetermined grayscale threshold, so that pixel points of the preliminary image matrix data whose grayscale value is greater than or equal to the first predetermined threshold are converted into first-type pixel points having the first grayscale value, and pixel points whose grayscale value is smaller than the first predetermined threshold are converted into second-type pixel points having the second grayscale value, so as to obtain binarized preliminary image matrix data; determining the largest connected component of first-type pixel points in the binarized preliminary image matrix data, and calculating a first centroid position corresponding to that largest connected component; determining a second centroid position of a predetermined region of the preliminary image corresponding to the face of the object to be detected; adjusting the light emission direction of the laser light source so that the first centroid position coincides with the second centroid position; and acquiring the image matrix data of the image of the face of the object to be detected illuminated by the laser light source with the adjusted light emission direction.
  • Each module in the living body detecting system according to an embodiment of the present invention may be implemented by computer program instructions stored in a memory and run by a processor in the living body detecting system according to an embodiment of the present invention, or may be implemented when computer instructions stored in the computer readable storage medium of a computer program product according to an embodiment of the present invention are executed by a computer.
  • the computer readable storage medium can be any combination of one or more computer readable storage media; for example, one computer readable storage medium contains computer readable program code for randomly generating a sequence of action instructions, and another computer readable storage medium contains computer readable program code for performing face activity recognition.
  • the computer readable storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet, a hard disk of a personal computer, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM), a portable compact disk read only memory (CD-ROM), a USB memory, or any combination of the above storage media.


Abstract

The present invention relates to a living body detection method, a living body detection system and a computer program product capable of performing living body detection. The living body detection method comprises: using a laser light source to illuminate the face of an object to be detected; capturing an image of the face of the object to be detected illuminated by the laser light source; calculating a light spot area of the image of the face of the object to be detected; and comparing the light spot area with a first predetermined area threshold, and, if the light spot area is greater than the first predetermined area threshold, determining that the object to be detected is a living body.
PCT/CN2015/080963 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et progiciel informatique WO2016197297A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2015/080963 WO2016197297A1 (fr) 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et progiciel informatique
CN201580000335.6A CN105637532B (zh) 2015-06-08 2015-06-08 活体检测方法、活体检测***以及计算机程序产品
US15/580,210 US10614291B2 (en) 2015-06-08 2015-06-08 Living body detection method, living body detection system and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/080963 WO2016197297A1 (fr) 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et progiciel informatique

Publications (1)

Publication Number Publication Date
WO2016197297A1 true WO2016197297A1 (fr) 2016-12-15

Family

ID=56050768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/080963 WO2016197297A1 (fr) 2015-06-08 2015-06-08 Procédé de détection de corps vivant, système de détection de corps vivant et progiciel informatique

Country Status (3)

Country Link
US (1) US10614291B2 (fr)
CN (1) CN105637532B (fr)
WO (1) WO2016197297A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664880A (zh) * 2017-03-27 2018-10-16 三星电子株式会社 活性测试方法和设备
CN111046703A (zh) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 人脸防伪检测方法、装置及多目相机
CN112084980A (zh) * 2020-09-14 2020-12-15 北京数衍科技有限公司 行人的脚步状态识别方法和装置

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105912986B (zh) * 2016-04-01 2019-06-07 北京旷视科技有限公司 一种活体检测方法和***
CN107992794B (zh) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 一种活体检测方法、装置和存储介质
CN108363939B (zh) * 2017-01-26 2022-03-04 阿里巴巴集团控股有限公司 特征图像的获取方法及获取装置、用户认证方法
CN107451556B (zh) * 2017-07-28 2021-02-02 Oppo广东移动通信有限公司 检测方法及相关产品
WO2019171827A1 (fr) * 2018-03-08 2019-09-12 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations et programme
US10867506B2 (en) * 2018-03-16 2020-12-15 Sean Michael Siembab Surrounding intelligent motion sensor with adaptive recognition
US10438477B1 (en) * 2018-03-16 2019-10-08 Sean Michael Siembab Surrounding intelligent motion sensor
JP7131118B2 (ja) * 2018-06-22 2022-09-06 富士通株式会社 認証装置、認証プログラム、認証方法
CN112639802B (zh) * 2018-09-18 2024-06-28 Oppo广东移动通信有限公司 用于生成伪结构光照射面部的方法、***及存储介质
US10885363B2 (en) 2018-10-25 2021-01-05 Advanced New Technologies Co., Ltd. Spoof detection using structured light illumination
US10783388B2 (en) * 2018-10-26 2020-09-22 Alibaba Group Holding Limited Spoof detection using multiple image acquisition devices
CN111310528B (zh) * 2018-12-12 2022-08-12 马上消费金融股份有限公司 一种图像检测方法、身份验证方法、支付方法及装置
US11170242B2 (en) 2018-12-26 2021-11-09 Advanced New Technologies Co., Ltd. Spoof detection using dual-band fluorescence
US10970574B2 (en) 2019-02-06 2021-04-06 Advanced New Technologies Co., Ltd. Spoof detection using dual-band near-infrared (NIR) imaging
US11328043B2 (en) 2019-03-15 2022-05-10 Advanced New Technologies Co., Ltd. Spoof detection by comparing images captured using visible-range and infrared (IR) illuminations
CN110059638A (zh) * 2019-04-19 2019-07-26 中控智慧科技股份有限公司 一种身份识别方法及装置
CN110720105A (zh) * 2019-09-11 2020-01-21 深圳市汇顶科技股份有限公司 人脸防伪检测方法、装置、芯片、电子设备和计算机可读介质
CN110781770B (zh) * 2019-10-08 2022-05-06 高新兴科技集团股份有限公司 基于人脸识别的活体检测方法、装置及设备
US11250282B2 (en) * 2019-11-14 2022-02-15 Nec Corporation Face spoofing detection using a physical-cue-guided multi-source multi-channel framework
CN113096059B (zh) * 2019-12-19 2023-10-31 合肥君正科技有限公司 一种车内监控相机排除夜晚光源干扰遮挡检测的方法
US20210334505A1 (en) * 2020-01-09 2021-10-28 AuthenX Inc. Image capturing system and method for capturing image
US11468712B2 (en) * 2020-01-09 2022-10-11 AuthenX Inc. Liveness detection apparatus, system and method
CN111401223B (zh) * 2020-03-13 2023-09-19 北京新氧科技有限公司 一种脸型对比方法、装置及设备
JP7383542B2 (ja) * 2020-03-24 2023-11-20 株式会社東芝 光検出器及び距離計測装置
CN113468920A (zh) * 2020-03-31 2021-10-01 深圳市光鉴科技有限公司 基于人脸光斑图像的活体检测方法、***、设备及介质
CN111814659B (zh) * 2020-07-07 2024-03-29 杭州海康威视数字技术股份有限公司 一种活体检测方法、和***
CN112633181B (zh) * 2020-12-25 2022-08-12 北京嘀嘀无限科技发展有限公司 数据处理方法、***、装置、设备和介质
CN112766175B (zh) * 2021-01-21 2024-05-28 宠爱王国(北京)网络科技有限公司 活体检测方法、装置及非易失性存储介质
CN115205246B (zh) * 2022-07-14 2024-04-09 中国南方电网有限责任公司超高压输电公司广州局 换流阀电晕放电紫外图像特征提取方法和装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5077803A (en) * 1988-09-16 1991-12-31 Fujitsu Limited Biological detecting system and fingerprint collating system employing same
WO2001001329A1 (fr) * 1999-06-24 2001-01-04 British Telecommunications Public Limited Company Identification personnelle
CN1426760A (zh) * 2001-12-18 2003-07-02 中国科学院自动化研究所 基于活体虹膜的身份识别方法
CN102129558A (zh) * 2011-01-30 2011-07-20 哈尔滨工业大学 基于普尔钦斑分析的虹膜采集***及虹膜采集方法

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10105707A (ja) * 1996-09-25 1998-04-24 Sony Corp 画像照合装置
JP2003075135A (ja) 2001-08-31 2003-03-12 Nec Corp 指紋画像入力装置および指紋画像による生体識別方法
CA2479664A1 (fr) * 2004-09-24 2006-03-24 Edythe P. Lefeuvre Methode et systeme de detection de l'orientation d'images
JP2007193729A (ja) * 2006-01-23 2007-08-02 Seiko Epson Corp 印刷装置、画像処理装置、印刷方法、および画像処理方法
JP4951291B2 (ja) 2006-08-08 2012-06-13 株式会社日立メディアエレクトロニクス 生体認証装置
CN100573553C (zh) * 2007-01-18 2009-12-23 中国科学院自动化研究所 基于薄板样条形变模型的活体指纹检测方法
US8218862B2 (en) * 2008-02-01 2012-07-10 Canfield Scientific, Incorporated Automatic mask design and registration and feature detection for computer-aided skin analysis
EP2420971A4 (fr) * 2009-04-13 2017-08-23 Fujitsu Limited Dispositif d'enregistrement d'informations biométriques, procédé d'enregistrement d'informations biométriques, programme d'ordinateur pour enregistrer des informations biométriques, dispositif d'authentification biométrique, procédé d'authentification biométrique, et programme d'ordinateur pour une authentification biométrique
JP5365407B2 (ja) * 2009-08-17 2013-12-11 ソニー株式会社 画像取得装置及び画像取得方法
JP5507181B2 (ja) * 2009-09-29 2014-05-28 富士フイルム株式会社 放射線画像撮影装置及び放射線画像撮影装置の動作方法
KR20140002034A (ko) * 2011-05-12 2014-01-07 애플 인크. 존재 감지
JP5831018B2 (ja) * 2011-07-29 2015-12-09 富士通株式会社 生体認証装置及び生体認証装置における利用者の手の位置の調整方法
FR2981769B1 (fr) 2011-10-25 2013-12-27 Morpho Dispositif anti-fraude
JP5896792B2 (ja) * 2012-03-09 2016-03-30 キヤノン株式会社 非球面計測方法、非球面計測装置および光学素子加工装置
CN102860845A (zh) * 2012-08-30 2013-01-09 中国科学技术大学 活体动物体内的细胞的捕获、操控方法及相应的装置
JP5859934B2 (ja) * 2012-09-04 2016-02-16 富士フイルム株式会社 放射線撮影システム並びにその作動方法、および放射線画像検出装置並びにその作動プログラム
JP6091866B2 (ja) * 2012-11-30 2017-03-08 株式会社キーエンス 計測顕微鏡装置、画像生成方法及び計測顕微鏡装置操作プログラム並びにコンピュータで読み取り可能な記録媒体
JP6041669B2 (ja) * 2012-12-28 2016-12-14 キヤノン株式会社 撮像装置及び撮像システム
CN103393401B (zh) * 2013-08-06 2015-05-06 中国科学院光电技术研究所 一种双波前矫正器活体人眼视网膜高分辨力成像***
JP6303332B2 (ja) * 2013-08-28 2018-04-04 富士通株式会社 画像処理装置、画像処理方法および画像処理プログラム
EP4250738A3 (fr) * 2014-04-22 2023-10-11 Snap-Aid Patents Ltd. Procédé de commande d'un appareil photo d'après un traitement d'une image capturée par un autre appareil photo
US20170119298A1 (en) * 2014-09-02 2017-05-04 Hong Kong Baptist University Method and Apparatus for Eye Gaze Tracking and Detection of Fatigue

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5077803A (en) * 1988-09-16 1991-12-31 Fujitsu Limited Biological detecting system and fingerprint collating system employing same
WO2001001329A1 (fr) * 1999-06-24 2001-01-04 British Telecommunications Public Limited Company Identification personnelle
CN1426760A (zh) * 2001-12-18 2003-07-02 中国科学院自动化研究所 基于活体虹膜的身份识别方法
CN102129558A (zh) * 2011-01-30 2011-07-20 哈尔滨工业大学 基于普尔钦斑分析的虹膜采集***及虹膜采集方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664880A (zh) * 2017-03-27 2018-10-16 Samsung Electronics Co., Ltd. Liveness test method and apparatus
EP3382598A3 (fr) * 2017-03-27 2018-12-19 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US11176392B2 (en) 2017-03-27 2021-11-16 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US11721131B2 (en) 2017-03-27 2023-08-08 Samsung Electronics Co., Ltd. Liveness test method and apparatus
CN108664880B (zh) * 2017-03-27 2023-09-05 Samsung Electronics Co., Ltd. Liveness test method and apparatus
CN111046703A (zh) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 Face anti-spoofing detection method and device, and multi-view camera
CN111046703B (zh) * 2018-10-12 2023-04-18 杭州海康威视数字技术股份有限公司 Face anti-spoofing detection method and device, and multi-view camera
CN112084980A (zh) * 2020-09-14 2020-12-15 北京数衍科技有限公司 Method and device for recognizing the footstep state of pedestrians
CN112084980B (zh) * 2020-09-14 2024-05-28 北京数衍科技有限公司 Method and device for recognizing the footstep state of pedestrians

Also Published As

Publication number Publication date
CN105637532B (zh) 2020-08-14
US20180165512A1 (en) 2018-06-14
CN105637532A (zh) 2016-06-01
US10614291B2 (en) 2020-04-07

Similar Documents

Publication Publication Date Title
WO2016197297A1 (fr) Living body detection method, living body detection system, and computer program product
US10621454B2 (en) Living body detection method, living body detection system, and computer program product
EP2680191B1 (fr) Facial recognition
EP2680192B1 (fr) Facial recognition
US9524421B2 (en) Differentiating real faces from representations
KR102317180B1 (ko) Face recognition apparatus and method for verifying liveness based on 3D depth information and infrared information
Kose et al. Mask spoofing in face recognition and countermeasures
JP2007280367A (ja) Face matching device
WO2019017080A1 (fr) Verification device and verification method
US11594076B2 (en) Remote biometric identification and lighting
KR101610525B1 (ko) Pupil detection apparatus considering illumination, and method thereof
US20230222842A1 (en) Improved face liveness detection using background/foreground motion analysis
KR20200080533A (ko) Apparatus and method for discriminating forged faces using feature point variation
KR101310040B1 (ko) Face recognition apparatus and method using adaptive illumination control
Das et al. Face liveness detection based on frequency and micro-texture analysis
Ohki et al. Efficient spoofing attack detection against unknown sample using end-to-end anomaly detection
Garud et al. Face liveness detection
KR101704717B1 (ko) Iris recognition apparatus and operating method thereof
KR20170076894A (ko) Digital image judgment system and method, and application system therefor
JP6896307B1 (ja) Image determination method and image determination device
KR102439216B1 (ko) Method and server for recognizing masked faces using an artificial intelligence deep learning model
RU2791821C1 (ru) Biometric identification system and biometric identification method
WO2023229498A1 (fr) Biometric identification system and method for biometric identification
Hemalatha et al. A study of liveness detection in face biometric systems
Saroha Enhancement of Face Recognition technology in Biometrics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 15894567; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 15580210; Country of ref document: US
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 15894567; Country of ref document: EP; Kind code of ref document: A1