WO2019011206A1 - Living Body Detection Method and Related Products - Google Patents

Living Body Detection Method and Related Products

Info

Publication number
WO2019011206A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
human eye
target area
living body
eye image
Prior art date
Application number
PCT/CN2018/094964
Other languages
English (en)
French (fr)
Inventor
周意保
唐城
周海涛
张学勇
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2019011206A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/34: Smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; feature extraction
    • G06V 40/197: Matching; classification
    • G06V 40/40: Spoof detection, e.g. liveness detection
    • G06V 40/45: Detection of the body part being alive

Definitions

  • the present application relates to the field of electronic device technologies, and in particular, to a living body detecting method and related products.
  • Iris recognition is increasingly favored by electronic equipment manufacturers, and the safety of iris recognition is one of the important issues of concern.
  • In general, a living body check is performed on the iris before iris recognition; how to realize this living body detection is a problem that urgently needs to be solved.
  • the embodiment of the present application provides a living body detecting method and related products, which can realize living body detection.
  • An embodiment of the present application provides a living body detecting method, where the method includes:
  • acquiring a human eye image;
  • determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
  • determining, according to the target area, whether the human eye image is from a living body.
  • an embodiment of the present application provides an electronic device, including a camera and an application processor (AP), where
  • the camera is configured to acquire an image of a human eye
  • the AP is configured to determine, from the human eye image, a target area in the pupil area, where the first average brightness value of the target area is greater than a first preset threshold;
  • the AP is further configured to determine, according to the target area, whether the human eye image is from a living body.
  • the embodiment of the present application provides a living body detecting device, where the living body detecting device includes an acquiring unit, a first determining unit, and a determining unit, where
  • the acquiring unit is configured to acquire an image of a human eye
  • the first determining unit is configured to determine, from the human eye image, a target area in the pupil area, where the first average brightness value of the target area is greater than a first preset threshold;
  • the determining unit is configured to determine, according to the target area, whether the human eye image is from a living body.
  • An embodiment of the present application provides an electronic device, including a camera, an application processor (AP), a memory, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing some or all of the steps described in the first aspect.
  • An embodiment of the present application provides a computer readable storage medium used to store a computer program, where the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
  • An embodiment of the present application provides a computer program product, where the computer program product includes a non-transitory computer readable storage medium storing a computer program, the computer program being operative to cause a computer to execute some or all of the steps described in the first aspect of the embodiment of the present application.
  • the computer program product can be a software installation package.
  • In the embodiments of the present application, a human eye image is acquired; a target area in the pupil area, whose first average brightness value is greater than a first preset threshold, is determined from the human eye image; and whether the human eye image is from a living body is determined according to the target area.
  • In this way, the highlight area in the pupil area can be separated from the human eye image, and whether the human eye image is from a living body can then be confirmed according to the highlight area, thereby realizing living body detection.
  • FIG. 1A is a schematic structural diagram of a smart phone according to an embodiment of the present application.
  • FIG. 1B is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 1C is another schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 1D is a schematic flowchart of a living body detecting method disclosed in an embodiment of the present application.
  • FIG. 1E is a schematic diagram showing a human eye structure disclosed in an embodiment of the present application.
  • FIG. 1F is another schematic diagram of a human eye structure disclosed in an embodiment of the present application.
  • FIG. 2 is a schematic flow chart of another living body detecting method disclosed in an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 4A is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present application.
  • FIG. 4B is a schematic structural diagram of the determining unit of the living body detecting device described in FIG. 4A according to an embodiment of the present application.
  • FIG. 4C is a schematic structural diagram of the first extraction module of the determining unit described in FIG. 4B according to an embodiment of the present application.
  • FIG. 4D is another schematic structural diagram of a living body detecting device provided by an embodiment of the present application.
  • FIG. 4E is another schematic structural diagram of a living body detecting device according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application.
  • The appearances of this phrase in various places in the specification do not necessarily refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
  • The electronic device involved in the embodiments of the present application may include various handheld devices having wireless communication functions, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
  • For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
  • The iris recognition device of the smart phone 100 may include an infrared fill light 21 and an infrared camera 22. During operation of the iris recognition device, the light of the infrared fill light 21 illuminates the iris, and the iris recognition device collects the iris image.
  • The visible light camera 23 is a front camera.
  • The fill light 24 can be a visible light fill lamp, which can assist the front camera in completing shooting in a dark visual environment.
  • FIG. 1B is a schematic structural diagram of an electronic device 100.
  • The electronic device 100 includes an application processor (AP) 110, a camera 120, and an iris recognition device 130.
  • The iris recognition device 130 can be integrated with the camera 120, or the iris recognition device 130 and the camera 120 can exist independently, wherein the AP 110 is connected to the camera 120 and the iris recognition device 130 via a bus 150.
  • Further, FIG. 1C is a variant of the electronic device 100 depicted in FIG. 1B; relative to FIG. 1B, it further includes a fill light 160, which is a visible light fill light mainly used to fill light when the camera 120 takes a picture.
  • the camera 120 is configured to acquire a human eye image, and send the human eye image to the AP 110;
  • the AP 110 is configured to determine, from the human eye image, a target area in the pupil area, where the first average brightness value of the target area is greater than a first preset threshold;
  • the AP 110 is further configured to determine, according to the target area, whether the human eye image is from a living body.
  • In terms of determining, according to the target area, whether the human eye image is from a living body, the AP 110 is specifically configured to:
  • perform feature extraction on the target area to obtain a target feature set; train the target feature set by using a preset living body detection classifier to obtain a training result; and determine, according to the training result, whether the human eye image is from a living body.
  • In terms of performing feature extraction on the target area to obtain the target feature set, the AP 110 is specifically configured to:
  • perform image denoising processing on the target area; perform smoothing processing on the denoised target area; and perform feature extraction on the smoothed target area to obtain the target feature set.
  • the AP 110 is further specifically configured to:
  • determine an iris region in the human eye image; determine a second average luminance value corresponding to the iris region; and, when a difference between the first average luminance value and the second average luminance value is greater than a second preset threshold, perform the step of determining whether the human eye image is from a living body according to the target region.
  • the electronic device is provided with a fill light 160;
  • The fill light 160 is configured to be activated when the ambient brightness is lower than a preset brightness threshold; the brightness of the fill light 160 is then adjusted, and the camera 120 is controlled to take a picture.
  • In terms of determining the target area in the pupil area from the human eye image, the AP 110 is specifically configured to:
  • determine a target pixel point corresponding to the maximum brightness value in the pupil area, and select an area within a preset radius range centered on the target pixel point as the target area.
  • FIG. 1D is a schematic flowchart of an iris living body detecting method, which is applied to an electronic device including a camera and an application processor (AP).
  • For the physical and structural diagrams of the electronic device, refer to FIG. 1A to FIG. 1C. The iris living body detection method includes:
  • the iris recognition device can be used to acquire the human eye image, or the human eye image can be acquired by the camera.
  • the iris recognition device can be installed on the same side of the touch display screen, for example, embedded in the touch display screen. Or, it can be installed next to the front camera.
  • the human eye image may be the entire human eye, or may be a partial human eye (for example, an iris image), or may be an image including a human eye image (for example, a human face image).
  • FIG. 1E shows a simple schematic diagram of the structure of the human eye.
  • the human eye image may include a pupil region, an eye white region and an iris region.
  • In a living human eye, a highlighted area will also appear in the pupil region, see FIG. 1F.
  • In the embodiment of the present application, the pupil region may be extracted from the human eye image, and further, the target region (highlight region) is extracted from the pupil region; the average luminance value of the target region (i.e., the first average luminance value) is greater than the first preset threshold.
  • The first preset threshold may be set by the user or default to a system value. Usually, during shooting of a living user, a part of the pupil area will be brighter than other areas.
  • determining a target area in the pupil area from the human eye image may include the following steps:
  • A11. Determine a target pixel point corresponding to a maximum brightness value in the pupil area.
  • A12. Select an area within a preset radius range centered on the target pixel point as the target area.
  • the preset radius range may be set by the user or the system defaults.
  • the brightness value of each pixel in the pupil area can be separately obtained, and then the target pixel corresponding to the maximum brightness value is selected.
  • a region of a preset radius range centered on the target pixel corresponding to the maximum luminance value may be selected as the target region.
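Steps A11-A12 above can be sketched in a few lines. The following is a hypothetical pure-Python illustration, not the patent's implementation: the function name, grid values, and radius are invented for demonstration.

```python
def find_target_region(pupil, radius):
    """pupil: 2D list of brightness values (0-255). Returns the centre pixel
    and the list of (row, col) coordinates within `radius` of it."""
    rows, cols = len(pupil), len(pupil[0])
    # A11: locate the target pixel corresponding to the maximum brightness value
    cy, cx = max(((r, c) for r in range(rows) for c in range(cols)),
                 key=lambda rc: pupil[rc[0]][rc[1]])
    # A12: select the area within the preset radius centred on that pixel
    region = [(r, c) for r in range(rows) for c in range(cols)
              if (r - cy) ** 2 + (c - cx) ** 2 <= radius ** 2]
    return (cy, cx), region

# Toy pupil patch with a specular highlight near the centre (values invented)
pupil = [
    [10, 12, 11, 10],
    [11, 90, 95, 12],
    [10, 92, 99, 11],
    [10, 11, 12, 10],
]
centre, region = find_target_region(pupil, radius=1)
```

In practice the pupil patch would come from segmenting the human eye image, and the radius would be the preset value set by the user or the system default.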
  • Optionally, determining a target area in the pupil area from the human eye image may include the following steps:
  • dividing the pupil region into X areas, where X is an integer greater than 1; the X areas may have the same size, and X may be set by the user or default to a system value;
  • calculating the average luminance value of each of the X areas to obtain X average luminance values, and selecting the area corresponding to the maximum of the X average luminance values as the target area.
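As a sketch of this alternative, the hypothetical code below splits the pupil into X equal horizontal bands and picks the band with the highest average brightness. Band-wise splitting is just one way to form X equal-size areas; all names and sample values are assumptions for illustration.

```python
def brightest_subregion(pupil, x_splits):
    """Split the pupil rows into `x_splits` equal horizontal bands and return
    the index of the band with the highest average brightness, plus that average."""
    rows = len(pupil)
    band = rows // x_splits
    best_avg, best_idx = -1.0, -1
    for i in range(x_splits):
        cells = [v for row in pupil[i * band:(i + 1) * band] for v in row]
        avg = sum(cells) / len(cells)
        if avg > best_avg:
            best_avg, best_idx = avg, i
    return best_idx, best_avg

pupil = [
    [10, 10], [12, 12],   # band 0
    [90, 90], [80, 80],   # band 1 (contains the highlight)
]
idx, avg = brightest_subregion(pupil, x_splits=2)
```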
  • Optionally, between step 101 and step 102, the following steps may be further included:
  • Image enhancement processing is performed on the human eye image.
  • The image enhancement processing may include, but is not limited to: image denoising (e.g., wavelet-transform denoising or highlight denoising), image restoration (e.g., Wiener filtering), and dark-vision enhancement algorithms (e.g., histogram equalization or grayscale stretching). After image enhancement processing of the human eye image, the quality of the image can be improved to some extent. Further, in performing step 102, the target area in the pupil area may be determined in the enhanced human eye image.
  • Optionally, between step 101 and step 102, the following steps may be further included:
  • A1. Perform image quality evaluation on the human eye image to obtain an image quality evaluation value.
  • A2. Perform step 102 when the image quality evaluation value is greater than a preset quality threshold.
  • the preset quality threshold may be set by the user or the system defaults, and the image quality of the human eye image may be first evaluated to obtain an image quality evaluation value, and the image quality evaluation value is used to determine whether the quality of the human eye image is good or bad.
  • When the image quality evaluation value is greater than the preset quality threshold, the human eye image quality is considered good, and step 102 is performed.
  • When the image quality evaluation value is less than the preset quality threshold, the human eye image quality may be considered poor, and step 102 may not be performed.
  • At least one image quality evaluation index may be used to perform image quality evaluation on the human eye image, thereby obtaining an image quality evaluation value.
  • Multiple image quality evaluation indices may be used, each corresponding to a weight; each index then yields an evaluation result when image quality evaluation is performed, and finally a weighted operation over these results yields the final image quality evaluation value.
  • Image quality evaluation indicators may include, but are not limited to, mean, standard deviation, entropy, sharpness, signal to noise ratio, and the like.
  • Image quality can be evaluated by using 2 to 10 image quality evaluation indicators. Specifically, the number of image quality evaluation indicators and which indicator are selected are determined according to specific implementation conditions. Of course, it is also necessary to select image quality evaluation indicators in combination with specific scenes, and the image quality indicators in the dark environment and the image quality evaluation in the bright environment may be different.
  • When image quality is evaluated with a single index, for example entropy, a larger entropy indicates higher image quality, and a smaller entropy indicates worse image quality.
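As an illustration of entropy as a single quality index, the following minimal sketch computes the Shannon entropy of an image's grey-level histogram; the helper name and sample grids are assumptions, not from the patent.

```python
import math

def image_entropy(pixels):
    """Shannon entropy (bits) of the grey-level histogram of a 2D image.
    Larger entropy -> richer grey-level content -> higher quality."""
    flat = [v for row in pixels for v in row]
    n = len(flat)
    hist = {}
    for v in flat:
        hist[v] = hist.get(v, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

flat_img = [[128] * 4] * 4          # uniform image: a single grey level
varied_img = [[0, 64], [128, 255]]  # four equally likely grey levels
```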
  • The image may also be evaluated by using multiple image quality evaluation indices. When image quality is evaluated, a weight may be set for each index; each index yields its own image quality evaluation value, and the final image quality evaluation value is obtained from these values and their corresponding weights.
  • For example, three image quality evaluation indices are used: index A, index B, and index C, with weights a1, a2, and a3 respectively.
  • When A, B, and C are used to evaluate the image quality of an image, the image quality evaluation value corresponding to A is b1, the value corresponding to B is b2, and the value corresponding to C is b3.
  • The final image quality evaluation value is a1·b1 + a2·b2 + a3·b3.
  • In general, the larger the image quality evaluation value, the better the image quality.
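The weighted evaluation a1·b1 + a2·b2 + a3·b3 can be computed directly. A minimal sketch; the weights and per-index results below are invented for illustration:

```python
def quality_score(results, weights):
    """Weighted image quality evaluation value: a1*b1 + a2*b2 + a3*b3 ..."""
    assert len(results) == len(weights)
    return sum(a * b for a, b in zip(weights, results))

# Hypothetical per-index results b1, b2, b3 and weights a1, a2, a3
score = quality_score(results=[0.9, 0.8, 0.7], weights=[0.5, 0.3, 0.2])
```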
  • Optionally, in determining, according to the target area, whether the human eye image is from a living body: for example, the current ambient brightness is acquired, a target brightness range corresponding to the current ambient brightness is determined according to a preset mapping relationship between ambient brightness and target area brightness, and when the first average brightness value falls within the target brightness range, it is confirmed that the human eye image is from a living body.
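A hypothetical sketch of this mapping-based check follows. The patent only states that a preset mapping between ambient brightness and target-area brightness exists; the table entries and names below are invented for illustration.

```python
# Hypothetical mapping: (ambient_lo, ambient_hi, expected highlight range)
BRIGHTNESS_RANGES = [
    (0,   50,   (180, 230)),   # dark environment
    (50,  200,  (160, 220)),   # indoor
    (200, 1000, (140, 210)),   # bright environment
]

def is_live_by_brightness(ambient, first_avg):
    """Return True when the target area's first average brightness value
    falls inside the range mapped from the current ambient brightness."""
    for lo, hi, (rlo, rhi) in BRIGHTNESS_RANGES:
        if lo <= ambient < hi:
            return rlo <= first_avg <= rhi
    return False
```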
  • determining, according to the target area, whether the human eye image is from a living body may include the following steps:
  • performing feature extraction on the target area to obtain a target feature set; training the target feature set by using a preset living body detection classifier to obtain a training result; and determining, according to the training result, whether the human eye image is from a living body.
  • the above-mentioned living body detection classifier may include, but is not limited to, a support vector machine (SVM), a genetic algorithm classifier, a neural network algorithm classifier, a cascade classifier (such as a genetic algorithm + SVM), and the like.
  • Feature extraction can be performed on the target area to obtain the target feature set.
  • the above feature extraction can be implemented by using an algorithm such as a Harris corner detection algorithm, a scale invariant feature transform (SIFT), a SUSAN corner detection algorithm, and the like, and details are not described herein.
  • the target feature set can be trained by using a preset living body detection classifier to obtain a training result, and according to the training result, it is determined whether the human eye image is from a living body.
  • The training result may be a probability value; for example, if the probability value is 80%, the human eye image may be considered to be from a living body. If the human eye image is from a non-living body, the non-living body may be one of the following: 3D-printed human eyes, human eyes in photos, or human eyes without vital signs.
  • The preset living body detection classifier may be configured before implementing the foregoing embodiment, and its setting may mainly include the following steps B1-B7:
  • the first-class target classifier and the second-class target classifier are used together as the preset living body detection classifier.
  • Here, the target area refers to an area in the pupil area whose average brightness value is greater than the first preset threshold. The negative sample set contains B negative samples, each being the target area of a non-living pupil. Both A and B can be set by the user, and in general, the larger the sample sets, the better the classification effect of the classifier.
  • The first designated classifier and the second designated classifier may be the same classifier or different classifiers; either one may include, but is not limited to: a support vector machine (SVM), a genetic algorithm classifier, a neural network algorithm classifier, a cascade classifier (e.g., genetic algorithm + SVM), and the like.
  • performing feature extraction on the target area to obtain a target feature set may include the following steps:
  • The image denoising process may include, but is not limited to: wavelet-transform denoising, mean filtering, median filtering, morphological noise filtering, and the like.
  • After performing image denoising on the target area, a target smoothing processing coefficient corresponding to the first average brightness value is determined according to a preset correspondence between brightness values and smoothing processing coefficients; the denoised target area is then smoothed according to the target smoothing processing coefficient, which can improve the image quality of the target area to some extent, and feature extraction is performed on the smoothed target area to obtain the target feature set. In this way, more features can be extracted from the target area.
  • The target feature set may be a set of multiple feature points.
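A minimal sketch of the brightness-dependent smoothing step, shown in 1-D for brevity: the coefficient table and kernel sizes are assumptions, since the patent does not specify the smoothing filter or the correspondence table.

```python
def smoothing_coefficient(avg_brightness):
    """Hypothetical lookup of a smoothing kernel size from the first average
    brightness value; the mapping itself is invented for illustration."""
    return 3 if avg_brightness < 128 else 5

def smooth_1d(signal, k):
    """Moving-average smoothing with odd window k; window is clamped at edges."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

coeff = smoothing_coefficient(200)            # bright highlight region
smoothed = smooth_1d([0, 0, 10, 0, 0], coeff)  # isolated spike is spread out
```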
  • Optionally, between step 102 and step 103, the following steps may be further included:
  • determining an iris region in the human eye image; determining a second average brightness value corresponding to the iris region; and, when a difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, performing the step of determining, according to the target area, whether the human eye image is from a living body.
  • the second preset threshold may be set by the user or the system defaults.
  • In a specific implementation, the iris area can be determined from the human eye image; for example, the iris area can be extracted from the human eye image by image segmentation, and the average brightness value of the iris area can be determined to obtain the second average brightness value.
  • In general, for a living body there is a certain difference between the brightness of the iris area and the brightness of the target area. Further, it may be determined whether the difference between the first average brightness value and the second average brightness value is greater than the second preset threshold; if yes, step 103 is performed; if no, the human eye image is considered to be from a non-living body.
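This contrast check reduces to a single comparison; a hypothetical sketch (function name and sample values invented):

```python
def passes_iris_contrast_check(first_avg, second_avg, second_threshold):
    """A living pupil's highlight should be clearly brighter than the iris:
    the check passes only when (first_avg - second_avg) exceeds the second
    preset threshold; otherwise the image is treated as from a non-living body."""
    return (first_avg - second_avg) > second_threshold

live = passes_iris_contrast_check(first_avg=200, second_avg=120, second_threshold=50)
flat = passes_iris_contrast_check(first_avg=200, second_avg=180, second_threshold=50)
```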
  • In this embodiment, a human eye image is acquired; a target area in the pupil area, whose first average brightness value is greater than the first preset threshold, is determined from the human eye image; and whether the human eye image is from a living body is determined according to the target area.
  • In this way, the highlight area in the pupil area can be separated from the human eye image, and whether the human eye image is from a living body can then be confirmed according to the highlight area, thereby realizing living body detection.
  • A living human eye produces a reflective highlight, particularly in the pupil, whereas a non-living human eye does not; therefore, living body detection can be performed according to this feature, thereby realizing iris living body detection.
  • FIG. 2 is a schematic flowchart of another iris living body detecting method, which is applied to an electronic device including a camera, a fill light, and an application processor (AP).
  • For the physical and structural diagrams of the electronic device, refer to FIG. 1A to FIG. 1C. The iris living body detection method includes:
  • the preset brightness threshold may be set by the user or the system defaults.
  • The ambient light sensor can be used to detect the ambient brightness; the fill light can then be turned on and used together with the iris recognition device or the face recognition device for human eye image collection. Further, the brightness of the fill light can be adjusted: the correspondence between ambient brightness and the adjustment coefficient of the fill light can be set in advance, so that after the ambient brightness is determined, the brightness of the fill light can be adjusted accordingly.
  • The camera can then be controlled to take a picture, obtain an output image, and acquire the human eye image from the output image.
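A hypothetical sketch of the fill-light control flow: the patent only states that a preset correspondence between ambient brightness and the adjustment coefficient exists, so the threshold and linear coefficient below are invented for illustration.

```python
def fill_light_level(ambient, threshold=50):
    """Return a fill-light brightness (0-255, 0 = off) from ambient brightness.
    When the environment is bright enough, the fill light stays off; the
    darker the environment, the stronger the fill light (linear rule assumed)."""
    if ambient >= threshold:
        return 0
    # Hypothetical adjustment coefficient of 5 per unit of brightness deficit
    return min(255, int((threshold - ambient) * 5))
```

In the method of FIG. 2, this level would be applied before the camera is controlled to take the picture from which the human eye image is acquired.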
  • It can be seen that, when the ambient brightness is lower than the preset brightness threshold, the fill light is activated, the brightness of the fill light is adjusted, and the camera is controlled to take a picture and acquire the human eye image; a target area in the pupil area, whose first average brightness value is greater than the first preset threshold, is then determined from the human eye image, and whether the human eye image is from a living body is determined according to the target area.
  • In this way, the highlighted area in the pupil area can be separated from the human eye image and living body detection can be realized, even in a dark visual environment.
  • A living human eye produces a reflective highlight, particularly in the pupil, whereas a non-living human eye does not; therefore, living body detection can be performed according to this feature, thereby realizing iris living body detection.
  • FIG. 3 shows an electronic device according to an embodiment of the present application, including an application processor (AP) and a memory; the electronic device may further include a camera and a fill light; and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the AP, the programs comprising instructions for performing the following steps:
  • acquiring a human eye image; determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold; and determining, according to the target area, whether the human eye image is from a living body.
  • the program includes instructions for performing the following steps:
  • performing feature extraction on the target area to obtain a target feature set; training the target feature set by using a preset living body detection classifier to obtain a training result; and determining, according to the training result, whether the human eye image is from a living body.
  • the program includes instructions for performing the following steps:
  • performing image denoising processing on the target area; determining, according to a preset correspondence between brightness values and smoothing processing coefficients, a target smoothing processing coefficient corresponding to the first average brightness value; performing smoothing processing on the denoised target area according to the target smoothing processing coefficient; and performing feature extraction on the smoothed target region to obtain the target feature set.
  • the program further includes instructions for performing the following steps:
  • determining an iris region in the human eye image; determining a second average brightness value corresponding to the iris region; and, when a difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, performing the step of determining, according to the target area, whether the human eye image is from a living body.
  • the electronic device is provided with a fill light
  • the program further comprises instructions for performing the following steps:
  • when the ambient brightness is lower than the preset brightness threshold, activating the fill light, adjusting the brightness of the fill light, and controlling the camera to take a picture.
  • In determining the target area in the pupil region from the human eye image, the program includes instructions for performing the following steps:
  • determining a target pixel point corresponding to the maximum brightness value in the pupil area, and selecting an area within a preset radius range centered on the target pixel point as the target area.
  • FIG. 4A is a schematic structural diagram of a living body detecting apparatus according to the embodiment.
  • The living body detecting device is applied to an electronic device including a camera and an application processor (AP), and includes an acquiring unit 401, a first determining unit 402, and a determining unit 403, wherein:
  • the acquiring unit 401 is configured to control the camera to acquire a human eye image
  • the first determining unit 402 is configured to determine, from the human eye image, a target area in the pupil area, where the first average brightness value of the target area is greater than a first preset threshold;
  • the determining unit 403 is configured to determine, according to the target area, whether the human eye image is from a living body.
  • FIG. 4B is a specific detail structure of the determining unit 403 of the living body detecting apparatus described in FIG. 4A, and the determining unit 403 may include: a first extracting module 4031 and a training module 4032, as follows:
  • a first extraction module 4031 configured to perform feature extraction on the target area to obtain a target feature set
  • the training module 4032 is configured to train the target feature set by using a preset living body detection classifier to obtain a training result, and determine, according to the training result, whether the human eye image is from a living body.
  • FIG. 4C shows a specific detailed structure of the first extraction module 4031 of the determining unit 403 described in FIG. 4B; the first extraction module 4031 may include a denoising module 510, a determining module 520, a processing module 530, and a second extraction module 540, as follows:
  • the denoising module 510 is configured to perform image denoising processing on the target area
  • a determining module 520 configured to determine, according to a correspondence between the brightness value and the smoothing processing coefficient, a target smoothing processing coefficient corresponding to the first average brightness value
  • the processing module 530 is configured to perform smoothing processing on the target area after the image denoising process according to the target smoothing processing coefficient;
  • the second extraction module 540 is configured to perform feature extraction on the smoothed target region to obtain the target feature set.
  • FIG. 4D shows a modified structure of the living body detecting apparatus described in FIG. 4A; the apparatus may further include a second determining unit 404, as follows:
  • the second determining unit 404 is configured to determine an iris area in the human eye image;
  • the second determining unit 404 is further configured to determine a second average brightness value corresponding to the iris area; when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, the determining unit 403 performs the step of determining, according to the target area, whether the human eye image is from a living body.
  • The electronic device is provided with a fill light. As shown in FIG. 4E, which is a modified structure of the living body detecting apparatus described in FIG. 4A, the apparatus may further include a starting unit 405 and an adjusting unit 406, as follows:
  • the starting unit 405 is configured to activate the fill light when the ambient brightness is lower than a preset brightness threshold;
  • the adjusting unit 406 is configured to adjust the brightness of the fill light, after which the acquiring unit 401 controls the camera to take a picture and acquires a human eye image.
  • The first determining unit 402 is specifically configured to:
  • determine a target pixel point corresponding to the maximum brightness value in the pupil area, and select an area within a preset radius range centered on the target pixel point as the target area.
  • The living body detecting apparatus described in the embodiments of the present application acquires a human eye image, determines, from the human eye image, a target area in the pupil area whose first average brightness value is greater than a first preset threshold, and then determines, according to the target area, whether the human eye image is from a living body. In this way, the highlight area in the pupil area can be separated from the human eye image, whether the human eye image is from a living body can be confirmed according to the highlight area, and living body detection can thus be realized.
  • In practice, a living human eye, and in particular the pupil, reflects light, whereas a non-living human eye does not. Living body detection can therefore be performed according to this feature, thereby realizing iris living body detection.
  • An embodiment of the present application further provides another electronic device. As shown in FIG. 5, for convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application.
  • The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sales) terminal, an in-vehicle computer, and the like. The following takes a mobile phone as an example:
  • FIG. 5 is a block diagram of part of the structure of a mobile phone related to the electronic device provided by an embodiment of the present application.
  • The mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor AP980, a power supply 990, and other components.
  • the input unit 930 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • the input unit 930 may include a touch display screen 933, an iris recognition device 931, and other input devices 932.
  • the structure of the iris recognition device 931 can be referred to FIG. 1A to FIG. 1C, and details are not described herein again.
  • the input unit 930 can also include other input devices 932.
  • other input devices 932 may include, but are not limited to, one or more of physical buttons, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, cameras, fill lights, and the like.
  • The application processor AP980 is configured to perform the following operations:
  • acquiring a human eye image; determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold; and determining, according to the target area, whether the human eye image is from a living body.
  • The AP980 is the control center of the handset. It connects the various parts of the entire handset through various interfaces and lines, and performs the various functions of the handset and processes data by running or executing software programs and/or modules stored in the memory 920 and invoking data stored in the memory 920, thereby monitoring the handset as a whole.
  • The AP980 may include one or more processing units. Preferably, the AP980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication.
  • It can be understood that the modem processor may alternatively not be integrated into the AP980.
  • memory 920 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the RF circuit 910 can be used for receiving and transmitting information.
  • RF circuit 910 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • RF circuitry 910 can also communicate with the network and other devices via wireless communication.
  • The above wireless communication may use any communication standard or protocol, including but not limited to global system of mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and the like.
  • the handset may also include at least one type of sensor 950, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor can turn off the touch display when the mobile phone moves to the ear. And / or backlight.
  • As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, the magnitude and direction of gravity.
  • It can be used for applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer and tapping); other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the mobile phone, and details are not repeated here.
  • An audio circuit 960, a speaker 961, and a microphone 962 can provide an audio interface between the user and the handset.
  • The audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data. After the audio data is processed by the AP980, it is sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
  • WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help users send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access.
  • Although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
  • The mobile phone also includes a power supply 990 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the AP980 through a power management system, so that functions such as charging, discharging, and power management are managed through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • In the foregoing embodiments shown in FIG. 1D and FIG. 2, each step of the method flow can be implemented based on the structure of this mobile phone.
  • In the foregoing embodiments shown in FIG. 3 and FIG. 4A to FIG. 4E, each unit function can be implemented based on the structure of this mobile phone.
  • An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program causing a computer to execute some or all of the steps of any living body detection method described in the foregoing method embodiments.
  • An embodiment of the present application further provides a computer program product, the computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any living body detection method described in the foregoing method embodiments.
  • the disclosed apparatus may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative.
  • The division into units is only a division by logical function; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • The mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software program module.
  • the integrated unit if implemented in the form of a software program module and sold or used as a standalone product, may be stored in a computer readable memory.
  • Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a computer-readable memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application disclose a living body detection method and related products. The method includes: acquiring a human eye image; determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold; and judging, according to the target area, whether the human eye image comes from a living body. By means of the embodiments of the present application, the highlight area in the pupil area can be separated from the human eye image, and whether the human eye image comes from a living body can then be confirmed according to the highlight area, thereby realizing living body detection.

Description

Living Body Detection Method and Related Products
This application claims priority to Chinese Patent Application No. 201710576784.6, entitled "Living Body Detection Method and Related Products" and filed on July 14, 2017, the contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of electronic devices, and in particular to a living body detection method and related products.
Background
With the widespread adoption of electronic devices (mobile phones, tablet computers, and the like), electronic devices support more and more applications and become increasingly powerful. Developing in diverse and personalized directions, they have become indispensable electronic products in users' lives.
At present, iris recognition is increasingly favored by electronic device manufacturers, and the security of iris recognition is one of their important concerns. For security reasons, living body detection is usually performed on the iris before iris recognition, and how to realize living body detection is a problem to be solved urgently.
Summary
Embodiments of the present application provide a living body detection method and related products, which can realize living body detection.
In a first aspect, an embodiment of the present application provides a living body detection method, the method including:
acquiring a human eye image;
determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
judging, according to the target area, whether the human eye image comes from a living body.
In a second aspect, an embodiment of the present application provides an electronic device including a camera and an application processor (AP), where
the camera is configured to acquire a human eye image;
the AP is configured to determine, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
the AP is further configured to judge, according to the target area, whether the human eye image comes from a living body.
In a third aspect, an embodiment of the present application provides a living body detecting apparatus, the living body detecting apparatus including an acquiring unit, a first determining unit, and a judging unit, where
the acquiring unit is configured to acquire a human eye image;
the first determining unit is configured to determine, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
the judging unit is configured to judge, according to the target area, whether the human eye image comes from a living body.
In a fourth aspect, an embodiment of the present application provides an electronic device including a camera, an application processor AP, and a memory, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing some or all of the steps described in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product including a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to execute some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Implementing the embodiments of the present application has the following beneficial effects:
It can be seen that, in the embodiments of the present application, a human eye image is acquired, a target area in the pupil area is determined from the human eye image, where a first average brightness value of the target area is greater than a first preset threshold, and whether the human eye image comes from a living body is judged according to the target area. In this way, the highlight area in the pupil area can be separated from the human eye image, and whether the human eye image comes from a living body can then be confirmed according to the highlight area, thereby realizing living body detection.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1A is a schematic structural diagram of a smartphone according to an embodiment of the present application;
FIG. 1B is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1C is another schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1D is a schematic flowchart of a living body detection method disclosed in an embodiment of the present application;
FIG. 1E is a schematic illustration of a human eye structure disclosed in an embodiment of the present application;
FIG. 1F is another schematic illustration of a human eye structure disclosed in an embodiment of the present application;
FIG. 2 is a schematic flowchart of another living body detection method disclosed in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram of a living body detecting apparatus according to an embodiment of the present application;
FIG. 4B is a schematic structural diagram of the judging unit of the living body detecting apparatus described in FIG. 4A according to an embodiment of the present application;
FIG. 4C is a schematic structural diagram of the first extraction module of the judging unit described in FIG. 4B according to an embodiment of the present application;
FIG. 4D is another schematic structural diagram of a living body detecting apparatus according to an embodiment of the present application;
FIG. 4E is another schematic structural diagram of a living body detecting apparatus according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish different objects rather than to describe a specific order. Furthermore, the terms "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The electronic devices involved in the embodiments of the present application may include various handheld devices with wireless communication capability, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the devices mentioned above are collectively referred to as electronic devices. The embodiments of the present application are described in detail below. As shown in FIG. 1A, in an example smartphone 100, the iris recognition device of the smartphone 100 may include an infrared fill light 21 and an infrared camera 22. During operation of the iris recognition device, light from the infrared fill light 21 strikes the iris and is reflected back to the infrared camera 22, and the iris recognition device collects an iris image. In addition, the visible light camera 23 is a front camera, and the fill light 24 may be a visible fill light used to assist the front camera in shooting in a dark-vision environment.
Referring to FIG. 1B, FIG. 1B is a schematic structural diagram of an electronic device 100. The electronic device 100 includes an application processor AP110, a camera 120, and an iris recognition device 130, where the iris recognition device 130 may be integrated with the camera 120, or the iris recognition device and the camera 120 may exist independently. The AP110 is connected to the camera 120 and the iris recognition device 130 through a bus 150. Further, referring to FIG. 1C, FIG. 1C is a modified structure of the electronic device 100 described in FIG. 1B. Compared with FIG. 1B, FIG. 1C further includes a fill light 160, which is a visible fill light mainly used to supplement light when the camera 120 takes a picture.
In some possible embodiments, the camera 120 is configured to acquire a human eye image and send the human eye image to the AP110;
the AP110 is configured to determine, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
the AP110 is further configured to judge, according to the target area, whether the human eye image comes from a living body.
In some possible embodiments, in terms of judging, according to the target area, whether the human eye image comes from a living body, the AP110 is specifically configured to:
perform feature extraction on the target area to obtain a target feature set; train the target feature set by using a preset living body detection classifier to obtain a training result, and judge, according to the training result, whether the human eye image comes from a living body.
In some possible embodiments, in terms of performing feature extraction on the target area to obtain the target feature set, the AP110 is specifically configured to:
perform image denoising processing on the target area; determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value; perform smoothing processing on the denoised target area according to the target smoothing coefficient; and perform feature extraction on the smoothed target area to obtain the target feature set.
In some possible embodiments, the AP110 is further specifically configured to:
determine an iris area in the human eye image; determine a second average brightness value corresponding to the iris area; and when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, perform the step of judging, according to the target area, whether the human eye image comes from a living body.
In some possible embodiments, the electronic device is provided with a fill light 160;
the fill light 160 is specifically configured to be activated when the ambient brightness is lower than a preset brightness threshold, have its brightness adjusted, and control the camera 120 to take a picture.
In some possible embodiments, in terms of determining the target area in the pupil area from the human eye image, the AP110 is specifically configured to:
determine a target pixel point corresponding to the maximum brightness value in the pupil area;
select an area within a preset radius range centered on the target pixel point as the target area.
Referring to FIG. 1D, FIG. 1D is a schematic flowchart of an iris living body detection method provided by an embodiment of the present application, applied to an electronic device including a camera and an application processor AP. For physical and structural diagrams of the electronic device, reference may be made to FIG. 1A to FIG. 1C. The iris living body detection method includes the following steps.
101. Acquire a human eye image.
The human eye image may be acquired by an iris recognition device or by a camera. In the embodiments of the present application, the iris recognition device may be installed in the same area as the touch display screen, for example, embedded in the touch display screen, or installed next to the front camera. The human eye image may be of the whole human eye, part of the human eye (for example, an iris image), or an image containing a human eye image (for example, a face image).
102. Determine, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold.
As shown in FIG. 1E and FIG. 1F, FIG. 1E is a simple schematic diagram of a human eye structure. It can be seen that a human eye image may include a pupil area, a sclera area, and an iris area; under the influence of an external light source, a highlight area also appears in the pupil area, as shown in FIG. 1F. In the embodiments of the present application, the pupil area may be extracted from the human eye image, and the target area (the highlight area) may further be extracted from the pupil area, where the average brightness value of the target area (that is, the first average brightness value) is greater than the first preset threshold. The first preset threshold may be set by the user or default to a system value. Normally, during shooting, part of the pupil area of a living body is markedly brighter than the other areas.
Optionally, in step 102, determining the target area in the pupil area from the human eye image may include the following steps:
A11. Determine a target pixel point corresponding to the maximum brightness value in the pupil area.
A12. Select an area within a preset radius range centered on the target pixel point as the target area.
The preset radius range may be set by the user or default to a system value. The brightness value of each pixel point in the pupil area may be obtained, and the target pixel point corresponding to the maximum brightness value may then be selected. The area within the preset radius range centered on this target pixel point may be taken as the target area.
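Steps A11 and A12 can be sketched as follows. The simulated pupil array, the single-pixel highlight, and the radius of 2 are illustrative assumptions, not values from the application:

```python
import numpy as np

def select_target_area(pupil, radius):
    """Select the region within `radius` of the brightest pupil pixel.

    `pupil` is a 2-D array of brightness values; returns a boolean mask
    of the target area and the coordinates of the brightest pixel.
    """
    # A11: locate the pixel with the maximum brightness value.
    y0, x0 = np.unravel_index(np.argmax(pupil), pupil.shape)
    # A12: build a circular mask of the preset radius centred on that pixel.
    ys, xs = np.ogrid[:pupil.shape[0], :pupil.shape[1]]
    mask = (ys - y0) ** 2 + (xs - x0) ** 2 <= radius ** 2
    return mask, (y0, x0)

pupil = np.zeros((9, 9))
pupil[4, 4] = 255.0  # simulated specular highlight
mask, centre = select_target_area(pupil, radius=2)
first_avg = pupil[mask].mean()  # the "first average brightness value"
```

The returned `first_avg` is what would then be compared against the first preset threshold.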
Optionally, in step 102, determining the target area in the pupil area from the human eye image may include the following steps:
B11. Divide the pupil area into X areas, where X is an integer greater than 1.
B12. Calculate the average brightness value of each of the X areas to obtain X average brightness values.
B13. Select, from the X average brightness values, the area corresponding to the maximum average brightness value as the target area.
The X areas may be equal in size, and X may be set by the user or default to a system value. The pupil area may thus be divided into X areas, where X is an integer greater than 1; the average brightness value of each of the X areas is calculated to obtain X average brightness values, the maximum of the X average brightness values is selected, and its corresponding area is taken as the target area.
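A minimal sketch of steps B11 to B13, assuming a square pupil crop and a 2×2 grid (both illustrative choices):

```python
import numpy as np

def brightest_region(pupil, grid=(2, 2)):
    """Split the pupil crop into grid[0]*grid[1] equal areas (B11), compute
    each area's mean brightness (B12), and return the index and mean of the
    brightest one (B13)."""
    h, w = pupil.shape
    rows, cols = grid
    best_idx, best_avg = None, -1.0
    for i in range(rows):
        for j in range(cols):
            block = pupil[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            avg = float(block.mean())
            if avg > best_avg:
                best_idx, best_avg = (i, j), avg
    return best_idx, best_avg

pupil = np.zeros((4, 4))
pupil[0:2, 2:4] = 200.0  # top-right quadrant holds the highlight
idx, avg = brightest_region(pupil, grid=(2, 2))
```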
Optionally, the following step may further be included between step 101 and step 102:
performing image enhancement processing on the human eye image.
The image enhancement processing may include but is not limited to: image denoising (for example, wavelet-transform denoising or highlight denoising), image restoration (for example, Wiener filtering), and dark-vision enhancement algorithms (for example, histogram equalization and gray-level stretching). After image enhancement processing is performed on the iris image, the quality of the iris image can be improved to a certain extent. Further, when step 102 is performed, the target area in the pupil area may be determined from the enhanced human eye image.
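Of the enhancement options listed above, histogram equalization is the simplest to illustrate. The sketch below assumes an 8-bit grayscale crop; the tiny low-contrast test image is an illustrative assumption:

```python
import numpy as np

def equalize(img):
    """Histogram equalisation for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each grey level through the normalised cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[100, 100], [101, 101]], dtype=np.uint8)  # low contrast
out = equalize(img)  # contrast stretched to the full 0-255 range
```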
Optionally, the following steps may further be included between step 101 and step 102:
A1. Perform image quality evaluation on the human eye image to obtain an image quality evaluation value.
A2. Perform step 102 when the image quality evaluation value is greater than a preset quality threshold.
The preset quality threshold may be set by the user or default to a system value. Image quality evaluation may first be performed on the human eye image to obtain an image quality evaluation value, which is used to judge whether the quality of the human eye image is good or bad. When the image quality evaluation value is greater than the preset quality threshold, the human eye image may be considered of good quality and step 102 is performed; when the image quality evaluation value is smaller than the preset quality threshold, the human eye image may be considered of poor quality and step 102 may be skipped.
In step A1, at least one image quality evaluation index may be used to evaluate the quality of the iris image, thereby obtaining the image quality evaluation value.
Multiple image quality evaluation indices may be used, each corresponding to a weight. In this way, each image quality evaluation index yields an evaluation result for the iris image, and a weighted operation finally produces the overall image quality evaluation value. The image quality evaluation indices may include but are not limited to: mean, standard deviation, entropy, sharpness, signal-to-noise ratio, and so on.
It should be noted that, since a single evaluation index has certain limitations when evaluating image quality, multiple image quality evaluation indices may be used. Of course, more indices are not necessarily better: the more indices used, the higher the computational complexity of the evaluation process, and the evaluation effect is not necessarily better. Therefore, when higher evaluation accuracy is required, 2 to 10 image quality evaluation indices may be used. The number of indices and which indices to select depend on the specific implementation. The indices should also be selected according to the specific scenario; the indices selected for image quality evaluation in a dark environment may differ from those selected in a bright environment.
Optionally, when the required evaluation accuracy is not high, a single image quality evaluation index may be used. For example, when entropy is used to evaluate the image to be processed, a larger entropy may be considered to indicate better image quality, and conversely, a smaller entropy indicates worse image quality.
Optionally, when higher evaluation accuracy is required, multiple image quality evaluation indices may be used to evaluate the image, with a weight set for each index, yielding multiple image quality evaluation values from which a final image quality evaluation value is obtained according to the values and their corresponding weights. For example, suppose three image quality evaluation indices are used: index A with weight a1, index B with weight a2, and index C with weight a3. When A, B, and C are used to evaluate a certain image, the image quality evaluation value corresponding to A is b1, that corresponding to B is b2, and that corresponding to C is b3; the final image quality evaluation value is then a1b1 + a2b2 + a3b3. Normally, a larger image quality evaluation value indicates better image quality.
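The weighted evaluation a1b1 + a2b2 + a3b3 described above is just a weighted sum; the index scores and weights below are illustrative numbers, not values from the application:

```python
def quality_score(scores, weights):
    """Weighted image-quality value: sum of each index score times its weight."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# Three hypothetical indices A, B, C with scores b1, b2, b3
# and weights a1, a2, a3:
value = quality_score([0.9, 0.7, 0.8], [0.5, 0.3, 0.2])
passes = value > 0.75  # a hypothetical preset quality threshold
```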
103. Judge, according to the target area, whether the human eye image comes from a living body.
Whether the human eye image comes from a living body may be judged according to the target area. For example, the current ambient brightness may be obtained, and a target brightness range corresponding to the current ambient brightness may be determined according to a preset mapping between ambient brightness and target area brightness; when the first average brightness value falls within the target brightness range, it is confirmed that the human eye image comes from a living body.
Optionally, in step 103, judging, according to the target area, whether the human eye image comes from a living body may include the following steps:
31. Perform feature extraction on the target area to obtain a target feature set.
32. Train the target feature set by using a preset living body detection classifier to obtain a training result, and judge, according to the training result, whether the human eye image comes from a living body.
The living body detection classifier may include but is not limited to: a support vector machine (SVM), a genetic algorithm classifier, a neural network algorithm classifier, a cascade classifier (such as genetic algorithm + SVM), and so on. Feature extraction may be performed on the target area to obtain the target feature set. The feature extraction may be implemented by algorithms such as the Harris corner detection algorithm, the scale-invariant feature transform (SIFT), and the SUSAN corner detection algorithm, which are not described in detail here. Then, the preset living body detection classifier may be used to train the target feature set to obtain a training result, according to which it is judged whether the human eye image comes from a living body. The training result may be a probability value; for example, at a probability value of 80% or above, the human eye image may be considered to come from a living body, and below that, from a non-living body. The non-living body may be one of the following: a 3D-printed human eye, a human eye in a photograph, or a human eye without vital signs.
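The classification step can be sketched as below. The application names SVM and other classifiers; here a simple nearest-centroid rule stands in so the sketch stays dependency-free, and the two-dimensional features, samples, and the 0.8 probability threshold are all illustrative assumptions:

```python
import numpy as np

def train_centroids(live_feats, fake_feats):
    """Fit one centroid per class (a stand-in for the SVM named in the text)."""
    return np.mean(live_feats, axis=0), np.mean(fake_feats, axis=0)

def is_live(feat, centroids, threshold=0.8):
    """Convert centroid distances into a pseudo-probability of 'live'."""
    c_live, c_fake = centroids
    d_live = np.linalg.norm(feat - c_live)
    d_fake = np.linalg.norm(feat - c_fake)
    p = d_fake / (d_live + d_fake + 1e-12)
    return p >= threshold, p

live = np.array([[0.9, 0.8], [1.0, 0.9]])   # features of living-eye highlights
fake = np.array([[0.1, 0.2], [0.0, 0.1]])   # features of non-living samples
centroids = train_centroids(live, fake)
ok, p = is_live(np.array([0.95, 0.85]), centroids)
```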
The preset living body detection classifier may be set up before the embodiments of the present application are executed, and its setup may mainly include the following steps B1 to B7:
B1. Obtain a positive sample set, where the positive sample set contains the above target areas of A living bodies, and A is a positive integer.
B2. Obtain a negative sample set, where the negative sample set contains the above target areas of B non-living bodies, and B is a positive integer.
B3. Perform feature extraction on the positive sample set to obtain A groups of features.
B4. Perform feature extraction on the negative sample set to obtain B groups of features.
B5. Train the A groups of features by using a first specified classifier to obtain a first-class target classifier.
B6. Train the B groups of features by using a second specified classifier to obtain a second-class target classifier.
B7. Take the first-class target classifier and the second-class target classifier as the preset living body detection classifier.
The above target areas all refer to areas in the pupil area whose average brightness value is greater than the first preset threshold. Both A and B may be set by the user; the positive sample set contains A positive samples, each being the target area of a living body's pupil, and the negative sample set contains B negative samples, each being the target area of a non-living body's pupil. The larger their numbers, the better the classification effect of the classifier. For the specific manner of feature extraction in steps B3 and B4, reference may be made to step 31. In addition, the first specified classifier and the second specified classifier may be the same classifier or different classifiers; either may include but is not limited to: a support vector machine, a genetic algorithm classifier, a neural network algorithm classifier, a cascade classifier (such as genetic algorithm + SVM), and so on.
Optionally, in step 31, performing feature extraction on the target area to obtain the target feature set may include the following steps:
311. Perform image denoising processing on the target area.
312. Determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value.
313. Perform smoothing processing on the denoised target area according to the target smoothing coefficient.
314. Perform feature extraction on the smoothed target area to obtain the target feature set.
The image denoising processing may include but is not limited to: wavelet-transform denoising, mean-filter denoising, median-filter denoising, morphological-noise-filter denoising, and so on. After the target area is denoised, the target smoothing coefficient corresponding to the first average brightness value may be determined according to a preset correspondence between brightness values and smoothing coefficients, and smoothing processing may then be performed on the denoised target area according to the target smoothing coefficient. To a certain extent, this can improve the image quality of the target area, and feature extraction may then be performed on the smoothed target area to obtain the target feature set. In this way, more features can be extracted from the target area. The target feature set may be a set of multiple feature points.
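Steps 312 and 313 can be sketched as follows. The brightness-to-coefficient table and the blended 4-neighbour smoothing are illustrative stand-ins for the preset correspondence and the filters named above:

```python
import numpy as np

# Illustrative correspondence: (brightness floor, smoothing coefficient).
SMOOTH_TABLE = [(0, 0.2), (128, 0.5), (200, 0.8)]

def smoothing_coefficient(avg_brightness):
    """Step 312: look up the coefficient for the first average brightness."""
    coef = SMOOTH_TABLE[0][1]
    for floor, c in SMOOTH_TABLE:
        if avg_brightness >= floor:
            coef = c
    return coef

def smooth(region, coef):
    """Step 313: blend each pixel with its 4-neighbour mean, weighted by coef."""
    padded = np.pad(region, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return (1 - coef) * region + coef * neigh

region = np.array([[10.0, 10.0], [10.0, 50.0]])  # denoised target area crop
coef = smoothing_coefficient(region.mean())       # mean 20 -> coefficient 0.2
out = smooth(region, coef)
```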
Optionally, the following steps may further be included between step 102 and step 103:
C1. Determine an iris area in the human eye image.
C2. Determine a second average brightness value corresponding to the iris area, and perform the step of judging, according to the target area, whether the human eye image comes from a living body when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold.
The second preset threshold may be set by the user or default to a system value. The iris area may be determined from the human eye image, for example, extracted by image segmentation, and its average brightness value may be determined to obtain the second average brightness value. Naturally, there is a certain difference between the brightness of the iris area and that of the target area. It may then be judged whether the difference between the first average brightness value and the second average brightness value is greater than the second preset threshold; if so, step 103 is performed; if not, the human eye image comes from a non-living body.
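The gate in step C2 fits in a few lines; the 60-unit value is an illustrative assumption for the second preset threshold:

```python
def passes_brightness_gap(first_avg, second_avg, second_threshold=60.0):
    """Proceed with the liveness judgement only if the pupil highlight is
    markedly brighter than the iris area (step C2)."""
    return (first_avg - second_avg) > second_threshold

live_like = passes_brightness_gap(first_avg=220.0, second_avg=90.0)   # gap 130
spoof_like = passes_brightness_gap(first_avg=120.0, second_avg=90.0)  # gap 30
```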
It can be seen that, in the embodiments of the present application, a human eye image is acquired, a target area in the pupil area is determined from the human eye image, where a first average brightness value of the target area is greater than a first preset threshold, and whether the human eye image comes from a living body is judged according to the target area. In this way, the highlight area in the pupil area can be separated from the human eye image, whether the human eye image comes from a living body can be confirmed according to the highlight area, and living body detection can thus be realized. In practical applications, a living human eye, and especially the pupil, reflects light, whereas a non-living human eye does not; living body detection can therefore be performed according to this feature, thereby realizing iris living body detection.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an iris living body detection method provided by an embodiment of the present application, applied to an electronic device including a camera, a fill light, and an application processor AP. For physical and structural diagrams of the electronic device, reference may be made to FIG. 1A to FIG. 1C. The iris living body detection method includes the following steps.
201. When the ambient brightness is lower than a preset brightness threshold, activate the fill light and adjust the brightness of the fill light.
The preset brightness threshold may be set by the user or default to a system value. An ambient light sensor may be used to detect the ambient brightness, and the fill light may be turned on to assist the iris recognition device or the face recognition device in acquiring a human eye image. The brightness of the fill light may then be adjusted; specifically, a correspondence between ambient brightness and fill-light adjustment coefficients may be preset, so that once the ambient brightness is determined, the brightness of the fill light can be adjusted accordingly. The camera can then be controlled to take a picture to obtain an output image, from which the human eye image is acquired.
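The preset correspondence between ambient brightness and fill-light adjustment described in step 201 might look like the table lookup below; all thresholds and duty-cycle levels are illustrative assumptions:

```python
# Illustrative correspondence: (ambient-lux floor, fill-light duty cycle).
FILL_LEVELS = [(0, 1.0), (50, 0.6), (120, 0.3)]
BRIGHTNESS_THRESHOLD = 200  # lux; at or above this the fill light stays off

def fill_light_duty(ambient_lux):
    """Map detected ambient brightness to a fill-light drive level."""
    if ambient_lux >= BRIGHTNESS_THRESHOLD:
        return 0.0  # bright enough: fill light not activated
    duty = FILL_LEVELS[0][1]
    for floor, level in FILL_LEVELS:
        if ambient_lux >= floor:
            duty = level
    return duty

dark = fill_light_duty(10.0)    # very dark: full drive
dim = fill_light_duty(80.0)     # dim: partial drive
bright = fill_light_duty(300.0) # bright: off
```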
202. Control the camera to take a picture and acquire a human eye image.
203. Determine, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold.
204. Judge, according to the target area, whether the human eye image comes from a living body.
For detailed descriptions of steps 202 to 204, reference may be made to the corresponding steps of the living body detection method described in FIG. 1D, and details are not repeated here.
It can be seen that, in the embodiments of the present application, when the ambient brightness is lower than the preset brightness threshold, the fill light is activated and its brightness is adjusted, the camera is controlled to take a picture, and a human eye image is acquired. A target area in the pupil area is determined from the human eye image, where a first average brightness value of the target area is greater than a first preset threshold, and whether the human eye image comes from a living body is judged according to the target area. In this way, the highlight area in the pupil area can be separated from the human eye image, and whether the human eye image comes from a living body can be confirmed according to the highlight area, thereby realizing living body detection. As such, the method can be applied in a dark-vision environment and still realize living body detection. In practical applications, a living human eye, and especially the pupil, reflects light, whereas a non-living human eye does not; living body detection can therefore be performed according to this feature, thereby realizing iris living body detection.
Referring to FIG. 3, FIG. 3 shows an electronic device provided by an embodiment of the present application, including an application processor AP and a memory; the electronic device may further include a camera and a fill light, as well as one or more programs stored in the memory and configured to be executed by the AP, the programs including instructions for performing the following steps:
acquiring a human eye image;
determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
judging, according to the target area, whether the human eye image comes from a living body.
In a possible example, in terms of judging, according to the target area, whether the human eye image comes from a living body, the programs include instructions for performing the following steps:
performing feature extraction on the target area to obtain a target feature set;
training the target feature set by using a preset living body detection classifier to obtain a training result, and judging, according to the training result, whether the human eye image comes from a living body.
In a possible example, in terms of performing feature extraction on the target area to obtain the target feature set, the programs include instructions for performing the following steps:
performing image denoising processing on the target area;
determining, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
performing smoothing processing on the denoised target area according to the target smoothing coefficient;
performing feature extraction on the smoothed target area to obtain the target feature set.
In a possible example, the programs further include instructions for performing the following steps:
determining an iris area in the human eye image;
determining a second average brightness value corresponding to the iris area, and performing the step of judging, according to the target area, whether the human eye image comes from a living body when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold.
In a possible example, the electronic device is provided with a fill light, and the programs further include instructions for performing the following step:
when the ambient brightness is lower than a preset brightness threshold, activating the fill light, adjusting the brightness of the fill light, and controlling the camera to take a picture.
In a possible example, in terms of determining the target area in the pupil area from the human eye image, the programs include instructions for performing the following steps:
determining a target pixel point corresponding to the maximum brightness value in the pupil area;
selecting an area within a preset radius range centered on the target pixel point as the target area.
Referring to FIG. 4A, FIG. 4A is a schematic structural diagram of a living body detecting apparatus provided by this embodiment. The living body detecting apparatus is applied to an electronic device including a camera and an application processor AP, and includes an acquiring unit 401, a first determining unit 402, and a judging unit 403, where
the acquiring unit 401 is configured to control the camera to acquire a human eye image;
the first determining unit 402 is configured to determine, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
the judging unit 403 is configured to judge, according to the target area, whether the human eye image comes from a living body.
Optionally, as shown in FIG. 4B, which shows the detailed structure of the judging unit 403 of the living body detecting apparatus described in FIG. 4A, the judging unit 403 may include a first extraction module 4031 and a training module 4032, as follows:
the first extraction module 4031 is configured to perform feature extraction on the target area to obtain a target feature set;
the training module 4032 is configured to train the target feature set by using a preset living body detection classifier to obtain a training result, and judge, according to the training result, whether the human eye image comes from a living body.
Optionally, as shown in FIG. 4C, which shows the detailed structure of the first extraction module 4031 of the judging unit 403 described in FIG. 4B, the first extraction module 4031 may include a denoising module 510, a determining module 520, a processing module 530, and a second extraction module 540, as follows:
the denoising module 510 is configured to perform image denoising processing on the target area;
the determining module 520 is configured to determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
the processing module 530 is configured to perform smoothing processing on the denoised target area according to the target smoothing coefficient;
the second extraction module 540 is configured to perform feature extraction on the smoothed target area to obtain the target feature set.
Optionally, as shown in FIG. 4D, which is a modified structure of the living body detecting apparatus described in FIG. 4A, the apparatus may further include a second determining unit 404, as follows:
the second determining unit 404 is configured to determine an iris area in the human eye image;
the second determining unit 404 is further configured to determine a second average brightness value corresponding to the iris area, and when the difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, the judging unit 403 performs the step of judging, according to the target area, whether the human eye image comes from a living body.
Optionally, the electronic device is provided with a fill light. As shown in FIG. 4E, which is a modified structure of the living body detecting apparatus described in FIG. 4A, the apparatus may further include a starting unit 405 and an adjusting unit 406, as follows:
the starting unit 405 is configured to activate the fill light when the ambient brightness is lower than a preset brightness threshold;
the adjusting unit 406 is configured to adjust the brightness of the fill light, after which the acquiring unit 401 controls the camera to take a picture and acquires a human eye image.
Optionally, in terms of determining the target area in the pupil area from the human eye image, the first determining unit 402 is specifically configured to:
determine a target pixel point corresponding to the maximum brightness value in the pupil area;
select an area within a preset radius range centered on the target pixel point as the target area.
It can be seen that the living body detecting apparatus described in the embodiments of the present application acquires a human eye image, determines, from the human eye image, a target area in the pupil area whose first average brightness value is greater than a first preset threshold, and judges, according to the target area, whether the human eye image comes from a living body. In this way, the highlight area in the pupil area can be separated from the human eye image, and whether the human eye image comes from a living body can be confirmed according to the highlight area, thereby realizing living body detection. In practical applications, a living human eye, and especially the pupil, reflects light, whereas a non-living human eye does not; living body detection can therefore be performed according to this feature, thereby realizing iris living body detection.
It can be understood that the functions of the program modules of the living body detecting apparatus of this embodiment may be specifically implemented according to the methods in the foregoing method embodiments; for the specific implementation process, reference may be made to the related descriptions of the foregoing method embodiments, and details are not repeated here.
An embodiment of the present application further provides another electronic device. As shown in FIG. 5, for convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (personal digital assistant), a POS (point of sales) terminal, an in-vehicle computer, and the like. The following takes a mobile phone as an example of the electronic device.
FIG. 5 is a block diagram of part of the structure of a mobile phone related to the electronic device provided by an embodiment of the present application. Referring to FIG. 5, the mobile phone includes: a radio frequency (RF) circuit 910, a memory 920, an input unit 930, a sensor 950, an audio circuit 960, a wireless fidelity (WiFi) module 970, an application processor AP980, a power supply 990, and other components. Those skilled in the art can understand that the mobile phone structure shown in FIG. 5 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or arrange components differently.
The components of the mobile phone are specifically introduced below with reference to FIG. 5:
The input unit 930 may be configured to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 930 may include a touch display screen 933, an iris recognition device 931, and other input devices 932. For the structure of the iris recognition device 931, reference may be made to FIG. 1A to FIG. 1C, and details are not repeated here. The input unit 930 may also include other input devices 932. Specifically, the other input devices 932 may include but are not limited to one or more of physical buttons, function keys (such as volume control keys and power keys), a trackball, a mouse, a joystick, a camera, a fill light, and the like.
The application processor AP980 is configured to perform the following operations:
acquiring a human eye image;
determining, from the human eye image, a target area in the pupil area, where a first average brightness value of the target area is greater than a first preset threshold;
judging, according to the target area, whether the human eye image comes from a living body.
The AP980 is the control center of the mobile phone. It connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 920 and invoking data stored in the memory 920, thereby monitoring the mobile phone as a whole. Optionally, the AP980 may include one or more processing units. Preferably, the AP980 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the AP980.
In addition, the memory 920 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The RF circuit 910 may be used to receive and send information. Generally, the RF circuit 910 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 910 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to global system of mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), and the like.
The mobile phone may further include at least one sensor 950, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the touch display screen according to the brightness of the ambient light, and the proximity sensor can turn off the touch display screen and/or the backlight when the mobile phone is moved to the ear. As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, the magnitude and direction of gravity; it can be used for applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the mobile phone, and details are not repeated here.
The audio circuit 960, the speaker 961, and the microphone 962 can provide an audio interface between the user and the mobile phone. The audio circuit 960 can convert received audio data into an electrical signal and transmit it to the speaker 961, which converts it into a sound signal for playback; on the other hand, the microphone 962 converts a collected sound signal into an electrical signal, which the audio circuit 960 receives and converts into audio data. After the audio data is processed by the AP980, it is sent via the RF circuit 910 to, for example, another mobile phone, or output to the memory 920 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 970, the mobile phone can help users send and receive e-mail, browse web pages, access streaming media, and so on, providing wireless broadband Internet access. Although FIG. 5 shows the WiFi module 970, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
The mobile phone also includes a power supply 990 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the AP980 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not repeated here.
In the foregoing embodiments shown in FIG. 1D and FIG. 2, each step of the method flow can be implemented based on the structure of this mobile phone.
In the foregoing embodiments shown in FIG. 3 and FIG. 4A to FIG. 4E, each unit function can be implemented based on the structure of this mobile phone.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program causing a computer to execute some or all of the steps of any living body detection method described in the foregoing method embodiments.
An embodiment of the present application further provides a computer program product, the computer program product including a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps of any living body detection method described in the foregoing method embodiments.
It should be noted that, for brevity, the foregoing method embodiments are each expressed as a series of action combinations; however, those skilled in the art should know that the present application is not limited by the described order of actions, because according to the present application, certain steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the foregoing embodiments can be completed by a program instructing related hardware; the program may be stored in a computer-readable memory, which may include a flash drive, a ROM, a RAM, a magnetic disk, an optical disk, and the like.
The embodiments of the present application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present application. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (20)

  1. A living body detection method, characterized in that the method comprises:
    acquiring a human eye image;
    determining, from the human eye image, a target area in the pupil area, wherein a first average brightness value of the target area is greater than a first preset threshold;
    judging, according to the target area, whether the human eye image comes from a living body.
  2. The method according to claim 1, characterized in that the judging, according to the target area, whether the human eye image comes from a living body comprises:
    performing feature extraction on the target area to obtain a target feature set;
    training the target feature set by using a preset living body detection classifier to obtain a training result, and judging, according to the training result, whether the human eye image comes from a living body.
  3. The method according to claim 2, characterized in that the performing feature extraction on the target area to obtain a target feature set comprises:
    performing image denoising processing on the target area;
    determining, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
    performing smoothing processing on the denoised target area according to the target smoothing coefficient;
    performing feature extraction on the smoothed target area to obtain the target feature set.
  4. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
    determining an iris area in the human eye image;
    determining a second average brightness value corresponding to the iris area, and performing the step of judging, according to the target area, whether the human eye image comes from a living body when a difference between the first average brightness value and the second average brightness value is greater than a second preset threshold.
  5. The method according to any one of claims 1 to 4, characterized in that the electronic device is provided with a fill light, and the method further comprises:
    when the ambient brightness is lower than a preset brightness threshold, activating the fill light, adjusting the brightness of the fill light, and taking a picture.
  6. The method according to any one of claims 1 to 5, characterized in that the determining, from the human eye image, a target area in the pupil area comprises:
    determining a target pixel point corresponding to the maximum brightness value in the pupil area;
    selecting an area within a preset radius range centered on the target pixel point as the target area.
  7. An electronic device, characterized by comprising a camera and an application processor AP, wherein
    the camera is configured to acquire a human eye image;
    the AP is configured to determine, from the human eye image, a target area in the pupil area, wherein a first average brightness value of the target area is greater than a first preset threshold;
    the AP is further configured to judge, according to the target area, whether the human eye image comes from a living body.
  8. The electronic device according to claim 7, characterized in that, in terms of judging, according to the target area, whether the human eye image comes from a living body, the AP is specifically configured to:
    perform feature extraction on the target area to obtain a target feature set; train the target feature set by using a preset living body detection classifier to obtain a training result, and judge, according to the training result, whether the human eye image comes from a living body.
  9. The electronic device according to claim 8, characterized in that, in terms of performing feature extraction on the target area to obtain the target feature set, the AP is specifically configured to:
    perform image denoising processing on the target area; determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value; perform smoothing processing on the denoised target area according to the target smoothing coefficient; and perform feature extraction on the smoothed target area to obtain the target feature set.
  10. The electronic device according to any one of claims 7 to 9, characterized in that the AP is further specifically configured to:
    determine an iris area in the human eye image; determine a second average brightness value corresponding to the iris area; and perform the step of judging, according to the target area, whether the human eye image comes from a living body when a difference between the first average brightness value and the second average brightness value is greater than a second preset threshold.
  11. The electronic device according to any one of claims 7 to 10, characterized in that the electronic device is provided with a fill light;
    the fill light is specifically configured to be activated when the ambient brightness is lower than a preset brightness threshold, have its brightness adjusted, and control the camera to take a picture.
  12. The electronic device according to any one of claims 7 to 11, characterized in that, in terms of determining the target area in the pupil area from the human eye image, the AP is specifically configured to:
    determine a target pixel point corresponding to the maximum brightness value in the pupil area;
    select an area within a preset radius range centered on the target pixel point as the target area.
  13. A living body detecting apparatus, characterized in that the living body detecting apparatus comprises an acquiring unit, a first determining unit, and a judging unit, wherein
    the acquiring unit is configured to acquire a human eye image;
    the first determining unit is configured to determine, from the human eye image, a target area in the pupil area, wherein a first average brightness value of the target area is greater than a first preset threshold;
    the judging unit is configured to judge, according to the target area, whether the human eye image comes from a living body.
  14. The apparatus according to claim 13, characterized in that the judging unit comprises:
    a first extraction module configured to perform feature extraction on the target area to obtain a target feature set;
    a training module configured to train the target feature set by using a preset living body detection classifier to obtain a training result, and judge, according to the training result, whether the human eye image comes from a living body.
  15. The apparatus according to claim 14, characterized in that the first extraction module comprises:
    a denoising module configured to perform image denoising processing on the target area;
    a determining module configured to determine, according to a correspondence between brightness values and smoothing coefficients, a target smoothing coefficient corresponding to the first average brightness value;
    a processing module configured to perform smoothing processing on the denoised target area according to the target smoothing coefficient;
    a second extraction module configured to perform feature extraction on the smoothed target area to obtain the target feature set.
  16. The apparatus according to any one of claims 13 to 15, characterized in that the apparatus further comprises:
    a second determining unit configured to determine an iris area in the human eye image, and to determine a second average brightness value corresponding to the iris area, wherein, when a difference between the first average brightness value and the second average brightness value is greater than a second preset threshold, the judging unit performs the step of judging, according to the target area, whether the human eye image comes from a living body.
  17. The apparatus according to any one of claims 13 to 16, characterized in that the apparatus further comprises:
    a starting unit configured to activate a fill light when the ambient brightness is lower than a preset brightness threshold;
    an adjusting unit configured to adjust the brightness of the fill light and take a picture.
  18. An electronic device, characterized by comprising: a camera, an application processor AP, and a memory; and one or more programs, the one or more programs being stored in the memory and configured to be executed by the AP, the programs comprising instructions for the method of any one of claims 1 to 6.
  19. A computer-readable storage medium, characterized in that it is used to store a computer program, wherein the computer program causes a computer to execute the method according to any one of claims 1 to 6.
  20. A computer program product, characterized in that the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to execute the method according to any one of claims 1 to 6.
PCT/CN2018/094964 2017-07-14 2018-07-09 Living body detection method and related products WO2019011206A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710576784.6A CN107423699B (zh) 2017-07-14 2017-07-14 Living body detection method and related products
CN201710576784.6 2017-07-14

Publications (1)

Publication Number Publication Date
WO2019011206A1 true WO2019011206A1 (zh) 2019-01-17

Family

ID=60426898

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/094964 WO2019011206A1 (zh) 2017-07-14 2018-07-09 活体检测方法及相关产品

Country Status (2)

Country Link
CN (1) CN107423699B (zh)
WO (1) WO2019011206A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570873A (zh) * 2019-09-12 2019-12-13 Oppo广东移动通信有限公司 Voiceprint wake-up method and apparatus, computer device, and storage medium
CN111079688A (zh) * 2019-12-27 2020-04-28 中国电子科技集团公司第十五研究所 Infrared-image-based living body detection method in face recognition
CN112507923A (zh) * 2020-12-16 2021-03-16 平安银行股份有限公司 Certificate recapture detection method and apparatus, electronic device, and medium
CN112668396A (zh) * 2020-12-03 2021-04-16 浙江大华技术股份有限公司 Two-dimensional false target recognition method, apparatus, device, and medium
CN112906440A (zh) * 2019-12-04 2021-06-04 深圳君正时代集成电路有限公司 Anti-cracking method for living body recognition
CN116030042A (zh) * 2023-02-24 2023-04-28 智慧眼科技股份有限公司 Diagnostic apparatus, method, device, and storage medium for ocular inspection by doctors

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423699B (zh) 2017-07-14 2019-09-13 Oppo广东移动通信有限公司 Living body detection method and related products
CN107992866B (zh) 2017-11-15 2018-06-29 上海聚虹光电科技有限公司 Living body detection method based on eye reflection points in a video stream
CN109190522B (zh) 2018-08-17 2021-05-07 浙江捷尚视觉科技股份有限公司 Infrared-camera-based living body detection method
CN111507201B (zh) 2020-03-27 2023-04-18 北京万里红科技有限公司 Human eye image processing method, human eye recognition method, apparatus and storage medium
CN112052726A (zh) 2020-07-28 2020-12-08 北京极豪科技有限公司 Image processing method and apparatus
CN112149580B (zh) 2020-09-25 2024-05-14 江苏邦融微电子有限公司 Image processing method for distinguishing a real face from a photograph
CN114973426B (zh) 2021-06-03 2023-08-15 中移互联网有限公司 Living body detection method, apparatus and device
CN117952859B (zh) 2024-03-27 2024-06-07 吉林大学 Thermal-imaging-based pressure injury image optimization method and ***

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6785406B1 (en) * 1999-07-19 2004-08-31 Sony Corporation Iris authentication apparatus
CN1842296A (zh) * 2004-08-03 2006-10-04 松下电器产业株式会社 Living body discrimination apparatus, authentication apparatus using the same, and living body discrimination method
CN100511266C (zh) * 2003-07-04 2009-07-08 松下电器产业株式会社 Living eye judgment method and living eye judgment apparatus
CN107423699A (zh) * 2017-07-14 2017-12-01 广东欧珀移动通信有限公司 Living body detection method and related products

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002312772A (ja) * 2001-04-13 2002-10-25 Oki Electric Ind Co Ltd Personal identification apparatus and eye forgery judgment method
CN105138996A (zh) * 2015-09-01 2015-12-09 北京上古视觉科技有限公司 Iris recognition *** with living body detection function

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570873A (zh) * 2019-09-12 2019-12-13 Oppo广东移动通信有限公司 Voiceprint wake-up method and apparatus, computer device and storage medium
CN112906440A (zh) * 2019-12-04 2021-06-04 深圳君正时代集成电路有限公司 Anti-cracking method for living body recognition
CN111079688A (zh) * 2019-12-27 2020-04-28 中国电子科技集团公司第十五研究所 Infrared-image-based living body detection method for face recognition
CN112668396A (zh) * 2020-12-03 2021-04-16 浙江大华技术股份有限公司 Two-dimensional false target recognition method, apparatus, device and medium
CN112507923A (zh) * 2020-12-16 2021-03-16 平安银行股份有限公司 Document recapture detection method and apparatus, electronic device and medium
CN112507923B (zh) * 2020-12-16 2023-10-31 平安银行股份有限公司 Document recapture detection method and apparatus, electronic device and medium
CN116030042A (zh) * 2023-02-24 2023-04-28 智慧眼科技股份有限公司 Diagnostic apparatus, method, device and storage medium for doctors' eye-based diagnosis
CN116030042B (zh) * 2023-02-24 2023-06-16 智慧眼科技股份有限公司 Diagnostic apparatus, method, device and storage medium for doctors' eye-based diagnosis

Also Published As

Publication number Publication date
CN107423699B (zh) 2019-09-13
CN107423699A (zh) 2017-12-01

Similar Documents

Publication Publication Date Title
WO2019011206A1 (zh) Living body detection method and related products
WO2019011099A1 (zh) Iris living body detection method and related products
CN108594997B (zh) Gesture skeleton construction method, apparatus, device and storage medium
WO2019052329A1 (zh) Face recognition method and related products
CN107172364B (zh) Image exposure compensation method and apparatus, and computer-readable storage medium
US11074466B2 (en) Anti-counterfeiting processing method and related products
WO2019020014A1 (zh) Unlocking control method and related products
CN107590461B (zh) Face recognition method and related products
RU2731370C1 (ru) Method for recognizing a living organism and terminal device
US11055547B2 (en) Unlocking control method and related products
CN107657218B (zh) Face recognition method and related products
WO2019011098A1 (zh) Unlocking control method and related products
CN107403147B (zh) Iris living body detection method and related products
WO2018233480A1 (zh) Photo recommendation method and related products
CN107451454B (zh) Unlocking control method and related products
WO2019015418A1 (zh) Unlocking control method and related products
WO2019001254A1 (zh) Iris living body detection method and related products
CN107506697B (zh) Anti-counterfeiting processing method and related products
CN107613550B (zh) Unlocking control method and related products
US10706282B2 (en) Method and mobile terminal for processing image and storage medium
CN110807769A (zh) Image display control method and apparatus
WO2019015574A1 (zh) Unlocking control method and related products
US11200437B2 (en) Method for iris-based living body detection and related products
WO2019015432A1 (zh) Method for recognizing a living iris and related products
CN110930372A (zh) Image processing method, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18832879
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 18832879
    Country of ref document: EP
    Kind code of ref document: A1