WO2019127262A1 - Cloud-based face liveness detection method, electronic device and program product - Google Patents

Cloud-based face liveness detection method, electronic device and program product Download PDF

Info

Publication number
WO2019127262A1
WO2019127262A1 (PCT/CN2017/119543)
Authority
WO
WIPO (PCT)
Prior art keywords
face
user
image
distance
human face
Prior art date
Application number
PCT/CN2017/119543
Other languages
English (en)
French (fr)
Inventor
刘兆祥
廉士国
王敏
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201780002701.0A priority Critical patent/CN108124486A/zh
Priority to PCT/CN2017/119543 priority patent/WO2019127262A1/zh
Publication of WO2019127262A1 publication Critical patent/WO2019127262A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Definitions

  • the present invention relates to the field of face detection technologies, and in particular, to a cloud-based face liveness detection method, an electronic device, and a program product.
  • compared with other biometric technologies, face recognition acquires the face directly through a camera and completes recognition in a contactless manner. It is convenient and fast, but it also brings some information security issues; for example, a face photo or a face video can deceive the face recognition system.
  • the embodiment of the present application provides a cloud-based face liveness detection method, an electronic device, and a program product, which are mainly used for blind-person navigation.
  • in a first aspect, the embodiment of the present application provides a cloud-based face liveness detection method, including:
  • continuously collecting a plurality of first face images of a user; after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images; and if micro-motions exist, confirming that the user passes face liveness detection;
  • an embodiment of the present application provides an electronic device, where the electronic device includes:
  • a memory and one or more processors; the memory is coupled to the processor via a communication bus; the processor is configured to execute instructions in the memory; the storage medium stores instructions for performing the steps of the method of the first aspect.
  • an embodiment of the present application provides a computer program product for use in conjunction with an electronic device including a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein; the computer program mechanism includes instructions for performing the various steps in the method of the first aspect described above.
  • a plurality of first face images of the user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing liveness detection on the user through both live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
  • FIG. 1 is a schematic flowchart of a cloud-based face liveness detection method according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of key facial feature parts in the embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a deep neural network for micro-expression recognition according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of another cloud-based face liveness detection method in the embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • face recognition is being applied more and more widely, but face recognition has a core security problem: face spoofing; for example, a face recognition system can be deceived by a face photo, a face video, or a 3D face mask.
  • the embodiment of the present application provides a cloud-based face liveness detection method that continuously collects a plurality of first face images of a user, determines that each first face image is a live image, and then identifies whether micro-motions exist in the plurality of consecutive first face images. If micro-motions exist, the user is confirmed to pass face liveness detection. Performing liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
  • referring to FIG. 1, the cloud-based face liveness detection method provided by this embodiment includes:
  • common spoofing attacks on face recognition usually use printed photos, phone screens, computer screens, or 3D face masks that present a face image or a face video; these spoofing tools usually differ in appearance features from a normal live face.
  • to better recognize these differences, this proposal first constrains the distance between the user (i.e., the face) and the recognition device (e.g., the camera), and discriminates these feature differences while keeping the camera and the face at a suitable distance.
  • Step 1: Acquire a second face image of the user.
  • the second face image is an image used for adjusting the user's distance, and is different from the images used for subsequent face recognition.
  • Step 2: Acquire the face area in the second face image.
  • Step 3: Determine the user distance according to the face area.
  • the user distance may be determined according to the proportion of the face area in the second face image. Alternatively, the distance between preset facial parts may be extracted from the face area, and the user distance determined according to the ratio of that distance to the width and height of the second face image.
  • Step 4: If the user distance matches the distance requirement, determine that the user meets the distance requirement.
  • Step 5: If the user distance does not match the distance requirement, instruct the user to move so as to meet the distance requirement.
  • a prompt (such as a voice prompt or a text prompt) can be sent to the user to guide the user in adjusting position, posture, and the like.
  • after the adjustment, Steps 1 to 3 are performed again to determine whether the adjusted distance matches the distance requirement; if it matches, Step 4 is performed; otherwise, Step 5 is performed again. This cycle continues until the user meets the distance requirement.
  • face detection is performed first, and a face area is obtained.
  • the face distance can be approximated from the size of the face area and its proportion in the image. If the proportion is within a suitable range, the face is considered to be within the optimal distance; otherwise, the user is asked to move closer or farther according to the magnitude of the ratio.
  • the 2D coordinates of the key points can be obtained, and the 3D pose Euler angles and 3D translation (Tx, Ty, Tz) of the face relative to the camera are then obtained with the solvePnP algorithm; the 3D distance is further obtained, and it is judged whether the distance is within the suitable range.
  • the pose of the user's face (such as roll, pitch, yaw) and its position in the 2D image (left, right, up, down, etc.) may also be reported to the user according to the position and pose detection results described above.
  • roll is rotation around the Z axis, also called the roll angle.
  • pitch is rotation around the X axis, also called the pitch angle.
  • yaw is rotation around the Y axis, also called the yaw angle.
  • after confirming that the user meets the distance requirement, continuous, multiple face images of the user are collected, i.e., the first face images.
  • the first face images are the basis for performing face liveness detection on the user.
  • if any first face image is determined to be a live image,
  • that first face image is stored in the image sequence.
  • the image sequence here is initially empty. When a first face image is determined to be a live image, it is stored in the image sequence, and the next face image is then checked for being a live image; if the next image is a live image, it is stored in the image sequence as well. This loop repeats until all first face images have undergone live-image detection. If a non-live image is found during detection, the face images in the image sequence are cleared.
  • if any first face image is determined not to be a live image, the process is terminated, the image sequence is cleared, and the user fails face liveness detection.
  • non-live images include, but are not limited to: photos (such as printed photos, photos on a phone screen, photos on a computer screen), videos (such as videos on a phone screen, videos on a computer screen), and face masks (such as 3D face masks).
  • Filtering of a single image can be achieved by step 103.
  • the single image is classified and discriminated by a method of machine learning.
  • for example, a deep-learning-based CNN (Convolutional Neural Network) is used for classification and discrimination, such as the very popular resnet classification network.
  • each first face image is classified with the successfully trained network model and weights: the class with the largest output probability is taken as the result, and a threshold can be set for further discrimination, for example requiring the maximum probability to be greater than a set value.
  • if the classification result is a normal face, step 104 is performed for image-sequence classification and discrimination. If the classification result is any other category, the image sequence is cleared and the entire detection process is restarted.
  • step 105 is performed.
  • if no micro-motions exist, the process is terminated, the image sequence is cleared, and the user fails face liveness detection.
  • after step 103, printed photos, phone screens, computer screens, and 3D face masks presenting a face image or a face video can be recognized; however, relying only on this recognition result to determine whether the user passes face liveness detection may still lead to misjudgments.
  • specifically, image-sequence classification filtering is performed through step 104.
  • for example, when the image sequence reaches a certain length, image-sequence classification filtering is performed.
  • the image sequence is input into a deep neural network for direct classification and discrimination, and the output has two categories: normal face and abnormal face.
  • the deep neural network can be based directly on a 3D convolutional neural network, or a general 2D convolutional neural network such as resnet can be used; in the latter case the network input is the stacked sequence image data, as shown in FIG. 3.
  • a general resnet classification network takes 1-channel or 3-channel input; after the image sequence is stacked (taking color images as an example), the input is equivalent to N*3-channel data.
  • N is the length of the image sequence input to the deep neural network, i.e., the number of first face images in the image sequence input to the deep neural network.
  • for example, if the image sequence is input directly into the deep neural network for classification, N is the number of all first face images in the image sequence.
  • the 3D convolutional neural network or the 2D convolutional neural network can then be trained. After training, the input image sequence is discriminated directly with the trained model and weights: the class with the largest output probability is taken as the result, and a threshold can also be set for further filtering.
  • when the image sequence is input directly into the deep neural network for classification, if the final output is a normal face, it is determined that micro-motions exist and step 105 is performed; otherwise, it is determined that no micro-motions exist, the process is terminated, the image sequence is cleared, the user fails face liveness detection, and the entire detection process is restarted.
  • first, face distance detection is performed to remind the user to keep a suitable distance from the camera for subsequent liveness detection; then, a single face image is collected and classified as printed photo / phone screen / computer screen / 3D face mask / normal face, filtering out abnormal faces; finally, the sequence of consecutive images that passed face filtering is classified to judge whether it is a real person.
  • a plurality of first face images of the user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
  • the embodiment of the present application further provides an electronic device.
  • the electronic device includes:
  • the storage medium stores instructions for performing the following steps:
  • after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images;
  • before the plurality of first face images are continuously collected, the method further includes:
  • determining that the user meets the distance requirement includes:
  • determining the user distance according to the face area including:
  • the distance between preset facial parts is extracted from the face area, and the user distance is determined according to the ratio of that distance to the width and height of the second face image.
  • the method further includes:
  • the user is instructed to move to meet the distance requirement.
  • the method further includes:
  • for any first face image: if it is determined to be a live image, it is stored in the image sequence; if it is determined not to be a live image, the process is terminated, the image sequence is cleared, and the user fails face liveness detection.
  • the non-live images include: photos, videos, face masks.
  • the micro-motions include micro-changes of facial organs, micro-changes of facial muscles, and micro-movements of the face.
  • the method further includes:
  • if no micro-motions exist, the process is terminated, the first face images in the image sequence are cleared, and the user fails face liveness detection.
  • a plurality of first face images of the user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
  • an embodiment of the present application further provides a computer program product for use in conjunction with an electronic device including a display, the computer program product comprising a computer readable storage medium and a computer program mechanism embedded therein; the computer program mechanism includes instructions for performing the following steps:
  • after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images;
  • before the plurality of first face images are continuously collected, the method further includes:
  • determining that the user meets the distance requirement includes:
  • determining the user distance according to the face area including:
  • the distance between preset facial parts is extracted from the face area, and the user distance is determined according to the ratio of that distance to the width and height of the second face image.
  • the method further includes:
  • the user is instructed to move to meet the distance requirement.
  • the method further includes:
  • for any first face image: if it is determined to be a live image, it is stored in the image sequence; if it is determined not to be a live image, the process is terminated, the image sequence is cleared, and the user fails face liveness detection.
  • the non-live images include: photos, videos, face masks.
  • the micro-motions include micro-changes of facial organs, micro-changes of facial muscles, and micro-movements of the face.
  • the method further includes:
  • if no micro-motions exist, the process is terminated, the first face images in the image sequence are cleared, and the user fails face liveness detection.
  • a plurality of first face images of the user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A cloud-based face liveness detection method, electronic device, and program product, applied in the field of face detection technology. The method continuously collects a plurality of first face images of a user; after determining that each first face image is a live image, it identifies whether micro-motions exist in the plurality of consecutive first face images; if micro-motions exist, the user is confirmed to pass face liveness detection. Based on the cloud, a plurality of first face images of the user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing face liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.

Description

Cloud-based face liveness detection method, electronic device, and program product
Technical Field
The present application relates to the field of face detection technology, and in particular to a cloud-based face liveness detection method, an electronic device, and a program product.
Background
With the development of deep learning technology, the human face has become a new means of identity verification.
Compared with other biometric recognition technologies, face recognition acquires the face directly through a camera and completes the recognition process in a contactless manner, which is convenient and fast; however, it also brings some information security problems, for example, a face photo or a face video can be used to deceive a face recognition system.
Summary of the Invention
The embodiments of the present application provide a cloud-based face liveness detection method, an electronic device, and a program product, which are mainly used for blind-person navigation.
In a first aspect, an embodiment of the present application provides a cloud-based face liveness detection method, including:
continuously collecting a plurality of first face images of a user;
after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images;
if micro-motions exist, confirming that the user passes face liveness detection.
In a second aspect, an embodiment of the present application provides an electronic device, the electronic device including:
a memory and one or more processors, the memory connected to the processor via a communication bus, the processor configured to execute instructions in the memory; the storage medium stores instructions for performing the steps of the method of the first aspect.
In a third aspect, an embodiment of the present application provides a computer program product for use in combination with an electronic device including a display, the computer program product including a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the steps of the method of the first aspect.
The beneficial effects are as follows:
In the embodiments of the present application, a plurality of first face images of a user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing face liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
Brief Description of the Drawings
Specific embodiments of the present application are described below with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a cloud-based face liveness detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of key facial feature parts according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a deep neural network for micro-expression recognition according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another cloud-based face liveness detection method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all embodiments. Where no conflict arises, the embodiments of the present application and features in the embodiments may be combined with each other.
Face recognition is currently applied more and more widely, but face recognition has a core security problem: face spoofing; for example, a face recognition system can be deceived by a face photo, a face video, or a 3D face mask.
To solve the above face spoofing problem and improve the security of face recognition systems, the embodiments of the present application provide a cloud-based face liveness detection method that continuously collects a plurality of first face images of a user; after determining that each first face image is a live image, identifies whether micro-motions exist in the plurality of consecutive first face images; and, if micro-motions exist, confirms that the user passes face liveness detection. Performing face liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
Referring to FIG. 1, the cloud-based face liveness detection method provided by this embodiment includes:
101. Determine that the user meets the distance requirement.
Common spoofing attacks on face recognition usually use printed photos, phone screens, computer screens, or 3D face masks that present a face image or a face video; these spoofing tools usually differ in appearance features from a normal live face. To better recognize these differences, this proposal first constrains the distance between the user (i.e., the face) and the recognition device (e.g., the camera), and discriminates these feature differences while keeping the camera and the face at a suitable distance.
Implementations for determining that the user meets the distance requirement include, but are not limited to:
Step 1: Acquire a second face image of the user.
The second face image is an image used for adjusting the user's distance and is different from the images used for subsequent face recognition.
Step 2: Acquire the face area in the second face image.
Step 3: Determine the user distance according to the face area.
Specifically, the user distance may be determined according to the proportion of the face area in the second face image. Alternatively, the distance between preset facial parts may be extracted from the face area, and the user distance determined according to the ratio of that distance to the width and height of the second face image.
Step 4: If the user distance matches the distance requirement, determine that the user meets the distance requirement.
Step 5: If the user distance does not match the distance requirement, instruct the user to move so as to meet the distance requirement.
Specifically, a prompt (e.g., a voice prompt or a text prompt) may be sent to the user to guide the user in adjusting position, posture, and the like. After the adjustment, Steps 1 to 3 are performed again to determine whether the adjusted distance matches the distance requirement; if it matches, Step 4 is performed; otherwise, Step 5 is performed again. This cycle continues until the user meets the distance requirement.
For example, when this embodiment starts executing, face detection is performed first to obtain the face area. The face distance can be approximately estimated from the size of the face area and its proportion in the image: if the proportion is within a suitable range, the face is considered to be within the optimal distance; otherwise, the user is reminded to move closer or farther according to the magnitude of the ratio. In addition, some key facial feature parts (points) can be detected, as shown in FIG. 2, and the distance judged from the ratio of the distance between these key parts (points) to the image width or height; for example, the two eyes are detected first, and the ratio of the distance between the eye centers to the image width is used.
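The ratio-based distance check described above can be sketched as follows (a minimal illustration only; the bounding-box format, the threshold values, and the function name are assumptions of this sketch, not values from the patent):

```python
def distance_hint(face_box, image_size, lo=0.15, hi=0.45):
    """Classify the user's distance from the proportion the face
    bounding box occupies in the frame.

    face_box:   (x, y, w, h) of the detected face, in pixels
    image_size: (width, height) of the frame, in pixels
    lo, hi:     assumed bounds of the "suitable proportion" range
    Returns "move_closer", "move_away", or "ok".
    """
    _, _, w, h = face_box
    img_w, img_h = image_size
    proportion = (w * h) / (img_w * img_h)  # share of the frame covered by the face
    if proportion < lo:
        return "move_closer"   # face too small: user is too far away
    if proportion > hi:
        return "move_away"     # face too large: user is too near
    return "ok"                # within the optimal distance range
```

The same structure extends to the keypoint variant, with `proportion` replaced by the ratio of the interocular distance to the image width.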
When computing the user distance, the 2D coordinates of the key points can be obtained, and the 3D pose Euler angles and 3D translation (Tx, Ty, Tz) of the face relative to the camera can then be obtained with the solvePnP algorithm; the 3D distance is further obtained, and it is then judged whether the distance is within the suitable range.
After judging whether the distance is within the suitable range, the pose of the user's face (e.g., roll, pitch, yaw) and its position in the 2D image (left, right, up, down, etc.) can also be reported to the user based on the position and pose detection results above.
The user reminder here may be a voice prompt, or a text prompt displayed on the image.
Roll is rotation around the Z axis, also called the roll angle. Pitch is rotation around the X axis, also called the pitch angle. Yaw is rotation around the Y axis, also called the yaw angle.
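As an illustration of the roll/pitch/yaw convention above, the sketch below recovers the three Euler angles from a 3x3 rotation matrix such as the one obtained from the solvePnP rotation vector (e.g., via `cv2.Rodrigues`). This is a sketch under an assumed ZYX rotation order; actual axis conventions vary between camera models and are not specified by the patent:

```python
import numpy as np

def euler_zyx(R):
    """Recover (roll, pitch, yaw) in radians from a 3x3 rotation matrix,
    assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (ZYX order).
    The gimbal-lock case |pitch| = 90 degrees is ignored for brevity."""
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw

def rot_zyx(roll, pitch, yaw):
    """Build a rotation matrix in the same ZYX convention (for testing)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```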
102. Continuously collect a plurality of first face images of the user.
After it is confirmed that the user meets the distance requirement, continuous, multiple face images of the user are collected, namely the first face images. The first face images here are the basis for performing face liveness detection on the user.
103. Determine whether each first face image is a live image.
For any first face image,
if the first face image is determined to be a live image, the first face image is stored in the image sequence.
The image sequence here is initially empty. When a first face image is determined to be a live image, it is stored in the image sequence, and the next face image is then checked for being a live image; if the next image is a live image, it is stored in the image sequence as well. This loop repeats until all first face images have undergone live-image detection. If a non-live image is found during detection, the face images in the image sequence are cleared.
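The sequence-buffer bookkeeping described above can be sketched as follows (a minimal illustration; `is_live` stands in for the single-image classifier of step 103 and is an assumption of this sketch):

```python
def filter_sequence(frames, is_live):
    """Accumulate consecutive live frames into a sequence buffer.

    frames:  iterable of face images
    is_live: callable returning True if a single frame is a live image
    Returns (sequence, passed): the buffered frames, and whether every
    frame survived single-image liveness filtering.
    """
    sequence = []
    for frame in frames:
        if is_live(frame):
            sequence.append(frame)   # live frame joins the sequence
        else:
            sequence.clear()         # non-live frame: clear the buffer
            return sequence, False   # terminate: liveness detection fails
    return sequence, True            # all frames passed single-image filtering
```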
If any first face image is determined not to be a live image, the process is terminated, the image sequence is cleared, and the user fails face liveness detection.
Non-live images include, but are not limited to: photos (e.g., printed photos, photos on a phone screen, photos on a computer screen), videos (e.g., videos on a phone screen, videos on a computer screen), and face masks (e.g., 3D face masks).
Filtering of single images can be achieved through step 103.
Specifically, single images are classified and discriminated by a machine learning method.
For example, a deep-learning-based CNN (Convolutional Neural Network) is used for classification and discrimination, such as the very popular resnet classification network.
First, all kinds of possible spoofing samples are collected for training, for example divided into the categories printed photo / phone screen / computer screen / 3D face mask / normal face.
After the CNN is trained, each first face image is classified with the successfully trained network model and weights: the class with the largest output probability is taken as the result. A threshold can also be set for further discrimination, for example requiring the maximum probability to be greater than a set value.
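The decision rule above (take the class with the largest probability, then apply a confidence threshold) can be sketched as follows (a minimal illustration; the English class names and the 0.8 threshold are assumptions of this sketch, not values from the patent):

```python
import numpy as np

# Assumed five training categories from the description above
CLASSES = ["print_photo", "phone_screen", "computer_screen", "mask_3d", "normal_face"]

def classify(probs, threshold=0.8):
    """Pick the class with the largest output probability, but reject
    the prediction (return None) when the network is not confident
    enough, i.e. the maximum probability is below the set threshold."""
    probs = np.asarray(probs)
    idx = int(np.argmax(probs))
    if probs[idx] < threshold:
        return None          # below threshold: treat as undecided
    return CLASSES[idx]
```

Only frames classified as `normal_face` would proceed to the sequence-level check of step 104.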
If the classification result is a normal face, step 104 is performed for image-sequence classification and discrimination. If the classification result is any other category, the image sequence is cleared and the process returns to restart the entire detection flow.
104. Identify whether micro-motions exist in the plurality of consecutive first face images.
If micro-motions exist, step 105 is performed.
If no micro-motions exist, the process is terminated, the image sequence is cleared, and the user fails face liveness detection.
After step 103 is performed, printed photos, phone screens, computer screens, 3D face masks, and the like presenting a face image or a face video can be recognized; however, relying only on this recognition result to determine whether the user passes face liveness detection may still lead to misjudgments.
Throughout the face recognition process, a person will often make many unintentional micro-motions: for example, slight changes of the eyes and mouth, movement and deformation of facial muscles, or slight shaking of the head. Recognizing these micro-motions can further improve the accuracy of face liveness detection.
Specifically, image-sequence classification filtering is performed through step 104.
For example, when the image sequence reaches a certain length, image-sequence classification filtering is performed: the image sequence is input into a deep neural network for direct classification and discrimination, and the output has two categories, normal face and abnormal face.
The deep neural network can be based directly on a 3D convolutional neural network, or a general 2D convolutional neural network such as resnet can be used; in the latter case the network input is the stacked sequence image data, as shown in FIG. 3.
A general resnet classification network takes 1-channel or 3-channel input; after the image sequence is stacked (taking color images as an example), the input is equivalent to N*3-channel data.
Here N is the length of the image sequence input to the deep neural network, i.e., the number of first face images in the image sequence input to the deep neural network.
For example, if the image sequence is input directly into the deep neural network for classification, N is the number of all first face images in the image sequence.
Then the 3D convolutional neural network or the 2D convolutional neural network is trained on the two classes of collected samples. After training, the input image sequence is discriminated directly with the trained model and weights: the class with the largest output probability is taken as the result, and a threshold can likewise be set for further filtering.
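The N*3-channel stacking described above can be sketched with NumPy (a minimal illustration; the 112x112 frame size is arbitrary, and the channel-first layout is an assumption matching common CNN frameworks, not a detail from the patent):

```python
import numpy as np

def stack_sequence(frames):
    """Stack N color face images of shape (H, W, 3) into a single
    (N*3, H, W) channel-first tensor: the input layout a 2D CNN such
    as resnet would consume after widening its first convolution to
    accept N*3 input channels."""
    chw = [np.transpose(f, (2, 0, 1)) for f in frames]  # each (3, H, W)
    return np.concatenate(chw, axis=0)                  # (N*3, H, W)

# Example: a sequence of N = 8 RGB frames of 112x112 pixels
frames = [np.zeros((112, 112, 3), dtype=np.uint8) for _ in range(8)]
stacked = stack_sequence(frames)
```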
For the case where the image sequence is input directly into the deep neural network for classification and discrimination: if the final output is a normal face, it is determined that micro-motions exist, step 105 is performed, and liveness detection passes; otherwise, it is determined that no micro-motions exist, the process is terminated, the image sequence is cleared, the user fails face liveness detection, and the entire detection process is restarted.
105. Confirm that the user passes face liveness detection.
At this point, the cloud-based face liveness detection method of this embodiment has been fully executed.
The cloud-based face liveness detection method of this embodiment is explained again below with reference to the flow shown in FIG. 4.
First, face distance detection is performed to remind the user to keep a suitable distance from the camera, which facilitates the subsequent liveness detection; then, a single face image is collected and classified as printed photo / phone screen / computer screen / 3D face mask / normal face, filtering out abnormal faces; finally, the sequence of consecutive images that passed face filtering is classified to judge whether it is a real person.
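Put together, the three stages summarized above can be sketched as one pipeline (a high-level illustration only; the helper names `detect_distance_ok`, `classify_single`, and `classify_sequence` are placeholders for the detectors and networks described earlier, not APIs from the patent):

```python
def liveness_pipeline(frames, detect_distance_ok, classify_single, classify_sequence):
    """Three-stage liveness check: distance gate, per-frame spoof
    filtering, then sequence-level micro-motion classification."""
    # Stage 1: distance gate on the first frame (the "second face image")
    if not detect_distance_ok(frames[0]):
        return False                      # user must first adjust distance
    sequence = []
    # Stage 2: single-image classification filters photos/screens/masks
    for frame in frames[1:]:
        if classify_single(frame) != "normal_face":
            return False                  # abnormal face: detection fails
        sequence.append(frame)
    # Stage 3: micro-motion check on the whole filtered sequence
    return classify_sequence(sequence) == "normal_face"
```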
Beneficial effects:
In the embodiments of the present application, a plurality of first face images of a user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing face liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
Based on the same conception, an embodiment of the present application further provides an electronic device. Referring to FIG. 5, the electronic device includes:
a memory 501, one or more processors 502, and a transceiver component 503; the memory, the processors, and the transceiver component 503 are connected via a communication bus (described in the embodiment of the present application as an I/O bus); the storage medium stores instructions for performing the following steps:
continuously collecting a plurality of first face images of a user;
after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images;
if micro-motions exist, confirming that the user passes face liveness detection.
Optionally, before the plurality of first face images are continuously collected, the steps further include:
determining that the user meets a distance requirement.
Optionally, determining that the user meets the distance requirement includes:
acquiring a second face image of the user;
acquiring a face area in the second face image;
determining the user distance according to the face area;
if the user distance matches the distance requirement, determining that the user meets the distance requirement.
Optionally, determining the user distance according to the face area includes:
determining the user distance according to the proportion of the face area in the second face image; or,
extracting the distance between preset facial parts from the face area, and determining the user distance according to the ratio of that distance to the width and height of the second face image.
Optionally, after the user distance is determined according to the face area, the steps further include:
if the user distance does not match the distance requirement, instructing the user to move so as to meet the distance requirement.
Optionally, after the plurality of first face images of the user are continuously collected, the steps further include:
determining whether each first face image is a live image;
for any first face image: if it is determined to be a live image, storing it in an image sequence; if it is determined not to be a live image, terminating the process, clearing the image sequence, and the user fails face liveness detection.
Optionally, the non-live images include: photos, videos, face masks.
Optionally, the micro-motions include micro-changes of facial organs, micro-changes of facial muscles, and micro-movements of the face.
Optionally, after identifying whether micro-motions exist in the plurality of consecutive first face images, the steps further include:
if no micro-motions exist, terminating the process, clearing the first face images in the image sequence, and the user fails face liveness detection.
It is not difficult to understand that, in specific implementations, the electronic device described above does not necessarily need to include the transceiver component 503 in order to achieve the basic purpose of the present application.
Beneficial effects:
In the embodiments of the present application, a plurality of first face images of a user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing face liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
In yet another aspect, an embodiment of the present application further provides a computer program product for use in combination with an electronic device including a display, the computer program product including a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the following steps:
continuously collecting a plurality of first face images of a user;
after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images;
if micro-motions exist, confirming that the user passes face liveness detection.
Optionally, before the plurality of first face images are continuously collected, the steps further include:
determining that the user meets a distance requirement.
Optionally, determining that the user meets the distance requirement includes:
acquiring a second face image of the user;
acquiring a face area in the second face image;
determining the user distance according to the face area;
if the user distance matches the distance requirement, determining that the user meets the distance requirement.
Optionally, determining the user distance according to the face area includes:
determining the user distance according to the proportion of the face area in the second face image; or,
extracting the distance between preset facial parts from the face area, and determining the user distance according to the ratio of that distance to the width and height of the second face image.
Optionally, after the user distance is determined according to the face area, the steps further include:
if the user distance does not match the distance requirement, instructing the user to move so as to meet the distance requirement.
Optionally, after the plurality of first face images of the user are continuously collected, the steps further include:
determining whether each first face image is a live image;
for any first face image: if it is determined to be a live image, storing it in an image sequence; if it is determined not to be a live image, terminating the process, clearing the image sequence, and the user fails face liveness detection.
Optionally, the non-live images include: photos, videos, face masks.
Optionally, the micro-motions include micro-changes of facial organs, micro-changes of facial muscles, and micro-movements of the face.
Optionally, after identifying whether micro-motions exist in the plurality of consecutive first face images, the steps further include:
if no micro-motions exist, terminating the process, clearing the first face images in the image sequence, and the user fails face liveness detection.
Beneficial effects:
In the embodiments of the present application, a plurality of first face images of a user are continuously collected; after each first face image is determined to be a live image, whether micro-motions exist in the plurality of consecutive first face images is identified; if micro-motions exist, the user is confirmed to pass face liveness detection. Performing face liveness detection on the user through live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents face photos or face videos from deceiving the face recognition system, distinguishes real persons from dummies, and ensures information security.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present application.

Claims (11)

  1. A cloud-based face liveness detection method, characterized by comprising:
    continuously collecting a plurality of first face images of a user;
    after determining that each first face image is a live image, identifying whether micro-motions exist in the plurality of consecutive first face images;
    if micro-motions exist, confirming that the user passes face liveness detection.
  2. The method according to claim 1, characterized in that, before the plurality of first face images are continuously collected, the method further comprises:
    determining that the user meets a distance requirement.
  3. The method according to claim 2, characterized in that determining that the user meets the distance requirement comprises:
    acquiring a second face image of the user;
    acquiring a face area in the second face image;
    determining the user distance according to the face area;
    if the user distance matches the distance requirement, determining that the user meets the distance requirement.
  4. The method according to claim 3, characterized in that determining the user distance according to the face area comprises:
    determining the user distance according to the proportion of the face area in the second face image; or,
    extracting the distance between preset facial parts from the face area, and determining the user distance according to the ratio of that distance to the width and height of the second face image.
  5. The method according to claim 4, characterized in that, after the user distance is determined according to the face area, the method further comprises:
    if the user distance does not match the distance requirement, instructing the user to move so as to meet the distance requirement.
  6. The method according to any one of claims 1 to 5, characterized in that, after the plurality of first face images of the user are continuously collected, the method further comprises:
    determining whether each first face image is a live image;
    for any first face image: if the first face image is determined to be a live image, storing it in an image sequence; if the first face image is determined not to be a live image, terminating the process, clearing the image sequence, and the user fails face liveness detection.
  7. The method according to claim 6, characterized in that the non-live images comprise: photos, videos, face masks.
  8. The method according to any one of claims 1 to 7, characterized in that the micro-motions comprise micro-changes of facial organs, micro-changes of facial muscles, and micro-movements of the face.
  9. The method according to any one of claims 1 to 8, characterized in that, after identifying whether micro-motions exist in the plurality of consecutive first face images, the method further comprises:
    if no micro-motions exist, terminating the process, clearing the image sequence, and the user fails face liveness detection.
  10. An electronic device, characterized in that the electronic device comprises:
    a memory and one or more processors, the memory connected to the processor via a communication bus, the processor configured to execute instructions in the memory; the storage medium stores instructions for performing the steps of the method of any one of claims 1 to 9.
  11. A computer program product for use in combination with an electronic device including a display, the computer program product comprising a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism comprising instructions for performing the steps of the method of any one of claims 1 to 9.
PCT/CN2017/119543 2017-12-28 2017-12-28 基于云端的人脸活体检测方法、电子设备和程序产品 WO2019127262A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780002701.0A CN108124486A (zh) 2017-12-28 2017-12-28 基于云端的人脸活体检测方法、电子设备和程序产品
PCT/CN2017/119543 WO2019127262A1 (zh) 2017-12-28 2017-12-28 基于云端的人脸活体检测方法、电子设备和程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119543 WO2019127262A1 (zh) 2017-12-28 2017-12-28 基于云端的人脸活体检测方法、电子设备和程序产品

Publications (1)

Publication Number Publication Date
WO2019127262A1 true WO2019127262A1 (zh) 2019-07-04

Family

ID=62233594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119543 WO2019127262A1 (zh) 2017-12-28 2017-12-28 基于云端的人脸活体检测方法、电子设备和程序产品

Country Status (2)

Country Link
CN (1) CN108124486A (zh)
WO (1) WO2019127262A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259757A (zh) * 2020-01-13 2020-06-09 支付宝实验室(新加坡)有限公司 一种基于图像的活体识别方法、装置及设备
CN111783617A (zh) * 2020-06-29 2020-10-16 中国工商银行股份有限公司 人脸识别数据处理方法及装置
CN112818918A (zh) * 2021-02-24 2021-05-18 浙江大华技术股份有限公司 一种活体检测方法、装置、电子设备及存储介质
CN114863515A (zh) * 2022-04-18 2022-08-05 厦门大学 基于微表情语义的人脸活体检测方法及装置
CN115035579A (zh) * 2022-06-22 2022-09-09 支付宝(杭州)信息技术有限公司 基于人脸交互动作的人机验证方法和***

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108124486A (zh) 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 Cloud-based face liveness detection method, electronic device and program product
CN109255322B (zh) 2018-09-03 2019-11-19 北京诚志重科海图科技有限公司 Face liveness detection method and apparatus
CN109684927A (zh) 2018-11-21 2019-04-26 北京蜂盒科技有限公司 Liveness detection method and apparatus, computer-readable storage medium and electronic device
CN109684924B (zh) 2018-11-21 2022-01-14 奥比中光科技集团股份有限公司 Face liveness detection method and device
CN109784175A (zh) 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Method, device and storage medium for identifying persons with abnormal behavior based on micro-expression recognition
CN109815944A (zh) 2019-03-21 2019-05-28 娄奥林 Defense method against artificial-intelligence recognition of video face replacement
CN111931544B (zh) 2019-05-13 2022-11-15 ***通信集团湖北有限公司 Liveness detection method and apparatus, computing device and computer storage medium
CN112997185A (zh) 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 Face liveness detection method, chip and electronic device
CN111507286B (zh) 2020-04-22 2023-05-02 北京爱笔科技有限公司 Fake-person detection method and apparatus
CN112506204B (zh) 2020-12-17 2022-12-30 深圳市普渡科技有限公司 Robot obstacle-encounter handling method, apparatus, device and computer-readable storage medium
CN112990167B (zh) 2021-05-19 2021-08-10 北京焦点新干线信息技术有限公司 Image processing method and apparatus, storage medium and electronic device
CN113591622A (zh) 2021-07-15 2021-11-02 广州大白互联网科技有限公司 Liveness detection method and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135188A1 (en) * 2007-11-26 2009-05-28 Tsinghua University Method and system of live detection based on physiological motion on human face
CN104361326A (zh) 2014-11-18 2015-02-18 新开普电子股份有限公司 Method for discriminating a live human face
CN105718925A (zh) 2016-04-14 2016-06-29 苏州优化智能科技有限公司 Terminal device for live-person identity verification based on near-infrared and facial micro-expressions
CN106557726A (zh) 2015-09-25 2017-04-05 北京市商汤科技开发有限公司 Face identity authentication system with silent liveness detection and method thereof
CN107016608A (zh) 2017-03-30 2017-08-04 广东微模式软件股份有限公司 Remote account-opening method and system based on identity information verification
CN108124486A (zh) 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 Cloud-based face liveness detection method, electronic device and program product

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662334A (zh) 2012-04-18 2012-09-12 深圳市兆波电子技术有限公司 Method for controlling the distance between a user and the screen of an electronic device, and electronic device therefor
CN104143078B (zh) 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 Live face recognition method, apparatus and device
CN104794464B (zh) 2015-05-13 2019-06-07 上海依图网络科技有限公司 Liveness detection method based on relative attributes

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259757A (zh) 2020-01-13 2020-06-09 支付宝实验室(新加坡)有限公司 Image-based liveness recognition method, apparatus and device
CN111259757B (zh) 2020-01-13 2023-06-20 支付宝实验室(新加坡)有限公司 Image-based liveness recognition method, apparatus and device
CN111783617A (zh) 2020-06-29 2020-10-16 中国工商银行股份有限公司 Face recognition data processing method and apparatus
CN111783617B (zh) 2020-06-29 2024-02-23 中国工商银行股份有限公司 Face recognition data processing method and apparatus
CN112818918A (zh) 2021-02-24 2021-05-18 浙江大华技术股份有限公司 Liveness detection method and apparatus, electronic device and storage medium
CN112818918B (zh) 2021-02-24 2024-03-26 浙江大华技术股份有限公司 Liveness detection method and apparatus, electronic device and storage medium
CN114863515A (zh) 2022-04-18 2022-08-05 厦门大学 Face liveness detection method and apparatus based on micro-expression semantics
CN115035579A (zh) 2022-06-22 2022-09-09 支付宝(杭州)信息技术有限公司 Human-machine verification method and system based on face interaction actions

Also Published As

Publication number Publication date
CN108124486A (zh) 2018-06-05

Similar Documents

Publication Publication Date Title
WO2019127262A1 (zh) Cloud-based face liveness detection method, electronic device and program product
CN105612533B (zh) Liveness detection method, liveness detection system and computer program product
KR102596897B1 (ko) Method and apparatus for detecting forged faces based on motion vectors and feature vectors
JP7040952B2 (ja) Face authentication method and apparatus
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
WO2019127365A1 (zh) Face liveness detection method, electronic device and computer program product
CN105989264B (zh) Biometric liveness detection method and system
US10621454B2 (en) Living body detection method, living body detection system, and computer program product
CN102375970B (zh) Face-based identity authentication method and authentication apparatus
CN106407914B (zh) Method and apparatus for detecting human faces, and remote teller machine system
CN104361276B (zh) Multimodal biometric identity authentication method and system
CN106557726B (zh) Face identity authentication system with silent liveness detection and method thereof
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
US20180239955A1 (en) Liveness detection
CN109858375B (zh) Live face detection method, terminal and computer-readable storage medium
CN107798279B (zh) Face liveness detection method and apparatus
US20240021015A1 (en) System and method for selecting images for facial recognition processing
WO2016172923A1 (zh) Video detection method, video detection system and computer program product
CN110612530A (zh) Method for selecting frames used in face processing
JP5061563B2 (ja) Detection device, living-body determination method, and program
CN111626240B (zh) Face image recognition method, apparatus, device and readable storage medium
JP7268725B2 (ja) Image processing device, image processing method, and image processing program
CN107480628B (zh) Face recognition method and apparatus
CN113642497A (zh) Method, server and device for face anti-spoofing
CN109886084B (zh) Gyroscope-based face authentication method, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936763

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 18.11.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17936763

Country of ref document: EP

Kind code of ref document: A1