CN108805024B - Image processing method, image processing device, computer-readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN108805024B
CN108805024B (application CN201810403815.2A)
Authority
CN
China
Prior art keywords
target
image
face
depth
living body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810403815.2A
Other languages
Chinese (zh)
Other versions
CN108805024A (en)
Inventor
郭子青
周海涛
惠方方
谭筱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810403815.2A priority Critical patent/CN108805024B/en
Publication of CN108805024A publication Critical patent/CN108805024A/en
Application granted granted Critical
Publication of CN108805024B publication Critical patent/CN108805024B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/172 Classification, e.g. identification
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method, an image processing device, a computer readable storage medium and an electronic device. The method comprises the following steps: acquiring a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image; performing living body detection processing on the target face area according to the target depth image; if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result; and obtaining a face verification result according to the face matching result. The image processing method, the image processing device, the computer readable storage medium and the electronic equipment can improve the accuracy of image processing.

Description

Image processing method, image processing device, computer-readable storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Because the human face has unique characteristics, face recognition technology is being applied more and more widely in intelligent terminals. Many applications on an intelligent terminal perform authentication through the human face, for example, unlocking the terminal through the face or performing payment authentication through the face. Meanwhile, the intelligent terminal can also process images containing human faces, for example, recognizing facial features, making emoticons from facial expressions, or performing beautification processing based on facial features.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, a computer readable storage medium and an electronic device, which can improve the accuracy of image processing.
An image processing method comprising:
acquiring a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image;
performing living body detection processing on the target face area according to the target depth image;
if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result;
and obtaining a face verification result according to the face matching result.
An image processing apparatus comprising:
the system comprises a face detection module, a target infrared image acquisition module, a target depth image acquisition module and a face detection module, wherein the face detection module is used for acquiring a target infrared image and a target depth image and performing face detection according to the target infrared image to determine a target face area, and the target depth image is used for representing depth information corresponding to the target infrared image;
the living body detection module is used for carrying out living body detection processing on the target human face area according to the target depth image;
the face matching module is used for acquiring a target face attribute parameter corresponding to the target face area if the living body detection is successful, and performing face matching processing on the target face area according to the target face attribute parameter to obtain a face matching result;
and the face verification module is used for obtaining a face verification result according to the face matching result.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of:
acquiring a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image;
performing living body detection processing on the target face area according to the target depth image;
if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result;
and obtaining a face verification result according to the face matching result.
An electronic device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of:
acquiring a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image;
performing living body detection processing on the target face area according to the target depth image;
if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result;
and obtaining a face verification result according to the face matching result.
The image processing method, the image processing device, the computer readable storage medium and the electronic equipment can acquire the target infrared image and the target depth image, and carry out face detection according to the target infrared image to obtain the target face area. And then, carrying out living body detection processing according to the target depth image, acquiring target face attribute parameters of a target face area after the living body detection is successful, and carrying out face matching processing according to the target face attribute parameters. And obtaining a final face verification result according to the face matching result. Therefore, in the process of face verification, living body detection can be carried out according to the depth image, face matching is carried out according to the infrared image, and accuracy of face verification is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a diagram of an application scenario of an image processing method in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flow chart of an image processing method in another embodiment;
FIG. 4 is a schematic diagram of computing depth information in one embodiment;
FIG. 5 is a flowchart of an image processing method in yet another embodiment;
FIG. 6 is a flowchart of an image processing method in yet another embodiment;
FIG. 7 is a diagram of hardware components for implementing an image processing method in one embodiment;
FIG. 8 is a diagram showing a hardware configuration for implementing an image processing method in another embodiment;
FIG. 9 is a diagram illustrating a software architecture for implementing an image processing method according to an embodiment;
FIG. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a diagram illustrating an application scenario of an image processing method according to an embodiment. As shown in fig. 1, the application scenario includes a user 102 and an electronic device 104. The electronic device 104 may be equipped with a camera module, obtain a target infrared image and a target depth image corresponding to the user 102, and perform face detection according to the target infrared image to determine a target face region, where the target depth image is used to represent depth information corresponding to the target infrared image. Living body detection processing is performed on the target face region according to the target depth image; if the living body detection is successful, target face attribute parameters corresponding to the target face region are acquired, and face matching processing is performed on the target face region according to the target face attribute parameters to obtain a face matching result. A face verification result is then obtained according to the face matching result. The electronic device 104 may be a smart phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. As shown in fig. 2, the image processing method includes steps 202 to 208. Wherein:
step 202, obtaining a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image.
In one embodiment, a camera may be mounted on the electronic device, and images may be acquired through the mounted camera. Cameras can be classified into types such as laser cameras and visible light cameras according to the images they acquire: a laser camera can obtain an image formed by laser illuminating an object, and a visible light camera can obtain an image formed by visible light illuminating an object. The electronic device can be provided with several cameras, and their installation positions are not limited. For example, one camera may be installed on the front panel of the electronic device and two on the back panel, or the cameras may be embedded inside the electronic device and opened by rotating or sliding. Specifically, a front camera and a rear camera can be mounted on the electronic device; they acquire images from different viewing angles, the front camera from the front of the electronic device and the rear camera from its back.
The target infrared image and the target depth image acquired by the electronic device correspond to each other, and the target depth image is used for representing depth information corresponding to the target infrared image. The target infrared image can display detailed information of the photographed object, and the target depth image can represent the depth information of the photographed object. After the target infrared image is acquired, face detection can be performed on it to detect whether it contains a human face. If the target infrared image contains a human face, the target face region where the face is located is extracted. Because the target infrared image and the target depth image correspond to each other, after the target face region is extracted, the depth information of each pixel point in the target face region can be obtained from the corresponding region of the target depth image.
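A minimal sketch of this step follows, assuming pixel-aligned single-channel 8-bit infrared and depth arrays. The patent does not name a face detector, so OpenCV's Haar cascade stands in here as a hypothetical choice.

```python
import cv2
import numpy as np

def extract_face_and_depth(target_ir: np.ndarray, target_depth: np.ndarray):
    """Return (face_rect, ir_face, depth_face), or None if no face is detected."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Assumes target_ir is an 8-bit single-channel image.
    faces = detector.detectMultiScale(target_ir, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # The two images correspond pixel-for-pixel, so the same rectangle
    # indexes the depth information of the detected target face region.
    return (x, y, w, h), target_ir[y:y+h, x:x+w], target_depth[y:y+h, x:x+w]
```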
And 204, performing living body detection processing on the target face area according to the target depth image.
The target infrared image and the target depth image correspond to each other, so after the target face area is extracted from the target infrared image, the area where the target face is located in the target depth image can be found according to the position of the target face area. Specifically, an image is a two-dimensional pixel matrix, and the position of each pixel point in the image can be represented by a two-dimensional coordinate. For example, a coordinate system may be established with the pixel at the lower-left corner of the image as the origin: each move of one pixel to the right is one unit in the positive X direction, and each move of one pixel up is one unit in the positive Y direction, so the position of every pixel point in the image can be represented by a two-dimensional coordinate.
After a target face area is detected in the target infrared image, the position of any pixel point of the target face area in the target infrared image can be represented by face coordinates, and the position of the target face in the target depth image can then be located according to these coordinates, so that the face depth information corresponding to the target face area is obtained. In general, a living human face is stereoscopic, while a face shown in a picture or on a screen is planar. Meanwhile, the collected depth information may also differ for different skin types. Therefore, whether the acquired target face area is stereoscopic or planar can be judged from the acquired face depth information, and skin characteristics of the face can also be derived from it, so that living body detection is performed on the target face area.
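As an illustration of the stereoscopic-versus-planar judgment, the hedged sketch below measures the depth relief inside the face region; the 20 mm threshold and the percentile statistics are assumptions, not values from the patent.

```python
import numpy as np

def is_stereoscopic(face_depth: np.ndarray, min_relief_mm: float = 20.0) -> bool:
    """True if the face depth region shows enough relief to be a live face."""
    valid = face_depth[face_depth > 0]            # drop pixels without depth
    if valid.size == 0:
        return False
    # Relief between near points (e.g. nose) and far points (e.g. cheeks);
    # a photo or screen is nearly planar and shows almost no relief.
    relief = np.percentile(valid, 95) - np.percentile(valid, 5)
    return float(relief) >= min_relief_mm
```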
And step 206, if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result.
The target face attribute parameters refer to parameters capable of representing attributes of a target face, and the target face can be identified and matched according to the target face attribute parameters. The target face attribute parameters may include, but are not limited to, face deflection angles, face brightness parameters, facial features parameters, skin quality parameters, geometric feature parameters, and the like. The electronic device may pre-store a preset face region for matching, and then obtain face attribute parameters of the preset face region. After the target face attribute parameters are acquired, the target face attribute parameters can be compared with the face attribute parameters stored in advance. If the target face attribute parameters are matched with the pre-stored face attribute parameters, the preset face area corresponding to the matched face attribute parameters is the preset face area corresponding to the target face area.
The preset face area stored in the electronic equipment is regarded as the face area with the operation authority. And if the target face area is matched with the preset face area, the user corresponding to the target face area is considered to have the operation authority. Namely, when the target face area is matched with the preset face area, the face matching is considered to be successful; and when the target face area is not matched with the preset face area, the face matching is considered to be failed.
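The following sketch illustrates this comparison against stored preset face regions; representing the attribute parameters as numeric vectors, the cosine similarity measure, and the 0.9 threshold are assumptions made for illustration.

```python
import numpy as np

def match_face(target_params: np.ndarray, preset_params: dict, threshold: float = 0.9):
    """Return the identifier of a matching preset face region, or None.

    preset_params maps a face-region id to its stored attribute vector.
    """
    for face_id, params in preset_params.items():
        similarity = float(np.dot(target_params, params) /
                           (np.linalg.norm(target_params) * np.linalg.norm(params)))
        if similarity >= threshold:               # matching value exceeds threshold
            return face_id
    return None
```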
And step 208, obtaining a face verification result according to the face matching result.
In one embodiment, the living body detection processing is carried out according to the target depth image, and if the living body detection is successful, the face matching processing is carried out according to the target infrared image. The face verification is considered to be successful only after the living body detection and the face matching are successful. The processing unit of the electronic equipment can receive a face verification instruction initiated by an upper application program, when the processing unit detects the face verification instruction, face verification processing is carried out according to the target infrared image and the target depth image, finally, a face verification result is returned to the upper application program, and the application program carries out subsequent processing according to the face verification result.
The image processing method provided by the embodiment can acquire the target infrared image and the target depth image, and perform face detection according to the target infrared image to obtain the target face area. And then, carrying out living body detection processing according to the target depth image, acquiring target face attribute parameters of a target face area after the living body detection is successful, and carrying out face matching processing according to the target face attribute parameters. And obtaining a final face verification result according to the face matching result. Therefore, in the process of face verification, living body detection can be carried out according to the depth image, face matching is carried out according to the infrared image, and accuracy of face verification is improved.
Fig. 3 is a flowchart of an image processing method in another embodiment. As shown in fig. 3, the image processing method includes steps 302 to 316. Wherein:
step 302, when the first processing unit detects a face verification instruction, controlling a camera module to collect an infrared image and a depth image; and the time interval between the first moment of acquiring the infrared image and the second moment of acquiring the depth image is less than a first threshold value.
In one embodiment, the processing unit of the electronic device can receive an instruction from an upper-layer application program. When the processing unit receives a face verification instruction, it can control the camera module to work and collect the infrared image and the depth image through the camera. The processing unit is connected to the camera, so the images obtained by the camera can be transmitted to the processing unit for processing such as cropping, brightness adjustment, face detection and face recognition. The camera module can include, but is not limited to, a laser camera, a laser lamp and a floodlight. When the processing unit receives the face verification instruction, it can obtain the infrared image and the depth image directly, or it can obtain the infrared image and a speckle image and calculate the depth image from the speckle image. Specifically, the processing unit can control the laser lamp and the floodlight to work in a time-sharing manner: when the laser lamp is turned on, a speckle image is collected through the laser camera; when the floodlight is turned on, an infrared image is collected through the laser camera.
It will be appreciated that when laser light is incident on an optically rough surface whose height fluctuations are greater than the wavelength, the randomly distributed surface elements scatter wavelets that overlap one another, giving the reflected light field a random spatial intensity distribution with a grainy structure; this is laser speckle. The laser speckle formed is highly random, so the laser speckle generated by different laser emitters differs, and when the resulting laser speckle is projected onto objects of different depths and shapes, the resulting speckle images are not identical. Because the laser speckle formed by a given laser emitter is unique, the speckle images obtained with it are also unique. Laser speckle formed by the laser lamp can be projected onto an object, and the laser camera then collects the speckle image formed by the laser speckle illuminating the object.
The laser lamp can emit many laser speckle points, and when these points illuminate objects at different distances, the positions of the spots displayed in the image differ. The electronic device may capture a standard reference image in advance, which is the image formed by the laser speckle illuminating a plane. The speckle points in the reference image are generally uniformly distributed, and a correspondence between each speckle point in the reference image and a reference depth is established. When a speckle image needs to be collected, the laser lamp is controlled to emit laser speckle, which illuminates the object and is collected by the laser camera to obtain the speckle image. Each speckle point in the speckle image is then compared with the corresponding speckle point in the reference image to obtain its position offset, and the actual depth corresponding to the speckle point is obtained from this offset and the reference depth.
The infrared image collected by the camera corresponds to the speckle image, and the speckle image can be used for calculating depth information corresponding to each pixel point in the infrared image. Therefore, the human face can be detected and identified through the infrared image, and the depth information corresponding to the human face can be calculated according to the speckle image. Specifically, in the process of calculating the depth information according to the speckle images, a relative depth is first calculated according to the position offset of the speckle images relative to the scattered spots of the reference image, and the relative depth can represent the depth information of the actual shot object to the reference plane. And then calculating the actual depth information of the object according to the acquired relative depth and the reference depth. The depth image is used for representing depth information corresponding to the infrared image, and can be relative depth from a represented object to a reference plane or absolute depth from the object to a camera.
The step of calculating the depth image according to the speckle image may specifically include: acquiring a reference image, wherein the reference image is an image with reference depth information obtained by calibration; comparing the reference image with the speckle image to obtain offset information, wherein the offset information is used for representing the horizontal offset of the speckle point in the speckle image relative to the corresponding scattered spot in the reference image; and calculating to obtain a depth image according to the offset information and the reference depth information.
FIG. 4 is a schematic diagram of computing depth information in one embodiment. As shown in fig. 4, the laser lamp 402 can generate laser speckles, which are reflected off an object and then captured by the laser camera 404 to form an image. During calibration of the camera, laser speckles emitted by the laser lamp 402 are reflected by the reference plane 408, the reflected light is collected by the laser camera 404, and a reference image is obtained through the imaging plane 410. The reference depth L from the reference plane 408 to the laser lamp 402 is known. In the process of actually calculating depth information, laser speckles emitted by the laser lamp 402 are reflected by the object 406, the reflected light is collected by the laser camera 404, and an actual speckle image is obtained through the imaging plane 410. The actual depth information is then given by:
Dis = (CD × L × f) / (CD × f + L × AB)    (1)
where L is the distance between the laser lamp 402 and the reference plane 408, f is the focal length of the lens in the laser camera 404, CD is the distance between the laser lamp 402 and the laser camera 404, and AB is the offset distance between the image of the object 406 and the image of the reference plane 408. AB may be the product of the pixel offset n and the pixel pitch p. When the distance Dis between the object 406 and the laser lamp 402 is greater than the distance L between the reference plane 408 and the laser lamp 402, AB is negative; when Dis is less than L, AB is positive.
Specifically, each pixel point (x, y) in the speckle image is traversed, and a pixel block of preset size, for example 31 pixels by 31 pixels, is selected with that pixel point at its center. A matching pixel block is then searched for in the reference image, and the horizontal offset between the coordinates of the matched block in the reference image and the coordinates of the block centered at (x, y) is calculated, with offsets to the right counted as positive and offsets to the left as negative. The calculated horizontal offset is then substituted into formula (1) to obtain the depth information of pixel point (x, y). By sequentially computing this for every pixel point in the speckle image, the depth information corresponding to each pixel point is obtained.
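The sketch below illustrates this per-pixel procedure under stated assumptions: a sum-of-squared-differences block match over a ±64 pixel horizontal search window (the window size is an assumption; the 31 × 31 block comes from the text), followed by formula (1). L, f, CD and the pixel pitch p must be in consistent units.

```python
import numpy as np

def depth_at(speckle, reference, x, y, L, f, CD, p, half=15, search=64):
    """Depth at interior pixel (x, y) via block matching and formula (1)."""
    block = speckle[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
    best_n, best_err = 0, np.inf
    for n in range(-search, search + 1):          # candidate horizontal offsets
        x0, x1 = x + n - half, x + n + half + 1
        if x0 < 0 or x1 > reference.shape[1]:     # window ran off the image
            continue
        ref = reference[y-half:y+half+1, x0:x1].astype(np.float32)
        err = float(np.sum((block - ref) ** 2))   # sum of squared differences
        if err < best_err:
            best_err, best_n = err, n
    AB = best_n * p                               # pixel offset n times pitch p
    return (CD * L * f) / (CD * f + L * AB)       # formula (1)
```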
The depth image can be used for representing depth information corresponding to the infrared image, and each pixel point contained in the depth image represents one piece of depth information. Specifically, each scattered spot in the reference image corresponds to one piece of reference depth information, after the horizontal offset between the speckle point in the reference image and the scattered spot in the speckle image is obtained, the relative depth information from the object in the speckle image to the reference plane can be obtained through calculation according to the horizontal offset, then the actual depth information from the object to the camera can be obtained through calculation according to the relative depth information and the reference depth information, and the final depth image can be obtained.
And 304, acquiring a target infrared image according to the infrared image, and acquiring a target depth image according to the depth image.
In the embodiment provided by the application, after the infrared image and the speckle image are acquired, the depth image can be calculated from the speckle image. The infrared image and the depth image can each be corrected, that is, internal and external parameters of the infrared image and the depth image are corrected. For example, if the laser camera is deflected, the acquired infrared image and depth image need to be corrected for the parallax error caused by this deflection in order to obtain standard images. Correcting the infrared image yields the target infrared image, and correcting the depth image yields the target depth image. Specifically, an infrared parallax image can be calculated from the infrared image, and the internal and external parameters are then corrected according to the infrared parallax image to obtain the target infrared image; a depth parallax image is calculated from the depth image, and the internal and external parameters are then corrected according to the depth parallax image to obtain the target depth image.
And step 306, detecting a face area in the target infrared image.
And 308, if two or more face regions exist in the target infrared image, taking the face region with the largest region area as the target face region.
It is understood that there may be no face region, one face region, or two or more face regions in the target infrared image. When the target infrared image does not have the face area, the face verification processing is not needed. When a face area exists in the target infrared image, the face area can be directly subjected to face verification processing. When two or more face areas exist in the target infrared image, one of the face areas can be acquired as a target face area to perform face verification processing. Specifically, if two or more face regions exist in the target infrared image, the area corresponding to each face region may be calculated. The area of the region can be represented by the number of pixel points contained in the face region, and the face region with the largest area can be used as a target face region for verification.
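As a small illustration of this rule, the sketch below picks the face region with the largest pixel area; representing regions as (x, y, w, h) rectangles is an assumption.

```python
def pick_target_face(face_rects):
    """face_rects: iterable of (x, y, w, h) tuples; returns the largest or None."""
    return max(face_rects, key=lambda r: r[2] * r[3], default=None)
```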
And 310, extracting a target face depth area corresponding to the target face area in the target depth image, and acquiring target living body attribute parameters according to the target face depth area.
Generally, in the process of verifying a face, whether the face region matches a preset face region can be verified according to the acquired target infrared image. If the photographed face were a photograph, a sculpture, or the like, such matching might still succeed. Therefore, the face verification process includes a living body detection stage and a face matching stage: the face matching stage identifies the identity of the face, and the living body detection stage detects whether the photographed face is a living body. Living body detection processing is performed according to the acquired target depth image, so verification can succeed only when a living face is captured. It can be understood that the acquired target infrared image can represent detail information of a human face, the acquired target depth image can represent the depth information corresponding to the target infrared image, and living body detection processing can be performed according to the target depth image. For example, if the photographed face is a face in a photograph, it can be determined from the target depth image that the acquired face is not stereoscopic, and the acquired face can be considered a non-living face.
And step 312, performing living body detection processing according to the target living body attribute parameters.
Specifically, the performing the living body detection according to the target depth image includes: and searching a target face depth area corresponding to the target face area in the target depth image, extracting target living body attribute parameters according to the target face depth area, and performing living body detection processing according to the target living body attribute parameters. Alternatively, the target living body attribute parameters may include face depth information, skin characteristics, a direction of texture, a density of texture, a width of texture, and the like corresponding to the face. For example, the target living body attribute parameter may be face depth information, and if the face depth information conforms to a face living body rule, the target face region is considered to have biological activity, that is, the target face region is a living body face region.
And step 314, if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result.
In one embodiment, in the face matching stage, the second processing unit may match the extracted target face region with a preset face region. When the target face image is matched, the target face attribute parameters of the target face image can be extracted, the extracted target face attribute parameters are matched with the face attribute parameters of the preset face image stored in the electronic equipment, and if the matching value exceeds the matching threshold value, the face matching is considered to be successful. For example, the characteristics of the deflection angle, brightness information, facial features and the like of the face in the face image can be extracted as the face attribute parameters, and if the matching degree of the extracted target face attribute parameters and the stored face attribute parameters exceeds 90%, the face matching is considered to be successful. Specifically, judging whether a target face attribute parameter of a target face area is matched with a face attribute parameter of a preset face area; if so, successfully matching the face of the target face area; and if not, failing to match the face of the target face area.
And step 316, obtaining a face verification result according to the face matching result.
In the embodiment provided by the present application, after the face is subjected to the live body detection processing, if the live body detection is successful, the face is subjected to the matching processing. And the human face verification is considered to be successful only when the living body detection is successful and the human face matching is successful. Specifically, step 316 may include: if the face matching is successful, obtaining a result of successful face verification; and if the face matching fails, obtaining a face verification failure result. The image processing method may further include: and if the living body detection fails, obtaining a result of face verification failure. After the processing unit obtains the face verification result, the processing unit can send the face verification result to an upper application program, and the application program can perform corresponding processing according to the face verification result.
For example, when performing payment verification based on a human face, after the processing unit transmits a human face verification result to the application program, the application program may perform payment processing based on the human face verification result. If the face verification is successful, the application program can continue to carry out payment operation and display successful payment information to the user; if the face verification fails, the application program can stop the payment operation and display the payment failure information to the user.
In one embodiment, the step of acquiring the infrared image and the depth image of the target may comprise:
step 502, the first processing unit calculates an infrared parallax image according to the infrared image, and calculates a depth parallax image according to the depth image.
Specifically, the electronic device may include a first processing unit and a second processing unit, both of which operate in a secure operating environment. The secure operating environment may include a first secure environment in which the first processing unit operates and a second secure environment in which the second processing unit operates. The first processing unit and the second processing unit are processing units distributed on different processors and in different secure environments. For example, the first processing unit may be an external MCU (Microcontroller Unit) module or a secure processing module in a DSP (Digital Signal Processor), and the second processing unit may be a CPU core in a TEE (Trusted Execution Environment).
The CPU in the electronic device has 2 operating modes: TEE and REE (Rich Execution Environment). Normally, the CPU operates under the REE, but when the electronic device needs to acquire data with a higher security level, for example, the electronic device needs to acquire face data for identification verification, the CPU may switch from the REE to the TEE for operation. When a CPU in the electronic equipment is a single core, the single core can be directly switched from REE to TEE; when the CPU in the electronic equipment has multiple cores, the electronic equipment switches one core from REE to TEE, and other cores still run in REE.
And step 504, the first processing unit sends the infrared parallax image and the depth parallax image to the second processing unit.
Specifically, the first processing unit is connected to two data transmission channels: a secure transmission channel and a non-secure transmission channel. Face verification processing is usually performed in a secure operating environment, and the second processing unit is a processing unit in the secure operating environment, so when the first processing unit is connected to the second processing unit, the first processing unit is currently on the secure transmission channel; when the first processing unit is connected to a processing unit in a non-secure operating environment, it is currently on the non-secure transmission channel. When the first processing unit detects the face verification instruction, it can switch to the secure transmission channel to transmit data. Step 504 may include: judging whether the first processing unit is connected to the second processing unit; if so, sending the infrared parallax image and the depth parallax image to the second processing unit; if not, controlling the first processing unit to connect to the second processing unit, and then sending the infrared parallax image and the depth parallax image to the second processing unit through the first processing unit.
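A hedged sketch of this channel check follows; the connected_to, connect, and send interfaces are hypothetical stand-ins for whatever bus API the processing units expose.

```python
def send_parallax_images(first_unit, second_unit, ir_parallax, depth_parallax):
    """Forward the parallax images over the secure channel (step 504)."""
    # Switch to the secure transmission channel if not already connected.
    if first_unit.connected_to is not second_unit:
        first_unit.connect(second_unit)
    first_unit.send((ir_parallax, depth_parallax))
```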
And step 506, the second processing unit performs correction according to the infrared parallax image to obtain a target infrared image, and performs correction according to the depth parallax image to obtain a target depth image.
In one embodiment, the step of performing a biopsy may comprise:
step 602, extracting a target face depth region corresponding to the target face region in the target depth image, and acquiring a first target living body attribute parameter according to the target face depth region.
In one embodiment, the in vivo detection may be performed based on the target depth image, and may also be performed based on the target depth image and the speckle image. Specifically, a first target living body attribute parameter is obtained according to a target depth image, a second target living body attribute parameter is obtained according to a speckle image, and then living body detection is carried out according to the first target living body attribute parameter and the second target living body attribute parameter.
And step 604, acquiring a speckle image, wherein the speckle image is an image, collected by a laser camera, formed by laser speckle illuminating an object, and the target depth image is calculated according to the speckle image.
And 606, extracting a target face speckle region corresponding to the target face region in the speckle image, and acquiring a second target living body attribute parameter according to the target face speckle region.
The speckle image and the infrared image correspond to each other, so the target face speckle region can be found in the speckle image according to the target face region, and the second target living body attribute parameter is then obtained according to the target face speckle region. The electronic device can control the laser lamp to turn on and collect the speckle image through the laser camera. Generally, an electronic device may be equipped with two or more cameras at the same time, and if so, each camera's field of view differs. To ensure that different cameras capture images corresponding to the same scene, the images acquired by different cameras need to be aligned so that they correspond to each other. Therefore, after the camera acquires the original speckle image, the original speckle image is generally corrected to obtain a corrected speckle image. The speckle image used for living body detection may be the original speckle image or the corrected speckle image.
Specifically, if the obtained speckle image is an original speckle image collected by a camera, step 606 may further include: and calculating to obtain a speckle parallax image according to the speckle image, and correcting according to the speckle parallax image to obtain a target speckle image. Step 606 may include: and extracting a target face speckle region corresponding to the target face region in the target speckle image, and acquiring a second target living body attribute parameter according to the target face speckle region.
And 608, performing living body detection processing according to the first target living body attribute parameter and the second target living body attribute parameter.
It is understood that the first target living body attribute parameter and the second target living body attribute parameter may be obtained by a network learning algorithm, and after they are obtained, living body detection processing may be performed according to both of them. For example, the first target living body attribute parameter may be face depth information, and the second target living body attribute parameter may be a skin characteristic parameter. The speckle images can be used to train a network learning algorithm that yields the skin characteristic parameters of a collected speckle image, and whether the target face is a living body is then judged according to the face depth information and the skin characteristic parameters.
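A minimal sketch of combining the two cues, assuming the depth cue has been reduced to a relief value in millimetres and the network's skin output to a score in [0, 1]; both thresholds are assumptions.

```python
def is_live_face(depth_relief_mm: float, skin_score: float,
                 min_relief_mm: float = 20.0, min_skin_score: float = 0.5) -> bool:
    """Liveness passes only when both the depth cue and the skin cue agree."""
    return depth_relief_mm >= min_relief_mm and skin_score >= min_skin_score
```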
In the embodiment provided by the present application, the face verification processing may be performed in the second processing unit, and after obtaining the face verification result, the second processing unit may send the face verification result to the target application program that initiated the face verification instruction. Specifically, the face verification result may be encrypted, and the encrypted face verification result sent to the target application program that initiated the face verification instruction. The specific algorithm for encrypting the face verification result is not limited; for example, DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5) or HAVAL may be used.
Specifically, the encryption processing may be performed according to the network environment of the electronic device: acquiring the network security level of the network environment in which the electronic device is currently located, acquiring an encryption level according to the network security level, and performing encryption processing corresponding to that encryption level on the face verification result. It will be appreciated that an application program generally needs to be connected to a network while performing such operations. For example, when payment authentication is performed on the face, the face verification result can be sent to the application program, and the application program then sends it to the corresponding server to complete the payment operation. When sending the face verification result, the application program needs to connect to the network and then send the result to the corresponding server through the network. Therefore, the face verification result may first be encrypted before it is sent: the network security level of the network environment in which the electronic device is currently located is detected, and encryption processing is performed according to the network security level.
The lower the network security level, the lower the security of the network environment is considered, and the higher the corresponding encryption level. The electronic equipment establishes a corresponding relation between the network security level and the encryption level in advance, can acquire the corresponding encryption level according to the network security level, and encrypts the face verification result according to the encryption level. The face verification result can be encrypted according to the acquired reference image.
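The mapping below sketches this relationship; the concrete network categories and the numeric encryption levels are assumptions for illustration.

```python
ENCRYPTION_LEVEL_BY_NETWORK = {
    "trusted_wifi": 1,    # high network security -> low encryption level
    "mobile_data": 2,
    "public_wifi": 3,     # low network security  -> high encryption level
}

def encryption_level(network_security: str) -> int:
    """Return the encryption level for a network security level (default: strongest)."""
    return ENCRYPTION_LEVEL_BY_NETWORK.get(network_security, 3)
```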
In one embodiment, the reference image is a speckle image acquired by the electronic device when the camera module is calibrated; because the reference image is highly unique, reference images acquired by different electronic devices differ. The reference image itself can therefore serve as an encryption key for encrypting data, and the electronic device can store it in a secure environment to prevent data leakage. Specifically, the acquired reference image is a two-dimensional pixel matrix, and each pixel point has a corresponding pixel value. The face verification result may be encrypted according to all or some of the pixel points of the reference image. For example, if the face verification result includes a depth image, the reference image may be directly superimposed on the depth image to obtain an encrypted image, or the pixel matrix of the depth image may be multiplied by the pixel matrix of the reference image to obtain the encrypted image. The pixel values of one or more pixel points in the reference image may also be taken as an encryption key to encrypt the depth image; the specific encryption algorithm is not limited in this embodiment.
The reference image is generated when the electronic device is calibrated, so the electronic device can store the reference image in a secure environment in advance; when the face verification result needs to be encrypted, the reference image can be read in the secure environment and used to encrypt the result. Meanwhile, the same reference image is stored on the server corresponding to the target application program. After the electronic device sends the encrypted face verification result to that server, the server acquires the reference image and decrypts the encrypted face verification result according to it.
It is understood that the server of the target application may store reference images acquired from many different electronic devices, and the reference image corresponding to each device is different. Therefore, the server may define a reference image identifier for each reference image, store the device identifier of the electronic device, and establish a correspondence between the reference image identifier and the device identifier. When the server receives a face verification result, the result can carry the device identifier of the electronic device; the server can then look up the corresponding reference image identifier according to the device identifier, find the corresponding reference image according to the reference image identifier, and decrypt the face verification result according to the found reference image.
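A toy sketch of this server-side bookkeeping follows; the table contents and the storage path are placeholders, not values from the patent.

```python
REF_ID_BY_DEVICE = {"device-001": "ref-A"}           # device id -> reference image id
REF_IMAGE_BY_ID = {"ref-A": "/srv/refs/ref-A.png"}   # reference image id -> stored image

def reference_for(device_id: str) -> str:
    """Resolve the device identifier carried with a received verification result."""
    return REF_IMAGE_BY_ID[REF_ID_BY_DEVICE[device_id]]
```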
In other embodiments provided in the present application, the method of encrypting according to the reference image may specifically include: acquiring the pixel matrix corresponding to the reference image, obtaining an encryption key from the pixel matrix, and encrypting the face verification result with that key. The reference image is composed of a two-dimensional pixel matrix, and since the acquired reference image is unique, its pixel matrix is also unique. The pixel matrix can be used directly as the encryption key, or it can first be converted into an encryption key that is then used to encrypt the face verification result. For example, the pixel matrix is a two-dimensional matrix of pixel values, and the position of each value can be represented by a two-dimensional coordinate, so the pixel values at one or more position coordinates can be read out and combined into an encryption key. After the encryption key is obtained, the face verification result is encrypted with it; the specific encryption algorithm is not limited in this embodiment. For example, the key may be directly superimposed on or multiplied with the data, or the key may be inserted as a value into the data to obtain the final encrypted data.
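The sketch below illustrates this key-derivation idea under explicit assumptions: fixed sampling coordinates, a SHA-256 digest of the sampled pixel values, and a repeating-key XOR cipher; none of these choices is prescribed by the patent.

```python
import hashlib
import numpy as np

def derive_key(reference: np.ndarray, coords) -> bytes:
    """Build a key from pixel values at fixed (x, y) coordinates of the reference image."""
    samples = bytes(int(reference[y, x]) & 0xFF for (x, y) in coords)
    return hashlib.sha256(samples).digest()

def encrypt(result: bytes, key: bytes) -> bytes:
    # Repeating-key XOR keeps the sketch short; XOR is symmetric, so the same
    # function decrypts. A real design would use an authenticated cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(result))
```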
The electronic device may also employ different encryption algorithms for different applications. Specifically, the electronic device may pre-establish a correspondence between an application identifier of the application program and the encryption algorithm, and the face verification instruction may include a target application identifier of the target application program. After the face verification instruction is received, the target application identifier contained in the face verification instruction can be obtained, the corresponding encryption algorithm is obtained according to the target application identifier, and the face verification result is encrypted according to the obtained encryption algorithm.
Specifically, the image processing method may further include: acquiring one or more of a target infrared image, a target speckle image and a target depth image as an image to be transmitted; acquiring an application level of a target application program initiating a face verification instruction, and acquiring a corresponding precision level according to the application level; and adjusting the precision of the image to be sent according to the precision level, and sending the adjusted image to be sent to the target application program. The application level may represent a corresponding importance level of the target application. Generally, the higher the application level of the target application, the higher the accuracy of the transmitted image. The electronic equipment can preset the application level of the application program, establish the corresponding relation between the application level and the precision level, and obtain the corresponding precision level according to the application level. For example, the application programs may be divided into four application levels, such as a system security application, a system non-security application, a third-party security application, and a third-party non-security application, and the corresponding precision levels are gradually reduced.
The precision of the image to be sent can be expressed as the resolution of the image or as the number of speckle points contained in the speckle image, so the precision of the target depth image and of the target speckle image obtained from the speckle image differ accordingly. Specifically, adjusting the image precision may include: adjusting the resolution of the image to be sent according to the precision level, or adjusting the number of speckle points contained in the acquired speckle image according to the precision level. The number of speckle points contained in the speckle image can be adjusted in software or in hardware. For software adjustment, the speckle points in the acquired speckle pattern can be detected directly, and some of them merged or eliminated, so that the adjusted speckle pattern contains fewer speckle points. For hardware adjustment, the number of laser speckle points generated by diffraction in the laser lamp can be adjusted. For example, when the precision is high, the number of generated laser speckle points is 30000; when it is low, the number is 20000. The precision of the correspondingly calculated depth image is reduced accordingly.
Specifically, different diffractive optical elements (DOEs) may be preset in the laser lamp, with different DOEs forming different numbers of speckle points by diffraction. Different DOEs are switched in according to the precision level to generate speckle images by diffraction, and depth images of different precisions are then obtained from the collected speckle images. When the application level of the application program is high, the corresponding precision level is also high, and the laser lamp switches in the DOE that produces more speckle points to emit laser speckle, so that a speckle image with more speckle points is obtained; when the application level is low, the corresponding precision level is also low, and the laser lamp switches in the DOE that produces fewer speckle points, so that a speckle image with fewer speckle points is obtained.
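A sketch of the selection logic, using the spot counts from the example above (the threshold and table keys are assumptions of the sketch):

```python
# Hypothetical DOE registry; the spot counts follow the example above
# (30000 speckle points at high precision, 20000 at low precision).
DOE_SPOT_COUNTS = {"high": 30000, "low": 20000}

def select_doe(precision_level: int, threshold: int = 3) -> str:
    # Higher precision levels switch in the DOE that diffracts more spots.
    return "high" if precision_level >= threshold else "low"
```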
The image processing method provided by this embodiment can acquire the target infrared image and the target depth image, and perform face detection according to the target infrared image to obtain the target face area. Living body detection processing is then carried out according to the target depth image; after the living body detection succeeds, the target face attribute parameters of the target face area are acquired and face matching processing is performed according to them, and the final face verification result is obtained according to the face matching result. In this way, during face verification, living body detection is performed according to the depth image and face matching according to the infrared image, which improves the accuracy of face verification.
It should be understood that, although the steps in the flowcharts of fig. 2, 3, 5 and 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3, 5 and 6 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and which need not be executed sequentially but may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 7 is a hardware configuration diagram for implementing an image processing method in one embodiment. As shown in fig. 7, the electronic device may include a camera module 710, a central processing unit (CPU) 720 and a first processing unit 730, wherein the camera module 710 includes a laser camera 712, a floodlight 714, an RGB (Red/Green/Blue color mode) camera 716 and a laser light 718. The first processing unit 730 includes a PWM (Pulse Width Modulation) module 732, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 734, a RAM (Random Access Memory) module 736 and a Depth Engine module 738. The second processing unit 722 may be a CPU core in a TEE (Trusted Execution Environment), and the first processing unit 730 may be an MCU (Microcontroller Unit) processor. It is understood that the central processing unit 720 may operate in a multi-core mode, and a CPU core in the central processing unit 720 may run in a TEE or a REE (Rich Execution Environment). Both the TEE and the REE are running modes of an ARM (Advanced RISC Machines) module. Generally, operations with higher security requirements in the electronic device need to be executed in the TEE, while other operations can be executed in the REE. In this embodiment, when the central processing unit 720 receives a face verification instruction initiated by a target application, the CPU core running in the TEE, i.e. the second processing unit 722, sends the face verification instruction over a secure SPI/I2C bus to the SPI/I2C module 734 in the first processing unit 730. After receiving the face verification instruction, the first processing unit 730 emits pulse waves through the PWM module 732 to control the floodlight 714 in the camera module 710 to turn on and collect an infrared image, and controls the laser light 718 in the camera module 710 to turn on and collect a speckle image. The camera module 710 can transmit the collected infrared image and speckle image to the Depth Engine module 738 in the first processing unit 730; the Depth Engine module 738 can calculate an infrared parallax image from the infrared image, calculate a depth image from the speckle image, and obtain a depth parallax image from the depth image. The infrared parallax image and the depth parallax image are then sent to the second processing unit 722 running in the TEE. The second processing unit 722 performs correction according to the infrared parallax image to obtain the target infrared image, and performs correction according to the depth parallax image to obtain the target depth image. Face detection is then performed according to the target infrared image to detect whether a target face area exists in the target infrared image; living body detection is performed on the target face area according to the target depth image; and if the living body detection succeeds, whether the target face area matches the preset face area is detected. The final face verification result is obtained according to the face matching result: if the face matching succeeds, a result of successful face verification is obtained; if the face matching fails, a result of face verification failure is obtained. It can be understood that if the living body detection fails, a result of face verification failure is obtained and the face matching process is not continued.
After obtaining the face verification result, the second processing unit 722 sends it to the target application.
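The control flow that this and the following hardware configuration share can be summarized in the sketch below; the three callables stand in for the detection, liveness and matching stages and are assumptions of the sketch, not the embodiment's actual interfaces.

```python
from typing import Callable

def verify_face(target_infrared, target_depth,
                detect_face: Callable, check_liveness: Callable,
                match_face: Callable) -> bool:
    """Liveness failure short-circuits: matching is never attempted."""
    region = detect_face(target_infrared)          # face detection on IR image
    if region is None:
        return False                               # no target face area found
    if not check_liveness(region, target_depth):   # liveness check on depth
        return False                               # verification fails early
    return match_face(region)                      # match against preset face
```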
Fig. 8 is a hardware configuration diagram for implementing an image processing method in another embodiment. As shown in fig. 8, the hardware structure includes a first processing unit 80, a camera module 82 and a second processing unit 84. The camera module 82 includes a laser camera 820, a floodlight 822, an RGB camera 824 and a laser light 826. The central processing unit may include a CPU core running in the TEE and a CPU core running in the REE; the first processing unit 80 is a DSP processing module within the central processing unit, and the second processing unit 84 is the CPU core running in the TEE. The second processing unit 84 and the first processing unit 80 can be connected through a secure buffer, which ensures security during image transmission. In general, when handling operations with higher security requirements, the central processing unit needs to switch the processor core to execute in the TEE, while operations with lower security requirements can be executed in the REE. In this embodiment, the second processing unit 84 can receive the face verification instruction sent by the upper-layer application, then control the floodlight 822 in the camera module 82 to turn on and collect an infrared image through pulse waves emitted by the PWM module, and control the laser light 826 in the camera module 82 to turn on and collect a speckle image. The camera module 82 can transmit the collected infrared image and speckle image to the first processing unit 80; the first processing unit 80 can calculate a depth image from the speckle image, calculate a depth parallax image from the depth image, and calculate an infrared parallax image from the infrared image. The infrared parallax image and the depth parallax image are then sent to the second processing unit 84. The second processing unit 84 can perform correction according to the infrared parallax image to obtain the target infrared image, and perform correction according to the depth parallax image to obtain the target depth image. The second processing unit 84 performs face detection according to the target infrared image to detect whether a target face area exists in the target infrared image, and performs living body detection on the target face area according to the target depth image; if the living body detection succeeds, whether the target face area matches the preset face area is detected. The final face verification result is obtained according to the face matching result: if the face matching succeeds, a result of successful face verification is obtained; if the face matching fails, a result of face verification failure is obtained. It can be understood that if the living body detection fails, a result of face verification failure is obtained and the face matching process is not continued. After obtaining the face verification result, the second processing unit 84 sends it to the target application.
Fig. 9 is a diagram illustrating a software architecture for implementing an image processing method in one embodiment. As shown in fig. 9, the software architecture includes an application layer 910, an operating system 920 and a secure runtime environment 930. The modules in the secure runtime environment 930 include a first processing unit 931, a camera module 932, a second processing unit 933, an encryption module 934 and the like; the operating system 920 includes a security management module 921, a face management module 922, a camera driver 923 and a camera framework 924; the application layer 910 contains an application program 911. The application program 911 can initiate an image acquisition instruction and send it to the first processing unit 931 for processing. For example, when operations such as payment, unlocking, beautification and augmented reality (AR) are performed by collecting a face image, the application program initiates an image acquisition instruction for collecting the face image. It is to be understood that the image acquisition instruction initiated by the application program 911 may first be sent to the second processing unit 933 and then forwarded by the second processing unit 933 to the first processing unit 931.
After the first processing unit 931 receives the image acquisition instruction, if it determines that the instruction is a face verification instruction for verifying a face, it controls the camera module 932 according to the face verification instruction to collect an infrared image and a speckle image, which the camera module 932 transmits back to the first processing unit 931. The first processing unit 931 calculates a depth image containing depth information from the speckle image, calculates a depth parallax image from the depth image, and calculates an infrared parallax image from the infrared image. The depth parallax image and the infrared parallax image are then transmitted to the second processing unit 933 through a secure transmission channel. The second processing unit 933 corrects the infrared parallax image to obtain the target infrared image, and corrects the depth parallax image to obtain the target depth image. Face detection is then performed according to the target infrared image to detect whether a target face area exists in the target infrared image; living body detection is performed on the target face area according to the target depth image; and if the living body detection succeeds, whether the target face area matches the preset face area is detected. The final face verification result is obtained according to the face matching result: if the face matching succeeds, a result of successful face verification is obtained; if the face matching fails, a result of face verification failure is obtained. It can be understood that if the living body detection fails, a result of face verification failure is obtained and the face matching process is not continued. The face verification result obtained by the second processing unit 933 can be sent to the encryption module 934 and, after being encrypted by the encryption module 934, sent to the security management module 921. Generally, each application program 911 has a corresponding security management module 921; the security management module 921 decrypts the encrypted face verification result and sends the decrypted result to the corresponding face management module 922. The face management module 922 sends the face verification result to the upper-layer application program 911, which performs corresponding operations according to the face verification result.
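Because the security management module must undo what the encryption module did, a symmetric scheme is the natural reading. The following sketch assumes a SHA-256 key derived from the reference image and an XOR cipher; both are illustrative choices rather than the embodiment's actual algorithm.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric: applying the same operation twice restores the plaintext.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_verification_result(result: str, reference_image: bytes) -> bytes:
    # Encryption module side: key derived from the reference image.
    key = hashlib.sha256(reference_image).digest()
    return xor_cipher(result.encode(), key)

def decrypt_verification_result(ciphertext: bytes, reference_image: bytes) -> str:
    # Security management module side: same reference image, same key.
    key = hashlib.sha256(reference_image).digest()
    return xor_cipher(ciphertext, key).decode()
```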
If the image acquisition instruction received by the first processing unit 931 is not a face verification instruction, the first processing unit 931 can control the camera module 932 to collect a speckle image, calculate a depth image from the speckle image, and then obtain a depth parallax image from the depth image. The first processing unit 931 sends the depth parallax image to the camera driver 923 through a non-secure transmission channel; the camera driver 923 corrects the depth parallax image to obtain the target depth image and sends it to the camera framework 924, which in turn sends it to the face management module 922 or the application program 911.
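The branch between the two transmission paths might be summarized as follows; the instruction format and the two channel callables are assumptions of the sketch.

```python
def route_instruction(instruction: dict, process_secure, process_non_secure):
    """Face verification instructions take the secure channel to the second
    processing unit; other acquisition instructions take the non-secure
    channel to the camera driver."""
    if instruction.get("type") == "face_verification":
        return process_secure(instruction)
    return process_non_secure(instruction)
```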
Fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment. As shown in fig. 10, the image processing apparatus 1000 includes a face detection module 1002, a living body detection module 1004, a face matching module 1006, and a face verification module 1008. Wherein:
the face detection module 1002 is configured to obtain a target infrared image and a target depth image, and perform face detection according to the target infrared image to determine a target face region, where the target depth image is used to represent depth information corresponding to the target infrared image.
And a living body detection module 1004, configured to perform living body detection processing on the target face region according to the target depth image.
A face matching module 1006, configured to, if the living body detection is successful, obtain a target face attribute parameter corresponding to the target face area, and perform face matching processing on the target face area according to the target face attribute parameter, to obtain a face matching result.
And the face verification module 1008 is used for obtaining a face verification result according to the face matching result.
The image processing apparatus provided by this embodiment can acquire the target infrared image and the target depth image, and perform face detection according to the target infrared image to obtain the target face area. Living body detection processing is then carried out according to the target depth image; after the living body detection succeeds, the target face attribute parameters of the target face area are acquired and face matching processing is performed according to them, and the final face verification result is obtained according to the face matching result. In this way, during face verification, living body detection is performed according to the depth image and face matching according to the infrared image, which improves the accuracy of face verification.
In one embodiment, the face detection module 1002 is further configured to control the camera module to acquire an infrared image and a depth image when the first processing unit detects a face verification instruction; wherein a time interval between a first time of acquiring the infrared image and a second time of acquiring the depth image is less than a first threshold; and acquiring a target infrared image according to the infrared image, and acquiring a target depth image according to the depth image.
In one embodiment, the face detection module 1002 is further configured to calculate, by the first processing unit, an infrared parallax image according to the infrared image and a depth parallax image according to the depth image; the first processing unit sends the infrared parallax image and the depth parallax image to the second processing unit; and the second processing unit corrects the infrared parallax image to obtain the target infrared image and corrects the depth parallax image to obtain the target depth image.
In one embodiment, the face detection module 1002 is further configured to detect a face region in the target infrared image; and if two or more face regions exist in the target infrared image, taking the face region with the largest region area as the target face region.
In one embodiment, the living body detection module 1004 is further configured to extract a target face depth region corresponding to the target face region in the target depth image, and obtain a target living body attribute parameter according to the target face depth region; and performing living body detection processing according to the target living body attribute parameters.
In one embodiment, the living body detection module 1004 is further configured to extract a target face depth region corresponding to the target face region in the target depth image, and obtain a first target living body attribute parameter according to the target face depth region; acquiring a speckle image, wherein the speckle image is an image formed by irradiating an object with laser speckles collected by a laser camera, and the target depth image is calculated according to the speckle image; extracting a target face speckle region corresponding to the target face region in the speckle image, and acquiring a second target living body attribute parameter according to the target face speckle region; and performing living body detection processing according to the first target living body attribute parameter and the second target living body attribute parameter.
In one embodiment, the face verification module 1008 is further configured to obtain a result of successful face recognition if the face matching is successful; if the face matching fails, obtaining a face recognition failure result; and if the living body detection fails, obtaining a result of face recognition failure.
The division of the modules in the image processing apparatus above is only for illustration; in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of its functions.
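As a rough illustration of this module division, the apparatus could be sketched as a thin wrapper around four injected callables; all names here are hypothetical.

```python
class ImageProcessingApparatus:
    """Minimal sketch of the four-module division described above."""

    def __init__(self, detect_face, detect_liveness, match_face, verify):
        self.face_detection_module = detect_face              # module 1002
        self.living_body_detection_module = detect_liveness   # module 1004
        self.face_matching_module = match_face                # module 1006
        self.face_verification_module = verify                # module 1008

    def process(self, target_infrared, target_depth):
        region = self.face_detection_module(target_infrared)
        if region is None:
            return False
        if not self.living_body_detection_module(region, target_depth):
            return False  # liveness failed: matching is skipped entirely
        match_result = self.face_matching_module(region)
        return self.face_verification_module(match_result)
```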
The embodiment of the application also provides a computer-readable storage medium, namely one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the image processing method provided by the above embodiments.
The embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the image processing method provided by the above embodiments.
Any reference to memory, storage, a database or another medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a target infrared image and a target depth image, and performing face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image, the target depth image is obtained by calculation according to a speckle image and a reference image, the speckle image is an image formed by irradiating an object with laser speckles collected by a laser camera, and the reference image is an image with reference depth information obtained by calibration;
performing living body detection processing on the target face area according to the target depth image;
if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area according to the target face attribute parameters to obtain a face matching result;
obtaining a face verification result according to the face matching result;
acquiring a pixel matrix corresponding to the reference image, and acquiring an encryption key according to the pixel matrix; the reference image is a speckle image acquired by the electronic equipment when the camera module is calibrated; encrypting the face verification result according to an encryption key;
wherein a reference image is stored on a server corresponding to the target application program; after the electronic equipment sends the encrypted face verification result to the server corresponding to the target application program, the server corresponding to the target application program acquires the reference image and decrypts the encrypted face verification result according to the acquired reference image.
2. The method of claim 1, wherein the acquiring the target infrared image and the target depth image comprises:
when the first processing unit detects a face verification instruction, controlling a camera module to acquire an infrared image and a depth image; wherein a time interval between a first time of acquiring the infrared image and a second time of acquiring the depth image is less than a first threshold;
and acquiring a target infrared image according to the infrared image, and acquiring a target depth image according to the depth image.
3. The method of claim 2, wherein the obtaining a target infrared image from the infrared image and a target depth image from the depth image comprises:
the first processing unit calculates to obtain an infrared parallax image according to the infrared image and calculates to obtain a depth parallax image according to the depth image;
the first processing unit sends the infrared parallax image and the depth parallax image to the second processing unit;
and the second processing unit corrects the infrared parallax image to obtain a target infrared image, and corrects the depth parallax image to obtain a target depth image.
4. The method of claim 1, wherein determining a target face region by performing face detection according to the target infrared image comprises:
detecting a face region in the target infrared image;
and if two or more face regions exist in the target infrared image, taking the face region with the largest region area as the target face region.
5. The method of claim 1, wherein the performing the living body detection processing on the target human face region according to the target depth image comprises:
extracting a target face depth area corresponding to the target face area in the target depth image, and acquiring target living body attribute parameters according to the target face depth area;
and performing living body detection processing according to the target living body attribute parameters.
6. The method of claim 1, wherein the performing the living body detection processing on the target human face region according to the target depth image comprises:
extracting a target face depth area corresponding to the target face area in the target depth image, and acquiring a first target living body attribute parameter according to the target face depth area;
acquiring a speckle image;
extracting a target face speckle region corresponding to the target face region in the speckle image, and acquiring a second target living body attribute parameter according to the target face speckle region;
and performing living body detection processing according to the first target living body attribute parameter and the second target living body attribute parameter.
7. The method according to any one of claims 1 to 6, wherein the obtaining a face verification result according to the face matching result comprises:
if the face matching is successful, obtaining a result of successful face recognition;
if the face matching fails, obtaining a face recognition failure result;
the method further comprises the following steps:
and if the living body detection fails, obtaining a result of face recognition failure.
8. An image processing apparatus characterized by comprising:
a face detection module, configured to acquire a target infrared image and a target depth image, and to perform face detection according to the target infrared image to determine a target face area, wherein the target depth image is used for representing depth information corresponding to the target infrared image, the target depth image is obtained by calculation according to a speckle image and a reference image, the speckle image is an image formed by irradiating an object with laser speckles collected by a laser camera, and the reference image is an image with reference depth information obtained by calibration;
the living body detection module is used for carrying out living body detection processing on the target human face area according to the target depth image;
the face matching module is used for acquiring a target face attribute parameter corresponding to the target face area if the living body detection is successful, and performing face matching processing on the target face area according to the target face attribute parameter to obtain a face matching result;
the face verification module is used for obtaining a face verification result according to the face matching result; acquiring a pixel matrix corresponding to the reference image, and acquiring an encryption key according to the pixel matrix; the reference image is a speckle image acquired by the electronic equipment when the camera module is calibrated; encrypting the face verification result according to an encryption key; and storing a reference image on a server corresponding to the target application program, and after the electronic equipment sends the encrypted face verification result to the server corresponding to the target application program, acquiring the reference image by the server corresponding to the target application program, and decrypting the encrypted face verification result according to the acquired reference image.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the image processing method of any of claims 1 to 7.
CN201810403815.2A 2018-04-28 2018-04-28 Image processing method, image processing device, computer-readable storage medium and electronic equipment Active CN108805024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810403815.2A CN108805024B (en) 2018-04-28 2018-04-28 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108805024A CN108805024A (en) 2018-11-13
CN108805024B true CN108805024B (en) 2020-11-24

Family

ID=64093671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810403815.2A Active CN108805024B (en) 2018-04-28 2018-04-28 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108805024B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284597A (en) * 2018-11-22 2019-01-29 北京旷视科技有限公司 A kind of face unlocking method, device, electronic equipment and computer-readable medium
CN109614910B (en) * 2018-12-04 2020-11-20 青岛小鸟看看科技有限公司 Face recognition method and device
CN111310528B (en) * 2018-12-12 2022-08-12 马上消费金融股份有限公司 Image detection method, identity verification method, payment method and payment device
CN109683698B (en) * 2018-12-25 2020-05-22 Oppo广东移动通信有限公司 Payment verification method and device, electronic equipment and computer-readable storage medium
CN111382596A (en) * 2018-12-27 2020-07-07 鸿富锦精密工业(武汉)有限公司 Face recognition method and device and computer storage medium
CN110163097A (en) * 2019-04-16 2019-08-23 深圳壹账通智能科技有限公司 Discrimination method, device, electronic equipment and the storage medium of three-dimensional head portrait true or false
CN110287900B (en) * 2019-06-27 2023-08-01 深圳市商汤科技有限公司 Verification method and verification device
CN110462633B (en) * 2019-06-27 2023-05-26 深圳市汇顶科技股份有限公司 Face recognition method and device and electronic equipment
CN110287672A (en) * 2019-06-27 2019-09-27 深圳市商汤科技有限公司 Verification method and device, electronic equipment and storage medium
CN110659617A (en) * 2019-09-26 2020-01-07 杭州艾芯智能科技有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN112711968A (en) * 2019-10-24 2021-04-27 浙江舜宇智能光学技术有限公司 Face living body detection method and system
CN112861568A (en) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 Authentication method and device, electronic equipment and computer readable storage medium
CN111882324A (en) * 2020-07-24 2020-11-03 南京华捷艾米软件科技有限公司 Face authentication method and system
CN113327348A (en) * 2021-05-08 2021-08-31 宁波盈芯信息科技有限公司 Networking type 3D people face intelligence lock
CN113469036A (en) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN115797995B (en) * 2022-11-18 2023-09-01 北京的卢铭视科技有限公司 Face living body detection method, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1708135A4 (en) * 2004-01-13 2009-04-08 Fujitsu Ltd Authenticator using organism information
CN105516613A (en) * 2015-12-07 2016-04-20 凌云光技术集团有限责任公司 Intelligent exposure method and system based on face recognition
CN107358157A (en) * 2017-06-07 2017-11-17 阿里巴巴集团控股有限公司 A kind of human face in-vivo detection method, device and electronic equipment
CN107451510A (en) * 2016-05-30 2017-12-08 北京旷视科技有限公司 Biopsy method and In vivo detection system
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106533667B (en) * 2016-11-08 2019-07-19 深圳大学 Multistage key generation method and user based on two-beam interference are classified authentication method

Also Published As

Publication number Publication date
CN108805024A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
CN108764052B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108805024B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108804895B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108549867B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US11275927B2 (en) Method and device for processing image, computer readable storage medium and electronic device
US11256903B2 (en) Image processing method, image processing device, computer readable storage medium and electronic device
US11146735B2 (en) Image processing methods and apparatuses, computer readable storage media, and electronic devices
CN108668078B (en) Image processing method, device, computer readable storage medium and electronic equipment
CN108711054B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110248111B (en) Method and device for controlling shooting, electronic equipment and computer-readable storage medium
CN108921903B (en) Camera calibration method, device, computer readable storage medium and electronic equipment
WO2019196684A1 (en) Data transmission method and apparatus, computer readable storage medium, electronic device, and mobile terminal
CN109213610B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108830141A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN108712400B (en) Data transmission method and device, computer readable storage medium and electronic equipment
EP3621294B1 (en) Method and device for image capture, computer readable storage medium and electronic device
WO2019196669A1 (en) Laser-based security verification method and apparatus, and terminal device
CN108881712B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US11308636B2 (en) Method, apparatus, and computer-readable storage medium for obtaining a target image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant