CN108280418A - Spoof recognition method and device for facial images - Google Patents
- Publication number
- CN108280418A CN108280418A CN201810048661.XA CN201810048661A CN108280418A CN 108280418 A CN108280418 A CN 108280418A CN 201810048661 A CN201810048661 A CN 201810048661A CN 108280418 A CN108280418 A CN 108280418A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- face image
- value
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
This disclosure relates to a spoof recognition method and device for facial images. The method includes: obtaining a facial image to be identified; extracting local features of the face in the facial image using a trained first neural network to obtain a local feature value of the facial image; extracting depth features of the face in the facial image using a trained second neural network to obtain a depth feature value of the facial image; fusing the local feature value and the depth feature value to obtain a fusion value of the facial image; and comparing the fusion value with a threshold, and determining the spoof recognition result of the facial image according to the comparison result. The disclosed method achieves high recognition accuracy and good robustness, and can cope with a variety of spoofing attacks.
Description
Technical field
This disclosure relates to the field of image recognition technology, and in particular to a spoof recognition method and device for facial images.
Background technology
Face recognition technology is a biometric technology that performs identity authentication based on a person's facial features. With the continuous development of information science and technology and the wide application of computer technology, many excellent face recognition algorithms have emerged, such as the Fisherface method, local feature analysis, and subspace methods; in particular, after the eigenface method was proposed, face recognition developed significantly. As research on face recognition has deepened, current face recognition algorithms have reached a high level, face recognition has become a mainstream biometric technology, and it has been widely applied in practice, for example in network account login, banking system login, access control, and face payment. Biometric technology closely combines high-tech means such as computing, optics, acoustics, and biosensors with the principles of biostatistics, and uses the intrinsic physiological characteristics of the human body (such as fingerprints, faces, and irises) and behavioral characteristics (such as handwriting, voice, and gait) to authenticate personal identity; that is, it uniquely identifies or verifies an individual using physiological characteristics such as fingerprints, faces, and irises, or behavioral characteristics such as typing rhythm and gait. Because biometric systems are widely used in practical applications including mobile phone authentication and access control, biometric spoofing, or presentation attacks (PA), is becoming a growing threat, in which a forged biometric sample is presented to the biometric system in an attempt to be authenticated. Since the face is the easiest biometric modality to obtain, face spoofing takes many different forms, including print attacks, replay attacks, and 3D masks. Traditional face recognition systems are therefore very fragile against such presentation attacks.
Invention content
To overcome the problems in the related art, the present disclosure provides a spoof recognition method and device for facial images, to solve the problem in conventional methods that spoofed facial images lower face recognition accuracy.
According to one aspect of embodiments of the present disclosure, a spoof recognition method for facial images is provided, the method including:
obtaining a facial image to be identified;
extracting local features of the face in the facial image using a trained first neural network to obtain a local feature value of the facial image;
extracting depth features of the face in the facial image using a trained second neural network to obtain a depth feature value of the facial image;
fusing the local feature value and the depth feature value to obtain a fusion value of the facial image;
comparing the fusion value with a threshold, and determining the spoof recognition result of the facial image according to the comparison result.
In one possible implementation, obtaining the facial image to be identified includes:
obtaining a first image and a second image of the face to be identified, the imaging methods of the first image and the second image being different;
obtaining the facial image to be identified from the first image and the second image.
In one possible implementation, extracting local features of the face in the facial image using the trained first neural network to obtain the local feature value of the facial image includes:
randomly determining sub-regions of the face in the facial image;
extracting local features in the sub-regions using the trained first neural network to obtain sub-region features of the facial image;
obtaining the local feature value of the facial image from the sub-region features.
In one possible implementation, extracting depth features of the face in the facial image using the trained second neural network to obtain the depth feature value of the facial image includes:
determining a depth calculation region of the face in the facial image;
extracting depth features of the depth calculation region using the trained second neural network to obtain the depth feature value of the facial image.
In one possible implementation, comparing the fusion value with the threshold and determining the spoof recognition result of the facial image according to the comparison result includes:
comparing the fusion value with the threshold, and determining, according to the comparison result, whether the facial image is a live-scene image or a spoof image.
According to another aspect of embodiments of the present disclosure, a spoof recognition device for facial images is provided, including:
a facial image acquisition module, for obtaining a facial image to be identified;
a local feature acquisition module, for extracting local features of the face in the facial image using a trained first neural network to obtain the local feature value of the facial image;
a depth feature acquisition module, for extracting depth features of the face in the facial image using a trained second neural network to obtain the depth feature value of the facial image;
a fusion module, for fusing the local feature value and the depth feature value to obtain the fusion value of the facial image;
a recognition result acquisition module, for comparing the fusion value with a threshold and determining the spoof recognition result of the facial image according to the comparison result.
In one possible implementation, the facial image acquisition module includes:
a first image acquisition submodule, for obtaining a first image and a second image of the face to be identified, the imaging methods of the first image and the second image being different;
a second image acquisition submodule, for obtaining the facial image to be identified from the first image and the second image.
In one possible implementation, the local feature acquisition module includes:
a sub-region determination submodule, for randomly determining sub-regions of the face in the facial image;
a sub-region feature acquisition submodule, for extracting local features in the sub-regions using the trained first neural network to obtain the sub-region features of the facial image;
a local feature acquisition submodule, for obtaining the local feature value of the facial image from the sub-region features.
In one possible implementation, the depth feature acquisition module includes:
a depth calculation region submodule, for determining a depth calculation region of the face in the facial image;
a depth feature acquisition submodule, for extracting depth features of the depth calculation region using the trained second neural network to obtain the depth feature value of the facial image.
In one possible implementation, the recognition result acquisition module includes:
a recognition result acquisition submodule, for comparing the fusion value with the threshold and determining, according to the comparison result, whether the facial image is a live-scene image or a spoof image.
According to another aspect of embodiments of the present disclosure, a spoof recognition device for facial images is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the method described in any one of the embodiments of the present disclosure.
According to another aspect of embodiments of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, wherein the computer program instructions, when executed by a processor, enable the processor to execute the method described in any one of the embodiments of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure can include the following beneficial effects: by extracting the local features and depth features of the facial image to be identified, a fused feature value of the facial image to be identified is obtained; after comparing the fused feature value with a threshold, the spoof recognition result of the facial image to be identified is obtained. The disclosed method achieves high recognition accuracy and good robustness, and can cope with a variety of spoofing attacks.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, together with the specification illustrate exemplary embodiments, features, and aspects of the present disclosure, and serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;
Fig. 2 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;
Fig. 3 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;
Fig. 4 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;
Fig. 5 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;
Fig. 6 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment;
Fig. 7 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment;
Fig. 8 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment;
Fig. 9 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment;
Fig. 10 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment.
Specific implementation mode
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings indicate elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" is used here to mean "serving as an example, embodiment, or illustration." Any embodiment described here as "exemplary" should not be construed as preferred over or advantageous compared with other embodiments.
In addition, numerous specific details are given in the detailed description below to better illustrate the present disclosure. Those skilled in the art will understand that the disclosure can equally be practiced without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Existing facial-image recognition methods include:
1. Texture-based identification: the textural features of a face are highly discriminative, so extracting the textural features of a facial image tends to give good classification and identification results. Texture feature extraction methods for images can generally be grouped into four major classes: statistical methods, model-based methods, structural methods, and signal-processing methods.
2. Methods based on capturing facial motion, such as the movement of the eyeballs and lips: the facial motion of the person in the image is extracted to identify whether it is a real face.
3. Techniques based on image quality and reflectivity: illumination and noise information in images is extracted and compared to determine whether it is a real face.
However, the extracted textural features have little correlation across pixel densities and different attack patterns, so it is difficult to extract stable, usable textural features. Methods that capture facial motion have a certain advantage in recognizing static images, but they fail against video or image replay attacks. Methods based on image quality and reflectivity place high demands on the image and are sensitive to noise, which is unfavorable for stable recognition.
Fig. 1 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 1, the spoof recognition method for facial images may include:
Step S10: obtain a facial image to be identified.
Step S20: extract local features of the face in the facial image using a trained first neural network to obtain the local feature value of the facial image.
Step S30: extract depth features of the face in the facial image using a trained second neural network to obtain the depth feature value of the facial image.
Step S40: fuse the local feature value and the depth feature value to obtain the fusion value of the facial image.
Step S50: compare the fusion value with a threshold, and determine the spoof recognition result of the facial image according to the comparison result.
The facial image to be identified may be of various kinds: for example, the facial image of a live person captured on site, or a non-live facial image that may be a spoof image.
Extracting the local features of the face alone is unfavorable for resisting attacks with existing spoof images. The present disclosure divides the face in the facial image to be identified into multiple sub-regions and extracts the local features of each sub-region of the facial image. The present disclosure also extracts the depth features of the whole facial image to be identified, taking the depth of the face itself in the facial image as the reference. After the extracted local features and the global features are fused, the fusion value is compared with a set threshold, and whether the facial image to be identified is a spoof image is determined according to the comparison result, improving the recognition accuracy of spoof images.
Fig. 2 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 2, the difference from the above embodiment is that step S10 includes:
Step S11: obtain a first image and a second image of the face to be identified, the imaging methods of the first image and the second image being different.
Step S12: obtain the facial image to be identified from the first image and the second image.
In one possible implementation, the first image is obtained by infrared imaging and is an infrared-light image, and the second image is obtained by visible-light imaging and is a visible-light image. The first image and the second image are fused to obtain the facial image to be identified. Image fusion is divided into three levels, from low to high: pixel-level fusion, feature-level fusion, and decision-level fusion.
Pixel-level fusion processes the data acquired from the sensors directly to obtain the fused image, and can preserve as much of the original scene data as possible. Feature-level fusion ensures that the features of the different images are retained, such as the heat signature of objects captured in infrared light and the brightness of objects captured in visible light. Decision-level fusion rests mainly on rules reflecting subjective requirements, such as Bayesian methods, Dempster-Shafer evidence theory, and voting methods.
The image obtained after fusion is subjected to face recognition using image recognition technology, including face detection with a trained face recognition neural network, to obtain the facial image.
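The pixel-level fusion of steps S11 and S12 can be sketched as a simple weighted average of the registered infrared and visible-light images. The patent does not fix a particular fusion rule, so the weighting parameter `alpha` here is an illustrative assumption, not part of the described method:

```python
import numpy as np

def fuse_pixel_level(ir_img: np.ndarray, vis_img: np.ndarray,
                     alpha: float = 0.5) -> np.ndarray:
    """Pixel-level fusion of an infrared image and a visible-light image.

    Both inputs are expected as float arrays of identical shape with
    values in [0, 1].  A weighted average is used purely for
    illustration; any pixel-level rule could stand in its place.
    """
    if ir_img.shape != vis_img.shape:
        raise ValueError("images must be registered to the same shape")
    return alpha * ir_img + (1.0 - alpha) * vis_img

# Example: fuse two dummy 4x4 single-channel images
ir = np.full((4, 4), 0.8)
vis = np.full((4, 4), 0.2)
fused = fuse_pixel_level(ir, vis, alpha=0.5)
```

Feature-level and decision-level fusion would instead combine extracted features or per-branch decisions, at the cost of discarding some raw pixel data.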
Fig. 3 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 3, the difference from the above embodiment is that step S20 includes:
Step S21: randomly determine sub-regions of the face in the facial image.
Step S22: extract local features in the sub-regions using the trained first neural network to obtain the sub-region features of the facial image.
Step S23: obtain the local feature value of the facial image from the sub-region features.
In one possible implementation, the sub-regions of the face determined at random in the facial image to be identified may include sub-regions divided according to facial position, such as the eye sub-region, nose sub-region, and mouth sub-region, and may also include sub-regions smaller than each facial part. Local features are extracted in each sub-region using the trained first neural network, for example SIFT (Scale-Invariant Feature Transform) features, LBP (Local Binary Patterns) features, or HOG (Histogram of Oriented Gradients) features. After data processing of the extracted local features of each sub-region, the local feature value of the facial image to be identified is obtained.
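As a minimal sketch of steps S21 and S22, the snippet below samples random sub-regions (patches) from a grayscale face image and computes a basic 3x3 LBP histogram for each, one of the local-feature options the description names. The patch count and size are illustrative assumptions; in the described method a trained CNN, not a hand-coded descriptor, would produce the features:

```python
import numpy as np

def lbp_histogram(patch: np.ndarray) -> np.ndarray:
    """Normalized 256-bin LBP histogram of a grayscale patch."""
    h, w = patch.shape
    # 8-neighbour offsets in clockwise order starting at top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = patch[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if patch[y + dy, x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(len(codes), 1)

def random_patches(face: np.ndarray, n: int, size: int,
                   rng: np.random.Generator):
    """Sample n random square sub-regions from the face image (step S21)."""
    h, w = face.shape
    for _ in range(n):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        yield face[y:y + size, x:x + size]

rng = np.random.default_rng(0)
face = np.ones((32, 32))
patches = list(random_patches(face, n=4, size=8, rng=rng))
hists = [lbp_histogram(p) for p in patches]
```

For step S23, the per-patch descriptors (or CNN features) would then be aggregated, for example by averaging, into a single local feature value for the image.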
Fig. 4 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 4, the difference from the above embodiment is that step S30 includes:
Step S31: determine a depth calculation region of the face in the facial image.
Step S32: extract depth features of the depth calculation region using the trained second neural network to obtain the depth feature value of the facial image.
In one possible implementation, "depth" in this disclosure is the depth of the face region taking some position of the face itself as the reference, rather than depth taken relative to another object, and not the distance from the face region to an external position. Calculating the depth of the facial image does not require computing every region in the image: one approach is to first determine a depth calculation region in the facial image and compute the face depth only within that region. Within the determined depth calculation region, depth features are extracted using the second neural network, and the depth feature value of the facial image is obtained from the extracted depth features.
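The depth branch is typically trained against a face-relative supervision target: a live face gets a non-flat depth map, while a printed or replayed face is assumed to be planar. The sketch below builds such a target; the radial "bump" profile for the live case is purely an illustrative assumption (in practice the map would come from 3D face reconstruction or model fitting, which the patent does not specify):

```python
import numpy as np

def depth_label(face_region: np.ndarray, is_live: bool) -> np.ndarray:
    """Supervision target for the depth branch, relative to the face itself.

    Spoof samples (print/replay) are assumed flat, so their target is an
    all-zero map; live samples get an illustrative radial relief in [0, 1]
    peaking at the region centre.
    """
    h, w = face_region.shape[:2]
    if not is_live:
        return np.zeros((h, w))          # flat plane for spoof samples
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt(((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2)
    return 1.0 - r / r.max()             # centre-peaked relief in [0, 1]

d_live = depth_label(np.zeros((17, 17)), is_live=True)
d_spoof = depth_label(np.zeros((17, 17)), is_live=False)
```

The second neural network would then be trained to regress such maps from the depth calculation region, and its output used as the depth feature.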
Fig. 5 is a flowchart of a spoof recognition method for facial images according to an exemplary embodiment. As shown in Fig. 5, the difference from the above embodiment is that step S50 includes:
Step S51: compare the fusion value with the threshold, and determine, according to the comparison result, whether the facial image is a live-scene image or a spoof image.
In one possible implementation, the local feature value and the depth feature value of the facial image are fused to obtain the fusion value, which is then compared with the set threshold. If the fusion value is higher than the threshold, the facial image to be identified is a spoof image; if the fusion value is below the threshold, the facial image to be identified is a live-scene image.
To better illustrate the disclosed method, the following is one exemplary embodiment of the present disclosure. Fig. 6 is a flowchart of a spoof recognition method for facial images according to another exemplary embodiment. As shown in Fig. 6, the method includes:
Step 1: obtain images using different imaging modes, for example an infrared-light image captured with an infrared camera and a visible-light image captured with a visible-light camera. In practical applications, a binocular camera can be used to capture the infrared-light image and the visible-light image simultaneously.
Step 2: fuse the infrared-light image and the visible-light image to obtain the input image. See the related description in the above embodiment.
Step 3: detect the face in the input image to obtain the facial image for analysis, including performing face recognition in the input image using image recognition technology. See the related description in the above embodiment.
Step 4: input the facial image into two neural networks, processed respectively by the two CNN (Convolutional Neural Network) flows shown as the upper and lower branches in Fig. 6. In Fig. 6, the upper CNN flow processes the local features of the facial image: local patch features are extracted, i.e., after sub-regions are randomly divided in the facial image, the local features of each sub-region are extracted separately. In the lower CNN flow, the facial image is input into a depth-based CNN, and the overall depth features of the facial image are extracted.
As the ways of sensing the environment, and of spoofing it, multiply, identification based on a single extracted feature cannot cover all attacks; therefore, massive data is learned with convolutional neural networks, and massive training data is used to distinguish live-scene samples from spoof samples. For the patch-based CNN, a deep convolutional neural network is trained end to end to learn rich appearance features, and patches extracted at random from the facial image can be used to distinguish live from non-live facial images. For the depth-based CNN, a fully convolutional network (FCN) is trained to estimate the depth of the facial image, under the assumption that print or replay attacks present a flat depth map while a live face has normal facial depth. The two CNNs can thus detect face attacks independently, based on appearance or on depth cues.
Step 5: in the upper CNN flow, the extracted local features of each sub-region are input into the patch-based CNN for processing. In the lower CNN flow, the depth features of the facial image are obtained from the output of the depth-based CNN. The depth features make use of the whole face, describing a live face as a 3D object and a non-live face as a flat plane.
Estimating depth from a single RGB image is a basic problem in computer vision. For facial images, one approach is to treat depth estimation as face reconstruction from one or more images. The present disclosure estimates the depth of live and non-live faces taking the face itself as the depth reference, rather than computing depth as the fixed distance between the face and the camera capturing it.
Step 6: in the upper CNN flow, a liveness score is estimated for each local feature, yielding the local feature score of the facial image. In the lower CNN flow, SVM (Support Vector Machine) classification is applied to the global features to obtain the depth feature score of the facial image.
The upper CNN is trained end to end and assigns a score to each patch extracted at random from the facial image; the average of these scores is then assigned to the facial image. The lower CNN estimates the depth map of the facial image and provides a liveness score for the facial image based on the estimated depth map.
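The two branch scores of step 6 can be sketched as follows. Averaging per-patch scores matches the description of the upper branch; reducing the estimated depth map to a score via its mean magnitude is an illustrative stand-in for the SVM classifier the lower branch actually uses:

```python
import numpy as np

def patch_branch_score(patch_scores) -> float:
    """Image-level score of the upper branch: the mean of the spoof
    scores the patch CNN assigns to each randomly drawn patch."""
    return float(np.mean(patch_scores))

def depth_branch_score(depth_map: np.ndarray) -> float:
    """Turn the estimated depth map into a spoof score.

    A flat (all-zero) map, as assumed for print/replay attacks, yields
    a high spoof score; a map with real facial relief yields a low one.
    Using the mean depth magnitude here is an illustrative choice; the
    description applies an SVM to the depth features instead.
    """
    return float(1.0 - np.clip(depth_map, 0.0, 1.0).mean())
```

With these two scalars in hand, step 7 only needs a weighted fusion and a threshold comparison.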
Step 7: fuse the local feature score with the depth feature score, and compare the fused score with the set threshold.
Step 8: determine, according to the comparison result, whether the facial image is a fake image or a live image captured on site. The fused output of the local feature score and the depth feature score is called the spoof score. If the spoof score is higher than a predefined threshold, the facial image or video clip is classified as a non-live image.
This embodiment proposes a novel method for anti-spoofing in face verification using two convolutional neural networks: one neural network extracts local features, the other extracts depth features, and the depth map obtained independently of the global features is used to verify whether the input image is of a real subject. Building on the two-branch neural network, this embodiment discriminates local features and overall depth features separately, and fuses and compares the discriminant scores of the local and depth features, realizing live-face recognition faster and more easily; liveness is detected in a pre-authentication step, and spoofing attacks such as photos, videos, and 3D masks can be detected effectively.
This embodiment captures images simultaneously with an infrared camera and a visible-light camera, and performs local and depth discrimination with the two-branch neural network; the technique is novel, the recognition accuracy is high, the robustness is good, and a variety of spoofing attacks can be handled.
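Putting steps 1 to 8 together, the whole pipeline can be sketched as one function over pluggable components. All five callables are stand-ins for the components the embodiment describes (image fusion, face detection, the patch CNN branch, and the depth FCN plus SVM branch); none of their interfaces, nor the equal score weighting, are fixed by the text:

```python
def spoof_recognize(ir_img, vis_img, fuse, detect_face,
                    local_branch, depth_branch,
                    threshold: float = 0.5) -> str:
    """End-to-end sketch of the two-branch spoof recognition pipeline.

    fuse         -- step 2: combine infrared and visible-light images
    detect_face  -- step 3: locate the face in the fused input image
    local_branch -- steps 4-6, upper flow: average per-patch spoof score
    depth_branch -- steps 4-6, lower flow: spoof score from depth features
    """
    input_img = fuse(ir_img, vis_img)
    face = detect_face(input_img)
    local_score = local_branch(face)
    depth_score = depth_branch(face)
    spoof_score = 0.5 * local_score + 0.5 * depth_score   # step 7
    return "spoof" if spoof_score > threshold else "live"  # step 8

# Usage with trivial stand-in components
result = spoof_recognize(
    "ir", "vis",
    fuse=lambda a, b: (a, b),
    detect_face=lambda img: img,
    local_branch=lambda f: 0.9,
    depth_branch=lambda f: 0.7,
)
```

In a real deployment each callable would wrap the corresponding trained model; the pipeline structure itself is what the embodiment fixes.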
Fig. 7 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment. As shown in Fig. 7, the spoof recognition device for facial images includes:
a facial image acquisition module 61, for obtaining a facial image to be identified;
a local feature acquisition module 62, for extracting local features of the face in the facial image using a trained first neural network to obtain the local feature value of the facial image;
a depth feature acquisition module 63, for extracting depth features of the face in the facial image using a trained second neural network to obtain the depth feature value of the facial image;
a fusion module 64, for fusing the local feature value and the depth feature value to obtain the fusion value of the facial image;
a recognition result acquisition module 65, for comparing the fusion value with a threshold and determining the spoof recognition result of the facial image according to the comparison result.
Fig. 8 is a block diagram of a spoof recognition device for facial images according to an exemplary embodiment. As shown in Fig. 8, in one possible implementation, the facial image acquisition module 61 includes:
a first image acquisition submodule 611, for obtaining a first image and a second image of the face to be identified, the imaging methods of the first image and the second image being different;
a second image acquisition submodule 612, for obtaining the facial image to be identified from the first image and the second image.
In one possible implementation, the local feature acquisition module 62 includes:
a sub-region determination submodule 621, for randomly determining sub-regions of the face in the facial image;
a sub-region feature acquisition submodule 622, for extracting local features in the sub-regions using the trained first neural network to obtain the sub-region features of the facial image;
a local feature acquisition submodule 623, for obtaining the local feature value of the facial image from the sub-region features.
In one possible implementation, the depth feature acquisition module 63 includes:
a depth calculation region submodule 631, for determining a depth calculation region of the face in the facial image;
a depth feature acquisition submodule 632, for extracting depth features of the depth calculation region using the trained second neural network to obtain the depth feature value of the facial image.
In one possible implementation, the recognition result acquisition module 65 includes:
a recognition result acquisition submodule 651, for comparing the fusion value with the threshold and determining, according to the comparison result, whether the facial image is a live-scene image or a spoof image.
Fig. 9 is a kind of frame of the device 800 of deception identification for face image shown according to an exemplary embodiment
Figure.For example, device 800 can be mobile phone, computer, digital broadcast terminal, messaging devices, game console puts down
Panel device, Medical Devices, body-building equipment, personal digital assistant etc..
Referring to Fig. 9, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the device 800. Examples of such data include instructions for any application or method operated on the device 800, contact data, phonebook data, messages, images, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The power component 806 supplies power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the device 800 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor component 814 may also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of contact between the user and the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-volatile computer-readable storage medium, such as the memory 804 including computer program instructions, which are executable by the processor 820 of the device 800 to perform the above methods.
Fig. 10 is a block diagram of a device 1900 for deception recognition of a face image according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Fig. 10, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The applications stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above methods.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, there is also provided a non-volatile computer-readable storage medium, such as the memory 1932 including computer program instructions, which are executable by the processing component 1922 of the device 1900 to perform the above methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer-readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, and the like, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices, so as to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may, in fact, be executed substantially in parallel, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or their technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
1. A deception recognition method for a face image, characterized in that the method comprises:
obtaining a face image to be identified;
extracting local features of a face in the face image using a trained first neural network to obtain a local feature value of the face image;
extracting depth features of the face in the face image using a trained second neural network to obtain a depth feature value of the face image;
fusing the local feature value and the depth feature value to obtain a fusion value of the face image; and
comparing the fusion value with a threshold, and judging a deception recognition result of the face image according to the comparison result.
2. The method according to claim 1, characterized in that obtaining the face image to be identified comprises:
obtaining a first image and a second image of the face to be identified, the first image and the second image differing in imaging method; and
obtaining the face image to be identified according to the first image and the second image.
3. The method according to claim 1, characterized in that extracting the local features of the face in the face image using the trained first neural network to obtain the local feature value of the face image comprises:
determining sub-regions of the face at random in the face image;
extracting local features in the sub-regions using the trained first neural network to obtain sub-region features of the face image; and
obtaining the local feature value of the face image according to the sub-region features.
4. The method according to claim 1, characterized in that extracting the depth features of the face in the face image using the trained second neural network to obtain the depth feature value of the face image comprises:
determining a depth calculation region of the face in the face image; and
extracting depth features of the depth calculation region using the trained second neural network to obtain the depth feature value of the face image.
5. The method according to claim 1, characterized in that comparing the fusion value with the threshold and judging the deception recognition result of the face image according to the comparison result comprises:
comparing the fusion value with the threshold, and judging, according to the comparison result, whether the face image is a live image or a deception image.
6. A deception recognition device for a face image, characterized by comprising:
a face image acquisition module, configured to obtain a face image to be identified;
a local feature acquisition module, configured to extract local features of a face in the face image using a trained first neural network to obtain a local feature value of the face image;
a depth feature acquisition module, configured to extract depth features of the face in the face image using a trained second neural network to obtain a depth feature value of the face image;
a fusion module, configured to fuse the local feature value and the depth feature value to obtain a fusion value of the face image; and
a recognition result acquisition module, configured to compare the fusion value with a threshold and judge a deception recognition result of the face image according to the comparison result.
7. The device according to claim 6, characterized in that the face image acquisition module includes:
a first image acquisition submodule, configured to obtain a first image and a second image of the face to be identified, the first image and the second image differing in imaging method; and
a second image acquisition submodule, configured to obtain the face image to be identified according to the first image and the second image.
8. The device according to claim 6, characterized in that the local feature acquisition module includes:
a sub-region determination submodule, configured to determine sub-regions of the face at random in the face image;
a sub-region feature acquisition submodule, configured to extract local features in the sub-regions using the trained first neural network to obtain sub-region features of the face image; and
a local feature acquisition submodule, configured to obtain the local feature value of the face image according to the sub-region features.
9. The device according to claim 6, characterized in that the depth feature acquisition module includes:
a depth calculation region submodule, configured to determine a depth calculation region of the face in the face image; and
a depth feature acquisition submodule, configured to extract depth features of the depth calculation region using the trained second neural network to obtain the depth feature value of the face image.
10. The device according to claim 6, characterized in that the recognition result acquisition module includes:
a recognition result acquisition submodule, configured to compare the fusion value with a threshold and judge, according to the comparison result, whether the face image is a live image or a deception image.
11. A deception recognition device for a face image, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 5.
12. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 5.
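The two-image acquisition of claim 2 can be illustrated as follows. This is a hedged sketch only: the claim does not specify the imaging modalities or the combination rule, so a visible-light/infrared pair and simple per-modality normalization with channel stacking are assumed here purely for illustration.

```python
import numpy as np

def acquire_face_image(first_image, second_image):
    """Combine a first image and a second image captured with different
    imaging methods (a visible-light / near-infrared pair is assumed here)
    into a single face image to be identified. Channel stacking is one
    simple combination rule; the claim does not fix a particular one."""
    if first_image.shape != second_image.shape:
        raise ValueError("the two images must cover the same face region")

    def normalize(img):
        # Scale each modality to [0, 1] so neither dominates the stack.
        img = img.astype(float)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)

    # Resulting array has shape (H, W, 2): one channel per imaging method.
    return np.stack([normalize(first_image), normalize(second_image)], axis=-1)
```

The stacked result would then be fed to the recognition pipeline of claim 1 as the "face image to be identified"; combining modalities at the input in this way is one plausible reading of claim 2, not the only one.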
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2017113148674 | 2017-12-12 | ||
CN201711314867 | 2017-12-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108280418A true CN108280418A (en) | 2018-07-13 |
Family
ID=62804076
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810048661.XA Pending CN108280418A (en) | 2017-12-12 | 2018-01-18 | The deception recognition methods of face image and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108280418A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106874871A (en) * | 2017-02-15 | 2017-06-20 | 广东光阵光电科技有限公司 | A kind of recognition methods of living body faces dual camera and identifying device |
CN107358157A (en) * | 2017-06-07 | 2017-11-17 | 阿里巴巴集团控股有限公司 | A kind of human face in-vivo detection method, device and electronic equipment |
CN107368810A (en) * | 2017-07-20 | 2017-11-21 | 北京小米移动软件有限公司 | Method for detecting human face and device |
Non-Patent Citations (1)
Title |
---|
YOUSEF ATOUM,YAOJIE LIU ET AL.: "Face Anti-Spoofing Using Patch and Depth-Based CNNs", 《2017 IEEE INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS》 * |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7026225B2 (en) | 2018-07-27 | 2022-02-25 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Biological detection methods, devices and systems, electronic devices and storage media |
KR102391792B1 (en) * | 2018-07-27 | 2022-04-28 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Biometric detection methods, devices and systems, electronic devices and storage media |
KR20200081450A (en) * | 2018-07-27 | 2020-07-07 | 베이징 센스타임 테크놀로지 디벨롭먼트 컴퍼니 리미티드 | Biometric detection methods, devices and systems, electronic devices and storage media |
US11321575B2 (en) | 2018-07-27 | 2022-05-03 | Beijing Sensetime Technology Development Co., Ltd. | Method, apparatus and system for liveness detection, electronic device, and storage medium |
JP2021503659A (en) * | 2018-07-27 | 2021-02-12 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Biodetection methods, devices and systems, electronic devices and storage media |
WO2020019760A1 (en) * | 2018-07-27 | 2020-01-30 | 北京市商汤科技开发有限公司 | Living body detection method, apparatus and system, and electronic device and storage medium |
CN109034102A (en) * | 2018-08-14 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Human face in-vivo detection method, device, equipment and storage medium |
CN109344747A (en) * | 2018-09-17 | 2019-02-15 | 平安科技(深圳)有限公司 | A kind of recognition methods that distorting figure, storage medium and server |
CN109344747B (en) * | 2018-09-17 | 2024-01-05 | 平安科技(深圳)有限公司 | Tamper graph identification method, storage medium and server |
CN109583375B (en) * | 2018-11-30 | 2021-04-06 | 中山大学 | Multi-feature fusion face image illumination identification method and system |
CN109583375A (en) * | 2018-11-30 | 2019-04-05 | 中山大学 | A kind of the facial image illumination recognition methods and system of multiple features fusion |
WO2020151489A1 (en) * | 2019-01-25 | 2020-07-30 | 杭州海康威视数字技术股份有限公司 | Living body detection method based on facial recognition, and electronic device and storage medium |
US11830230B2 (en) | 2019-01-25 | 2023-11-28 | Hangzhou Hikvision Digital Technology Co., Ltd. | Living body detection method based on facial recognition, and electronic device and storage medium |
CN109948467A (en) * | 2019-02-28 | 2019-06-28 | 中国科学院深圳先进技术研究院 | Method, apparatus, computer equipment and the storage medium of recognition of face |
CN109886244A (en) * | 2019-03-01 | 2019-06-14 | 北京视甄智能科技有限公司 | A kind of recognition of face biopsy method and device |
CN110059579A (en) * | 2019-03-27 | 2019-07-26 | 北京三快在线科技有限公司 | For the method and apparatus of test alive, electronic equipment and storage medium |
CN111767760A (en) * | 2019-04-01 | 2020-10-13 | 北京市商汤科技开发有限公司 | Living body detection method and apparatus, electronic device, and storage medium |
CN110490060B (en) * | 2019-07-10 | 2020-09-11 | 特斯联(北京)科技有限公司 | Security protection front-end video equipment based on machine learning hardware architecture |
CN110490060A (en) * | 2019-07-10 | 2019-11-22 | 特斯联(北京)科技有限公司 | A kind of security protection head end video equipment based on machine learning hardware structure |
CN110414437A (en) * | 2019-07-30 | 2019-11-05 | 上海交通大学 | Face datection analysis method and system are distorted based on convolutional neural networks Model Fusion |
CN113051998A (en) * | 2019-12-27 | 2021-06-29 | 豪威科技股份有限公司 | Robust anti-spoofing technique using polarization cues in near infrared and visible wavelength bands in biometric identification techniques |
CN111666901A (en) * | 2020-06-09 | 2020-09-15 | 创新奇智(北京)科技有限公司 | Living body face detection method and device, electronic equipment and storage medium |
CN112085035A (en) * | 2020-09-14 | 2020-12-15 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable medium |
CN112434647A (en) * | 2020-12-09 | 2021-03-02 | 浙江光珀智能科技有限公司 | Human face living body detection method |
CN112668453A (en) * | 2020-12-24 | 2021-04-16 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
WO2022134418A1 (en) * | 2020-12-24 | 2022-06-30 | 平安科技(深圳)有限公司 | Video recognition method and related device |
CN112668453B (en) * | 2020-12-24 | 2023-11-14 | 平安科技(深圳)有限公司 | Video identification method and related equipment |
CN113450806B (en) * | 2021-05-18 | 2022-08-05 | 合肥讯飞数码科技有限公司 | Training method of voice detection model, and related method, device and equipment |
CN113450806A (en) * | 2021-05-18 | 2021-09-28 | 科大讯飞股份有限公司 | Training method of voice detection model, and related method, device and equipment |
CN113627263A (en) * | 2021-07-13 | 2021-11-09 | 支付宝(杭州)信息技术有限公司 | Exposure method, device and equipment based on face detection |
CN113627263B (en) * | 2021-07-13 | 2023-11-17 | 支付宝(杭州)信息技术有限公司 | Exposure method, device and equipment based on face detection |
CN113537173B (en) * | 2021-09-16 | 2022-03-18 | 中国人民解放军国防科技大学 | Face image authenticity identification method based on face patch mapping |
CN113537173A (en) * | 2021-09-16 | 2021-10-22 | 中国人民解放军国防科技大学 | Face image authenticity identification method based on face patch mapping |
CN113610071A (en) * | 2021-10-11 | 2021-11-05 | 深圳市一心视觉科技有限公司 | Face living body detection method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108280418A (en) | The deception recognition methods of face image and device | |
JP7040952B2 (en) | Face recognition method and equipment | |
US10275672B2 (en) | Method and apparatus for authenticating liveness face, and computer program product thereof | |
CN108197586B (en) | Face recognition method and device | |
CN107438854B (en) | System and method for performing fingerprint-based user authentication using images captured by a mobile device | |
US9652663B2 (en) | Using facial data for device authentication or subject identification | |
KR20190001066A (en) | Face verifying method and apparatus | |
KR20190038594A (en) | Face recognition-based authentication | |
TW201911130A (en) | Method and device for remake image recognition | |
CN106295511B (en) | Face tracking method and device | |
CN109815844A (en) | Object detection method and device, electronic equipment and storage medium | |
EP2863339A2 (en) | Methods and systems for determing user liveness | |
EP3229177A2 (en) | Methods and systems for authenticating users | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN110503023A (en) | Biopsy method and device, electronic equipment and storage medium | |
CN108985176A (en) | image generating method and device | |
CN110298310A (en) | Image processing method and device, electronic equipment and storage medium | |
EP3282390A1 (en) | Methods and systems for determining user liveness and verifying user identities | |
US11074469B2 (en) | Methods and systems for detecting user liveness | |
US20190384965A1 (en) | A method for selecting frames used in face processing | |
CN110287671A (en) | Verification method and device, electronic equipment and storage medium | |
KR20120139100A (en) | Apparatus and method for security management using face recognition | |
CN109934275A (en) | Image processing method and device, electronic equipment and storage medium | |
CN108197585A (en) | Recognition algorithms and device | |
CN110532956A (en) | Image processing method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180713 |