WO2023241817A1 - Authenticating a person - Google Patents

Authenticating a person

Info

Publication number
WO2023241817A1
Authority
WO
WIPO (PCT)
Prior art keywords
biometric
person
representation
occluded
domain
Prior art date
Application number
PCT/EP2022/072025
Other languages
French (fr)
Inventor
Francisco Julián ZAMORA MARTÍNEZ
Imanol SOLANO MARTINEZ
Miguel Ángel SÁNCHEZ YOLDI
Miguel Santos LUPARELLI MATHIEU
Eduardo Azanza Ladrón
Original Assignee
Veridas Digital Authentication Solutions, S.L.
Priority date
Filing date
Publication date
Application filed by Veridas Digital Authentication Solutions, S.L. filed Critical Veridas Digital Authentication Solutions, S.L.
Publication of WO2023241817A1 publication Critical patent/WO2023241817A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/778 Active pattern-learning, e.g. online learning of image or video features
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present disclosure relates to methods of authenticating a person and to authenticator systems and computer programs suitable for performing said authenticator methods and, furthermore, to trainer methods of training said authenticator systems and to trainer systems and computer programs suitable for performing said trainer methods.
  • This problem is not limited to masked faces, but extends to any form of occlusion in a predefined or pre-known region of the face due to, e.g., the presence of a scarf, prominent glasses frames, non-full-face veils, etc.
  • An object of the disclosure is to provide new methods, systems and computer programs aimed at solving at least some of the aforementioned problems.
  • Authenticator methods yet further comprise transforming the first biometric mathematical representation from the first representation domain to the second representation domain, thereby generating a third biometric mathematical representation within the second representation domain.
  • Authenticator methods furthermore comprise authenticating the person depending on a comparison between the second biometric mathematical representation and the third biometric mathematical representation.
  • The first (non-occluded face) image and the first faceprint associated thereto may be part of an enrolment phase, at which the person may be registered in an authenticator system configured to perform authenticator methods according to present disclosure.
  • The first faceprint may be stored as reference faceprint of the person/user for subsequent authentications thereof.
  • Thereafter, the user may be authenticated whenever required, desired or requested by performing the remaining part of the authenticator method, i.e. the operations regarding second (occluded face) image and second faceprint associated thereto, transformation of first faceprint to make it comparable with second faceprint, and final authentication based on such a comparison.
  • Authenticator methods proposed herein may be more efficient than prior art methods with same or similar purpose for the following reasons.
  • Such authenticator methods may permit authenticating a person/user by keeping stored only one reference (non-occluded) faceprint per person/user to authenticate the person/user from presented person’s image face with either occlusion or non-occlusion at authentication time. If non-occluded face is presented at authentication time, faceprint obtained from such a presented non-occluded face may be directly compared to user’s reference (non-occluded) faceprint. If occluded face is presented at authentication time, user’s reference (non-occluded) faceprint may be transformed to make it comparable with faceprint obtained from such a presented occluded face.
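The enrolment-and-authentication flow described above can be sketched as follows. The class name, the cosine metric, the 0.7 threshold and the identity stand-in for the trained domain transformation are all illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class Authenticator:
    """Keeps one occlusion-free reference faceprint per user; at
    authentication time the reference is transformed into the
    occlusion-affected domain only when an occluded face is presented."""

    def __init__(self, transform, threshold=0.7):
        self.transform = transform   # learned domain mapping (placeholder here)
        self.threshold = threshold   # predefined matching threshold (assumed value)
        self.references = {}         # user id -> stored reference faceprint

    def enrol(self, user_id, nonoccluded_faceprint):
        # Enrolment phase: store the occlusion-free faceprint as reference.
        self.references[user_id] = nonoccluded_faceprint

    def authenticate(self, user_id, presented_faceprint, occluded):
        ref = self.references[user_id]
        if occluded:
            # Make the domains comparable before scoring.
            ref = self.transform(ref)
        return cosine_similarity(ref, presented_faceprint) >= self.threshold
```

Note that only one reference faceprint per user is ever stored; the transformation is applied on the fly, matching the storage-efficiency argument above.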
  • occluded faceprint and non-occluded faceprint corresponding to same person’s face are mathematically related or correlated to each other.
  • This is conceptually consistent with the fact that occluded faceprint contains biometric information from same biological entity as non-occluded faceprint because both of them correspond to same person, but in different quantities in the one and the other case due to occlusion and non-occlusion, respectively.
  • occluded faceprint appears to present a certain mathematical bias with respect to non-occluded faceprint with origin in same person, said bias being mathematically derivable and, therefore, computable in vast majority of scenarios, as experimentally confirmed.
  • first representation domain (within which non-occluded faceprints have been confirmed to fall) may be also denominated occlusion-free or bias-free faceprint domain and, similarly, second representation domain (within which occluded faceprints have been confirmed to fall) may be also denominated occlusion-affected or bias-affected faceprint domain. Transformation from occlusion-free or bias-free faceprint domain to occlusion-affected or bias-affected faceprint domain may thus be implemented based on any mathematical technique configurable or trainable to accordingly bias the non-occluded faceprint to produce the occluded faceprint (presumably) corresponding to same person’s face. As explained in detail in other parts of the description, such a mathematical technique may be based on, e.g., machine learning, Gaussian Mixture Model(s), linear transformation(s) with or without non-linear transfer function on top thereof, etc.
  • the transforming of the first faceprint may, e.g., include inputting the first faceprint into a machine learning module trained to perform the transformation from occlusion-free faceprint domain to occlusion-affected faceprint domain.
  • the transforming of the first faceprint from occlusion-free faceprint domain to occlusion-affected faceprint domain may be based on a Gaussian Mixture Model, and/or a linear transformation, and/or applying a non-linear transfer function on top of the linear transformation, etc.
  • a non-linear transfer function may include, e.g., a sigmoid function, or a Rectified Linear Unit function, etc. or any combination thereof.
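As an illustration of the linear-transformation option with an optional transfer function on top, a minimal sketch follows; the function names and the parameters `W` and `b` are assumptions, and a real system would learn them rather than fix them by hand:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified Linear Unit transfer function.
    return np.maximum(0.0, x)

def transform_faceprint(faceprint, W, b, transfer=None):
    """Linear transformation W @ x + b from the occlusion-free to the
    occlusion-affected faceprint domain, optionally with a non-linear
    transfer function (e.g. sigmoid or ReLU) applied on top."""
    z = W @ faceprint + b
    return transfer(z) if transfer is not None else z
```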
  • authenticator methods may further comprise storing the first faceprint as reference faceprint for authentication of the person and, furthermore, retrieving the first faceprint for transforming it from occlusion-free faceprint domain to occlusion-affected faceprint domain and subsequent authentication of the person.
  • authenticator systems are provided for authenticating a person.
  • Authenticator systems comprise a first image unit, a first biometric unit, a second image unit, a second biometric unit, a transformer unit, and an authenticator unit.
  • the first image unit is configured to obtain a first image of the person representing a non-occluded face of the person.
  • the first biometric unit is configured to generate, from the first image, a first faceprint corresponding to the non-occluded face, within an occlusion-free faceprint domain.
  • computing systems for authenticating a person, said computing systems comprising a memory and a processor, embodying instructions stored in the memory and executable by the processor, and the instructions comprising functionality or functionalities to execute authenticator methods, such as those described in other parts of the disclosure.
  • trainer methods are provided for training an authenticator system, such as those described in other parts of the disclosure.
  • Trainer methods comprise obtaining or receiving a training set of non-occluded (or occlusion-free) face images for training the authenticator system, and performing, for each non-occluded (or occlusion-free) face image in whole or part of the training set, a training loop.
  • a training loop includes converting the non-occluded (or occlusion-free) face image into a partially occluded (or occlusion-affected) face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part.
  • By training biometric and transformer units in such a symbiotic manner, very accurate convergence of said units may be achieved towards purposed knowledge on generating occlusion-free and occlusion-affected faceprints and transforming from occlusion-free faceprint domain to occlusion-affected faceprint domain. Even though said knowledges do not exactly correspond to same one, they are somehow related to each other in the sense that biometric units are assumed to output occlusion-free faceprint and occlusion-affected faceprint, respectively, and transformer unit to transform from occlusion-free faceprint domain to occlusion-affected faceprint domain.
  • Biometric and transformer units thus prove to be more effective and efficient as a whole than in prior art training approaches with same or similar purposes, as experimentally confirmed by the inventors.
  • the converter module is configured to convert the non-occluded (or occlusion-free) face image into a partially occluded (or occlusion-affected) face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part.
  • the first biometric trainer module is configured to train the first biometric unit of the authenticator system to generate first faceprint corresponding to the non-occluded (or occlusion-free) face image, within occlusion-free faceprint domain.
  • Figure 1 is a block diagram schematically illustrating authenticator systems for authenticating a person according to examples.
  • Figure 2 is a flow chart schematically illustrating authenticator methods of authenticating a person according to examples.
  • first biometric unit 102, second biometric unit 104 and transformer unit 105 may be implemented in many different manners, as commented in other parts of the disclosure.
  • each of said units 102, 104, 105 may be a machine learning module that may therefore be trainable using machine learning.
  • first biometric unit 102 and second biometric unit 104 may be different sub-modules in same machine learning module in such a way that, in some examples, said sub-modules 102, 104 may share aspects such as, e.g., structure, weights, biases, etc.
  • Transformer unit 105 may be a module configured to perform the transforming of the first faceprint 108 based on a Gaussian Mixture Model, or on a linear transformation, or combination thereof.
  • the linear transformation may further include applying a non-linear transfer function on top of the linear transformation.
  • a non-linear transfer function may include a sigmoid function, or a Rectified Linear Unit function, or a combination thereof.
  • the transformer unit 105 may be, in some examples, a Generative Adversarial Neural Network or a Gaussian Mixture Neural Network.
  • the transformer unit 105 may be configured (e.g., trained) to perform the transformation from the occlusion-free faceprint domain to the occlusion-affected faceprint domain. A similar modular approach may be considered with respect to first and second biometric units 102, 104.
  • the authenticating of the person depending on comparing second and third faceprints may include, e.g., determining a matching score between second faceprint 110 and third faceprint 111 denoting how coincident or similar to each other they are. Once said matching score has been determined, authentication of the person may be performed depending on whether the matching score satisfies or dissatisfies a predefined matching threshold. In particular examples, authentication of the person may be determined successful if the matching score satisfies the predefined matching threshold and, otherwise, authentication of the person may be determined unsuccessful.
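A minimal sketch of the matching-score decision; cosine similarity and the 0.7 threshold are illustrative choices, since the disclosure fixes neither the metric nor the threshold value:

```python
import numpy as np

def matching_score(second_faceprint, third_faceprint):
    """Score denoting how coincident or similar the two faceprints are;
    cosine similarity is one common choice (an assumption here)."""
    a = np.asarray(second_faceprint)
    b = np.asarray(third_faceprint)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(second_faceprint, third_faceprint, threshold=0.7):
    # Authentication succeeds only if the score satisfies the
    # predefined matching threshold.
    return matching_score(second_faceprint, third_faceprint) >= threshold
```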
  • first faceprint 108 may be stored as reference faceprint of the person and, each time authentication of the person is requested, first faceprint 108 may be retrieved and accordingly transformed from occlusion-free faceprint domain to occlusion-affected faceprint domain, so as to make it comparable with second faceprint 110. Then, authentication of the person may be performed by comparing first faceprint 108 transformed into third faceprint 111 and second faceprint 110 to each other. Retrieving of the first faceprint 108 may be performed depending on user credentials or user data uniquely identifying the person received by the authenticator system performing the authentication.
  • Figure 3 is a block diagram schematically illustrating trainer systems for training an authenticator system 100 according to examples.
  • trainer systems 300 may include a training set module 301 and a training loop module 302.
  • a training loop module 302 may include a converter module 303, a first biometric trainer module 304, a second biometric trainer module 305 and a transformer trainer module 306.
  • Authenticator system 100 is partially shown with only those parts thereof that are of interest in this particular example.
  • the training set module 301 may be configured to obtain (or receive or capture) a training set of non-occluded (or occlusion-free) face images for training the authenticator system 100.
  • the training loop module 302 may be configured to perform, for each non-occluded (or occlusion-free) face image in whole or part of the training set, a training loop implemented or implementable by, e.g., modules 303 - 306.
  • Converter module 303 may be configured to convert first or next non-occluded face image into a partially occluded face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part.
  • first or next non-occluded face image refers to either first non-occluded face image in the training set at first iteration of the training loop or next non-occluded face image in the training set at any non-first iteration of the training loop.
  • First biometric trainer module 304 may be configured to train the first biometric unit 102 of the authenticator system 100 to generate first faceprint 108 of the non-occluded face image, within occlusion-free faceprint domain.
  • Second biometric trainer module 305 may be configured to train the second biometric unit 104 of the authenticator system 100 to generate second faceprint 110 of the partially occluded face image, within occlusion-affected faceprint domain.
  • Transformer trainer module 306 may be configured to train the transformer unit 105 (of the authenticator system 100) to transform the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating third faceprint 111 within occlusion-affected faceprint domain.
  • Training of the first biometric unit 102 may include providing, by the first biometric trainer module 304, current (i.e., first or next) non-occluded face image to the first biometric unit 102 for its training to generate first faceprint 108 corresponding to the current non-occluded face image. Training of the first biometric unit 102 may further include providing, by the first biometric trainer module 304, a classifier to classify the outputted first faceprint 108 in terms of having more or less loss (or divergence) with respect to what the first biometric unit 102 is expected to output. This classifier may be denominated, e.g., first biometric classifier and may have necessary knowledge about the non-occluded face images to be processed for that classification aim.
  • Such a first biometric classifier may be trained along with the first biometric unit 102 in cooperative manner according to known machine learning techniques. If such a loss inferred by the first biometric classifier results unacceptable, same current non-occluded face image may be repeatedly processed until inferred loss becomes acceptable. This repetitive approach may cause both first biometric classifier and first biometric unit 102 to converge towards targeted knowledge.
  • Training of the second biometric unit 104 may include providing, by the second biometric trainer module 305, current (i.e., first or next) partially occluded face image (from converter module 303) to the second biometric unit 104 for its training to generate second faceprint 110 corresponding to the current partially occluded face image. Training of the second biometric unit 104 may further include providing, by the second biometric trainer module 305, a classifier to classify the outputted second faceprint 110 in terms of having more or less loss (or divergence) with respect to what the second biometric unit 104 is expected to output. This classifier may be denominated, e.g., second biometric classifier and may have necessary knowledge about the partially occluded face images to be processed for that classification aim.
  • Such a second biometric classifier may be trained along with the second biometric unit 104 in cooperative manner according to known machine learning techniques. If such a loss inferred by the second biometric classifier results unacceptable, same current partially occluded face image may be repeatedly processed until inferred loss becomes acceptable. This repetitive approach may cause both second biometric classifier and second biometric unit 104 to converge towards targeted knowledge.
  • Training of the transformer unit 105 may include providing, by the transformer trainer module 306, the first faceprint 108 outputted by the first biometric unit 102 to the transformer unit 105 for its training to output third faceprint 111 as a conversion of the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain.
  • Training of the transformer unit 105 may further include classifying, by the transformer trainer module 306, the outputted third faceprint 111 in terms of having more or less loss (or divergence) with respect to the second faceprint 110 outputted by the second biometric unit 104. It is purposed that third faceprint 111 and second faceprint 110 should converge to zero or minimum loss or divergence (between each other) because it is presumed that third faceprint 111 and second faceprint 110 correspond to same person. If such a loss inferred by the transformer trainer module 306 results unacceptable, same first faceprint 108 and second faceprint 110 may be repeatedly processed in this manner until inferred loss becomes acceptable. This repetitive approach may cause transformer unit 105 to converge towards targeted knowledge.
  • Global loss may be calculated periodically (e.g. at completion of each iteration or several iterations of the training loop) depending on loss attributed to first biometric unit 102, loss attributed to second biometric unit 104 and loss attributed to transformer unit 105.
  • Such a calculation of the global loss may be, e.g., a weighted calculation in which loss attributed to first biometric unit 102, loss attributed to second biometric unit 104 and loss attributed to transformer unit 105 may have different weights. These different weights may depend on, e.g., experimental parameters or results determined over whole training cycle or cycles over time.
  • This global loss may accurately indicate whether whole system conformed by first and second biometric units 102, 104 and transformer unit 105 is converging or not (in expected manner) to targeted knowledge, at what pace, how accurately, etc. over training cycles.
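The weighted global-loss calculation can be sketched as below; the equal default weights are placeholders for the experimentally determined values mentioned above:

```python
def global_loss(first_loss, second_loss, transformer_loss,
                weights=(1.0, 1.0, 1.0)):
    """Weighted combination of the loss attributed to the first biometric
    unit, the second biometric unit and the transformer unit.  The weights
    are experimental parameters; the defaults here are placeholders."""
    w1, w2, w3 = weights
    return w1 * first_loss + w2 * second_loss + w3 * transformer_loss
```

Tracking this scalar at the end of each iteration (or every few iterations) gives a single signal of whether the three units are jointly converging, and at what pace.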
  • Transformer unit 105 may thus be trained (as, e.g., explained with respect to Figure 3) in such a way that the authenticator unit 106 of the authenticator system 100 determines that second faceprint 110 and third faceprint 111 correspond to same person. Converting the non-occluded face image into partially occluded face image may be performed by the converter module 303 by setting the predefined or pre-known face part that is to be occluded into a uniform colour in such a way that said uniform colour introduces a constant noise for all face images that are processable by the authenticator system 100.
  • This uniform colour may be, e.g., black colour.
  • the converter module 303 may convert the non-occluded face image into the partially occluded face image by cropping the non-occluded face image to remove therefrom the predefined or pre-known face part that is to be occluded. Further alternatively, the converter module 303 may perform such a conversion by combining aforementioned uniform colour introduction and cropping-based removal.
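Both conversion options (uniform-colour painting and cropping-based removal) can be sketched as follows; the function name, the `(top, bottom, left, right)` region convention and row-wise cropping are illustrative assumptions:

```python
import numpy as np

def occlude(image, region, mode="uniform", colour=0):
    """Convert a non-occluded face image into a partially occluded one.

    region gives (top, bottom, left, right) bounds of the predefined or
    pre-known face part to occlude.  Mode "uniform" paints that part a
    constant colour (a constant noise for all processed images; black = 0),
    while mode "crop" removes the rows of that part instead.
    """
    top, bottom, left, right = region
    if mode == "uniform":
        out = image.copy()
        out[top:bottom, left:right] = colour
        return out
    if mode == "crop":
        return np.delete(image, np.arange(top, bottom), axis=0)
    raise ValueError(f"unknown mode: {mode}")
```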
  • trainer systems according to Figure 3 may permit training biometric and transformer units 102, 104, 105 in a very interrelated manner, such that said units 102, 104, 105 may result much more effective and efficient than in prior art training approaches with same or similar purposes, as experimentally confirmed by inventors.
  • Converter module 303 as explained so far is appropriate to train authenticator systems 100 to compute positive cases, i.e. successful authentications, because it ensures that both occluded and non-occluded face images correspond to same person.
  • converter module 303 may be further configured to add some biometric distortion in non-occluded part of the partially occluded face images so as to force negative cases, i.e. unsuccessful authentications.
  • In an alternative, non-conversion-based approach, partially occluded face images may correspond to a different person from that of the non-occluded face image paired with the pertinent partially occluded one within training pairs of face images.
  • Converter module 303 may not be used in this non-conversion-based approach. In some examples, such conversion-based and non-conversion-based approaches may be combined to train authenticator systems 100 to compute negative cases.
  • “Module” or “unit” may be understood to refer to software, firmware, hardware and/or various combinations thereof. It is noted that the modules are exemplary. The modules may be combined, integrated, separated, and/or duplicated to support various applications. Also, a function described herein as being performed by a particular module may be performed by one or more other modules and/or by one or more other devices instead of or in addition to the function performed by the described particular module.
  • the modules may be implemented across multiple devices, associated or linked to corresponding methods of authenticating a person proposed herein, and/or to other components that may be local or remote to one another. Additionally, the modules may be moved from one device and added to another device, and/or may be included in both devices, associated to corresponding methods of authenticating a person proposed herein. Any software implementations may be tangibly embodied in one or more storage media, such as e.g. a memory device, a floppy disk, a compact disk (CD), a digital versatile disk (DVD), or other devices that may store computer code.
  • the methods of authenticating a person according to present disclosure may be implemented by computing means, electronic means or a combination thereof.
  • a controller of the system may be, for example, a CPLD (Complex Programmable Logic Device), an FPGA (Field Programmable Gate Array) or an ASIC (Application-Specific Integrated Circuit).
  • the computing means may be a set of instructions (e.g. a computer program) and the electronic means may be any electronic circuit capable of implementing corresponding steps of the methods of authenticating a person proposed herein, such as those described with reference to other figures.
  • the computer program(s) may be embodied on a storage medium (for example, a CD-ROM, a DVD, a USB drive, a computer memory or a read-only memory) or carried on a carrier signal (for example, on an electrical or optical carrier signal).
  • the computer program(s) may be in the form of source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other form suitable for use in implementing the methods of authenticating a person according to present disclosure.
  • the carrier may be any entity or device capable of carrying the computer program(s).
  • the carrier may be constituted by such cable or other device or means.
  • the carrier may be an integrated circuit in which the computer program(s) is/are embedded, the integrated circuit being adapted for performing, or for use in the performance of, the methods of authenticating a person proposed herein.
  • Authenticator methods may further include (e.g. at block 202) generating, from the first image 107, a first faceprint 108 of the non-occluded face, within an occlusion-free faceprint domain.
  • This functionality implemented or implementable at/by block 202 may be performed by e.g. first biometric unit 102 previously described with reference to Figure 1. Functional details and considerations explained about said first biometric unit 102 may thus be similarly attributed or attributable to method block 202.
  • Authenticator methods may still further include (e.g. at block 203) obtaining a second image 109 of the person representing a partially occluded face of the person, the partially occluded face having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part.
  • This functionality implemented or implementable at/by block 203 may be performed by e.g. second image unit 103 previously described with reference to Figure 1. Functional details and considerations explained about said second image unit 103 may thus be similarly attributed or attributable to method block 203.
  • Authenticator methods may yet further include (e.g. at block 204) generating, from the second image 109, a second faceprint 110 of the non-occluded face part, within an occlusion-affected faceprint domain.
  • This functionality implemented or implementable at/by block 204 may be performed by e.g. second biometric unit 104 previously described with reference to Figure 1. Functional details and considerations explained about said second biometric unit 104 may thus be similarly attributed or attributable to method block 204.
  • Authenticator methods may still furthermore include (e.g. at block 206) authenticating the person depending on a comparison between the second faceprint 110 and the third faceprint 111.
  • This functionality implemented or implementable at/by block 206 may be performed by e.g. authenticator unit 106 previously described with reference to Figure 1. Functional details and considerations explained about said authenticator unit 106 may thus be similarly attributed or attributable to method block 206.
  • Authenticator methods may terminate (e.g. at block 207) when an ending condition is detected such as e.g. once authentication has been (either successfully or unsuccessfully) completed, under reception of a user termination request, under shutdown or deactivation of the authenticator system 100 performing the method, etc.
  • Figure 4 is a flow chart schematically illustrating methods of training an authenticator system (or trainer methods) according to examples.
  • trainer methods may be initiated (e.g. at block 400) upon detection of a starting condition such as e.g. a request for starting the method or an invocation of the method from user interface or the like. Since trainer methods according to Figure 4 are performable by trainer systems according to Figure 3, and since said trainer systems are purposed to train authenticator systems according to Figure 1, number references from Figures 1 and 3 may be reused in following description of Figure 4.
  • Trainer methods may further include (e.g. at block 401) obtaining or receiving a training set of non-occluded face images for training the authenticator system 100.
  • This functionality implemented or implementable at/by block 401 may be performed by e.g. training set module 301 previously described with reference to Figure 3. Functional details and considerations explained about said training set module 301 may thus be similarly attributed or attributable to method block 401.
  • trainer methods may further include performing, for each non-occluded face image in whole or part of the training set, a training loop that may include, e.g., method blocks 402 - 406 which are explained below.
  • This training loop may be performed by e.g. training loop module 302 previously described with reference to Figure 3. Functional details and considerations explained about said training loop module 302 may thus be similarly attributed or attributable to such a training loop.
  • Trainer methods may furthermore include (e.g. at block 404) training the second biometric unit 104 of the authenticator system 100 to generate second faceprint 110 of the partially occluded face image, within occlusion-affected faceprint domain.
  • This functionality implemented or implementable at/by block 404 may be performed by e.g. second biometric trainer module 305 previously described with reference to Figure 3. Functional details and considerations explained about said second biometric trainer module 305 may thus be similarly attributed or attributable to method block 404.
  • Trainer methods may still furthermore include (e.g. at block 405) training the transformer unit 105 of the authenticator system 100 to transform the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating third faceprint 111 within occlusion-affected faceprint domain.
  • Authenticator methods and systems according to present disclosure may be used in many applications of, e.g., managing 1:N physical accesses, managing 1:1 physical accesses, managing identity accesses, etc.
  • Managing 1:N physical accesses may refer to monitoring/controlling accesses by many people to, e.g., a railway station, a sport event/infrastructure, etc.
  • Cardinality 1:N refers to authentication of one person (1) versus many possible people (N), the one person represented by a faceprint generated at authentication time and the many people represented by stored reference faceprints.
  • Authenticating people in this scenario using authenticator methods and systems disclosed herein may reduce the cost of data storage and data communication (cost and technology benefits). This may also permit changing a semi non-cooperative situation (e.g. a user wearing a scarf) into a cooperative situation (security benefits) without introducing friction or increasing the (human) cost of assisting the authentication or identification process (user experience benefits).
  • A manager system should ideally support working with partially occluded or non-occluded faces, for reasons of convenience and user experience.
  • Reference faceprints in the pertinent DB may have been generated via selfies of non-occluded faces.
  • Access terminals may need to download the database for fast response with minimum latency.
  • With authenticator methods and systems according to present disclosure, it may suffice to download a database with only 100k faceprints to support accesses with and without face occlusion. Otherwise, downloading 200k faceprints would be required: one occlusion-free faceprint and one occlusion-affected faceprint per person.
  • Downloading 100k faceprints may require 200MB, versus the 400MB that 200k faceprints would require without authentication solutions according to present disclosure.
  • A 2D or printed credential should ideally be kept as small as possible but, at the same time, it may be used for different security levels. With authenticator methods and systems disclosed herein, it may be possible to reuse the same 2D or printed credential with a reference faceprint of a non-occluded face represented or printed thereon, such that authentication may be performed against a faceprint of either a partially occluded face or a non-occluded face generated at authentication time. Otherwise, the 2D or printed credential would need to be twice as large in order to contain two reference faceprints, one corresponding to the non-occluded face and another corresponding to the partially occluded face.
  • Identity and Access Management (IAM) may benefit greatly from authenticator methods and systems disclosed herein when users/persons to be authenticated have some kind of disability (e.g., Amyotrophic Lateral Sclerosis, ALS) that complicates or prevents face mask manipulation.
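The storage savings in the 1:N example above follow from simple arithmetic. The sketch below assumes, purely for illustration, that each faceprint is a 512-dimensional float32 embedding (the disclosure does not fix a faceprint size); at roughly 2 KB per faceprint, 100k single reference faceprints occupy about 200MB, versus about 400MB for two faceprints per person:

```python
# Back-of-envelope storage estimate for the 1:N access scenario.
# ASSUMPTION: each faceprint is a 512-dimensional float32 embedding;
# the actual faceprint size is not specified by the disclosure.
DIM = 512              # embedding dimensionality (assumed)
BYTES_PER_VALUE = 4    # float32
PEOPLE = 100_000       # registered persons in the database

faceprint_bytes = DIM * BYTES_PER_VALUE                     # 2048 bytes ~= 2 KB
one_per_person_mb = PEOPLE * faceprint_bytes / 1_000_000    # ~200 MB
two_per_person_mb = 2 * one_per_person_mb                   # ~400 MB
```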

Abstract

Authenticator methods are provided for authenticating a person, said methods comprising: obtaining a first image representing the person's non-occluded face; generating, from the first image, a first biometric mathematical representation of the non-occluded face within a first representation domain; obtaining a second image representing the person's partially occluded face having a non-occluded face part and an occluded face part, each of said parts corresponding to a predefined face part; generating, from the second image, a second biometric mathematical representation of the non-occluded face part within a second representation domain; transforming the first biometric mathematical representation from the first representation domain to the second representation domain; and authenticating the person depending on a comparison between the second biometric mathematical representation and the transformed first biometric mathematical representation. Authenticator systems and computer programs suitable for performing authenticator methods are also provided. Trainer methods for training authenticator systems are also provided, along with trainer systems and computer programs suitable for performing said trainer methods.

Description

AUTHENTICATING A PERSON
This application claims the benefit of European Patent Application EP22382577.9 filed 15 June 2022.
The present disclosure relates to methods of authenticating a person and to authenticator systems and computer programs suitable for performing said authenticator methods and, furthermore, to trainer methods of training said authenticator systems and to trainer systems and computer programs suitable for performing said trainer methods.
BACKGROUND
Due to the COVID-19 pandemic and the need to use face masks, the performance of face recognition systems has been adversely affected. In most cases, the ability of these models to correctly identify each subject has been biased by the lack of information corresponding to the bottom half of the face. This situation affects systems where facial biometry is used for user enrolment, authentication, forensics, etc. Face biometry is used for verification that two faces come from the same person and, in this case, the system may reject bona fide cases for lack of biometric details. It is also used for identification of a probe among a database of registered faces, where occlusion increases the false negative probability. To reduce the impact of these problems, systems must lower security levels and/or implement new systems that are less sensitive to occlusion. The problem is not limited to masked faces but extends to any form of occlusion in a predefined or pre-known region of the face due to, e.g., the presence of a scarf, prominent glasses frames, non-full-face veils, etc.
An object of the disclosure is to provide new methods, systems and computer programs aimed at solving at least some of the aforementioned problems.
SUMMARY
In an aspect, methods of authenticating a person are provided. Such methods (also denominated authenticator methods herein) comprise obtaining a first image of the person representing a non-occluded face of the person and generating, from said first image, a first biometric mathematical representation of the non-occluded face, within a first representation domain. Authenticator methods further comprise obtaining a second image of the person representing a partially occluded face of the person having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. Authenticator methods still further comprise generating, from the second image, a second biometric mathematical representation of the non-occluded face part, within a second representation domain. Authenticator methods yet further comprise transforming the first biometric mathematical representation from the first representation domain to the second representation domain, thereby generating a third biometric mathematical representation within the second representation domain. Authenticator methods furthermore comprise authenticating the person depending on a comparison between the second biometric mathematical representation and the third biometric mathematical representation.
The term “biometric mathematical representation” of a person’s face may be herein also denominated faceprint in the sense it corresponds to a digitally recorded representation of a person's face that may be used for authentication purposes because it is as individual as a fingerprint.
In these authenticator methods, operations regarding first (non-occluded face) image and first faceprint associated thereto may be part of an enrolment phase, at which the person may be registered in authenticator system configured to perform authenticator methods according to present disclosure. Once such a registration has been completed, first faceprint may be stored as reference faceprint of the person/user for subsequent authentications thereof. Then, the user may be authenticated whenever required or desired or requested by performing the remaining part of the authenticator method, i.e. operations regarding second (occluded face) image and second faceprint associated thereto, transformation of first faceprint to make it comparable with second faceprint, and final authentication based on such a comparison.
Authenticator methods proposed herein may be more efficient than prior art methods with the same or similar purpose for the following reasons. Such authenticator methods may permit authenticating a person/user while keeping stored only one reference (non-occluded) faceprint per person/user, whether the face image presented at authentication time shows occlusion or not. If a non-occluded face is presented at authentication time, the faceprint obtained from it may be directly compared to the user's reference (non-occluded) faceprint. If an occluded face is presented at authentication time, the user's reference (non-occluded) faceprint may be transformed to make it comparable with the faceprint obtained from the presented occluded face. This way, far fewer reference faceprints need to be stored versus, e.g., approaches based on storing two reference faceprints per user, one corresponding to the non-occluded user's face and another corresponding to the occluded user's face. Storing a single reference faceprint per user/person saves data storage space and reduces computational cost with respect to using two reference faceprints, one for the non-occluded face and another for the partially occluded face. Thus, authentication methods according to present disclosure allow authenticating a user/person using only an occluded face image presented at authentication time, without needing to ask the person to present a non-occluded face image (at authentication time) and without requiring storage of two reference faceprints.
Moreover, on top of authenticator methods according to present disclosure, a distinguisher function may detect whether the second image represents an occluded or non-occluded face. If the second image represents no face occlusion, authentication may be performed based on biometric comparison between faceprints corresponding to non-occluded faces, i.e. within the occlusion-free faceprint domain, using prior art techniques or methods. Otherwise (the second image represents partial face occlusion), the occluded face part in the second image may be detected and whether it corresponds to the predefined or pre-known face part may be determined. In this case, any of the authenticator methods according to present disclosure may be used to perform the authentication, i.e. based on comparing the first faceprint transformed into the third faceprint and the second faceprint, i.e. within the occlusion-affected faceprint domain.
With the aforementioned distinguisher function, all possible scenarios may be covered based on comparison between either faceprints corresponding to non-occluded faces or faceprints corresponding to partially occluded faces. This permits avoiding rigidly requesting the user/person to capture and present his/her face photo with either occlusion or no occlusion. This freedom of action provides the user/person to be authenticated with flexibility that is very valuable from the user's perspective.
Experiments carried out by the inventors have surprisingly revealed that the occluded faceprint and the non-occluded faceprint corresponding to the same person's face are mathematically related or correlated to each other. This is conceptually consistent with the fact that the occluded faceprint contains biometric information from the same biological entity as the non-occluded faceprint, because both correspond to the same person, but in different quantities in the one and the other case due to occlusion and non-occlusion, respectively. In particular, the occluded faceprint appears to present a certain mathematical bias with respect to the non-occluded faceprint originating from the same person, said bias being mathematically derivable and, therefore, computable in the vast majority of scenarios, as experimentally confirmed. According to the above principles, the aforementioned first representation domain (within which non-occluded faceprints have been confirmed to fall) may also be denominated occlusion-free or bias-free faceprint domain and, similarly, the second representation domain (within which occluded faceprints have been confirmed to fall) may also be denominated occlusion-affected or bias-affected faceprint domain. Transformation from the occlusion-free or bias-free faceprint domain to the occlusion-affected or bias-affected faceprint domain may thus be implemented based on any mathematical technique configurable or trainable to accordingly bias the non-occluded faceprint to produce the occluded faceprint (presumably) corresponding to the same person's face. As explained in detail in other parts of the description, such a mathematical technique may be based on, e.g., machine learning, Gaussian Mixture Model(s), linear transformation(s) with or without a non-linear transfer function on top thereof, etc.
According to some implementations, the transforming of the first faceprint may, e.g., include inputting the first faceprint into a machine learning module trained to perform the transformation from the occlusion-free faceprint domain to the occlusion-affected faceprint domain. Additionally or alternatively, the transforming of the first faceprint from the occlusion-free faceprint domain to the occlusion-affected faceprint domain may be based on a Gaussian Mixture Model, and/or a linear transformation, and/or applying a non-linear transfer function on top of the linear transformation, etc. Such a non-linear transfer function may include, e.g., a sigmoid function, or a Rectified Linear Unit function, etc. or any combination thereof.
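As a minimal sketch of one of the options above (a linear transformation with a sigmoid transfer function on top), the following hypothetical code maps a first faceprint into the occlusion-affected domain; the weight matrix `W` and bias `b` stand for parameters that would have to be learned, and are random here purely for illustration:

```python
import numpy as np

def transform_faceprint(first_faceprint, W, b):
    """Linear transformation followed by a sigmoid transfer function.
    W and b are assumed to be pre-learned parameters (hypothetical)."""
    z = W @ first_faceprint + b        # linear transformation
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid on top -> values in (0, 1)

# Toy usage with random (untrained) parameters:
rng = np.random.default_rng(0)
d = 8                                  # toy faceprint dimensionality
first = rng.normal(size=d)             # occlusion-free faceprint
W, b = rng.normal(size=(d, d)), rng.normal(size=d)
third = transform_faceprint(first, W, b)   # occlusion-affected domain
```

A Rectified Linear Unit could be substituted for the sigmoid in the same slot, and a Gaussian Mixture Model would replace the linear map altogether.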
In some examples, the authenticating of the person depending on the comparison may include, e.g., determining a matching score between the second faceprint and the third faceprint, and authenticating the person depending on whether the determined matching score satisfies or dissatisfies a predefined matching threshold. In particular, authentication of the person may be determined successful if the matching score satisfies the predefined matching threshold and, otherwise, the authenticating of the person may be determined unsuccessful.
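The comparison step described above can be sketched with cosine similarity as the matching score; both the score function and the 0.7 threshold are illustrative assumptions, not values prescribed by the disclosure:

```python
import numpy as np

def matching_score(second_faceprint, third_faceprint):
    """Cosine similarity between two faceprints (one possible score)."""
    a = np.asarray(second_faceprint, dtype=float)
    b = np.asarray(third_faceprint, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(second_faceprint, third_faceprint, threshold=0.7):
    # Authentication succeeds iff the score satisfies the threshold.
    # The 0.7 value is illustrative; a deployment would calibrate it.
    return matching_score(second_faceprint, third_faceprint) >= threshold
```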
In implementations, authenticator methods may further comprise storing the first faceprint as reference faceprint for authentication of the person and, furthermore, retrieving the first faceprint for transforming it from occlusion-free faceprint domain to occlusion-affected faceprint domain and subsequent authentication of the person.
According to examples, authenticator methods may further comprise receiving user credentials or any user data uniquely identifying the person in such a way that the retrieving of the first faceprint may be performed depending on said user credentials or user data uniquely identifying the person. Such user data uniquely identifying the person/user to be authenticated may have any suitable form or format such as, e.g., a QR code or any one or two dimensional code or the like available in prior art.
In a further aspect, authenticator systems are provided for authenticating a person. Authenticator systems comprise a first image unit, a first biometric unit, a second image unit, a second biometric unit, a transformer unit, and an authenticator unit. The first image unit is configured to obtain a first image of the person representing a non-occluded face of the person. The first biometric unit is configured to generate, from the first image, a first faceprint corresponding to the non-occluded face, within an occlusion-free faceprint domain. The second image unit is configured to obtain a second image of the person representing a partially occluded face of the person having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. The second biometric unit is configured to generate, from the second image, a second faceprint corresponding to the non-occluded face part, within an occlusion-affected faceprint domain. The transformer unit is configured to transform the first faceprint from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating a third faceprint within occlusion-affected faceprint domain. The authenticator unit is configured to authenticate the person depending on a comparison between the second faceprint and the third faceprint, both within occlusion-affected faceprint domain.
In a still further aspect, computer programs are provided comprising program instructions for causing a system or computing system to perform methods of authenticating a person, such as those described in other parts of the disclosure. These computer programs may be embodied on a storage medium and/or carried on a carrier signal.
In a yet further aspect, computing systems are provided for authenticating a person, said computing systems comprising a memory and a processor, embodying instructions stored in the memory and executable by the processor, and the instructions comprising functionality or functionalities to execute authenticator methods, such as those described in other parts of the disclosure.
In a furthermore aspect, trainer methods are provided for training an authenticator system, such as those described in other parts of the disclosure. Trainer methods comprise obtaining or receiving a training set of non-occluded (or occlusion-free) face images for training the authenticator system, and performing, for each non-occluded (or occlusion-free) face image in whole or part of the training set, a training loop. Such a training loop includes converting the non-occluded (or occlusion-free) face image into a partially occluded (or occlusion-affected) face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. The training loop further includes training the first biometric unit of the authenticator system to generate a first faceprint corresponding to the non-occluded (or occlusion-free) face image, within the occlusion-free faceprint domain and, moreover, training the second biometric unit of the authenticator system to generate a second faceprint corresponding to the partially occluded (or occlusion-affected) face image, within the occlusion-affected faceprint domain. The training loop still further includes training the transformer unit of the authenticator system to transform the first faceprint from the occlusion-free faceprint domain to the occlusion-affected faceprint domain, thereby generating a third faceprint within the occlusion-affected faceprint domain.
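The training loop just described can be summarised structurally as follows; the module interfaces (`first_biometric`, `second_biometric`, `transformer`, `occlude`, `optimizer_step`) are hypothetical names for the units and steps named above, not an implementation taken from the disclosure:

```python
def train(authenticator, training_set, occlude, optimizer_step):
    """One pass of the trainer method over non-occluded face images."""
    for face_image in training_set:                       # occlusion-free images
        occluded = occlude(face_image)                    # convert: add occlusion
        first = authenticator.first_biometric(face_image)   # occlusion-free domain
        second = authenticator.second_biometric(occluded)   # occlusion-affected domain
        third = authenticator.transformer(first)            # domain transformation
        # For same-person pairs the optimizer pulls `third` towards `second`.
        optimizer_step(first, second, third)
```

Because all three units are updated from the same image in the same iteration, the sketch reflects the interrelated training that the following paragraph credits for the units' convergence.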
Suggested trainer methods may permit training both (first and second) biometric units and the transformer unit using the same training data, within the same training cycles and very much centred on the operational connections between them, such that very accurate performance alignment may be achieved between said units. Connections between the biometric and transformer units refer to, e.g., the facts that the transformer unit's input is the first biometric unit's output, that the transformer unit's output is targeted to correspond to the second biometric unit's output when their origin is the same person's face, that the transformer unit's output is purposed to diverge from the second biometric unit's output when their origin is not the same person's face, etc. By training the biometric and transformer units in such a symbiotic manner, very accurate convergence of said units may be achieved towards the purposed knowledge on generating occlusion-free and occlusion-affected faceprints and transforming from the occlusion-free faceprint domain to the occlusion-affected faceprint domain. Even though said knowledge does not exactly correspond to the same one, it is somehow related in the sense that the biometric units are assumed to output an occlusion-free faceprint and an occlusion-affected faceprint, respectively, and the transformer unit to transform from the occlusion-free faceprint domain to the occlusion-affected faceprint domain. Once trained in such an interrelated manner, the biometric and transformer units turn out to be more effective and efficient as a whole than in prior art training approaches with the same or similar purposes, as experimentally confirmed by the inventors.
In a still furthermore aspect, trainer systems are provided for training an authenticator system, such as those described in other parts of the disclosure. Trainer systems comprise a training set module and a training loop module. The training set module is configured to obtain or receive a training set of non-occluded (or occlusion-free) face images for training the authenticator system. The training loop module is configured to perform, for each non-occluded (or occlusion-free) face image included in whole or part of the training set, a training loop implemented by a converter module, a first biometric trainer module, a second biometric trainer module and a transformer trainer module. The converter module is configured to convert the non-occluded (or occlusion-free) face image into a partially occluded (or occlusion-affected) face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. The first biometric trainer module is configured to train the first biometric unit of the authenticator system to generate first faceprint corresponding to the non-occluded (or occlusion-free) face image, within occlusion-free faceprint domain. The second biometric trainer module is configured to train the second biometric unit of the authenticator system to generate second faceprint corresponding to the partially occluded (or occlusion-affected) face image, within occlusion-affected faceprint domain. The transformer trainer module is configured to train the transformer unit of the authenticator system to transform the first faceprint from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating third faceprint within occlusion-affected faceprint domain.
In a yet furthermore aspect, computer programs are provided comprising program instructions for causing a system or computing system to perform trainer methods of training an authenticator system, such as those described in other parts of the disclosure. These computer programs may be embodied on a storage medium and/or carried on a carrier signal.
In an additional aspect, computing systems are provided for training an authenticator system such as those described in other parts of the disclosure, said computing systems comprising a memory and a processor, embodying instructions stored in the memory and executable by the processor, and the instructions comprising functionality or functionalities to execute methods of training an authenticator system, such as those described in other parts of the disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
Non-limiting examples of the disclosure will be described in the following, with reference to the appended drawings, in which:
Figure 1 is a block diagram schematically illustrating authenticator systems for authenticating a person according to examples.
Figure 2 is a flow chart schematically illustrating authenticator methods of authenticating a person according to examples.
Figure 3 is a block diagram schematically illustrating trainer systems for training an authenticator system such as the ones of Figure 1, according to examples.
Figure 4 is a flow chart schematically illustrating trainer methods of training an authenticator system such as the ones of Figure 1, according to examples.
DETAILED DESCRIPTION OF EXAMPLES
Figure 1 is a block diagram schematically illustrating authenticator systems for authenticating a person according to examples. As generally shown in the figure, such authenticator systems 100 may include a first image unit 101, a first biometric unit 102, a second image unit 103, a second biometric unit 104, a transformer unit 105 and an authenticator unit 106. The first image unit 101 may be configured to obtain (or receive or capture) a first image 107 of the person representing a non-occluded face of the person. The first biometric unit 102 may be configured to generate, from the first image 107, a first faceprint 108 of the non-occluded face, within an occlusion-free faceprint domain. The second image unit 103 may be configured to obtain a second image 109 of the person representing a partially occluded face of the person. Such a partial occlusion may be due to, e.g., a health mask (such as a COVID-19 mask), a scarf, a non-full-face veil, a prominent glasses frame, etc. The partially occluded face may include a non-occluded face part and an occluded face part, each of said parts corresponding to a predefined or pre-known face part. The second biometric unit 104 may be configured to generate, from the second image 109, a second faceprint 110 of the non-occluded face part, within an occlusion-affected faceprint domain.
First and second image units 101, 103 may be a same image unit. First and second faceprints 108, 110 may be, e.g., biometric vectors or embeddings or any image mathematical representation suitable for biometric purposes. Any reference herein to a mathematical representation or vector or embedding or faceprint may thus be understood as referring to the same concept in the sense of any kind of image mathematical representation suitable for biometric purposes. The transformer unit 105 may be configured to generate a third faceprint 111 by transforming the first faceprint 108 from the occlusion-free faceprint domain to the occlusion-affected faceprint domain. The authenticator unit 106 may be configured to authenticate the person depending on a comparison between the second faceprint 110 and the third faceprint 111. Each of the first biometric unit 102, second biometric unit 104 and transformer unit 105 may be implemented in many different manners, as commented in other parts of the disclosure. For example, each of said units 102, 104, 105 may be a machine learning module that may therefore be trainable using machine learning. In particular, the first biometric unit 102 and the second biometric unit 104 may be different sub-modules in a same machine learning module in such a way that, in some examples, said sub-modules 102, 104 may share aspects such as, e.g., structure, weights, biases, etc. More particularly, this same machine learning module implementing the first and second biometric units 102, 104 may be a convolutional neural network which, in some examples, may be a Siamese convolutional neural network.
Transformer unit 105 may be a module configured to perform the transforming of the first faceprint 108 based on a Gaussian Mixture Model, or on a linear transformation, or combination thereof. The linear transformation may further include applying a non-linear transfer function on top of the linear transformation. Such a non-linear transfer function may include a sigmoid function, or a Rectified Linear Unit function, or a combination thereof. The transformer unit 105 may be, in some examples, a Generative Adversarial Neural Network or a Gaussian Mixture Neural Network.
According to the above possible implementations of the first biometric unit 102, second biometric unit 104 and transformer unit 105, corresponding data to be processed by any of said units 102, 104, 105 may be inputted into the unit 102, 104, 105 for it to produce suitable output. For example, the transforming of the first faceprint 108 may include inputting the first faceprint 108 into the corresponding module (e.g., machine learning module) 105 configured (e.g., trained) to perform the transformation from the occlusion-free faceprint domain to the occlusion-affected faceprint domain. A similar modular approach may be considered with respect to the first and second biometric units 102, 104.
The authenticating of the person depending on comparing the second and third faceprints (both within the occlusion-affected faceprint domain) may include, e.g., determining a matching score between the second faceprint 110 and the third faceprint 111 denoting how coincident or similar to each other they are. Once said matching score has been determined, authentication of the person may be performed depending on whether the matching score satisfies or dissatisfies a predefined matching threshold. In particular examples, authentication of the person may be determined successful if the matching score satisfies the predefined matching threshold and, otherwise, authentication of the person may be determined unsuccessful. Once the first faceprint 108 has been outputted by the first biometric unit 102, the first faceprint 108 may be stored as the reference faceprint of the person and, each time authentication of the person is requested, the first faceprint 108 may be retrieved and accordingly transformed from the occlusion-free faceprint domain to the occlusion-affected faceprint domain, so as to make it comparable with the second faceprint 110. Then, authentication of the person may be performed by comparing the first faceprint 108 transformed into the third faceprint 111 and the second faceprint 110 to each other. Retrieving of the first faceprint 108 may be performed depending on user credentials or user data uniquely identifying the person received by the authenticator system performing the authentication.
A distinguisher module may operate on top of authenticator systems 100 to implement the distinguisher function explained in other parts of the disclosure. With said distinguisher module, full flexibility may be provided to the user/person to be authenticated, as commented elsewhere.
Figure 3 is a block diagram schematically illustrating trainer systems for training an authenticator system 100 according to examples. As generally shown in the figure, such trainer systems 300 may include a training set module 301 and a training loop module 302. Such a training loop module 302 may include a converter module 303, a first biometric trainer module 304, a second biometric trainer module 305 and a transformer trainer module 306. Authenticator system 100 is partially shown with only those parts thereof that are of interest in this particular example.
The training set module 301 may be configured to obtain (or receive or capture) a training set of non-occluded (or occlusion-free) face images for training the authenticator system 100. The training loop module 302 may be configured to perform, for each non-occluded (or occlusion-free) face image in whole or part of the training set, a training loop implemented or implementable by, e.g., modules 303 - 306.
Converter module 303 may be configured to convert first or next non-occluded face image into a partially occluded face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. Expression “first or next non-occluded face image” refers to either first non-occluded face image in the training set at first iteration of the training loop or next non-occluded face image in the training set at any non-first iteration of the training loop.
First biometric trainer module 304 may be configured to train the first biometric unit 102 of the authenticator system 100 to generate first faceprint 108 of the non-occluded face image, within occlusion-free faceprint domain. Second biometric trainer module 305 may be configured to train the second biometric unit 104 of the authenticator system 100 to generate second faceprint 110 of the partially occluded face image, within occlusion-affected faceprint domain. Transformer trainer module 306 may be configured to train the transformer unit 105 (of the authenticator system 100) to transform the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating third faceprint 111 within occlusion-affected faceprint domain.
Training of the first biometric unit 102 may include providing, by the first biometric trainer module 304, current (i.e., first or next) non-occluded face image to the first biometric unit 102 for its training to generate first faceprint 108 corresponding to the current non-occluded face image. Training of the first biometric unit 102 may further include providing, by the first biometric trainer module 304, a classifier to classify the outputted first faceprint 108 in terms of having more or less loss (or divergence) with respect to what the first biometric unit 102 is expected to output. This classifier may be denominated, e.g., first biometric classifier and may have the necessary knowledge about the non-occluded face images to be processed for that classification aim. Such a first biometric classifier may be trained along with the first biometric unit 102 in a cooperative manner according to known machine learning techniques. If the loss inferred by the first biometric classifier proves unacceptable, the same current non-occluded face image may be repeatedly processed until the inferred loss becomes acceptable. This repetitive approach may cause both the first biometric classifier and the first biometric unit 102 to converge towards the targeted knowledge.
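The repeat-until-acceptable-loss scheme described above can be sketched with a deliberately toy stand-in for a biometric unit (a single trainable parameter) and a squared-divergence loss; the class, parameter count, loss choice and learning rate are all hypothetical:

```python
class ToyBiometricUnit:
    """One-parameter stand-in for a biometric unit: faceprint = w * pixel."""
    def __init__(self):
        self.w = 0.0

    def forward(self, pixel):
        return self.w * pixel


def train_until_acceptable(unit, pixel, expected, acceptable=1e-4, lr=0.1):
    """Repeatedly process the same image until the loss the classifier would
    infer (here: squared divergence from the expected faceprint) is acceptable."""
    loss = float("inf")
    while loss > acceptable:
        faceprint = unit.forward(pixel)
        loss = (faceprint - expected) ** 2                 # inferred loss
        unit.w -= lr * 2 * (faceprint - expected) * pixel  # corrective update
    return loss


unit = ToyBiometricUnit()
final_loss = train_until_acceptable(unit, pixel=1.0, expected=0.7)
print(final_loss <= 1e-4)  # True: the unit has converged on this image
```

In a real system the unit would be a deep network and the classifier a jointly trained module, but the control flow (process, infer loss, correct, repeat) is the same.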
Training of the second biometric unit 104 may include providing, by the second biometric trainer module 305, current (i.e., first or next) partially occluded face image (from converter module 303) to the second biometric unit 104 for its training to generate second faceprint 110 corresponding to the current partially occluded face image. Training of the second biometric unit 104 may further include providing, by the second biometric trainer module 305, a classifier to classify the outputted second faceprint 110 in terms of having more or less loss (or divergence) with respect to what the second biometric unit 104 is expected to output. This classifier may be denominated, e.g., second biometric classifier and may have the necessary knowledge about the partially occluded face images to be processed for that classification aim. Such a second biometric classifier may be trained along with the second biometric unit 104 in a cooperative manner according to known machine learning techniques. If the loss inferred by the second biometric classifier proves unacceptable, the same current partially occluded face image may be repeatedly processed until the inferred loss becomes acceptable. This repetitive approach may cause both the second biometric classifier and the second biometric unit 104 to converge towards the targeted knowledge.

Training of the transformer unit 105 may include providing, by the transformer trainer module 306, the first faceprint 108 outputted by the first biometric unit 102 to the transformer unit 105 for its training to output third faceprint 111 as a conversion of the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain. Training of the transformer unit 105 may further include classifying, by the transformer trainer module 306, the outputted third faceprint 111 in terms of having more or less loss (or divergence) with respect to the second faceprint 110 outputted by the second biometric unit 104.
It is intended that third faceprint 111 and second faceprint 110 converge to zero or minimum loss or divergence (between each other) because it is presumed that third faceprint 111 and second faceprint 110 correspond to the same person. If the loss inferred by the transformer trainer module 306 proves unacceptable, the same first faceprint 108 and second faceprint 110 may be repeatedly processed in this manner until the inferred loss becomes acceptable. This repetitive approach may cause transformer unit 105 to converge towards the targeted knowledge.
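The transformer training objective can be illustrated with a small numerical sketch in which the transformer unit is modelled as a single linear map trained by gradient descent to minimise the mean squared divergence between the transformed (third) faceprints and the target (second) faceprints; the dimensionality, the synthetic data and the linear-map assumption are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # hypothetical faceprint dimensionality

# Toy stand-ins: first faceprints (occlusion-free domain) and the second
# faceprints (occlusion-affected domain) each of them should map onto.
first = rng.normal(size=(100, dim))
second = first @ (rng.normal(size=(dim, dim)) * 0.5)  # pretend domain relation

W = np.zeros((dim, dim))  # transformer unit modelled as one linear map
lr = 0.05
for _ in range(1000):
    third = first @ W                      # third faceprint = transformed first
    loss = np.mean((third - second) ** 2)  # divergence w.r.t. second faceprint
    W -= lr * 2 * first.T @ (third - second) / len(first)  # repeat until acceptable

print(bool(loss < 1e-4))  # True: third and second faceprints have converged
```

A real transformer unit would typically be a trained neural module rather than a plain linear map, but the objective (drive third towards second) is the one described above.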
Global loss may be calculated periodically (e.g. at completion of each iteration or several iterations of the training loop) depending on the loss attributed to first biometric unit 102, the loss attributed to second biometric unit 104 and the loss attributed to transformer unit 105. Such a calculation of the global loss may be, e.g., a weighted calculation in which the loss attributed to first biometric unit 102, the loss attributed to second biometric unit 104 and the loss attributed to transformer unit 105 may have different weights. These different weights may depend on, e.g., experimental parameters or results determined over a whole training cycle or cycles over time. This global loss may accurately indicate whether the whole system formed by first and second biometric units 102, 104 and transformer unit 105 is converging or not (in expected manner) to the targeted knowledge, at what pace, how accurately, etc. over training cycles.
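The weighted calculation of the global loss reduces to a short helper; the weight values below are hypothetical, since per the description they would be set from experimental parameters or results:

```python
def global_loss(loss_first, loss_second, loss_transformer,
                weights=(1.0, 1.0, 0.5)):
    """Weighted combination of the losses attributed to the first biometric
    unit, the second biometric unit and the transformer unit."""
    w1, w2, wt = weights
    return w1 * loss_first + w2 * loss_second + wt * loss_transformer

# E.g. at completion of one iteration of the training loop:
print(round(global_loss(0.2, 0.3, 0.4), 2))  # 0.7
```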
Transformer unit 105 may thus be trained (as, e.g., explained with respect to Figure 3) in such a way that the authenticator unit 106 of the authenticator system 100 determines that second faceprint 110 and third faceprint 111 correspond to same person. Converting the non-occluded face image into partially occluded face image may be performed by the converter module 303 by setting the predefined or pre-known face part that is to be occluded into a uniform colour in such a way that said uniform colour introduces a constant noise for all face images that are processable by the authenticator system 100. This uniform colour may be, e.g., black colour. Alternatively, the converter module 303 may convert the non-occluded face image into the partially occluded face image by cropping the non-occluded face image to remove therefrom the predefined or pre-known face part that is to be occluded. Further alternatively, the converter module 303 may perform such a conversion by combining aforementioned uniform colour introduction and cropping-based removal.
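The uniform-colour conversion can be sketched as follows, occluding a hypothetical predefined face part (here, the lower half of a toy image) with black so that the occluded region carries the same constant value in every processed image:

```python
import numpy as np

def convert_to_partially_occluded(face_image, occluded_rows):
    """Occlude a predefined face part by painting it a uniform colour (black),
    so the occluded part introduces the same constant 'noise' in every image."""
    occluded = face_image.copy()
    occluded[occluded_rows, :] = 0  # uniform colour: black
    return occluded

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(6, 6), dtype=np.uint8)   # toy face image
masked = convert_to_partially_occluded(image, slice(3, 6))  # lower half occluded

print(int(masked[3:].sum()), bool((masked[:3] == image[:3]).all()))  # 0 True
```

The cropping-based alternative would instead slice the rows away (`face_image[:3]`), shrinking the image rather than blacking it out.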
As commented in other parts of the disclosure, trainer systems according to Figure 3 may permit training biometric and transformer units 102, 104, 105 in a very interrelated manner, such that said units 102, 104, 105 may prove much more effective and efficient than in prior art training approaches with the same or similar purposes, as experimentally confirmed by the inventors. Converter module 303 as explained so far is appropriate to train authenticator systems 100 to compute positive cases, i.e. successful authentications, because it ensures that both occluded and non-occluded face images correspond to the same person. Alternatively, converter module 303 may be further configured to add some biometric distortion in the non-occluded part of the partially occluded face images so as to force negative cases, i.e. unsuccessful authentications. Further alternatively, a partially occluded face image may correspond to a different person than the non-occluded face image with which it is paired within training pairs of face images. Converter module 303 may not be used in this non-conversion-based approach. In some examples, such conversion-based and non-conversion-based approaches may be combined to train authenticator systems 100 to compute negative cases.
As used herein, the term “module” or “unit” may be understood to refer to software, firmware, hardware and/or various combinations thereof. It is noted that the modules are exemplary. The modules may be combined, integrated, separated, and/or duplicated to support various applications. Also, a function described herein as being performed by a particular module may be performed by one or more other modules and/or by one or more other devices instead of or in addition to the function performed by the described particular module.
The modules may be implemented across multiple devices, associated or linked to corresponding methods of authenticating a person proposed herein, and/or to other components that may be local or remote to one another. Additionally, the modules may be moved from one device and added to another device, and/or may be included in both devices, associated to corresponding methods of authenticating a person proposed herein. Any software implementations may be tangibly embodied in one or more storage media, such as e.g. a memory device, a floppy disk, a compact disk (CD), a digital versatile disk (DVD), or other devices that may store computer code. The methods of authenticating a person according to present disclosure may be implemented by computing means, electronic means or a combination thereof. The computing means may be a set of instructions (e.g. a computer program) and then systems implementing the methods of authenticating a person may comprise a memory and a processor, embodying said set of instructions stored in the memory and executable by the processor. These instructions may comprise functionality or functionalities to execute corresponding methods of authenticating a person such as e.g. the ones described with reference to other figures.
In case the methods of authenticating a person are implemented only by electronic means, a controller of the system may be, for example, a CPLD (Complex Programmable Logic Device), an FPGA (Field Programmable Gate Array) or an ASIC (Application-Specific Integrated Circuit).
In case the methods of authenticating a person are a combination of electronic and computing means, the computing means may be a set of instructions (e.g. a computer program) and the electronic means may be any electronic circuit capable of implementing corresponding steps of the methods of authenticating a person proposed herein, such as those described with reference to other figures.
The computer program(s) may be embodied on a storage medium (for example, a CD-ROM, a DVD, a USB drive, a computer memory or a read-only memory) or carried on a carrier signal (for example, on an electrical or optical carrier signal).
The computer program(s) may be in the form of source code, object code, a code intermediate source and object code such as in partially compiled form, or in any other form suitable for use in implementing the methods of authenticating a person according to present disclosure. The carrier may be any entity or device capable of carrying the computer program(s).
For example, the carrier may comprise a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means.
When the computer program(s) is/are embodied in a signal that may be conveyed directly by a cable or other device or means, the carrier may be constituted by such cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the computer program(s) is/are embedded, the integrated circuit being adapted for performing, or for use in the performance of, the methods of authenticating a person proposed herein.
Figure 2 is a flow chart schematically illustrating methods of authenticating a person according to examples. As generally shown in the figure, authenticator methods may be initiated (e.g. at block 200) upon detection of a starting condition such as e.g. a request for starting the method or an invocation of the method from user interface or the like. Since authenticator methods according to Figure 2 are performable by authenticator systems according to Figure 1, number references from Figure 1 may be reused in following description of Figure 2.
Authenticator methods may further include (e.g. at block 201) obtaining a first image 107 of the person representing a non-occluded face of the person. This functionality implemented or implementable at/by block 201 may be performed by e.g. first image unit 101 previously described with reference to Figure 1. Functional details and considerations explained about said first image unit 101 may thus be similarly attributed or attributable to method block 201.
Authenticator methods may further include (e.g. at block 202) generating, from the first image 107, a first faceprint 108 of the non-occluded face, within an occlusion-free faceprint domain. This functionality implemented or implementable at/by block 202 may be performed by e.g. first biometric unit 102 previously described with reference to Figure 1. Functional details and considerations explained about said first biometric unit 102 may thus be similarly attributed or attributable to method block 202.
Authenticator methods may still further include (e.g. at block 203) obtaining a second image 109 of the person representing a partially occluded face of the person, the partially occluded face having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. This functionality implemented or implementable at/by block 203 may be performed by e.g. second image unit 103 previously described with reference to Figure 1. Functional details and considerations explained about said second image unit 103 may thus be similarly attributed or attributable to method block 203.
Authenticator methods may yet further include (e.g. at block 204) generating, from the second image 109, a second faceprint 110 of the non-occluded face part, within an occlusion-affected faceprint domain. This functionality implemented or implementable at/by block 204 may be performed by e.g. second biometric unit 104 previously described with reference to Figure 1. Functional details and considerations explained about said second biometric unit 104 may thus be similarly attributed or attributable to method block 204.
Authenticator methods may furthermore include (e.g. at block 205) transforming the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating a third faceprint 111 within the occlusion-affected faceprint domain. This functionality implemented or implementable at/by block 205 may be performed by e.g. transformer unit 105 previously described with reference to Figure 1. Functional details and considerations explained about said transformer unit 105 may thus be similarly attributed or attributable to method block 205.
Authenticator methods may still furthermore include (e.g. at block 206) authenticating the person depending on a comparison between the second faceprint 110 and the third faceprint 111. This functionality implemented or implementable at/by block 206 may be performed by e.g. authenticator unit 106 previously described with reference to Figure 1. Functional details and considerations explained about said authenticator unit 106 may thus be similarly attributed or attributable to method block 206.
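Method blocks 201-206 can be wired together in a short sketch, with each unit injected as a callable; the unit names mirror Figure 1, but the toy implementations below (identity embeddings, exact-match authenticator) are purely illustrative stand-ins:

```python
def authenticate_person(first_image, second_image, units):
    """Sketch of the authenticator method flow of Figure 2 (blocks 201-206)."""
    first_faceprint = units["first_biometric"](first_image)       # block 202
    second_faceprint = units["second_biometric"](second_image)    # block 204
    third_faceprint = units["transformer"](first_faceprint)       # block 205
    return units["authenticator"](second_faceprint, third_faceprint)  # block 206

units = {
    "first_biometric": lambda img: [p / 255 for p in img],   # toy faceprint
    "second_biometric": lambda img: [p / 255 for p in img],
    "transformer": lambda fp: fp,          # trivial domain transform for the demo
    "authenticator": lambda a, b: a == b,  # exact match stands in for scoring
}
print(authenticate_person([10, 20], [10, 20], units))  # True
```

In the described system the transformer would perform a genuine occlusion-free-to-occlusion-affected domain conversion and the authenticator would apply a matching score and threshold, not an equality test.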
Authenticator methods may terminate (e.g. at block 207) when an ending condition is detected such as e.g. once authentication has been (either successfully or unsuccessfully) completed, under reception of a user termination request, under shutdown or deactivation of the authenticator system 100 performing the method, etc.
Figure 4 is a flow chart schematically illustrating methods of training an authenticator system (or trainer methods) according to examples. As generally shown in the figure, trainer methods may be initiated (e.g. at block 400) upon detection of a starting condition such as e.g. a request for starting the method or an invocation of the method from user interface or the like. Since trainer methods according to Figure 4 are performable by trainer systems according to Figure 3, and since said trainer systems are purposed to train authenticator systems according to Figure 1, number references from Figures 1 and 3 may be reused in following description of Figure 4.
Trainer methods may further include (e.g. at block 401) obtaining or receiving a training set of non-occluded face images for training the authenticator system 100. This functionality implemented or implementable at/by block 401 may be performed by e.g. training set module 301 previously described with reference to Figure 3. Functional details and considerations explained about said training set module 301 may thus be similarly attributed or attributable to method block 401.
Once the training set of non-occluded face images has been obtained or received, trainer methods may further include performing, for each non-occluded face image in whole or part of the training set, a training loop that may include, e.g., method blocks 402 - 406 which are explained below. This training loop may be performed by e.g. training loop module 302 previously described with reference to Figure 3. Functional details and considerations explained about said training loop module 302 may thus be similarly attributed or attributable to such a training loop.
Trainer methods may still further include (e.g. at block 402) converting either first or next non-occluded face image into a partially occluded face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part. At first iteration of the training loop, first or next non-occluded face image may be the first one and, otherwise may be the next one. This functionality implemented or implementable at/by block 402 may be performed by e.g. converter module 303 previously described with reference to Figure 3. Functional details and considerations explained about said converter module 303 may thus be similarly attributed or attributable to method block 402.
Trainer methods may yet further include (e.g. at block 403) training the first biometric unit 102 of the authenticator system 100 to generate first faceprint 108 of the non-occluded face image, within occlusion-free faceprint domain. This functionality implemented or implementable at/by block 403 may be performed by e.g. first biometric trainer module 304 previously described with reference to Figure 3. Functional details and considerations explained about said first biometric trainer module 304 may thus be similarly attributed or attributable to method block 403.
Trainer methods may furthermore include (e.g. at block 404) training the second biometric unit 104 of the authenticator system 100 to generate second faceprint 110 of the partially occluded face image, within occlusion-affected faceprint domain. This functionality implemented or implementable at/by block 404 may be performed by e.g. second biometric trainer module 305 previously described with reference to Figure 3. Functional details and considerations explained about said second biometric trainer module 305 may thus be similarly attributed or attributable to method block 404.

Trainer methods may still furthermore include (e.g. at block 405) training the transformer unit 105 of the authenticator system 100 to transform the first faceprint 108 from occlusion-free faceprint domain to occlusion-affected faceprint domain, thereby generating third faceprint within occlusion-affected faceprint domain. This functionality implemented or implementable at/by block 405 may be performed by e.g. transformer trainer module 306 previously described with reference to Figure 3. Functional details and considerations explained about said transformer trainer module 306 may thus be similarly attributed or attributable to method block 405.
Trainer methods may yet furthermore include (e.g. at decision block 406) verifying whether an ending condition is satisfied, in which case Y the method may be terminated by, e.g., transitioning to ending block 407 and, otherwise N, a new iteration of the training loop may be initiated by, e.g., looping back to block 402. Ending condition may include, e.g., completion of the whole or part of the training set (i.e. all non-occluded face images to be used for the training have been processed), reception of a user termination request, shutdown or deactivation of the trainer system 300 that is performing the trainer method, etc.
Authenticator methods and systems according to present disclosure may be used in many applications of, e.g., managing 1:N physical accesses, managing 1:1 physical accesses, managing identity accesses, etc.
Managing 1:N physical accesses may refer to monitoring/controlling accesses by many people to, e.g., a Railway Station, a Sport event/infrastructure, etc. Cardinality 1:N refers to authentication of one person (1) versus many possible people (N), the one person represented by a faceprint generated at authentication time and the many people represented by stored reference faceprints. Authenticating people in this scenario using authenticator methods and systems disclosed herein may reduce the cost of data storage and data communication (cost and technology benefits). This may also permit changing a semi non-cooperative situation (i.e. a user wearing a scarf) into a cooperative situation (security benefits) without introducing friction or increasing the (human) cost of assisting the authentication or identification process (i.e. User Experience benefits). For example, in a football match requiring identifying 100k people that may access the Stadium, the manager system should ideally support working with partially occluded faces or non-occluded faces, for reasons of convenience and user experience. Reference faceprints in the pertinent DB may have been generated via selfies of non-occluded faces. In operation, access terminals may need to download the database for fast response with minimum latency. With authenticator methods and systems according to present disclosure, it may suffice to download a database with only 100k faceprints for supporting accesses with and without face occlusion. Otherwise, it would be required to download 200k faceprints, one occlusion-free faceprint and one occlusion-affected faceprint per person. Reasonably assuming 2KB for each faceprint, downloading 100k faceprints may require 200MB, versus the 400MB that 200k faceprints would require without authentication solutions according to present disclosure.
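The storage figures above follow directly (using decimal units and the stated, assumed 2KB-per-faceprint size):

```python
faceprint_kb = 2   # assumed size of one faceprint
people = 100_000   # people accessing the Stadium

one_faceprint_per_person_mb = people * faceprint_kb / 1000       # single domain
two_faceprints_per_person_mb = 2 * people * faceprint_kb / 1000  # both domains

print(one_faceprint_per_person_mb, two_faceprints_per_person_mb)  # 200.0 400.0
```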
In relation to 1:1 physical accesses, said cardinality refers to the fact that one faceprint (1) generated at authentication time is to be compared with one reference faceprint (1) previously generated and stored at, e.g., registration time. Accessing different rooms, zones, areas, regions or the like (in, e.g., an office) may require different security levels. For some of such levels, the manager system (in charge of managing accesses) may support accesses with a partially occluded face (for instance, due to a facial mask) and, for others, accesses with a non-occluded face. Such accesses may be controlled, e.g., via a two-dimensional (2D) or printed credential or credentials (e.g., QR code or codes), so that a database storing reference faceprints may be avoided. The 2D or printed credential may ideally be kept as small as possible but, at the same time, it may be used for different security levels. With authenticator methods and systems disclosed herein, it may be possible to reuse the same 2D or printed credential with the reference faceprint of the non-occluded face represented or printed thereon, such that authentication may be performed against the faceprint of either a partially occluded face or a non-occluded face generated at authentication time. Otherwise, the 2D or printed credential would be required to be twice as large in order to contain two reference faceprints, one corresponding to the non-occluded face and another corresponding to the partially occluded face.
Identity Access Management (IAM) within environments in which face masks are (like) another work tool, such as, e.g., industrial, healthcare or laboratory facilities or the like, may also take advantage of authenticator methods and systems disclosed herein. Authentication solutions according to present disclosure may permit reducing the cost of non-assisted daily and continuous identification/authentication while executing tasks that limit hand mobility (due to, e.g., the professional tasks being performed). For example, it may be costly to stop using the hand(s) in work tasks to uncover the face for authentication. This may be especially aggravating when authentication is to be executed repeatedly during execution of work tasks. Similarly, IAM may greatly benefit from authenticator methods and systems disclosed herein when the users/persons to be authenticated have some kind of disability (e.g., Amyotrophic lateral sclerosis, ALS) that complicates or makes impossible face mask manipulation.

Although only a number of examples have been disclosed herein, other alternatives, modifications, uses and/or equivalents thereof are possible. Furthermore, all possible combinations of the described examples are also covered. Thus, the scope of the disclosure should not be limited by particular examples, but it should be determined only by a fair reading of the claims that follow.

Claims

1. Method of authenticating a person, comprising: obtaining a first image of the person representing a non-occluded face of the person; generating, from the first image, a first biometric mathematical representation of the non-occluded face, within a first representation domain; obtaining a second image of the person representing a partially occluded face of the person, the partially occluded face having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part; generating, from the second image, a second biometric mathematical representation of the non-occluded face part, within a second representation domain; transforming the first biometric mathematical representation from the first representation domain to the second representation domain, thereby generating a third biometric mathematical representation within the second representation domain; authenticating the person depending on a comparison between the second biometric mathematical representation and the third biometric mathematical representation.
2. Method of authenticating a person according to claim 1, wherein the transforming of the first biometric mathematical representation includes: inputting the first biometric mathematical representation into a machine learning module trained to perform the transformation from the first representation domain to the second representation domain.
3. Method of authenticating a person according to any of claims 1 or 2, wherein the transforming of the first biometric mathematical representation includes: transforming the first biometric mathematical representation from the first representation domain to the second representation domain based on a Gaussian Mixture Model.
4. Method of authenticating a person according to any of claims 1 to 3, wherein the transforming of the first biometric mathematical representation includes: transforming the first biometric mathematical representation from the first representation domain to the second representation domain based on a linear transformation.
5. Method of authenticating a person according to claim 4, wherein the transforming of the first biometric mathematical representation further includes applying a non-linear transfer function on top of the linear transformation.
6. Method of authenticating a person according to claim 5, wherein the non-linear transfer function includes a sigmoid function, or a Rectified Linear Unit function, or a combination thereof.
7. Method of authenticating a person according to any of claims 1 to 6, wherein the authenticating of the person depending on a comparison includes: determining a matching score between the second biometric mathematical representation and the third biometric mathematical representation; and authenticating the person depending on whether the determined matching score satisfies or dissatisfies a predefined matching threshold.
8. Method of authenticating a person according to claim 7, wherein the authenticating of the person is determined successful if the matching score satisfies the predefined matching threshold and, otherwise, the authenticating of the person is determined unsuccessful.
9. Method of authenticating a person according to any of claims 1 to 8, further comprising: storing the first biometric mathematical representation as reference biometric mathematical representation for authentication of the person; and retrieving the first biometric mathematical representation for transforming it from the first representation domain to the second representation domain and subsequent authentication of the person.
10. Method of authenticating a person according to claim 9, further comprising: receiving user credentials or any user data uniquely identifying the person in such a way that the retrieving of the first biometric mathematical representation is performed depending on said user credentials or user data uniquely identifying the person.
11. Authenticator system for authenticating a person, comprising: a first image unit configured to obtain a first image of the person representing a non-occluded face of the person; a first biometric unit configured to generate, from the first image, a first biometric mathematical representation of the non-occluded face, within a first representation domain; a second image unit configured to obtain a second image of the person representing a partially occluded face of the person, the partially occluded face having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part; a second biometric unit configured to generate, from the second image, a second biometric mathematical representation of the non-occluded face part, within a second representation domain; a transformer unit configured to transform the first biometric mathematical representation from the first representation domain to the second representation domain, thereby generating a third biometric mathematical representation within the second representation domain; an authenticator unit configured to authenticate the person depending on a comparison between the second biometric mathematical representation and the third biometric mathematical representation.
12. Authenticator system according to claim 11, wherein each of the first biometric unit, second biometric unit and transformer unit is implemented as a machine learning module which is therefore trainable using machine learning.
13. Authenticator system according to claim 12, wherein the first biometric unit and the second biometric unit are a same machine learning module in such a way that the first biometric unit and the second biometric unit share weights in the same machine learning module.
14. Authenticator system according to claim 13, wherein the same machine learning module is a convolutional neural network.
15. Authenticator system according to claim 14, wherein the convolutional neural network is a Siamese convolutional neural network.
16. Authenticator system according to any of claims 12 to 15, wherein the transformer unit is a Generative Adversarial Neural Network.
17. Authenticator system according to any of claims 12 to 15, wherein the transformer unit is a Gaussian Mixture Neural Network.
18. Trainer method of training an authenticator system according to any of claims 12 to 17, the trainer method comprising: obtaining or receiving a training set of non-occluded face images for training the authenticator system; and performing, for each non-occluded face image in whole or part of the training set, a training loop including: converting the non-occluded face image into a partially occluded face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part, training the first biometric unit of the authenticator system to generate a first biometric mathematical representation of the non-occluded face image, within the first representation domain, training the second biometric unit of the authenticator system to generate a second biometric mathematical representation of the partially occluded face image, within the second representation domain, and training the transformer unit of the authenticator system to transform the first biometric mathematical representation from the first representation domain to the second representation domain, thereby generating a third biometric mathematical representation within the second representation domain.
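The training loop of claim 18 can be sketched under heavy simplifications: images are flat vectors, the shared biometric unit is a frozen random projection, and only the transformer (a linear map `T`) is trained, by gradient descent on the squared distance between the third and second representations (consistent with claim 19's goal of making them match for the same person). The scaling, learning rate and occluded region are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
embed = rng.standard_normal((32, 100)) * 0.1  # shared biometric unit (frozen here)
T = np.eye(32)                                # transformer unit, domain 1 -> domain 2

def occlude(image):
    """Convert a non-occluded face image into a partially occluded one by
    zeroing a predefined region (cf. claims 20-21); the split at index 50
    stands in for a predefined or pre-known face part."""
    out = image.copy()
    out[50:] = 0.0
    return out

training_set = [rng.standard_normal(100) for _ in range(200)]
lr = 1e-3
for image in training_set:                        # training loop of claim 18
    first = embed @ image                          # first representation (domain 1)
    second = embed @ occlude(image)                # second representation (domain 2)
    third = T @ first                              # third representation
    grad = 2.0 * np.outer(third - second, first)   # d/dT of ||T @ first - second||^2
    T -= lr * grad                                 # pull third toward second
```

After training, the transformer maps the non-occluded representation closer to the occluded-face representation of the same person, which is what lets the authenticator unit of claim 11 compare them directly.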
19. Trainer method according to claim 18, wherein the transformer unit is trained in such a way that the authenticator unit of the authenticator system determines that the second biometric mathematical representation and the third biometric mathematical representation correspond to the same person.
20. Trainer method according to any of claims 18 or 19, wherein the converting of the non-occluded face image into the partially occluded face image includes: setting the predefined or pre-known face part that is to be occluded into uniform colour in such a way that said uniform colour introduces a constant noise for all face images that are processable by the authenticator system.
21. Trainer method according to claim 20, wherein the uniform colour is black colour.
22. Trainer method according to any of claims 18 or 19, wherein the converting of the non-occluded face image into the partially occluded face image includes: cropping the non-occluded face image to remove therefrom the predefined or pre-known face part that is to be occluded.
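The two occlusion strategies of claims 20 to 22 can be sketched on a toy greyscale image array. The region below `row` stands in for a predefined or pre-known face part (for example, the mouth and chin area covered by a face mask); the coordinates are illustrative assumptions, not taken from the claims.

```python
import numpy as np

def occlude_black(image, row):
    """Claims 20-21: set the face part to a uniform (black) colour, so the
    occluded region contributes the same constant 'noise' to every image
    processed by the system."""
    out = image.copy()
    out[row:, :] = 0  # black pixels
    return out

def occlude_crop(image, row):
    """Claim 22: crop the image so the occluded face part is removed entirely."""
    return image[:row, :].copy()

face = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 "face" image
masked = occlude_black(face, 5)   # same shape, lower region blacked out
cropped = occlude_crop(face, 5)   # smaller image, lower region removed
```

The black-mask variant keeps image dimensions constant across the training set, while the crop variant changes the input shape; which is preferable depends on the architecture of the biometric units.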
23. Trainer system for training an authenticator system according to any of claims 12 to 17, the trainer system comprising: a training set module configured to obtain or receive a training set of non-occluded face images for training the authenticator system; and a training loop module configured to perform, for each non-occluded face image included in whole or part of the training set, a training loop implemented by: a converter module configured to convert the non-occluded face image into a partially occluded face image having a non-occluded face part and an occluded face part, each of said face parts corresponding to a predefined or pre-known face part, a first biometric trainer module configured to train the first biometric unit of the authenticator system to generate a first biometric mathematical representation of the non-occluded face image, within the first representation domain, a second biometric trainer module configured to train the second biometric unit of the authenticator system to generate a second biometric mathematical representation of the partially occluded face image, within the second representation domain, and a transformer trainer module configured to train the transformer unit of the authenticator system to transform the first biometric mathematical representation from the first representation domain to the second representation domain, thereby generating a third biometric mathematical representation within the second representation domain.
24. Computer program comprising program instructions for causing a computer or system to perform a method of authenticating a person according to any of claims 1 to 10.
25. Computer program according to claim 24, embodied on a storage medium or carried on a carrier signal.
26. Computer program comprising program instructions for causing a computer or system to perform a trainer method of training an authenticator system, the trainer method according to any of claims 18 to 22.
27. Computer program according to claim 26, embodied on a storage medium or carried on a carrier signal.
PCT/EP2022/072025 2022-06-15 2022-08-04 Authenticating a person WO2023241817A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22382577 2022-06-15
EP22382577.9 2022-06-15

Publications (1)

Publication Number Publication Date
WO2023241817A1 true WO2023241817A1 (en) 2023-12-21

Family

ID=82117653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/072025 WO2023241817A1 (en) 2022-06-15 2022-08-04 Authenticating a person

Country Status (1)

Country Link
WO (1) WO2023241817A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881740A (en) * 2020-06-19 2020-11-03 杭州魔点科技有限公司 Face recognition method, face recognition device, electronic equipment and medium
US20220058426A1 (en) * 2019-10-23 2022-02-24 Tencent Technology (Shenzhen) Company Limited Object recognition method and apparatus, electronic device, and readable storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20220058426A1 (en) * 2019-10-23 2022-02-24 Tencent Technology (Shenzhen) Company Limited Object recognition method and apparatus, electronic device, and readable storage medium
CN111881740A (en) * 2020-06-19 2020-11-03 杭州魔点科技有限公司 Face recognition method, face recognition device, electronic equipment and medium

Non-Patent Citations (2)

Title
CEN FENG ET AL: "Dictionary Representation of Deep Features for Occlusion-Robust Face Recognition", IEEE ACCESS, vol. 7, 12 March 2019 (2019-03-12), pages 26595 - 26605, XP011714570, DOI: 10.1109/ACCESS.2019.2901376 *
DAN ZENG ET AL: "A survey of face recognition techniques under occlusion", IET BIOMETRICS, IEEE, MICHAEL FARADAY HOUSE, SIX HILLS WAY, STEVENAGE, HERTS. SG1 2AY, UK, vol. 10, no. 6, 5 April 2021 (2021-04-05), pages 581 - 606, XP006112796, ISSN: 2047-4938, DOI: 10.1049/BME2.12029 *

Similar Documents

Publication Publication Date Title
KR102535676B1 (en) Auto Resume for Face Recognition
US11122078B1 (en) Systems and methods for private authentication with helper networks
US10496804B2 (en) Fingerprint authentication method and system, and terminal supporting fingerprint authentication
US20230176815A1 (en) Systems and methods for private authentication with helper networks
US20160180068A1 (en) Technologies for login pattern based multi-factor authentication
KR20170016231A (en) Multi-modal fusion method for user authentification and user authentification method
US20070041620A1 (en) Information access method using biometrics authentication and information processing system using biometrics authentication
JP2018508888A (en) System and method for performing fingerprint-based user authentication using an image captured using a mobile device
KR20170046448A (en) Method and device for complex authentication
EP3682356B1 (en) Efficient hands free interaction using biometrics
JP2018142198A (en) Information processing device, access controlling method, and access controlling program
CN112084476A (en) Biological identification identity verification method, client, server, equipment and system
CN105868610A (en) Method and system for realizing user authentication through biological characteristic information
CN107679493A (en) Face identification method and device
US8270681B2 (en) Vein pattern management system, vein pattern registration apparatus, vein pattern authentication apparatus, vein pattern registration method, vein pattern authentication method, program, and vein data configuration
Aleluya et al. Faceture ID: face and hand gesture multi-factor authentication using deep learning
US10679028B2 (en) Method and apparatus for performing authentication based on biometric information
WO2019236284A1 (en) Multiple enrollments in facial recognition
WO2023241817A1 (en) Authenticating a person
Shang et al. Face and lip-reading authentication system based on android smart phones
CN113327212B (en) Face driving method, face driving model training device, electronic equipment and storage medium
CN109614804A (en) A kind of bi-mode biology feature encryption method, equipment and storage equipment
JP2018195067A (en) Information processing apparatus and information processing program
JP2001331804A (en) Device and method for detecting image area
Hariharan et al. Face Recognition and Database Management System for Event Participant Authentication

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762003

Country of ref document: EP

Kind code of ref document: A1