CN103699887B - Portrait identification method and device - Google Patents


Publication number
CN103699887B
CN103699887B (granted publication of application CN201310714175.4A)
Authority
CN
China
Prior art keywords
face
sequence
feature data
recognition
score sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310714175.4A
Other languages
Chinese (zh)
Other versions
CN103699887A (en)
Inventor
张俊
王晓静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI PPDAI FINANCE INFORMATION SERVICE Co Ltd
Original Assignee
SHANGHAI PPDAI FINANCE INFORMATION SERVICE Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI PPDAI FINANCE INFORMATION SERVICE Co Ltd filed Critical SHANGHAI PPDAI FINANCE INFORMATION SERVICE Co Ltd
Priority to CN201310714175.4A priority Critical patent/CN103699887B/en
Publication of CN103699887A publication Critical patent/CN103699887A/en
Application granted granted Critical
Publication of CN103699887B publication Critical patent/CN103699887B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a portrait identification method and device. The method comprises the following steps: collecting a face video image of an identity information provider, and obtaining the face image on a valid identity document provided by that person together with the matching face image in a third-party identity data system; normalizing the face video image, the face image on the valid identity document, and the matched face image in the third-party identity data system; extracting face feature data from each of the three normalized images to serve as first, second and third face feature data respectively; cross-matching the first, second and third face feature data pairwise; and outputting the pairwise cross-matching results. The scheme improves the accuracy of portrait identification and makes online transactions safer and more reliable.

Description

Portrait identification method and device
Technical Field
The invention relates to the technical field of image recognition, in particular to a portrait recognition method and a portrait recognition device.
Background
To ensure that online transactions are safe and reliable, a user must provide a valid identification document for the website operator to check; only users who pass the check may transact online. In practice, the user uploads a scan of the identification document to the website server, and the website operator compares the identity information it contains — including the portrait, identity card number and address — against the identity information data system of the public security department. Portrait identification is a critical link in this process.
In existing portrait recognition, the portrait on the valid identification provided by the user is generally compared with the portrait in the identity information data system of the public security department. This can, to some extent, verify that the uploaded identification document matches the public security department's identity records. It cannot, however, detect impersonation — a person presenting someone else's genuine documents — which leaves a serious hidden danger and severely undermines the safety of online transactions.
Disclosure of Invention
The embodiment of the invention solves the problem of how to more accurately identify the portrait so as to ensure that online transaction is safer and more reliable.
In order to solve the above problem, an embodiment of the present invention provides a portrait recognition method, where the portrait recognition method includes:
acquiring a face video image of an identity information provider, and acquiring a face image on an effective identity certificate provided by the identity information provider and a face image matched with the identity information provider in a third-party identity data system;
normalizing the face video image, the face image on the effective identification provided by the identity information provider and the matched face image in the third-party identity data system;
acquiring face feature data from the normalized face video image, the face image on the valid identification provided by the identity information provider, and the matched face image in the third-party identity data system, the acquired data serving as first, second and third face feature data respectively;
carrying out pairwise cross matching on the first, second and third face feature data;
and outputting the result of the cross matching of the first, second and third face feature data.
Optionally, the face feature data includes face gray scale feature data, face shape feature data, and face skin texture feature data.
Optionally, the pairwise cross matching the first, second, and third facial feature data includes:
comparing the face gray feature data in the first, second and third face feature data in pairs in sequence to obtain a gray feature identification score sequence;
comparing the face shape feature data in the first, second and third face feature data in pairs in sequence to obtain a shape feature recognition score sequence;
comparing the facial skin texture feature data in the first, second and third facial feature data in pairs in sequence to obtain a texture feature recognition score sequence;
and respectively executing multi-feature fusion recognition on the numerical values of the same order in the gray feature recognition score sequence, the shape recognition score sequence and the texture recognition score sequence to obtain a fusion recognition score sequence.
Optionally, the performing pairwise cross matching on the first, second, and third face feature data further includes: and respectively executing multi-classifier fusion on the numerical values of the same order in the gray characteristic identification score sequence, the shape identification score sequence and the texture identification score sequence and the numerical values of the same order in the fusion identification score sequence to obtain a comprehensive identification score sequence.
Optionally, the portrait recognition method further includes: and comparing the cross matching result with a preset threshold value, and issuing alarm information when the cross matching result is smaller than the threshold value.
Optionally, the comparing the result of the cross matching with a preset threshold, and when the result of the cross matching is smaller than the threshold, issuing an alarm message includes: and comparing the fusion recognition score sequence with the numerical values of the same number in a preset first threshold sequence respectively, and issuing alarm information when the fusion recognition score sequence is smaller than the numerical value of the same number in the first threshold sequence.
Optionally, the comparing the result of the cross matching with a preset threshold, and when the result of the cross matching is smaller than the threshold, issuing an alarm message includes: and comparing the comprehensive identification score sequence with the numerical values of the same number in a preset second threshold sequence respectively, and issuing alarm information when the comprehensive identification score sequence is smaller than the numerical value of the same number in the second threshold sequence.
The embodiment of the invention also provides a portrait recognition device, which comprises:
the acquisition and acquisition unit is used for acquiring a face video image of the identity information provider and acquiring a face image on an effective identity certificate provided by the identity information provider and a matched face image in a third-party identity data system;
the normalization unit is used for normalizing the human face video image, the human face image on the effective identification provided by the identity information provider and the matched human face image in the third-party identity data system;
an extraction unit, configured to acquire face feature data from the normalized face video image, the face image on the valid identification provided by the identity information provider, and the matched face image in the third-party identity data system, the acquired data serving as first, second and third face feature data respectively;
the matching unit is used for performing pairwise cross matching on the first, second and third face feature data;
and the output unit is used for outputting the result of the cross matching of the first, second and third face feature data.
Optionally, the face feature data includes face gray scale feature data, face shape feature data, and face skin texture feature data.
Optionally, the matching unit includes:
the gray matching subunit is used for comparing the face gray feature data in the first, second and third face feature data in pairs in sequence to obtain a gray feature identification score sequence;
the shape matching subunit is used for comparing the face shape feature data in the first, second and third face feature data in pairs in sequence to obtain a shape feature recognition score sequence;
the texture matching subunit is used for comparing the facial skin texture feature data in the first, second and third facial feature data in pairs in sequence to obtain a texture feature recognition score sequence;
and the fusion matching subunit is used for respectively executing multi-feature fusion recognition on the numerical values of the same order in the gray-scale feature recognition score sequence, the shape recognition score sequence and the texture recognition score sequence to obtain a fusion recognition score sequence.
Optionally, the matching unit further includes:
and the comprehensive matching subunit is used for executing multi-classifier fusion on the numerical values of the same order in the gray characteristic identification score sequence, the shape identification score sequence and the texture identification score sequence and the numerical values of the same order in the fusion identification score sequence respectively to obtain a comprehensive identification score sequence.
Optionally, the portrait recognition apparatus further includes:
and the alarm unit is used for comparing the cross matching result with a preset threshold value and issuing alarm information when the cross matching result is smaller than the threshold value.
Optionally, the alarm unit comprises:
and the first alarm subunit is used for comparing the fused identification score sequence with a preset numerical value of the same order in the first threshold sequence, and issuing alarm information when the fused identification score sequence is smaller than the numerical value of the same order in the first threshold sequence.
Optionally, the alarm unit comprises:
and the second alarm subunit is used for comparing the comprehensive identification score sequence with a preset numerical value of the same order in a second threshold sequence, and issuing alarm information when the comprehensive identification score sequence is smaller than the numerical value of the same order in the second threshold sequence.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following advantages:
the collected face video image, the face image on the valid identification provided by the identity information provider, and the matching face image in the third-party identity data system are cross-matched pairwise. This identifies not only whether the face image on the identity document matches the face image in the third-party identity data system, but also whether the identity information provider's live face matches each of those two images. The accuracy of portrait identification is thereby improved, and online transactions become safer and more reliable.
Furthermore, when the face feature recognition is carried out, the gray feature recognition score sequence, the shape recognition score sequence and the texture recognition score sequence are respectively fused with the numerical values of the same rank in the fusion recognition score sequence by a plurality of classifiers to obtain a comprehensive recognition score sequence, so that the accuracy of the face feature recognition can be further improved.
Further, the cross matching result is compared with a preset threshold value, when the cross matching result is smaller than the threshold value, warning information is issued to prompt a user so as to take corresponding measures, and potential safety hazards caused by careless omission of the user can be reduced.
Drawings
FIG. 1 is a flow chart of a human image recognition method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another method of face recognition in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a portrait recognition apparatus in an embodiment of the present invention.
Detailed Description
In the technical scheme adopted by the embodiments of the invention, the collected face video image and the face image on the valid identification provided by the identity information provider are cross-matched with the matching face image in the third-party identity data system. The scheme can therefore identify not only whether the face image on the identity document matches the face image in the third-party identity data system, but also whether the identity information provider's live face matches each of those two images, thereby improving the accuracy of portrait identification and making online transactions safer and more reliable.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, a flowchart of a portrait recognition method in an embodiment of the invention is shown. The portrait recognition method comprises the following steps:
step S11, collecting the face video image of the identity information provider, and obtaining the face image on the effective identification provided by the identity information provider and the face image matched in the third-party identity data system.
The face video image of the identity information provider may be collected through a network camera; the valid identification may be an identity card, a passport, or the like; and the third-party identity data system may be, for example, an ID5 identity network data system.
Step S12, normalizing the face video image, the face image on the effective identification provided by the identity information provider, and the face image matched with the third-party identity data system.
Step S13, obtaining face feature data from the normalized face video image, the face image on the effective identification provided by the identity information provider, and the face image matched with the third-party identity data system, and using the face feature data as the first, second, and third face feature data, respectively.
And step S14, performing pairwise cross matching on the first, second and third face feature data.
And step S15, outputting the result of the cross matching of the first, second and third face feature data.
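Steps S11–S15 amount to matching three face images against one another in every pairwise combination. The patent does not specify a matcher, so the sketch below assumes a hypothetical `similarity` function (a toy cosine similarity here) and hypothetical source names; it only illustrates the pairwise cross-matching structure:

```python
from itertools import combinations

def cross_match(features, similarity):
    """Pairwise cross-match the face feature data from the three sources
    (live video frame, identity document, third-party record).

    `features` maps a source name to its feature vector; `similarity` is
    any score function returning a value in [0, 1]."""
    results = {}
    for (name_a, feat_a), (name_b, feat_b) in combinations(features.items(), 2):
        results[(name_a, name_b)] = similarity(feat_a, feat_b)
    return results

def cosine(a, b):
    """Toy stand-in matcher: cosine similarity of two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

# Three sources yield three pairs:
# (video, document), (video, third_party), (document, third_party).
scores = cross_match(
    {"video": [1.0, 0.0], "document": [1.0, 0.1], "third_party": [0.9, 0.05]},
    cosine,
)
```

The three pair scores are exactly what steps S14–S15 produce and output; a real system would substitute the patent's grayscale, shape and texture matchers for the toy cosine function.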
In the portrait recognition method of this embodiment, the face video image, the face image on the valid identification provided by the identity information provider, and the matching face image in the third-party identity data system are cross-matched pairwise. This identifies not only whether the face image on the identity document matches the face image in the third-party identity data system, but also whether the identity information provider's live face matches each of those two images, improving the accuracy of portrait recognition and making online transactions safer and more reliable.
Referring to fig. 2, a flow chart of another portrait identification method in an embodiment of the present invention is shown. The portrait recognition method comprises the following steps:
step S21, collecting the face video image of the identity information provider, and obtaining the face image on the effective identification provided by the identity information provider and the face image matched in the third-party identity data system.
In a specific embodiment, a video image of a face of an identity information provider may be captured by a webcam, a face image on a valid identification may be obtained from a valid identification document, such as an identification card, provided by the identity information provider, and a matching face image in a third-party identity data system may be obtained from, for example, an ID5 identity web database.
Step S22, normalizing the face video image, the face image on the effective identification provided by the identity information provider, and the face image matched with the third-party identity data system.
Normalization converts an image into a corresponding unique standard form through a series of transformations: a set of parameters is found from the image's invariant moments, which eliminates the influence of other transformation functions on the image. The standard-form image is invariant to affine transformations such as translation, rotation and scaling. In step S22, therefore, the collected face video image, the face image on the valid identification provided by the identity information provider, and the matched face image in the third-party identity data system may each be converted into its corresponding standard-form image, so that more accurate face feature data can be acquired.
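The patent does not give the normalization formulas. As a simplified illustration of moment-based normalization — handling only translation and scale, with rotation omitted — the sketch below centers an image's intensity centroid at the origin and rescales so the average spread is 1; the sparse `{(x, y): intensity}` representation is chosen purely for brevity:

```python
def moment_normalize(image):
    """Translation/scale-normalize a grayscale image given as a dict
    mapping (x, y) pixel coordinates to intensities.

    Computes the zeroth and first moments (total mass and centroid) and
    a second central moment (spread), then re-expresses coordinates so
    the centroid sits at the origin and the spread is 1. Two images of
    the same face at different positions and sizes map to the same
    standard form, which is the point of the normalization step."""
    total = sum(image.values())
    cx = sum(x * v for (x, y), v in image.items()) / total
    cy = sum(y * v for (x, y), v in image.items()) / total
    spread = (sum(((x - cx) ** 2 + (y - cy) ** 2) * v
                  for (x, y), v in image.items()) / total) ** 0.5
    scale = spread or 1.0  # degenerate single-point image: leave scale alone
    return {((x - cx) / scale, (y - cy) / scale): v
            for (x, y), v in image.items()}
```

A full implementation would also derive a rotation angle from the second-order central moments to obtain invariance to all the affine transformations the description mentions.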
Step S23, obtaining face feature data from the normalized face video image, the face image on the effective identification provided by the identity information provider, and the face image matched with the third-party identity data system, and using the face feature data as the first, second, and third face feature data, respectively.
The face feature data may include face gray feature data, face shape feature data, and face skin texture feature data.
And step S24, performing pairwise cross matching on the first, second and third face feature data.
In specific implementation, the first face feature data, the second face feature data and the third face feature data are compared pairwise in sequence to obtain corresponding feature recognition scores, so as to form a feature recognition score sequence.
Specifically, step S24 may include the following. First, pairwise grayscale feature recognition is performed on the grayscale feature data in the first, second and third face feature data, yielding in turn the first, second and third grayscale recognition scores, which form the grayscale recognition score sequence. Pairwise shape feature recognition on the shape feature data likewise yields the first, second and third shape recognition scores, forming the shape recognition score sequence; and pairwise texture feature recognition on the face skin texture feature data yields the first, second and third texture recognition scores, forming the texture recognition score sequence. Multi-feature fusion recognition is then performed on the same-rank values of the grayscale, shape and texture recognition score sequences to obtain the fused recognition score sequence. Step S24 thus produces a fused matching result for the face feature data. The multi-feature fusion recognition may operate at the match-score level: each feature is matched separately to obtain different quantized values, the quantized values are normalized, and a unified rule is then applied to all the normalized values to compute a single quantized value.
In a specific implementation, the multi-feature fusion recognition may include: performing multi-feature fusion recognition on the first gray recognition score, the first shape recognition score and the first texture recognition score to obtain a first fusion recognition score; performing multi-feature fusion recognition on the second gray recognition score, the second shape recognition score and the second texture recognition score to obtain a second fusion recognition score; performing multi-feature fusion recognition on the third gray recognition score, the third shape recognition score and the third texture recognition score to obtain a third fusion recognition score; the first, second and third fused recognition scores comprise a fused recognition score sequence. The fusion recognition score sequence obtained by the multi-feature fusion recognition of the matching quantification layer reflects the comprehensive result of pairwise matching of the first, second and third face feature data. According to the comprehensive result, the corresponding judgment result can be output.
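The match-score-level fusion described above can be sketched directly: normalize each matcher's raw output, then apply one unified rule to the same-rank values. The patent fixes neither the normalization nor the rule, so the min–max normalization and weighted sum below are illustrative choices only:

```python
def min_max_normalize(scores, lo, hi):
    """Map raw matcher outputs into [0, 1] given the matcher's known
    score range — the normalization step of match-score-level fusion."""
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(gray, shape, texture, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Unified rule (here, a weighted sum) applied to the same-rank
    grayscale, shape and texture recognition scores; returns the fused
    recognition score sequence, one value per image pair."""
    wg, ws, wt = weights
    return [wg * g + ws * s + wt * t
            for g, s, t in zip(gray, shape, texture)]

# Three pairs (video/document, video/third-party, document/third-party),
# one normalized score per pair from each of the three feature matchers.
gray = min_max_normalize([50, 80, 90], lo=0, hi=100)
fused = fuse(gray, [0.6, 0.7, 0.9], [0.4, 0.9, 0.9])
```

Other common rules (product, max, or a trained combiner) would slot into `fuse` unchanged; the weights could also be tuned per feature if one matcher is known to be more reliable.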
And step S25, outputting the result of the cross matching of the first, second and third face feature data.
Step S25 may display the combined result of pairwise matching of the first, second, and third facial feature data, that is, the first, second, and third fused recognition scores to the user in an intuitive manner, so that the user may conveniently know the matching result.
And step S26, comparing the cross matching result with a preset threshold value, and issuing warning information when the cross matching result is smaller than the threshold value.
In a specific implementation, step S26 may include: comparing the fused recognition score sequence, value by value, with the same-rank values of a preset first threshold sequence, and issuing alarm information when a fused score falls below its corresponding threshold. Specifically, the first fused recognition score is compared with the first value of the first threshold sequence, the second fused recognition score with the second value, and the third fused recognition score with the third value. Whenever a fused recognition score is smaller than its same-rank threshold, alarm information is issued. Comparing the cross-matching results against preset thresholds and alerting the user when a result falls below threshold allows the user to grasp the portrait recognition results more comprehensively, which is convenient and practical.
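The same-rank threshold comparison is a one-line loop; the sketch below returns which image pairs trigger an alarm (the threshold values themselves are illustrative — the patent leaves them as preset parameters):

```python
def check_thresholds(score_seq, threshold_seq):
    """Compare each score against the same-rank threshold value and
    return the indices of pairs that fall below threshold — i.e. the
    alarm conditions of step S26."""
    return [i for i, (score, threshold)
            in enumerate(zip(score_seq, threshold_seq))
            if score < threshold]

# Pair 1 (video vs. third-party record) falls below its threshold here,
# so an alarm would be issued for that pair.
alarms = check_thresholds([0.9, 0.4, 0.8], [0.7, 0.7, 0.7])
```

Because each pair has its own threshold, the operator can demand a stricter match for, say, document-versus-database than for the live video frame, which is typically noisier.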
In a specific implementation, to further improve matching accuracy, the portrait identification method according to the embodiment of the invention may further include: performing multi-classifier fusion on the same-rank values of the grayscale, shape and texture recognition score sequences together with the same-rank value of the fused recognition score sequence, to obtain a comprehensive recognition score sequence. Specifically, this step may include: performing multi-classifier fusion on the first grayscale, first shape, first texture and first fused recognition scores; on the second grayscale, second shape, second texture and second fused recognition scores; and on the third grayscale, third shape, third texture and third fused recognition scores. Executing these fusions in turn yields the first, second and third comprehensive recognition scores, which form the comprehensive recognition score sequence. Multi-classifier fusion can further improve the accuracy of face feature data matching.
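The patent does not fix the multi-classifier fusion rule. One simple option, shown below as an illustration only, is the weighted sum rule over the four same-rank scores (grayscale, shape, texture, and the already-fused score):

```python
def multi_classifier_fuse(gray, shape, texture, fused,
                          weights=(0.25, 0.25, 0.25, 0.25)):
    """Sum-rule multi-classifier fusion: combine the four same-rank
    scores for each image pair into one comprehensive recognition
    score. Returns the comprehensive recognition score sequence."""
    return [sum(w * s for w, s in zip(weights, row))
            for row in zip(gray, shape, texture, fused)]

# One row of four scores per image pair; three pairs in total.
comprehensive = multi_classifier_fuse(
    gray=[0.8, 0.5, 0.9],
    shape=[0.6, 0.5, 0.9],
    texture=[0.7, 0.5, 0.9],
    fused=[0.7, 0.5, 0.9],
)
```

More elaborate combiners (majority voting, Borda count, a trained meta-classifier) are also standard choices for this step and could replace the sum rule without changing the surrounding pipeline.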
In a specific implementation, in order to prompt a user when the comprehensive recognition score is smaller than a certain degree, and facilitate the user to further take corresponding measures, the portrait recognition method according to the embodiment of the present invention may further include: and comparing the comprehensive identification score sequence with a preset numerical value of the same order in a second threshold sequence, and issuing alarm information when the comprehensive identification score sequence is smaller than the numerical value of the same order in the second threshold sequence.
Specifically, this step may include: comparing the first comprehensive recognition score with the first value of the second threshold sequence, the second comprehensive recognition score with the second value, and the third comprehensive recognition score with the third value. Whenever a comprehensive recognition score is smaller than the same-rank value of the second threshold sequence, alarm information is issued to prompt the relevant operators.
Fig. 3 is a schematic structural diagram of a portrait recognition apparatus according to an embodiment of the present invention. The portrait recognition device comprises a collection acquisition unit 1, a normalization unit 2, an extraction unit 3, a matching unit 4 and an output unit 5 which are connected in sequence.
The acquisition and acquisition unit 1 is used for acquiring a face video image of an identity information provider and acquiring a face image on an effective identity certificate provided by the identity information provider and a matched face image in a third-party identity data system.
And the normalization unit 2 is used for normalizing the face video image, the face image on the effective identification provided by the identity information provider and the matched face image in the third-party identity data system.
And the extraction unit 3 is used for acquiring face feature data from the normalized face video image, the face image on the effective identification provided by the identity information provider and the matched face image in the third-party identity data system, and respectively using the face feature data as first, second and third face feature data.
And the matching unit 4 is used for performing pairwise cross matching on the first, second and third face feature data.
And the output unit 5 is used for outputting the result of the cross matching of the first, second and third face feature data.
In this portrait recognition apparatus, the collected face video image and the face image on the valid identification provided by the identity information provider are cross-matched with the matching face image in the third-party identity data system. The apparatus can therefore identify not only whether the face image on the identity document matches the face image in the third-party identity data system, but also whether the identity information provider's live face matches each of those two images, improving the accuracy of portrait recognition and making online transactions safer and more reliable.
In order to prompt the user to take corresponding measures when the matching result of the facial image feature data is lower than a certain degree, the facial recognition apparatus according to the embodiment of the present invention may further include an alarm unit 6. The alarm unit 6 may be configured to compare the result of the cross matching with a preset threshold, and issue an alarm message when the result of the cross matching is smaller than the threshold.
In a specific implementation, the matching unit 4 may include a grayscale matching subunit 41, a shape matching subunit 42, a texture matching subunit 43, and a fusion matching subunit 44. Wherein,
and a gray matching subunit 41, configured to compare every two of the face gray feature data in the first, second, and third face feature data in sequence, so as to obtain a gray feature recognition score sequence.
And a shape matching subunit 42, configured to compare every two of the face shape feature data in the first, second, and third face feature data in sequence, so as to obtain a shape feature recognition score sequence.
And a texture matching subunit 43, configured to compare every two of the facial skin texture feature data of the first, second, and third facial feature data in sequence, so as to obtain a texture feature recognition score sequence.
And the fusion matching subunit 44 is configured to perform multi-feature fusion recognition on the numerical values of the same rank in the gray-scale feature recognition score sequence, the shape recognition score sequence, and the texture recognition score sequence, respectively, to obtain a fusion recognition score sequence.
In a specific implementation, in order to further improve the accuracy of matching the face feature data, the matching unit 4 may further include a comprehensive matching subunit 45, configured to perform multi-classifier fusion on the numerical values of the same rank in the gray-scale feature recognition score sequence, the shape recognition score sequence, and the texture recognition score sequence, and the numerical values of the same rank in the fusion recognition score sequence, respectively, to obtain a comprehensive recognition score sequence.
In a specific implementation, the alarm unit 6 may comprise a first alarm subunit 61 and a second alarm subunit 62. The first alarm subunit 61 may be configured to compare the fused recognition score sequence, value by value, with the same-rank values of a preset first threshold sequence, and issue alarm information when a fused score is smaller than its same-rank threshold. The second alarm subunit 62 may be configured to compare the comprehensive recognition score sequence, value by value, with the same-rank values of a preset second threshold sequence, and issue alarm information when a comprehensive score is smaller than its same-rank threshold.
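The unit structure of the apparatus (acquisition → normalization → extraction → matching → output, plus the alarm unit) maps naturally onto a small pipeline object. The sketch below is a structural illustration, not the patent's implementation: the normalize, extract and match stages are injected as hypothetical callables so each unit stays swappable:

```python
class PortraitRecognizer:
    """Minimal pipeline mirroring units 1-6 of the apparatus: the three
    source images are normalized, features are extracted, the features
    are cross-matched, and scores below threshold raise alarms."""

    def __init__(self, normalize, extract, match, thresholds):
        self.normalize = normalize    # normalization unit 2
        self.extract = extract        # extraction unit 3
        self.match = match            # matching unit 4 (returns 3 pair scores)
        self.thresholds = thresholds  # alarm unit 6's threshold sequence

    def run(self, video_img, document_img, third_party_img):
        images = [self.normalize(img)
                  for img in (video_img, document_img, third_party_img)]
        feats = [self.extract(img) for img in images]
        scores = self.match(*feats)   # output unit 5's result
        alarms = [i for i, (s, t) in enumerate(zip(scores, self.thresholds))
                  if s < t]
        return scores, alarms

# Trivial stand-in stages: identity normalize/extract, fixed match scores.
recognizer = PortraitRecognizer(
    normalize=lambda img: img,
    extract=lambda img: img,
    match=lambda a, b, c: [0.9, 0.3, 0.8],
    thresholds=[0.5, 0.5, 0.5],
)
scores, alarms = recognizer.run("video", "document", "third_party")
```

In a real deployment the injected callables would be the moment normalization, the three feature matchers, and the fusion steps described earlier in the specification.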
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be performed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, which may include a ROM, a RAM, a magnetic disk, an optical disk, and the like.
The method and apparatus of the present invention have been described in detail above, but the invention is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A portrait recognition method, comprising:
acquiring a face video image of an identity information provider, and acquiring a face image on a valid identity document provided by the identity information provider and a face image matching the identity information provider in a third-party identity data system;
normalizing the face video image, the face image on the valid identity document provided by the identity information provider, and the matching face image in the third-party identity data system;
acquiring face feature data from the normalized face video image, the face image on the valid identity document provided by the identity information provider, and the matching face image in the third-party identity data system, the face feature data serving as first, second, and third face feature data, respectively; wherein the face feature data comprise face gray feature data, face shape feature data, and face skin texture feature data;
performing pairwise cross matching on the first, second, and third face feature data, including: comparing the face gray feature data among the first, second, and third face feature data pairwise, in sequence, to obtain a gray feature recognition score sequence; comparing the face shape feature data among the first, second, and third face feature data pairwise, in sequence, to obtain a shape feature recognition score sequence;
comparing the face skin texture feature data among the first, second, and third face feature data pairwise, in sequence, to obtain a texture feature recognition score sequence; and performing multi-feature fusion recognition on the values of the same rank in the gray feature recognition score sequence, the shape feature recognition score sequence, and the texture feature recognition score sequence, respectively, to obtain a fusion recognition score sequence;
and outputting the result of the cross matching of the first, second and third face feature data.
2. The portrait recognition method of claim 1, wherein performing pairwise cross matching on the first, second, and third face feature data further comprises: performing multi-classifier fusion on the values of the same rank in the gray feature recognition score sequence, the shape feature recognition score sequence, and the texture feature recognition score sequence, together with the values of the same rank in the fusion recognition score sequence, respectively, to obtain a comprehensive recognition score sequence.
3. The portrait recognition method of claim 2, further comprising: comparing the cross-matching result with a preset threshold, and issuing alarm information when the cross-matching result is smaller than the threshold.
4. The portrait recognition method according to claim 3, wherein the comparing of the cross-matching result with a preset threshold and the issuing of alarm information when the cross-matching result is smaller than the threshold comprise: comparing the values of the fusion recognition score sequence with the values of the same rank in a preset first threshold sequence, respectively, and issuing alarm information when a value in the fusion recognition score sequence is smaller than the value of the same rank in the first threshold sequence.
5. The portrait recognition method according to claim 3, wherein the comparing of the cross-matching result with a preset threshold and the issuing of alarm information when the cross-matching result is smaller than the threshold comprise: comparing the values of the comprehensive recognition score sequence with the values of the same rank in a preset second threshold sequence, respectively, and issuing alarm information when a value in the comprehensive recognition score sequence is smaller than the value of the same rank in the second threshold sequence.
6. A portrait recognition apparatus, comprising:
a capture and acquisition unit, configured to capture a face video image of the identity information provider, and to acquire a face image on a valid identity document provided by the identity information provider and a matching face image in a third-party identity data system;
a normalization unit, configured to normalize the face video image, the face image on the valid identity document provided by the identity information provider, and the matching face image in the third-party identity data system;
an extraction unit, configured to obtain face feature data from the normalized face video image, the face image on the valid identity document provided by the identity information provider, and the matching face image in the third-party identity data system, the face feature data serving as first, second, and third face feature data, respectively; wherein the face feature data comprise face gray feature data, face shape feature data, and face skin texture feature data;
a matching unit, configured to perform pairwise cross matching on the first, second, and third face feature data, the matching unit comprising: a gray matching subunit, configured to compare the face gray feature data among the first, second, and third face feature data pairwise, in sequence, to obtain a gray feature recognition score sequence;
a shape matching subunit, configured to compare the face shape feature data among the first, second, and third face feature data pairwise, in sequence, to obtain a shape feature recognition score sequence;
a texture matching subunit, configured to compare the face skin texture feature data among the first, second, and third face feature data pairwise, in sequence, to obtain a texture feature recognition score sequence;
a fusion matching subunit, configured to perform multi-feature fusion recognition on the values of the same rank in the gray feature recognition score sequence, the shape feature recognition score sequence, and the texture feature recognition score sequence, respectively, to obtain a fusion recognition score sequence; and
an output unit, configured to output the result of the cross matching of the first, second, and third face feature data.
7. The portrait recognition apparatus of claim 6, wherein the matching unit further comprises:
a comprehensive matching subunit, configured to perform multi-classifier fusion on the values of the same rank in the gray feature recognition score sequence, the shape feature recognition score sequence, and the texture feature recognition score sequence, together with the values of the same rank in the fusion recognition score sequence, respectively, to obtain a comprehensive recognition score sequence.
8. The portrait recognition apparatus of claim 7, further comprising:
an alarm unit, configured to compare the cross-matching result with a preset threshold and to issue alarm information when the cross-matching result is smaller than the threshold.
9. The portrait recognition apparatus of claim 8, wherein the alarm unit comprises:
a first alarm subunit, configured to compare the values of the fusion recognition score sequence with the values of the same rank in a preset first threshold sequence and to issue alarm information when a value in the fusion recognition score sequence is smaller than the value of the same rank in the first threshold sequence.
10. The portrait recognition apparatus of claim 8, wherein the alarm unit comprises:
a second alarm subunit, configured to compare the values of the comprehensive recognition score sequence with the values of the same rank in a preset second threshold sequence and to issue alarm information when a value in the comprehensive recognition score sequence is smaller than the value of the same rank in the second threshold sequence.
CN201310714175.4A 2013-12-20 2013-12-20 Portrait identification method and device Active CN103699887B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310714175.4A CN103699887B (en) 2013-12-20 2013-12-20 Portrait identification method and device

Publications (2)

Publication Number Publication Date
CN103699887A CN103699887A (en) 2014-04-02
CN103699887B true CN103699887B (en) 2017-01-18

Family

ID=50361410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310714175.4A Active CN103699887B (en) 2013-12-20 2013-12-20 Portrait identification method and device

Country Status (1)

Country Link
CN (1) CN103699887B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106161397A (en) * 2015-04-21 2016-11-23 富泰华工业(深圳)有限公司 There is the electronic installation of Anti-addiction function, Anti-addiction management system and method
CN105138985A (en) * 2015-08-25 2015-12-09 北京拓明科技有限公司 Real-name authentication method based on WeChat public number and system
CN106250739A (en) * 2016-07-19 2016-12-21 柳州龙辉科技有限公司 A kind of identity recognition device
CN107292620A (en) * 2017-06-14 2017-10-24 浪潮金融信息技术有限公司 Personal identification method and device, computer-readable recording medium, terminal
CN107967453A (en) * 2017-11-24 2018-04-27 河北三川科技有限公司 Hotel occupancy identity checking method and verifying system based on recognition of face

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800017A (en) * 2012-07-09 2012-11-28 高艳玲 Identity verification system based on face recognition
CN203287910U (en) * 2013-05-14 2013-11-13 苏州福丰科技有限公司 Search system based on face recognition
CN103440482A (en) * 2013-09-02 2013-12-11 北方工业大学 Method, system and device for identifying identity document holder based on hidden video
CN103440327A (en) * 2013-09-02 2013-12-11 北方工业大学 Method and system for quick comparison of online wanted men through hidden video


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant