CN109858328B - Face recognition method and device based on video - Google Patents

Face recognition method and device based on video

Info

Publication number
CN109858328B
Authority
CN
China
Prior art keywords
face
face image
key value
detected
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811529841.6A
Other languages
Chinese (zh)
Other versions
CN109858328A (en)
Inventor
余学儒
李琛
王鹏飞
段杰斌
王修翠
傅豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai IC R&D Center Co Ltd
Original Assignee
Shanghai IC R&D Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai IC R&D Center Co Ltd filed Critical Shanghai IC R&D Center Co Ltd
Priority to CN201811529841.6A priority Critical patent/CN109858328B/en
Publication of CN109858328A publication Critical patent/CN109858328A/en
Application granted granted Critical
Publication of CN109858328B publication Critical patent/CN109858328B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a video-based face recognition method, which comprises the following steps. S01: acquiring a frame of the face image to be detected from the face video to be recognized, and performing face location and recognition on the face image to be detected. S02: establishing a key value pair set corresponding to the face image to be detected. S03: when the key value pair set corresponding to the face image to be detected is not empty, adding the key value pair set to the state chain queue, and when the number of key value pair sets in the state chain queue equals k, entering step S04. S04: according to the k key value pair sets in the state chain queue and the m standard face images stored in the white list, respectively calculating the total similarity between the k frames of face images to be detected and each standard face image in the white list. S05: arranging the calculated total similarities in descending order to form a B sequence, and outputting the recognition result according to the B sequence. The video-based face recognition method and device provided by the invention can further reduce the error rate of face recognition.

Description

Face recognition method and device based on video
Technical Field
The invention relates to the field of data identification, in particular to a method and a device for face recognition based on video.
Background
Face recognition refers to selecting, from a pre-stored white list, the face that matches the identity of the face under test. At present, visible-light face recognition is affected by illumination, background and other factors, so misjudgments occur in practical application. When the white list is too large, recognition accuracy drops noticeably; for this reason, algorithm testing uses the Cumulative Match Characteristic (CMC) curve, which describes how the accuracy varies over the top-ranked candidates.
Because single-image recognition suffers from low accuracy, video-based multi-frame detection is increasingly used for face recognition. In the prior art, however, most video multi-frame methods focus on counting the number of frames that hit each candidate and neglect the similarity of each individual hit; a method that only counts hits fails to account for the case of a few hits with very high similarity, so even video multi-frame detection may not recognize faces accurately. A video multi-frame face recognition technique is therefore needed that brings many low-similarity hits and a few high-similarity hits into one unified dimension when computing the recognition confidence.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a video-based face recognition method and device that fold many low-similarity hits and a few high-similarity hits into one unified dimension of calculation, thereby further reducing the error rate of face recognition.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a video-based face recognition method comprises the following steps:
S01: acquiring a frame of the face image to be detected from the face video to be recognized, and performing face location and recognition on the face image to be detected;
S02: establishing a key value pair set corresponding to the face image to be detected, wherein the keys in the key value pair set are identity numbers in a pre-stored white list, the values in the key value pair set are the similarities between the face features in the face image to be detected and the face features in the standard face images corresponding to those identity numbers in the white list, and the values in the key value pair set are all greater than or equal to a similarity threshold; the white list stores m standard face images and the corresponding identity numbers, and m is an integer greater than or equal to 1;
S03: when the key value pair set corresponding to the face image to be detected is empty, emptying the key value pair set and the state chain queue, and returning to step S01; the state chain queue is a queue that orders key value pair sets with time as the index;
when the key value pair set corresponding to the face image to be detected is not empty, adding the key value pair set to the state chain queue; when the number of key value pair sets in the state chain queue is smaller than k, acquiring the frame following the face image to be detected that corresponds to the state chain queue and returning to step S02; when the number of key value pair sets in the state chain queue equals k, proceeding to step S04; the length of the state chain queue equals the number of key value pair sets in the state chain queue, and k is an integer greater than 0;
S04: according to the k key value pair sets in the state chain queue and the m standard face images stored in the white list, respectively calculating the total similarity between the k frames of face images to be detected and each standard face image in the white list; the total similarity of each standard face image is obtained by weighting the similarities of that standard face image across the k key value pair sets;
S05: arranging the calculated total similarities in descending order to form a B sequence {B_1, B_2, …, B_n}, and outputting the recognition result according to the B sequence:
when n is greater than 1 and the difference between B_1 and B_2 is greater than or equal to a separation threshold, outputting the identity number in the white list corresponding to B_1 and the corresponding total similarity, and emptying the state chain queue;
when n is greater than 1 and the difference between B_1 and B_2 is smaller than the separation threshold, deleting the key value pair set at the head of the state chain queue, which corresponds to the earliest frame of the face image to be detected, and returning to step S01;
when n is equal to 1, outputting the identity number in the white list corresponding to B_1 and the corresponding total similarity, and emptying the state chain queue; wherein B_1 denotes the maximum total similarity between the k frames of face images to be detected in the state chain queue and the standard face images in the white list, n is a positive integer, and 1 ≤ n ≤ m.
Further, when the face in the face image to be detected cannot be located and recognized in step S01, the user is prompted to adjust the face position and step S01 is repeated.
Further, if the key value pair set in step S03 is still empty after C cycles, a no-match signal is output, wherein C is an integer greater than 1.
Further, the method for calculating the similarity in the key value pair set in step S02 is as follows:
a_{i,j} = \sum_{m} w_m \, a_m

wherein a_{i,j} represents the similarity between the j-th frame of the face image to be detected and the i-th standard face image stored in the white list, a_m represents the similarity between one face feature in the j-th frame of the face image to be detected and the corresponding face feature in the i-th standard face image stored in the white list, w_m is the weight corresponding to a_m, and j is an integer less than or equal to k.
Further, the total similarity A_i between the k frames of face images to be detected in step S04 and the i-th standard face image stored in the white list is calculated as follows:

A_i = \sum_{j=1}^{k} a_{i,j}

wherein, when the key value pair set corresponding to the j-th frame of the face image to be detected contains no a_{i,j}, a_{i,j} is set to 0.
Further, the number of total similarities is less than or equal to m, where m is the number of standard face images stored in the white list.
Further, when a similarity in the key value pair set in step S02 is greater than or equal to the similarity threshold, that similarity and the corresponding identity number in the white list are retained in the key value pair set; when a similarity is less than the similarity threshold, that similarity and the corresponding identity number in the white list are deleted from the key value pair set.
The invention also provides a video-based face recognition device, which comprises a camera, an image reading module, a face matching module, a state chain queue updating module, a state chain queue judging module and an output module. The camera acquires a video and transmits it to the image reading module; the image reading module reads a single frame of the face image to be detected from the video and transmits it to the face matching module; the face matching module calculates the similarity between each frame of the face image to be detected and each standard face image stored in the white list to form a key value pair set, wherein the keys in the key value pair set are identity numbers in the pre-stored white list, the values in the key value pair set are the similarities between the face image to be detected and the standard face images corresponding to those identity numbers in the white list, and the values in the key value pair set are all greater than or equal to a similarity threshold. A non-empty key value pair set is input into the state chain queue updating module; when the number of key value pair sets in the state chain queue is smaller than k, the frame following the face image to be detected that corresponds to the state chain queue is acquired, until the number of key value pair sets in the state chain queue updating module equals k, whereupon the state chain queue is transmitted to the state chain queue judging module, which calculates the total similarity corresponding to each standard face image stored in the white list and arranges the calculated total similarities in descending order to form a B sequence {B_1, B_2, …, B_n}; the B sequence is transmitted to the output module, and the output module judges and outputs the recognition result according to the B sequence. The white list stores m standard face images and the corresponding identity numbers, m is an integer greater than or equal to 1, n is a positive integer, and 1 ≤ n ≤ m.
Further, the face matching module further includes a face positioning unit, configured to outline a minimum face area.
The beneficial effects of the invention are as follows: if the single-frame matching accuracy is A_i, the multi-frame matching accuracy is 1 − Π(1 − A_i), so the more frames are matched, the higher the accuracy. The invention fully considers the degree of match between each frame of the video and the images in the white list, further reduces the misjudgment rate of the face recognition system without affecting the face recognition algorithm itself, and at the same time reduces the difficulty of selecting the face recognition threshold.
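As a quick numerical illustration (the 0.9 single-frame accuracy below is only an assumed figure, not a value from the invention), with k = 3 matched frames:

1 - \prod_{j=1}^{3} (1 - 0.9) = 1 - 0.1^{3} = 0.999

so the misjudgment rate drops from 10% for a single frame to 0.1% over three frames.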
Drawings
Fig. 1 is a flowchart of a method for video-based face recognition according to the present invention.
Fig. 2 is a schematic diagram of a video-based face recognition device according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following detailed description of the specific embodiments of the present invention will be given with reference to the accompanying drawings.
As shown in fig. 1, the face recognition method based on video provided by the invention comprises the following steps:
S01: acquiring a frame of the face image to be detected from the face video to be recognized, and performing face location and recognition on the face image to be detected.
When the face in the face image to be detected cannot be located and recognized in this step, the user is prompted to adjust the face position and step S01 is repeated.
S02: establishing a key value pair set corresponding to the face image to be detected, wherein the keys in the key value pair set are identity numbers in a pre-stored white list, the values in the key value pair set are the similarities between the face features in the face image to be detected and the face features in the standard face images corresponding to those identity numbers in the white list, and the values in the key value pair set are all greater than or equal to a similarity threshold; the white list stores m standard face images and the corresponding identity numbers, and m is an integer greater than or equal to 1.
The specific steps for establishing the key value pair set are as follows: storing a frame of the face image to be detected, performing face location on it, calculating the similarities between the face features in the face image to be detected and each of the m standard face images stored in the white list, and storing each calculated similarity together with the corresponding standard face image information from the white list into the key value pair set. When a similarity is smaller than the similarity threshold, that similarity and the corresponding identity number in the white list are deleted from the key value pair set. A similarity below the similarity threshold is meaningless for recognition, and removing it directly reduces unnecessary computation.
The method for calculating the similarity in the key value pair set comprises the following steps:
a_{i,j} = \sum_{m} w_m \, a_m

wherein a_{i,j} represents the similarity between the j-th frame of the face image to be detected and the i-th standard face image stored in the white list, a_m represents the similarity between one face feature in the j-th frame of the face image to be detected and the corresponding face feature in the i-th standard face image stored in the white list, w_m is the weight corresponding to a_m, and j is an integer less than or equal to k.
S03: when the key value pair set corresponding to the image is empty, the key value pair set and the state chain queue are emptied, and the method returns to step S01.
If the key value pair set in step S03 is still empty after A such cycles, a no-match signal is output; A is a freely chosen integer greater than or equal to 1.
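To make steps S02 and S03 concrete, the following Python sketch (illustrative only; the function names, the cosine-similarity metric and the dictionary layout are assumptions, not the patent's reference implementation) builds the key value pair set for one frame as the weighted sum of per-feature similarities and keeps only entries at or above the similarity threshold.

```python
import math

def feature_similarity(f, g):
    # Per-feature similarity in [0, 1]; the patent does not fix a metric,
    # so cosine similarity mapped to [0, 1] is used here purely as an assumption.
    dot = sum(x * y for x, y in zip(f, g))
    norm = math.sqrt(sum(x * x for x in f)) * math.sqrt(sum(y * y for y in g))
    return 0.0 if norm == 0 else (dot / norm + 1) / 2

def build_key_value_pair_set(frame_features, whitelist, weights, sim_threshold):
    """Step S02: return {identity_number: a_ij}, keeping only a_ij >= sim_threshold.

    frame_features : list of feature vectors extracted from the frame to be detected
    whitelist      : {identity_number: list of feature vectors of the standard face image}
    weights        : weight w_m for each per-feature similarity a_m
    """
    kv_set = {}
    for identity, std_features in whitelist.items():
        # a_ij = sum_m w_m * a_m : weighted sum of per-feature similarities
        a_ij = sum(w * feature_similarity(f, g)
                   for w, f, g in zip(weights, frame_features, std_features))
        if a_ij >= sim_threshold:      # entries below the threshold are discarded (step S02)
            kv_set[identity] = a_ij
    return kv_set                      # an empty result triggers the step S03 reset
```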
When the key value pair set corresponding to the face image to be detected is not empty, the key value pair set is added to the state chain queue; when the number of key value pair sets in the state chain queue is smaller than k, the frame following the face image to be detected that corresponds to the state chain queue is acquired, and the method returns to step S02; when the number of key value pair sets in the state chain queue equals k, the method proceeds to step S04. The length of the state chain queue equals the number of key value pair sets in the state chain queue, and k is an integer greater than 0. The state chain queue is a queue that orders key value pair sets with time as the index; k is a user-defined value, and its optimum depends on two factors: the accuracy of the recognition algorithm itself and the desired overall accuracy.
S04: according to the k key value pair sets in the state chain queue and the m standard face images stored in the white list, respectively calculating the total similarity between the k frames of face images to be detected and each standard face image in the white list; the total similarity of each standard face image is obtained by weighting the similarities of that standard face image across the k key value pair sets.
The total similarity A_i between the k frames of face images to be detected and the i-th standard face image stored in the white list is calculated as follows:

A_i = \sum_{j=1}^{k} a_{i,j}

wherein, when the key value pair set corresponding to the j-th frame of the face image to be detected contains no a_{i,j}, a_{i,j} is set to 0.
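A minimal sketch of the state chain queue handling and the total-similarity computation follows (again only an illustration; the deque-based queue and the plain summation with missing a_ij treated as 0 are one possible reading of the weighting described in step S04).

```python
from collections import deque

def update_state_chain(queue, kv_set, k):
    """Steps S03/S04 queue handling: clear the queue on an empty key value pair set,
    otherwise append it and report whether k sets have been collected."""
    if not kv_set:                 # no similarity reached the threshold for this frame
        queue.clear()
        return False               # caller returns to step S01
    queue.append(kv_set)
    return len(queue) == k         # True: proceed to step S04

def total_similarities(queue):
    """Step S04: A_i accumulated over the k key value pair sets; a missing a_ij counts as 0."""
    totals = {}
    for kv_set in queue:
        for identity, a_ij in kv_set.items():
            totals[identity] = totals.get(identity, 0.0) + a_ij
    return totals                  # at most m entries, one per white-list identity that appeared
```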
In the invention, because the values in the key value pair sets are all greater than or equal to the similarity threshold, the k key value pair sets in the state chain queue may, when the total similarity is calculated, correspond to only n of the identity numbers in the white list; that is, the k key value pair sets in the state chain queue contain none of the remaining m − n identity numbers in the white list. The number n of calculated total similarities is therefore necessarily at most m. In other words, if P denotes the union of the k key value pair sets in the state chain queue, then P contains n identity numbers from the white list, and the total similarity of the standard face image corresponding to each such identity number is obtained by weighting the similarities of that standard face image across the k key value pair sets.
S05: the total similarities between the state chain queue and the m standard face images in the white list are arranged in descending order to form a B sequence {B_1, B_2, …, B_n}, and the recognition result is output according to the B sequence, where B_1 denotes the maximum total similarity between the k frames of face images to be detected in the state chain queue and the standard face images in the white list, n is a positive integer, and 1 ≤ n ≤ m. For example, if the total similarity between the k frames of face images to be detected in the video and the i-th standard face image in the white list is the largest, then B_1 = A_i.
When n is greater than 1 and the difference between B_1 and B_2 is greater than or equal to the separation threshold, the identity number in the white list corresponding to B_1 and the corresponding total similarity are output, and the state chain queue is emptied; the separation threshold is a value set in advance. In this case, the method has found that the face image to be detected in the video is most similar to the standard face image in the white list corresponding to B_1, and that this similarity is far greater than all the others, which indicates that the face in the video is the standard face image in the white list corresponding to B_1.
When n is greater than 1 and the difference between B_1 and B_2 is smaller than the separation threshold, the key value pair set at the head of the state chain queue, which corresponds to the earliest frame of the face image to be detected, is deleted, and the method returns to step S01. In this case, the method has found that the image in the video is most similar to the two standard face images in the white list corresponding to B_1 and B_2; since there is necessarily only one face to be detected in the video image, concluding that it resembles both equally well means the recognition is not yet conclusive, and further recognition is required.
When n is equal to 1, the identity number in the white list corresponding to B_1 and the corresponding total similarity are output, and the state chain queue is emptied. In this case, the face to be detected in the video differs greatly from all the other standard face images in the white list, and whether the face in the video is the standard face image in the white list corresponding to B_1 can then be further judged from the output total similarity.
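The decision rule of step S05 can be sketched as follows (a hypothetical helper operating on the deque and totals from the previous sketches; the return conventions are assumptions made for this example).

```python
def decide(queue, totals, separation_threshold):
    """Step S05: sort the total similarities into the B sequence and decide.

    Called only after k non-empty key value pair sets have been collected,
    so the B sequence contains at least one entry."""
    b_seq = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)  # [(identity, A_i), ...]
    if len(b_seq) == 1 or b_seq[0][1] - b_seq[1][1] >= separation_threshold:
        queue.clear()                  # empty the state chain queue
        return b_seq[0]                # identity number and total similarity for B_1
    queue.popleft()                    # B_1 and B_2 too close: drop the oldest key value pair set
    return None                        # caller returns to step S01 and keeps collecting frames
```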
As shown in fig. 2, the video-based face recognition device provided by the invention comprises a camera, an image reading module, a face matching module, a state chain queue updating module, a state chain queue judging module and an output module, wherein the face matching module further comprises a face positioning unit for delineating the minimum face region. The camera acquires a video and transmits it to the image reading module; the image reading module reads a single frame of the face image to be detected from the video and transmits it to the face matching module; the face matching module calculates the similarity between each frame of the face image to be detected and each standard face image stored in the white list to form a key value pair set, wherein the keys in the key value pair set are identity numbers in the pre-stored white list, the values in the key value pair set are the similarities between the face image to be detected and the standard face images corresponding to those identity numbers in the white list, and the values in the key value pair set are all greater than or equal to the similarity threshold. A non-empty key value pair set is input into the state chain queue updating module; when the number of key value pair sets in the state chain queue is smaller than k, the frame following the face image to be detected that corresponds to the state chain queue is acquired, until the number of key value pair sets in the state chain queue updating module equals k, whereupon the state chain queue is transmitted to the state chain queue judging module, which calculates the total similarity corresponding to each standard face image stored in the white list and arranges the calculated total similarities in descending order to form a B sequence {B_1, B_2, …, B_n}; the B sequence is transmitted to the output module, and the output module judges and outputs the recognition result according to the B sequence. The white list stores m standard face images and the corresponding identity numbers, m is an integer greater than or equal to 1, n is a positive integer, and 1 ≤ n ≤ m.
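Tying the previous sketches together, the module chain of fig. 2 corresponds roughly to the following driver loop (illustrative only; recognize and its arguments are names invented for this example, and it reuses the hypothetical helpers defined above).

```python
from collections import deque

def recognize(frames_features, whitelist, weights, sim_threshold, k, separation_threshold):
    """End-to-end loop mirroring the module chain: image reading -> face matching ->
    state chain queue updating -> state chain queue judging -> output.
    frames_features yields the feature list of each located face, or None when no face is found."""
    queue = deque()
    for frame_features in frames_features:
        if frame_features is None:     # face could not be located: prompt and read the next frame
            continue
        kv_set = build_key_value_pair_set(frame_features, whitelist, weights, sim_threshold)
        if not update_state_chain(queue, kv_set, k):
            continue                   # queue emptied, or fewer than k sets collected so far
        result = decide(queue, total_similarities(queue), separation_threshold)
        if result is not None:
            return result              # (identity number in the white list, total similarity)
    return None                        # no match found in the video
```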
The foregoing description is only of the preferred embodiments of the present invention, and the embodiments are not intended to limit the scope of the invention, so that all changes made in the structure and details of the invention which may be regarded as equivalents thereof are intended to be included within the scope of the invention as defined in the following claims.

Claims (9)

1. A video-based face recognition method, characterized by comprising the following steps:
S01: acquiring a frame of the face image to be detected from the face video to be recognized, and performing face location and recognition on the face image to be detected;
S02: establishing a key value pair set corresponding to the face image to be detected, wherein the keys in the key value pair set are identity numbers in a pre-stored white list, the values in the key value pair set are the similarities between the face features in the face image to be detected and the face features in the standard face images corresponding to those identity numbers in the white list, and the values in the key value pair set are all greater than or equal to a similarity threshold; the white list stores m standard face images and the corresponding identity numbers, and m is an integer greater than or equal to 1;
S03: when the key value pair set corresponding to the face image to be detected is empty, emptying the key value pair set and the state chain queue, and returning to step S01; the state chain queue is a queue that orders key value pair sets with time as the index;
when the key value pair set corresponding to the face image to be detected is not empty, adding the key value pair set to the state chain queue; when the number of key value pair sets in the state chain queue is smaller than k, acquiring the frame following the face image to be detected that corresponds to the state chain queue and returning to step S02; when the number of key value pair sets in the state chain queue equals k, proceeding to step S04; the length of the state chain queue equals the number of key value pair sets in the state chain queue, and k is an integer greater than 0;
S04: according to the k key value pair sets in the state chain queue and the m standard face images stored in the white list, respectively calculating the total similarity between the k frames of face images to be detected and each standard face image in the white list; the total similarity of each standard face image is obtained by weighting the similarities of that standard face image across the k key value pair sets;
S05: arranging the calculated total similarities in descending order to form a B sequence {B_1, B_2, …, B_n}, and outputting the recognition result according to the B sequence:
when n is greater than 1 and the difference between B_1 and B_2 is greater than or equal to a separation threshold, outputting the identity number in the white list corresponding to B_1 and the corresponding total similarity, and emptying the state chain queue;
when n is greater than 1 and the difference between B_1 and B_2 is smaller than the separation threshold, deleting the key value pair set at the head of the state chain queue, which corresponds to the earliest frame of the face image to be detected, and returning to step S01;
when n is equal to 1, outputting the identity number in the white list corresponding to B_1 and the corresponding total similarity, and emptying the state chain queue; wherein B_1 denotes the maximum total similarity between the k frames of face images to be detected in the state chain queue and the standard face images in the white list, n is a positive integer, and 1 ≤ n ≤ m.
2. The video-based face recognition method according to claim 1, wherein, when the face in the face image to be detected cannot be located and recognized in step S01, the user is prompted to adjust the face position and step S01 is repeated.
3. The video-based face recognition method of claim 1, wherein when the set of key-value pairs in step S03 is still empty after the C cycles, a non-matching signal is output; wherein C is an integer greater than 1.
4. The video-based face recognition method according to claim 1, wherein the method for calculating the similarity in the set of key value pairs in step S02 is as follows:
a_{i,j} = \sum_{m} w_m \, a_m

wherein a_{i,j} represents the similarity between the j-th frame of the face image to be detected and the i-th standard face image stored in the white list, a_m represents the similarity between one face feature in the j-th frame of the face image to be detected and the corresponding face feature in the i-th standard face image stored in the white list, w_m is the weight corresponding to a_m, and j is an integer less than or equal to k.
5. The video-based face recognition method according to claim 4, wherein the total similarity A_i between the k frames of face images to be detected in step S04 and the i-th standard face image stored in the white list is calculated as follows:

A_i = \sum_{j=1}^{k} a_{i,j}

wherein, when the key value pair set corresponding to the j-th frame of the face image to be detected contains no a_{i,j}, a_{i,j} is set to 0.
6. The video-based face recognition method of claim 5, wherein the number of total similarities is less than or equal to m, and m is the number of standard face images stored in the white list.
7. The video-based face recognition method according to claim 4, wherein, when a similarity a_{i,j} in the key value pair set in step S02 is greater than or equal to the similarity threshold, that similarity and the corresponding identity number in the white list are retained in the key value pair set, and when a similarity a_{i,j} is less than the similarity threshold, that similarity and the corresponding identity number in the white list are deleted from the key value pair set.
8. A device for video-based face recognition, characterized by comprising a camera, an image reading module, a face matching module, a state chain queue updating module, a state chain queue judging module and an output module, wherein the camera is used for acquiring a video and transmitting the video to the image reading module, and the image reading module is used for reading a single frame of the face image to be detected from the video and transmitting it to the face matching module; the face matching module calculates the similarity between each frame of the face image to be detected and each standard face image stored in a white list to form a key value pair set, wherein the keys in the key value pair set are identity numbers in the pre-stored white list, the values in the key value pair set are the similarities between the face image to be detected and the standard face images corresponding to those identity numbers in the white list, and the values in the key value pair set are all greater than or equal to a similarity threshold; a non-empty key value pair set is input into the state chain queue updating module, and when the number of key value pair sets in the state chain queue is smaller than k, the frame following the face image to be detected that corresponds to the state chain queue is acquired, until the number of key value pair sets in the state chain queue updating module equals k, whereupon the state chain queue is transmitted to the state chain queue judging module, which calculates the total similarity corresponding to each standard face image stored in the white list and arranges the calculated total similarities in descending order to form a B sequence {B_1, B_2, …, B_n}; the B sequence is transmitted to the output module, and the output module judges and outputs the recognition result according to the B sequence, wherein the white list stores m standard face images and the corresponding identity numbers, m is an integer greater than or equal to 1, n is a positive integer, and 1 ≤ n ≤ m.
9. The apparatus for video-based face recognition as defined in claim 8, wherein the face matching module further comprises a face positioning unit for delineating a minimum face region.
CN201811529841.6A 2018-12-14 2018-12-14 Face recognition method and device based on video Active CN109858328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811529841.6A CN109858328B (en) 2018-12-14 2018-12-14 Face recognition method and device based on video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811529841.6A CN109858328B (en) 2018-12-14 2018-12-14 Face recognition method and device based on video

Publications (2)

Publication Number Publication Date
CN109858328A CN109858328A (en) 2019-06-07
CN109858328B true CN109858328B (en) 2023-06-02

Family

ID=66891213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811529841.6A Active CN109858328B (en) 2018-12-14 2018-12-14 Face recognition method and device based on video

Country Status (1)

Country Link
CN (1) CN109858328B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291627B (en) * 2020-01-16 2024-04-19 广州酷狗计算机科技有限公司 Face recognition method and device and computer equipment
CN113113094A (en) * 2021-03-15 2021-07-13 广州零端科技有限公司 Medical information processing method, system, device and medium based on face recognition

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103235825B (en) * 2013-05-08 2016-05-25 重庆大学 A kind of magnanimity face recognition search engine design method based on Hadoop cloud computing framework
WO2016101766A1 (en) * 2014-12-23 2016-06-30 北京奇虎科技有限公司 Method and device for obtaining similar face images and face image information
CN107424266A (en) * 2017-07-25 2017-12-01 上海青橙实业有限公司 The method and apparatus of recognition of face unblock
CN107403173B (en) * 2017-08-21 2020-10-09 合肥麟图信息科技有限公司 Face recognition system and method
CN108038422B (en) * 2017-11-21 2021-12-21 平安科技(深圳)有限公司 Camera device, face recognition method and computer-readable storage medium

Also Published As

Publication number Publication date
CN109858328A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
WO2019119505A1 (en) Face recognition method and device, computer device and storage medium
US9070041B2 (en) Image processing apparatus and image processing method with calculation of variance for composited partial features
US8867828B2 (en) Text region detection system and method
US8437511B2 (en) Biometric authentication system
CN105335726B (en) Face recognition confidence coefficient acquisition method and system
US20130279740A1 (en) Identifying Multimedia Objects Based on Multimedia Fingerprint
KR102399025B1 (en) Improved data comparison method
US20050259873A1 (en) Apparatus and method for detecting eyes
CN111915015B (en) Abnormal value detection method and device, terminal equipment and storage medium
CN109858328B (en) Face recognition method and device based on video
CN112613471B (en) Face living body detection method, device and computer readable storage medium
JP5214679B2 (en) Learning apparatus, method and program
US20230229897A1 (en) Distances between distributions for the belonging-to-the-distribution measurement of the image
US9330662B2 (en) Pattern classifier device, pattern classifying method, computer program product, learning device, and learning method
JP2013117861A (en) Learning device, learning method and program
WO2006009035A1 (en) Signal detecting method, signal detecting system, signal detecting program and recording medium on which the program is recorded
CN116958868A (en) Method and device for determining similarity between text and video
CN117197864A (en) Certificate classification recognition and crown-free detection method and system based on deep learning
CN114612967B (en) Face clustering method, device, equipment and storage medium
CN112686129B (en) Face recognition system and method
US20230306273A1 (en) Information processing device, information processing method, and recording medium
EP3800578A1 (en) Hierarchical sampling for object identification
CN105184275B (en) Infrared local face key point acquisition method based on binary decision tree
CN113420699A (en) Face matching method and device and electronic equipment
CN112580462A (en) Feature point selection method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant