CN113255631B - Similarity threshold updating method, face recognition method and related device - Google Patents

Similarity threshold updating method, face recognition method and related device

Info

Publication number
CN113255631B
CN113255631B (application CN202110802852.2A)
Authority
CN
China
Prior art keywords
attribute
face image
similarity
face
attributes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110802852.2A
Other languages
Chinese (zh)
Other versions
CN113255631A (en)
Inventor
张兴明
殷俊
葛主贝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110802852.2A priority Critical patent/CN113255631B/en
Publication of CN113255631A publication Critical patent/CN113255631A/en
Application granted granted Critical
Publication of CN113255631B publication Critical patent/CN113255631B/en
Priority to PCT/CN2021/128816 priority patent/WO2023284185A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/178 - Human faces, e.g. facial parts, sketches or expressions; estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a similarity threshold updating method, a face recognition method and a related device. The method includes: receiving a first face image and performing attribute analysis on it to obtain the attributes of the first face image; comparing the features of the first face image with those of a plurality of second face images in a database to obtain a plurality of similarity values; adding the similarity values to at least one attribute false-alarm table based on the attributes of the first face image, where each attribute false-alarm table corresponds to a single type of attribute or to a combination of several attributes of different types; in response to the number of similarity values contained in any attribute false-alarm table reaching a first threshold, extracting a candidate similarity value from the similarity values of that table; and updating the current similarity threshold with the candidate similarity value. With this scheme, the similarity threshold can be updated so that the false-alarm rate for faces that are easily misidentified is reduced and the accuracy for faces that are hard to recognize is improved.

Description

Similarity threshold updating method, face recognition method and related device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a similarity threshold updating method, a face recognition method, and a related apparatus.
Background
As people move around more frequently, the collected face images need to be placed under deployment control (watch-list alarming) or clustered into archives when they are managed. For both tasks a similarity threshold is usually set, and face images whose similarity values exceed the threshold are classified into the same class, which makes portrait management convenient.
In the prior art, the similarity threshold is usually set to a single initial value and never updated. Experimental test data show, however, that adults, children and people with different skin colors require different similarity thresholds to achieve the same false-alarm rate. For example, an initial threshold may satisfy the false-alarm rate for deployment control or archive clustering among adults, but when children appear in the monitored area the number of false alarms multiplies sharply, so the false-alarm rate rises. Similarly, at the same false-alarm rate the similarity between a masked face and an unmasked face is lower than the similarity between two unmasked faces, so if the threshold is set according to unmasked faces, the recognition rate and archive-clustering success rate for masked faces drop. In view of this, how to update the similarity threshold so as to reduce the false-alarm rate for faces that are easily misidentified while improving the accuracy for faces that are hard to recognize has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a similarity threshold updating method, a face recognition method and a related device that can update the similarity threshold for different attribute combinations, thereby reducing the false-alarm rate for faces that are easily misidentified and improving the accuracy for faces that are hard to recognize.
In order to solve the above technical problem, a first aspect of the present application provides a similarity threshold updating method, which includes: receiving a first face image and performing attribute analysis on it to obtain the attributes of the first face image, where the attributes include at least one of occlusion condition, age, gender and skin color; comparing the features of the first face image with those of a plurality of second face images in a database to obtain a plurality of similarity values; adding the similarity values to at least one attribute false-alarm table based on the attributes of the first face image, where each attribute false-alarm table corresponds to a single type of attribute or to a combination of several attributes of different types; in response to the number of similarity values contained in any attribute false-alarm table reaching a first threshold, extracting a candidate similarity value from the similarity values of that table; and updating the current similarity threshold with the candidate similarity value.
In order to solve the above technical problem, a second aspect of the present application provides a face recognition method, which includes: receiving a first face image and comparing its features with those of a plurality of second face images in a database to obtain a plurality of similarity values; obtaining a current similarity threshold, where the similarity threshold is obtained according to the method of the first aspect; and performing deployment control and/or archive clustering on the first face image and the plurality of second face images based on the current similarity threshold and the plurality of similarity values to obtain a first deployment-control result and/or a first archive-clustering result.
To solve the above technical problem, a third aspect of the present application provides an electronic device, including: a memory and a processor coupled to each other, wherein the memory stores program data, and the processor calls the program data to execute the method of the first aspect or the second aspect.
In order to solve the above technical problem, a fourth aspect of the present application provides a computer-readable storage medium having stored thereon program data, which when executed by a processor, implements the method of the first aspect or the second aspect.
The beneficial effect of this application is: after a first face image is obtained, attribute analysis is performed on it to obtain its attributes, and its features are compared with those of the second face images stored in a database to obtain a plurality of similarity values. The similarity values are added to attribute false-alarm tables based on the attributes of the first face image, each table corresponding to a single type of attribute or to a combination of several attributes of different types, and the similarity values in each table are counted. When the number of similarity values in any attribute false-alarm table exceeds a first threshold, a candidate similarity value is extracted from that table and used to update the current similarity threshold. In this way the similarity threshold corresponding to any attribute or attribute combination can be updated automatically, so that when face images with that attribute or attribute combination appear in the monitored area, the false-alarm rate of face recognition is reduced and the accuracy for faces that are hard to recognize is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without inventive effort. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a similarity threshold updating method according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a similarity threshold updating method according to the present application;
FIG. 3 is a schematic flow chart diagram illustrating an embodiment of a face recognition method according to the present application;
FIG. 4 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a similarity threshold updating method according to the present application, including:
S101: receiving a first face image and performing attribute analysis on the first face image to obtain attributes of the first face image, where the attributes include at least one of occlusion condition, age, gender and skin color.
Specifically, after the first face image is obtained, attribute analysis is performed on it and its attributes are detected and extracted from the image; depending on the attribute analysis method used, the attributes include at least one of occlusion condition, age, gender and skin color.
In one application mode, a first face image is received, a face detection model is used to detect the first face image, and the attributes corresponding to the face features of the first face image are derived from those face features.
In another application mode, a first face image is received and a pre-trained attribute analysis model is used to extract the attributes of the first face image, so that the attribute analysis model outputs the attributes corresponding to the first face image.
In a specific application scenario, the attributes of the first face image include facial occlusion, no facial occlusion, child, young, elderly, male, female, yellow skin, white skin and black skin. Other types of attributes may be included in other application scenarios, which is not specifically limited here.
S102: comparing the features of the first face image with those of a plurality of second face images in the database, respectively, to obtain a plurality of similarity values.
Specifically, the database contains a plurality of second face images whose feature information is stored with them. The feature information of the first face image is extracted and compared with the features of the plurality of second face images in the database, yielding the similarity values of the first face image relative to each of the second face images.
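To make the feature comparison in S102 concrete, the following Python sketch (not part of the original disclosure) computes one similarity value per second face image using cosine similarity over already-extracted feature vectors; the patent does not prescribe a particular feature extractor or similarity metric, so the function and variable names here are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two face feature vectors, in the range [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compare_with_database(first_feature: np.ndarray,
                          second_features: list) -> list:
    # One similarity value for each second face image stored in the database.
    return [cosine_similarity(first_feature, f) for f in second_features]
```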
S103: adding the plurality of similarity values to at least one attribute false-alarm table based on the attributes of the first face image, where an attribute false-alarm table corresponds to a single type of attribute or to a combination of several attributes of different types.
Specifically, pre-generated attribute false-alarm tables are retrieved, where an attribute false-alarm table may correspond to a single type of attribute or to a combination of several attributes of different types.
In one application mode, a false-alarm table corresponds to attributes of a single face image: a single-type table corresponds to one attribute, for example a facial-occlusion table or a child table, while a combined table corresponds to several attributes of different types, for example a facial-occlusion child table or a facial-occlusion child yellow-skin table.
Further, the attributes of the first face image are matched against the single-type attribute or the attribute combination of each false-alarm table, and once at least part of the attributes of the first face image match the attribute or attribute combination of a table, the similarity values of the first face image relative to the second face images are added to that table.
In another application mode, a false-alarm table corresponds to attributes of each of two face images: a single-type table corresponds to one attribute of each image, for example a facial-occlusion/facial-occlusion table, while a combined table corresponds to several attributes of each image, for example a facial-occlusion-child/facial-occlusion-child table or a facial-occlusion-child/non-occluded-child table.
Further, the attributes of the second face images are stored in the database; the attributes of the first face image and of a second face image are combined and matched against the attribute or attribute combination of each false-alarm table, and the similarity value between the first and second face images is added to every table that is matched successfully.
S104: in response to the number of similarity values contained in any attribute false-alarm table reaching a first threshold, extracting a candidate similarity value from the similarity values of the table that has reached the first threshold.
Specifically, when the number of similarity values added to any attribute false-alarm table reaches the first threshold, the similarity values in that table are sorted and a candidate similarity value is selected from them.
In one application mode, the first threshold is the reciprocal of the currently configured false-alarm rate. The similarity values added to each false-alarm table are arranged in descending order, and when the number of similarity values in any table reaches the first threshold, the top first-threshold similarity values of that table are averaged to obtain the candidate similarity value.
In another application mode, the first threshold is likewise the reciprocal of the currently configured false-alarm rate, the similarity values are arranged in descending order, and when the number of similarity values in any table reaches the first threshold, the median of the top first-threshold similarity values of that table is taken as the candidate similarity value.
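As an illustrative sketch of S104, under the assumption that the first threshold is the reciprocal of the configured false-alarm rate, the candidate similarity value can be taken as the mean (or, in the second application mode, the median) of the top first-threshold values of the table; the helper below is hypothetical and not taken from the patent.

```python
import statistics
from typing import Optional

def candidate_from_table(table_values: list,
                         false_alarm_rate: float,
                         use_median: bool = False) -> Optional[float]:
    # First threshold = reciprocal of the currently configured false-alarm rate.
    first_threshold = int(round(1.0 / false_alarm_rate))
    if len(table_values) < first_threshold:
        return None  # the attribute false-alarm table is not yet full enough
    top_values = sorted(table_values, reverse=True)[:first_threshold]
    return statistics.median(top_values) if use_median else statistics.fmean(top_values)
```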
S105: updating the current similarity threshold with the candidate similarity value.
Specifically, when there is only one attribute false-alarm table, the current similarity threshold is updated with the candidate similarity value of that table.
Further, when there is more than one attribute false-alarm table, the similarity threshold is updated with the candidate similarity value of the table containing the largest number of similarity values, which keeps the judgement reliable for the group with the most samples, or with the most recently obtained candidate similarity value, so that the threshold can follow different false-alarm tables and cope with different application scenarios.
Further, as similarity values keep accumulating in a false-alarm table and the similarity value at the first-threshold position keeps being refreshed, the candidate similarity value used to update the similarity threshold moves closer to the theoretical optimum: the longer the table, the more stable the threshold. For face images with the attribute or attribute combination of that table, for example children or faces with occlusion, the false-alarm rate observed when the corresponding similarity threshold is used to judge whether two face images belong to the same person is therefore more accurate, false alarms are effectively suppressed, and the recognition rate of hard-to-recognize groups is improved.
According to the above scheme, after a first face image is obtained, attribute analysis is performed on it to obtain its attributes, and its features are compared with the second face images stored in the database to obtain a plurality of similarity values. The similarity values are added to attribute false-alarm tables based on the attributes of the first face image, each table corresponding to a single type of attribute or to a combination of several attributes of different types, and the similarity values in each table are counted. When the number of similarity values in any table exceeds the first threshold, a candidate similarity value is extracted from that table and used to update the current similarity threshold, so the similarity threshold corresponding to any attribute or attribute combination is updated automatically: when face images with that attribute or attribute combination appear in the monitored area, false alarms in face recognition are reduced and the accuracy for faces that are hard to recognize is improved.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another embodiment of a similarity threshold updating method according to the present application, the method including:
S201: receiving the first face image, performing attribute analysis on the first face image based on an attribute algorithm, and extracting the different types of attributes corresponding to the first face image.
Specifically, after the first face image is obtained, attribute analysis is performed on it with an attribute algorithm to obtain an attribute analysis result, and the different types of attributes corresponding to the first face image are extracted from that result.
In one application mode, the acquired first face image is fed to an attribute analysis model, and the attribute algorithm corresponding to the model extracts the attributes of the first face image, where the attributes include at least one of occlusion condition, age, gender and skin color.
S202: storing the different types of attributes in a predetermined order.
Specifically, the different types of attributes are arranged in an order preset by the user, which standardizes how the attributes are stored and facilitates subsequent attribute matching.
In one application mode, the different types of attributes are stored in binary form in the predetermined order to obtain the attribute identifier corresponding to the first face image.
Specifically, once the order of the different attribute types is fixed, the state of each attribute is represented by 0 or 1, giving the attribute identifier of the first face image; subsequent attribute matching can then be performed quickly on the basis of this identifier, which improves processing efficiency.
In one application scenario, please refer to Table 1, which shows attribute identifiers corresponding to first face images. The attributes include facial occlusion, child, young, elderly, gender, yellow skin, white skin and black skin, arranged in the predetermined order and stored in binary form: for the gender attribute, 0 denotes female and 1 denotes male, and for the facial occlusion attribute, 0 denotes no facial occlusion and 1 denotes facial occlusion. The other attribute types are encoded in the same way and are not described in detail here. Binary storage records the different types of attributes clearly and accurately and yields an attribute identifier for subsequent attribute matching.
Table 1: Attribute identifiers corresponding to first face images
Image / Attribute   Facial occlusion   Child   Young   Elderly   Gender   Yellow skin   White skin   Black skin
ID1                 0                  1       0       0         1        0             0            1
ID2                 0                  1       0       0         0        0             0            1
ID3                 0                  1       0       0         1        0             0            1
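A minimal sketch of the binary attribute identifier of S202, assuming the fixed attribute order of Table 1 (facial occlusion, child, young, elderly, gender, yellow skin, white skin, black skin) and the 0/1 conventions given above; the dictionary keys are hypothetical names, not taken from the patent.

```python
ATTRIBUTE_ORDER = ["facial_occlusion", "child", "young", "elderly",
                   "male", "yellow_skin", "white_skin", "black_skin"]

def attribute_identifier(attributes: dict) -> str:
    # Fixed-order binary string; missing attributes default to 0.
    return "".join("1" if attributes.get(name) else "0" for name in ATTRIBUTE_ORDER)

# ID1 in Table 1: a non-occluded black-skin male child.
assert attribute_identifier({"child": True, "male": True, "black_skin": True}) == "01001001"
# The masked black-skin male child discussed later yields "11001001".
assert attribute_identifier({"facial_occlusion": True, "child": True,
                             "male": True, "black_skin": True}) == "11001001"
```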
S203: acquiring the acquisition time and the acquisition place corresponding to the first face image.
Specifically, the acquisition time and acquisition place corresponding to the first face image are obtained when the first face image is received, and they serve as the spatiotemporal information of the first face image.
In one application mode, when the first face image is received, its acquisition time and the corresponding acquisition place are obtained from the device that captured it, the acquisition place being expressed as longitude and latitude.
S204: acquiring face images outside a preset spatiotemporal range as second face images based on the acquisition time and the acquisition place.
Specifically, a preset spatiotemporal range is obtained with the acquisition time and acquisition place of the first face image as its origin; face images outside this range are taken as second face images, while face images inside the range are filtered out, which guarantees that the second face images and the first face image do not come from the same person.
In one application mode, the preset spatiotemporal range includes a preset time period and a preset distance. Face images captured within the preset time period before the acquisition time of the first face image and farther than the preset distance from its acquisition place are extracted as second face images. Such second face images cannot, in theory, come from the same person as the first face image, so the similarity values obtained when the first and second face images are subsequently compared are false-alarm results.
In a specific application scenario, the preset time period is 10 minutes. Since the highway speed limit is 120 km/h, the same person cannot in theory cover more than 20 kilometres within 10 minutes, so the preset distance for this time period is set to 25 kilometres, above the theoretical value, and face images captured within the 10 minutes before the acquisition time and more than 25 kilometres away from the acquisition place are taken as second face images. In other application scenarios the preset time period may also be 20 or 30 minutes, with a corresponding preset distance chosen so that the second face images and the first face image cannot come from the same person.
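The spatiotemporal filter of S204 can be pictured as follows; this is an assumed sketch in which capture records carry a timestamp and a latitude/longitude pair, and the 10-minute / 25-kilometre values mirror the scenario above. Distance is computed with the haversine formula, which the patent does not mandate.

```python
import math
from datetime import timedelta

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two latitude/longitude points, in kilometres.
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def is_valid_second_image(first, candidate,
                          window=timedelta(minutes=10), min_distance_km=25.0):
    # Keep a candidate only if it was captured within the preset time period
    # before the first image AND farther away than the preset distance,
    # so the two images cannot come from the same person.
    dt = first["time"] - candidate["time"]
    within_window = timedelta(0) <= dt <= window
    far_enough = haversine_km(first["lat"], first["lon"],
                              candidate["lat"], candidate["lon"]) > min_distance_km
    return within_window and far_enough
```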
S205: comparing the features of the first face image with those of the plurality of second face images in the database, respectively, to obtain a plurality of similarity values.
Specifically, the feature information of the first face image and the feature information of the second face images stored in the database are extracted, and feature comparison is performed between the first face image and the plurality of second face images to obtain a plurality of similarity values.
In one application mode, feature extraction is performed on the first face image to obtain and store its feature information, and this feature information is compared with the feature information of the plurality of second face images to obtain the similarity values of the first face image relative to each of them.
Specifically, the feature information obtained from the first face image is stored in the database so that it can be called when newly acquired face images are compared later; the database is thus continuously supplemented and feature comparison can continue. The feature information of the second face images is already stored in the database, and feature comparison between the feature information of the first face image and that of the second face images yields a plurality of similarity values with which the attribute false-alarm tables can be filled.
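Under the assumption of a simple in-memory store (the patent only requires that the features of both the first and second face images are kept in the database), the compare-then-store flow of S205 could look like the sketch below, again using cosine similarity as an illustrative metric.

```python
import numpy as np

class FeatureDatabase:
    # Hypothetical stand-in for the face database: each incoming first face
    # image is compared against the stored second face images, and its own
    # features are then stored so that later images can be compared against it.
    def __init__(self):
        self.entries = []  # list of (image_id, feature_vector) pairs

    def compare_and_store(self, image_id, feature):
        feature = np.asarray(feature, dtype=float)
        similarities = [
            (stored_id, float(np.dot(feature, stored) /
                              (np.linalg.norm(feature) * np.linalg.norm(stored))))
            for stored_id, stored in self.entries
        ]
        self.entries.append((image_id, feature))
        return similarities  # later used to fill the attribute false-alarm tables
```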
S206: adding the plurality of similarity values to at least one attribute false-alarm table based on the attributes of the first face image.
Specifically, matching is performed between the attribute identifier and the single-type attribute or the several attributes of different types of each attribute false-alarm table to obtain an attribute matching result, and the plurality of similarity values are added to the corresponding false-alarm tables based on that result.
In one application mode, the attributes of a false-alarm table are attributes of a single face image. The attributes of the first face image are represented by its attribute identifier, each false-alarm table corresponds to a single type of attribute or a combination of several attributes of different types, and the similarity values of a first face image whose attribute identifier conforms to the attribute or attribute combination of a table are added to that table; matching against the false-alarm tables via the attribute identifier is therefore fast, and the tables are filled.
In a specific application scenario, referring again to Table 1, the target captured by the device is a black-skin male child wearing a mask, so the attribute identifier of the corresponding first face image is 11001001, and its similarity values are added to the false-alarm tables according to this identifier. For example, when the false-alarm tables comprise a facial-occlusion table, a non-occluded-face table, a child table, a facial-occlusion child table and a child yellow-skin table, the similarity values of the first face image relative to the second face images are added to the facial-occlusion table, the child table and the facial-occlusion child table, but not to the non-occluded-face table or the child yellow-skin table.
In another application mode, the attributes of a false-alarm table are attributes of each of two face images. The attributes of the first face image are represented by its attribute identifier; once obtained, the identifier can be stored in the database together with the image, and the second face images in the database likewise have their attribute identifiers stored. Matching is performed between the attribute identifiers of the first and second face images and the attribute combinations of the false-alarm tables, and the similarity value between a matched pair of first and second face images is added to the corresponding table; matching via the attribute identifiers is therefore fast, and the tables are filled.
In a specific application scenario, referring again to Table 1, the target captured by the device is a black-skin male child wearing a mask, so the attribute identifier of the corresponding first face image is 11001001, and matching is performed between the attribute identifiers of the first and second face images and the false-alarm tables. When the tables comprise a facial-occlusion/facial-occlusion table, a facial-occlusion-child/facial-occlusion-child table and a facial-occlusion-child/non-occluded-child table, the similarity values of those second face images whose facial-occlusion bit is 1 are extracted from the feature comparison result and added to the facial-occlusion/facial-occlusion table, the similarity values of those second face images whose facial-occlusion and child bits are both 1 are added to the facial-occlusion-child/facial-occlusion-child table, and the similarity values of those second face images whose facial-occlusion bit is 0 and whose child bit is 1 are added to the facial-occlusion-child/non-occluded-child table.
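One way to picture the pairwise matching of S206 is a false-alarm table defined by required bit values on the two attribute identifiers; the class below is a hypothetical sketch (bit positions follow Table 1), not an implementation taken from the patent.

```python
from dataclasses import dataclass, field

OCCLUSION_BIT, CHILD_BIT = 0, 1  # positions in the Table 1 attribute identifier

@dataclass
class PairwiseFalseAlarmTable:
    first_bits: dict    # required {position: value} on the first image's identifier
    second_bits: dict   # required {position: value} on the second image's identifier
    similarities: list = field(default_factory=list)

    def try_add(self, first_id: str, second_id: str, similarity: float) -> bool:
        matched = (all(first_id[i] == v for i, v in self.first_bits.items()) and
                   all(second_id[i] == v for i, v in self.second_bits.items()))
        if matched:
            self.similarities.append(similarity)
        return matched

# Facial-occlusion-child / non-occluded-child table from the scenario above.
table = PairwiseFalseAlarmTable(first_bits={OCCLUSION_BIT: "1", CHILD_BIT: "1"},
                                second_bits={OCCLUSION_BIT: "0", CHILD_BIT: "1"})
table.try_add("11001001", "01001001", 0.72)  # 0.72 is an illustrative similarity value
```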
S207: in response to the number of similarity values contained in any attribute false-alarm table reaching a first threshold, extracting a candidate similarity value from the similarity values of the table that has reached the first threshold.
Specifically, the similarity values in each attribute false-alarm table are arranged in descending order, and the first threshold is the reciprocal of the false-alarm rate. When the number of similarity values contained in any table reaches the first threshold, a candidate similarity value is obtained from that table and used to update the current similarity threshold.
In one application mode, in response to the number of similarity values in any attribute false-alarm table reaching an integral multiple of the first threshold, the similarity value at that integral-multiple position in the table is taken as the candidate similarity value.
Specifically, the similarity values in the false-alarm table are sorted from largest to smallest and their number is checked against the reciprocal of the false-alarm rate. When the number of similarity values in any table reaches the first threshold for the first time, the largest similarity value in the table is taken as the candidate similarity value; when the number reaches an integral multiple of the first threshold, the similarity value at that integral-multiple position is taken as the candidate. The longer the table, the more stable the threshold, and the more accurate the false-alarm rate observed when the corresponding similarity threshold is used to judge whether face images with the attribute or attribute combination of that table belong to the same person, so false alarms are effectively suppressed and the recognition rate of hard-to-recognize groups is improved.
In one application scenario, the false-alarm rate is set to 1e-10, so a candidate similarity value is obtained only once the number of similarity values in the false-alarm table is at least 1e10; when the number of similarity values in the table reaches 1e11, the new candidate similarity value is the 10th similarity value in the table.
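A hypothetical sketch of the candidate selection in S207, under the stated assumption that the candidate is re-drawn each time the table length reaches an integral multiple k of the first threshold and is then the k-th largest similarity value (so with a false-alarm rate of 1e-10, a table of 1e11 values yields the 10th value). A production system would keep only the sorted head of the table rather than the full list.

```python
from typing import Optional

def candidate_at_integral_multiple(table_values: list,
                                   false_alarm_rate: float) -> Optional[float]:
    first_threshold = int(round(1.0 / false_alarm_rate))
    k, remainder = divmod(len(table_values), first_threshold)
    if k == 0 or remainder != 0:
        return None  # not yet at an integral multiple of the first threshold
    # The k-th largest value keeps the implied false-alarm rate fixed as the table grows.
    return sorted(table_values, reverse=True)[k - 1]
```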
S208: updating the current similarity threshold with the candidate similarity value.
Specifically, a single attribute false-alarm table may be configured, in which case the current similarity threshold is updated with the candidate similarity value of that unique table. More than one attribute false-alarm table may also be configured; when the number of similarity values in several tables exceeds the first threshold, the candidate similarity values of those tables are obtained and the current similarity threshold is updated with the maximum of them.
Further, the more complex an attribute combination, the lower the probability that face images with it appear as samples and the higher the corresponding similarity threshold. When the number of similarity values in several attribute false-alarm tables reaches the first threshold, it indicates that face images with those specific attributes or attribute combinations are sufficiently sampled and relatively common in the monitored area. Therefore, to reduce the false-alarm rate for easily misidentified groups, when several tables reach the first threshold the similarity values in each table are sorted in descending order, the similarity value at the integral-multiple position is selected as that table's candidate, and the maximum of the candidate values is used as the similarity threshold.
In a specific application scenario, when there is a sandstorm or a medical emergency in the monitored area, the number of people wearing masks increases and the similarity values in the facial-occlusion/facial-occlusion table accumulate rapidly until they reach the first threshold. The first candidate similarity value, extracted earlier from the yellow-skin/yellow-skin table, is smaller than the second candidate similarity value extracted from the facial-occlusion/facial-occlusion table, so in response to the growth of the hard-to-recognize group in the monitored area the similarity threshold is set to the second candidate similarity value.
Optionally, since a higher similarity threshold lowers the false-alarm rate but also lowers the recognition rate, when the number of similarity values added to an attribute false-alarm table within a preset period is smaller than a preset value, the candidate similarity value extracted from that table is deleted. This handles scenes in which hard-to-recognize face images with a specific attribute or attribute combination sharply decrease in the monitored area, and improves the recognition rate for face images with other attributes or attribute combinations.
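A sketch of the threshold maintenance described in S208, combining the two rules above under stated assumptions: candidates whose tables have barely grown during the preset period are dropped, and the threshold becomes the maximum of the remaining candidates (falling back to the current threshold when none are left). The data layout is hypothetical.

```python
def refresh_similarity_threshold(tables: dict, current_threshold: float,
                                 min_growth_per_period: int) -> float:
    # 'tables' maps a table name to {"candidate": float or None,
    #                                "added_this_period": int}.
    live_candidates = [info["candidate"] for info in tables.values()
                       if info["candidate"] is not None
                       and info["added_this_period"] >= min_growth_per_period]
    return max(live_candidates, default=current_threshold)
```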
In this embodiment, the attributes of the first face image are extracted and its attribute identifier is generated; fast matching is performed between the identifier and the single-type attributes or combinations of different-type attributes of the false-alarm tables, and the similarity values of the first face image relative to the second face images are added to the matched tables. When the number of similarity values in a table reaches an integral multiple of the first threshold, the similarity values in the table are sorted in descending order and the value at the integral-multiple position is used as the candidate similarity value to update the similarity threshold, so the similarity threshold is continuously optimized through iterative updating and the false-alarm rate of face recognition is reduced.
Referring to fig. 3, fig. 3 is a schematic flow chart of an embodiment of a face recognition method according to the present application, the method including:
S301: receiving a first face image and comparing its features with those of a plurality of second face images in a database, respectively, to obtain a plurality of similarity values.
Specifically, after the first face image is acquired, its feature information is extracted and compared with the plurality of second face images in the database, respectively, to obtain a plurality of similarity values.
S302: obtaining a current similarity threshold.
Specifically, the current similarity threshold is obtained according to the method of any of the above embodiments; for details, reference may be made to those embodiments, which are not repeated here.
S303: performing deployment control and/or archive clustering on the first face image and the plurality of second face images based on the current similarity threshold and the plurality of similarity values to obtain a first deployment-control result and/or a first archive-clustering result.
Specifically, for each second face image it is judged whether its similarity value with the first face image exceeds the similarity threshold. If it does, the first and second face images are attributed to the same target, a deployment-control alarm is raised and/or the images are placed into the archive of that target. If it does not, the first and second face images are judged not to belong to the same target, no alarm is raised, and during clustering the first or second face image is archived on its own. All face images belonging to one target are clustered into the same archive, which yields the first archive-clustering result.
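As a rough sketch of S303 (deployment control and archive clustering are simplified here to a single pass over the similarity values; the patent additionally allows hierarchical or density-based clustering), the names below are illustrative assumptions.

```python
def deploy_and_cluster(first_id, similarities, threshold):
    # 'similarities' maps second-face-image ids to their similarity with the
    # first face image. Values above the threshold raise a control alarm and
    # put the images into the same archive; otherwise the first image is
    # archived on its own.
    matched = [sid for sid, value in similarities.items() if value > threshold]
    alarms = [(first_id, sid) for sid in matched]  # deployment-control alarms
    archive = {first_id, *matched}                 # archive-clustering result
    return alarms, archive
```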
In one application mode, after the current similarity threshold is obtained, a deployment-control and/or clustering algorithm is used to raise alarms for and/or cluster the first face image and the second face images in the database: two face images whose similarity value exceeds the similarity threshold trigger an alarm, or are placed into the same archive, or both.
In a specific application scenario, the second face images are face images within a preset spatiotemporal range in the database. This differs from the process of obtaining the similarity threshold: in the above embodiments the similarity values in the attribute false-alarm tables must be false-alarm results, i.e. the two face images behind each such similarity value necessarily come from different people, whereas archive clustering aims to collect the face images that belong to the same person. To improve comparison efficiency, the face images within the preset spatiotemporal range can therefore be extracted as second face images and compared with the first face image to obtain their similarity values, after which archive clustering is performed with a clustering algorithm; a hierarchical clustering algorithm is used here, and in other application scenarios a density-based clustering algorithm may also be used, which is not specifically limited.
In another specific application scenario, the second face images are extracted from a blacklist database. The features of the first face image are compared with all second face images in the blacklist database to obtain the corresponding similarity values, and when a similarity value exceeds the current similarity threshold a deployment-control alarm record is generated to indicate that a face image from the blacklist database has appeared.
In this embodiment, the similarity threshold is obtained with the method of the above embodiments, so it is not fixed at an initial value: it can be updated based on the attribute false-alarm tables to adapt to groups with different attributes or attribute combinations, which reduces the false-alarm rate of faces that are easily misidentified and improves the accuracy for faces that are hard to recognize.
Further, after the step of performing deployment control and/or archive clustering on the first face image and the plurality of second face images based on the current similarity threshold and the plurality of similarity values to obtain the first deployment-control result and/or the first archive-clustering result, the method further includes: in response to obtaining an updated similarity threshold, revising the first deployment-control result and/or the first archive-clustering result based on the updated similarity threshold and the attributes corresponding to the face images in the first archive-clustering result.
Specifically, after the similarity threshold is updated, if the attributes corresponding to an alarm record in the first deployment-control result match the attribute or attribute combination of the false-alarm table behind the current similarity threshold, the record is confirmed a second time using the current similarity threshold and the similarity value of the two face images in the record: if the similarity value is still greater than the similarity threshold the alarm record is kept, and if it is not, the record is deleted, which reduces the probability of false alarms.
Further, if the attributes of any archive in the first archive-clustering result match the attribute or attribute combination of the false-alarm table behind the current similarity threshold, the archive is re-confirmed using the current similarity threshold and the similarity values between any two face images in it, and face images whose similarity values with the other face images in the archive are smaller than the current similarity threshold are removed. The first archive-clustering result is thus optimized before the user consults the archive, and the false-alarm rate is reduced.
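The revision step can be sketched as re-checking existing alarm records against the updated threshold; the record layout and the matches_table predicate below are assumptions for illustration only.

```python
def revalidate_alarms(alarm_records, updated_threshold, table_attributes, matches_table):
    # Keep an alarm record unless its attributes match the false-alarm table
    # behind the updated threshold AND its similarity no longer exceeds it.
    kept = []
    for record in alarm_records:
        affected = matches_table(record["attributes"], table_attributes)
        if affected and record["similarity"] <= updated_threshold:
            continue  # treated as a false alarm under the updated threshold
        kept.append(record)
    return kept
```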
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of an electronic device 40 of the present application, where the electronic device includes a memory 401 and a processor 402 coupled to each other, where the memory 401 stores program data (not shown), and the processor 402 calls the program data to implement the method in any of the embodiments described above, and the description of the related contents refers to the detailed description of the embodiments of the method described above, which is not repeated herein.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium 50 of the present application, the computer-readable storage medium 50 stores program data 500, and the program data 500 is executed by a processor to implement the method in any of the above embodiments, and the related contents are described in detail with reference to the above method embodiments and will not be described in detail herein.
It should be noted that, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (12)

1. A similarity threshold updating method, applied to face recognition, comprising:
receiving a first face image, and performing attribute analysis on the first face image to obtain attributes of the first face image, wherein the attributes comprise at least one of occlusion condition, age, gender and skin color;
respectively performing feature comparison between the first face image and a plurality of second face images in a database to obtain a plurality of similarity values;
adding the plurality of similarity values to at least one attribute false-alarm table based on the attributes of the first face image, wherein the attribute false-alarm table corresponds to a single type of the attributes or a combination of a plurality of different types of the attributes;
in response to the number of the similarity values contained in any attribute false-alarm table reaching a first threshold, extracting a candidate similarity value from the similarity values of the attribute false-alarm table that has reached the first threshold; and
updating the current similarity threshold with the candidate similarity value.
2. The similarity threshold updating method according to claim 1, wherein the step of performing attribute analysis on the first face image to obtain the attribute of the first face image comprises:
performing attribute analysis on the first face image based on an attribute algorithm, and extracting different types of attributes corresponding to the first face image; the attribute algorithm corresponds to an attribute analysis model, and the attribute analysis model is used for extracting attributes corresponding to the first face image from the first face image;
storing the attributes of different types in a predetermined order.
3. The similarity threshold updating method according to claim 2, wherein the step of storing the attributes of different types in a predetermined order comprises:
storing the attributes of different types in a binary mode according to a preset sequence to obtain an attribute identifier corresponding to the first face image.
4. The similarity threshold updating method according to claim 3, wherein the step of adding the plurality of similarity values to at least one attribute false-alarm table based on the attributes of the first face image comprises:
matching, based on the attribute identifier, with a single type of attribute or a plurality of attributes of different types in the attribute false-alarm table to obtain an attribute matching result;
and adding the similarity values to the corresponding attribute false-alarm table according to the attribute matching result.
5. The similarity threshold updating method according to claim 1, wherein
the similarity values in the attribute false-alarm table are arranged in descending order, and the first threshold is the reciprocal of the false-alarm rate;
the step of extracting the candidate similarity value from the similarity values of the attribute false-alarm table that has reached the first threshold comprises:
in response to the number of the similarity values in any attribute false-alarm table reaching an integral multiple of the first threshold, taking the similarity value at the integral-multiple position in the corresponding attribute false-alarm table as the candidate similarity value.
6. The similarity threshold updating method according to claim 5, wherein the number of the attribute false-alarm tables exceeds one, and the step of updating the current similarity threshold with the candidate similarity value comprises:
obtaining the candidate similarity values corresponding to the plurality of attribute false-alarm tables, and updating the current similarity threshold with the maximum of the candidate similarity values.
7. The similarity threshold updating method according to claim 1, wherein before the step of comparing the features of the first face image with the features of the second face images in the database respectively to obtain the similarity values, the method comprises:
acquiring acquisition time and acquisition place corresponding to the first face image;
and acquiring a face image outside a preset space-time range as the second face image based on the acquisition time and the acquisition place.
8. The method according to claim 7, wherein the step of comparing the features of the first face image with the features of the second face images in the database to obtain a plurality of similarity values includes:
extracting the characteristics of the first face image to obtain and store characteristic information corresponding to the first face image;
and comparing the characteristic information corresponding to the first face image with the characteristic information corresponding to the plurality of second face images to obtain similarity values of the first face image relative to the plurality of second face images respectively.
9. A face recognition method, comprising:
receiving a first face image, and respectively performing feature comparison on the first face image and a plurality of second face images in a database to obtain a plurality of similarity values;
obtaining a current similarity threshold, wherein the similarity threshold is obtained according to the method of any one of claims 1-8;
performing deployment control and/or archive clustering on the first face image and the plurality of second face images based on the current similarity threshold and the plurality of similarity values to obtain a first deployment control result and/or a first archive clustering result.
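A minimal Python sketch of how the current similarity threshold could drive the two downstream tasks in claim 9. "Deployment control" is read here as raising an alert for every watch-list face whose similarity clears the threshold, and "archive clustering" as attaching the capture to the best-matching archive or opening a new one; both readings and all identifiers are assumptions.

```python
from typing import Dict, List, Optional

def deployment_control(similarities: Dict[str, float], threshold: float) -> List[str]:
    """Ids of second face images (watch-list entries) that trigger an alert."""
    return [face_id for face_id, s in similarities.items() if s >= threshold]

def archive_clustering(similarities: Dict[str, float], threshold: float) -> Optional[str]:
    """Archive id the capture is merged into, or None when a new archive is opened."""
    best_id, best_score = max(similarities.items(), key=lambda kv: kv[1])
    return best_id if best_score >= threshold else None

scores = {"archive_017": 0.82, "archive_203": 0.47}
print(deployment_control(scores, threshold=0.75))   # ['archive_017']
print(archive_clustering(scores, threshold=0.75))   # archive_017
```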
10. The face recognition method according to claim 9, wherein after the step of performing deployment control and/or archive clustering on the first face image and the plurality of second face images based on the current similarity threshold and the plurality of similarity values to obtain the first deployment control result and/or the first archive clustering result, the method further comprises:
in response to obtaining an updated similarity threshold, modifying the first deployment control result and/or the first archive clustering result based on the updated similarity threshold and the attributes corresponding to the face images in the first archive clustering result.
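A minimal Python sketch of revising an earlier archive clustering result after the threshold has been updated, as in claim 10: captures whose stored match score no longer clears the new threshold are removed from the archive. The record layout (a dict with a "similarity" field) is an assumption, and the attribute re-check mentioned in the claim is omitted for brevity.

```python
from typing import Dict, List

def revise_archives(archives: Dict[str, List[dict]],
                    updated_threshold: float) -> Dict[str, List[dict]]:
    """Keep only captures whose stored similarity still meets the threshold."""
    return {
        archive_id: [c for c in captures if c["similarity"] >= updated_threshold]
        for archive_id, captures in archives.items()
    }
```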
11. An electronic device, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor calls to perform the method of any of claims 1-8 or 9-10.
12. A computer-readable storage medium, on which program data are stored, wherein the program data, when executed by a processor, implement the method of any one of claims 1-8 or 9-10.
CN202110802852.2A 2021-07-15 2021-07-15 Similarity threshold updating method, face recognition method and related device Active CN113255631B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110802852.2A CN113255631B (en) 2021-07-15 2021-07-15 Similarity threshold updating method, face recognition method and related device
PCT/CN2021/128816 WO2023284185A1 (en) 2021-07-15 2021-11-04 Updating method for similarity threshold in face recognition and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110802852.2A CN113255631B (en) 2021-07-15 2021-07-15 Similarity threshold updating method, face recognition method and related device

Publications (2)

Publication Number Publication Date
CN113255631A CN113255631A (en) 2021-08-13
CN113255631B true CN113255631B (en) 2021-10-15

Family

ID=77180479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110802852.2A Active CN113255631B (en) 2021-07-15 2021-07-15 Similarity threshold updating method, face recognition method and related device

Country Status (2)

Country Link
CN (1) CN113255631B (en)
WO (1) WO2023284185A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255631B (en) * 2021-07-15 2021-10-15 浙江大华技术股份有限公司 Similarity threshold updating method, face recognition method and related device
CN114139007B (en) * 2022-01-26 2022-06-21 荣耀终端有限公司 Image searching method, electronic device, and medium thereof

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5992276B2 (en) * 2012-09-20 2016-09-14 株式会社東芝 Person recognition apparatus and method
CN105574500B (en) * 2015-12-15 2019-09-27 北京眼神智能科技有限公司 The method and apparatus for improving recognition of face percent of pass
CN111199029B (en) * 2018-11-16 2023-07-18 株式会社理光 Face recognition device and face recognition method
CN111091080A (en) * 2019-12-06 2020-05-01 贵州电网有限责任公司 Face recognition method and system
CN111666976B (en) * 2020-05-08 2023-07-28 深圳力维智联技术有限公司 Feature fusion method, device and storage medium based on attribute information
CN111626229A (en) * 2020-05-29 2020-09-04 广州云从博衍智能科技有限公司 Object management method, device, machine readable medium and equipment
CN111814570B (en) * 2020-06-12 2024-04-30 深圳禾思众成科技有限公司 Face recognition method, system and storage medium based on dynamic threshold
CN111814990B (en) * 2020-06-23 2023-10-10 汇纳科技股份有限公司 Threshold determining method, system, storage medium and terminal
CN111898495B (en) * 2020-07-16 2021-04-16 云从科技集团股份有限公司 Dynamic threshold management method, system, device and medium
CN112836661A (en) * 2021-02-07 2021-05-25 Oppo广东移动通信有限公司 Face recognition method and device, electronic equipment and storage medium
CN113255631B (en) * 2021-07-15 2021-10-15 浙江大华技术股份有限公司 Similarity threshold updating method, face recognition method and related device

Also Published As

Publication number Publication date
CN113255631A (en) 2021-08-13
WO2023284185A1 (en) 2023-01-19

Similar Documents

Publication Publication Date Title
CN113255631B (en) Similarity threshold updating method, face recognition method and related device
US10402627B2 (en) Method and apparatus for determining identity identifier of face in face image, and terminal
EP3410351B1 (en) Learning program, learning method, and object detection device
CN108229321B (en) Face recognition model, and training method, device, apparatus, program, and medium therefor
CN108268823A (en) Target recognition methods and device again
CN111859451B (en) Multi-source multi-mode data processing system and method for applying same
CN112487886A (en) Method and device for identifying face with shielding, storage medium and terminal
CN113255841B (en) Clustering method, clustering device and computer readable storage medium
EP3905084A1 (en) Method and device for detecting malware
JP2016099835A (en) Image processor, image processing method, and program
CN112508910A (en) Defect extraction method and device for multi-classification defect detection
CN111753642B (en) Method and device for determining key frame
CN111611944A (en) Identity recognition method and device, electronic equipment and storage medium
CN113992340A (en) User abnormal behavior recognition method, device, equipment, storage medium and program
CN108076032B (en) Abnormal behavior user identification method and device
CN113706837B (en) Engine abnormal state detection method and device
CN112801181B (en) Urban signaling traffic flow user classification and prediction method, storage medium and system
CN113628073A (en) Property management method and system for intelligent cell
CN113255621A (en) Face image filtering method, electronic device and computer-readable storage medium
CN114092809A (en) Object identification method and device and electronic equipment
CN114005060A (en) Image data determining method and device
CN114549884A (en) Abnormal image detection method, device, equipment and medium
CN113743293A (en) Fall behavior detection method and device, electronic equipment and storage medium
CN113591620A (en) Early warning method, device and system based on integrated mobile acquisition equipment
CN110570025A (en) prediction method, device and equipment for real reading rate of WeChat seal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant