CN114998978B - Method and system for analyzing quality of face image - Google Patents



Publication number
CN114998978B
CN114998978B (application CN202210907766.2A)
Authority
CN
China
Prior art keywords
feature
face
image
branch
variance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210907766.2A
Other languages
Chinese (zh)
Other versions
CN114998978A
Inventor
何昊驰
王东
李来
王月平
肖传宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd filed Critical Hangzhou Moredian Technology Co ltd
Priority to CN202210907766.2A
Publication of CN114998978A
Application granted
Publication of CN114998978B
Legal status: Active


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/764 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V40/168 — Feature extraction; Face representation
    • G06V40/172 — Classification, e.g. identification
    • G06V40/178 — Human faces: estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method and system for analyzing the quality of a face image. The method comprises a model training phase and a deployment application phase, the model training phase comprising: automatically labeling the training data with an age recognition model to obtain an age label and an age deviation label for each face image; extracting a feature map of each labeled face image with a feature map extraction network; inputting the feature map into a feature embedding branch and a feature variance branch respectively to obtain the final feature of the face image; and, based on the final feature of the face image and the classification layer, back-propagating a regression loss function to adjust the parameters of the feature embedding branch and the feature variance branch, the resulting branches being used for face image quality analysis. The method addresses the difficulty that cross-age samples pose to image quality analysis in face recognition: the correlation between age samples and quality is adjusted dynamically, so cross-age samples do not interfere with learning the quality analysis.

Description

Method and system for analyzing quality of face image
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and system for analyzing the quality of a face image.
Background
In a face recognition pipeline, the quality of the face image needs to be analyzed. Current image quality analysis schemes generally fall into the following categories:
Image quality analysis based on traditional features. Image quality is judged from hand-crafted image features rather than from the abstract features extracted by a deep neural network. However, traditional features are comparatively simple and have limited capacity to summarize image content, so their reliability and robustness fall short of the highly abstracted features a deep network extracts.
Image quality analysis based on deep learning. Images are labeled high-quality/low-quality for multi-class training, or given a score (e.g., 0-100) for regression training. However, manual quality labels are strongly influenced by subjective factors, and a score spans several hard-to-quantify dimensions (such as illumination, blur, and completeness), so a unified standard of image quality is hard to define, and the quality and quantity of the labeled data are hard to guarantee.
Image quality analysis based on labeling by a recognition model. The intra-class and inter-class similarities of identity-bearing image features are computed with a recognition model, and the images are then labeled automatically. However, feature similarity is not uniquely related to quality; for example, a cross-age sample usually has low intra-class similarity in recognition without necessarily being low quality.
At present, no effective solution has been proposed for the difficulty that cross-age samples pose to image quality analysis in the related art.
Disclosure of Invention
The embodiments of the application provide a method and system for analyzing the quality of a face image, which at least solve the difficulty that cross-age samples pose to image quality analysis in the related art.
In a first aspect, an embodiment of the present application provides a method for analyzing the quality of a face image, the method comprising a model training phase and a deployment application phase, wherein the model training phase includes:
automatically labeling the training data by adopting an age identification model to obtain an age label of each face image in the training data;
calculating the average age of each ID class in the training data, and taking the difference between the age label of each face image and the average age of its corresponding ID class as the age deviation label of the face image;
extracting the characteristic diagram of the face image in the marked training data through a characteristic diagram extraction network;
respectively inputting the feature graph into a feature embedding branch and a feature variance branch, and obtaining the final feature of the face image according to an output result;
and based on the final characteristics and the classification layer of the face image, performing back propagation adjustment parameters on the characteristic embedding branch and the characteristic variance branch by adopting a regression loss function, and using the obtained characteristic embedding branch and the obtained characteristic variance branch for the quality analysis of the face image.
In some of these embodiments, the method includes a model training phase and a deployment application phase, wherein the deployment application phase includes:
acquiring an image to be identified, and extracting a feature map of the image to be identified through the feature map extraction network;
respectively inputting the feature graph into a feature embedding branch and a feature variance branch to obtain a face feature mean value and a face feature variance;
based on the face feature variance, calculating a quasi-harmonic mean to obtain a comprehensive quality score of the image to be recognized;
and judging whether to perform face recognition according to the comprehensive quality score.
In some embodiments, the feature map extraction network and the classification layer are internal structural layers of a face recognition model;
and training the face recognition model based on the training data to obtain a feature map extraction network, a feature embedding layer and a classification layer.
In some embodiments, inputting the feature map into the feature embedding branch and the feature variance branch respectively, and obtaining the final feature of the face image from the output results, includes:
inputting the feature map K(I) into the feature embedding branch F to output the face feature mean F(K(I)), and inputting the feature map K(I) into the feature variance branch S to output the face feature variance S(K(I));
obtaining the final feature of the face image, F(K(I)) + S(K(I)), from the output results.
In some embodiments, back-propagating a regression loss function, based on the final feature of the face image and the classification layer, to adjust the parameters of the feature embedding branch and the feature variance branch comprises:
based on the final feature of the face image and the classification layer, back-propagating a regression loss function of the form

L_reg = (1/N) · Σ_{i=1..N} e^(−α_i) · Σ_{l=1..D} [ (w_c[l] − F(K(I_i))[l])² / (2 · S(K(I_i))[l]) + (1/2) · log S(K(I_i))[l] ]

(the original equation is reproduced only as an image; the form shown is reconstructed from the variable definitions) to adjust the parameters of the feature embedding branch and the feature variance branch, wherein w_c represents the feature center of class c in the classification layer, F(K(I)) represents the face feature mean output by the feature embedding branch, S(K(I)) represents the face feature variance output by the feature variance branch, D represents the feature dimension, l denotes the value in the l-th dimension, α represents the age deviation label of a face image, i denotes the i-th face image, and N represents the number of face images.
In some embodiments, obtaining the comprehensive quality score of the image to be recognized through a quasi-harmonic-mean calculation based on the face feature variance includes:
calculating, based on the face feature variance,

Q_i = D / Σ_{l=1..D} S(K(I_i))[l]

(the original equation is reproduced only as an image; the quasi-harmonic-mean form shown is reconstructed from the variable definitions) to obtain the comprehensive quality score of the image to be recognized, wherein D represents the dimension of the feature variance, S(K(I)) represents the face feature variance output by the feature variance branch, l denotes the variance value in the l-th dimension, and i denotes the i-th face image.
In some embodiments, determining whether to perform face recognition according to the comprehensive quality score includes:
if the comprehensive quality score is lower than a preset threshold, face recognition is not performed;
if the comprehensive quality score is higher than the preset threshold, retrieval and matching are performed in the face gallery according to the face feature mean.
In some embodiments, training the face recognition model based on the training data to obtain the feature map extraction network, the feature embedding layer, and the classification layer includes:
training the face recognition model based on the training data and a preset model structure, using a preset loss function to back-propagate and adjust the parameters of the face recognition model, to obtain the feature map extraction network, the feature embedding layer, and the classification layer, where the preset model structure includes the ResNet structure and the MobileFace structure, and the preset loss function is a margin-based function, specifically including the ArcFace function and the CosFace function.
In a second aspect, an embodiment of the present application provides a system for analyzing facial image quality, where the system includes a data labeling module, a feature extraction module, and a parameter adjustment module;
in the model training phase of the human face image quality analysis:
the data labeling module is used for automatically labeling the training data by adopting an age identification model to obtain an age label of each face image in the training data; calculating each of the training dataIDAverage age of class, and associating the age label of each face image with the corresponding age labelIDThe difference value of the average ages of the classes is used as an age deviation label of the face image;
the characteristic extraction module is used for extracting the characteristic diagram of the face image in the marked training data through a characteristic diagram extraction network; respectively inputting the feature graph into a feature embedding branch and a feature variance branch, and obtaining the final feature of the face image according to an output result;
and the parameter adjusting module is used for performing back propagation parameter adjustment on the feature embedding branch and the feature variance branch by adopting a regression loss function based on the final feature and the classification layer of the face image, and the obtained feature embedding branch and the feature variance branch are used for analyzing the quality of the face image.
In some embodiments, the system further comprises an acquisition processing module and a computational analysis module;
in the deployment application phase of facial image quality analysis:
the acquisition processing module is used for acquiring an image to be identified and extracting a feature map of the image to be identified through the feature map extraction network; respectively inputting the feature graph into a feature embedding branch and a feature variance branch to obtain a face feature mean value and a face feature variance;
the calculation analysis module is used for calculating a class and average according to the face feature variance to obtain a comprehensive quality score of the image to be recognized; and judging whether to perform face recognition according to the comprehensive quality score.
Compared with the related art, the method and system for analyzing face image quality provided by the embodiments of the application comprise a model training phase and a deployment application phase, the model training phase comprising: automatically labeling the training data with an age recognition model to obtain the age label of each face image in the training data; calculating the average age of each ID class in the training data, and taking the difference between the age label of each face image and the average age of its corresponding ID class as the age deviation label of the face image; extracting the feature map of the face images in the labeled training data through a feature map extraction network; inputting the feature map into a feature embedding branch and a feature variance branch respectively, and obtaining the final feature of the face image from the output results; and, based on the final feature and the classification layer, back-propagating a regression loss function to adjust the parameters of the feature embedding branch and the feature variance branch. This solves the difficulty that cross-age samples pose to image quality analysis in face recognition: age samples are distinguished by automatic labeling, the correlation between age and quality is adjusted dynamically, and cross-age samples are prevented from interfering with learning the quality analysis.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a first flowchart illustrating steps of a method for analyzing facial image quality according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of structure-layer multiplexing between image quality analysis and face image recognition according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a second step of a method for analyzing facial image quality according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a human face image quality analysis in a deployment application phase according to an embodiment of the present application;
FIG. 5 is a block diagram of a first embodiment of a system for analyzing facial image quality;
FIG. 6 is a block diagram of a second embodiment of a system for analyzing facial image quality according to the present application;
fig. 7 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Description of reference numerals: 51. data annotation module; 52. feature extraction module; 53. parameter adjusting module; 61. acquisition processing module; 62. calculation analysis module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but rather can include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The embodiment of the application provides a method for analyzing the quality of a face image. Fig. 1 is a flowchart of the steps of the method according to an embodiment of the application; the method comprises a model training phase and a deployment application phase. As shown in fig. 1, the model training phase comprises the following steps:
step S102, automatically labeling the training data by adopting an age identification model to obtain an age label of each face image in the training data;
specifically, an open-source age identification model is used for automatically labeling the age label of the training data to obtain the age label of each face image in the training data.
Step S104, calculating the average age of each ID class in the training data, and taking the difference between the age label of each face image and the average age of its corresponding ID class as the age deviation label of the face image;
it should be noted that one ID is a person, and the face recognition training is classification training, so that a person is also called a class. "average age per ID class" means the average age of all facial images under each person in the training data; compared with the conventional image quality analysis method based on identification model labeling, in the embodiment, based on steps S102 and S104, the age label and the age deviation label are labeled on the face image, so as to avoid interference of the age characteristics of the sample in the subsequent branch structure training (i.e., the training of the quality analysis part) on the training.
Step S106, extracting the feature map of the face images in the labeled training data through the feature map extraction network;
Specifically, the feature map K(I) of a face image I in the labeled training data is extracted through the feature map extraction network.
It should be noted that fig. 2 is a schematic diagram of structure-layer multiplexing between image quality analysis and face image recognition according to an embodiment of the present application. As shown in fig. 2, the feature map extraction network in step S106 and the classification layer in step S110 are both internal structure layers of a previously trained face recognition model. The feature map extraction network and classification layer of the face recognition model are multiplexed, the additional feature embedding branch and feature variance branch are added, and training continues with a new loss function; that is, the model finally trained and actually deployed in this embodiment is a recognition model with quality analysis capability, so only a single model needs to be maintained. Compared with the prior art, in which image quality analysis and face recognition are trained, deployed, and maintained separately, integrating the quality analysis learning process into face recognition based on steps S106 and S110 simplifies the recognition pipeline and reduces the deployment and maintenance costs of the recognition system.
Further, the face recognition model is trained based on the training data and a preset model structure, and a preset loss function is used to back-propagate and adjust its parameters, yielding the feature map extraction network, the feature embedding layer, and the classification layer, where the preset model structure includes the ResNet structure and the MobileFace structure, and the preset loss function is a margin-based function, specifically including the ArcFace function and the CosFace function.
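The margin-based losses named here (ArcFace, CosFace) share one mechanism: before softmax, the logit of the ground-truth class is penalized by a margin on the angle or on the cosine. A minimal sketch of that logit adjustment (illustrative only; the function name and margin values are assumptions, and the full losses also apply a scale factor and cross-entropy, omitted here):

```python
import math

def margin_logit(cos_theta, margin, kind="arcface"):
    """Adjust the ground-truth-class logit cos(theta) by a margin.
    arcface: cos(theta + m)  (additive angular margin)
    cosface: cos(theta) - m  (additive cosine margin)"""
    if kind == "arcface":
        # Clamp to the valid acos domain before recovering the angle.
        theta = math.acos(max(-1.0, min(1.0, cos_theta)))
        return math.cos(theta + margin)
    elif kind == "cosface":
        return cos_theta - margin
    raise ValueError(kind)
```

Either variant makes the ground-truth logit strictly harder to satisfy, which forces tighter intra-class clustering of the learned embeddings.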
Step S108, inputting the feature map into the feature embedding branch and the feature variance branch respectively, and obtaining the final feature of the face image from the output results;
Specifically, the feature map K(I) is input into the feature embedding branch F, which outputs the face feature mean F(K(I)); the feature map K(I) is input into the feature variance branch S, which outputs the face feature variance S(K(I)). The final feature of the face image, F(K(I)) + S(K(I)), is then obtained from the outputs; i.e., the face image feature is composed of the face feature mean F(K(I)) and the feature variance S(K(I)).
Step S110, based on the final feature of the face image and the classification layer, back-propagating a regression loss function to adjust the parameters of the feature embedding branch and the feature variance branch.
Specifically, based on the final feature of the face image and the classification layer, a regression loss function of the form

L_reg = (1/N) · Σ_{i=1..N} e^(−α_i) · Σ_{l=1..D} [ (w_c[l] − F(K(I_i))[l])² / (2 · S(K(I_i))[l]) + (1/2) · log S(K(I_i))[l] ]

(the original equation is reproduced only as an image; the form shown is reconstructed from the surrounding description) is back-propagated to adjust the parameters of the feature embedding branch and the feature variance branch, wherein w_c represents the feature center of class c in the classification layer, F(K(I)) represents the face feature mean output by the feature embedding branch, S(K(I)) represents the face feature variance output by the feature variance branch, D represents the feature dimension, l denotes the value in the l-th dimension, α represents the age deviation label of a face image, i denotes the i-th face image, and N represents the number of face images.
It should be noted that the training process can be regarded as a regression from a face image I to the class center w_c of its class c: the aim is to estimate a new feature expression for each input image while simultaneously making an appropriate estimate of its feature variance. The quotient term

(w_c[l] − F(K(I_i))[l])² / (2 · S(K(I_i))[l])

(reconstructed; the original is reproduced only as an image) estimates a smaller feature variance for images close to the feature center and a larger one for images far from it, while preventing the feature variance from approaching 0. The logarithmic term

(1/2) · log S(K(I_i))[l]

prevents the feature variance from being estimated too large.
Meanwhile, since a cross-age image is not necessarily a low-quality image but is usually an outlier or hard sample, the age coefficient term

e^(−α_i)

(reconstructed; the original is reproduced only as an image) dynamically adjusts the effect of cross-age images on training: when the image's age has no significant span, the coefficient is close to 1 and the loss reduces to the conventional regression loss; when an age span exists, the coefficient shrinks, weakening the influence of age while still keeping a training gain in case the cross-age image really is of low quality; and when the age span is too large, the coefficient tends to 0, so that such images do not interfere with normal training. The resulting feature embedding branch and feature variance branch are used for face image quality analysis.
Through steps S102 to S110 of this embodiment of the application, the difficulty that cross-age samples pose to image quality analysis in face recognition is solved, as is the problem of an overly bloated structure for quality analysis and recognition: automatic labels distinguish age samples, the correlation between age and quality is adjusted dynamically so that cross-age samples do not interfere with learning the quality analysis, and the learning process of image quality analysis is integrated into face recognition, simplifying the recognition pipeline and reducing the deployment and maintenance costs of the recognition system.
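Because the loss equation itself appears only as an image in the original text, the following Python sketch is a hypothetical rendering of the training objective as described: a per-dimension quotient term (w_c − F)²/(2S), a logarithmic term (1/2)·log S, and an exponential age coefficient e^(−α) that goes to 1 for zero age deviation and toward 0 for large deviations. The function name and exact functional forms are assumptions:

```python
import math

def quality_regression_loss(w_c, F, S, alpha):
    """w_c: per-image class-center vectors; F: feature means; S: feature
    variances (all N x D nested lists); alpha: per-image age deviation
    labels. Sketch of the described loss: a Gaussian-style regression
    term plus a log-variance term per dimension, weighted by an age
    coefficient that is ~1 for alpha = 0 and ~0 for large alpha."""
    N, D = len(F), len(F[0])
    total = 0.0
    for i in range(N):
        coeff = math.exp(-alpha[i])  # age coefficient term
        per_dim = 0.0
        for l in range(D):
            # Quotient term: blows up as S -> 0, keeping variance away from 0.
            quot = (w_c[i][l] - F[i][l]) ** 2 / (2.0 * S[i][l])
            # Logarithmic term: penalizes variance estimated too large.
            log_term = 0.5 * math.log(S[i][l])
            per_dim += quot + log_term
        total += coeff * per_dim
    return total / N
```

With a sample sitting exactly on its class center and unit variance, both terms vanish and the loss is 0; increasing alpha shrinks any sample's contribution exponentially.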
In some embodiments, fig. 3 is a flowchart illustrating a second step of a method for analyzing facial image quality according to an embodiment of the present application, where the method includes a model training phase and a deployment application phase, and as shown in fig. 3, the deployment application phase includes the following steps:
step S302, acquiring an image to be identified, and extracting a feature map of the image to be identified through a feature map extraction network;
it should be noted that fig. 4 is a schematic diagram of quality analysis of a face image in a deployment application stage according to the embodiment of the present application, and as shown in fig. 4, a model that completes training is deployed on edge equipment according to a deployment process of a conventional face recognition system. Due to the multiplexing of the characteristic diagram extraction layer and the classification layer in the image quality analysis and the face image recognition, the deployment process and the maintenance difficulty are simplified.
Step S304, respectively inputting the feature map into the feature embedding branch and the feature variance branch to obtain the face feature mean and the face feature variance;
step S306, based on the face feature variance, obtaining the comprehensive quality score of the image to be recognized through a harmonic-mean-like calculation;
specifically, based on the variance of the face features, by
Figure 545604DEST_PATH_IMAGE002
And calculating to obtain the comprehensive quality score of the image to be identified, wherein,Dthe dimension representing the variance of the feature is,S(K(I))the variance of the face features representing the output of the feature variance branch,lis shown aslThe variance value in the dimension is calculated,idenotes the firstiAnd (5) displaying the face image.
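As a sketch, the harmonic-mean-style score can be computed directly from the per-dimension variances. The exact D / sum(variance) form below is an assumed reading of the "class harmonic mean" calculation, chosen to be consistent with the variable definitions above (higher variance yields a lower quality score):

```python
import numpy as np

def quality_score(feature_variance):
    """Harmonic-mean-style comprehensive quality score.

    feature_variance: per-dimension variance S(K(I)) output by the
    feature variance branch. The D / sum(S_l) form is an assumption,
    not the patent's exact equation (which is rendered as an image).
    """
    v = np.asarray(feature_variance, dtype=float)
    return v.size / float(np.sum(v))  # D divided by the sum of variances
```

With this form, a uniform variance of 1 in every dimension gives a score of 1, and doubling every variance halves the score, so noisier (more uncertain) embeddings are scored as lower quality.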
Step S308, judging whether to perform face recognition according to the comprehensive quality score.
Specifically, whether to perform face recognition is decided from the comprehensive quality score: if the score is lower than a preset threshold, face recognition is stopped; if the score is higher than the preset threshold, retrieval and matching are performed in the face gallery (base library) according to the face feature mean.
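A minimal sketch of this threshold decision and gallery matching follows. The cosine-similarity metric and the threshold value are assumptions for illustration; the patent only specifies retrieving and matching in the face gallery by the feature mean:

```python
import numpy as np

def recognize(quality, mean_feature, gallery, threshold=0.6):
    """Return the best-matching gallery identity, or None when the
    comprehensive quality score is below the preset threshold.

    gallery: dict mapping identity name -> enrolled feature mean.
    Cosine similarity is an illustrative matching metric; the
    threshold value 0.6 is likewise an assumption.
    """
    if quality < threshold:
        return None  # quality too low: stop face recognition
    q = np.asarray(mean_feature, dtype=float)
    q = q / np.linalg.norm(q)

    def cos_sim(enrolled):
        v = np.asarray(enrolled, dtype=float)
        return float(q @ (v / np.linalg.norm(v)))

    # retrieve and match: pick the gallery identity with the
    # highest cosine similarity to the query feature mean
    return max(gallery, key=lambda name: cos_sim(gallery[name]))
```

Gating recognition on the quality score in this way keeps low-quality captures (blur, extreme pose, occlusion) from producing spurious gallery matches.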
Through steps S302 to S308 in the embodiment of the application, the deployment process and the maintenance of quality analysis and image recognition in face recognition are simplified.
It should be noted that the steps illustrated in the above flowcharts may be executed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in a different order than here.
The embodiment of the present application provides a system for analyzing the quality of a human face image, and fig. 5 is a first structural block diagram of the system for analyzing the quality of a human face image according to the embodiment of the present application, and as shown in fig. 5, the system includes a data annotation module 51, a feature extraction module 52 and a parameter adjustment module 53;
in the model training phase of the human face image quality analysis:
the data labeling module 51 is configured to automatically label the training data with an age identification model to obtain an age label for each face image in the training data; to calculate the average age of each ID class in the training data; and to use the difference between the age label of each face image and the average age of its corresponding ID class as the age deviation label of that face image;
the feature extraction module 52 is configured to extract a feature map of each face image in the labeled training data through the feature map extraction network, respectively input the feature map into the feature embedding branch and the feature variance branch, and obtain the final feature of the face image from the output results;
and the parameter adjusting module 53 is configured to perform back propagation parameter adjustment on the feature embedding branch and the feature variance branch by using a regression loss function based on the final feature and the classification layer of the face image.
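The two-branch head handled by the feature extraction module can be sketched as follows. The linear layers, softplus activation, and dimensions are illustrative assumptions; the patent only specifies that the feature map feeds both branches and that the final feature is the sum of the embedding-branch and variance-branch outputs:

```python
import numpy as np

class TwoBranchHead:
    """Toy two-branch head: the feature map K(I) feeds an embedding
    branch F (face feature mean) and a variance branch S (face feature
    variance); the final feature is F(K(I)) + S(K(I)).

    The random linear projections and the softplus activation are
    stand-ins for whatever layers the real model uses."""

    def __init__(self, in_dim, emb_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.Wf = rng.standard_normal((in_dim, emb_dim)) * 0.1
        self.Ws = rng.standard_normal((in_dim, emb_dim)) * 0.1

    def forward(self, feat_map):
        F = feat_map @ self.Wf                    # face feature mean
        S = np.log1p(np.exp(feat_map @ self.Ws))  # softplus keeps variance positive
        return F + S, F, S                        # final feature, mean, variance
```

Returning the mean and variance separately mirrors the deployment stage: the mean drives gallery matching while the variance drives the quality score.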
Through the data labeling module 51, the feature extraction module 52 and the parameter adjustment module 53 in the embodiment of the application, the difficulty that cross-age samples pose for image quality analysis in face recognition is resolved, and the overly bloated structure of separate quality analysis and image recognition is avoided. Age samples are distinguished by automatic labeling, the correlation between age and quality is adjusted dynamically, and interference from cross-age samples in learning quality analysis is avoided. The learning of image quality analysis is integrated into face recognition, simplifying the recognition pipeline and reducing the deployment and maintenance costs of the recognition system.
In some embodiments, fig. 6 is a block diagram of a second structure of a facial image quality analysis system according to an embodiment of the present application, and as shown in fig. 6, the system further includes an acquisition processing module 61 and a calculation analysis module 62;
in the deployment application phase of facial image quality analysis:
the acquisition processing module 61 is used for acquiring an image to be recognized and extracting a feature map of the image to be recognized through the feature map extraction network, and for respectively inputting the feature map into the feature embedding branch and the feature variance branch to obtain the face feature mean and the face feature variance;
the calculation analysis module 62 is configured to obtain the comprehensive quality score of the image to be recognized through a harmonic-mean-like calculation based on the face feature variance, and to judge whether to perform face recognition according to the comprehensive quality score.
It should be noted that the above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the method for analyzing the quality of the face image in the foregoing embodiment, the embodiment of the present application may provide a storage medium to implement. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements the method of human face image quality analysis of any of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of facial image quality analysis. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 7 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application, and as shown in fig. 7, there is provided an electronic device, which may be a server, and an internal structure diagram of which may be as shown in fig. 7. The electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus, wherein the non-volatile memory stores an operating system, a computer program and a database. The processor is used for providing calculation and control capability, the network interface is used for communicating with an external terminal through network connection, the internal memory is used for providing an environment for an operating system and the running of a computer program, the computer program is executed by the processor to realize a method for analyzing the quality of the human face image, and the database is used for storing data.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is a block diagram of only a portion of the architecture associated with the present application and does not limit the electronic devices to which the present application may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It should be understood by those skilled in the art that the technical features of the above-described embodiments can be combined arbitrarily. For brevity, not all possible combinations are described; however, as long as there is no contradiction between the combined features, the combinations should be considered within the scope of this description.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is relatively specific and detailed, but should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (7)

1. A method for analyzing the quality of a face image is characterized by comprising a model training phase and a deployment application phase, wherein the model training phase comprises the following steps:
automatically labeling the training data by adopting an age identification model to obtain an age label of each face image in the training data;
calculating the average age of each ID class in the training data, and using the difference between the age label of each face image and the average age of its corresponding ID class as the age deviation label of the face image;
extracting a feature map of the face image in the labeled training data through a feature map extraction network;
respectively inputting the feature map into a feature embedding branch and a feature variance branch, and obtaining the final feature of the face image according to the output results;
based on the final feature of the face image and the classification layer, performing back-propagation parameter adjustment on the feature embedding branch and the feature variance branch with a regression loss function

L = (1/N) Σ_{i=1}^{N} exp(−α_i²) · (1/D) Σ_{l=1}^{D} [ (w_{c,l} − F_l(K(I_i)))² / (2 S_l(K(I_i))) + (1/2) ln S_l(K(I_i)) ]

wherein w_c represents the feature center of class c in the classification layer, c being the ID class of the i-th face image, F(K(I)) represents the face feature mean output by the feature embedding branch, S(K(I)) represents the face feature variance output by the feature variance branch, D represents the feature dimension, l denotes the value in the l-th dimension, α represents the age deviation label of the face image, i denotes the i-th face image, N represents the number of face images, and C represents the number of classes in the classification layer;
the obtained feature embedding branch and feature variance branch are used for face image quality analysis, and the deployment application stage comprises the following steps:
acquiring an image to be recognized, and extracting a feature map of the image to be recognized through the feature map extraction network;
respectively inputting the feature map into the feature embedding branch and the feature variance branch to obtain a face feature mean and a face feature variance;
based on the face feature variance, obtaining a comprehensive quality score of the image to be recognized through a harmonic-mean-like calculation; and judging whether to perform face recognition according to the comprehensive quality score.
2. The method of claim 1, wherein the feature map extraction network and the classification layer are internal structural layers of a face recognition model;
and training the face recognition model based on the training data to obtain a feature map extraction network, a feature embedding layer and a classification layer.
3. The method of claim 1, wherein the step of inputting the feature map into a feature embedding branch and a feature variance branch respectively and obtaining a final feature of the face image according to an output result comprises:
will feature mapK(I)Input feature embedding branchingFAnd outputting the average value of the human face featuresF(K(I))Drawing the characteristicsK(I)Input feature variance branchSAnd outputting the variance of the human face featuresS(K(I))
Obtaining the final characteristics of the face image according to the output resultF(K(I))+ S(K(I))
4. The method of claim 1, wherein obtaining the comprehensive quality score of the image to be recognized through a harmonic-mean-like calculation based on the face feature variance comprises:
based on the face feature variance, calculating the comprehensive quality score of the image to be recognized as

Q(I_i) = D / Σ_{l=1}^{D} S_l(K(I_i))

wherein D represents the dimension of the feature variance, S(K(I)) represents the face feature variance output by the feature variance branch, l denotes the variance value in the l-th dimension, and i denotes the i-th face image.
5. The method of claim 1, wherein determining whether to perform face recognition based on the composite quality score comprises:
judging whether to perform face recognition according to the comprehensive quality score;
if the comprehensive quality score is lower than a preset threshold value, stopping the face recognition;
and if the comprehensive quality score is higher than the preset threshold, retrieving and matching in the face gallery (base library) according to the face feature mean.
6. The method of claim 2, wherein training the face recognition model based on the training data to obtain a feature map extraction network, a feature embedding layer, and a classification layer comprises:
training the face recognition model based on the training data and a preset model structure, and performing back-propagation parameter adjustment on the face recognition model with a preset loss function to obtain the feature map extraction network, the feature embedding layer and the classification layer, wherein the preset model structure includes a resnet structure and a mobileface structure, and the preset loss function is a margin-based function, specifically including the arcface function and the cosface function.
7. A system for analyzing the quality of a face image is characterized by comprising a data annotation module, a feature extraction module and a parameter adjustment module;
in the model training phase of the human face image quality analysis:
the data labeling module is used for automatically labeling the training data with an age identification model to obtain an age label for each face image in the training data; calculating the average age of each ID class in the training data; and using the difference between the age label of each face image and the average age of its corresponding ID class as the age deviation label of the face image;
the feature extraction module is used for extracting a feature map of each face image in the labeled training data through a feature map extraction network, respectively inputting the feature map into a feature embedding branch and a feature variance branch, and obtaining the final feature of the face image according to the output results;
the parameter adjusting module is used for adopting a regression loss function based on the final characteristics and the classification layer of the face image
Figure DEST_PATH_IMAGE005
Performing a back propagation tuning parameter on the feature embedding branch and the feature variance branch, wherein,w c representing classes in a classification layercThe center of the feature of (a),F(K(I))the human face feature mean value of the feature embedding branch output is represented,S(K(I))the variance of the face features representing the output of the feature variance branch,Dthe dimensions of the features are represented in the graph,ldenotes the firstlThe value in the dimension(s) is,αan age deviation label representing an image of a human face,iis shown asiA human face image is displayed on the screen,Nthe number of the face images is represented,Crepresenting the number of categories of the classification layer; resulting feature embedded branchesAnd the characteristic variance branch is used for the quality analysis of the face image;
the system also comprises an acquisition processing module and a calculation analysis module; in the deployment application phase of facial image quality analysis:
the acquisition processing module is used for acquiring an image to be recognized and extracting a feature map of the image to be recognized through the feature map extraction network; and respectively inputting the feature map into the feature embedding branch and the feature variance branch to obtain a face feature mean and a face feature variance;
the calculation analysis module is used for obtaining a comprehensive quality score of the image to be recognized through a harmonic-mean-like calculation according to the face feature variance, and judging whether to perform face recognition according to the comprehensive quality score.
CN202210907766.2A 2022-07-29 2022-07-29 Method and system for analyzing quality of face image Active CN114998978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210907766.2A CN114998978B (en) 2022-07-29 2022-07-29 Method and system for analyzing quality of face image

Publications (2)

Publication Number Publication Date
CN114998978A CN114998978A (en) 2022-09-02
CN114998978B true CN114998978B (en) 2022-12-16

Family

ID=83020909

Country Status (1)

Country Link
CN (1) CN114998978B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170654A (en) * 2021-11-26 2022-03-11 深圳数联天下智能科技有限公司 Training method of age identification model, face age identification method and related device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650650B (en) * 2016-12-14 2020-04-24 广东顺德中山大学卡内基梅隆大学国际联合研究院 Cross-age face recognition method
US10565433B2 (en) * 2017-03-30 2020-02-18 George Mason University Age invariant face recognition using convolutional neural networks and set distances
CN107977633B (en) * 2017-12-06 2019-04-09 平安科技(深圳)有限公司 Age recognition methods, device and the storage medium of facial image
CN109993125B (en) * 2019-04-03 2022-12-23 腾讯科技(深圳)有限公司 Model training method, face recognition device, face recognition equipment and storage medium
CN112183326A (en) * 2020-09-27 2021-01-05 深圳数联天下智能科技有限公司 Face age recognition model training method and related device
CN114549502A (en) * 2022-02-28 2022-05-27 上海商汤智能科技有限公司 Method and device for evaluating face quality, electronic equipment and storage medium
CN114694215A (en) * 2022-03-16 2022-07-01 北京金山云网络技术有限公司 Method, device, equipment and storage medium for training and estimating age estimation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant