CN112001865A - Face recognition method, device and equipment - Google Patents
- Publication number
- CN112001865A (application number CN202010910226.0A)
- Authority
- CN
- China
- Prior art keywords
- sparse representation
- image
- preset
- face
- face recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 5/00 — Image enhancement or restoration
- G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N 3/045 — Combinations of networks
- G06N 3/048 — Activation functions
- G06N 3/08 — Learning methods
- G06V 10/44 — Local feature extraction by analysis of parts of the pattern
- G06V 10/513 — Sparse representations
- G06F 2218/12 — Classification; Matching
- G06T 2207/10004 — Still image; Photographic image
- G06T 2207/20081 — Training; Learning
- G06T 2207/30201 — Face
Abstract
The application discloses a face recognition method, device and equipment. The method comprises: acquiring a test face image and repairing it through a preset generative adversarial network to obtain a repaired image; performing face sparse representation on the repaired image through a dictionary matrix constructed from a preset training sample set to obtain a reconstructed image for each class of training samples; and calculating the residual value between the reconstructed image of each class and the repaired image, and selecting the class of training samples with the smallest residual value as the face recognition result of the test face image. This solves the technical problem that existing face recognition methods are prone to recognition errors, and therefore low recognition accuracy, when the face image suffers from image quality problems such as occlusion, uneven illumination or blur.
Description
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a face recognition method, apparatus and device.
Background
Identity recognition and verification methods are widely applied in fields such as public security and electronic commerce, and rely primarily on biometric identification techniques. Biometric identification refers to recognizing or verifying an identity by intelligent methods based on human physiological characteristics such as fingerprints, palm prints and irises, among which face recognition is the most commonly used. Face recognition uses the face as the biometric feature and, unlike other biometric methods, has the advantages of being contactless, convenient and fast, and offering high recognition performance.
Prior-art face recognition methods are prone to recognition errors when the face image suffers from image quality problems such as occlusion, uneven illumination or blur, so the recognition accuracy is not high.
Disclosure of Invention
The application provides a face recognition method, device and equipment, which are used for solving the technical problem that existing face recognition methods are prone to recognition errors, and thus low face recognition accuracy, when the face image suffers from image quality problems such as occlusion, uneven illumination or blur.
In view of the above, a first aspect of the present application provides a face recognition method, including:
acquiring a test face image, and repairing the test face image through a preset generative adversarial network to obtain a repaired image;
performing face sparse representation on the repaired image through a dictionary matrix constructed from a preset training sample set to obtain reconstructed images corresponding to each class of training samples;
and calculating residual values of the reconstructed images and the repaired images corresponding to the training samples, and selecting the class of the training sample corresponding to the minimum residual value as a face recognition result of the test face image.
Optionally, performing face sparse representation on the repaired image through a dictionary matrix constructed from a preset training sample set to obtain reconstructed images corresponding to each class of training samples includes:
converting each training sample in a preset training sample set containing k classes into an m-dimensional column vector, and combining the m-dimensional column vectors corresponding to all training samples of all classes to obtain a dictionary matrix A corresponding to the preset training sample set, wherein the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all training samples of the i-th class in the preset training sample set;
constructing a sparse representation model from the dictionary matrix A and the repaired image;
solving the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples;
and performing face sparse representation on the repaired image through the sparse representation coefficients corresponding to each class of training samples and the dictionary matrix, to obtain the reconstructed images corresponding to each class of training samples.
Optionally, the sparse representation model is:

x̂ = argmin_x ||x||_1  subject to  ||y − A·x||_2 ≤ ε

wherein x is the sparse representation coefficient vector, y is the repaired image, and ε is the error tolerance.
Optionally, solving the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples includes:
S1, initializing the target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration count t = 1, an initial residual r_0 = y, an initial sparse representation coefficient x = 0, and an index set Λ_0 = ∅;
S2, substituting the current residual into the objective function λ_t = argmax_j |⟨r_{t−1}, a_j⟩| (where a_j is the j-th column of the dictionary matrix A) to obtain the index λ_t;
S3, updating the index set Λ_t = Λ_{t−1} ∪ {λ_t} based on the index λ_t, and computing the sparse representation coefficient x_t = argmin_x ||y − A_{Λ_t}·x||_2 based on the updated index set;
S4, updating the residual r_t = y − A·x_t; when the residual r_t satisfies the preset convergence condition, outputting the sparse representation coefficient x_t; when the residual r_t does not satisfy the preset convergence condition, adding one to the iteration count and returning to step S2.
Optionally, the preset generative adversarial network is obtained by training on the preset training sample set, and its training optimization function is:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]

wherein G is the generation network in the preset generative adversarial network, D is the discrimination network, x is a real face image, G(z) is the face image generated by the generation network, P_data is the data distribution of real face images, and P_z is the noise distribution.
A second aspect of the present application provides a face recognition apparatus, including:
the image restoration unit is used for acquiring a test face image and repairing the test face image through a preset generative adversarial network to obtain a repaired image;
the sparse representation unit is used for performing face sparse representation on the repaired image through a dictionary matrix constructed from a preset training sample set to obtain reconstructed images corresponding to each class of training samples;
and the computing unit is used for computing the residual values between the reconstructed images of each class of training samples and the repaired image, and selecting the class of training samples corresponding to the smallest residual value as the face recognition result of the test face image.
Optionally, the sparse representation unit includes:
the combining subunit, configured to convert each training sample in a preset training sample set containing k classes into an m-dimensional column vector, and combine the m-dimensional column vectors corresponding to all training samples of all classes to obtain a dictionary matrix A corresponding to the preset training sample set, wherein the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all training samples of the i-th class in the preset training sample set;
the construction subunit, configured to construct a sparse representation model from the dictionary matrix A and the repaired image;
the solving subunit, configured to solve the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples;
and the sparse representation subunit, configured to perform face sparse representation on the repaired image through the sparse representation coefficients corresponding to each class of training samples and the dictionary matrix, to obtain the reconstructed images corresponding to each class of training samples.
Optionally, the sparse representation model is:

x̂ = argmin_x ||x||_1  subject to  ||y − A·x||_2 ≤ ε

wherein x is the sparse representation coefficient vector, y is the repaired image, and ε is the error tolerance.
Optionally, the solving subunit is specifically configured to:
S1, initialize the target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration count t = 1, an initial residual r_0 = y, an initial sparse representation coefficient x = 0, and an index set Λ_0 = ∅;
S2, substitute the current residual into the objective function λ_t = argmax_j |⟨r_{t−1}, a_j⟩| (where a_j is the j-th column of the dictionary matrix A) to obtain the index λ_t;
S3, update the index set Λ_t = Λ_{t−1} ∪ {λ_t} based on the index λ_t, and compute the sparse representation coefficient x_t = argmin_x ||y − A_{Λ_t}·x||_2 based on the updated index set;
S4, update the residual r_t = y − A·x_t; when the residual r_t satisfies the preset convergence condition, output the sparse representation coefficient x_t; when the residual r_t does not satisfy the preset convergence condition, add one to the iteration count and return to step S2.
A third aspect of the present application provides a face recognition device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the face recognition method according to any one of the first aspect according to instructions in the program code.
According to the technical scheme, the method has the following advantages:
the application provides a face recognition method, which comprises the following steps: acquiring a test face image, and repairing the test face image through a preset confrontation generation network to obtain a repaired image; performing face sparse representation on the restored image through a dictionary matrix constructed by a preset training sample set to obtain reconstructed images corresponding to various training samples; and calculating residual values of the reconstructed image and the repaired image corresponding to each type of training sample, and selecting the type of the training sample corresponding to the minimum residual value as a face recognition result of the tested face image.
In the face recognition method provided by the application, the acquired test face image is first repaired through the preset generative adversarial network, which removes occlusion, illumination and blur defects from the test face image, improves the image quality, and thereby improves the face recognition accuracy. A dictionary matrix is then constructed from the training samples for face sparse representation to obtain reconstructed images, and the residual values between the reconstructed images of each class of training samples and the repaired image are calculated to obtain the face recognition result. This solves the technical problem that existing face recognition methods are prone to recognition errors, and therefore low recognition accuracy, when the face image suffers from image quality problems such as occlusion, uneven illumination or blur.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a face recognition method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application.
Detailed Description
The application provides a face recognition method, device and equipment, which are used for solving the technical problem that existing face recognition methods are prone to recognition errors, and thus low face recognition accuracy, when the face image suffers from image quality problems such as occlusion, uneven illumination or blur.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For easy understanding, referring to fig. 1, an embodiment of a face recognition method provided in the present application includes:
101, acquiring a test face image, and repairing the test face image through a preset generative adversarial network to obtain a repaired image.

When a face image suffers from image quality problems such as occlusion, uneven illumination or blur, recognition errors occur easily and the face recognition accuracy is low. In the embodiment of the application, the face image is therefore repaired through the preset generative adversarial network, which improves the quality of the face image.
The preset generative adversarial network comprises a generation network and a discrimination network. The generation network is used to generate an image close to the original image and adopts an autoencoder structure, which is divided into two parts: an encoder and a decoder. The encoder maps the original image to a hidden-layer representation, and the decoder generates a face image close to the real image from this representation. The encoder consists of four convolutional layers, each followed by a batch normalization layer and a ReLU activation layer. The decoder consists of four transposed convolutional layers; the first three are each followed by a batch normalization layer and a ReLU activation layer, and the last is followed by a Tanh activation layer. The encoder and decoder are connected through a residual module, which consists of a convolutional layer, a batch normalization layer and a ReLU activation layer connected in sequence.
The discrimination network is used to judge whether an input image is a real image or a generated image. It consists of five convolutional layers; the first four are each followed by a batch normalization layer and a LeakyReLU layer, and the last is followed by a Sigmoid activation function that outputs the probability that the input image is a real image.
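As a rough sanity check on the layer counts above, the spatial sizes through a stride-2 convolutional stack can be computed with the standard output-size formula. The kernel size of 4, stride 2, padding 1, and a 64×64 input are assumptions for illustration, since the patent specifies only the number of layers:

```python
# Spatial-size arithmetic for a stride-2 convolutional stack.
# Assumed hyperparameters (not given in the patent): kernel 4, stride 2,
# padding 1, input resolution 64x64.

def conv_out(n, k=4, s=2, p=1):
    """Output spatial size of one convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

size = 64
sizes = [size]
for _ in range(4):  # four convolutional layers, as in the encoder above
    size = conv_out(size)
    sizes.append(size)

print(sizes)  # each stride-2 layer halves the spatial size
```

With these assumed settings the encoder (and discriminator) reduces a 64×64 input to a 4×4 feature map, which the decoder's four transposed convolutions would mirror back up to 64×64.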
Further, the preset generative adversarial network is obtained by training on the preset training sample set, and its training optimization function is:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]

wherein G is the generation network in the preset generative adversarial network, D is the discrimination network, x is a real face image, G(z) is the face image generated by the generation network, P_data is the data distribution of real face images, and P_z is the noise distribution, which here represents image degradations such as occlusion, illumination and blur.
The generative adversarial network is trained on the preset training sample set. During training, the generation network and the discrimination network are strengthened simultaneously through competition, until the images produced by the generation network are realistic enough that the discrimination network can no longer distinguish generated images from real ones.
To simulate face images with occlusion, illumination and similar defects during training, the complete face image is multiplied elementwise by a binary corruption mask to obtain the pre-repair face image. The corruption mask has the same size as the complete face image and contains only 0 and 1 values, where 0 marks a missing pixel and 1 marks a known region. During training, the pre-repair face image is input into the generation network, which processes it through the encoder and decoder to obtain the repaired face image; the discriminator then discriminates between the repaired face image and the original complete face image, and the resulting judgment in turn guides the generation network to produce a more realistic repaired image. Through this adversarial training, the generation network eventually produces more complete and realistic repaired images.
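The corruption step described above can be sketched as an elementwise product with a binary mask; the 4×4 image, its pixel values, and the occluded region below are illustrative only:

```python
# Simulate a corrupted training input: elementwise product of the complete
# face image with a binary mask (0 = missing pixel, 1 = known pixel).
face = [[10, 20, 30, 40],
        [50, 60, 70, 80],
        [90, 100, 110, 120],
        [130, 140, 150, 160]]

# Mask out a 2x2 "occluded" block in the top-left corner.
mask = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]

corrupted = [[face[r][c] * mask[r][c] for c in range(4)] for r in range(4)]
```

The corrupted image is fed to the generator, while the uncorrupted `face` plays the role of the real image the discriminator compares against.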
102, performing face sparse representation on the repaired image through a dictionary matrix constructed by a preset training sample set to obtain reconstructed images corresponding to various training samples.
The repaired image is sparsely represented through a dictionary matrix constructed from the preset training sample set, to obtain the reconstructed images corresponding to each class of training samples.
Further, the specific process of performing face sparse representation on the repaired image through a dictionary matrix constructed from the preset training sample set comprises the following steps:
1. Convert each training sample in the preset training sample set containing k classes into an m-dimensional column vector, and combine the m-dimensional column vectors corresponding to all training samples of all classes to obtain the dictionary matrix A corresponding to the preset training sample set, wherein the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all training samples of the i-th class.
The preset training sample set contains k classes. The i-th class has n_i training face images, each of size w × h. Each training face image is converted into an m-dimensional column vector v ∈ R^m (m = w × h), so the i-th class of training samples can be expressed as A_i = [v_{i,1}, v_{i,2}, …, v_{i,n_i}] ∈ R^{m×n_i}, and the dictionary matrix corresponding to the preset training sample set can be expressed as A = [A_1, A_2, …, A_i, …, A_k].
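A minimal sketch of this dictionary construction, using hypothetical 2×2 images and two classes of two samples each (a real training set would use w×h face images and k classes):

```python
# Build the dictionary matrix A by flattening each w x h training image into
# an m-dimensional column (m = w*h) and stacking the columns class by class.

def to_column(img):
    """Row-major flattening of a w x h image into an m-dimensional vector."""
    return [px for row in img for px in row]

class_1 = [[[1, 2], [3, 4]], [[2, 2], [3, 5]]]   # n_1 = 2 samples
class_2 = [[[9, 8], [7, 6]], [[9, 9], [6, 6]]]   # n_2 = 2 samples

# A stored column-major: A[j] is the j-th m-dimensional column vector.
A = [to_column(s) for s in class_1] + [to_column(s) for s in class_2]
column_class = [1, 1, 2, 2]  # which class each column of A belongs to

m = len(A[0])   # m = w*h = 4
n = len(A)      # total number of training samples = n_1 + n_2 = 4
```

Keeping `column_class` alongside the columns makes it easy later to restrict the sparse coefficients to one class when computing per-class reconstructions.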
2. Construct a sparse representation model from the dictionary matrix A and the repaired image.

The sparse representation model is:

x̂ = argmin_x ||x||_1  subject to  ||y − A·x||_2 ≤ ε

wherein x is the sparse representation coefficient vector, y is the repaired image, and ε is the error tolerance.
3. Solve the sparse representation model to obtain the sparse representation coefficients corresponding to each class of training samples.
The solving process (an orthogonal matching pursuit procedure) specifically comprises the following steps:

S1, initializing the target parameters related to the sparse representation model: initial iteration count t = 1, initial residual r_0 = y, initial sparse representation coefficient x = 0, and index set Λ_0 = ∅;

S2, substituting the current residual into the objective function λ_t = argmax_j |⟨r_{t−1}, a_j⟩| (where a_j is the j-th column of A) to obtain the index λ_t;

S3, updating the index set Λ_t = Λ_{t−1} ∪ {λ_t} based on the index λ_t, and computing the sparse representation coefficient x_t = argmin_x ||y − A_{Λ_t}·x||_2 based on the updated index set, i.e. the coefficient minimizing the reconstruction error over the selected columns;

S4, updating the residual r_t = y − A·x_t; when r_t satisfies the preset convergence condition, outputting the sparse representation coefficient x_t; otherwise adding one to the iteration count (t = t + 1) and returning to step S2.

The preset convergence condition is ||r_t||_2 < τ, where τ is a very small constant that can be set flexibly as required. When the residual r_t satisfies the condition, the sparse representation coefficient x_t obtained at the current iteration t is output as the solution x.
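Steps S1 to S4 can be sketched as the following pure-Python orthogonal matching pursuit. The tolerance default, the least-squares solver via normal equations, and the toy dictionary in the demo call are illustrative assumptions, not the patent's implementation:

```python
import math

def _lstsq(cols, y):
    """Solve min ||y - C c||_2 via the normal equations C^T C c = C^T y."""
    k = len(cols)
    G = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    b = [sum(a * yi for a, yi in zip(cols[i], y)) for i in range(k)]
    # Gaussian elimination with partial pivoting on the k x k system.
    for i in range(k):
        p = max(range(i, k), key=lambda q: abs(G[q][i]))
        G[i], G[p] = G[p], G[i]
        b[i], b[p] = b[p], b[i]
        for q in range(i + 1, k):
            f = G[q][i] / G[i][i]
            for c in range(i, k):
                G[q][c] -= f * G[i][c]
            b[q] -= f * b[i]
    c = [0.0] * k
    for i in range(k - 1, -1, -1):
        c[i] = (b[i] - sum(G[i][j] * c[j] for j in range(i + 1, k))) / G[i][i]
    return c

def omp(A_cols, y, tau=1e-6, max_iter=None):
    """OMP sketch: A_cols is a list of m-dim columns of A, y an m-dim vector."""
    m, n = len(y), len(A_cols)
    max_iter = max_iter or n
    support, r = [], list(y)          # S1: Lambda_0 = {}, r_0 = y
    x = [0.0] * n
    for _ in range(max_iter):
        # S2: pick the column most correlated with the current residual.
        lam = max(range(n),
                  key=lambda j: abs(sum(a * b for a, b in zip(A_cols[j], r))))
        if lam not in support:
            support.append(lam)       # S3: Lambda_t = Lambda_{t-1} U {lam_t}
        coef = _lstsq([A_cols[j] for j in support], y)
        x = [0.0] * n
        for j, c in zip(support, coef):
            x[j] = c
        # S4: update the residual r_t = y - A x_t.
        Ax = [sum(A_cols[j][i] * x[j] for j in support) for i in range(m)]
        r = [yi - axi for yi, axi in zip(y, Ax)]
        if math.sqrt(sum(v * v for v in r)) < tau:
            break                     # convergence condition ||r_t|| < tau
    return x

# Toy demo: y is an exact combination of columns 0 and 2.
x = omp([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]], [3.0, 0.0, 5.0, 0.0])
```

On this toy dictionary the solver recovers the support {0, 2} in two iterations; a real instance would use the face dictionary A and the repaired image y.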
4. Perform face sparse representation on the repaired image through the sparse representation coefficients corresponding to each class of training samples and the dictionary matrix, to obtain the reconstructed images corresponding to each class of training samples.
The repaired image is sparsely represented through the obtained sparse representation coefficient x and the dictionary matrix A. The reconstructed image corresponding to the i-th class of training samples can be expressed as y_i = A·x_i, where x_i denotes the sparse representation coefficients corresponding to the i-th class of training samples.
And 103, calculating residual values of the reconstructed image and the repaired image corresponding to each type of training sample, and selecting the type of the training sample corresponding to the minimum residual value as a face recognition result of the tested face image.
The residual value between the reconstructed image of each class of training samples and the repaired image is calculated as:

r_i(y) = ||y − y_i||_2

wherein y is the repaired image and y_i is the reconstructed image corresponding to the i-th class of training samples. The class of training samples corresponding to the smallest residual value r_i is selected as the recognition result of the test face image.
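The classification rule above, computing r_i(y) = ||y − y_i||_2 for each class and picking the smallest, can be sketched as follows; the vectors and class names are illustrative only:

```python
import math

def residual(y, y_i):
    """r_i(y) = ||y - y_i||_2, the Euclidean distance between the repaired
    image y and the class-i reconstruction y_i."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_i)))

# Repaired image y and per-class reconstructions y_i (illustrative values).
y = [1.0, 2.0, 3.0]
reconstructions = {
    "class_1": [1.1, 2.0, 2.9],   # close to y  -> small residual
    "class_2": [4.0, 0.0, 1.0],   # far from y  -> large residual
}

# The recognition result is the class with the smallest residual value.
result = min(reconstructions, key=lambda c: residual(y, reconstructions[c]))
```

In the method itself, each `y_i` would be `A·x_i` from the previous step, and `result` would be the predicted identity of the test face image.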
According to the face recognition method of the embodiment of the application, the acquired test face image is first repaired through the preset generative adversarial network, which removes occlusion, illumination and blur defects, improves the image quality, and thereby improves the face recognition accuracy. A dictionary matrix is then constructed from the training samples for face sparse representation to obtain reconstructed images, and the residual values between the reconstructed images of each class of training samples and the repaired image are calculated to obtain the face recognition result. This solves the technical problem that existing face recognition methods are prone to recognition errors, and therefore low recognition accuracy, when the face image suffers from image quality problems such as occlusion, uneven illumination or blur.
The above is an embodiment of a face recognition method according to the present application, and the following is an embodiment of a face recognition apparatus according to the present application.
For easy understanding, referring to fig. 2, an embodiment of a face recognition apparatus provided in the present application includes:
the image restoration unit 201 is configured to acquire a test face image, and repair the test face image through a preset generative adversarial network to obtain a repaired image;
the sparse representation unit 202 is configured to perform face sparse representation on the repaired image through a dictionary matrix constructed from a preset training sample set to obtain reconstructed images corresponding to each class of training samples;
and the calculating unit 203 is configured to calculate the residual values between the reconstructed images of each class of training samples and the repaired image, and select the class of training samples corresponding to the smallest residual value as the face recognition result of the test face image.
As a further improvement, the sparse representation unit 202 includes:
the combining subunit 2021 is configured to convert each training sample in the preset training sample set containing k classes into an m-dimensional column vector, and combine the m-dimensional column vectors corresponding to all training samples of all classes to obtain the dictionary matrix A corresponding to the preset training sample set, wherein the i-th element of the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all training samples of the i-th class in the preset training sample set;
the construction subunit 2022 is configured to construct a sparse representation model by using the dictionary matrix a and the repaired image;
the solving subunit 2023 is configured to solve the sparse representation model to obtain sparse representation coefficients corresponding to various training samples;
the sparse representation subunit 2024 is configured to perform face sparse representation on the repaired image through the sparse representation coefficients and the dictionary matrix corresponding to the various training samples, so as to obtain reconstructed images corresponding to the various training samples.
As a further improvement, the sparse representation model is:

y = Ax + ε

wherein x is the sparse representation coefficient, y is the repaired image, and ε is the error.
As a further improvement, the solving subunit 2023 is specifically configured to:
S1, initializing target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration number t = 1, an initial residual r_0 = y, an initial sparse representation coefficient x_0 = 0 and an index set Λ_0 = ∅;
S2, substituting the initial residual and the initial sparse representation coefficient in the initialized target parameters into the objective function λ_t = argmax_{j=1,...,n} |⟨r_{t-1}, a_j⟩|, wherein a_j is the jth column of the dictionary matrix A, to obtain the index λ_t;
S3, updating the index set Λ_t = Λ_{t-1} ∪ {λ_t} based on the index λ_t, and computing the corresponding sparse representation coefficient x_t = argmin_x ‖y − A_{Λ_t}x‖_2 based on the updated index set;
S4, updating the residual r_t = y − Ax_t; when the residual r_t meets a preset convergence condition, outputting the sparse representation coefficient x_t; when the residual r_t does not meet the preset convergence condition, adding one to the iteration number, and returning to step S2.
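Steps S1 to S4 describe a greedy solver in the style of orthogonal matching pursuit. A minimal NumPy sketch, assuming the preset convergence condition is a simple residual-norm threshold (the `tol` and `max_iter` parameters and the function name are our own additions):

```python
import numpy as np

def solve_sparse_coefficients(y, A, max_iter=50, tol=1e-6):
    """Greedy OMP-style solver following steps S1-S4."""
    m, n = A.shape
    r = y.astype(float).copy()          # S1: r_0 = y
    x = np.zeros(n)                     #     x_0 = 0
    support = []                        #     Lambda_0 = empty set
    for _ in range(max_iter):
        # S2: pick the dictionary column most correlated with the residual
        lam = int(np.argmax(np.abs(A.T @ r)))
        if lam not in support:
            support.append(lam)         # S3: Lambda_t = Lambda_{t-1} U {lambda_t}
        # least-squares fit restricted to the current support (A_Lambda_t)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        r = y - A @ x                   # S4: r_t = y - A x_t
        if np.linalg.norm(r) < tol:     # preset convergence condition (assumed form)
            break
    return x
```

With an identity dictionary, for example, the sketch recovers the support of y in as many iterations as y has nonzero entries.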
An embodiment of the present application further provides a face recognition device, where the face recognition device includes a processor and a memory:
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is used for executing the face recognition method in the foregoing method embodiment according to instructions in the program code.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for executing all or part of the steps of the method described in the embodiments of the present application through a computer device (which may be a personal computer, a server, or a network device). And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. A face recognition method, comprising:
acquiring a test face image, and repairing the test face image through a preset countermeasure generation network to obtain a repaired image;
performing face sparse representation on the restored image through a dictionary matrix constructed by a preset training sample set to obtain reconstructed images corresponding to various training samples;
and calculating the residual value between the reconstructed image corresponding to each class of training samples and the repaired image, and selecting the class of training samples corresponding to the minimum residual value as the face recognition result of the test face image.
2. The face recognition method according to claim 1, wherein the performing face sparse representation on the restored image through a dictionary matrix constructed from a preset training sample set to obtain reconstructed images corresponding to various training samples comprises:
converting each training sample in a preset training sample set containing k classes into m-dimensional column vectors, and combining the m-dimensional column vectors corresponding to all the training samples in all the classes to obtain a dictionary matrix A corresponding to the preset training sample set, wherein the ith element in the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all the training samples in the ith class in the preset training sample set;
constructing a sparse representation model through the dictionary matrix A and the repaired image;
solving the sparse representation model to obtain sparse representation coefficients corresponding to various training samples;
and carrying out face sparse representation on the restored image through sparse representation coefficients corresponding to the various training samples and the dictionary matrix to obtain reconstructed images corresponding to the various training samples.
4. The face recognition method according to claim 3, wherein the solving the sparse representation model to obtain sparse representation coefficients corresponding to the training samples of each type comprises:
S1, initializing target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration number t = 1, an initial residual r_0 = y, an initial sparse representation coefficient x_0 = 0 and an index set Λ_0 = ∅;
S2, substituting the initial residual and the initial sparse representation coefficient in the initialized target parameters into the objective function λ_t = argmax_{j=1,...,n} |⟨r_{t-1}, a_j⟩|, wherein a_j is the jth column of the dictionary matrix A, to obtain the index λ_t;
S3, updating the index set Λ_t = Λ_{t-1} ∪ {λ_t} based on the index λ_t, and computing the corresponding sparse representation coefficient x_t = argmin_x ‖y − A_{Λ_t}x‖_2 based on the updated index set;
S4, updating the residual r_t = y − Ax_t; when the residual r_t meets a preset convergence condition, outputting the sparse representation coefficient x_t; when the residual r_t does not meet the preset convergence condition, adding one to the iteration number, and returning to step S2.
5. The face recognition method of claim 1, wherein the preset confrontation generation network is obtained by training with the preset training sample set, and the training optimization function of the preset confrontation generation network is:

min_G max_D V(D, G) = E_{x∼P_data}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]

wherein G is the generation network in the preset confrontation generation network, D is the discrimination network in the preset confrontation generation network, x is a real face image, G(z) is the face image generated by the generation network, P_data is the data distribution of real face images, and P_z is the noise distribution.
6. A face recognition apparatus, comprising:
the image restoration unit is used for acquiring a test face image and restoring the test face image through a preset countermeasure generation network to obtain a restored image;
the sparse representation unit is used for carrying out face sparse representation on the restored image through a dictionary matrix constructed by a preset training sample set to obtain reconstructed images corresponding to various training samples;
and the computing unit is configured to calculate the residual value between the reconstructed image corresponding to each class of training samples and the repaired image, and to select the class of training samples corresponding to the minimum residual value as the face recognition result of the test face image.
7. The face recognition apparatus according to claim 6, wherein the sparse representation unit includes:
the combining subunit is configured to convert each training sample in a preset training sample set containing k classes into an m-dimensional column vector, and combine the m-dimensional column vectors corresponding to all the training samples in all the classes to obtain a dictionary matrix A corresponding to the preset training sample set, wherein the ith sub-matrix of the dictionary matrix A is obtained by combining the m-dimensional column vectors corresponding to all the training samples of the ith class in the preset training sample set;
the construction subunit is used for constructing a sparse representation model through the dictionary matrix A and the repaired image;
the solving subunit is used for solving the sparse representation model to obtain sparse representation coefficients corresponding to various training samples;
and the sparse representation subunit is used for performing face sparse representation on the repaired image through the sparse representation coefficients corresponding to the various training samples and the dictionary matrix to obtain reconstructed images corresponding to the various training samples.
9. The face recognition device of claim 7, wherein the solving subunit is specifically configured to:
S1, initializing target parameters related to the sparse representation model, wherein the initialized target parameters comprise an initial iteration number t = 1, an initial residual r_0 = y, an initial sparse representation coefficient x_0 = 0 and an index set Λ_0 = ∅;
S2, substituting the initial residual and the initial sparse representation coefficient in the initialized target parameters into the objective function λ_t = argmax_{j=1,...,n} |⟨r_{t-1}, a_j⟩|, wherein a_j is the jth column of the dictionary matrix A, to obtain the index λ_t;
S3, updating the index set Λ_t = Λ_{t-1} ∪ {λ_t} based on the index λ_t, and computing the corresponding sparse representation coefficient x_t = argmin_x ‖y − A_{Λ_t}x‖_2 based on the updated index set;
S4, updating the residual r_t = y − Ax_t; when the residual r_t meets a preset convergence condition, outputting the sparse representation coefficient x_t; when the residual r_t does not meet the preset convergence condition, adding one to the iteration number, and returning to step S2.
10. A face recognition device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the face recognition method according to any one of claims 1 to 5 according to instructions in the program code.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010910226.0A CN112001865A (en) | 2020-09-02 | 2020-09-02 | Face recognition method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112001865A true CN112001865A (en) | 2020-11-27 |
Family
ID=73465898
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010910226.0A Pending CN112001865A (en) | 2020-09-02 | 2020-09-02 | Face recognition method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001865A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104063714A (en) * | 2014-07-20 | 2014-09-24 | 詹曙 | Fast human face recognition algorithm used for video monitoring and based on CUDA parallel computing and sparse representing |
WO2016050729A1 (en) * | 2014-09-30 | 2016-04-07 | Thomson Licensing | Face inpainting using piece-wise affine warping and sparse coding |
CN109377448A (en) * | 2018-05-20 | 2019-02-22 | 北京工业大学 | A kind of facial image restorative procedure based on generation confrontation network |
CN109492610A (en) * | 2018-11-27 | 2019-03-19 | 广东工业大学 | A kind of pedestrian recognition methods, device and readable storage medium storing program for executing again |
Non-Patent Citations (6)
Title |
---|
WAI KEUNG WONG,NA HAN, ET. AL.: "Clustering Structure-Induced Robust Multi-View Graph Recovery", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 30, no. 10, 2 October 2019 (2019-10-02), pages 3584, XP011812448, DOI: 10.1109/TCSVT.2019.2945202 * |
HE MIAO, ET AL.: "Face Recognition Method Combining Latent Low-Rank Representation with Sparse Representation", Journal of Yunnan Normal University (Natural Science Edition), vol. 37, no. 01, 15 January 2017 (2017-01-15), pages 43 - 50 *
ZHANG WENQING, ET AL.: "Face Recognition Based on the ROMP Algorithm", Journal of Shantou University (Natural Science Edition), vol. 30, no. 01, 15 February 2015 (2015-02-15), pages 48 - 51 *
FANG XIAOZHAO: "Research on Model Learning Based on Sparse and Low-Rank Constraints", China Doctoral Dissertations Full-text Database, Information Science and Technology (Monthly), no. 02, 15 February 2017 (2017-02-15), pages 138 - 129 *
YANG RONGGEN, ET AL.: "Face Recognition Method Based on Sparse Representation", Computer Science, vol. 37, no. 09, 15 September 2010 (2010-09-15), pages 267 - 269 *
HAN NA: "Research on Several Problems and Methods in Cross-Domain Recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology (Monthly), no. 3, 15 March 2020 (2020-03-15), pages 138 - 26 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554569A (en) * | 2021-08-04 | 2021-10-26 | 哈尔滨工业大学 | Face image restoration system based on double memory dictionaries |
CN113554569B (en) * | 2021-08-04 | 2022-03-08 | 哈尔滨工业大学 | Face image restoration system based on double memory dictionaries |
CN115906032A (en) * | 2023-02-20 | 2023-04-04 | 之江实验室 | Recognition model correction method and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||