CN113989901A - Face recognition method, face recognition device, client and storage medium - Google Patents

Face recognition method, face recognition device, client and storage medium

Info

Publication number
CN113989901A
CN113989901A
Authority
CN
China
Prior art keywords
face
image
feature
processing
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111336657.1A
Other languages
Chinese (zh)
Inventor
蔡南平
兰超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Welab Information Technology Shenzhen Ltd
Original Assignee
Welab Information Technology Shenzhen Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Welab Information Technology Shenzhen Ltd filed Critical Welab Information Technology Shenzhen Ltd
Priority to CN202111336657.1A priority Critical patent/CN113989901A/en
Publication of CN113989901A publication Critical patent/CN113989901A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of artificial intelligence and discloses a face recognition method comprising the following steps: inputting a first face image, obtained after image standardization processing of the face image to be registered, into a trained face coding model to perform coding processing, and storing the coded face feature map in a preset storage space of a client; inputting a second face image, obtained after image standardization processing of the face image to be verified, into a feature extraction branch of a trained face verification model to perform feature extraction processing to obtain a first feature; inputting the coded face feature map into the trained face verification model to perform decoding processing and feature extraction processing to obtain a second feature; and determining a face recognition result based on the similarity value of the first feature and the second feature. The invention also provides a face recognition device, a client and a storage medium. The invention improves the security of the face information.

Description

Face recognition method, face recognition device, client and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a face recognition method, a face recognition device, a client and a storage medium.
Background
With the development of artificial intelligence, face recognition is widely used for identity verification. An existing face recognition system is usually deployed on the server side: when a face is registered, a face image of the user is collected and stored in the server, and when face verification is needed, the face image to be verified is compared with the face image stored in the server. This approach makes the face information easy to leak, so its security is low. Therefore, a face recognition method is needed that ensures the security of face information.
Disclosure of Invention
In view of the above, there is a need to provide a face recognition method, aiming at improving the security of face information.
The face recognition method provided by the invention comprises the following steps:
performing image standardization processing on a face image to be registered to obtain a first face image, inputting the first face image into a trained face coding model to perform coding processing to obtain a coded face feature map, and storing the coded face feature map into a preset storage space of a client;
acquiring a face image to be verified, performing image standardization processing on the face image to be verified to obtain a second face image, and inputting the second face image into a feature extraction branch of a trained face verification model to perform feature extraction processing to obtain a first feature;
inputting the coded face feature map stored in the preset storage space into a decoding branch of the trained face verification model to perform decoding processing to obtain a decoded face feature map;
inputting the decoded face feature map into a feature extraction branch of the trained face verification model to perform feature extraction processing to obtain a second feature;
and calculating a similarity value of the first feature and the second feature, and determining a face recognition result based on the similarity value.
Optionally, the performing an image standardization process on the face image to be registered to obtain a first face image includes:
detecting position coordinates of key parts of the face in the face image to be registered, and performing face image extraction processing on the face image to be registered based on the position coordinates to obtain a face region image;
judging whether the face region image needs to be corrected or not, if so, correcting the face region image to obtain a face corrected image;
and performing data standardization processing on the pixel value of each pixel point in the face correction image to obtain a first face image.
Optionally, the determining whether the face region image needs to be corrected includes:
acquiring a predetermined standard distance value between key parts of the human face, calculating a distance value between the key parts of the human face in the face area image, and judging that the face area image needs to be corrected if the absolute value of the difference between the calculated distance value and the standard distance value is greater than a preset threshold value.
Optionally, the face coding model includes a plurality of convolution modules connected in series, each convolution module includes a plurality of convolution units, and each convolution unit includes a convolution layer, a normalization layer, and an activation layer.
Optionally, the decoding branch of the face verification model includes a plurality of deconvolution modules, each deconvolution module includes a deconvolution unit and a plurality of convolution units, and each deconvolution unit includes a deconvolution layer, a normalization layer, and an activation layer;
the feature extraction branch of the face verification model comprises a plurality of convolution modules and an embedding module, wherein the embedding module comprises a plurality of full-connection layers and a normalization layer.
Optionally, the training process of the face coding model and the face verification model includes:
extracting a sample set from a sample library, and executing image standardization processing on each sample in the sample set to obtain a standardized sample set;
inputting the standardized sample set into a feature extraction branch of the face verification model to perform feature extraction processing, so as to obtain a third feature corresponding to each sample of the standardized sample set;
sequentially inputting the standardized sample set into the face coding model and the face verification model to perform coding, decoding and feature extraction processing, so as to obtain a fourth feature of each sample in the standardized sample set;
and determining the structural parameters of the face coding model and the face verification model by minimizing the loss value between the third feature and the fourth feature to obtain a trained face coding model and a trained face verification model.
Optionally, the calculation formula of the loss value is:
Figure BDA0003348546820000021
where loss(q_i, p_i) is the loss value between the third and fourth features of the standardized sample set, p_i is the third feature of the i-th sample in the standardized sample set, q_i is the fourth feature of the i-th sample in the standardized sample set, and c is the total number of samples in the standardized sample set.
In order to solve the above problem, the present invention further provides a face recognition apparatus, including:
the system comprises a coding module, a client and a processing module, wherein the coding module is used for executing image standardization processing on a face image to be registered to obtain a first face image, inputting the first face image into a trained face coding model to execute coding processing to obtain a coded face feature map, and storing the coded face feature map into a preset storage space of the client;
the system comprises an acquisition module, a feature extraction module and a verification module, wherein the acquisition module is used for acquiring a face image to be verified, performing image standardization processing on the face image to be verified to obtain a second face image, and inputting the second face image into a feature extraction branch of a trained face verification model to perform feature extraction processing to obtain a first feature;
the decoding module is used for inputting the coded face feature map stored in the preset storage space into a decoding branch of the trained face verification model to execute decoding processing so as to obtain a decoded face feature map;
the extraction module is used for inputting the decoded face feature map into a feature extraction branch of the trained face verification model to perform feature extraction processing to obtain a second feature;
and the recognition module is used for calculating the similarity value of the first feature and the second feature and determining a face recognition result based on the similarity value.
In order to solve the above problem, the present invention further provides a client, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a face recognition program executable by the at least one processor, the face recognition program being executed by the at least one processor to enable the at least one processor to perform the face recognition method described above.
In order to solve the above problem, the present invention also provides a computer-readable storage medium having a face recognition program stored thereon, the face recognition program being executable by one or more processors to implement the above face recognition method.
Compared with the prior art, the present invention first performs image standardization processing on a face image to be registered to obtain a first face image, inputs the first face image into a trained face coding model to perform coding processing, and stores the coded face feature map in a preset storage space of a client; secondly, a second face image obtained by performing image standardization processing on the face image to be verified is input into a feature extraction branch of the trained face verification model to perform feature extraction processing, obtaining a first feature; then, the coded face feature map is input into the trained face verification model to perform decoding processing and feature extraction processing, obtaining a second feature; finally, a face recognition result is determined based on the similarity value of the first feature and the second feature. The present invention neither uploads the face image to be registered to a server nor stores it on the client; only the coded face feature map is stored, so that even if the coded face feature map leaks, whoever obtains it cannot decode and exploit it, which fully ensures the security of the face information. Moreover, with this scheme, the face does not need to be re-registered after the face verification model is upgraded. The invention therefore improves the security of the face information.
Drawings
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a convolution unit according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a face verification model according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a deconvolution module according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a deconvolution unit according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an embedded module according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a face recognition apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a client for implementing a face recognition method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The embodiments of the present application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that descriptions involving "first", "second", etc. in the present invention are for descriptive purposes only and should not be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided that a person skilled in the art can realize the combination; when technical solutions are contradictory or cannot be realized, such a combination should be considered nonexistent and outside the protection scope of the present invention.
The invention provides a face recognition method. Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention. The method may be performed by a client, which may be implemented by software and/or hardware.
In this embodiment, the face recognition method includes:
s1, performing image standardization processing on the face image to be registered to obtain a first face image, inputting the first face image into a trained face coding model to perform coding processing to obtain a coded face feature map, and storing the coded face feature map into a preset storage space of the client.
In this embodiment, the execution subject of the face recognition method is the client rather than the server. The face image to be registered therefore does not need to be uploaded to a server, which reduces the possibility of face image leakage. At the same time, the face image to be registered is not stored on the client either; only the coded face feature map is stored in the preset storage space of the client. As a result, even if the coded face feature map stored on the client leaks, whoever obtains it cannot decode and exploit it, which fully ensures the security of the face information.
The method for obtaining the first face image by executing image standardization processing on the face image to be registered comprises the following steps:
a11, detecting the position coordinates of key parts of the face in the face image to be registered, and executing face image extraction processing on the face image to be registered based on the position coordinates to obtain a face region image;
in this embodiment, adopt face image detection model to detect the position coordinate of face key position in waiting to register the face image, face image detection model can be the deep face model, face key position includes eyes, eyebrow, nose, ear, mouth, face, based on position coordinate can extract face region image.
A12, judging whether the face area image needs to be corrected or not, if so, performing correction processing on the face area image to obtain a face corrected image;
if the face of the image is not directly facing the lens, the image needs to be corrected.
The determining whether the face region image needs to be corrected includes:
acquiring a predetermined standard distance value between key parts of the human face, calculating a distance value between the key parts of the human face in the face area image, and judging that the face area image needs to be corrected if the absolute value of the difference between the calculated distance value and the standard distance value is greater than a preset threshold value.
For example, a preset standard distance between the right eye and the right ear is a, a calculated distance between the right eye and the right ear in the face area image is b, and if the absolute value of the difference between a and b is greater than a preset threshold, the face area image needs to be corrected.
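The distance check described above can be sketched as follows; the key-part names, the NumPy representation, and the helper name `needs_correction` are illustrative assumptions rather than part of the patent:

```python
import numpy as np

def needs_correction(keypoints, standard_dist, threshold):
    """Decide whether the face region image needs correction: measure the
    distance between two key parts (here the right eye and right ear) and
    compare the absolute difference from the predetermined standard
    distance value against a preset threshold."""
    eye = np.asarray(keypoints["right_eye"], dtype=float)
    ear = np.asarray(keypoints["right_ear"], dtype=float)
    measured = float(np.linalg.norm(eye - ear))  # computed distance b
    return abs(measured - standard_dist) > threshold
```

In terms of the example in the text, `standard_dist` plays the role of a and the measured eye-to-ear distance plays the role of b.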
In this embodiment, the correction processing procedure is as follows: and calculating a human face deflection angle according to the position coordinates of the key parts of the human face, and reversely rotating the face region image according to the calculated deflection angle to obtain a face correction image. The correction process is prior art and will not be described herein.
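The deflection-angle computation and reverse rotation can be sketched on key-point coordinates as follows; using the line through the eyes to define the deflection angle, and the specific function names, are assumptions for illustration only:

```python
import numpy as np

def deflection_angle(left_eye, right_eye):
    # Angle (in degrees) of the line joining the eyes relative to the
    # horizontal axis -- an assumed proxy for the face deflection angle.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))

def rotate_points(points, angle_deg, center):
    # Rotate 2-D points by -angle_deg about center, i.e. reverse-rotate
    # by the calculated deflection angle as the correction step describes.
    theta = np.radians(-angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(points, dtype=float) - center
    return pts @ rot.T + center
```

In practice the same rotation would be applied to the whole face region image (e.g. with an affine warp) rather than to points alone.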
And A13, performing data standardization processing on the pixel value of each pixel point in the face correction image to obtain a first face image.
In this embodiment, the data standardization processing includes subtracting the mean value from each pixel value and then dividing by the standard deviation. Subtracting the mean highlights the differences between pixels, and dividing by the standard deviation normalizes the data, which reduces the amount of computation and thereby speeds up face recognition.
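A minimal sketch of this data standardization step, assuming the mean and deviation are computed per image (the text does not specify whether per-image or dataset-wide statistics are used):

```python
import numpy as np

def standardize_pixels(image):
    """Subtract the mean pixel value, then divide by the standard
    deviation, yielding a zero-mean, unit-variance first face image."""
    image = np.asarray(image, dtype=float)
    std = image.std()
    return (image - image.mean()) / (std if std > 0 else 1.0)
```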
In this embodiment, the face coding model is used to perform coding processing on an input image, and includes a plurality of convolution modules connected in series, each convolution module including a plurality of convolution units.
Fig. 2 is a schematic structural diagram of a convolution unit according to an embodiment of the present invention. Each convolution unit comprises a convolution layer, a normalization layer and an activation layer. The convolution unit is used for performing convolution processing, normalization processing and activation function nonlinear processing on the input image to obtain an activation function nonlinear processing characteristic diagram.
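The convolution unit (convolution layer, normalization layer, activation layer) can be sketched in NumPy as follows; the single-channel valid convolution, per-map normalization, and ReLU activation are illustrative simplifications of the multi-channel layers a real model would use:

```python
import numpy as np

def conv2d(x, kernel):
    # Valid 2-D convolution over a single channel -- a minimal stand-in
    # for the convolution layer.
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def conv_unit(x, kernel, eps=1e-5):
    """Convolution unit sketch: convolution -> normalization ->
    activation, mirroring the three-layer structure in Fig. 2."""
    y = conv2d(x, kernel)
    y = (y - y.mean()) / np.sqrt(y.var() + eps)  # normalization layer
    return np.maximum(y, 0.0)                    # activation layer (ReLU)
```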
In this embodiment, the 112 × 112 first face image is used as the input of the trained face coding model, and the output of the trained face coding model is a 14 × 14 coded face feature map. The coded face feature map is stored in a preset storage space of the client for subsequent face verification.
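Persisting the coded face feature map into the client's preset storage space might look like the following sketch; the `.npy` file format, the file name, and the directory layout are assumptions, not specified by the patent:

```python
import os
import numpy as np

def store_feature_map(feature_map, storage_dir):
    # Save the 14 x 14 coded face feature map into the client's preset
    # storage space (modeled here as a local directory).
    path = os.path.join(storage_dir, "encoded_face.npy")
    np.save(path, np.asarray(feature_map))
    return path

def load_feature_map(path):
    # Retrieve the coded feature map later, at verification time.
    return np.load(path)
```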
S2, acquiring a face image to be verified, executing image standardization processing on the face image to be verified to obtain a second face image, inputting the second face image into a feature extraction branch of the trained face verification model to execute feature extraction processing to obtain a first feature.
In this embodiment, the process of performing image normalization on the face image to be verified is the same as the process of performing image normalization on the face image to be registered in step S1, and details are not repeated here.
Fig. 3 is a schematic structural diagram of a face verification model according to an embodiment of the present invention. The face verification model comprises a decoding branch and a feature extraction branch, wherein the decoding branch is used for decoding an input image, and the feature extraction branch is used for extracting features of the input image.
The decoding branch comprises a plurality of deconvolution modules, and the feature extraction branch comprises a plurality of convolution modules and an embedding module.
Fig. 4 is a schematic structural diagram of a deconvolution module according to an embodiment of the present invention. Each deconvolution module includes a deconvolution unit and a plurality of convolution units.
Fig. 5 is a schematic structural diagram of a deconvolution unit according to an embodiment of the present invention. Each deconvolution unit comprises a deconvolution layer, a normalization layer and an activation layer, and the deconvolution unit is used for performing deconvolution processing, normalization processing and activation function nonlinear processing on input to obtain an activation function nonlinear processing characteristic diagram.
Fig. 6 is a schematic structural diagram of an embedded module according to an embodiment of the present invention. The embedded module comprises a plurality of full connection layers and a normalization layer, wherein the full connection layers are used for integrating input features, and the normalization layer is used for normalizing the input.
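The embedding module's fully-connected layers followed by a normalization layer can be sketched as follows; the layer count, weight shapes, and the use of L2 normalization are illustrative assumptions:

```python
import numpy as np

def embedding_module(features, weights, biases):
    """Embedding module sketch: a stack of fully-connected layers that
    integrate the input features, followed by L2 normalization of the
    output vector."""
    x = np.asarray(features, dtype=float)
    for w, b in zip(weights, biases):
        x = x @ w + b                       # fully-connected layer
    norm = np.linalg.norm(x)
    return x / (norm if norm > 0 else 1.0)  # normalization layer
```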
In this embodiment, the second face image obtained after the face image to be verified is subjected to the normalization processing is directly input to the feature extraction branch of the trained face verification model to perform the feature extraction processing, and the decoding branch is not required to be input for decoding. And for the 112 × 112 second face image, after the feature extraction branch processing, 256-dimensional first features are obtained.
And S3, inputting the coded face feature map stored in the preset storage space into a decoding branch of the trained face verification model to execute decoding processing, and obtaining a decoded face feature map.
After the first feature corresponding to the face image to be verified is obtained, it needs to be compared with the feature vector corresponding to the coded face feature map stored at registration time, so as to verify whether the two images show the same person.
The stored coded face feature map must first be decoded by the decoding branch of the face verification model: the input of the decoding branch is the 14 × 14 coded face feature map, and the output of the decoding branch is a 112 × 112 decoded face feature map.
And S4, inputting the decoded face feature map into a feature extraction branch of the trained face verification model to execute feature extraction processing to obtain a second feature.
Inputting the 112 × 112 decoded face feature map into a feature extraction branch to perform feature extraction processing, and obtaining a 256-dimensional second feature.
S5, calculating a similarity value of the first feature and the second feature, and determining a face recognition result based on the similarity value.
In this embodiment, the similarity value may be a cosine similarity value.
The determining a face recognition result based on the similarity value includes:
if the similarity value is greater than a similarity threshold (e.g., 90%), the face recognition is successful; and if the similarity value is less than or equal to the similarity threshold value, the face recognition fails.
In this embodiment, the training of the face coding model and the face verification model may be performed at the server, and after the training is completed, the server issues the trained face coding model and the trained face verification model to the client.
The training process of the face coding model and the face verification model comprises the following steps:
b11, extracting a sample set from the sample library, and performing image standardization on each sample in the sample set to obtain a standardized sample set;
in this embodiment, a training mode of joint training of a face coding model and a face verification model is adopted.
The sample library is an image library, and each sample in the sample set extracted from the image library is a human face image.
B12, inputting the standardized sample set into a feature extraction branch of the face verification model to perform feature extraction processing, and obtaining a third feature corresponding to each sample of the standardized sample set;
the third characteristic is not subjected to encoding and decoding processing.
B13, sequentially inputting the standardized sample set into the face coding model and the face verification model to perform coding, decoding and feature extraction processing, so as to obtain a fourth feature of each sample in the standardized sample set;
the fourth feature is that the encoding and decoding processes are performed.
And B14, determining the structural parameters of the face coding model and the face verification model by minimizing the loss value between the third feature and the fourth feature, and obtaining the trained face coding model and the trained face verification model.
The calculation formula of the loss value is as follows:
Figure BDA0003348546820000081
where loss(q_i, p_i) is the loss value between the third and fourth features of the standardized sample set, p_i is the third feature of the i-th sample in the standardized sample set, q_i is the fourth feature of the i-th sample in the standardized sample set, and c is the total number of samples in the standardized sample set.
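The exact loss formula appears in the source only as an image placeholder, so the following is an assumed mean-squared-error form consistent with the surrounding description (a loss between the fourth features q_i and the third features p_i, averaged over the c samples); the actual patented formula may differ:

```python
import numpy as np

def joint_training_loss(q, p):
    """Assumed loss sketch: average over the c samples of the squared
    distance between the fourth feature q_i (after coding/decoding) and
    the third feature p_i (without coding/decoding)."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    c = q.shape[0]  # total number of samples
    return float(np.sum((q - p) ** 2) / c)
```

Minimizing this quantity drives the coded-then-decoded features toward the directly extracted features, which is the stated goal of the joint training.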
As the above embodiment shows, in the face recognition method provided by the present invention, a first face image obtained by performing image standardization processing on a face image to be registered is first input into a trained face coding model to perform coding processing, and the coded face feature map is stored in a preset storage space of a client; secondly, a second face image obtained by performing image standardization processing on the face image to be verified is input into a feature extraction branch of the trained face verification model to perform feature extraction processing, obtaining a first feature; then, the coded face feature map is input into the trained face verification model to perform decoding processing and feature extraction processing, obtaining a second feature; finally, a face recognition result is determined based on the similarity value of the first feature and the second feature. The present invention neither uploads the face image to be registered to a server nor stores it on the client; only the coded face feature map is stored, so that even if the coded face feature map leaks, whoever obtains it cannot decode and exploit it, which fully ensures the security of the face information. Moreover, with this scheme, the face does not need to be re-registered after the face verification model is upgraded. The invention therefore improves the security of the face information.
Fig. 7 is a schematic block diagram of a face recognition apparatus according to an embodiment of the present invention.
The face recognition apparatus 100 of the present invention may be installed in a client. According to the implemented functions, the face recognition apparatus 100 may include an encoding module 110, an acquisition module 120, a decoding module 130, an extraction module 140, and a recognition module 150. The modules of the present invention, which may also be referred to as units, refer to a series of computer program segments that can be executed by a processor of a client and that can perform a fixed function, and are stored in a memory of the client.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the encoding module 110 is configured to perform image standardization on a face image to be registered to obtain a first face image, input the first face image into a trained face encoding model to perform encoding processing to obtain an encoded face feature map, and store the encoded face feature map in a preset storage space of a client.
The method for obtaining the first face image by executing image standardization processing on the face image to be registered comprises the following steps:
A21, detecting position coordinates of key parts of the face in the face image to be registered, and performing face image extraction processing on the face image to be registered based on the position coordinates to obtain a face region image;
A22, determining whether the face region image needs to be corrected, and if so, performing correction processing on the face region image to obtain a face-corrected image;
the determining whether the face region image needs to be corrected includes:
acquiring a predetermined standard distance value between key parts of the face, calculating the distance value between the key parts of the face in the face region image, and determining that the face region image needs to be corrected if the absolute value of the difference between the calculated distance value and the standard distance value is greater than a preset threshold value;
A23, performing data standardization processing on the pixel value of each pixel point in the face-corrected image to obtain the first face image.
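A minimal sketch of steps A21 to A23 in Python (the landmark names, the crop margin, the correction routine, and the threshold value are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def correct_face(face):
    # Placeholder: a real implementation would rotate/scale the crop
    # to a canonical pose before normalization.
    return face

def standardize_face(image, landmarks, std_distance, threshold=5.0):
    """Sketch of steps A21-A23: crop, optionally correct, normalize."""
    # A21: extract a face region from the key-part position coordinates
    xs = [p[0] for p in landmarks.values()]
    ys = [p[1] for p in landmarks.values()]
    x0, x1 = max(min(xs) - 10, 0), min(max(xs) + 10, image.shape[1])
    y0, y1 = max(min(ys) - 10, 0), min(max(ys) + 10, image.shape[0])
    face = image[y0:y1, x0:x1]

    # A22: correct only if the distance between key parts deviates from
    # the predetermined standard distance by more than the threshold
    left = np.array(landmarks["left_eye"], dtype=float)
    right = np.array(landmarks["right_eye"], dtype=float)
    if abs(np.linalg.norm(right - left) - std_distance) > threshold:
        face = correct_face(face)

    # A23: data standardization of pixel values (zero mean, unit variance)
    face = face.astype(np.float32)
    return (face - face.mean()) / (face.std() + 1e-8)
```

The same routine would serve both the registration path (first face image) and the verification path (second face image), since the patent applies identical standardization in both.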
The acquisition module 120 is configured to acquire a face image to be verified, perform image standardization processing on the face image to be verified to obtain a second face image, and input the second face image into a feature extraction branch of a trained face verification model to perform feature extraction processing to obtain a first feature.
The decoding module 130 is configured to input the encoded face feature map stored in the preset storage space into a decoding branch of the trained face verification model to perform decoding processing, so as to obtain a decoded face feature map.
The extraction module 140 is configured to input the decoded face feature map into a feature extraction branch of the trained face verification model to perform feature extraction processing, so as to obtain a second feature.
The recognition module 150 is configured to calculate a similarity value between the first feature and the second feature, and determine a face recognition result based on the similarity value.
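As an illustration of the recognition step, the comparison could be sketched as follows (cosine similarity and the 0.6 threshold are assumptions for illustration; the patent does not fix the similarity measure or the decision threshold):

```python
import numpy as np

def recognize(first_feature, second_feature, threshold=0.6):
    """Return True if the two feature vectors likely belong to one person."""
    a = np.asarray(first_feature, dtype=np.float64)
    b = np.asarray(second_feature, dtype=np.float64)
    # Cosine similarity: 1.0 for identical directions, -1.0 for opposite
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```

Because the second feature is extracted from the decoded feature map by the same feature extraction branch as the first, the two vectors live in the same embedding space and can be compared directly.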
The training process of the face coding model and the face verification model comprises the following steps:
B21, extracting a sample set from a sample library, and performing image standardization processing on each sample in the sample set to obtain a standardized sample set;
B22, inputting the standardized sample set into a feature extraction branch of the face verification model to perform feature extraction processing, to obtain a third feature corresponding to each sample of the standardized sample set;
B23, sequentially inputting the standardized sample set into the face coding model and the face verification model to perform coding, decoding and feature extraction processing, to obtain a fourth feature of each sample in the standardized sample set;
B24, determining the structural parameters of the face coding model and the face verification model by minimizing the loss value between the third feature and the fourth feature, to obtain the trained face coding model and the trained face verification model.
The calculation formula of the loss value is as follows:
loss(q_i, p_i) = (1/c) Σ_{i=1}^{c} ||q_i - p_i||^2
wherein loss(q_i, p_i) is the loss value between the third and fourth features of the normalized sample set, p_i is the third feature of the i-th sample in the normalized sample set, q_i is the fourth feature of the i-th sample in the normalized sample set, and c is the total number of samples in the normalized sample set.
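A sketch of the loss computation, assuming a mean-squared-error form consistent with the variable definitions above (the patent's equation itself is published only as an image, so the exact functional form is an assumption):

```python
import numpy as np

def feature_loss(q, p):
    """Loss between fourth features q and third features p.

    q, p: arrays of shape (c, d) -- c samples, d-dimensional features.
    Assumes a mean-squared-error form; the patent renders the formula
    as an image, so this functional form is an assumption.
    """
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    c = q.shape[0]
    return float(np.sum((q - p) ** 2) / c)
```

Minimizing this loss drives the encode-decode-extract path (fourth feature) to reproduce the direct feature extraction (third feature), which is what allows the encoded feature map to stand in for the original face image at verification time.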
Fig. 8 is a schematic structural diagram of a client for implementing a face recognition method according to an embodiment of the present invention.
The client 1 is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions. The client 1 may be a computer, or a smart device such as a mobile phone or a tablet.
In this embodiment, the client 1 includes, but is not limited to, a memory 11, a processor 12, and a network interface 13 that are communicatively connected to each other through a system bus, wherein the memory 11 stores a face recognition program 10 executable by the processor 12. Fig. 8 only shows the client 1 with the components 11-13 and the face recognition program 10; those skilled in the art will understand that the structure shown in fig. 8 does not constitute a limitation of the client 1, which may comprise fewer or more components than shown, combine some components, or arrange the components differently.
The memory 11 includes an internal memory and at least one type of readable storage medium. The internal memory provides a cache for the operation of the client 1; the readable storage medium may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the readable storage medium may be an internal storage unit of the client 1, such as a hard disk of the client 1; in other embodiments, the readable storage medium may also be an external storage device of the client 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the client 1. In this embodiment, the readable storage medium of the memory 11 is generally used to store the operating system and the various application software installed in the client 1, for example, the code of the face recognition program 10 in an embodiment of the present invention. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 12 may, in some embodiments, be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 12 is generally used to control the overall operation of the client 1, such as performing control and processing related to data interaction or communication with other devices. In this embodiment, the processor 12 is configured to run the program code stored in the memory 11 or process data, for example, to run the face recognition program 10.
The network interface 13 may comprise a wireless network interface or a wired network interface, and the network interface 13 is used for establishing communication connection between the client 1 and other intelligent devices (not shown in the figure).
Optionally, the client 1 may further include a user interface. The user interface may include a Display and an input unit such as a Keyboard, and may further include a standard wired interface and a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is used to display information processed in the client 1 and to display a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The face recognition program 10 stored in the memory 11 of the client 1 is a combination of instructions, which when run in the processor 12, can implement the face recognition method described above.
Specifically, for the specific implementation method of the face recognition program 10 by the processor 12, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated here.
Further, if the modules/units integrated in the client 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The computer readable storage medium has stored thereon a face recognition program 10, and the face recognition program 10 is executable by one or more processors to implement the face recognition method described above.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the modules is only a logical functional division, and there may be other division manners in actual implementation.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated with each other by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention is described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope.

Claims (10)

1. A face recognition method is applied to a client side, and is characterized by comprising the following steps:
performing image standardization processing on a face image to be registered to obtain a first face image, inputting the first face image into a trained face coding model to perform coding processing to obtain a coded face feature map, and storing the coded face feature map into a preset storage space of the client;
acquiring a face image to be verified, performing image standardization processing on the face image to be verified to obtain a second face image, and inputting the second face image into a feature extraction branch of a trained face verification model to perform feature extraction processing to obtain a first feature;
inputting the coded face feature map stored in the preset storage space into a decoding branch of the trained face verification model to perform decoding processing to obtain a decoded face feature map;
inputting the decoded face feature map into a feature extraction branch of the trained face verification model to perform feature extraction processing to obtain a second feature;
and calculating a similarity value of the first feature and the second feature, and determining a face recognition result based on the similarity value.
2. The method for recognizing human face according to claim 1, wherein the performing image standardization processing on the human face image to be registered to obtain the first human face image comprises:
detecting position coordinates of key parts of the face in the face image to be registered, and performing face image extraction processing on the face image to be registered based on the position coordinates to obtain a face region image;
judging whether the face region image needs to be corrected or not, if so, correcting the face region image to obtain a face corrected image;
and performing data standardization processing on the pixel value of each pixel point in the face correction image to obtain a first face image.
3. The face recognition method of claim 2, wherein the determining whether the face region image needs to be corrected includes:
acquiring a predetermined standard distance value between key parts of the human face, calculating a distance value between the key parts of the human face in the face area image, and judging that the face area image needs to be corrected if the absolute value of the difference between the calculated distance value and the standard distance value is greater than a preset threshold value.
4. The method of claim 1, wherein the face coding model comprises a plurality of convolution modules connected in series, each convolution module comprising a plurality of convolution units, each convolution unit comprising a convolution layer, a normalization layer, and an activation layer.
5. The face recognition method of claim 1, wherein the decoding branch of the face verification model comprises a plurality of deconvolution modules, each deconvolution module comprises a deconvolution unit and a plurality of convolution units, each deconvolution unit comprises a deconvolution layer, a normalization layer and an activation layer;
the feature extraction branch of the face verification model comprises a plurality of convolution modules and an embedding module, wherein the embedding module comprises a plurality of full-connection layers and a normalization layer.
6. The method of claim 1, wherein the training process of the face coding model and the face verification model comprises:
extracting a sample set from a sample library, and executing image standardization processing on each sample in the sample set to obtain a standardized sample set;
inputting the standardized sample set into a feature extraction branch of the face verification model to perform feature extraction processing, so as to obtain a third feature corresponding to each sample of the standardized sample set;
sequentially inputting the standardized sample set into the face coding model and the face verification model to perform coding, decoding and feature extraction processing, so as to obtain a fourth feature of each sample in the standardized sample set;
and determining the structural parameters of the face coding model and the face verification model by minimizing the loss value between the third feature and the fourth feature to obtain a trained face coding model and a trained face verification model.
7. The face recognition method of claim 6, wherein the loss value is calculated by the formula:
loss(q_i, p_i) = (1/c) Σ_{i=1}^{c} ||q_i - p_i||^2
wherein loss(q_i, p_i) is the loss value between the third and fourth features of the normalized sample set, p_i is the third feature of the i-th sample in the normalized sample set, q_i is the fourth feature of the i-th sample in the normalized sample set, and c is the total number of samples in the normalized sample set.
8. An apparatus for face recognition, the apparatus comprising:
the system comprises a coding module, a client and a processing module, wherein the coding module is used for executing image standardization processing on a face image to be registered to obtain a first face image, inputting the first face image into a trained face coding model to execute coding processing to obtain a coded face feature map, and storing the coded face feature map into a preset storage space of the client;
the system comprises an acquisition module, a feature extraction module and a verification module, wherein the acquisition module is used for acquiring a face image to be verified, performing image standardization processing on the face image to be verified to obtain a second face image, and inputting the second face image into a feature extraction branch of a trained face verification model to perform feature extraction processing to obtain a first feature;
the decoding module is used for inputting the coded face feature map stored in the preset storage space into a decoding branch of the trained face verification model to execute decoding processing so as to obtain a decoded face feature map;
the extraction module is used for inputting the decoded face feature map into a feature extraction branch of the trained face verification model to perform feature extraction processing to obtain a second feature;
and the recognition module is used for calculating the similarity value of the first feature and the second feature and determining a face recognition result based on the similarity value.
9. A client, the client comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein,
the memory stores a face recognition program executable by the at least one processor to enable the at least one processor to perform the face recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a face recognition program executable by one or more processors to implement the face recognition method of any one of claims 1 to 7.
CN202111336657.1A 2021-11-11 2021-11-11 Face recognition method, face recognition device, client and storage medium Pending CN113989901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111336657.1A CN113989901A (en) 2021-11-11 2021-11-11 Face recognition method, face recognition device, client and storage medium

Publications (1)

Publication Number Publication Date
CN113989901A true CN113989901A (en) 2022-01-28

Family

ID=79748112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111336657.1A Pending CN113989901A (en) 2021-11-11 2021-11-11 Face recognition method, face recognition device, client and storage medium

Country Status (1)

Country Link
CN (1) CN113989901A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294682A (en) * 2022-10-09 2022-11-04 深圳壹家智能锁有限公司 Data management method, device and equipment for intelligent door lock and storage medium
CN115294682B (en) * 2022-10-09 2022-12-06 深圳壹家智能锁有限公司 Data management method, device and equipment for intelligent door lock and storage medium

Similar Documents

Publication Publication Date Title
CN112102221A (en) 3D UNet network model construction method and device for detecting tumor and storage medium
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
WO2019062080A1 (en) Identity recognition method, electronic device, and computer readable storage medium
CN112102402B (en) Flash light spot position identification method and device, electronic equipment and storage medium
CN112052850A (en) License plate recognition method and device, electronic equipment and storage medium
CN111932562B (en) Image identification method and device based on CT sequence, electronic equipment and medium
CN112541443B (en) Invoice information extraction method, invoice information extraction device, computer equipment and storage medium
CN112396005A (en) Biological characteristic image recognition method and device, electronic equipment and readable storage medium
CN111860377A (en) Live broadcast method and device based on artificial intelligence, electronic equipment and storage medium
CN113705462A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN113887408B (en) Method, device, equipment and storage medium for detecting activated face video
CN111611988A (en) Picture verification code identification method and device, electronic equipment and computer readable medium
CN112383554B (en) Interface flow abnormity detection method and device, terminal equipment and storage medium
CN112668575A (en) Key information extraction method and device, electronic equipment and storage medium
CN110956149A (en) Pet identity verification method, device and equipment and computer readable storage medium
CN114049568A (en) Object shape change detection method, device, equipment and medium based on image comparison
CN113705469A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN116311370A (en) Multi-angle feature-based cow face recognition method and related equipment thereof
CN111639360A (en) Intelligent data desensitization method and device, computer equipment and storage medium
CN113989901A (en) Face recognition method, face recognition device, client and storage medium
CN112883346A (en) Safety identity authentication method, device, equipment and medium based on composite data
CN115424335B (en) Living body recognition model training method, living body recognition method and related equipment
CN113850260B (en) Key information extraction method and device, electronic equipment and readable storage medium
CN113705455B (en) Identity verification method and device, electronic equipment and readable storage medium
CN115798004A (en) Face card punching method and device based on local area, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination