CN111680595A - Face recognition method and device and electronic equipment - Google Patents
- Publication number: CN111680595A
- Application number: CN202010474940.XA
- Authority
- CN
- China
- Prior art keywords
- convolution
- convolutional
- face recognition
- layers
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention relates to the field of face recognition technology, and in particular to a face recognition method, a face recognition apparatus, and an electronic device. The method comprises the following steps: establishing a face recognition model that includes a preset weighted convolution structure; inputting data to be trained into the face recognition model for training to obtain feature values of the training data, and storing the feature values in a database; inputting a collected face image into the face recognition model to obtain features of the collected face image; and calculating, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database, and recognizing the collected face image according to the similarity. The face recognition method and apparatus provided by the embodiments of the present invention perform face recognition with a face recognition model containing a preset weighted convolution structure, so that model accuracy is significantly improved at the cost of only a small increase in computation.
Description
Technical Field
The present invention relates to the field of face recognition technologies, and in particular, to a face recognition method, a face recognition device, and an electronic device.
Background
Face recognition is being applied in more and more fields; it not only makes personal authentication more convenient but also improves its security.
Currently common face recognition methods mainly include geometric-feature-based, model-based, statistics-based, and convolutional-neural-network-based methods. Because of the characteristics of convolutional neural networks, most existing face recognition methods use them for recognition, which improves accuracy; however, the large amount of data easily introduces interference, and the heavy computation occupies considerable resources. A face recognition method with high accuracy and a small computational load is therefore urgently needed.
Disclosure of Invention
In order to solve the above technical problems, embodiments of the present invention provide a face recognition method, a face recognition apparatus, and an electronic device, so as to address the low model accuracy of face recognition in the related art.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions:
in a first aspect, an embodiment of the present invention provides a face recognition method, where the method includes:
establishing a face recognition model comprising a preset weighted convolution structure;
inputting data to be trained into the face recognition model for training to obtain feature values of the training data, and storing the feature values in a database;
inputting the collected face image into the face recognition model to obtain features of the collected face image;
and calculating, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database, and recognizing the collected face image according to the similarity.
In some embodiments, the preset weighted convolution structure includes at least two convolutional layers that operate in parallel, and the output of the parallel operation is the element-wise addition of the at least two parallel convolution feature maps.
In some embodiments, the establishing the face recognition model including the preset weighted convolution structure includes:
and replacing the common convolutional layer or the depth separable convolutional layer in the convolutional neural network model by the preset weighted convolutional structure.
In some embodiments, the preset weighted convolution structure is composed of two parallel convolutional layers, and their output is the element-wise addition of the two parallel convolution feature maps.
In some embodiments, the convolution kernels of the two parallel convolution layers are 3 x 3 and 5 x 5, respectively.
In some embodiments, when the convolutional neural network model includes a plurality of common convolutional layers or a plurality of depthwise separable convolutional layers,
the replacing of a common convolutional layer or a depthwise separable convolutional layer in the convolutional neural network model with the preset weighted convolution structure comprises:
replacing a common convolutional layer or a depthwise separable convolutional layer located toward the rear of the network in the convolutional neural network model with the preset weighted convolution structure; or
replacing all common convolutional layers or all depthwise separable convolutional layers in the convolutional neural network model with the preset weighted convolution structure.
In a second aspect, an embodiment of the present invention provides a face recognition apparatus, where the apparatus includes:
the model establishing module is used for establishing a face recognition model comprising a preset weighted convolution structure;
the data training module is used for inputting data to be trained into the face recognition model for training to obtain feature values of the training data, and storing the feature values in a database;
the feature recognition module is used for inputting the collected face image into the face recognition model to obtain features of the collected face image;
and the face recognition module is used for calculating, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database, and recognizing the collected face image according to the similarity.
In some embodiments, the preset weighted convolution structure includes at least two convolutional layers that operate in parallel, and the output of the parallel operation is the element-wise addition of the at least two parallel convolution feature maps.
In some embodiments, the model establishing module is specifically configured to replace a common convolutional layer or a depthwise separable convolutional layer in a convolutional neural network model with the preset weighted convolution structure.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described face recognition method.
The beneficial effects of the embodiments of the invention are as follows: a face recognition model including a preset weighted convolution structure is established; data to be trained are input into the face recognition model for training to obtain feature values of the training data, and the feature values are stored in a database; the collected face image is input into the face recognition model to obtain its features; and, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database is calculated, and the collected face image is recognized according to the similarity. The face recognition method and apparatus provided by the embodiments of the present invention perform face recognition with a face recognition model containing a preset weighted convolution structure, so that model accuracy is significantly improved at the cost of only a small increase in computation.
Drawings
One or more embodiments are illustrated by the corresponding drawings, which are not limiting of the embodiments. Elements with the same reference numerals in the drawings represent similar elements and, unless otherwise noted, the figures are not drawn to scale.
Fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the preset weighted convolution structure according to the embodiment of the present invention;
fig. 3 is a schematic diagram of the preset weighted convolution structure applied based on MobileFaceNet according to the embodiment of the present invention;
fig. 4 is a schematic diagram of a residual error structure of the MobileFaceNet according to the embodiment of the present invention;
fig. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To facilitate an understanding of the invention, it is described in more detail below with reference to the accompanying drawings and specific embodiments. It will be understood that when an element is described as being "secured to" another element, it can be directly on the other element, or intervening elements may also be present. When an element is described as being "connected" to another element, it can be directly connected to the other element, or intervening elements may be present. The terms "vertical", "horizontal", "left", "right", and the like are used herein for descriptive purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The embodiments of the present invention provide a face recognition method and apparatus that design a novel convolutional neural network structure, namely a convolutional neural network containing a preset weighted convolution structure, to perform face recognition. The preset weighted convolution structure superimposes convolution kernels of different sizes to form an effective weighting of the kernels. The structure is very flexible and easy to add to current mainstream models; if training continues from an already-trained model, the network converges faster and better.
Referring to fig. 1, fig. 1 is a flowchart of a face recognition method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
s101, establishing a face recognition model comprising a preset weighted convolution structure.
In this embodiment, the preset weighted convolution structure is a scheme for superimposing different convolution results. It includes at least two convolutional layers that operate in parallel; the output of the parallel operation is the element-wise addition of the at least two parallel convolution feature maps. The parallel convolutional layers may use convolution kernels of different sizes, for example at least two of 1 × 1, 3 × 3, 5 × 5, 7 × 7, and so on. Each convolutional layer performs its convolution operation to obtain its own feature map, and the feature maps are then added element-wise.
In some embodiments, the preset weighted convolution structure is composed of two parallel convolutional layers, and their output is the element-wise addition of the two parallel convolution feature maps. For example, the preset weighted convolution may be composed of a 3 × 3 depthwise separable convolution kernel and a 5 × 5 depthwise separable convolution kernel; after the convolutions are performed in parallel, the results are superimposed. For instance, a 3 × 3 convolution with 64 output channels produces 64 feature maps, a 5 × 5 convolution with 64 output channels also produces 64 feature maps, the feature maps are added, and 64 feature maps are finally output.
It should be noted that the preset weighted convolution structure may also be composed of three or more parallel convolutional layers, with the feature maps corresponding to the outputs of all parallel convolutional layers superimposed at the end. How many parallel convolutional layers to use can be determined according to the size of the model being built and the accuracy it must achieve.
In this embodiment, the weighted convolution structure differs from the related-art approach of concatenating convolution results: it adds the feature maps element-wise. Thus, for the same receptive field, when some part of the information is retained in a feature map, the information of the ring of pixels around that part is also retained, which matters especially when the surrounding information is important. For example, when the preset weighted convolution structure is composed of a 3 × 3 depthwise separable convolution kernel and a 5 × 5 depthwise separable convolution kernel, the 3 × 3 and 5 × 5 convolution results are superimposed, so for the same receptive field the feature map retains both the information of the 3 × 3 region and that of the ring of pixels around it. Meanwhile, because the 5 × 5 kernel also covers the region covered by the 3 × 3 kernel, the 3 × 3 kernel overlaps the central region of the 5 × 5 kernel; the 3 × 3 region is therefore effectively weighted more heavily, while the information at its edge is considered but not given a dominant position. In face recognition, this preset weighted convolution structure realizes the principle of distinguishing important features: the edge of the region where an important feature is located has some importance of its own, and should be given a weaker weight while still being considered by the model.
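The element-wise addition of parallel convolution branches can be illustrated with a minimal, self-contained sketch (plain Python, single channel, stride 1, zero "same" padding; the function names are illustrative and not from the patent):

```python
def conv2d_same(image, kernel):
    """Single-channel 2-D cross-correlation (as used in CNNs) with zero
    padding so the output keeps the input's spatial size (stride 1)."""
    k = len(kernel)
    pad = k // 2
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(k):
                for dj in range(k):
                    y, x = i + di - pad, j + dj - pad
                    if 0 <= y < h and 0 <= x < w:
                        s += image[y][x] * kernel[di][dj]
            out[i][j] = s
    return out

def weighted_conv(image, kernel3, kernel5):
    """Parallel 3x3 and 5x5 branches whose outputs are added
    element-wise (not concatenated), so the channel count is unchanged."""
    a = conv2d_same(image, kernel3)
    b = conv2d_same(image, kernel5)
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]
```

Because both branches use "same" padding, their feature maps have identical shapes and can be added position by position, exactly the superposition described above.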
The establishing of the face recognition model comprising the preset weighted convolution structure includes: replacing a common convolutional layer or a depthwise separable convolutional layer in a convolutional neural network model with the preset weighted convolution structure.
The preset weighted convolution structure can be set in a current mainstream convolutional neural network model, for example MobileNet, MobileFaceNet, or ResNet. When the preset weighted convolution structure is added to such a model, it specifically replaces a common convolutional layer or a depthwise separable convolutional layer in the model.
If the convolutional neural network model includes a plurality of common convolutional layers and/or a plurality of depthwise separable convolutional layers, the replacement operation may replace only some of them, for example one or two. In some embodiments, the preset weighted convolution structure replaces a common convolutional layer or a depthwise separable convolutional layer located toward the rear of the network. This improves model accuracy with only a small increase in computation, because replacing convolutions near the rear of the network has little influence on the time and space complexity of the algorithm.
Of course, if only model accuracy is considered, the preset weighted convolution structure may replace all common convolutional layers or all depthwise separable convolutional layers in the convolutional neural network model for maximum accuracy.
After the preset weighted convolution structure replaces a common convolutional layer or a depthwise separable convolutional layer in the convolutional neural network model, its input is the feature map passed down from the previous layer; within the structure, that feature map is processed by the different convolutional layers in parallel, and the parallel results are added element-wise before being passed to the next layer.
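As a hypothetical sketch of the replacement step, a model can be viewed as an ordered list of layers, and only the trailing plain or depthwise-separable convolutions swapped out (the layer-type strings and helper name are illustrative assumptions, not the patent's API):

```python
def replace_tail_convs(layers, n_replace):
    """Illustrative sketch: replace the last n_replace plain ('conv') or
    depthwise separable ('dw_conv') layers of a model (represented here
    as a list of layer-type strings) with the weighted structure."""
    # Indices of all layers eligible for replacement.
    idx = [i for i, t in enumerate(layers) if t in ("conv", "dw_conv")]
    # Replace only the rearmost n_replace of them, matching the
    # "rear of the network" strategy described above.
    for i in (idx[-n_replace:] if n_replace > 0 else []):
        layers[i] = "weighted_conv"
    return layers
```

Setting `n_replace` to the total number of eligible layers corresponds to the all-layers variant mentioned above; a small value keeps the added computation low.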
S102, inputting data to be trained into the face recognition model for training to obtain feature values of the training data, and storing the feature values in a database.
After the face recognition model is obtained, the data to be trained are input into it for training, yielding the feature values of the training data. The training data may be face image data obtained from a face information base; the corresponding face image features are obtained through the face recognition model, and the obtained features are stored in a database for later comparison with the face image to be recognized.
S103, inputting the collected face image into the face recognition model to obtain the characteristics of the collected face image.
The collected face image is the face image that currently needs to be recognized; the process of obtaining its features is the same as in step S102.
For example, the network structure of the face recognition model includes a preset weighted convolution structure layer, a residual structure body, and a fully connected layer, with PReLU selected as the activation function of the neural network. The data to be trained are passed through the preset weighted convolution structure layer, whose output size follows the feature-map size calculation formula, to obtain the feature map of the training data; the feature map is then mapped into a multi-dimensional vector by a global convolution and sent directly into a fully connected layer of the corresponding dimensionality, thereby obtaining the feature value of the training data. In other embodiments, a pooling layer using max pooling may produce the maximal feature map, which is activated by the PReLU function; the activated feature maps are then connected through a fully connected layer to obtain the feature value of the training data.
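The feature-map size calculation mentioned above presumably follows the standard convolution output-size formula; the patent does not spell it out, so the following is a sketch under that assumption:

```python
def conv_output_size(n, k, stride=1, pad=0):
    """Standard output-size formula for one spatial dimension of a
    convolution layer: out = floor((n + 2*pad - k) / stride) + 1."""
    return (n + 2 * pad - k) // stride + 1
```

For instance, a stride-2, pad-1, 3 × 3 convolution halves a 112-pixel side to 56, and a 7 × 7 global convolution maps a 7 × 7 feature map to a single value, consistent with mapping the feature map into a vector before the fully connected layer.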
S104, calculating, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database, and recognizing the collected face image according to the similarity.
The similarity between the features of the collected face image and each image feature value in the database can be calculated, for example, by cosine similarity; the feature value with the greatest similarity exceeding a preset recognition threshold is selected as the recognition result for the face image to be recognized.
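A minimal sketch of this matching step (plain Python; the threshold value and function names are illustrative assumptions, and feature vectors are assumed nonzero):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (assumed nonzero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recognize(query, database, threshold=0.6):
    """Return the database identity whose stored feature value is most
    similar to the query feature, provided the similarity exceeds the
    recognition threshold; otherwise None (unrecognized)."""
    best_id, best_sim = None, threshold
    for person_id, feat in database.items():
        sim = cosine_similarity(query, feat)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```

Taking the maximum over the database while requiring the threshold to be exceeded matches the selection rule described above.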
The face recognition method provided by the embodiments of the present invention can be applied in face unlocking systems, such as an access control system based on face recognition or a desktop roll-call system based on face recognition, without limitation here.
The face recognition method provided by the embodiments of the present invention adds the preset weighted convolution structure to a mainstream convolutional neural network model. The structure runs convolution kernels of different sizes in parallel and, by superimposing them, forms an effective weighting of the kernels. It is flexible to apply and easy to add to different network models; it can be combined into existing mainstream models, and transfer learning can be performed using the parameters of a pre-trained model that has no parallel structure. Moreover, for face recognition, model accuracy is significantly improved at the cost of only a small increase in computation.
The face recognition method is described below by taking a specific application example.
For example, the preset weighted convolution structure can be applied to the existing MobileFaceNet. As shown in fig. 2, the structure contains two branches: the left branch is a 3 × 3 convolution kernel and the right branch a 5 × 5 convolution kernel; after the convolutions are performed in parallel, the feature maps are added element-wise. In this example, the preset weighted convolution structure is added to the residual module and the separable convolution module near the tail of MobileFaceNet; the overall structure of the residual layer after the addition is shown in fig. 3. Fig. 4 shows the original residual structure of MobileFaceNet; in this embodiment, the middle convolution is replaced with the preset weighted convolution structure. Adding the preset weighted convolution structure to a known model increases the amount of computation only slightly but yields a notable improvement in results. Meanwhile, since the parallel branch does not increase the number of feature maps, the structure remains compatible with the original one, and accuracy can be improved by training directly from the pre-trained model.
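One plausible reason training can continue directly from a pre-trained model, not stated explicitly in the patent, is that if the newly added parallel branch starts with zero weights, the combined output initially equals the original branch's output, so fine-tuning begins from the pre-trained behavior. The following 1-D sketch illustrates this under that assumption (an illustration only, not the patent's implementation):

```python
def conv1d_same(x, k):
    """1-D stride-1 cross-correlation with zero 'same' padding."""
    pad = len(k) // 2
    return [sum(k[j] * (x[i + j - pad] if 0 <= i + j - pad < len(x) else 0.0)
                for j in range(len(k)))
            for i in range(len(x))]

def weighted_branch(x, k3, k5):
    """Element-wise sum of a 3-tap and a 5-tap parallel branch."""
    a = conv1d_same(x, k3)
    b = conv1d_same(x, k5)
    return [u + v for u, v in zip(a, b)]

# If the added 5-tap branch is all zeros, the weighted output is
# identical to the pre-trained 3-tap branch's output.
```

Because the addition leaves the number of feature maps unchanged, the rest of the network sees the same tensor shapes as before the replacement, which is what makes reuse of pre-trained parameters possible.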
Referring to fig. 5, fig. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention, and as shown in fig. 5, the apparatus 10 includes: a model building module 11, a data training module 12, a feature recognition module 13 and a face recognition module 14.
The model establishing module 11 is configured to establish a face recognition model including a preset weighted convolution structure; the data training module 12 is configured to input data to be trained into the face recognition model for training to obtain feature values of the training data and store them in a database; the feature recognition module 13 is configured to input the collected face image into the face recognition model to obtain its features; and the face recognition module 14 is configured to calculate, based on the feature values stored in the database, the similarity between the features of the collected face image and each image feature value in the database, and to recognize the collected face image according to the similarity.
Wherein the preset weighted convolution structure includes at least two convolutional layers that operate in parallel, and the output of the parallel operation is the element-wise addition of the at least two parallel convolution feature maps.
Wherein the model establishing module is specifically configured to replace a common convolutional layer or a depthwise separable convolutional layer in a convolutional neural network model with the preset weighted convolution structure.
In some embodiments, the preset weighted convolution structure is composed of two parallel convolutional layers, and their output is the element-wise addition of the two parallel convolution feature maps.
In some embodiments, the convolution kernels of the two parallel convolution layers are 3 x 3 and 5 x 5, respectively.
In some embodiments, when the convolutional neural network model includes a plurality of common convolutional layers or a plurality of depthwise separable convolutional layers,
the replacing of a common convolutional layer or a depthwise separable convolutional layer in the convolutional neural network model with the preset weighted convolution structure comprises:
replacing a common convolutional layer or a depthwise separable convolutional layer located toward the rear of the network in the convolutional neural network model with the preset weighted convolution structure; or
replacing all common convolutional layers or all depthwise separable convolutional layers in the convolutional neural network model with the preset weighted convolution structure.
It should be noted that the face recognition apparatus can execute the face recognition method provided by the embodiments of the present invention and has the functional modules and beneficial effects corresponding to that method. For technical details not described in this apparatus embodiment, reference may be made to the face recognition method provided by the embodiments of the present invention.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 6, the electronic device 20 includes one or more processors 21 and a memory 22. In fig. 6, one processor 21 is taken as an example.
The processor 21 and the memory 22 may be connected by a bus or other means, such as the bus connection in fig. 6.
The memory 22, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the face recognition method in the embodiments of the present invention. By running the non-volatile software programs, instructions, and modules stored in the memory 22, the processor 21 executes the functional applications and data processing of the face recognition apparatus, that is, implements the face recognition method of the above method embodiment and the modules or units of the above apparatus embodiment.
The memory 22 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 22 may optionally include memory located remotely from the processor 21, and these remote memories may be connected to the processor 21 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 22 and, when executed by the one or more processors 21, perform the face recognition method of any of the method embodiments described above.
Embodiments of the present invention further provide a non-volatile computer storage medium storing computer-executable instructions that, when executed by one or more processors (for example, the processor 21 in fig. 6), cause the one or more processors to perform the face recognition method of any of the above method embodiments.
Embodiments of the present invention also provide a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions that, when executed by an electronic device, cause the electronic device to execute any one of the face recognition methods described above.
The above-described embodiments of the apparatus or device are merely illustrative: the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly also by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The software product may be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Within the idea of the invention, technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist that are not described in detail for the sake of brevity. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. A face recognition method, comprising:
establishing a face recognition model comprising a preset weighted convolution structure;
inputting data to be trained into the face recognition model for training to obtain feature values of the data to be trained, and storing the feature values in a database;
inputting a collected face image into the face recognition model to obtain features of the collected face image;
and calculating, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database, and identifying the collected face image according to the similarity.
2. The method of claim 1, wherein the preset weighted convolution structure comprises at least two convolutional layers operating in parallel, and the output of the structure is the element-wise addition of the feature maps produced by the at least two parallel convolutional layers.
3. The method of claim 2, wherein the establishing a face recognition model comprising a preset weighted convolution structure comprises:
replacing the standard convolutional layer or the depthwise separable convolutional layer in a convolutional neural network model with the preset weighted convolution structure.
4. The method of claim 2, wherein the preset weighted convolution structure consists of two parallel convolutional layers, and the output of the structure is the element-wise addition of the two layers' feature maps.
5. The method of claim 4, wherein the convolution kernels of the two parallel convolutional layers are 3×3 and 5×5, respectively.
6. The method of claim 3, wherein when the convolutional neural network model comprises a plurality of standard convolutional layers or a plurality of depthwise separable convolutional layers,
the replacing of the standard convolutional layers or the depthwise separable convolutional layers in the convolutional neural network model with the preset weighted convolution structure comprises:
replacing the standard convolutional layers or depthwise separable convolutional layers located in the later part of the network with the preset weighted convolution structure; or
replacing all standard convolutional layers or all depthwise separable convolutional layers in the convolutional neural network model with the preset weighted convolution structure.
7. An apparatus for face recognition, the apparatus comprising:
the model establishing module is used for establishing a face recognition model comprising a preset weighted convolution structure;
the data training module is used for inputting data to be trained into the face recognition model for training to obtain feature values of the data to be trained, and storing the feature values in a database;
the feature identification module is used for inputting a collected face image into the face recognition model to obtain features of the collected face image;
and the face recognition module is used for calculating, based on the feature values stored in the database, the similarity between the features of the collected face image and the feature value of each image in the database, and recognizing the collected face image according to the similarity.
8. The apparatus of claim 7, wherein the preset weighted convolution structure comprises at least two convolutional layers operating in parallel, and the output of the structure is the element-wise addition of the feature maps produced by the at least two parallel convolutional layers.
9. The apparatus of claim 8, wherein the model establishing module is specifically configured to:
replace the standard convolutional layer or the depthwise separable convolutional layer in a convolutional neural network model with the preset weighted convolution structure.
10. An electronic device, comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any of claims 1-6.
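For illustration only, the recognition step of claim 1 (comparing the collected image's features against the feature values stored in the database and deciding by similarity) might be sketched as follows. Cosine similarity and the threshold value are assumptions, since the claims do not fix a particular similarity metric:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_feat, database, threshold=0.5):
    # Compare a query feature against every stored feature value and
    # return (best identity, similarity), or (None, similarity) when
    # the best match falls below the threshold (an assumed cutoff).
    best_id, best_sim = None, -1.0
    for identity, feat in database.items():
        sim = cosine_similarity(query_feat, feat)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```

In practice the stored feature values would come from the trained model's outputs on the training data, as described in the claim.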
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010474940.XA CN111680595A (en) | 2020-05-29 | 2020-05-29 | Face recognition method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111680595A true CN111680595A (en) | 2020-09-18 |
Family
ID=72434551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010474940.XA Pending CN111680595A (en) | 2020-05-29 | 2020-05-29 | Face recognition method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111680595A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011377A (en) * | 2021-04-06 | 2021-06-22 | 新疆爱华盈通信息技术有限公司 | Pedestrian attribute identification method and device, electronic equipment and storage medium |
CN113034769A (en) * | 2021-03-03 | 2021-06-25 | 唐山市就业服务中心 | Access control system and method based on face recognition |
CN113128428A (en) * | 2021-04-24 | 2021-07-16 | 新疆爱华盈通信息技术有限公司 | Depth map prediction-based in vivo detection method and related equipment |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426860A (en) * | 2015-12-01 | 2016-03-23 | 北京天诚盛业科技有限公司 | Human face identification method and apparatus |
CN106815566A (en) * | 2016-12-29 | 2017-06-09 | 天津中科智能识别产业技术研究院有限公司 | A kind of face retrieval method based on multitask convolutional neural networks |
CN106997475A (en) * | 2017-02-24 | 2017-08-01 | 中国科学院合肥物质科学研究院 | A kind of insect image-recognizing method based on parallel-convolution neutral net |
CN107766850A (en) * | 2017-11-30 | 2018-03-06 | 电子科技大学 | Based on the face identification method for combining face character information |
CN107886064A (en) * | 2017-11-06 | 2018-04-06 | 安徽大学 | A kind of method that recognition of face scene based on convolutional neural networks adapts to |
WO2018139847A1 (en) * | 2017-01-25 | 2018-08-02 | 국립과학수사연구원 | Personal identification method through facial comparison |
CN108985236A (en) * | 2018-07-20 | 2018-12-11 | 南京开为网络科技有限公司 | A kind of face identification method separating convolution model based on depthization |
CN109389030A (en) * | 2018-08-23 | 2019-02-26 | 平安科技(深圳)有限公司 | Facial feature points detection method, apparatus, computer equipment and storage medium |
CN110070072A (en) * | 2019-05-05 | 2019-07-30 | 厦门美图之家科技有限公司 | A method of generating object detection model |
CN110175506A (en) * | 2019-04-08 | 2019-08-27 | 复旦大学 | Pedestrian based on parallel dimensionality reduction convolutional neural networks recognition methods and device again |
CN110503738A (en) * | 2019-08-28 | 2019-11-26 | 北京中电普华信息技术有限公司 | A kind of multi-functional attendance all-in-one machine and multi-functional attendance checking system |
CN110532900A (en) * | 2019-08-09 | 2019-12-03 | 西安电子科技大学 | Facial expression recognizing method based on U-Net and LS-CNN |
CN110717394A (en) * | 2019-09-06 | 2020-01-21 | 北京三快在线科技有限公司 | Training method and device of face recognition model, electronic equipment and storage medium |
CN110781784A (en) * | 2019-10-18 | 2020-02-11 | 高新兴科技集团股份有限公司 | Face recognition method, device and equipment based on double-path attention mechanism |
CN110969089A (en) * | 2019-11-01 | 2020-04-07 | 北京交通大学 | Lightweight face recognition system and recognition method under noise environment |
CN111062324A (en) * | 2019-12-17 | 2020-04-24 | 上海眼控科技股份有限公司 | Face detection method and device, computer equipment and storage medium |
CN111178187A (en) * | 2019-12-17 | 2020-05-19 | 武汉迈集信息科技有限公司 | Face recognition method and device based on convolutional neural network |
- 2020-05-29 CN CN202010474940.XA patent/CN111680595A/en active Pending
Non-Patent Citations (1)
Title |
---|
CHRISTIAN SZEGEDY et al.: "Going Deeper with Convolutions", 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, pages 1 - 9 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109522942B (en) | Image classification method and device, terminal equipment and storage medium | |
KR102591961B1 (en) | Model training method and device, and terminal and storage medium for the same | |
WO2019100724A1 (en) | Method and device for training multi-label classification model | |
CN111680595A (en) | Face recognition method and device and electronic equipment | |
CN111814794B (en) | Text detection method and device, electronic equipment and storage medium | |
CN111797893A (en) | Neural network training method, image classification system and related equipment | |
CN111860398B (en) | Remote sensing image target detection method and system and terminal equipment | |
KR20160034814A (en) | Client device with neural network and system including the same | |
CN111027576B (en) | Cooperative significance detection method based on cooperative significance generation type countermeasure network | |
CN112183295A (en) | Pedestrian re-identification method and device, computer equipment and storage medium | |
CN110956263A (en) | Construction method of binarization neural network, storage medium and terminal equipment | |
CN115170565B (en) | Image fraud detection method and device based on automatic neural network architecture search | |
CN112085056A (en) | Target detection model generation method, device, equipment and storage medium | |
CN112418195A (en) | Face key point detection method and device, electronic equipment and storage medium | |
CN111950633A (en) | Neural network training method, neural network target detection method, neural network training device, neural network target detection device and storage medium | |
CN111126049A (en) | Object relation prediction method and device, terminal equipment and readable storage medium | |
CN111382638A (en) | Image detection method, device, equipment and storage medium | |
CN115439726B (en) | Image detection method, device, equipment and storage medium | |
CN117011909A (en) | Training method of face recognition model, face recognition method and device | |
CN114820755A (en) | Depth map estimation method and system | |
CN115346270A (en) | Traffic police gesture recognition method and device, electronic equipment and storage medium | |
CN111191675B (en) | Pedestrian attribute identification model realization method and related device | |
CN116802646A (en) | Data processing method and device | |
CN112489687A (en) | Speech emotion recognition method and device based on sequence convolution | |
CN113963282A (en) | Video replacement detection and training method and device of video replacement detection model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200918 |