CN113159006B - Attendance checking method and system based on face recognition, electronic equipment and storage medium - Google Patents
- Publication number
- CN113159006B CN113159006B CN202110699382.1A CN202110699382A CN113159006B CN 113159006 B CN113159006 B CN 113159006B CN 202110699382 A CN202110699382 A CN 202110699382A CN 113159006 B CN113159006 B CN 113159006B
- Authority
- CN
- China
- Prior art keywords
- face
- deformable
- texture
- trained
- face picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/109—Time management, e.g. calendars, reminders, meetings or time accounting
- G06Q10/1091—Recording time for administrative or management purposes
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/10—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
Abstract
The method comprises the steps of collecting frontal face pictures of the persons to be checked as a sample set, generating face pictures at arbitrary angles from the frontal pictures through a pre-trained 3D morphable generative adversarial network to expand the sample set, training a face recognition model with the expanded sample set, recognizing arbitrary-angle face pictures of on-site attendance personnel with the trained model, and generating an attendance record when recognition succeeds.
Description
Technical Field
The present application relates to the technical field of machine learning, and in particular to an attendance method, an attendance system, an electronic device and a storage medium based on face recognition.
Background
With the development of face recognition technology, it is applied more and more in people's daily lives, for example in access-control and attendance systems.
At present, because face samples with large-angle poses are scarce, the recognition accuracy of face recognition models is low, and users often have to deliberately cooperate with the camera's angle so that a frontal face image can be captured for recognition. Face recognition is therefore inefficient under realistic, complex conditions, and brief stoppages or even queues easily occur at attendance check-in.
Disclosure of Invention
The embodiments of the present application provide an attendance method, an attendance system, an electronic device and a storage medium based on face recognition, aiming to improve face recognition efficiency and attendance efficiency, save transit time, and prevent attendance personnel from lingering or queuing.
In a first aspect, an embodiment of the present application provides an attendance checking method based on face recognition, where the method includes:
collecting frontal face pictures of the persons to be checked as a sample set;
generating face pictures at arbitrary angles from the frontal face pictures through a pre-trained 3D morphable generative adversarial network (3D Morphable Model GAN) to expand the sample set, wherein the 3D morphable generative adversarial network is generated by combining a 3D morphable model with a generative adversarial network;
training a face recognition model with the expanded sample set;
recognizing arbitrary-angle face pictures of on-site attendance personnel with the trained face recognition model, and generating an attendance record when recognition succeeds;
wherein the 3D morphable generative adversarial network is obtained by:
inputting a random texture parameter z_tex and 3D morphable model shape parameters α into a texture generator G_tex to generate a texture map matching the shape of the 3D morphable model;
inputting the random texture parameter z_tex, the 3D morphable model shape parameters α, a random background parameter z_bg, a pose label p and expression parameters ε into a background generator G_bg to generate a background image consistent with the given parameters;
combining the texture map with the pose label p, the expression parameters ε and the 3D morphable model shape parameters α for 3D reconstruction, and mapping the texture into the rendered image space through a texture mapping method M and a differentiable rendering function R to generate a rendered face texture map;
compositing the rendered face texture map with the background image, feeding the composite image into a discriminator D, and optimizing the loss function of the discriminator D, the loss function of the texture generator G_tex and the loss function of the background generator G_bg to obtain the pre-trained 3D morphable generative adversarial network.
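As a concrete illustration, the four-step generator pipeline above can be sketched with toy stand-ins. Everything below is an assumption for illustration only: the latent dimensions, the stub generators, the pose-shift "renderer" and the thresholded foreground mask are placeholders for the real networks, the texture mapping M and the differentiable renderer R.

```python
import numpy as np

rng = np.random.default_rng(0)

# All dimensions and stubs below are illustrative assumptions.
DIM_TEX, DIM_BG, DIM_SHAPE, DIM_EXPR = 8, 8, 4, 4
H, W = 8, 8  # toy image resolution

def texture_generator(z_tex, alpha):
    """Stub for G_tex: texture latent + 3DMM shape params -> texture map."""
    feat = np.concatenate([z_tex, alpha])            # length 12
    return np.tanh(np.outer(feat, feat))[:H, :W]

def background_generator(z_tex, alpha, z_bg, pose, expr):
    """Stub for G_bg: conditioned on all parameters so background matches the face."""
    feat = np.concatenate([z_tex, alpha, z_bg, [pose], expr])  # length 25
    return np.tanh(np.outer(feat, feat))[:H, :W]

def render(texture, pose):
    """Stand-in for R(M(.)): here just a pose-dependent horizontal shift."""
    return np.roll(texture, int(pose), axis=1)

def composite(face, background, mask):
    """Alpha-composite the rendered face texture over the background image."""
    return mask * face + (1.0 - mask) * background

z_tex = rng.normal(size=DIM_TEX)
z_bg = rng.normal(size=DIM_BG)
alpha = rng.normal(size=DIM_SHAPE)   # 3DMM shape parameters
expr = rng.normal(size=DIM_EXPR)     # expression parameters
pose = 3.0                           # pose label (e.g. a yaw bucket)

tex = texture_generator(z_tex, alpha)
bg = background_generator(z_tex, alpha, z_bg, pose, expr)
face = render(tex, pose)
mask = (np.abs(face) > 0.1).astype(float)  # toy foreground mask
image = composite(face, bg, mask)          # what the discriminator D would see
print(image.shape)
```

In the real network the composite image, not the bare rendering, is what the discriminator judges, which is why the background generator is conditioned on the same latents as the texture generator.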
In some of these embodiments, the face recognition model is an end-to-end deep convolutional neural network model.
In some embodiments, the 3D morphable generative adversarial network is trained as follows:
alternately minimizing the discriminator loss and the generator loss, wherein the objective loss functions are:

L_D = E_{(I_r, p) ~ P_data}[ -log D(I_r, p) ] + E_{z ~ N(0, I_N)}[ -log(1 - D(G(z; p), p)) ] + λ · E_{(I_r, p) ~ P_data}[ ||∇_{I_r} D(I_r, p)||^2 ]

L_G = E_{z ~ N(0, I_N)}[ -log D(G(z; p), p) ]

wherein L_D is the loss function of the discriminator and L_G is the loss function of the generator; the first term of L_D is the cross-entropy loss on real samples and the second term is the cross-entropy loss on generated samples; the last term of L_D is an additional gradient penalty weighted by λ; z = (z_tex, z_bg, α, ε) is the concatenated vector of the random texture parameter z_tex, the random background parameter z_bg, the 3D morphable model shape parameters α and the expression parameters ε; (I_r, p) is a real image I_r and its associated pose label p randomly drawn from the training data distribution P_data; G = (G_tex, G_bg) is the parameterized generator network composed of the parameterized texture generator G_tex and the background generator G_bg; D is the parameterized discriminator network; and N is the total dimension of the texture, background, shape and expression parameter vectors in z.
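The alternating optimization above can be sketched numerically. This is a minimal illustration, not the patent's implementation: the discriminator is a toy linear function D(x) = w·x (so its input gradient is exactly w and the gradient penalty is analytic), and the penalty weight λ = 10 is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(1)

def softplus(x):
    """Numerically stable log(1 + e^x)."""
    return np.logaddexp(0.0, x)

w = rng.normal(size=4)     # toy linear discriminator: D(x) = w . x
lam = 10.0                 # gradient-penalty weight (assumed value)

x_real = rng.normal(size=(32, 4))          # stand-ins for real images
x_fake = rng.normal(size=(32, 4)) + 2.0    # stand-ins for generated images

d_real = x_real @ w
d_fake = x_fake @ w

# Discriminator objective: cross-entropy on real samples, cross-entropy on
# generated samples, plus the additional gradient penalty term.
loss_d_real = softplus(-d_real).mean()     # -log sigmoid(D(I_r, p))
loss_d_fake = softplus(d_fake).mean()      # -log(1 - sigmoid(D(G(z; p), p)))
grad_penalty = lam * np.sum(w ** 2)        # ||grad_x D||^2 equals ||w||^2 for linear D
loss_d = loss_d_real + loss_d_fake + grad_penalty

# Generator objective: make the discriminator accept generated samples.
loss_g = softplus(-d_fake).mean()

print(float(loss_d), float(loss_g))
```

In practice one gradient-descent step on loss_d (with the generator frozen) alternates with one step on loss_g (with the discriminator frozen).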
In some embodiments, generating arbitrary-angle face pictures from the frontal face pictures through the pre-trained 3D morphable generative adversarial network to expand the sample set includes:

generating a face picture at an arbitrary angle according to I_gen = G(z_tex, z_bg, α, ε; p), wherein I_gen is the generated image; z_tex is the d_tex-dimensional random texture parameter; z_bg is the d_bg-dimensional random background parameter; α is the d_α-dimensional 3D morphable model shape parameter; ε is the d_ε-dimensional expression parameter; p is the pose label; G_tex is the trained texture generator; G_bg is the trained background generator; and G is the generator network composed of G_tex and G_bg; and adding the generated arbitrary-angle face pictures to the sample set to obtain the expanded sample set.
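A minimal sketch of this expansion step, under the assumption that the trained generator is available as a black-box function of the latent parameters and the pose label (the stub below simply mixes its inputs; all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def generator(z_tex, z_bg, alpha, expr, pose):
    """Stub for the trained G(z_tex, z_bg, alpha, expr; p); returns a toy 'image'."""
    feat = np.concatenate([z_tex, z_bg, alpha, expr, [float(pose)]])
    return np.tanh(feat)

# Assumed latent codes standing in for one person's frontal enrollment picture.
z_tex, z_bg = rng.normal(size=4), rng.normal(size=4)
alpha, expr = rng.normal(size=3), rng.normal(size=3)

# Yaw angles used as pose labels; 0 corresponds to the frontal picture itself.
pose_labels = [-90, -60, -30, 0, 30, 60, 90]
expanded_sample_set = [generator(z_tex, z_bg, alpha, expr, p) for p in pose_labels]
print(len(expanded_sample_set))
```

Sweeping the pose label p while keeping the identity-bearing latents fixed is what turns one frontal enrollment picture into a multi-pose sample set.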
In some of these embodiments, in the case where the antagonistic network generates a face picture of 30 degrees side-to-side, a face picture of 60 degrees side-to-side, and a face picture of 90 degrees side-to-side through the pre-trained 3D deformable generation, the method further comprises:
extracting face feature vectors from the upright face picture, the face picture swinging left and right by 30 degrees, the face picture swinging left and right by 60 degrees and the face picture swinging left and right by 90 degrees through the trained face recognition model;
fusing the extracted face feature vectors;
and using the fused feature as the registration feature to establish a face feature database.
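The fusion operator is not fixed by the text; one common, simple choice is to average the per-angle feature vectors and L2-normalize the result, sketched below (the 128-dimensional embedding size is an assumption):

```python
import numpy as np

def fuse_features(features):
    """Fuse per-angle face feature vectors into one registration feature.
    Mean pooling followed by L2 normalization is one common choice; the
    text does not specify the fusion operator."""
    fused = np.mean(features, axis=0)
    return fused / np.linalg.norm(fused)

rng = np.random.default_rng(3)
# Features extracted from the frontal, 30-, 60- and 90-degree pictures
# (128-dimensional embeddings are an assumed size).
per_angle_features = [rng.normal(size=128) for _ in range(4)]
registration_feature = fuse_features(per_angle_features)

# One fused registration feature per enrolled person.
face_feature_database = {"person_0001": registration_feature}
print(registration_feature.shape)
```

Normalizing the fused feature makes the later cosine-similarity comparison reduce to a dot product against the probe feature.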
In some embodiments, recognizing arbitrary-angle face pictures of on-site attendance personnel with the trained face recognition model and generating an attendance record upon successful recognition includes: extracting a face feature vector from any face picture captured by a camera at the attendance site through the trained face recognition model, and computing the cosine similarity between the extracted face feature vector and the face feature vectors in the face feature database one by one;
and if the cosine similarity exceeds a preset threshold, recognition succeeds and an attendance record is generated.
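The matching step above can be sketched as follows. The 0.6 threshold and the 128-dimensional features are illustrative assumptions; the text only specifies "a preset threshold" and a one-by-one cosine-similarity comparison against the database:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_attendance(query_feature, database, threshold=0.6):
    """Compare the probe feature against every registered feature one by one;
    return (person_id, score) when the best similarity exceeds the threshold,
    else (None, score). The 0.6 threshold is an illustrative value."""
    best_id, best_score = None, -1.0
    for person_id, reg_feature in database.items():
        score = cosine_similarity(query_feature, reg_feature)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score > threshold:
        return best_id, best_score   # recognition succeeded: record attendance
    return None, best_score

rng = np.random.default_rng(4)
alice = rng.normal(size=128)
database = {"alice": alice, "bob": rng.normal(size=128)}
probe = alice + 0.05 * rng.normal(size=128)   # noisy on-site capture of alice

matched_id, score = check_attendance(probe, database)
print(matched_id)
```

A successful return value is what would trigger writing the attendance record with the capture timestamp.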
In a second aspect, an embodiment of the present application provides an attendance system based on face recognition, where the system includes:
the acquisition module, configured to collect frontal face pictures of persons to be checked as a sample set;
the expansion module, configured to generate arbitrary-angle face pictures from the frontal face pictures through the pre-trained 3D morphable generative adversarial network to expand the sample set;
the training module, configured to train a face recognition model with the expanded sample set;
and the recognition module, configured to recognize arbitrary-angle face pictures of attendance personnel through the trained face recognition model and to generate an attendance record when recognition succeeds.
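The four claimed modules can be sketched as a single class. All interfaces here (the augmenter callable, the recognizer with fit/score methods, and the 0.6 threshold) are hypothetical scaffolding for illustration, not the patent's API:

```python
class StubRecognizer:
    """Hypothetical stand-in for the trained face-recognition model."""
    def fit(self, samples):
        self.trained_on = len(samples)
    def score(self, capture):
        return 0.9  # pretend every probe matches well

class FaceAttendanceSystem:
    """Sketch of the claimed four-module attendance system."""
    def __init__(self, augmenter, recognizer, threshold=0.6):
        self.augmenter = augmenter    # wraps the 3D morphable GAN
        self.recognizer = recognizer  # wraps the face-recognition model
        self.threshold = threshold
        self.samples, self.records = [], []

    def acquire(self, frontal_picture):      # acquisition module
        self.samples.append(frontal_picture)

    def expand(self):                        # expansion module
        self.samples += [self.augmenter(s) for s in list(self.samples)]

    def train(self):                         # training module
        self.recognizer.fit(self.samples)

    def recognize(self, capture, timestamp): # recognition module
        if self.recognizer.score(capture) >= self.threshold:
            self.records.append((capture, timestamp))
            return True
        return False

system = FaceAttendanceSystem(augmenter=lambda pic: pic + "@30deg",
                              recognizer=StubRecognizer())
system.acquire("alice_frontal.jpg")
system.expand()
system.train()
ok = system.recognize("alice_probe.jpg", "2021-06-23T09:00")
print(ok, len(system.records))
```

The point of the sketch is the data flow: acquisition feeds expansion, the expanded set feeds training, and only the recognition step touches live captures.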
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to perform the face-recognition-based attendance method described above.
In a fourth aspect, the present application provides a storage medium in which a computer program is stored, the computer program being configured to perform the face-recognition-based attendance method described above when run.
Compared with the related art, in the face-recognition-based attendance method provided by the embodiments of the present application, frontal face pictures of the persons to be checked are collected as a sample set. Considering that data sets covering different pose angles are difficult to obtain in practice, and that even when large-angle face data can be captured it is difficult to clean and sort by class (i.e. by person), arbitrary-angle face pictures are generated from the frontal pictures through the pre-trained 3D morphable generative adversarial network to expand the sample set, which alleviates the severe shortage of training data. The face recognition model is then trained on the expanded sample set. Compared with the prior art, where the sample set contains only frontal face pictures, the expanded sample set adds face pictures at other angles; training on this more comprehensive data set yields a better model, improving both the accuracy and the robustness of face recognition. Finally, arbitrary-angle face pictures of on-site attendance personnel are recognized with the trained model, and an attendance record is generated when recognition succeeds.
In the prior art, face pictures with large-angle poses cannot be recognized: when the captured pictures include large-angle faces, they are filtered out by a face-angle estimation module, and users must cooperate with the capture device by stopping, turning their body or turning their head so that a frontal picture can be taken before recognition succeeds. Face recognition is therefore inefficient, the pass-through speed is slow, transit time is long, and attendance sites suffer brief stoppages or even queues, making attendance costly in time. In the embodiments of the present application, by contrast, attendance personnel need not stop, turn or pose for a frontal photo according to the capture device's shooting angle; the trained face recognition model recognizes on-site face pictures at arbitrary angles. Face recognition efficiency is thus improved, attendance efficiency is improved, and lingering or queuing at the attendance site is avoided. The present application therefore improves large-angle face recognition and the associated pass-through experience, saves attendance time, and raises attendance efficiency. In addition, because the model is trained on the more comprehensive expanded sample set, the trained model performs better and recognition accuracy is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a first flowchart of an attendance checking method based on face recognition according to an embodiment of the present application;
FIG. 2 is a flowchart of the manner of obtaining the 3D morphable generative adversarial network according to an embodiment of the present application;
FIG. 3 is a flowchart of the steps of generating arbitrary-angle face pictures from the frontal face pictures through the pre-trained 3D morphable generative adversarial network to expand the sample set, according to an embodiment of the present application;
fig. 4 is a second flowchart of an attendance checking method based on face recognition according to an embodiment of the present application;
fig. 5 is a flowchart of the steps of recognizing arbitrary-angle face pictures of attendance personnel with the trained face recognition model and generating an attendance record when recognition succeeds, according to an embodiment of the present application;
fig. 6 is a block diagram of a structure of an attendance system based on face recognition according to an embodiment of the application;
fig. 7 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The present application provides an attendance method based on face recognition. Fig. 1 is a first flowchart of the face-recognition-based attendance method according to an embodiment of the present application. As shown in fig. 1, in this embodiment the method includes the following steps:
step S101, collecting a correction face picture of a person to be checked as a sample set; the technical personnel in the field can acquire the correction face picture of the personnel to be checked as a sample set through acquisition equipment, wherein the acquisition equipment can be a camera or other equipment, and is not specifically limited; it is worth noting that the left-right swing amplitude value and the up-down pitching amplitude value of the face in the front face picture are both 0;
step S102, generating a confrontation network opposite-end face picture through pre-trained 3D deformable to generate a face picture at any angle so as to expand a sample set; for example, the pre-trained 3D deformable generation confrontation network confrontation face picture generates a face picture that swings 5 degrees left, a face picture that swings 5 degrees right, a face picture that swings 10 degrees left, a face picture that swings 10 degrees right, a face picture that swings 15 degrees left, and a face picture that swings 15 degrees right to expand the sample set, although in some other embodiments, the pre-trained 3D deformable generation confrontation network confrontation face picture generates a face picture that swings 15 degrees left to expand the sample set, specifically set according to user requirements, and no specific limitation is made here; because a data set containing different angle posture changes is difficult to obtain in reality, human face data with large angle posture changes are scarce; moreover, even if large-angle face data can be captured, class (different people) cleaning and sorting are difficult to perform, so that the existing algorithm model is not robust enough for large-angle face recognition, the cleaning and sorting of the data by the model is not reliable, the cost of manual operation is too high, for convenience of understanding, a face picture swinging 5 degrees left, a face picture swinging 5 degrees right, a face picture swinging 10 degrees left, a face picture swinging 10 degrees right, a face picture swinging 15 degrees left and a face picture swinging 15 degrees right generated by the countermeasure network opposite to the front face picture in the step S102 are generated by using a pre-trained 3D deformable model, and a sample set is taken as an example for explanation, compared with the prior art in which the sample set only comprises the front face picture, the extended sample set is added with the face 
picture swinging 5 degrees left, the face picture swinging 5 degrees right, the face picture swinging 10 degrees left, The face picture swinging rightwards by 10 degrees, the face picture swinging leftwards by 15 degrees and the face picture swinging rightwards by 15 degrees make up for the defect of serious shortage of training data, and are favorable for enhancing the robustness of a face recognition model;
step S103, training a face recognition model by using the extended sample set; compared with the sample set in the prior art, the extended sample set is added with some other angle face pictures so as to enable the sample set to be more comprehensive, and therefore the accuracy of the face recognition model on the face recognition is improved;
and step S104, recognizing the face picture of the attendance checking personnel at any angle by using the trained face recognition model, and generating an attendance checking record when the recognition is successful. Compared with the prior art that human face pictures with large-angle postures cannot be recognized when the human face pictures are recognized, when the collected human face pictures comprise the large-angle human face pictures, the human face pictures are filtered by the human face angle estimation module, and the human face pictures can be successfully recognized only by matching with the collected human face pictures (shooting front photos) of stopping steps, turning bodies or twisting heads of the collecting equipment, so that the human face recognition efficiency is low, and the situations of temporary residence and even queuing jam of an attendance field are caused due to slow recognition passing speed and overlong passing time, and the time cost of attendance is high, however, the human face pictures with any angle of the attendance personnel can be recognized by using the trained human face recognition model without matching with the steps, turning bodies or twisting heads of the collecting equipment to shoot front photos, so that the human face recognition efficiency is improved, and the human face recognition passing feeling of the large-angle is improved, the situation that the attendance card stays temporarily or even lines up and blocks up during passing is reduced, the attendance time cost is saved for a user, the problem that the accuracy of a face recognition technology in an entrance guard or an attendance system is low is solved, and an attendance record is generated when the recognition is successful, so that the follow-up inquiry of the attendance record is facilitated.
Through steps S101 to S104, the technical solution of this embodiment collects frontal face pictures of the persons to be checked as a sample set. Considering that data sets with different pose angles are difficult to obtain in practice, and that large-angle face data, even when captured, is difficult to clean and sort by class (by person), arbitrary-angle face pictures are generated from the frontal pictures through the pre-trained 3D morphable generative adversarial network to expand the sample set, alleviating the severe shortage of training data and strengthening the robustness of the face recognition model. Compared with the prior art, where the sample set contains only frontal pictures, the expanded set adds many face pictures at other angles, so the data is more comprehensive, the model trained on it performs better, and recognition accuracy improves. The trained model then recognizes arbitrary-angle face pictures of on-site attendance personnel and generates an attendance record upon success. In the prior art, large-angle (non-frontal) face pictures cannot be recognized: they are filtered out by a face-angle estimation module, and users must cooperate with the capture device by stopping, turning their body or turning their head so that a frontal photo can be taken, so recognition is inefficient, pass-through is slow, transit time is long, attendance sites suffer stoppages and queues, and attendance is costly in time. The present application instead recognizes arbitrary-angle pictures directly, improving recognition efficiency and pass-through rate, avoiding stoppages and queuing at the attendance site, improving the experience of large-angle recognition, saving users time, and raising attendance efficiency, while the attendance records generated upon successful recognition facilitate later queries.
In some embodiments, the pre-trained 3D deformable generative adversarial network is obtained by combining a 3D deformable model (3D Morphable Model) with a generative adversarial network (GAN). Specifically, fig. 2 is a flowchart of a manner of obtaining the 3D deformable generative adversarial network according to an embodiment of the present application. As shown in fig. 2, in an alternative embodiment, the network is obtained by the following steps:
step S201, a random texture parameter z_tex and a 3D deformable model shape parameter α are input to a texture generator G_T, which generates a texture map matching the shape of the 3D deformable model; texture information does not depend on pose or expression, so the pose parameter p and the expression parameter ε are not inputs to the texture generator. As is readily understood by those skilled in the art, the pose label is also called a conditional pose parameter or pose parameter.
Step S202, the random texture parameter z_tex, the 3D deformable model shape parameter α, a random background parameter z_bg, the pose label p, and an expression parameter ε are input to a background generator G_B, which generates a background image conforming to the given parameters; the background image includes features of the 3D deformable model, such as hair edges, clothing, and glasses, that are not reconstructed by the texture information.
Step S203, the texture map is combined with the pose label p, the expression parameter ε, and the 3D deformable model shape parameter α through the 3D deformable model to carry out 3D reconstruction, and the texture is mapped into rendered image space through a texture mapping method M and a differentiable rendering function R to generate a rendered face texture map;
step S204, the rendered face texture map and the background image are composited, the composite image is fed to a discriminator D, and the pre-trained 3D deformable generative adversarial network is obtained by optimizing the loss function L_D of the discriminator D together with the loss functions of the texture generator G_T and the background generator G_B, which are jointly expressed as L_G.
Through the above steps S201 to S204, the pre-trained 3D deformable generative adversarial network is obtained. With this network, face pictures at any angle can be generated from a frontal face picture to expand the sample set.
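As a rough illustration of the pipeline in steps S201 to S204, the sketch below wires toy stand-ins for the texture generator, background generator, renderer, mask compositor, and discriminator together. All function names and the tiny linear models are hypothetical placeholders, not the patent's actual networks; only the data flow between the four steps is meant to match the text.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8  # toy image resolution

def texture_generator(z_tex, alpha):
    # Step S201: texture depends only on the texture code and shape (not pose/expression)
    return np.outer(np.tanh(z_tex[:H]), np.tanh(alpha[:W]))

def background_generator(z_tex, alpha, z_bg, pose, eps):
    # Step S202: background conditioned on all latent parameters (here a flat image)
    bias = z_bg[:1] + pose[:1] + eps[:1]
    return np.full((H, W), np.tanh(bias)[0])

def render(texture, pose):
    # Step S203: stand-in for 3DMM reconstruction + differentiable rendering;
    # a horizontal roll crudely mimics a pose-dependent warp
    shift = int(round(float(pose[0]) * 3))
    return np.roll(texture, shift, axis=1)

def discriminator(image):
    # Step S204: a fake critic returning a score in (0, 1)
    return 1.0 / (1.0 + np.exp(-image.mean()))

z_tex, z_bg = rng.normal(size=16), rng.normal(size=16)
alpha, eps = rng.normal(size=16), rng.normal(size=16)
pose = np.array([0.5])  # e.g. a yaw amplitude value

tex = texture_generator(z_tex, alpha)
face = render(tex, pose)
bg = background_generator(z_tex, alpha, z_bg, pose, eps)
mask = (np.abs(face) > 0.1).astype(float)    # crude face-region mask
composite = mask * face + (1.0 - mask) * bg  # synthesized image fed to D
score = discriminator(composite)
```

In the real network the discriminator's score would drive the loss optimization of step S204; here it only demonstrates the forward pass.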
In some embodiments, the face recognition model is an end-to-end deep convolutional neural network model, where "end-to-end" means the model maps directly from its input end to its output end through the deep convolutional neural network. In some other embodiments, the face recognition model may also adopt other neural network models; this is not specifically limited here and is set according to user requirements.
In some embodiments, the training mode for the 3D deformable generative adversarial network is as follows:
alternately minimizing the losses of the discriminator and the generator, wherein the target loss functions are:

$$\min_{D} L_D = \mathbb{E}_{(I,p)\sim P_{data}}\left[-\log D(I,p)\right] + \mathbb{E}_{z\sim\mathcal{N}(0,1)^{N}}\left[-\log\left(1-D(G(z,p),p)\right)\right] + \lambda R_{GP}$$

$$\min_{G} L_G = \mathbb{E}_{z\sim\mathcal{N}(0,1)^{N}}\left[-\log D(G(z,p),p)\right]$$

wherein L_D is the loss function of the discriminator and L_G is the loss function of the generator; the first expectation term in L_D represents the cross-entropy loss value of the real samples, the second represents the cross-entropy loss value of the generated samples, and R_GP represents an additional gradient penalty term weighted by λ; the latent vector z is computed by the formula z = (z_tex, z_bg, α, ε), i.e., a vector representation comprising the random texture parameter z_tex, the random background parameter z_bg, the 3D deformable model shape parameter α, and the expression parameter ε; (I, p) ~ P_data is a real image and its associated pose label randomly selected from the distribution of the training data, where I is the real image and p is the pose label, also called the conditional pose parameter or pose parameter, used to control the left-right yaw amplitude value and the up-down pitch amplitude value of the face; G is the parameterized generator network, composed of G_T, which represents the parametric texture generator, and G_B, which represents the background generator network; D is the parameterized discriminator network; and N is the combined notation for the dimensionalities of the texture, background, shape, and expression parameter vectors in z, i.e., N = N_tex + N_bg + N_α + N_ε.
As will be appreciated by those skilled in the art, z_tex is an N_tex-dimensional random texture parameter that follows the standard normal distribution (also called the Gaussian distribution); z_bg is an N_bg-dimensional random background parameter that follows the standard normal distribution; α is an N_α-dimensional 3D deformable model shape parameter that follows the standard normal distribution; ε is an N_ε-dimensional expression parameter, also following the standard normal distribution; and α and ε together control the form of the 3D deformable model (3D Morphable Model);
it should be noted that alternately minimizing the generator loss and the discriminator loss is a common training scheme; repeated alternating iterations let the generator and discriminator jointly approach the global optimum and yield a better network model, which benefits the subsequent generation of large-angle face samples at random arbitrary angles with the trained 3D deformable generative adversarial network.
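The alternating minimization described above can be sketched on a one-dimensional toy problem. The linear "generator" g(z) = a·z + b and logistic "discriminator" below are illustrative stand-ins, not the patent's networks, and the gradient penalty term is omitted for brevity; the gradients are derived by hand from the cross-entropy losses.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

a, b = 1.0, 0.0          # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator parameters: d(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(500):
    z = rng.normal(size=batch)
    x_real = rng.normal(loc=2.0, scale=0.5, size=batch)  # target distribution
    x_fake = a * z + b

    # discriminator step: minimize cross-entropy on real vs. generated samples
    d_r, d_f = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    gw = -np.mean((1 - d_r) * x_real) + np.mean(d_f * x_fake)
    gc = -np.mean(1 - d_r) + np.mean(d_f)
    w, c = w - lr * gw, c - lr * gc

    # generator step: minimize -log D(G(z)) (non-saturating generator loss)
    x_fake = a * z + b
    d_f = sigmoid(w * x_fake + c)
    ga = -np.mean((1 - d_f) * w * z)
    gb = -np.mean((1 - d_f) * w)
    a, b = a - lr * ga, b - lr * gb

eps_ = 1e-12
loss_d = -np.mean(np.log(d_r + eps_)) - np.mean(np.log(1 - d_f + eps_))
```

After training, the generator's offset b should have drifted from 0 toward the real mean of 2.0, illustrating how the two losses pull against each other when minimized in alternation.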
Fig. 3 is a flowchart of the steps included in generating arbitrary-angle face pictures from a frontal face picture through the pre-trained 3D deformable generative adversarial network to expand the sample set, according to an embodiment of the present application. As shown in fig. 3, in some embodiments, this generation includes the following steps:
step S301, a face picture at any angle is generated according to

$$I_{gen} = G(z_{tex}, z_{bg}, \alpha, \varepsilon, p)$$

wherein I_gen is the generated image; z_tex is the N_tex-dimensional random texture parameter, following the standard normal distribution; z_bg is the N_bg-dimensional random background parameter, following the standard normal distribution; α is the N_α-dimensional 3D deformable model shape parameter, following the standard normal distribution; ε is the N_ε-dimensional expression parameter, following the standard normal distribution; α and ε are each used to control the 3D deformable model (3D Morphable Model); p is the pose label, i.e., p represents the left-right yaw amplitude value and the up-down pitch amplitude value of the face; and G is composed of the trained parametric texture generator G_T and the trained background generator network G_B. Of course, in some other embodiments, the arbitrary-angle face picture may also be generated according to

$$I_{gen} = m \odot M^{-1}\big(G_T(z_{tex}, \alpha),\, y\big) + (1 - m) \odot G_B(z_{tex}, \alpha, z_{bg}, \varepsilon, p)$$

wherein, as known to those skilled in the art, I_gen is the generated image, G_B is the background generator, G_T is the texture generator, and m is a binary mask used to combine the background picture and the texture; 1 denotes an all-ones vector of the same shape as the picture. Since the ⊙ operation multiplies two vectors element-wise, m ⊙ M^{-1}(G_T(z_tex, α), y) multiplies the mask and the rendered texture element-wise and, in the same way, (1 − m) ⊙ G_B(·) multiplies (1 − m) and the background element-wise. Here M^{-1} is an inverse texture mapping function that maps the interpolation in the generated texture map to the appropriate location in image space, and y represents the rendering of the texture coordinates in image space. Since the rendering of y associated with K is performed by the differentiable rendering function R, y = R(V, F, K), where V is a random-instance shape vertex vector of the 3D deformable model (3D Morphable Model), F represents the index list of the triangle vertices of the 3D deformable model, and K ∈ R^{|F|×3×2} is the vector of texture coordinates, each triangle having three sets of two-dimensional texture vertices.
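The mask-based composition described above, where face and background are combined element-wise through a binary mask, can be demonstrated directly; the arrays below are arbitrary toy data standing in for the rendered texture and the generated background.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 4, 4

face = rng.uniform(size=(H, W))          # stand-in for the rendered face texture
background = rng.uniform(size=(H, W))    # stand-in for the generated background
m = np.zeros((H, W))
m[1:3, 1:3] = 1.0                        # binary mask: 1 on the face region

# element-wise (Hadamard) composition: face pixels where m == 1, background elsewhere
composite = m * face + (1.0 - m) * background
```

Inside the mask the composite equals the face image exactly, and outside it equals the background, which is precisely the role the binary mask m plays in the generation formula.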
Step S302, the generated arbitrary-angle face pictures are added to the sample set to obtain the expanded sample set. Preferably, the face pictures in the expanded sample set include the frontal face picture and the arbitrary-angle face pictures generated from it by the pre-trained 3D deformable generative adversarial network, where the specific angles of the face pictures can be set according to user requirements and are not specifically limited;
through the above steps S301 to S302, this embodiment generates face pictures at any angle based on the generation formula of step S301 and adds them to the sample set to obtain the expanded sample set. Compared with the sample set of the prior art, the expanded sample set adds face pictures at other angles, so the sample set is richer, the trained face recognition model performs better, and the accuracy of face recognition can be improved.
Fig. 4 is a second flowchart of an attendance checking method based on face recognition in an embodiment of the present application. As shown in fig. 4, in the case where the pre-trained 3D deformable generative adversarial network generates face pictures yawed 30 degrees left and right, 60 degrees left and right, and 90 degrees left and right, the method further includes the following steps:
step S401, face feature vectors are extracted by the trained face recognition model from the frontal face picture and the face pictures yawed 30 degrees, 60 degrees, and 90 degrees left and right. In some other embodiments, the yaw angle of the face pictures may instead be 15 degrees, 45 degrees, or other angles, which is not specifically limited here. It should be noted that, in the field of deep learning, a face feature vector refers to a floating-point numeric vector of a certain dimension obtained by convolutional network calculation, generally described as a feature vector, i.e., a numeric description of the face.
Step S402, the extracted face feature vectors are fused. In this embodiment, the feature values of corresponding dimensions across the face feature vectors of the 7 face pictures in different poses (the frontal face picture and the face pictures yawed 30, 60, and 90 degrees to the left and to the right) are averaged to obtain a fused feature vector.
Step S403, the fused features are used as registration features to establish a face feature database.
Through steps S401 to S403 above, this embodiment extracts face feature vectors from the frontal face picture and the face pictures yawed 30, 60, and 90 degrees left and right with the trained face recognition model, fuses the extracted face feature vectors, and uses the fused features as registration features to establish a face feature database. It should be noted that this face feature database serves as the base library used during recognition.
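The pose-wise feature fusion of steps S401 to S403 amounts to averaging the 7 per-pose embeddings dimension by dimension. In the sketch below the 128-dimensional vectors are random stand-ins for real recognition-model outputs, and the final L2 normalization is an added convention that suits the cosine-similarity matching used later, not something the text mandates.

```python
import numpy as np

rng = np.random.default_rng(2)

# 7 pose-specific feature vectors: frontal + yaw of ±30, ±60, ±90 degrees
pose_features = rng.normal(size=(7, 128))  # stand-ins for model embeddings

fused = pose_features.mean(axis=0)         # average corresponding dimensions
fused = fused / np.linalg.norm(fused)      # unit-normalize (assumption)

registration_db = {"employee_001": fused}  # hypothetical base-library entry
```

Each registered person thus contributes a single fused vector to the base library, regardless of how many pose variants were generated for them.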
Fig. 5 is a flowchart of the steps included in recognizing face pictures of attendance personnel at any angle through the trained face recognition model and generating an attendance record when recognition succeeds, according to an embodiment of the present application. As shown in fig. 5, this process includes:
step S501, a face feature vector is extracted by the trained face recognition model from any face picture captured by a camera at the attendance site, and the cosine similarity between the extracted face feature vector and each face feature vector in the base library (the base library is the face feature database) is computed one by one;
step S502, if the cosine similarity exceeds a preset threshold, recognition succeeds and an attendance record is generated. For example, in this embodiment the cosine similarity is 0.7 and the preset threshold is 0.6, so the similarity exceeds the threshold, recognition succeeds, and an attendance record is generated. In some other embodiments, if the cosine similarity does not exceed the preset threshold, recognition fails and either a failure record is generated or no attendance record is generated;
through steps S501 to S502, the face recognition process is completed. Specifically, a face feature vector is extracted by the trained face recognition model from any face picture captured by a camera at the attendance site, and its cosine similarity with the face feature vectors in the base library (i.e., the face feature database) is computed one by one. The cosine distance serves as the similarity between the captured face and each registered face; for example, the similarity ranges from 0.0 to 1.0, and the registered face with the maximum similarity is selected. If the corresponding score exceeds the preset threshold, the captured face and that registered face are considered to belong to the same person; recognition then succeeds, attendance registration succeeds, and an attendance record is generated. Otherwise the captured face is judged not to be in the face feature database, face recognition fails, and attendance registration fails. It should be noted that the preset threshold is set according to user requirements and is not specifically limited here.
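A minimal sketch of the matching in steps S501 to S502, using the 0.6 threshold from the example above; the base-library entries and the probe vector are random stand-ins for real embeddings, and the names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical base library of registered (fused) features
db = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}

probe = db["bob"] + 0.1 * rng.normal(size=128)  # captured face, near bob's entry

# compare one by one and keep the registered face with maximum similarity
best_name, best_sim = max(
    ((name, cosine_similarity(probe, feat)) for name, feat in db.items()),
    key=lambda t: t[1],
)

THRESHOLD = 0.6  # preset threshold from the example
recognized = best_sim > THRESHOLD
```

When `recognized` is true an attendance record would be written; otherwise the probe is judged absent from the base library and registration fails.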
The application also provides an attendance system based on face recognition. The system implements the above embodiments, and details already described are not repeated. As used below, the terms "module," "unit," "subunit," and the like may denote a combination of software and/or hardware implementing a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a structure of an attendance system based on face recognition in an embodiment of the present application, and as shown in fig. 6, the system includes:
the acquisition module 61, used for acquiring a frontal face picture of each person to be registered for attendance as a sample set;
an expansion module 62, configured to generate arbitrary-angle face pictures from the frontal face picture through the pre-trained 3D deformable generative adversarial network to expand the sample set;
a training module 63, configured to train a face recognition model through the extended sample set;
and the recognition module 64 is used for recognizing the face picture of the attendance checking personnel at any angle through the trained face recognition model and generating an attendance checking record when the recognition is successful.
In the technical scheme of the system, the acquisition module 61 acquires a frontal face picture of each person to be registered for attendance as a sample set. Because data sets covering different pose angles are difficult to obtain in practice, and even captured large-angle face data is difficult to clean and sort by class (i.e., by person), the expansion module 62 in this application generates arbitrary-angle face pictures from the frontal face picture through the pre-trained 3D deformable generative adversarial network to expand the sample set, solving the severe shortage of training data. Compared with the prior art, in which the sample set contains only frontal face pictures, the training module 63 trains the face recognition model through the expanded sample set. It is easy to understand that the expanded sample set includes not only the frontal face picture but also face pictures at other angles, so its data is richer, training the face recognition model on it works better, and face recognition becomes more accurate. In the traditional technical scheme, face pictures in large-angle poses (non-frontal face pictures) cannot be recognized, so when the collected pictures include large-angle faces, those pictures must be filtered out by a face angle estimation module, and a user can be recognized smoothly only by cooperating with the capture device by stopping, turning the body, or turning the head to shoot a frontal photo. Face recognition is therefore inefficient, the pass speed is low, passage takes too long, brief congestion or even queuing occurs at the attendance site, and the time cost of attendance is high. In the technical scheme of this application, by contrast, the recognition module 64 recognizes face pictures of on-site personnel at any angle through the trained face recognition model and generates an attendance record when recognition succeeds. Because no cooperation with the capture device (stopping, turning, or twisting the head to shoot a frontal photo) is required, face recognition efficiency improves, congestion and queuing are avoided, the pass rate of face recognition rises, the experience of large-angle face recognition improves, users save time, attendance efficiency increases, and the attendance records generated upon successful recognition facilitate subsequent queries.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The application also provides an electronic device, which comprises a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to execute the attendance checking method based on the face recognition.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, collecting frontal face pictures of persons to be registered for attendance as a sample set;
S2, generating arbitrary-angle face pictures from the frontal face picture through the pre-trained 3D deformable generative adversarial network to expand the sample set;
s3, training a face recognition model by using the extended sample set;
and S4, recognizing the face picture of the attendance checking personnel at any angle by using the trained face recognition model, and generating an attendance checking record when the recognition is successful.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the attendance checking method based on face recognition in the above embodiments, the embodiments of the present application may provide a storage medium for implementation. The storage medium stores a computer program; when executed by a processor, the computer program implements any of the attendance checking methods based on face recognition in the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to realize an attendance checking method based on face recognition. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 7 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application, and as shown in fig. 7, there is provided an electronic device, which may be a server, and an internal structure diagram of which may be as shown in fig. 7. The electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus, wherein the non-volatile memory stores an operating system, a computer program and a database. The processor is used for providing calculation and control capability, the network interface is used for communicating with an external terminal through network connection, the internal memory is used for providing an environment for an operating system and the running of a computer program, the computer program is executed by the processor to realize an attendance checking method based on face recognition, and the database is used for storing data.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the electronic devices to which the subject application may be applied, and that a particular electronic device may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (9)
1. An attendance checking method based on face recognition is characterized by comprising the following steps:
collecting a frontal face picture of a person to be registered for attendance as a sample set;
generating arbitrary-angle face pictures from the frontal face picture through a pre-trained 3D deformable generative adversarial network to expand the sample set; wherein the 3D deformable generative adversarial network is generated by a combination of a 3D deformable model and a generative adversarial network;
training a face recognition model by using the extended sample set;
recognizing face pictures of the attendance personnel at any angle by using the trained face recognition model, and generating an attendance record when the recognition is successful;
the 3D deformable generative adversarial network is obtained by the following steps:
inputting a random texture parameter z_tex and a 3D deformable model shape parameter α into a texture generator G_T to generate a texture map matching the shape of the 3D deformable model;
inputting the random texture parameter z_tex, the 3D deformable model shape parameter α, a random background parameter z_bg, a pose label p, and an expression parameter ε into a background generator G_B to generate a background image conforming to the given parameters;
combining the texture map with the pose label p, the expression parameter ε, and the 3D deformable model shape parameter α through the 3D deformable model to carry out 3D reconstruction, and mapping the texture into rendered image space through a texture mapping method M and a differentiable rendering function R to generate a rendered face texture map;
compositing the rendered face texture map and the background image, feeding the composite image into a discriminator D, and obtaining the pre-trained 3D deformable generative adversarial network by optimizing a loss function of the discriminator D, a loss function of the texture generator, and a loss function of the background generator, wherein the loss functions of the texture generator and the background generator are jointly expressed as L_G.
2. The method of claim 1, wherein the face recognition model is an end-to-end deep convolutional neural network model.
3. The method of claim 1, wherein the training mode for the 3D deformable generative adversarial network is as follows:
alternately minimizing the losses of the discriminator and the generator, wherein the target loss functions are:

$$\min_{D} L_D = \mathbb{E}_{(I,p)\sim P_{data}}\left[-\log D(I,p)\right] + \mathbb{E}_{z\sim\mathcal{N}(0,1)^{N}}\left[-\log\left(1-D(G(z,p),p)\right)\right] + \lambda R_{GP}$$

$$\min_{G} L_G = \mathbb{E}_{z\sim\mathcal{N}(0,1)^{N}}\left[-\log D(G(z,p),p)\right]$$

wherein L_D is the loss function of the discriminator and L_G is the loss function of the generator; the first expectation term in L_D represents the cross-entropy loss value of the real samples, the second represents the cross-entropy loss value of the generated samples, and R_GP represents an additional gradient penalty term weighted by λ; the latent vector z is computed by the formula z = (z_tex, z_bg, α, ε), comprising the random texture parameter z_tex, the random background parameter z_bg, the 3D deformable model shape parameter α, and the expression parameter ε; (I, p) ~ P_data is a real image and its associated pose label randomly selected from the distribution of the training data, where I is the real image and p is the pose label; G is the parameterized generator network, composed of G_T, which represents the parametric texture generator, and G_B, which represents the background generator network; D is the parameterized discriminator network; and N is the combined notation for the dimensionalities of the texture, background, shape, and expression parameter vectors in z.
4. The method of claim 1, wherein generating arbitrary-angle face pictures from the frontal face picture by the pre-trained 3D deformable generative adversarial network to expand the sample set comprises:
generating a face picture at any angle according to I_gen = G(z_tex, z_bg, α, ε, p), wherein I_gen is the generated image, z_tex is the N_tex-dimensional random texture parameter, z_bg is the N_bg-dimensional random background parameter, α is the N_α-dimensional 3D deformable model shape parameter, ε is the N_ε-dimensional expression parameter, p is the pose label, G_T is the trained parametric texture generator, G_B is the trained background generator network, and G is the generator network; and adding the generated arbitrary-angle face pictures into the sample set to obtain the expanded sample set.
5. The method of claim 1, wherein in the case of generating a face picture of a side-to-side swing of 30 degrees, a face picture of a side-to-side swing of 60 degrees, and a face picture of a side-to-side swing of 90 degrees by the pre-trained 3D deformable generation countermeasure network, the method further comprises:
extracting face feature vectors from the upright face picture, the face picture swinging left and right by 30 degrees, the face picture swinging left and right by 60 degrees and the face picture swinging left and right by 90 degrees through the trained face recognition model;
fusing the extracted face feature vectors;
and taking the fused features as registration features to establish a face feature database.
6. The method of claim 5, wherein the recognizing any-angle face picture of the attendance checking personnel by using the trained face recognition model, and when the recognition is successful, generating the attendance checking record comprises: extracting face characteristic vectors from any face picture captured by a camera on the attendance checking site through the trained face recognition model, and solving cosine similarity between the extracted face characteristic vectors and the face characteristic vectors in the face characteristic database one by one;
and if the cosine similarity exceeds a preset threshold value, the identification is successful, and an attendance record is generated.
7. An attendance system based on face recognition, the system comprising:
the acquisition module, used for acquiring a frontal face picture of a person to be registered for attendance as a sample set;
the expansion module, used for generating arbitrary-angle face pictures from the frontal face picture through the pre-trained 3D deformable generative adversarial network to expand the sample set;
the training module is used for training a face recognition model through the extended sample set;
and the recognition module is used for recognizing the face picture of the attendance checking personnel at any angle through the trained face recognition model and generating an attendance checking record when the recognition is successful.
8. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the attendance checking method based on human face recognition according to any one of claims 1 to 6.
9. A storage medium, wherein a computer program is stored in the storage medium, and wherein the computer program is configured to execute the attendance checking method based on human face recognition according to any one of claims 1 to 6 when the computer program runs.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110699382.1A CN113159006B (en) | 2021-06-23 | 2021-06-23 | Attendance checking method and system based on face recognition, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113159006A CN113159006A (en) | 2021-07-23 |
CN113159006B true CN113159006B (en) | 2021-09-14 |
Family
ID=76876037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110699382.1A Active CN113159006B (en) | 2021-06-23 | 2021-06-23 | Attendance checking method and system based on face recognition, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113159006B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113780095B (en) * | 2021-08-17 | 2023-12-26 | 中移(杭州)信息技术有限公司 | Training data expansion method, terminal equipment and medium of face recognition model |
CN113837236B (en) * | 2021-08-31 | 2022-11-15 | 广东智媒云图科技股份有限公司 | Method and device for identifying target object in image, terminal equipment and storage medium |
CN113792679A (en) * | 2021-09-17 | 2021-12-14 | 深信服科技股份有限公司 | Blacklist person identification method and device, electronic equipment and storage medium |
CN113822245B (en) * | 2021-11-22 | 2022-03-04 | 杭州魔点科技有限公司 | Face recognition method, electronic device, and medium |
CN115081920B (en) * | 2022-07-08 | 2024-07-12 | 华南农业大学 | Attendance check-in scheduling management method, system, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109446981A (en) * | 2018-10-25 | 2019-03-08 | 腾讯科技(深圳)有限公司 | A kind of face's In vivo detection, identity identifying method and device |
CN109635766A (en) * | 2018-12-20 | 2019-04-16 | 中国地质大学(武汉) | The face of convolutional neural networks based on small sample is taken pictures Work attendance method and system |
US10789796B1 (en) * | 2019-06-24 | 2020-09-29 | Sufian Munir Inc | Priority-based, facial recognition-assisted attendance determination and validation system |
- 2021-06-23: CN application CN202110699382.1A filed; granted as CN113159006B, status Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109446981A (en) * | 2018-10-25 | 2019-03-08 | 腾讯科技(深圳)有限公司 | A kind of face's In vivo detection, identity identifying method and device |
CN109635766A (en) * | 2018-12-20 | 2019-04-16 | 中国地质大学(武汉) | The face of convolutional neural networks based on small sample is taken pictures Work attendance method and system |
US10789796B1 (en) * | 2019-06-24 | 2020-09-29 | Sufian Munir Inc | Priority-based, facial recognition-assisted attendance determination and validation system |
Non-Patent Citations (1)
Title |
---|
Precisely Generating Fake Faces! Amazon's New GAN Model Gives You All-Round, Blind-Spot-Free Face Beautification (in Chinese); Amusi (CVer); 《https://blog.csdn.net/amusi1994/article/details/112598361》; 2021-01-13; pp. 1-12 *
Also Published As
Publication number | Publication date |
---|---|
CN113159006A (en) | 2021-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113159006B (en) | Attendance checking method and system based on face recognition, electronic equipment and storage medium | |
US20210158023A1 (en) | System and Method for Generating Image Landmarks | |
CN110135249B (en) | Human behavior identification method based on time attention mechanism and LSTM (least Square TM) | |
CN109657533A (en) | Pedestrian recognition methods and Related product again | |
JP4951498B2 (en) | Face image recognition device, face image recognition method, face image recognition program, and recording medium recording the program | |
WO2011112368A2 (en) | Robust object recognition by dynamic modeling in augmented reality | |
Shen et al. | Lidargait: Benchmarking 3d gait recognition with point clouds | |
CN112911393B (en) | Method, device, terminal and storage medium for identifying part | |
CN112257696B (en) | Sight estimation method and computing equipment | |
CN113822254B (en) | Model training method and related device | |
CN108537214B (en) | Automatic construction method of indoor semantic map | |
CN112528902B (en) | Video monitoring dynamic face recognition method and device based on 3D face model | |
Tu et al. | Consistent 3d hand reconstruction in video via self-supervised learning | |
CN110222572A (en) | Tracking, device, electronic equipment and storage medium | |
CN111680550B (en) | Emotion information identification method and device, storage medium and computer equipment | |
CN111160307A (en) | Face recognition method and face recognition card punching system | |
CN109858433B (en) | Method and device for identifying two-dimensional face picture based on three-dimensional face model | |
CN111008935A (en) | Face image enhancement method, device, system and storage medium | |
CN111209811A (en) | Method and system for detecting eyeball attention position in real time | |
Jiang et al. | Application of a fast RCNN based on upper and lower layers in face recognition | |
Neverova | Deep learning for human motion analysis | |
CN115601710A (en) | Examination room abnormal behavior monitoring method and system based on self-attention network architecture | |
CN111815768A (en) | Three-dimensional face reconstruction method and device | |
CN113689527B (en) | Training method of face conversion model and face image conversion method | |
JP2022095332A (en) | Learning model generation method, computer program and information processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||