CN113159006B - Attendance checking method and system based on face recognition, electronic equipment and storage medium - Google Patents

Attendance checking method and system based on face recognition, electronic equipment and storage medium

Info

Publication number
CN113159006B
CN113159006B (application CN202110699382.1A)
Authority
CN
China
Prior art keywords
face
deformable
texture
trained
face picture
Prior art date
Legal status
Active
Application number
CN202110699382.1A
Other languages
Chinese (zh)
Other versions
CN113159006A (en)
Inventor
王东
王月平
Current Assignee
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd
Priority to CN202110699382.1A
Publication of CN113159006A
Application granted
Publication of CN113159006B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/109Time management, e.g. calendars, reminders, meetings or time accounting
    • G06Q10/1091Recording time for administrative or management purposes
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Human Resources & Organizations (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Strategic Management (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Educational Administration (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The method collects frontal face pictures of persons to be checked as a sample set, generates face pictures at arbitrary angles from the frontal face pictures through a pre-trained 3D morphable generative adversarial network to expand the sample set, trains a face recognition model with the expanded sample set, recognizes face pictures of on-site attendance personnel at any angle with the trained face recognition model, and generates an attendance record when recognition succeeds.

Description

Attendance checking method and system based on face recognition, electronic equipment and storage medium
Technical Field
The application relates to the technical field of machine learning, and in particular to an attendance checking method, system, electronic device and storage medium based on face recognition.
Background
With the development of face recognition technology, it is applied more and more in people's daily lives, for example in access control and attendance systems.
At present, large-angle pose face samples are scarce, so the recognition accuracy of face recognition models is low, and users often have to deliberately cooperate with the camera angle so that a frontal face image can be collected for recognition. Face recognition is therefore inefficient under real, complex conditions, and brief stops or even queuing congestion easily occur at attendance check-in.
Disclosure of Invention
The embodiments of the present application provide an attendance checking method, system, electronic device and storage medium based on face recognition, aiming to improve face recognition efficiency and attendance efficiency, save transit time, and prevent attendance personnel from lingering or queuing.
In a first aspect, an embodiment of the present application provides an attendance checking method based on face recognition, where the method includes:
collecting frontal face pictures of persons to be checked as a sample set;
generating face pictures at arbitrary angles from the frontal face pictures through a pre-trained 3D morphable generative adversarial network (3D Morphable Model GAN, 3DMM-GAN for short) to expand the sample set, where the 3DMM-GAN is formed by combining a 3D morphable model with a generative adversarial network;
training a face recognition model with the expanded sample set;
recognizing face pictures of on-site attendance personnel at any angle with the trained face recognition model, and generating an attendance record when recognition succeeds;
wherein the 3DMM-GAN is obtained by:

inputting random texture parameters $z_T$ and 3D morphable model shape parameters $z_S$ into a texture generator $G_T$ to generate a texture map corresponding to the 3D morphable model shape;

inputting the random texture parameters $z_T$, the 3D morphable model shape parameters $z_S$, random background parameters $z_B$, a pose label $\theta$ and expression parameters $z_E$ into a background generator $G_B$ to generate a background image conforming to the given parameters;

combining the texture map with the pose label $\theta$, the expression parameters $z_E$ and the 3D morphable model shape parameters $z_S$ through the 3D morphable model for 3D reconstruction, and mapping the texture into rendered image space through a texture-mapping method $M$ and a differentiable rendering function $R$ to generate a rendered face texture map;

compositing the rendered face texture map with the background image, feeding the composite image into a discriminator $D$, and optimizing the loss function $\mathcal{L}_D$ of the discriminator $D$ together with the loss functions $\mathcal{L}_{G_T}$ and $\mathcal{L}_{G_B}$ of the texture generator $G_T$ and the background generator $G_B$, to obtain the pre-trained 3DMM-GAN.
In some of these embodiments, the face recognition model is an end-to-end deep convolutional neural network model.
In some embodiments, the 3DMM-GAN is trained by:

alternately minimizing the loss of the discriminator and the loss of the generator, where the target loss functions are:

$$\mathcal{L}_D = \mathbb{E}_{(x,\theta)\sim p_{\text{data}}}\big[-\log D(x,\theta)\big] + \mathbb{E}_{z\sim\mathcal{N}(0,I)^N}\big[-\log\big(1 - D(G(z,\theta),\theta)\big)\big] + \lambda\,\mathrm{GP}$$

$$\mathcal{L}_G = \mathbb{E}_{z\sim\mathcal{N}(0,I)^N}\big[-\log D(G(z,\theta),\theta)\big]$$

where $\mathcal{L}_D$ is the loss function of the discriminator and $\mathcal{L}_G$ is the loss function of the generator; the first term of $\mathcal{L}_D$ is the cross-entropy loss value of the real samples, the second term is the cross-entropy loss value of the generated samples, and $\lambda\,\mathrm{GP}$ is an additional gradient penalty term; $z$ is computed as the concatenation $z = (z_T, z_B, z_S, z_E)$, that is, a vector containing the random texture parameters $z_T$, the random background parameters $z_B$, the 3D morphable model shape parameters $z_S$ and the expression parameters $z_E$; $(x, \theta)$ is a real image $x$ and its associated pose label $\theta$ randomly selected from the training-data distribution $p_{\text{data}}$; $G = (G_T, G_B)$ is the parameterized generator network, with $G_T$ representing the parameterized texture generator and $G_B$ representing the background generator network; $D$ is the parameterized discriminator network; and $N$ denotes the combined dimension of the texture, background, shape and expression parameter vectors in $z$.
In some embodiments, generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set includes:

generating a face picture at any angle according to

$$I_{\text{gen}} = G\big(z_T, z_B, z_S, z_E, \theta\big)$$

where $I_{\text{gen}}$ is the generated image, $z_T$ is a $d_T$-dimensional random texture parameter, $z_B$ is a $d_B$-dimensional random background parameter, $z_S$ is a $d_S$-dimensional 3D morphable model shape parameter, $z_E$ is a $d_E$-dimensional expression parameter, $\theta$ is the pose label, $G_T$ is the trained texture generator, $G_B$ is the trained background generator network, and $G = (G_T, G_B)$ is the generator network; and adding the generated face pictures at arbitrary angles to the sample set to obtain the expanded sample set.
In some of these embodiments, where the pre-trained 3DMM-GAN generates face pictures swung 30 degrees left and right, 60 degrees left and right, and 90 degrees left and right, the method further comprises:
extracting face feature vectors from the frontal face picture and from the face pictures swung left and right by 30, 60 and 90 degrees through the trained face recognition model;
fusing the extracted face feature vectors;
and taking the fused features as registration features to establish a face feature database.
In some embodiments, recognizing the face picture of an on-site attendance person at any angle with the trained face recognition model and generating an attendance record on success includes: extracting a face feature vector from any face picture captured by a camera at the attendance site through the trained face recognition model, and computing the cosine similarity between the extracted face feature vector and each face feature vector in the face feature database one by one;
and if the cosine similarity exceeds a preset threshold, recognition succeeds and an attendance record is generated.
In a second aspect, an embodiment of the present application provides an attendance system based on face recognition, where the system includes:
the acquisition module is used for collecting frontal face pictures of persons to be checked as a sample set;
the expansion module is used for generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set;
the training module is used for training a face recognition model with the expanded sample set;
and the recognition module is used for recognizing face pictures of attendance personnel at any angle through the trained face recognition model and generating an attendance record when recognition succeeds.
In a third aspect, an embodiment of the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program and the processor being configured to run the computer program to perform the attendance checking method based on face recognition described above.
In a fourth aspect, the present application provides a storage medium storing a computer program configured, when run, to perform the attendance checking method based on face recognition described above.
Compared with the related art, in the attendance checking method based on face recognition provided by the embodiments of the application, frontal face pictures of persons to be checked are collected as a sample set. Since data sets containing different angular pose changes are hard to obtain in reality, and even captured large-angle face data is difficult to clean and sort by class (by person), the embodiments expand the sample set by generating face pictures at arbitrary angles from the frontal pictures with the pre-trained 3DMM-GAN, which alleviates the severe shortage of training data. The face recognition model is then trained with the expanded sample set. Whereas prior-art sample sets contain only frontal pictures, the expanded sample set adds pictures at other angles; training on this more comprehensive data yields a better model, improving both the accuracy of face recognition and the robustness of the face recognition model. Finally, the trained model recognizes face pictures of on-site attendance personnel at any angle, and an attendance record is generated when recognition succeeds. In prior-art face recognition, pictures with large-angle poses cannot be recognized: when captured pictures include large-angle faces they are filtered out by a face-angle estimation module, and a user can be recognized only after stopping, turning around or turning the head so that the capture device can take a frontal photo. Recognition is therefore inefficient; the slow pass-through and long transit time cause lingering or even queuing congestion at the attendance site, high attendance time cost and low attendance efficiency. In the embodiments of the application, by contrast, attendance personnel need not stop, turn around or turn the head to be photographed frontally according to the capture device's shooting angle; the trained face recognition model recognizes their face pictures at any angle on site. This improves face recognition efficiency, and hence attendance efficiency, and avoids lingering or queuing congestion during passage. The application thus improves large-angle face recognition and the pass-through experience, saves attendance time and raises attendance efficiency; in addition, because the model is trained on the more comprehensive expanded sample set, the trained model performs better and recognition accuracy improves.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a first flowchart of an attendance checking method based on face recognition according to an embodiment of the present application;
FIG. 2 is a flowchart of the manner of obtaining the 3DMM-GAN according to an embodiment of the present application;
FIG. 3 is a flowchart of the steps involved in generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set, according to an embodiment of the present application;
fig. 4 is a second flowchart of an attendance checking method based on face recognition according to an embodiment of the present application;
fig. 5 is a flowchart of the steps involved in recognizing a face picture of an attendance person at any angle with the trained face recognition model and generating an attendance record when recognition succeeds, according to an embodiment of the present application;
fig. 6 is a block diagram of a structure of an attendance system based on face recognition according to an embodiment of the application;
fig. 7 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
The application provides an attendance checking method based on face recognition. Fig. 1 is a first flowchart of the attendance checking method based on face recognition according to an embodiment of the application; as shown in fig. 1, in this embodiment the method includes the following steps:
Step S101, collecting frontal face pictures of persons to be checked as a sample set. Those skilled in the art may acquire the frontal face pictures of the persons to be checked through a capture device, which may be a camera or other equipment and is not specifically limited here. It is worth noting that in a frontal face picture both the left-right yaw amplitude and the up-down pitch amplitude of the face are 0.
Step S102, generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set. For example, from a frontal face picture the pre-trained 3DMM-GAN may generate face pictures swung 5, 10 and 15 degrees to the left and to the right; in some other embodiments it may generate only a picture swung 15 degrees to the left, set specifically according to user requirements and not specifically limited here. Because data sets containing different angular pose changes are hard to obtain in reality, face data with large-angle pose changes is scarce; moreover, even when large-angle face data can be captured it is difficult to clean and sort by class (by person), so existing algorithm models are not robust enough for large-angle face recognition, model-based cleaning and sorting of the data is unreliable, and manual processing is too costly. For ease of understanding, take the example above, in which step S102 generates pictures swung 5, 10 and 15 degrees to the left and right from the frontal picture: compared with a prior-art sample set containing only the frontal picture, the expanded sample set adds these six swung pictures, making up for the severe shortage of training data and helping to strengthen the robustness of the face recognition model.
Step S103, training a face recognition model with the expanded sample set. Compared with prior-art sample sets, the expanded sample set adds face pictures at other angles, making the sample set more comprehensive and thereby improving the accuracy of the face recognition model.
Step S104, recognizing face pictures of attendance personnel at any angle with the trained face recognition model, and generating an attendance record when recognition succeeds. In the prior art, face pictures with large-angle poses cannot be recognized: when the captured pictures include large-angle faces they are filtered out by a face-angle estimation module, and recognition succeeds only if the user stops, turns around or turns the head in cooperation with the capture device so that a frontal photo can be taken. Face recognition is therefore inefficient, the recognition pass-through is slow and the transit time overlong, causing lingering or even queuing congestion at the attendance site and high attendance time cost. Here the trained model recognizes attendance personnel's face pictures at any angle without such cooperation, which improves recognition efficiency and the pass-through experience of large-angle recognition, reduces lingering and queuing congestion during passage, saves users attendance time, and addresses the low accuracy of face recognition in access-control and attendance systems; an attendance record is generated when recognition succeeds, facilitating subsequent queries of the record.
Through the above steps S101 to S104, the technical solution of this embodiment collects frontal face pictures of persons to be checked as a sample set. Considering that data sets with different angular pose changes are hard to obtain in reality, and that even captured large-angle face data is difficult to clean and sort by class (by person), the embodiment generates face pictures at arbitrary angles from the frontal pictures through the pre-trained 3DMM-GAN to expand the sample set, which alleviates the severe shortage of training data and strengthens the robustness of the face recognition model. Compared with prior-art sample sets that contain only frontal pictures, the expanded sample set adds a large number of face pictures at other angles, so the data is more comprehensive, the face recognition model trained with it performs better, and the accuracy of face recognition improves. The application then recognizes face pictures of on-site attendance personnel at any angle with the trained model and generates an attendance record when recognition succeeds. In prior-art face recognition, pictures with large-angle poses (non-frontal pictures) cannot be recognized: they must be filtered by a face-angle estimation module, and a user can be recognized smoothly only after stopping, turning around or turning the head so that the capture device can take a frontal photo, which makes recognition inefficient; the slow pass-through and overlong transit time cause lingering or even queuing congestion at the attendance site and high attendance time cost. Here, by contrast, the trained face recognition model recognizes the on-site attendance personnel's face pictures at any angle without their having to stop, turn around or turn the head for a frontal photo. This improves face recognition efficiency, avoids lingering and queuing congestion at the attendance site, raises the pass rate of face recognition and the pass-through experience of large-angle recognition, saves users time, and improves attendance efficiency; the attendance record generated on success also facilitates later queries.
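Viewed as a pipeline, steps S101 to S104 compose as in the sketch below. The stage functions are injected callables, and every name here is a hypothetical placeholder: the embodiment defines modules and steps, not a programming interface.

```python
from typing import Callable, Iterable, List, Optional

def run_attendance(collect: Callable[[], List],
                   expand: Callable[[List], List],
                   train: Callable[[List], Callable],
                   recognize: Callable[[Callable, object], Optional[str]],
                   record: Callable[[str], None],
                   stream: Iterable) -> None:
    """Steps S101-S104 as one pipeline over injected stage functions."""
    samples = collect()                  # S101: frontal face pictures
    samples = samples + expand(samples)  # S102: 3DMM-GAN angle expansion
    model = train(samples)               # S103: train on the expanded set
    for frame in stream:                 # S104: on-site recognition
        person = recognize(model, frame)
        if person is not None:
            record(person)               # attendance record on success
```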
In some embodiments, the pre-trained 3DMM-GAN is generated by combining a 3D morphable model (3DMM) with a generative adversarial network (GAN). Specifically, fig. 2 is a flowchart of the manner of obtaining the 3DMM-GAN according to an embodiment of the present application; as shown in fig. 2, in an alternative embodiment the 3DMM-GAN is obtained through the following steps:

Step S201, inputting random texture parameters $z_T$ and 3D morphable model shape parameters $z_S$ into the texture generator $G_T$ to generate a texture map corresponding to the 3D morphable model shape. Texture information involves neither pose nor expression, so the pose label $\theta$ and the expression parameters $z_E$ are not inputs to the texture generator. As those skilled in the art will readily understand, the pose label is also called the conditional pose parameter or pose parameter, and the 3D morphable model shape parameters are also simply called 3DMM shape parameters.

Step S202, inputting the random texture parameters $z_T$, the 3D morphable model shape parameters $z_S$, random background parameters $z_B$, the pose label $\theta$ and the expression parameters $z_E$ into the background generator $G_B$ to generate a background image conforming to the given parameters. The background map includes features that the 3D morphable model does not reconstruct from the texture information, such as hair edges, clothing and glasses.

Step S203, combining the texture map with the pose label $\theta$, the expression parameters $z_E$ and the 3D morphable model shape parameters $z_S$ through the 3D morphable model for 3D reconstruction, and mapping the texture into rendered image space through the texture-mapping method $M$ and the differentiable rendering function $R$ to generate a rendered face texture map.

Step S204, compositing the rendered face texture map with the background image, feeding the composite image into the discriminator $D$, and optimizing the loss function $\mathcal{L}_D$ of the discriminator $D$ together with the loss functions of the texture generator $G_T$ and the background generator $G_B$, denoted $\mathcal{L}_{G_T}$ and $\mathcal{L}_{G_B}$, to obtain the pre-trained 3DMM-GAN.

Through the above steps S201 to S204 the pre-trained 3DMM-GAN is obtained, and with it face pictures at arbitrary angles can be generated from a frontal face picture to expand the sample set.
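As a concrete illustration of steps S201 to S204, the sketch below wires a toy texture generator, background generator, rendering stub and compositing step together in PyTorch. The module architectures, latent dimensions and the `render` stub are assumptions chosen for exposition; the embodiment fixes the pipeline's inputs and outputs, not an implementation.

```python
import torch
import torch.nn as nn

# Illustrative latent dimensions; the patent leaves N and the per-part sizes open.
D_TEX, D_BG, D_SHAPE, D_EXPR, D_POSE, IMG = 128, 64, 80, 29, 3, 64

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_out))

class TextureGenerator(nn.Module):
    """G_T: (z_T, z_S) -> UV texture map for the 3DMM shape (step S201).
    Pose and expression are deliberately not inputs."""
    def __init__(self):
        super().__init__()
        self.net = mlp(D_TEX + D_SHAPE, 3 * IMG * IMG)

    def forward(self, z_t, z_s):
        return self.net(torch.cat([z_t, z_s], dim=1)).view(-1, 3, IMG, IMG)

class BackgroundGenerator(nn.Module):
    """G_B: (z_T, z_S, z_B, theta, z_E) -> background image (step S202)."""
    def __init__(self):
        super().__init__()
        self.net = mlp(D_TEX + D_SHAPE + D_BG + D_POSE + D_EXPR, 3 * IMG * IMG)

    def forward(self, z_t, z_s, z_b, theta, z_e):
        x = torch.cat([z_t, z_s, z_b, theta, z_e], dim=1)
        return self.net(x).view(-1, 3, IMG, IMG)

def render(texture, z_s, z_e, theta):
    """Stand-in for the 3DMM reconstruction plus differentiable rendering
    (M and R) of step S203; a real implementation would rasterize the
    posed 3DMM mesh. Returns a rendered face and a foreground mask K."""
    face = texture                             # placeholder "rendering"
    mask = torch.ones_like(texture[:, :1])     # placeholder mask K
    return face, mask

def synthesize(g_t, g_b, z_t, z_b, z_s, z_e, theta):
    """Step S204 compositing: I = K * face + (1 - K) * background,
    which is then fed to the discriminator during training."""
    texture = g_t(z_t, z_s)
    background = g_b(z_t, z_s, z_b, theta, z_e)
    face, mask = render(texture, z_s, z_e, theta)
    return mask * face + (1 - mask) * background
```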
In some embodiments, the face recognition model is an end-to-end deep convolutional neural network model, where "end-to-end" means the network maps directly from its input to its output. In some other embodiments the face recognition model may adopt other neural network architectures, set specifically according to user requirements and not specifically limited here.
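As an illustration only, a minimal end-to-end convolutional embedding network of the kind this embodiment describes might look as follows; the layer sizes and the unit-normalized 512-dimensional output are assumptions, not an architecture prescribed by the patent.

```python
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbeddingNet(nn.Module):
    """Maps a face image directly (end to end) to a face feature vector."""
    def __init__(self, emb_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(128, emb_dim)

    def forward(self, x):
        z = self.fc(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # unit norm, convenient for cosine similarity
```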
In some embodiments, the 3DMM-GAN is trained as follows:

alternately minimizing the loss of the discriminator and the loss of the generator, where the target loss functions are:

$$\mathcal{L}_D = \mathbb{E}_{(x,\theta)\sim p_{\text{data}}}\big[-\log D(x,\theta)\big] + \mathbb{E}_{z\sim\mathcal{N}(0,I)^N}\big[-\log\big(1 - D(G(z,\theta),\theta)\big)\big] + \lambda\,\mathrm{GP}$$

$$\mathcal{L}_G = \mathbb{E}_{z\sim\mathcal{N}(0,I)^N}\big[-\log D(G(z,\theta),\theta)\big]$$

where $\mathcal{L}_D$ is the loss function of the discriminator and $\mathcal{L}_G$ is the loss function of the generator; the first term of $\mathcal{L}_D$ is the cross-entropy loss value of the real samples, the second term is the cross-entropy loss value of the generated samples, and $\lambda\,\mathrm{GP}$ is an additional gradient penalty term; $z$ is computed by concatenating the random texture parameters $z_T$, the random background parameters $z_B$, the 3D morphable model shape parameters $z_S$ and the expression parameters $z_E$, i.e. $z = (z_T, z_B, z_S, z_E)$; $(x, \theta)$ is a real image $x$ and its associated pose label $\theta$ randomly selected from the training-data distribution $p_{\text{data}}$, where the pose label, also called the conditional pose parameter, controls the left-right yaw amplitude and up-down pitch amplitude of the face; $G = (G_T, G_B)$ is the parameterized generator network, with $G_T$ representing the parameterized texture generator and $G_B$ representing the background generator network; $D$ is the parameterized discriminator network; and $N$ denotes the combined dimension of the texture, background, shape and expression parameter vectors in $z$.

As understood by those skilled in the art, $z_T$ is a $d_T$-dimensional random texture parameter following a standard normal (Gaussian) distribution; $z_B$ is a $d_B$-dimensional random background parameter, also standard normal; $z_S$ is a $d_S$-dimensional 3D morphable model shape parameter, standard normal; and $z_E$ is a $d_E$-dimensional expression parameter, likewise standard normal, with $z_S$ and $z_E$ used to control the 3D morphable model.

It should be noted that alternately minimizing the generator and discriminator losses is a common adversarial training regime; repeatedly alternating the optimization of generator and discriminator drives the two toward a joint optimum and trains a better network model, which benefits the subsequent generation of large-angle face samples at random arbitrary angles with the trained 3DMM-GAN.
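A minimal sketch of one alternation of this regime, assuming a non-saturating cross-entropy objective with a WGAN-GP-style gradient penalty (the exact penalty form is not recoverable from the text), is given below; `gen` and `disc` are the pose-conditioned generator and discriminator, and `N_Z` stands in for the combined latent dimension $N$.

```python
import torch
import torch.nn.functional as F

N_Z = 301      # illustrative total dimension of (z_T, z_B, z_S, z_E)
LAMBDA = 10.0  # weight of the gradient penalty term

def gradient_penalty(disc, real, fake, theta):
    """Additional gradient penalty term of L_D, computed on samples
    interpolated between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mid = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(disc(mid, theta).sum(), mid, create_graph=True)
    return ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()

def train_step(gen, disc, opt_g, opt_d, real, theta):
    """One alternation: minimize the discriminator loss, then the generator loss."""
    n = real.size(0)
    ones = torch.ones(n, 1, device=real.device)
    zeros = torch.zeros(n, 1, device=real.device)
    z = torch.randn(n, N_Z, device=real.device)   # z ~ N(0, I)^N

    # Discriminator: cross entropy on real + cross entropy on fake + penalty.
    fake = gen(z, theta).detach()
    loss_d = (F.binary_cross_entropy_with_logits(disc(real, theta), ones)
              + F.binary_cross_entropy_with_logits(disc(fake, theta), zeros)
              + LAMBDA * gradient_penalty(disc, real, fake, theta))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: non-saturating loss, pushing fakes toward the "real" label.
    fake = gen(z, theta)
    loss_g = F.binary_cross_entropy_with_logits(disc(fake, theta), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```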
Fig. 3 is a flowchart of the steps involved in generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set. As shown in fig. 3, in some embodiments this includes the following steps:

Step S301, generating a face picture at any angle according to

$$I_{\text{gen}} = G\big(z_T, z_B, z_S, z_E, \theta\big)$$

where $I_{\text{gen}}$ is the generated image; $z_T$ is a $d_T$-dimensional random texture parameter following a standard normal distribution; $z_B$ is a $d_B$-dimensional random background parameter, also standard normal; $z_S$ is a $d_S$-dimensional 3D morphable model shape parameter, standard normal; $z_E$ is a $d_E$-dimensional expression parameter, likewise standard normal, with $z_S$ and $z_E$ used to control the 3D morphable model; $\theta$ is the pose label, i.e. $\theta$ encodes the left-right yaw amplitude and up-down pitch amplitude of the face; $G_T$ is the trained texture generator and $G_B$ the trained background generator network. Of course, in some other embodiments the picture may equivalently be generated according to

$$I_{\text{gen}} = K \odot y + (\mathbf{1} - K) \odot G_B(z_T, z_B, z_S, z_E, \theta)$$

where, as known to those skilled in the art, $I_{\text{gen}}$ is the generated image, $G_B$ is the background generator, $G_T$ is the texture generator, $K$ is a binary mask used to combine the background picture and the texture, $\mathbf{1}$ denotes an all-ones vector of the same shape as the picture, and $\odot$ denotes element-wise multiplication, so that $K \odot y$ multiplies the vectors $K$ and $y$ element by element and, in the same way, $(\mathbf{1} - K) \odot G_B(\cdot)$ multiplies the vectors $(\mathbf{1} - K)$ and the generated background element by element. Here $M$ is an inverse texture-mapping function that maps the interpolated values in the generated texture map to the appropriate locations in image space, and $y$ denotes the rendering of the texture coordinates in image space. Since the rendering of $y$, together with $K$, is performed by the differentiable rendering function $R$,

$$(y, K) = R(V, F, T)$$

where $V$ is a random-instance shape vertex vector of the 3D morphable model, $F$ is the index list of the 3D morphable model's triangle vertices, and $T$ is the vector of texture coordinates, each triangle having three sets of two-dimensional texture vertices.

Step S302, adding the generated face pictures at arbitrary angles to the sample set to obtain the expanded sample set. Preferably, the face pictures in the expanded sample set include the frontal face picture and the arbitrary-angle face pictures generated from it by the pre-trained 3DMM-GAN, where the specific angles of the pictures can be set according to user requirements and are not specifically limited.

Through the above steps S301 to S302, this embodiment generates face pictures at any angle according to $I_{\text{gen}} = G(z_T, z_B, z_S, z_E, \theta)$ and adds them to the sample set to obtain the expanded sample set. Compared with prior-art sample sets, the expanded sample set contains face pictures at additional angles, so the sample set is richer, the trained face recognition model performs better, and the accuracy of face recognition improves.
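For illustration, expanding the sample set by sweeping the pose label while holding the other parameters fixed might be sketched as follows; the angle list, the (yaw, pitch, roll) encoding of $\theta$ and the generator's calling convention are assumptions, and in practice $z_T$, $z_S$ and $z_E$ would be chosen to match the registered person's frontal picture.

```python
import torch

@torch.no_grad()
def expand_sample_set(generator, z_t, z_b, z_s, z_e, yaw_degrees=(5, 10, 15)):
    """Steps S301/S302: synthesize one picture per left/right swing angle
    with the trained G = (G_T, G_B) and collect them for the sample set."""
    generated = []
    for deg in yaw_degrees:
        for sign in (-1.0, 1.0):                            # swing left, then right
            theta = torch.tensor([[sign * deg, 0.0, 0.0]])  # assumed (yaw, pitch, roll)
            generated.append(generator(z_t, z_b, z_s, z_e, theta))
    return generated  # appended to the frontal pictures to form the expanded set
```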
Fig. 4 is a second flowchart of the attendance checking method based on face recognition according to an embodiment of the present application. As shown in fig. 4, where the pre-trained 3DMM-GAN generates face pictures swung 30 degrees left and right, 60 degrees left and right, and 90 degrees left and right, the method further includes the following steps:
Step S401, extracting face feature vectors from the frontal face picture and from the face pictures swung left and right by 30, 60 and 90 degrees through the trained face recognition model. In some other embodiments the swing angle may instead be 15 degrees, 45 degrees or another angle, which is not specifically limited here. It should be noted that in the field of deep learning a face feature vector is a floating-point numeric vector of a certain dimension computed by a convolutional network, generally described simply as a feature vector, that is, a numeric description of the face.
Step S402, fusing the extracted face feature vectors. In this embodiment, the feature values of corresponding dimensions across the face feature vectors of the 7 differently posed face pictures (the frontal picture and the pictures swung left and right by 30, 60 and 90 degrees) are averaged to obtain a fused feature vector, as shown in the sketch after this passage.
And step S403, using the fused features as registration features to establish a human face feature database.
Through the above steps S401 to S403, this embodiment extracts face feature vectors from the frontal face picture and from the pictures swung left and right by 30, 60 and 90 degrees with the trained face recognition model, fuses the extracted face feature vectors, and uses the fused features as registration features to establish a face feature database. It should be noted that the face feature database serves as the base (gallery) database referred to below.
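A minimal sketch of the fusion in step S402, assuming NumPy feature vectors; the final re-normalization is an added convention for stable cosine comparison, not a step stated in the embodiment.

```python
import numpy as np

def build_registration_feature(per_pose_features):
    """Average the feature values of corresponding dimensions across the
    7 per-pose vectors (frontal, +/-30, +/-60, +/-90 degrees), then
    re-normalize the fused vector to unit length."""
    fused = np.mean(np.stack(per_pose_features, axis=0), axis=0)
    return fused / np.linalg.norm(fused)
```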
Fig. 5 is a flowchart of the steps involved in recognizing a face picture of an attendance person at any angle with the trained face recognition model and generating an attendance record when recognition succeeds. As shown in fig. 5, the process includes:
Step S501, extracting a face feature vector from any face picture captured by the camera at the attendance site through the trained face recognition model, and computing the cosine similarity between the extracted face feature vector and each face feature vector in the base database (the base database being the face feature database) one by one;
Step S502, if the cosine similarity exceeds a preset threshold, recognition succeeds and an attendance record is generated. For example, if in this embodiment the cosine similarity is 0.7 and the preset threshold is 0.6, the similarity exceeds the threshold, recognition succeeds and an attendance record is generated; in some other embodiments, if the cosine similarity does not exceed the preset threshold, recognition fails and a failure record, or no record, is generated.
Through steps S501 to S502 the face recognition process is completed. Specifically, a face feature vector is extracted from any face picture captured by the on-site camera with the trained face recognition model, and its cosine similarity to each face feature vector in the base database (that is, the face feature database) is computed one by one; the cosine distance serves as the similarity between the captured face and each registered face, with the similarity ranging, for example, from 0.0 to 1.0. The registered face with the maximum similarity is then found; if the corresponding score exceeds the preset threshold, the captured face and that registered face are deemed to belong to the same person, recognition succeeds, attendance registration succeeds and an attendance record is generated. Otherwise the captured face is judged not to be in the face feature database, face recognition fails and attendance registration fails. The preset threshold is set according to user requirements and is not specifically limited here.
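The matching of steps S501 to S502 might be sketched as follows, using the 0.6 threshold from the example above; the dictionary-based database layout and the function names are assumptions.

```python
import numpy as np

def match_face(probe, database, threshold=0.6):
    """Compare one captured feature vector against every registered feature
    (step S501) and succeed if the best cosine similarity exceeds the
    preset threshold (step S502). `database` maps person id -> feature."""
    best_id, best_sim = None, -1.0
    for person_id, reg in database.items():
        sim = float(np.dot(probe, reg)
                    / (np.linalg.norm(probe) * np.linalg.norm(reg) + 1e-12))
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    if best_sim > threshold:
        return best_id, best_sim   # recognition succeeds: write attendance record
    return None, best_sim          # not in the database: registration fails
```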
The application also provides an attendance system based on face recognition; the system implements the above embodiments, and what has already been described is not repeated. As used below, the terms "module", "unit", "subunit" and the like may denote a combination of software and/or hardware realizing a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 6 is a block diagram of a structure of an attendance system based on face recognition in an embodiment of the present application, and as shown in fig. 6, the system includes:
the acquisition module 61 is used for collecting frontal face pictures of persons to be checked as a sample set;
the expansion module 62 is used for generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set;
the training module 63 is used for training a face recognition model with the expanded sample set;
and the recognition module 64 is used for recognizing face pictures of attendance personnel at any angle through the trained face recognition model and generating an attendance record when recognition succeeds.
In this system, the acquisition module 61 collects frontal face pictures of persons to be checked as a sample set. Since data sets with different angular pose changes are hard to obtain in reality, and even captured large-angle face data is difficult to clean and sort by class (by person), the expansion module 62 generates face pictures at arbitrary angles from the frontal pictures through the pre-trained 3DMM-GAN to expand the sample set, alleviating the severe shortage of training data. Whereas prior-art sample sets contain only frontal pictures, the training module 63 trains the face recognition model with the expanded sample set; it is easy to see that the expanded set contains the frontal pictures as well as pictures at other angles, so its data is richer, the training effect is better, and recognition is more accurate. In conventional schemes, pictures with large-angle poses (non-frontal pictures) cannot be recognized: when captured pictures include large-angle faces they must be filtered by a face-angle estimation module, and the user can be recognized smoothly only after stopping, turning around or turning the head so that a frontal photo can be captured, making recognition inefficient, the pass-through slow, and causing lingering or even queuing congestion and high time cost at the attendance site. In the present solution, the recognition module 64 recognizes on-site attendance personnel's face pictures at any angle through the trained face recognition model and generates an attendance record when recognition succeeds. Since no stopping, turning or head-turning in cooperation with the capture device is required, face recognition efficiency improves, lingering and queuing congestion during passage are avoided, the recognition pass rate and the pass-through experience of large-angle recognition improve, users save time, attendance efficiency improves, and the records generated on success facilitate later queries.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The application also provides an electronic device comprising a memory and a processor, the memory storing a computer program and the processor being configured to run the computer program to perform the attendance checking method based on face recognition described above.
Optionally, the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, collecting frontal face pictures of persons to be checked as a sample set;
S2, generating face pictures at arbitrary angles from the frontal face pictures through the pre-trained 3DMM-GAN to expand the sample set;
S3, training a face recognition model with the expanded sample set;
and S4, recognizing face pictures of attendance personnel at any angle with the trained face recognition model, and generating an attendance record when recognition succeeds.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the attendance checking method based on face recognition in the above embodiments, an embodiment of the present application may provide a storage medium for implementation. The storage medium stores a computer program; when executed by a processor, the computer program implements any of the face-recognition-based attendance checking methods in the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium, which stores an operating system and a computer program, and an internal memory, which provides an environment for running that operating system and computer program. The network interface communicates with external terminals over a network connection. The computer program, when executed by the processor, implements the attendance checking method based on face recognition. The display screen may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.
In one embodiment, fig. 7 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present application. As shown in fig. 7, an electronic device is provided, which may be a server. The electronic device comprises a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus; the non-volatile memory stores an operating system, a computer program, and a database. The processor provides computing and control capabilities; the network interface communicates with external terminals over a network connection; the internal memory provides an environment for running the operating system and the computer program; the computer program, when executed by the processor, implements the attendance checking method based on face recognition; and the database stores data.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is a block diagram of only a portion of the architecture relevant to the present application and does not limit the electronic devices to which the present application may be applied; a particular electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently.
It should be understood by those skilled in the art that the features of the above embodiments can be combined arbitrarily. For brevity, not every possible combination is described, but any combination of features that involves no contradiction should be considered within the scope of the present disclosure.
The above embodiments express only several implementations of the present application, and while their description is comparatively specific and detailed, they should not be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. An attendance checking method based on face recognition is characterized by comprising the following steps:
collecting frontal face pictures of the personnel to be checked as a sample set;
generating face pictures at any angle for the frontal face pictures through a pre-trained 3D deformable generative adversarial network to expand the sample set, wherein the 3D deformable generative adversarial network is formed by combining a 3D deformable model with a generative adversarial network;
training a face recognition model with the expanded sample set;
recognizing face pictures of attendance personnel at any angle with the trained face recognition model, and generating an attendance record when recognition succeeds;
the 3D deformable generative adversarial network is obtained by:
inputting random texture parameters $z_{tex}$ and 3D deformable model shape parameters $\alpha$ into a texture generator $G_{tex}$ to generate a texture map corresponding to the shape of the 3D deformable model;
inputting the random texture parameters $z_{tex}$, the 3D deformable model shape parameters $\alpha$, random background parameters $z_{bg}$, a pose label $p$, and expression parameters $\varepsilon$ into a background generator $G_{bg}$ to generate a background image conforming to the given parameters;
combining the texture map with the pose label $p$, the expression parameters $\varepsilon$, and the 3D deformable model shape parameters $\alpha$ through the 3D deformable model for 3D reconstruction, and mapping the texture into the rendered image space through a texture mapping method M and a differentiable rendering function R to generate a rendered face texture map;
synthesizing the rendered face texture map with the background image, sending the synthesized image into a discriminator D, and obtaining the pre-trained 3D deformable generative adversarial network by optimizing the loss function of the discriminator D and the loss function of the texture generator and background generator, the latter being expressed as
$$\mathcal{L}_G = \mathbb{E}_{z \sim \mathcal{N}(0, I_N)}\big[f\big(-D(G(z, p), p)\big)\big]$$
with $f$ the cross-entropy (softplus) mapping and $G = (G_{tex}, G_{bg})$.
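The synthesis pass of claim 1 can be sketched structurally. In the following Python (PyTorch) sketch the tiny MLP generators, the tensor dimensions, and the toy `render` stand-in for the texture mapping M and differentiable renderer R are all illustrative assumptions; a real 3D deformable GAN would use convolutional generators and a true differentiable rasterizer:

```python
import torch
import torch.nn as nn

class TextureGenerator(nn.Module):
    def __init__(self, z_dim=64, shape_dim=40, tex_hw=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + shape_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * tex_hw * tex_hw), nn.Tanh())
        self.tex_hw = tex_hw

    def forward(self, z_tex, alpha):                     # G_tex(z_tex, alpha)
        t = self.net(torch.cat([z_tex, alpha], dim=1))
        return t.view(-1, 3, self.tex_hw, self.tex_hw)   # UV texture map

class BackgroundGenerator(nn.Module):
    def __init__(self, in_dim=64 + 40 + 32 + 1 + 10, img_hw=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_hw * img_hw), nn.Tanh())
        self.img_hw = img_hw

    def forward(self, z_tex, alpha, z_bg, pose, expr):   # G_bg(z_tex, alpha, z_bg, p, eps)
        b = self.net(torch.cat([z_tex, alpha, z_bg, pose, expr], dim=1))
        return b.view(-1, 3, self.img_hw, self.img_hw)

def render(texture, pose):
    # Placeholder for M (texture mapping) and R (differentiable renderer): here we
    # merely roll the texture horizontally in proportion to yaw and return a full
    # alpha mask, so gradients still flow end to end through the composite.
    shift = (pose[:, 0] * texture.shape[-1] / 180.0).long()
    face = torch.stack([torch.roll(t, int(s), dims=-1) for t, s in zip(texture, shift)])
    mask = torch.ones_like(face[:, :1])
    return face, mask

def synthesize(g_tex, g_bg, z_tex, z_bg, alpha, expr, pose):
    texture = g_tex(z_tex, alpha)                        # texture for the 3DMM shape
    background = g_bg(z_tex, alpha, z_bg, pose, expr)    # background for the given params
    face, mask = render(texture, pose)                   # rendered face texture map
    return mask * face + (1.0 - mask) * background       # composite image fed to D
```

In a real pipeline, `render` would be a differentiable rasterizer so that gradients from the discriminator reach the texture generator; the roll-based stand-in preserves that property in miniature.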
2. The method of claim 1, wherein the face recognition model is an end-to-end deep convolutional neural network model.
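For claim 2, a minimal end-to-end convolutional embedding network might look as follows; the layer sizes and the 128-dimensional embedding are illustrative choices, not the architecture claimed here:

```python
import torch
import torch.nn as nn

class FaceEmbeddingCNN(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(128, embed_dim)

    def forward(self, x):                                # x: (B, 3, H, W) face crop
        z = self.features(x).flatten(1)
        z = self.head(z)
        return nn.functional.normalize(z, dim=1)         # unit-norm face feature vector
```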
3. The method of claim 1, wherein the 3D deformable generative adversarial network is trained by:
alternately minimizing the losses of the discriminator and the generator, wherein the target loss functions are:
$$\mathcal{L}_D = \mathbb{E}_{(x, p) \sim \mathcal{D}}\big[f\big(-D(x, p)\big)\big] + \mathbb{E}_{z \sim \mathcal{N}(0, I_N)}\big[f\big(D(G(z, p), p)\big)\big] + \gamma\, \mathbb{E}_{(x, p) \sim \mathcal{D}}\big[\lVert \nabla_x D(x, p) \rVert^2\big]$$
$$\mathcal{L}_G = \mathbb{E}_{z \sim \mathcal{N}(0, I_N)}\big[f\big(-D(G(z, p), p)\big)\big]$$
wherein $\mathcal{L}_D$ is the loss function of the discriminator and $\mathcal{L}_G$ is the loss function of the generator; $f\big(-D(x, p)\big)$ represents the cross-entropy loss value of a real sample and $f\big(D(G(z, p), p)\big)$ the cross-entropy loss value of a generated sample; $\lVert \nabla_x D(x, p) \rVert^2$ represents an additional gradient penalty term; $z$, obtained by sampling $z \sim \mathcal{N}(0, I_N)$, is the vector containing the random texture parameters $z_{tex}$, the random background parameters $z_{bg}$, the 3D deformable model shape parameters $\alpha$, and the expression parameters $\varepsilon$; $(x, p) \sim \mathcal{D}$ is a real image and its associated pose label randomly selected from the distribution of the training data $\mathcal{D}$, with $x$ the real image and $p$ the pose label; $G = (G_{tex}, G_{bg})$ is the parameterized generator network, where $G_{tex}$ represents the parametric texture generator and $G_{bg}$ represents the background generator network; $D$ is the parameterized discriminator network; and $N$ is the combined dimensionality of the texture, background, shape, and expression parameter vectors.
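The alternating optimization of claim 3 can be sketched as a single training step. This assumes a `generator` wrapping the texture and background generators plus the renderer (with `z` the concatenated latent) and a `discriminator` taking an image and pose label; the softplus form of the cross-entropy terms and the R1-style penalty weight `gamma` are one plausible reading of the claim, not a verbatim implementation:

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, opt_g, opt_d, real, pose, z, gamma=10.0):
    # --- discriminator step: minimize L_D ---
    real = real.detach().requires_grad_(True)
    d_real = discriminator(real, pose)
    d_fake = discriminator(generator(z, pose).detach(), pose)
    # cross-entropy terms on real and generated samples
    loss_d = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    # additional gradient penalty term, taken on real samples
    grad = torch.autograd.grad(d_real.sum(), real, create_graph=True)[0]
    loss_d = loss_d + gamma * grad.flatten(1).pow(2).sum(1).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- generator step: minimize L_G against the refreshed discriminator ---
    d_fake = discriminator(generator(z, pose), pose)
    loss_g = F.softplus(-d_fake).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```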
4. The method of claim 1, wherein generating face pictures at any angle for the frontal face pictures through the pre-trained 3D deformable generative adversarial network to expand the sample set comprises:
generating a face picture at any angle according to
$$I_{gen} = G\big(z_{tex}, z_{bg}, \alpha, \varepsilon, p\big)$$
wherein $I_{gen}$ is the generated image; $z_{tex}$ is the $d_{tex}$-dimensional random texture parameter; $z_{bg}$ is the $d_{bg}$-dimensional random background parameter; $\alpha$ is the $d_{\alpha}$-dimensional 3D deformable model shape parameter; $\varepsilon$ is the $d_{\varepsilon}$-dimensional expression parameter; $p$ is the pose label; $G_{tex}$ is the trained parametric texture generator and $G_{bg}$ is the trained background generator network, $G = (G_{tex}, G_{bg})$ being the generator network; and adding the generated face pictures at any angle into the sample set to obtain the expanded sample set.
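Claim 4's expansion then reduces to sampling latent vectors and pose labels and rendering. The sketch below reuses the `synthesize` helper from the earlier sketch; the latent dimensions, the angle grid, and the assumption that `alpha` and `expr` are `(1, d)` tensors for one person are illustrative:

```python
import torch

def expand_sample_set(synthesize, g_tex, g_bg, alpha, expr, n_per_pose=4,
                      yaw_angles=(-90.0, -60.0, -30.0, 30.0, 60.0, 90.0),
                      z_tex_dim=64, z_bg_dim=32):
    generated = []
    for yaw in yaw_angles:
        z_tex = torch.randn(n_per_pose, z_tex_dim)   # random texture parameters
        z_bg = torch.randn(n_per_pose, z_bg_dim)     # random background parameters
        pose = torch.full((n_per_pose, 1), yaw)      # pose label p
        a = alpha.expand(n_per_pose, -1)             # 3DMM shape parameters, shape (1, d)
        e = expr.expand(n_per_pose, -1)              # expression parameters, shape (1, d)
        generated.append(synthesize(g_tex, g_bg, z_tex, z_bg, a, e, pose))
    return torch.cat(generated, dim=0)               # images to add to the sample set
```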
5. The method of claim 1, wherein, in the case of generating face pictures yawed 30 degrees, 60 degrees, and 90 degrees to the left and right through the pre-trained 3D deformable generative adversarial network, the method further comprises:
extracting face feature vectors from the frontal face picture and from the face pictures yawed 30, 60, and 90 degrees to the left and right through the trained face recognition model;
fusing the extracted face feature vectors;
and taking the fused features as registration features to establish a face feature database.
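One simple reading of the fusion in claim 5 is to average the per-view embeddings and re-normalize; `embed` is the trained recognizer's feature extractor, assumed given, and mean fusion is just one reasonable choice:

```python
import numpy as np

def register(embed, views, db, person_id):
    # views: frontal picture plus the ±30°, ±60°, ±90° renderings
    feats = np.stack([embed(v) for v in views])
    fused = feats.mean(axis=0)                        # feature-level fusion
    db[person_id] = fused / np.linalg.norm(fused)     # unit norm for cosine matching
    return db
```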
6. The method of claim 5, wherein recognizing face pictures of attendance personnel at any angle with the trained face recognition model and generating an attendance record when recognition succeeds comprises: extracting, through the trained face recognition model, a face feature vector from any face picture captured by a camera at the attendance site, and computing the cosine similarity between the extracted face feature vector and each face feature vector in the face feature database one by one;
and if the cosine similarity exceeds a preset threshold value, the recognition succeeds and an attendance record is generated.
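The matching in claim 6 is then a cosine-similarity search over the registered features; the threshold value below is illustrative:

```python
import numpy as np

def match(embed, frame, db, threshold=0.6):
    probe = embed(frame)
    probe = probe / np.linalg.norm(probe)
    best_id, best_sim = None, -1.0
    for pid, feat in db.items():                      # features stored unit-norm
        sim = float(np.dot(probe, feat))              # cosine similarity
        if sim > best_sim:
            best_id, best_sim = pid, sim
    if best_sim > threshold:
        return best_id, best_sim                      # success: generate the record
    return None, best_sim
```

A hit returns the person id and score, from which the attendance record of claim 6 would be generated; a miss returns None and no record is written.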
7. An attendance system based on face recognition, the system comprising:
the acquisition module is used for collecting frontal face pictures of the personnel to be checked as a sample set;
the expansion module is used for generating face pictures at any angle for the frontal face pictures through the pre-trained 3D deformable generative adversarial network to expand the sample set;
the training module is used for training a face recognition model with the expanded sample set;
and the recognition module is used for recognizing face pictures of attendance personnel at any angle through the trained face recognition model and generating an attendance record when recognition succeeds.
8. An electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to perform the attendance checking method based on face recognition according to any one of claims 1 to 6.
9. A storage medium storing a computer program, wherein the computer program is configured, when run, to perform the attendance checking method based on face recognition according to any one of claims 1 to 6.
CN202110699382.1A 2021-06-23 2021-06-23 Attendance checking method and system based on face recognition, electronic equipment and storage medium Active CN113159006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110699382.1A CN113159006B (en) 2021-06-23 2021-06-23 Attendance checking method and system based on face recognition, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110699382.1A CN113159006B (en) 2021-06-23 2021-06-23 Attendance checking method and system based on face recognition, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113159006A CN113159006A (en) 2021-07-23
CN113159006B true CN113159006B (en) 2021-09-14

Family

ID=76876037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110699382.1A Active CN113159006B (en) 2021-06-23 2021-06-23 Attendance checking method and system based on face recognition, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113159006B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780095B (en) * 2021-08-17 2023-12-26 中移(杭州)信息技术有限公司 Training data expansion method, terminal equipment and medium of face recognition model
CN113837236B (en) * 2021-08-31 2022-11-15 广东智媒云图科技股份有限公司 Method and device for identifying target object in image, terminal equipment and storage medium
CN113792679A (en) * 2021-09-17 2021-12-14 深信服科技股份有限公司 Blacklist person identification method and device, electronic equipment and storage medium
CN113822245B (en) * 2021-11-22 2022-03-04 杭州魔点科技有限公司 Face recognition method, electronic device, and medium
CN115081920B (en) * 2022-07-08 2024-07-12 华南农业大学 Attendance check-in scheduling management method, system, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446981A (en) * 2018-10-25 2019-03-08 腾讯科技(深圳)有限公司 A kind of face's In vivo detection, identity identifying method and device
CN109635766A (en) * 2018-12-20 2019-04-16 中国地质大学(武汉) The face of convolutional neural networks based on small sample is taken pictures Work attendance method and system
US10789796B1 (en) * 2019-06-24 2020-09-29 Sufian Munir Inc Priority-based, facial recognition-assisted attendance determination and validation system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Precisely generate fake faces! Amazon's brand-new GAN model gives you all-round, blind-spot-free face retouching; Amusi (CVer); https://blog.csdn.net/amusi1994/article/details/112598361; 2021-01-13; pp. 1-12 *

Also Published As

Publication number Publication date
CN113159006A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
CN113159006B (en) Attendance checking method and system based on face recognition, electronic equipment and storage medium
US20210158023A1 (en) System and Method for Generating Image Landmarks
CN110135249B Human behavior identification method based on time attention mechanism and LSTM
CN109657533A (en) Pedestrian recognition methods and Related product again
JP4951498B2 (en) Face image recognition device, face image recognition method, face image recognition program, and recording medium recording the program
WO2011112368A2 (en) Robust object recognition by dynamic modeling in augmented reality
Shen et al. Lidargait: Benchmarking 3d gait recognition with point clouds
CN112911393B (en) Method, device, terminal and storage medium for identifying part
CN112257696B (en) Sight estimation method and computing equipment
CN113822254B (en) Model training method and related device
CN108537214B (en) Automatic construction method of indoor semantic map
CN112528902B (en) Video monitoring dynamic face recognition method and device based on 3D face model
Tu et al. Consistent 3d hand reconstruction in video via self-supervised learning
CN110222572A (en) Tracking, device, electronic equipment and storage medium
CN111680550B (en) Emotion information identification method and device, storage medium and computer equipment
CN111160307A (en) Face recognition method and face recognition card punching system
CN109858433B (en) Method and device for identifying two-dimensional face picture based on three-dimensional face model
CN111008935A (en) Face image enhancement method, device, system and storage medium
CN111209811A (en) Method and system for detecting eyeball attention position in real time
Jiang et al. Application of a fast RCNN based on upper and lower layers in face recognition
Neverova Deep learning for human motion analysis
CN115601710A (en) Examination room abnormal behavior monitoring method and system based on self-attention network architecture
CN111815768A (en) Three-dimensional face reconstruction method and device
CN113689527B (en) Training method of face conversion model and face image conversion method
JP2022095332A (en) Learning model generation method, computer program and information processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant