CN113869714A - Attendance recording method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113869714A
CN113869714A (application CN202111128834.7A)
Authority
CN
China
Prior art keywords
attendance
feature
target
similarity
image
Prior art date
Legal status
Pending
Application number
CN202111128834.7A
Other languages
Chinese (zh)
Inventor
张旭龙
王健宗
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202111128834.7A
Publication of CN113869714A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks


Abstract

The invention relates to the technical field of target detection and discloses an attendance recording method, an attendance recording device, computer equipment and a storage medium. The method comprises: receiving an attendance recording request, wherein the request comprises an image to be recognized and the shooting time corresponding to that image; generating a target candidate region set on the image to be recognized according to a selective search algorithm; extracting feature data of the target candidate region set based on a convolutional neural network as first feature data; classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature; acquiring each basic object feature in a preset basic object set and, for each attendance object feature, matching it against each basic object feature, taking the basic object corresponding to a successfully matched basic object feature as the target object for that attendance object feature; and generating an attendance record for each target object based on the shooting time and the target objects, thereby improving attendance recording efficiency.

Description

Attendance recording method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of target detection, in particular to an attendance recording method and device, computer equipment and a storage medium.
Background
Attendance checking and assessment of employees are important parts of enterprise human-resource management. To strengthen employee management, an enterprise checks employee attendance, that is, it compiles statistics on attendance such as clock-in, clock-off and overtime records, late arrival, early departure, absenteeism, total attendance and leave days.
A traditional attendance mode mainly uses a fingerprint attendance machine to identify an employee's fingerprint by biometric technology, so as to verify the employee's identity and record attendance. However, with this approach, an employee often cannot clock in because of a problem with the fingerprint itself, such as a scratched, sweaty, peeling or overly dry finger; the employee must then report to the relevant personnel, who record the attendance manually, which lowers the management efficiency of attendance records. In addition, in large enterprises with many employees, clocking in by fingerprint requires long queues, so attendance recording efficiency is low.
To solve the problem that a problematic fingerprint prevents clocking in, employee attendance may instead be recorded by face recognition technology. However, this technology still requires individual queuing for face recognition at present, so large enterprises with many employees still spend a long time queuing, and attendance recording efficiency remains low.
Disclosure of Invention
The embodiment of the invention provides an attendance recording method, an attendance recording device, computer equipment and a storage medium, and aims to improve attendance recording efficiency.
In order to solve the technical problem, an embodiment of the present application provides an attendance recording method, including:
receiving an attendance recording request, wherein the attendance recording request comprises an image to be identified and shooting time corresponding to the image to be identified;
generating a target candidate region set on the image to be identified according to a selective search algorithm;
extracting feature data of the target candidate region set based on a convolutional neural network to serve as first feature data;
classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature;
acquiring each basic object feature in a preset basic object set, respectively matching the attendance object feature with each basic object feature aiming at each attendance object feature, and taking a basic object corresponding to the successfully matched basic object feature as a target object corresponding to the attendance object feature;
and generating an attendance record corresponding to each target object based on the shooting time and each target object.
In order to solve the above technical problem, an embodiment of the present application further provides an attendance recording apparatus, including:
the system comprises a receiving module, a processing module and a processing module, wherein the receiving module is used for receiving an attendance recording request, and the attendance recording request comprises an image to be identified and shooting time corresponding to the image to be identified;
the region generation module is used for generating a target candidate region set on the image to be identified according to a selective search algorithm;
the extraction module is used for extracting the feature data of the target candidate region set based on a convolutional neural network to serve as first feature data;
the classification module is used for classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature;
the matching module is used for acquiring each basic object feature in a preset basic object set, matching the attendance object feature with each basic object feature respectively aiming at each attendance object feature, and taking the basic object corresponding to the successfully matched basic object feature as a target object corresponding to the attendance object feature;
and the record generating module is used for generating an attendance record corresponding to each target object based on the shooting time and each target object.
In order to solve the technical problem, an embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the attendance recording method when executing the computer program.
In order to solve the technical problem, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the attendance recording method are implemented.
According to the attendance recording method and device, computer equipment and storage medium, an attendance recording request is received, the request comprising an image to be identified and the shooting time corresponding to that image; a target candidate region set is generated on the image to be identified according to a selective search algorithm, and feature data of the target candidate region set is extracted as first feature data based on a convolutional neural network; the first feature data is classified based on a classification algorithm to obtain at least one attendance object feature; each basic object feature in a preset basic object set is acquired and, for each attendance object feature, matched against the attendance object feature, with the basic object corresponding to a successfully matched basic object feature taken as the target object for that attendance object feature; and an attendance record is generated for each target object based on the shooting time and the target objects. Because the target candidate region set is generated on the image to be recognized by the selective search algorithm and the feature data of the candidate region set is then classified, at least one attendance object can be determined on a single image to be recognized, so attendance records for a plurality of target objects can be made simultaneously, improving attendance recording efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments will be briefly introduced below. The drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
fig. 2 is a flow chart of one embodiment of an attendance recording method of the present application;
fig. 3 is a schematic structural diagram of an embodiment of an attendance recording apparatus according to the present application;
FIG. 4 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, as shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the attendance recording method provided in the embodiment of the present application is executed by the server, and accordingly, the attendance recording apparatus is disposed in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs, and the terminal devices 101, 102 and 103 in this embodiment may specifically correspond to an application system in actual production. The server in the embodiment of the application may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data, an artificial intelligence platform, and the like.
Referring to fig. 2, fig. 2 shows an attendance recording method according to an embodiment of the present invention, which is described by taking the method as an example of being applied to the server in fig. 1, and the details are as follows:
s201: and receiving an attendance recording request, wherein the attendance recording request comprises the image to be identified and the shooting time corresponding to the image to be identified.
Specifically, a camera device is arranged at the attendance place. The camera device is connected to the server in a wired or wireless manner and reads configuration information from the server, the configuration information comprising a preset time period range and a fixed interval. Within the preset time period range, the camera device captures images at the fixed interval and rapidly screens each captured image; if a person appears in a captured image, that image is taken as the image to be identified and the corresponding shooting time is recorded, an attendance recording request is generated based on the image to be identified and its shooting time, and the request is sent to the server. The server receives and parses the attendance recording request to obtain the image to be identified and the corresponding shooting time.
The preset time period range may be set according to the actual application scenario and is not specifically limited here. For example, three time periods may be set: 8:00 to 9:30, 11:30 to 13:30 and 17:00 to 19:00. Setting the preset time period range according to actual requirements allows attendance-related information to be acquired accurately while reducing unnecessary resource overhead on the server and improving its processing performance.
The fixed interval may be set according to the actual application scenario and is not specifically limited here. For example, the fixed interval may be set to 1 second, 3 seconds or 5 seconds.
In a specific embodiment, images to be identified in two consecutive attendance record requests are selected, and the specific selection mode comprises the following steps:
and acquiring images to be identified in the two continuous attendance recording requests to be processed as a first image and a second image, and combining the first image and the second image into a dual-channel image, wherein the first image and the second image are both gray level images with target sizes.
Optionally, it may be detected whether the first image and the second image are grayscale images of the target size; if any image (the first image and/or the second image) is detected not to be a grayscale image, grayscale processing is performed on that image to convert it into a grayscale image. Illustratively, assuming the target size is Z × Z (Z greater than 0), and the first image X1 and the second image X2 are both single-channel images (i.e., grayscale images) of dimension (Z, Z, 1), the two images may be combined into one two-channel image of dimension (Z, Z, 2) to form the input image pair (X1, X2). Alternatively, if the size of any image is detected not to be the target size, that image may be resized to the target size.
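As a concrete illustration of the pairing step, the two consecutive frames can be converted to grayscale and stacked into one two-channel array. This is a minimal NumPy sketch; the helper names and the luminance weights are illustrative, not taken from the patent:

```python
import numpy as np

def to_grayscale(img):
    """Collapse an (H, W, 3) RGB frame to (H, W) grayscale; pass grayscale through."""
    if img.ndim == 3:
        return img @ np.array([0.299, 0.587, 0.114])  # standard luminance weights
    return img

def make_pair(x1, x2):
    """Stack two Z x Z grayscale frames into one (Z, Z, 2) two-channel input."""
    x1, x2 = to_grayscale(x1), to_grayscale(x2)
    assert x1.shape == x2.shape, "both frames must share the target size Z x Z"
    return np.stack([x1, x2], axis=-1)

# first frame already grayscale, second an RGB frame of the same target size
pair = make_pair(np.zeros((96, 96)), np.ones((96, 96, 3)))
```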
And performing data processing on the dual-channel image through the trained image similarity model to obtain an image similarity matrix corresponding to the dual-channel image.
The network structure of the image similarity model may include convolutional layers and a deconvolution layer. Illustratively, the first l1 (e.g., 4) layers of the image similarity model are convolutional layers, the penultimate layer is a deconvolution layer, and the last layer is a convolutional layer. The convolutional layers extract feature maps (Feature Map) and realize downsampling with stride 2; the deconvolution layer realizes an upsampling process that enlarges the image size and fills in image content, making the content richer.
In a specific implementation, after the two-channel image is input into the image similarity model, it can be downsampled successively through the first l1 convolutional layers to extract a feature map of the two-channel image.
Furthermore, the feature map extracted by the convolutional layers can be upsampled by the deconvolution layer and updated to a target feature map according to the target dimension; the last convolutional layer then performs dimension reduction on the target feature map to obtain the image similarity matrix corresponding to the two-channel image. The target dimension is the dimension of the image similarity matrix finally output by the image similarity model, preset during model training; a feature map is a matrix characterizing image features.
Exemplarily, assume the first 4 network layers of the image similarity model are convolutional layers (the first, second, third and fourth convolutional layers), the 5th network layer is a deconvolution layer, the 6th network layer is a fifth convolutional layer, the target dimension is 12 × 12, and the two-channel image is an image 1 of dimension (96 × 96 × 2). The first convolutional layer performs a first downsampling on image 1 and extracts its image features, giving a first feature map of dimension (48 × 48 × k1); the first feature map is input into the second convolutional layer, which performs a second downsampling to give a second feature map of dimension (24 × 24 × k2); and so on: the second feature map passes through the third convolutional layer to give a third feature map of dimension (12 × 12 × k3), and the third feature map passes through the fourth convolutional layer to give a fourth feature map of dimension (6 × 6 × k4). Further, the deconvolution layer upsamples the fourth feature map of dimension (6 × 6 × k4) and, according to the target dimension 12 × 12, updates it to a target feature map of dimension (12 × 12 × k4); the last convolutional layer then performs dimension reduction, reducing the third dimension k4 of the target feature map to 1 and thereby obtaining the image similarity matrix corresponding to the two-channel image.
The third dimensions k1, k2, k3 and k4 of the first to fourth feature maps may each be an integer of 2 or more; the specific values depend on the network parameters of the corresponding convolutional layers and are not particularly limited.
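The dimension flow in the example above (a 96 × 96 × 2 input down to a 12 × 12 similarity matrix) can be traced with plain NumPy, substituting average pooling for the learned stride-2 convolutions and nearest-neighbour repetition for the deconvolution. Only the shapes, not the learned behaviour, match the description:

```python
import numpy as np

def downsample(x):
    """Stride-2 average pooling as a stand-in for a stride-2 conv layer
    (channel counts are kept fixed here; a real layer would change them to k1..k4)."""
    h, w = x.shape[0], x.shape[1]
    return x.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling as a stand-in for the deconvolution layer."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def similarity_matrix(pair):
    """Trace the flow 96 -> 48 -> 24 -> 12 -> 6 -> 12, then collapse channels."""
    x = pair                      # (96, 96, 2) two-channel input
    for _ in range(4):            # four stride-2 convolutional layers
        x = downsample(x)         # (48, 48, .) -> ... -> (6, 6, .)
    x = upsample(x)               # deconvolution layer: (12, 12, .)
    return x.mean(axis=-1)        # last conv layer reduces the third dimension to 1

m = similarity_matrix(np.random.rand(96, 96, 2))
```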
And determining the image similarity of the first image and the second image in the target image area according to the image similarity matrix.
In an embodiment, the dimension of the image similarity matrix is N × M (N and M are both integers greater than 0), and a numerical value of each element in the image similarity matrix represents a similarity value of the first image and the second image in the corresponding image region, where the specific implementation of determining the image similarity of the first image and the second image in the target image region according to the image similarity matrix may be as follows:
and dividing the first image and the second image into N x M image sub-regions according to the dimension of the image similarity matrix, wherein each image sub-region has a corresponding relation with each element in the image similarity matrix.
Further, the target image sub-regions included in the target image region are determined from the N x M image sub-regions, the numerical values of the target elements corresponding to the target image sub-regions are obtained from the image similarity matrix, and the image similarity of the first image and the second image in the target image region is determined according to the numerical values of the target elements.
Specifically, the numerical value of each element in the image similarity matrix represents the image similarity of the first image and the second image in the corresponding image region, and the image similarity of the target image region is obtained by averaging the values of the target elements associated with the target image region in the image similarity matrix.
In an embodiment, assuming that the target image area is a global image area of the first image or the second image, the determining the image similarity of the first image and the second image in the target image area according to the image similarity matrix may include: and acquiring the numerical value of each element in the image similarity matrix, and averaging the numerical values of each element in the image similarity matrix to obtain the global image similarity of the first image and the second image.
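Both the per-region and the global averaging described above can be sketched in a few lines (function names are illustrative):

```python
import numpy as np

def region_similarity(sim_matrix, target_elems):
    """Average the similarity values of the target elements covering a region.

    `target_elems` is a list of (row, col) indices into the N x M similarity
    matrix, one per image sub-region inside the target image region.
    """
    return float(np.mean([sim_matrix[r, c] for r, c in target_elems]))

def global_similarity(sim_matrix):
    """Target region = whole image: average every element of the matrix."""
    return float(sim_matrix.mean())
```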
And if the image similarity is greater than a preset similarity threshold, randomly selecting the first image or the second image as the image to be identified.
Specifically, the preset similarity threshold is obtained by analyzing historical empirical data.
And if the image similarity is smaller than a preset similarity threshold value, taking the first image and the second image as images to be identified.
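The threshold decision over two consecutive frames can be summarized as follows; the default threshold value is a placeholder, since the patent derives the preset similarity threshold from historical empirical data:

```python
def select_images(first, second, similarity, threshold=0.9):
    """Keep one frame when two consecutive shots look alike, both otherwise.

    The default threshold is illustrative only; the patent obtains the preset
    similarity threshold by analyzing historical empirical data.
    """
    if similarity > threshold:
        return [first]          # near-duplicates: recognizing either one suffices
    return [first, second]
```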
S202: and generating a target candidate region set on the image to be identified according to a selective search algorithm.
Specifically, the selective search algorithm mainly uses image segmentation technology to perform object detection. The algorithm generates a target region set on the image to be identified according to the image segmentation technology, calculates the similarity of every two adjacent regions in the target region set based on Euclidean distance, and merges adjacent regions whose similarity is higher than a preset threshold into a new region, generating the target candidate region set on the image to be identified once the similarity between every two adjacent regions in the target region set is smaller than the preset threshold. The target region set comprises foreground regions and background regions, and the target candidate region set consists of the foreground regions. A foreground region contains candidate objects in the image to be identified, the candidate objects comprising objects to be checked for attendance and other objects; a background region is background in the image to be identified, such as a wall. Candidate targets in the foreground of the image to be identified can be located quickly and accurately by the selective search algorithm, and target candidate regions are generated accordingly; quickly and accurately determining the target candidate region set helps improve the recognition efficiency of objects to be checked and thus the attendance recording efficiency.
As for the above step S202, the step of generating the target candidate region set on the image to be recognized according to the selective search algorithm specifically includes steps S2021 to S2023:
step S2021, performing candidate target positioning on the image to be recognized, and generating a candidate target frame region set.
Specifically, candidate target positioning is performed on the image to be recognized using a graph-based image segmentation algorithm: each pixel in the image to be recognized is taken as a node in a weighted undirected graph, edges are formed between nodes, and the weight W(i, j) of an edge is determined by the spatial relationship (such as the distance from vertex i to vertex j) and grayscale attributes (such as texture, color and gray value). The edge weight reflects the similarity between vertices i and j. The image to be recognized is cut into a plurality of subset regions according to the similarity between pixels, generating the candidate target frame region set, where each subset region represents one candidate target.
Step S2022, calculate the comprehensive similarity of adjacent target candidate frame regions in the candidate target frame region set.
Specifically, the comprehensive similarity is a weighted sum of the color similarity, the texture similarity, the size similarity, and the overlap similarity.
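The weighted sum can be sketched as below. The component values and equal weights are illustrative assumptions; in a full selective search implementation each component is itself computed from region histograms and bounding boxes.

```python
# Sketch of the comprehensive similarity of step S2022: a weighted sum
# of colour, texture, size and overlap similarity. Weights and
# component values are illustrative assumptions.

def comprehensive_similarity(color, texture, size, fill,
                             weights=(0.25, 0.25, 0.25, 0.25)):
    """Each component similarity is assumed normalised to [0, 1]."""
    return sum(w * s for w, s in zip(weights, (color, texture, size, fill)))

s = comprehensive_similarity(0.9, 0.8, 0.5, 0.6)  # equal weights
```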
Step S2023, if the comprehensive similarity is greater than the preset similarity threshold, merging the target candidate frame regions to obtain a new target candidate frame region set, and returning to the step of calculating the comprehensive similarity of adjacent target candidate frame regions in the candidate target frame region set to continue execution, until the comprehensive similarity of every two adjacent target candidate frame regions in the target candidate frame region set is lower than the preset similarity threshold, at which point the target candidate region set is generated.
Specifically, the preset similarity threshold is obtained by analyzing historical empirical data.
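Steps S2022–S2023 form an iterative merging loop, which can be sketched as follows. The region representation (pixel sets), the similarity measure, and the threshold here are illustrative assumptions; any pairwise measure such as the weighted comprehensive similarity of step S2022 could be plugged in.

```python
# Sketch of the merging loop of step S2023: repeatedly merge the most
# similar adjacent pair while any pair exceeds the threshold, then stop.
# Regions are toy pixel sets; `similarity` is any pairwise measure.

def merge_regions(regions, similarity, threshold):
    regions = list(regions)
    while True:
        best = None
        for i in range(len(regions) - 1):          # adjacent pairs only
            s = similarity(regions[i], regions[i + 1])
            if s > threshold and (best is None or s > best[0]):
                best = (s, i)
        if best is None:
            return regions                          # all pairs below threshold
        i = best[1]
        regions[i:i + 2] = [regions[i] | regions[i + 1]]  # merge pixel sets

# Illustrative similarity: closeness of region sizes (an assumption made
# only to keep the example self-contained).
sim = lambda a, b: 1.0 - abs(len(a) - len(b)) / (len(a) + len(b))
merged = merge_regions([{1, 2}, {3, 4}, {5}], sim, threshold=0.9)
```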
S203: and extracting the characteristic data of the target candidate region set as first characteristic data based on the convolutional neural network.
Specifically, the feature data of the target candidate region set comprises the feature data of each target candidate region in the set, each piece of feature data characterizing the corresponding target candidate region. For example, if the image to be recognized contains an object to be checked A and an object not to be checked B, the feature data of A and of B are extracted based on the convolutional neural network; the feature data of A can be used to characterize the object to be checked A, and the feature data of B to characterize the object not to be checked B. The convolutional neural network is a deep neural network with a convolutional structure, composed of a plurality of convolutional layers, each containing a plurality of convolution kernels; a kernel scans the whole image in sequence from left to right and from top to bottom, and the resulting output data is called a feature map. In the present application, the convolutional neural network is used to extract the feature data of each target candidate region in the target candidate region set, i.e., the feature data of the target candidate region set.
S204: and classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature.
Specifically, the attendance object features may be face features, human posture features, or a combination of the two, and the classification algorithm includes, but is not limited to, decision trees, Bayesian classifiers, artificial neural networks, K-nearest neighbors, support vector machines, and association-rule-based classification.
For a better understanding of the above step S204, the step is exemplified below, taking an artificial neural network as an example:
Acquire a sample image training set, and preprocess each image in the training set.
Construct a convolutional neural network (CNN), and set the hyper-parameters and loss function of the CNN.
Take the preprocessed sample images and the known sample image classification labels as input values of the CNN, and start training the CNN.
Calculate the loss function from the output of the CNN and the image labels, and then perform error back-propagation.
During error back-propagation, update the parameters of the CNN by gradient-direction parameter optimization; after the end condition is met, iterative training ends and the trained CNN is obtained.
Input the first feature data into the trained convolutional neural network for classification to obtain at least one attendance object feature.
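The gradient-direction parameter update at the heart of the training steps above can be sketched with a single parameter and a squared-error loss. This is purely an illustration of the update rule, not an actual CNN; a real network applies the same rule to every convolution kernel weight, and the learning rate and step count are illustrative assumptions.

```python
# Minimal sketch of gradient-direction parameter optimisation: move the
# parameter against the loss gradient each iteration until convergence.

def train(initial_w, target, lr=0.1, steps=100):
    w = initial_w
    for _ in range(steps):
        grad = 2 * (w - target)   # d/dw of the loss (w - target)^2
        w -= lr * grad            # update in the negative gradient direction
    return w

w = train(initial_w=0.0, target=3.0)  # converges toward 3.0
```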
Specifically, the loss function L is calculated according to formula (1):
L(a) = −|y − a|^β · ((1 − y)·log(1 − a) + y·log(a))    (1)
where β ≥ 0, a is the predicted probability value, and y is the true probability value.
Here, it should be particularly noted that the loss function is smallest when a equals y; that is, the closer a is to y, the easier the sample is to classify and the more its loss is suppressed, which improves the accuracy of classification and thus the attendance recording efficiency.
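Formula (1) can be sketched directly in code. The modulating factor |y − a|^β down-weights easy samples (predictions already close to the label), similar in spirit to the focal loss; the value of β and the sample probabilities below are illustrative.

```python
import math

# Sketch of the loss function of formula (1):
#   L(a) = -|y - a|**beta * ((1 - y)*log(1 - a) + y*log(a))
# beta and the example values are illustrative assumptions.

def loss(a, y, beta=2.0):
    """a: predicted probability in (0, 1); y: true probability value."""
    return -abs(y - a) ** beta * ((1 - y) * math.log(1 - a) + y * math.log(a))

easy = loss(0.9, 1.0)  # prediction close to the label -> suppressed loss
hard = loss(0.4, 1.0)  # prediction far from the label -> larger loss
```

Comparing the two calls shows the suppression effect described above: the easy sample's loss is much smaller than the hard sample's.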
S205: and acquiring each basic object feature in a preset basic object set, respectively matching the attendance object features with each basic object feature aiming at each attendance object feature, and taking the basic object corresponding to the successfully matched basic object feature as a target object corresponding to the attendance object feature.
Specifically, the basic object features are features of attendance objects collected in advance, including but not limited to face features, human posture features, and features combining the two; the preset basic object set includes a plurality of basic object features, which correspond one-to-one with the objects whose attendance needs to be recorded. The feature similarity between the attendance object feature and each basic object feature is calculated by a similarity algorithm to obtain a similarity value set; the basic object features whose similarity values in the set are smaller than a preset empirical threshold (with a distance-based measure such as the Euclidean distance, a smaller value indicates higher similarity) are taken as successfully matched basic object features, and the basic object corresponding to the successfully matched basic object feature is taken as the target object corresponding to the attendance object feature. The preset empirical threshold is obtained by analyzing historical empirical data, and the similarity algorithm includes, but is not limited to, Euclidean distance, cosine similarity, Manhattan distance, and the Jaccard similarity coefficient.
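The matching of step S205 can be sketched with the Euclidean distance, where a smaller distance means higher similarity, so a successful match is a distance below the empirical threshold. The feature vectors, object identifiers, and threshold below are illustrative assumptions.

```python
import math

# Sketch of step S205: match an attendance-object feature against each
# basic object feature by Euclidean distance; a distance below the
# preset empirical threshold counts as a successful match.

def match(attendance_feature, base_features, threshold):
    """base_features: dict of basic-object id -> feature vector.
    Returns the id of the closest basic object within threshold, else None."""
    best_id, best_dist = None, float("inf")
    for obj_id, feat in base_features.items():
        dist = math.dist(attendance_feature, feat)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = obj_id, dist
    return best_id if best_dist < threshold else None

base = {"A": [0.1, 0.2, 0.9], "B": [0.8, 0.7, 0.1]}
target = match([0.12, 0.19, 0.88], base, threshold=0.3)  # close to "A"
```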
S206: and generating an attendance record corresponding to each target object based on the shooting time and each target object.
Specifically, the attendance record includes, but is not limited to, a normal attendance record, a late-arrival record, an early-exit record, and an overtime record. If the shooting time is earlier than or equal to the preset attendance time, a normal attendance record is generated for the target object; if the shooting time is later than the preset attendance time, a late-arrival record is generated for the target object.
It should be particularly noted that the attendance time interval may be set according to the needs of the actual scene; for example, the morning attendance interval may be 7 to 9 am and the evening attendance interval 6 to 10 pm.
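Step S206 can be sketched as a comparison of the shooting time with the preset attendance time. The 9 am deadline and the record labels are illustrative assumptions drawn from the example intervals above.

```python
from datetime import time

# Sketch of step S206: classify the shooting time against a preset
# attendance time. Deadline and labels are illustrative assumptions.

def attendance_record(shooting_time, deadline=time(9, 0)):
    """Normal record if the shooting time is on or before the preset
    attendance time, otherwise a late-arrival record."""
    if shooting_time <= deadline:
        return "normal attendance"
    return "late arrival"

r1 = attendance_record(time(8, 45))   # before 9 am
r2 = attendance_record(time(9, 20))   # after 9 am
```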
According to the attendance recording method provided by the embodiment of the present invention, an attendance recording request is received, the attendance recording request including an image to be identified and the shooting time corresponding to the image to be identified; a target candidate region set is generated on the image to be identified according to a selective search algorithm; feature data of the target candidate region set is extracted based on a convolutional neural network as first feature data; the first feature data is classified based on a classification algorithm to obtain at least one attendance object feature; each basic object feature in a preset basic object set is acquired, and for each attendance object feature, the attendance object feature is matched with each basic object feature, the basic object corresponding to the successfully matched basic object feature being taken as the target object corresponding to that attendance object feature; and based on the shooting time and each target object, an attendance record corresponding to each target object is generated. Attendance can thus be recorded for a plurality of target objects simultaneously, which improves the attendance recording efficiency.
In some optional implementation manners of this embodiment, in step S204, the step of classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature includes the following steps a1 to a2:
step a1, mapping the first feature data into classification probability values based on the one-hot encoding.
Specifically, one-hot encoding, also called one-bit-effective encoding, is a method of encoding N states with an N-bit state register; each state has its own independent register bit, and only one bit is valid at any time.
Step a2, classifying the first feature data according to the classification probability value to obtain at least one attendance object feature.
Specifically, each classification probability value corresponds to a category, and the category is an attendance object characteristic or a non-attendance object characteristic.
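Steps a1–a2 can be sketched as follows. Using a softmax over two classes to produce the classification probability values is an illustrative choice of this sketch; the patent only states that each probability value corresponds to the category "attendance object feature" or "non-attendance object feature".

```python
import math

# Sketch of steps a1-a2: map raw feature scores to classification
# probability values and keep the features classified as attendance
# object features. The softmax mapping, names and scores are
# illustrative assumptions.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    """features: dict of name -> (attendance_score, non_attendance_score)."""
    kept = []
    for name, scores in features.items():
        p_attendance, _ = softmax(scores)
        if p_attendance > 0.5:          # classified as attendance object feature
            kept.append(name)
    return kept

kept = classify({"face_1": (2.0, 0.1), "background": (-1.0, 3.0)})
```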
In the embodiment, the first feature data is mapped into the classification probability value based on the one-hot code, and the first feature data is classified according to the classification probability value to obtain at least one attendance object feature, so that the attendance object feature can be accurately determined, and the accuracy of attendance records can be improved.
In some optional implementation manners of this embodiment, between step S204 and step S206, that is, between the step of classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature and the step of generating an attendance record corresponding to each target object based on the shooting time and each target object, the following steps b1 to b3 are further included:
step b1, acquiring a preset first-class attribute comparison table, wherein the first-class attribute comparison table comprises a basic object feature set and unique mark information corresponding to each basic object feature to be compared in the basic object feature set.
Specifically, the first category attribute comparison table may be updated in real time according to actual application requirements.
And b2, calculating the feature similarity between each basic object feature to be compared and the attendance object feature based on a similarity algorithm to obtain a first feature similarity value.
Specifically, the similarity algorithm includes, but is not limited to, Euclidean distance, cosine similarity, Manhattan distance, and the Jaccard similarity coefficient.
And b3, if the first feature similarity value is larger than a preset second threshold value, using the unique mark information corresponding to the basic object features to be compared as the unique mark information of the attendance object features.
Specifically, the preset second threshold is obtained by analyzing historical experience data, and the unique mark information corresponding to the characteristics of the basic objects to be compared may be names, numbers, and the like of the basic objects to be compared.
In the embodiment, the characteristics of the attendance object are represented by the unique mark information, so that the accuracy of attendance record is improved.
In some optional implementation manners of this embodiment, for step b3, if the first feature similarity value is greater than the preset second threshold, the step of using the unique mark information corresponding to the basic object feature to be compared as the unique mark information of the attendance object feature further includes:
acquiring a preset second category attribute comparison table, wherein the second category attribute comparison table comprises unique mark information to be compared corresponding to each basic object to be compared in the basic object set to be compared and the basic object feature set to be compared;
and respectively converting the unique mark information and the unique mark information to be compared into a first feature vector and a second feature vector based on the BERT model.
Specifically, BERT (Bidirectional Encoder Representations from Transformers) is a language model composed of multiple layers of Transformer encoders, each including, but not limited to, a multi-head self-attention layer (Multi-Head Attention), a normalization layer (Layer Normalization), and a feed-forward layer (Feed Forward). The aim in developing the BERT model is to obtain semantic representations of a target text by training on large-scale unlabeled corpora, and then, after fine-tuning the model parameters (such as weights) on a specific NLP (Natural Language Processing) task, to apply the text's semantic representation to that task.
And calculating the feature similarity of the first feature vector and the second feature vector based on a similarity algorithm to obtain a second feature similarity value.
Specifically, the similarity algorithm includes, but is not limited to, Euclidean distance, cosine similarity, Manhattan distance, and the Jaccard similarity coefficient.
And if the second characteristic similarity value is larger than a preset third threshold value, taking the basic object to be compared corresponding to the unique mark information as a target object.
Specifically, the preset third threshold is obtained by analyzing historical experience data, and if the unique mark information is the employee number or the employee name, the basic object to be compared corresponding to the employee number or the employee name is used as the target object, and the target object is the object needing attendance recording.
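The second comparison can be sketched as below. The unique mark strings are assumed to have already been converted into feature vectors (for example by a BERT encoder, as the text describes); only the cosine-similarity comparison and the threshold decision are shown, and the vectors and threshold value are illustrative assumptions.

```python
import math

# Sketch of the second feature comparison: cosine similarity between
# the two unique-mark feature vectors, matched against a preset third
# threshold. Vectors and threshold are illustrative assumptions.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def is_target(vec, vec_to_compare, threshold=0.95):
    """True when the second feature similarity value exceeds the threshold."""
    return cosine_similarity(vec, vec_to_compare) > threshold

same = is_target([0.2, 0.8, 0.1], [0.21, 0.79, 0.11])  # near-identical marks
```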
In the embodiment, the target object is determined by comparing the unique mark information with the feature similarity of the unique mark information to be compared, so that the accuracy of attendance record is improved.
In some optional implementation manners of this embodiment, in step S205, the step of matching the attendance object characteristics with each of the basic object characteristics, and taking the basic object corresponding to the successfully matched basic object characteristics as the target object corresponding to the attendance object characteristics includes:
and calculating the feature similarity of the attendance object features and the basic object features based on a similarity calculation method to obtain a feature similarity value.
Specifically, the similarity algorithm includes, but is not limited to, Euclidean distance, cosine similarity, Manhattan distance, and the Jaccard similarity coefficient.
And if the feature similarity value is larger than a preset fourth threshold value, taking the basic object corresponding to the basic object features as a target object corresponding to the attendance object features.
Specifically, the preset fourth threshold is obtained by analyzing historical empirical data.
In the embodiment, the target object is determined by comparing the feature similarity of the attendance object features and the feature similarity of the basic object features, so that the accuracy of attendance record is improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 3 is a schematic block diagram of an attendance recording apparatus corresponding to the attendance recording method according to the above-described embodiment. As shown in fig. 3, the attendance recording apparatus includes a receiving module 30, an area generating module 31, an extracting module 32, a classifying module 33, a matching module 34, and a record generating module 35. The functional modules are explained in detail as follows:
the receiving module 30 is configured to receive an attendance recording request, where the attendance recording request includes an image to be identified and shooting time corresponding to the image to be identified.
And the region generating module 31 is configured to generate a target candidate region set on the image to be identified according to a selective search algorithm.
And the extraction module 32 is configured to extract feature data of the target candidate region set as first feature data based on the convolutional neural network.
The classification module 33 is configured to classify the first feature data based on a classification algorithm to obtain at least one attendance object feature.
The matching module 34 is configured to acquire each basic object feature in a preset basic object set, match the attendance object feature with each basic object feature respectively for each attendance object feature, and use a basic object corresponding to the successfully matched basic object feature as a target object corresponding to the attendance object feature;
the record generating module 35 is configured to generate an attendance record corresponding to each target object based on the shooting time and each target object.
Optionally, the region generating module 31 includes a positioning module, a similarity calculating module, and a region merging module, and each functional module is described in detail as follows:
and the positioning module is used for positioning the candidate target of the image to be identified and generating a candidate target frame area set.
And the similarity calculation module is used for calculating the comprehensive similarity of the adjacent target candidate frame regions in the candidate target frame region set.
And the region merging module is used for merging the target candidate frame regions to obtain a new target candidate frame region set if the comprehensive similarity is greater than a preset similarity threshold, returning to the step of calculating the comprehensive similarity of the adjacent target candidate frame regions in the candidate target frame region set to continue the execution until the comprehensive similarity of every two target candidate frame regions in the target candidate frame region set is lower than the preset similarity threshold, and generating the target candidate region set.
Optionally, the classification module 33 includes a mapping module and a feature obtaining module, and each functional module is described in detail as follows:
a mapping module for mapping the first feature data to a classification probability value based on the one-hot encoding.
And the characteristic acquisition module is used for classifying the first characteristic data according to the classification probability value to obtain at least one attendance object characteristic.
Optionally, the attendance recording apparatus further includes a first comparison table obtaining module, a first feature similarity calculating module, and a flag information determining module, and each of the functional modules is described in detail as follows:
the first comparison table acquisition module is used for acquiring a preset first class attribute comparison table, and the first class attribute comparison table comprises a basic object feature set and unique mark information corresponding to each basic object feature to be compared in the basic object feature set.
And the first feature similarity calculation module is used for calculating the feature similarity between each basic object feature to be compared and the attendance object feature based on a similarity algorithm to obtain a first feature similarity value.
And the marking information determining module is used for taking the unique marking information corresponding to the basic object features to be compared as the unique marking information of the attendance object features if the first feature similarity value is greater than a preset second threshold value.
Optionally, the attendance recording apparatus further includes a second comparison table obtaining module, a conversion module, a second feature similarity calculation module, and a first target object determination module, and each of the functional modules is described in detail as follows:
and the second comparison table acquisition module is used for acquiring a preset second category attribute comparison table, and the second category attribute comparison table comprises unique mark information to be compared corresponding to each basic object to be compared in the basic object set to be compared and the basic object feature set to be compared.
And the conversion module is used for converting the unique mark information and the unique mark information to be compared into a first feature vector and a second feature vector respectively based on the BERT model.
And the second feature similarity calculation module is used for calculating the feature similarity of the first feature vector and the second feature vector based on a similarity calculation method to obtain a second feature similarity value.
And the first target object determining module is used for taking the basic object to be compared corresponding to the unique mark information as the target object if the second characteristic similarity value is greater than a preset third threshold value.
Optionally, the matching module 34 includes a feature similarity value obtaining module and a second target object determining module, and each functional module is described in detail as follows:
and the characteristic similarity value acquisition module is used for calculating the characteristic similarity of the attendance object characteristics and the basic object characteristics based on a similarity calculation method to obtain a characteristic similarity value.
And the second target object determining module is used for taking the basic object corresponding to the basic object characteristics as the target object corresponding to the attendance object characteristics if the characteristic similarity value is greater than a preset fourth threshold value.
For specific limitations of the attendance recording device, reference may be made to the above limitations on the attendance recording method, which are not described herein again. All modules in the attendance recording device can be completely or partially realized through software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. It is noted that only the computer device 4 with the components memory 41, processor 42, and network interface 43 is shown, but it should be understood that not all of the shown components need be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., an SD card), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the computer device 4. Of course, the memory 41 may also include both internal and external storage devices of the computer device 4. In this embodiment, the memory 41 is generally used for storing the operating system installed on the computer device 4 and various types of application software, such as program code for controlling electronic files. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute the program code stored in the memory 41 or process data, such as program code for executing control of an electronic file.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The present application further provides another embodiment, namely a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program executable by at least one processor, so as to cause the at least one processor to execute the steps of the attendance recording method.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It should be understood that the above-described embodiments are merely illustrative of some, and not all, of the embodiments of the present invention; the appended drawings illustrate preferred embodiments of the invention and do not limit its scope. This application may be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the foregoing embodiments may still be modified, or some of the features therein replaced by equivalents. All equivalent structures made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. An attendance recording method is characterized by comprising the following steps:
receiving an attendance recording request, wherein the attendance recording request comprises an image to be identified and shooting time corresponding to the image to be identified;
generating a target candidate region set on the image to be identified according to a selective search algorithm;
extracting feature data of the target candidate region set based on a convolutional neural network to serve as first feature data;
classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature;
acquiring each basic object feature in a preset basic object set, respectively matching the attendance object feature with each basic object feature aiming at each attendance object feature, and taking a basic object corresponding to the successfully matched basic object feature as a target object corresponding to the attendance object feature;
and generating an attendance record corresponding to each target object based on the shooting time and each target object.
2. The attendance recording method of claim 1, wherein the step of generating a set of target candidate regions on the image to be identified according to a selective search algorithm comprises:
performing candidate target positioning on the image to be identified to generate a candidate target frame region set;
calculating the comprehensive similarity of adjacent target candidate frame regions in the candidate target frame region set;
if the comprehensive similarity is larger than a preset similarity threshold, merging the target candidate frame areas to obtain a new target candidate frame area set, returning to the step of calculating the comprehensive similarity of the adjacent target candidate frame areas in the candidate target frame area set to continue to be executed until the comprehensive similarity of every two target candidate frame areas in the target candidate frame area set is lower than the preset similarity threshold, and generating the target candidate area set.
3. The attendance recording method of claim 1, wherein the step of classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature comprises:
mapping the first feature data to a classification probability value based on one-hot encoding;
and classifying the first feature data according to the classification probability value to obtain at least one attendance object feature.
4. The attendance recording method of claim 1, wherein the steps between the step of classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature and the step of generating an attendance record corresponding to each of the target objects based on the photographing time and each of the target objects further comprise:
acquiring a preset first-class attribute comparison table, wherein the first-class attribute comparison table comprises a basic object feature set and unique mark information corresponding to each basic object feature to be compared in the basic object feature set;
calculating the feature similarity between each basic object feature to be compared and the attendance object feature based on a similarity algorithm to obtain a first feature similarity value;
and if the first feature similarity value is larger than a preset second threshold value, using the unique mark information corresponding to the basic object features to be compared as the unique mark information of the attendance object features.
5. The attendance recording method of claim 4, wherein after the step of, if the first feature similarity value is greater than the preset second threshold, using the unique mark information corresponding to the basic object feature to be compared as the unique mark information of the attendance object feature, the method further comprises:
acquiring a preset second-category attribute comparison table, wherein the second-category attribute comparison table comprises unique mark information to be compared corresponding to each basic object to be compared in a basic object set to be compared;
respectively converting the unique mark information and the unique mark information to be compared into a first feature vector and a second feature vector based on a BERT model;
calculating the feature similarity of the first feature vector and the second feature vector based on a similarity algorithm to obtain a second feature similarity value;
and if the second feature similarity value is greater than a preset third threshold, using the basic object to be compared corresponding to the unique mark information as the target object.
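Claim 5's comparison step, vectorizing two pieces of unique mark information and scoring them with a similarity value against a third threshold, can be sketched as below. In the patent the vectors come from a BERT model; here `toy_embed` is a stand-in character-frequency embedding (an explicit assumption) so that the cosine-similarity arithmetic itself is runnable.

```python
# Hedged sketch of claim 5: embed two mark strings as feature vectors,
# compute their cosine similarity, and accept the compared object as the
# target when the score exceeds a preset third threshold.
import math

def toy_embed(text, dim=64):
    """Stand-in for a BERT sentence vector: a character-frequency histogram.
    A real system would call a BERT encoder here."""
    vec = [0.0] * dim
    for ch in text:
        vec[ord(ch) % dim] += 1.0
    return vec

def cosine_similarity(u, v):
    """Second feature similarity value between the two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def is_target_object(mark, mark_to_compare, third_threshold=0.9):
    """Claim 5's decision: the compared basic object becomes the target
    object when the similarity exceeds the threshold."""
    v1, v2 = toy_embed(mark), toy_embed(mark_to_compare)
    return cosine_similarity(v1, v2) > third_threshold
```

The mark strings (`"EMP-001"` etc.) and the default threshold are purely illustrative.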
6. The attendance recording method according to claim 1, wherein the step of matching the attendance object feature with each of the basic object features and using the basic object corresponding to the successfully matched basic object feature as the target object corresponding to the attendance object feature comprises:
calculating the feature similarity between the attendance object feature and the basic object feature based on a similarity algorithm to obtain a feature similarity value;
and if the feature similarity value is greater than a preset fourth threshold, using the basic object corresponding to the basic object feature as the target object corresponding to the attendance object feature.
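Claim 6's matching loop can be sketched as follows. The similarity function and all names are illustrative assumptions: each detected attendance feature is scored against every basic object feature, and only a match above the preset fourth threshold yields a target object.

```python
# Hedged sketch of claim 6: match each attendance object feature against
# every basic object feature; keep the best basic object whose similarity
# exceeds the fourth threshold.

def feature_similarity(a, b):
    """Toy similarity: mean absolute difference mapped into (0, 1],
    where identical vectors score 1.0."""
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + diff)

def match_targets(attendance_features, basic_objects, fourth_threshold=0.8):
    """Return {attendance feature index: basic object name} for every
    attendance feature whose best match clears the threshold."""
    matches = {}
    for i, att in enumerate(attendance_features):
        best_name, best_sim = None, fourth_threshold
        for name, base in basic_objects.items():
            sim = feature_similarity(att, base)
            if sim > best_sim:
                best_name, best_sim = name, sim
        if best_name is not None:
            matches[i] = best_name  # successfully matched -> target object
    return matches
```

An unmatched feature simply produces no entry, corresponding to an attendance image region with no registered basic object.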
7. An attendance recording apparatus, comprising:
the receiving module is used for receiving an attendance recording request, wherein the attendance recording request comprises an image to be identified and a shooting time corresponding to the image to be identified;
the region generation module is used for generating a target candidate region set on the image to be identified according to a selective search algorithm;
the extraction module is used for extracting the feature data of the target candidate region set based on a convolutional neural network to serve as first feature data;
the classification module is used for classifying the first feature data based on a classification algorithm to obtain at least one attendance object feature;
the matching module is used for acquiring each basic object feature in a preset basic object set, matching the attendance object feature with each basic object feature respectively aiming at each attendance object feature, and taking the basic object corresponding to the successfully matched basic object feature as a target object corresponding to the attendance object feature;
and the record generating module is used for generating an attendance record corresponding to each target object based on the shooting time and each target object.
8. The attendance recording apparatus of claim 7, wherein the zone generation module comprises:
the positioning module is used for carrying out candidate target positioning on the image to be identified and generating a candidate target frame area set;
the similarity calculation module is used for calculating the comprehensive similarity of adjacent target candidate frame regions in the candidate target frame region set;
and the region merging module is used for merging the adjacent target candidate frame regions to obtain a new target candidate frame region set if the comprehensive similarity is greater than a preset similarity threshold, and returning to the step of calculating the comprehensive similarity of adjacent target candidate frame regions in the candidate target frame region set, until the comprehensive similarity of every two target candidate frame regions in the target candidate frame region set is lower than the preset similarity threshold, so as to generate the target candidate region set.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the attendance recording method of any of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the attendance recording method according to any of claims 1 to 6.
CN202111128834.7A 2021-09-26 2021-09-26 Attendance recording method and device, computer equipment and storage medium Pending CN113869714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111128834.7A CN113869714A (en) 2021-09-26 2021-09-26 Attendance recording method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113869714A true CN113869714A (en) 2021-12-31

Family

ID=78994412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111128834.7A Pending CN113869714A (en) 2021-09-26 2021-09-26 Attendance recording method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113869714A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217225A (en) * 2014-09-02 2014-12-17 中国科学院自动化研究所 A visual target detection and labeling method
CN109754481A (en) * 2019-01-18 2019-05-14 深圳壹账通智能科技有限公司 Attendance record generation method, device, computer equipment and storage medium
CN110852703A (en) * 2019-10-22 2020-02-28 佛山科学技术学院 Attendance checking method, system, equipment and medium based on side face multi-feature fusion face recognition
CN111325083A (en) * 2019-08-01 2020-06-23 杭州海康威视***技术有限公司 Method and device for recording attendance information
CN111914742A (en) * 2020-07-31 2020-11-10 辽宁工业大学 Attendance checking method, system, terminal equipment and medium based on multi-mode biological characteristics
CN113140051A (en) * 2020-01-20 2021-07-20 上海依图信息技术有限公司 Attendance checking method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112329659B (en) Weak supervision semantic segmentation method based on vehicle image and related equipment thereof
CN112395979B (en) Image-based health state identification method, device, equipment and storage medium
CN113127633B (en) Intelligent conference management method and device, computer equipment and storage medium
CN112861662B (en) Target object behavior prediction method based on face and interactive text and related equipment
CN114550053A (en) Traffic accident responsibility determination method, device, computer equipment and storage medium
CN113705534A (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium based on deep vision
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium
CN113343898B (en) Mask shielding face recognition method, device and equipment based on knowledge distillation network
CN114266252A (en) Named entity recognition method, device, equipment and storage medium
CN114022841A (en) Personnel monitoring and identifying method and device, electronic equipment and readable storage medium
CN116863116A (en) Image recognition method, device, equipment and medium based on artificial intelligence
CN116453125A (en) Data input method, device, equipment and storage medium based on artificial intelligence
CN115757075A (en) Task abnormity detection method and device, computer equipment and storage medium
CN115880702A (en) Data processing method, device, equipment, program product and storage medium
CN113901817A (en) Document classification method and device, computer equipment and storage medium
CN113869714A (en) Attendance recording method and device, computer equipment and storage medium
CN113706207A (en) Order transaction rate analysis method, device, equipment and medium based on semantic analysis
CN114298460A (en) Material work order assignment processing method, device, equipment and storage medium
CN113743721A (en) Marketing strategy generation method and device, computer equipment and storage medium
CN113255824A (en) Method and device for training classification model and data classification
CN112733645A (en) Handwritten signature verification method and device, computer equipment and storage medium
CN113723093B (en) Personnel management policy recommendation method and device, computer equipment and storage medium
CN117172632B (en) Enterprise abnormal behavior detection method, device, equipment and storage medium
CN117312991B (en) Method for constructing classification recognition system and related equipment thereof
CN112507141B (en) Investigation task generation method, investigation task generation device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination