CN113936312B - Face recognition base screening method based on deep learning graph convolution network - Google Patents

Face recognition base screening method based on deep learning graph convolution network

Info

Publication number
CN113936312B
CN113936312B (application CN202111185859.0A)
Authority
CN
China
Prior art keywords
image
node
network
model
face
Prior art date
Legal status
Active
Application number
CN202111185859.0A
Other languages
Chinese (zh)
Other versions
CN113936312A (en)
Inventor
王乾宇
周金明
张世坤
Current Assignee
Nanjing Inspector Intelligent Technology Co ltd
Original Assignee
Nanjing Inspector Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Inspector Intelligent Technology Co ltd filed Critical Nanjing Inspector Intelligent Technology Co ltd
Priority to CN202111185859.0A priority Critical patent/CN113936312B/en
Publication of CN113936312A publication Critical patent/CN113936312A/en
Application granted granted Critical
Publication of CN113936312B publication Critical patent/CN113936312B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/277Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face


Abstract

The invention discloses a face recognition base-image screening method based on a deep learning graph convolution network, which comprises the following steps: first, acquire video from the application scenario; second, select a face recognition network as the backbone according to the requirements of the scenario, combine it with a graph convolution network as a branch to construct a face image quality evaluation model, and train the model; third, detect face objects in the captured video; fourth, obtain candidate images using a mask detection algorithm; fifth, add some interference images to the candidate images obtained in the previous step, input them into the trained face recognition base-image screening model for screening, and have the model output the image with the highest confidence score among the images under test; sixth, store the high-quality face image output by the model as the base image. Screening the face base library with this method ensures both screening efficiency and the sharpness and feature integrity of the screened face images.

Description

Face recognition base screening method based on deep learning graph convolution network
Technical Field
The invention relates to the fields of computer vision, deep-learning face recognition and intelligent monitoring, and in particular to a face recognition base-image screening method based on a deep learning graph convolution network.
Background
With the rapid development of artificial intelligence in China, face recognition technology has been widely applied across many fields, bringing great convenience to people's lives and considerably improving the safety of public places. A precondition of face recognition is enrolling face base-image information: if the enrolled base image is of poor quality, for example blurred or with facial features occluded, recognition errors are likely when faces are compared against it. Selecting an appropriate method to screen out sharp, high-quality face images as base information therefore plays a vital role in face recognition and helps improve its accuracy and stability.
At present, most face recognition base-image construction work still relies on manual screening, in which high-quality, sharp faces are selected from a large number of face images to serve as the recognition base. Manual inspection and screening, however, has obvious drawbacks: it depends on the experience of the staff, whose subjectivity dominates the screening process, so the results follow no unified standard; and inspection and screening are time-consuming, with low working efficiency.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a face recognition base-image screening method based on a deep learning graph convolution network. Screening the face recognition base through this model ensures both screening efficiency and the sharpness and feature integrity of the screened face images. The highest-quality face image of each detected object is obtained through neural-network learning and used as the base image, so no screening threshold needs to be set manually, reducing the influence of insufficient manual experience. The technical solution is as follows:
A face recognition base-image screening method based on a deep learning graph convolution network comprises the following steps:
The first step is to acquire the video shot by the monitoring camera under the use scene.
And secondly, selecting a face recognition network as a main network according to the requirement of a use scene, combining a graph convolution network as a branch, constructing a face image quality evaluation model, and training the model.
Thirdly, detecting face objects from the captured video, using the lightweight face detection algorithm MTCNN. A detected face object is selected as a target and marked so that it is not selected again in subsequent detections; the selected target is tracked frame by frame with the KCF target tracking algorithm, and the frame images of the target's face are cropped and stored.
And fourthly, detecting the images stored in the third step with the PaddleHub mask detection algorithm and, according to the detection results, keeping the images in which no mask is worn as candidate images.
And fifthly, adding part of interference images and the images to be selected obtained in the last step, inputting the images to a trained face recognition base image screening model for screening, and outputting the image with the highest confidence score in the images to be detected by the model.
And sixthly, taking the high-quality face image output by the model as a base for storage.
Preferably, in the second step, the face recognition network is trained using a ResNet network.
Further, when the face recognition network is trained using a ResNet network, the model is trained in two stages:
The first stage: the labeled dataset is used as input to train the backbone network Resnet first for extracting the features of the image to be detected.
And a second stage: training the GCN-V network:
(1) Constructing a data set:
① Given a dataset with classification labels, the features of each image in the dataset are extracted using the trained backbone network ResNet to form a feature set F = {f_1, f_2, …, f_N}, where f_i ∈ R^D, D is the feature dimension, i ∈ {1, 2, 3, …, N}, and N is the number of images in the given dataset. Each image feature f_i is defined as node i; the similarity between node i and node j is denoted a_{i,j}, where a_{i,j} is the cosine similarity between f_i and f_j, j ∈ {1, 2, 3, …, N}.
② The k nearest neighbour nodes of every image node are obtained from the pairwise similarities, and a similarity graph G = (V, E) is constructed from them, specifically: each image feature is a node in V; the similarity values between a node and every other node are sorted from largest to smallest, and the first k nodes are taken as the node's neighbour nodes, forming its neighbourhood; connecting the vertex to its neighbour nodes yields k edges in E. The similarity graph can be expressed as a vertex feature matrix F' of size N×D and an adjacency matrix A of size N×N; in A, if v_i and v_j are not connected, the similarity a_{i,j} between image node i and node j is set to 0.
③ Ground-truth confidence labels are obtained. Since datasets typically vary widely within a class, each image may merit a different confidence value even when all the images belong to the same class. A high-confidence image has prominent intra-class features and a high probability of belonging to its class, while a low-confidence image tends to be marginal, with weak intra-class features. Based on this observation, the confidence c_i of each node is defined from its neighbourhood: over all neighbour nodes of the current node, the similarity a_{i,j} is added when the neighbour has the same class label as the current node and subtracted when it does not, and the sum is divided by the number of neighbours of the current node to obtain the node confidence.
(2) Training of a model:
① The model inputs are the vertex feature matrix F' and the adjacency matrix a in the dataset constructed in the previous step.
② Performing aggregation operation on the feature matrix F 'and the adjacent matrix A, obtaining new features through the L-layer convolution layers, performing regression on the new features at the last layer of the network to obtain a predicted confidence value C',
C'=FLW+b
Wherein W is a trainable regression variable, b is a trainable deviation, and L is the number of layers of the graph convolution, which can be adjusted as required; the predictive confidence of node v i may be derived fromExtracted from the corresponding elements in the formulaAnd (3) representing.
③ The model is trained by minimizing the Mean Square Error (MSE) between the true confidence and the confidence score of the model predictions.
Preferably, in the second step, the backbone network is not fixed and may be replaced by any face recognition network according to the requirements of use.
Preferably, the fifth step is specifically:
(1) First, the image feature set F = {f_1, f_2, …, f_N} is obtained through the backbone network.
(2) The feature set F output by the backbone network is processed: the k nearest neighbour nodes of all the image nodes are obtained from the pairwise similarities, a similarity graph G = (V, E) is constructed from them, and the graph is represented as a vertex feature matrix F' and an adjacency matrix A.
(3) The vertex feature matrix F' and the adjacency matrix A are taken as inputs of the GCN-V network, and the confidence score of each image under test is computed through GCN-V: C' = F_L W + b.
(4) And (5) screening out the image output with the highest confidence score.
Compared with the prior art, the technical solution has the following beneficial effects. A face recognition base-image screening model is built on a deep-learning face recognition network and a graph convolution network, and face base screening through this model ensures both screening efficiency and the sharpness and feature integrity of the screened face images; it also ensures that exactly one high-quality image per object is kept in the base library. The highest-quality face image of each detected object is obtained through neural-network learning and used as the base image, so no screening threshold needs to be set manually and the influence of insufficient manual experience is reduced. The method also offers high real-time performance with low hardware requirements, reducing hardware cost, and can effectively detect and screen out high-quality faces as base images.
Detailed Description
In order to clarify the technical scheme and working principle of the present invention, the following describes the embodiments of the present disclosure in further detail. Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
The terms "first step," "second step," "third step," and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those described herein.
The embodiment of the disclosure provides a face recognition base-image screening method based on a deep learning graph convolution network, which comprises the following steps:
firstly, acquiring a video shot by a monitoring camera under a use scene; the method can be suitable for face recognition base construction in any scene, such as airports, railway stations, banks and the like.
And secondly, selecting a face recognition network as a main network according to the requirement of a use scene, combining a graph convolution network as a branch, constructing a face image quality evaluation model, and training the model.
Preferably, in the second step, the backbone network is not fixed and may be replaced by any face recognition network according to the requirements of use; this makes the method more flexible, so its performance is not affected by changes in the application scenario. Any face recognition network can be used, and different networks affect the screening accuracy and detection speed of the model differently.
preferably, the face recognition network is trained using Resnet networks.
Further, when the face recognition network is trained by using Resnet networks, the model is trained in two stages.
The first stage: the labeled dataset is used as input to train the backbone network Resnet first for extracting the features of the image to be detected.
And a second stage: training the GCN-V network:
(1) Constructing a data set:
① Given a dataset with classification labels, the features of each image in the dataset are extracted using the trained backbone network ResNet to form a feature set F = {f_1, f_2, …, f_N}, where f_i ∈ R^D, D is the feature dimension, i ∈ {1, 2, 3, …, N}, and N is the number of images in the given dataset. Each image feature f_i is defined as node i; the similarity between node i and node j is denoted a_{i,j}, where a_{i,j} is the cosine similarity between f_i and f_j, j ∈ {1, 2, 3, …, N}.
② The k nearest neighbour nodes of every image node are obtained from the pairwise similarities, and a similarity graph G = (V, E) is constructed from them, specifically: each image feature is a node in V; the similarity values between a node and every other node are sorted from largest to smallest, and the first k nodes are taken as the node's neighbour nodes, forming its neighbourhood; connecting the vertex to its neighbour nodes yields k edges in E. The similarity graph can be expressed as a vertex feature matrix F' of size N×D and an adjacency matrix A of size N×N; in A, if v_i and v_j are not connected, the similarity a_{i,j} between image node i and node j is set to 0.
③ Ground-truth confidence labels are obtained. Since datasets typically vary widely within a class, each image may merit a different confidence value even when all the images belong to the same class. A high-confidence image has prominent intra-class features and a high probability of belonging to its class, while a low-confidence image tends to be marginal, with weak intra-class features. Based on this observation, the confidence c_i of each node is defined from its neighbourhood: over all neighbour nodes of the current node, the similarity a_{i,j} is added when the neighbour has the same class label as the current node and subtracted when it does not, and the sum is divided by the number of neighbours of the current node to obtain the node confidence.
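The dataset-construction steps ① to ③ above can be sketched in a few lines of NumPy. This is an illustrative reading of the description, not the patent's actual implementation; the function names are ours, and the sketch assumes the backbone features are L2-normalised so that cosine similarity reduces to a dot product.

```python
import numpy as np

def build_similarity_graph(feats, k):
    """Step ②: build the similarity graph G = (V, E).

    feats: (N, D) array of L2-normalised image features from the backbone.
    Returns the vertex feature matrix F' (here simply the features) and the
    N x N adjacency matrix A, where a_ij is the cosine similarity between
    f_i and f_j, kept only for the k most similar nodes of each node and
    set to 0 everywhere else.
    """
    sims = feats @ feats.T                    # cosine similarity of unit vectors
    np.fill_diagonal(sims, -np.inf)           # a node is not its own neighbour
    A = np.zeros_like(sims)
    for i in range(len(feats)):
        nbrs = np.argsort(sims[i])[::-1][:k]  # indices of the k nearest nodes
        A[i, nbrs] = sims[i, nbrs]
    return feats, A

def confidence_labels(A, labels):
    """Step ③: ground-truth confidence c_i. Over node i's neighbours
    (nonzero entries of row i of A), add a_ij when the neighbour shares i's
    class label, subtract a_ij otherwise, then divide by the neighbour count."""
    c = np.zeros(len(labels))
    for i in range(len(labels)):
        nbrs = np.nonzero(A[i])[0]
        if len(nbrs) == 0:
            continue
        signs = np.where(labels[nbrs] == labels[i], 1.0, -1.0)
        c[i] = float((signs * A[i, nbrs]).sum()) / len(nbrs)
    return c
```

Note that this k-NN graph is directed: node j being among node i's k neighbours does not imply the converse, so A is generally not symmetric.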
(2) Training of a model:
① The input of the model is a vertex characteristic matrix F' and an adjacent matrix A in the data set constructed in the last step;
② Performing aggregation operation on the feature matrix F 'and the adjacent matrix A, obtaining new features through the L-layer convolution layers, performing regression on the new features at the last layer of the network to obtain a predicted confidence value C',
C'=FLW+b
Where W is a trainable regression variable, b is a trainable bias, and L is the number of layers of the graph convolution, which can be adjusted as needed. The predictive confidence of node v i may be derived fromExtracted from the corresponding elements in the formulaAnd (3) representing.
③ The model is trained by minimizing the Mean Square Error (MSE) between the true confidence and the confidence score of the model predictions.
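The aggregation, regression, and MSE objective of steps ① to ③ can be sketched as follows. This is one plausible reading of the description (row-normalised adjacency with self-loops and ReLU between layers); the layer shapes and names are assumptions, and a real implementation would use a deep-learning framework with automatic differentiation rather than plain NumPy.

```python
import numpy as np

def gcnv_forward(F0, A, layer_weights, w_out, b_out):
    """Forward pass of a GCN-V-style network: L graph-convolution layers that
    aggregate neighbour features, followed by the regression C' = F_L W + b."""
    A_hat = A + np.eye(len(A))                         # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalise
    H = F0
    for W_l in layer_weights:                          # L aggregation layers
        H = np.maximum(A_hat @ H @ W_l, 0.0)           # aggregate + ReLU
    return H @ w_out + b_out                           # predicted confidences C'

def mse_loss(c_pred, c_true):
    """Training objective: mean square error between the predicted and
    ground-truth confidence scores."""
    return float(np.mean((c_pred - c_true) ** 2))
```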
Thirdly, detecting face objects from the captured video, using the lightweight face detection algorithm MTCNN. A detected face object is selected as a target and marked so that it is not selected again in subsequent detections; the selected target is tracked frame by frame with the KCF target tracking algorithm, and the frame images of the target's face are cropped and stored.
The face recognition base-image screening model designed by the invention places no constraint on the input images and accepts inputs of any size, so the images obtained with the tracking algorithm need not be resized, which improves the applicability of the model. If the operating unit requires a specific base-image size, the images can be further preprocessed after all the frame images of the target object have been acquired in this step: they are resized to a uniform size, and the subsequent operations are performed on the resized images.
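The optional resizing step can be illustrated with a minimal nearest-neighbour resize; a production system would instead use an image library's resize (e.g. bilinear interpolation), so this sketch is only meant to show the uniform-size preprocessing.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize, enough to bring all candidate base images to a
    uniform size. img: (H, W) or (H, W, C) array; returns the resized array."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[rows][:, cols]
```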
And fourthly, detecting the images stored in the third step with the PaddleHub mask detection algorithm and, according to the detection results, keeping the images in which no mask is worn as candidate images. Mask detection preprocesses the images so that the facial features of the finally screened high-quality base images are unoccluded, reducing the influence of incomplete face images on the similarity-graph construction in the next step.
And fifthly, adding part of interference images and the images to be selected obtained in the last step, inputting the images to a trained face recognition base image screening model for screening, and outputting the image with the highest confidence score in the images to be detected by the model.
Preferably, the fifth step is specifically:
(1) First, the image feature set F = {f_1, f_2, …, f_N} is obtained through the backbone network.
(2) The feature set F output by the backbone network is processed: the k nearest neighbour nodes of all the image nodes are obtained from the pairwise similarities, a similarity graph G = (V, E) is constructed from them, and the graph is represented as a vertex feature matrix F' and an adjacency matrix A.
(3) The vertex feature matrix F' and the adjacency matrix A are taken as inputs of the GCN-V network, and the confidence score of each image under test is computed through GCN-V: C' = F_L W + b.
(4) And (5) screening out the image output with the highest confidence score.
The confidence represents the quality score of an image: the higher the score, the higher the image quality and the better the image meets the requirements of a base-library image. A high-confidence image contains comprehensive intra-class features and represents its class more completely, while a low-confidence image typically contains indistinct, incomplete intra-class features and is not representative of its class. The image with the highest confidence score within a class is therefore the highest-quality image of that class, the one whose intra-class features are most comprehensive among all images of the class. Based on the confidence score, the face image with the highest score within each class is screened out as the base image.
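Given the confidence scores produced by GCN-V, the final selection of the fifth step reduces to an argmax over one object's candidate images; a minimal sketch with illustrative names:

```python
import numpy as np

def screen_base_image(image_ids, confidences):
    """Return the id of the candidate image with the highest confidence score;
    this image is stored as the base (gallery) image in the sixth step."""
    best = int(np.argmax(confidences))
    return image_ids[best]
```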
And sixthly, taking the high-quality face image output by the model as a base for storage.
While the invention has been described above by way of example, its implementation is evidently not limited to the particular embodiments described. Various insubstantial modifications of the method concept and technical solution of the invention, as well as direct applications of the conception and technical solution to other occasions without improvement, all fall within the protection scope of the invention.

Claims (2)

1. A face recognition base screening method based on a deep learning graph convolution network, characterized by comprising the following steps:
firstly, acquiring a video shot by a monitoring camera under a use scene;
Secondly, selecting a face recognition network as a main network according to the requirement of a use scene, combining a graph convolution network as a branch, constructing a face image quality evaluation model, and training the model;
The face recognition network is trained by using Resnet network, and the model is trained in two stages:
the first stage: training a backbone network Resnet by taking a dataset with a label as input, and extracting the characteristics of an image to be detected;
and a second stage: training the GCN-V network:
(1) Constructing a data set:
① Given a dataset with classification labels, the features of each image in the dataset are extracted using the trained backbone network ResNet to form a feature set F = {f_1, f_2, …, f_N}, wherein f_i ∈ R^D, D is the feature dimension, i ∈ {1, 2, 3, …, N}, and N is the number of images in the given dataset; each image feature f_i is defined as node i, the similarity between node i and node j is denoted a_{i,j}, a_{i,j} being the cosine similarity between f_i and f_j, and j ∈ {1, 2, 3, …, N};
② The k nearest neighbour nodes of all the image nodes are obtained from the pairwise similarities and a similarity graph G = (V, E) is constructed from them, specifically: each image feature is a node in V; the similarity values between a node and every other node are sorted from largest to smallest, and the first k nodes are taken as the node's neighbour nodes, forming its neighbourhood; connecting the vertex to its neighbour nodes yields k edges in E; the similarity graph is represented by a vertex feature matrix F' of size N×D and an adjacency matrix A of size N×N, and in A, if v_i and v_j are not connected, the similarity a_{i,j} between image node i and node j is set to 0;
③ Ground-truth confidence labels are obtained; since datasets typically vary widely within a class, each image may merit a different confidence value even when all the images belong to the same class; a high-confidence image has prominent intra-class features and a high probability of belonging to its class, while a low-confidence image is marginal, with weaker intra-class features; based on this, the confidence c_i of each node is defined from its neighbourhood: over all neighbour nodes of the current node, the similarity is added when the neighbour has the same class label as the current node and subtracted when it does not, and the sum is divided by the number of neighbours of the current node to obtain the node confidence;
(2) Training of a model:
① The input of the model is a vertex characteristic matrix F' and an adjacent matrix A in the data set constructed in the last step;
② Performing aggregation operation on the feature matrix F 'and the adjacent matrix A, obtaining new features through the L-layer convolution layers, performing regression on the new features at the last layer of the network to obtain a predicted confidence value C',
C′=FLW+b
Wherein W is a trainable regression variable, b is a trainable deviation, and L is the number of layers of the graph convolution, and the number of layers is adjusted according to the requirement; predictive confidence slave for node v i Extracted from the corresponding elements in the formulaA representation;
③ Training the model by minimizing the mean square error between the true confidence and the confidence score of the model prediction;
thirdly, detecting a face object from the captured video, and adopting a lightweight face detection algorithm MTCNN for face detection; selecting a detected face object as a target, marking the target, preventing the next detection from being repeatedly selected, tracking the selected target by using a KCF target tracking algorithm in each frame, intercepting a frame image of the face of the target, and storing the frame image;
fourthly, detecting the images stored in the third step with the PaddleHub mask detection algorithm and, according to the detection results, keeping the images in which no mask is worn as candidate images;
Fifthly, adding part of interference images and the images to be selected obtained in the last step, inputting the images to a trained face recognition base image screening model for screening, and outputting the image with the highest confidence score in the images to be detected by the model;
the method comprises the following steps:
(1) First, the image feature set F = {f_1, f_2, …, f_N} is obtained through the backbone network;
(2) The feature set F output by the backbone network is processed: the k nearest neighbour nodes of all the image nodes are obtained from the pairwise similarities, a similarity graph G = (V, E) is constructed from them and expressed as a vertex feature matrix F' and an adjacency matrix A;
(3) The vertex feature matrix F′ and the adjacency matrix A are taken as inputs of the GCN-V network, and the confidence score of each image under test is computed through GCN-V: C′ = F_L W + b;
(4) Screening out the image output with the highest confidence coefficient score;
And sixthly, taking the high-quality face image output by the model as a base for storage.
2. The face recognition base screening method based on the deep learning graph convolution network according to claim 1, wherein in the second step the backbone network is not fixed and may be replaced by any face recognition network according to the requirements of use.
CN202111185859.0A 2021-10-12 2021-10-12 Face recognition base screening method based on deep learning graph convolution network Active CN113936312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111185859.0A CN113936312B (en) 2021-10-12 2021-10-12 Face recognition base screening method based on deep learning graph convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111185859.0A CN113936312B (en) 2021-10-12 2021-10-12 Face recognition base screening method based on deep learning graph convolution network

Publications (2)

Publication Number Publication Date
CN113936312A CN113936312A (en) 2022-01-14
CN113936312B (en) 2024-06-07

Family

ID=79278328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111185859.0A Active CN113936312B (en) 2021-10-12 2021-10-12 Face recognition base screening method based on deep learning graph convolution network

Country Status (1)

Country Link
CN (1) CN113936312B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711384A (en) * 2019-01-09 2019-05-03 江苏星云网格信息技术有限公司 A kind of face identification method based on depth convolutional neural networks
CN110070010A (en) * 2019-04-10 2019-07-30 武汉大学 A kind of face character correlating method identified again based on pedestrian
CN110472495A (en) * 2019-07-08 2019-11-19 南京邮电大学盐城大数据研究院有限公司 A kind of deep learning face identification method based on graphical inference global characteristics
CN111103891A (en) * 2019-12-30 2020-05-05 西安交通大学 Unmanned aerial vehicle rapid posture control system and method based on skeleton point detection
CN111339983A (en) * 2020-03-05 2020-06-26 四川长虹电器股份有限公司 Method for fine-tuning face recognition model
WO2020155873A1 (en) * 2019-02-02 2020-08-06 福州大学 Deep apparent features and adaptive aggregation network-based multi-face tracking method
CN111753884A (en) * 2020-06-08 2020-10-09 浙江工业大学 Depth map convolution model defense method and device based on network feature reinforcement
CN112215822A (en) * 2020-10-13 2021-01-12 北京中电兴发科技有限公司 Face image quality evaluation method based on lightweight regression network
CN112381987A (en) * 2020-11-10 2021-02-19 中国人民解放军国防科技大学 Intelligent entrance guard epidemic prevention system based on face recognition
CN112488034A (en) * 2020-12-14 2021-03-12 上海交通大学 Video processing method based on lightweight face mask detection model
CN112613385A (en) * 2020-12-18 2021-04-06 成都三零凯天通信实业有限公司 Face recognition method based on monitoring video
WO2021134871A1 (en) * 2019-12-30 2021-07-08 深圳市爱协生科技有限公司 Forensics method for synthesized face image based on local binary pattern and deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210000404A1 (en) * 2019-07-05 2021-01-07 The Penn State Research Foundation Systems and methods for automated recognition of bodily expression of emotion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant