CN114331904A - Face occlusion recognition method - Google Patents

Face occlusion recognition method

Info

Publication number: CN114331904A
Application number: CN202111665913.1A
Authority: CN (China)
Prior art keywords: layer, face, network model, data set, image
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114331904B (en)
Inventors: 陈波, 陈圩钦, 邓媛丹, 曾俊涛, 朱舜文, 王庆先
Assignees (current and original): Research Institute Of Yibin University Of Electronic Science And Technology; University of Electronic Science and Technology of China
Application filed by Research Institute Of Yibin University Of Electronic Science And Technology and University of Electronic Science and Technology of China
Priority to CN202111665913.1A; publication of CN114331904A; application granted; publication of CN114331904B

Classifications

    • Y02T 10/40 Engine management systems (under Y: general tagging of new technological developments; Y02T: climate change mitigation technologies related to transportation; Y02T 10/00: road transport of goods or passengers; Y02T 10/10: internal combustion engine [ICE] based vehicles)

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face occlusion recognition method comprising the following steps. S1: restore the occluded face image by using a structure generator, a texture generator, and a first network model connected in sequence, to obtain a restored face image. S2: recognize the restored face image by using a second network model, to obtain a face occlusion recognition result. The method addresses the technical problems of existing face occlusion recognition approaches, namely that image feature information is difficult to extract and that face occlusion recognition cannot be performed efficiently, and thereby realizes face occlusion recognition effectively.

Description

Face occlusion recognition method
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face occlusion recognition method.
Background
Face recognition technology is an important invention that has greatly promoted the development of human civilization and of science and technology. However, because acquisition scenes are complex and subjects move unpredictably, an acquired face may be partially occluded, which severely degrades recognition accuracy and thereby limits the range of applications of face recognition.
In recent years, deep learning has matured for non-occluded face recognition, reaching accuracies as high as 99.23%. Several problems nevertheless remain. First, existing models use very deep networks, so training takes a long time and there is room for further optimization. Second, real environments are variable and complex, yet most existing methods for recognizing occluded faces are supervised and rely on a database of complete faces. Third, existing unsupervised occluded-face recognition methods cannot extract face features effectively and recognize poorly. Finding a lightweight and efficient method for recognizing occluded faces is therefore a major current concern of researchers.
Disclosure of Invention
The invention aims to provide a face occlusion recognition method that solves the technical problems of existing methods, namely that image feature information is difficult to extract and that face occlusion recognition cannot be performed efficiently, so that face occlusion recognition can be realized effectively.
The technical scheme for solving these technical problems is as follows.
The invention provides a face occlusion recognition method comprising the following steps:
S1: restoring the occluded face image by using a structure generator, a texture generator and a first network model which are connected in sequence, to obtain a restored face image;
S2: recognizing the restored face image by using a second network model, to obtain a face occlusion recognition result.
Optionally, step S1 includes:
S11: training the first network model and the original structure generator with a non-occluded face data set, to obtain a trained first network model and a trained structure generator;
S12: randomly extracting a subset of images from the non-occluded face data set, to obtain an initial sample set;
S13: randomly adding occlusion features to the images in the initial sample set, to obtain an occluded-face data set;
S14: processing the occluded-face data set with the trained structure generator, to obtain a structural-feature image data set;
S15: training a texture generator with the occluded-face data set and the structural-feature image data set, to obtain a trained texture generator;
S16: processing the structural-feature image data set with the trained texture generator, to obtain a first face-restoration image data set;
S17: inputting the first face-restoration image data set into the trained first network model, to obtain a second face-restoration image data set, where the second face-restoration image data set comprises the restored face images.
Optionally, the structure generator includes a first input layer, a first gated convolution layer, a first dilated convolution layer, a second gated convolution layer, an autoregressive network, and a first output layer, which are connected in sequence.
Optionally, the texture generator includes a second input layer, a third gated convolution layer, a first residual network, a fourth gated convolution layer, a second dilated convolution layer, a fifth gated convolution layer, a second residual network, a sixth gated convolution layer, a third residual network, a seventh gated convolution layer, and a second output layer, which are connected in sequence; a third input layer, an attention mechanism layer, and the second residual network are connected in sequence, and the output of the second dilated convolution layer is also connected to the attention mechanism layer;
the first output layer is connected with the second input layer.
Optionally, the first network model is a hierarchical network model, specifically a VQ-VAE network model; the VQ-VAE network model includes an input module, an encoder, a decoder, and an output module, which are connected in sequence; the input module is connected to the second output layer, and the output module is connected to the second network model.
Optionally, the encoder includes a first convolution layer, a first vector quantization layer, a second convolution layer, a first skip convolution layer, a third convolution layer, and a second vector quantization layer, connected in sequence; the first convolution layer, as the input layer of the encoder, is connected to the input module, and the second vector quantization layer, as the output layer of the encoder, is connected to the decoder.
Optionally, the decoder comprises a second skip convolution layer, which as the input layer of the decoder is connected to the second vector quantization layer, and a fourth convolution layer, which as the output layer of the decoder is connected to the output module.
Optionally, step S2 includes:
training a second network model with the non-occluded face data set, to obtain a trained second network model;
inputting the second face-restoration image data set into the trained second network model, to obtain a recognized image set;
outputting the recognized image set as the face occlusion recognition result.
Optionally, the second network model is a recognition network model, and the recognition network model is a SqueezeNet network model.
Optionally, the SqueezeNet network model includes a compression (squeeze) layer and an expansion (expand) layer; the squeeze layer is connected to the first network model, the image produced by the squeeze operation enters the expand layer, and the expand layer performs expansion and recognition on that image using an activation function.
The invention has the following beneficial effects:
(1) By exploiting an attention mechanism and a hierarchical network model, the structure and texture of the generated image are more realistic.
(2) The method solves the occluded-face recognition problem of traditional methods and restores occlusions of different degrees and different areas well.
(3) The recognition-module network is lightweight, which greatly improves recognition efficiency.
Drawings
FIG. 1 is a schematic diagram of a face occlusion recognition process provided by the present invention;
FIG. 2 is a flow chart of a face occlusion recognition method provided by the present invention;
FIG. 3 is a schematic diagram of a structure generator;
FIG. 4 is a schematic diagram of a texture generator;
FIG. 5 is a flowchart illustrating the substeps of step S1;
FIG. 6 is a schematic structural diagram of a VQ-VAE network model;
FIG. 7 is a schematic diagram of the structure of the SqueezeNet network model.
Detailed Description
The principles and features of the present invention are described below in conjunction with the accompanying drawings; the examples are provided by way of illustration only and are not intended to limit the scope of the invention.
Examples
The invention provides a face occlusion recognition method. Its basic idea is as follows: starting from a data set of non-occluded face images, a portion of the images is selected and overlaid with occlusion templates to form a training set for a generative network model; the model restores occluded face images; and the restored images are fed, as a data set, into a pre-trained lightweight classification network for recognition. This effectively solves the occluded-face recognition problem for unsupervised learning methods.
Unconstrained occluded face images are difficult to complete, existing methods struggle in real complex scenes, and the completed region often exhibits structural distortion or texture errors. To address these problems, a two-stage model is proposed for restoring occluded face images. First, a structure generator processes the occluded face image to generate diverse structural-feature images. Then, a texture generator applies texture enhancement to the diverse structural-feature images to obtain enhanced face images. Next, the enhanced face images are processed by a trained hierarchical VQ-VAE to obtain the final restored, occlusion-free face images. Finally, face recognition is performed on the restored face images by a trained SqueezeNet network model. The whole process is shown in figure 1, and the specific method comprises the following steps.
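The two-stage flow described above can be sketched end to end as follows. All five arguments are hypothetical callables standing in for the trained components; the patent does not prescribe this interface.

```python
def recognize_occluded_face(image, structure_gen, texture_gen, vqvae, squeezenet):
    """End-to-end sketch of the two-stage pipeline (interfaces are assumptions)."""
    structure = structure_gen(image)   # stage 1a: diverse structural features
    textured = texture_gen(structure)  # stage 1b: texture enhancement
    restored = vqvae(textured)         # stage 1c: hierarchical VQ-VAE refinement
    return squeezenet(restored)        # stage 2: lightweight recognition
```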
referring to fig. 2, the face occlusion recognition method includes:
s1: repairing the shielded face image by using a structure generator, a texture generator and a first network model which are connected in sequence to obtain a repaired face image;
s2: and identifying the repaired face image by using a second network model to obtain a face shielding identification result.
In the invention, the structure generator comprises a first input layer, a first gated convolution layer, a first dilated convolution layer, a second gated convolution layer, an autoregressive network, and a first output layer, which are connected in sequence.
The structure generator is composed of the first gated convolution layer, the first dilated convolution layer, the second gated convolution layer, and a lightweight autoregressive network, and can generate diverse structural features, as shown in fig. 3. First, the gated convolution layers map the input incomplete face image block-wise into a feature space. Second, the lightweight autoregressive network, obtained by reducing the number of hidden units and residual units, produces diverse structural-feature images.
Because the structural features have low resolution, the diverse-structure generator can capture global information well and thus helps form a reasonable global structure. In addition, the training objective of the diverse-structure generator is simply to maximize the likelihood of all samples in the training set, without any additional losses. The resulting structures therefore do not suffer from the known drawbacks of GANs, such as mode collapse and lack of diversity.
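The gated ("closed") convolution layers above compute a feature map and a gating map with separate kernels and multiply the feature by the sigmoid of the gate, which lets the network suppress responses inside occluded regions. A minimal single-channel NumPy sketch of this gating idea; the function name, stride-1 "valid" layout, and kernels are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def gated_conv2d(x, w_feat, w_gate):
    """One gated-convolution step on a single-channel image (stride 1, 'valid')."""
    kh, kw = w_feat.shape
    h = x.shape[0] - kh + 1
    w = x.shape[1] - kw + 1
    feat = np.empty((h, w))
    gate = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = x[i:i + kh, j:j + kw]
            feat[i, j] = np.sum(patch * w_feat)  # feature response
            gate[i, j] = np.sum(patch * w_gate)  # gating response
    # output = feature * sigmoid(gate): the gate softly masks each location
    return feat * (1.0 / (1.0 + np.exp(-gate)))
```

A fully occluded patch with zeroed pixels yields gate 0 and therefore a half-attenuated feature; in a trained network the gate learns to approach 0 over invalid regions.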
Optionally, the texture generator includes a second input layer, a third gated convolution layer, a first residual network, a fourth gated convolution layer, a second dilated convolution layer, a fifth gated convolution layer, a second residual network, a sixth gated convolution layer, a third residual network, a seventh gated convolution layer, and a second output layer, which are connected in sequence; a third input layer, an attention mechanism layer, and the second residual network are connected in sequence, and the output of the second dilated convolution layer is also connected to the attention mechanism layer;
the first output layer is connected with the second input layer.
The texture generator architecture is shown in fig. 4. Besides the multiple gated convolutions and dilated convolutions, it has an attention module that takes the structural features as input. Introducing a structure attention module based on structural-feature correlation ensures that the synthesized texture is consistent with the generated structure.
Attention modules are widely used in existing image inpainting methods, where attention is usually computed on a low-resolution intermediate feature map of the network. However, because there is no direct supervision of the attention scores, the learned attention is not reliable enough, which degrades restoration quality. To solve this problem, a structure attention module is introduced: it computes attention scores directly on the structural features and can accurately model long-range correlations of the structural information, thereby improving the consistency between the synthesized texture and the generated structure.
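The structure attention module's key property, computing scores on structural features but aggregating texture features, can be sketched as follows. Flattened per-patch rows, dot-product scores, and a row-wise softmax are illustrative assumptions; the patent does not specify this exact formulation:

```python
import numpy as np

def structure_attention(struct_feats, tex_feats):
    """Attend over texture features with scores computed on structural features.

    struct_feats, tex_feats: (n, d) arrays, one row per spatial patch.
    """
    scores = struct_feats @ struct_feats.T       # (n, n) structural similarity
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
    return attn @ tex_feats                      # texture borrowed from similar regions
```

Because the scores depend only on the structural features, each patch borrows texture from structurally similar regions, which is exactly the consistency property the text describes.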
Optionally, referring to fig. 5, step S1 includes:
S11: training the first network model and the original structure generator with a non-occluded face data set, to obtain a trained first network model and a trained structure generator;
S12: randomly extracting a subset of images from the non-occluded face data set, to obtain an initial sample set;
S13: randomly adding occlusion features to the images in the initial sample set, to obtain an occluded-face data set;
S14: processing the occluded-face data set with the trained structure generator, to obtain a structural-feature image data set;
S15: training a texture generator with the occluded-face data set and the structural-feature image data set, to obtain a trained texture generator;
S16: processing the structural-feature image data set with the trained texture generator, to obtain a first face-restoration image data set;
S17: inputting the first face-restoration image data set into the trained first network model, to obtain a second face-restoration image data set, where the second face-restoration image data set comprises the restored face images.
Optionally, the first network model is a hierarchical network model, specifically a VQ-VAE network model. As shown in fig. 6, the VQ-VAE network model includes an input module, an encoder, a decoder, and an output module, which are connected in sequence; the input module is connected to the second output layer, and the output module is connected to the second network model.
The VQ-VAE network model is further used to compute two feature losses, which respectively help improve the consistency of the structure and the realism of the texture.
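The defining operation of a VQ-VAE is quantizing encoder outputs against a learned codebook: each latent vector is replaced by its nearest codebook entry, discretizing the latent space before decoding. A minimal sketch; names and shapes are illustrative assumptions:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Nearest-codebook lookup at the heart of VQ-VAE.

    z: (n, d) encoder outputs; codebook: (k, d) learned embedding vectors.
    """
    # squared L2 distance between every latent and every codebook row: (n, k)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)        # nearest code index per latent vector
    return codebook[idx], idx      # quantized latents and their indices
```

In the hierarchical variant, this lookup is applied at more than one resolution, which matches the two vector quantization layers in the encoder described below.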
Optionally, the encoder includes a first convolution layer, a first vector quantization layer, a second convolution layer, a first skip convolution layer, a third convolution layer, and a second vector quantization layer, connected in sequence; the first convolution layer, as the input layer of the encoder, is connected to the input module, and the second vector quantization layer, as the output layer of the encoder, is connected to the decoder.
Optionally, the decoder comprises a second skip convolution layer, which as the input layer of the decoder is connected to the second vector quantization layer, and a fourth convolution layer, which as the output layer of the decoder is connected to the output module.
Optionally, step S2 includes:
training a second network model with the non-occluded face data set, to obtain a trained second network model;
inputting the second face-restoration image data set into the trained second network model, to obtain a recognized image set;
outputting the recognized image set as the face occlusion recognition result.
Optionally, the second network model is a recognition network model, and the recognition network model is a SqueezeNet network model.
The SqueezeNet model is a lightweight, efficient convolutional neural network: compared with AlexNet it has far fewer parameters yet similar performance. A smaller model has several advantages over a large one: fewer parameters mean less network traffic and more efficient distributed training; model updates are easier to ship; and the model is suitable for deployment on hardware with limited memory. The SqueezeNet network model is detailed in fig. 7.
Optionally, the SqueezeNet network model includes a compression (squeeze) layer and an expansion (expand) layer; the squeeze layer is connected to the first network model, the image produced by the squeeze operation enters the expand layer, and the expand layer performs expansion and recognition on that image using an activation function. The invention does not limit the specific activation function; those skilled in the art can set it according to practical requirements. In one embodiment, the activation function is the ReLU activation function.
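The squeeze-then-expand idea behind SqueezeNet's fire module can be sketched as follows. This is an illustrative NumPy approximation, not the patent's implementation: in particular, the expand stage's 3x3 branch is stood in for by a second 1x1 kernel (`w_e3_equiv`), an assumption made for brevity; a faithful version would use spatial 3x3 convolutions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fire_module(x, w_squeeze, w_e1, w_e3_equiv):
    """SqueezeNet-style fire module on a (C, H, W) tensor using 1x1 convs.

    Squeeze first reduces the channel count; two parallel expand branches
    then restore it, and their outputs are concatenated along channels.
    """
    c, h, w = x.shape
    flat = x.reshape(c, -1)                   # (C, H*W): 1x1 conv == matmul
    s = relu(w_squeeze @ flat)                # squeeze: (Cs, H*W), Cs < C
    e1 = relu(w_e1 @ s)                       # expand, 1x1 branch
    e3 = relu(w_e3_equiv @ s)                 # stand-in for the 3x3 branch
    out = np.concatenate([e1, e3], axis=0)    # channel concatenation
    return out.reshape(-1, h, w)
```

The parameter saving comes from the squeeze step: both expand branches see only the reduced channel count, so their weight matrices are much smaller than a direct C-to-C convolution would be.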
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A face occlusion recognition method, characterized by comprising the following steps:
S1: restoring the occluded face image by using a structure generator, a texture generator and a first network model which are connected in sequence, to obtain a restored face image;
S2: recognizing the restored face image by using a second network model, to obtain a face occlusion recognition result.
2. The face occlusion recognition method according to claim 1, wherein step S1 includes:
S11: training the first network model and the original structure generator with a non-occluded face data set, to obtain a trained first network model and a trained structure generator;
S12: randomly extracting a subset of images from the non-occluded face data set, to obtain an initial sample set;
S13: randomly adding occlusion features to the images in the initial sample set, to obtain an occluded-face data set;
S14: processing the occluded-face data set with the trained structure generator, to obtain a structural-feature image data set;
S15: training a texture generator with the occluded-face data set and the structural-feature image data set, to obtain a trained texture generator;
S16: processing the structural-feature image data set with the trained texture generator, to obtain a first face-restoration image data set;
S17: inputting the first face-restoration image data set into the trained first network model, to obtain a second face-restoration image data set, where the second face-restoration image data set comprises the restored face images.
3. The face occlusion recognition method according to claim 1, wherein the structure generator comprises a first input layer, a first gated convolution layer, a first dilated convolution layer, a second gated convolution layer, an autoregressive network and a first output layer which are connected in sequence.
4. The face occlusion recognition method according to claim 3, wherein the texture generator comprises a second input layer, a third gated convolution layer, a first residual network, a fourth gated convolution layer, a second dilated convolution layer, a fifth gated convolution layer, a second residual network, a sixth gated convolution layer, a third residual network, a seventh gated convolution layer and a second output layer which are connected in sequence; a third input layer, an attention mechanism layer and the second residual network are connected in sequence, and the output of the second dilated convolution layer is also connected to the attention mechanism layer;
the first output layer is connected with the second input layer.
5. The face occlusion recognition method according to claim 4, wherein the first network model is a hierarchical network model, the hierarchical network model is a VQ-VAE network model, the VQ-VAE network model comprises an input module, an encoder, a decoder and an output module which are connected in sequence, the input module is connected with the second output layer, and the output module is connected with the second network model.
6. The face occlusion recognition method according to claim 5, wherein the encoder includes a first convolution layer, a first vector quantization layer, a second convolution layer, a first skip convolution layer, a third convolution layer, and a second vector quantization layer connected in sequence; the first convolution layer, as the input layer of the encoder, is connected to the input module, and the second vector quantization layer, as the output layer of the encoder, is connected to the decoder.
7. The face occlusion recognition method according to claim 6, wherein the decoder comprises a second skip convolution layer, which as the input layer of the decoder is connected to the second vector quantization layer, and a fourth convolution layer, which as the output layer of the decoder is connected to the output module.
8. The face occlusion recognition method according to claim 2, wherein step S2 includes:
training a second network model with the non-occluded face data set, to obtain a trained second network model;
inputting the second face-restoration image data set into the trained second network model, to obtain a recognized image set;
outputting the recognized image set as the face occlusion recognition result.
9. The face occlusion recognition method according to any one of claims 1-8, wherein the second network model is a recognition network model, and the recognition network model is a SqueezeNet network model.
10. The face occlusion recognition method according to claim 9, wherein the SqueezeNet network model includes a compression (squeeze) layer and an expansion (expand) layer; the squeeze layer is connected to the first network model, the image produced by the squeeze operation enters the expand layer, and the expand layer performs expansion and recognition on that image using an activation function.
CN202111665913.1A (filed 2021-12-31) Face occlusion recognition method, granted as CN114331904B, legal status Active

Priority Applications (1)

Application Number CN202111665913.1A; Priority and Filing Date 2021-12-31; Title: Face occlusion recognition method

Publications (2)

Publication Number  Publication Date
CN114331904A (publication)  2022-04-12
CN114331904B (grant)  2023-08-08

Family

ID=81021650

Country Status (1)

Country  Link
CN  CN114331904B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934116A (en) * 2019-02-19 2019-06-25 华南理工大学 A kind of standard faces generation method based on generation confrontation mechanism and attention mechanism
CN110148207A (en) * 2018-12-13 2019-08-20 湖南师范大学 The intelligent generating algorithm for producing type based on ancient times Changsha Kiln ceramics style
CN110458133A (en) * 2019-08-19 2019-11-15 电子科技大学 Lightweight method for detecting human face based on production confrontation network
CN110619887A (en) * 2019-09-25 2019-12-27 电子科技大学 Multi-speaker voice separation method based on convolutional neural network
CN111028167A (en) * 2019-12-05 2020-04-17 广州市久邦数码科技有限公司 Image restoration method based on deep learning
US20200302667A1 (en) * 2019-03-21 2020-09-24 Electronic Arts Inc. Generating Facial Position Data based on Audio Data
CN111738940A (en) * 2020-06-02 2020-10-02 大连理工大学 Human face image eye completing method for generating confrontation network based on self-attention mechanism model
CN112133282A (en) * 2020-10-26 2020-12-25 厦门大学 Lightweight multi-speaker speech synthesis system and electronic equipment
US20210027169A1 (en) * 2019-07-25 2021-01-28 Rochester Institute Of Technology Method for Training Parametric Machine Learning Systems
CN112598053A (en) * 2020-12-21 2021-04-02 西北工业大学 Active significance target detection method based on semi-supervised learning
CN112597941A (en) * 2020-12-29 2021-04-02 北京邮电大学 Face recognition method and device and electronic equipment
CN112784764A (en) * 2021-01-27 2021-05-11 南京邮电大学 Expression recognition method and system based on local and global attention mechanism
CN112801404A (en) * 2021-02-14 2021-05-14 北京工业大学 Traffic prediction method based on self-adaptive spatial self-attention-seeking convolution
CN112949565A (en) * 2021-03-25 2021-06-11 重庆邮电大学 Single-sample partially-shielded face recognition method and system based on attention mechanism
US20210183072A1 (en) * 2019-12-16 2021-06-17 Nvidia Corporation Gaze determination machine learning system having adaptive weighting of inputs
CN112990052A (en) * 2021-03-28 2021-06-18 南京理工大学 Partially-shielded face recognition method and device based on face restoration
WO2021169641A1 (en) * 2020-02-28 2021-09-02 深圳壹账通智能科技有限公司 Face recognition method and system
CN113591482A (en) * 2021-02-25 2021-11-02 腾讯科技(深圳)有限公司 Text generation method, device, equipment and computer readable storage medium
CN113591795A (en) * 2021-08-19 2021-11-02 西南石油大学 Lightweight face detection method and system based on mixed attention feature pyramid structure
US20210342977A1 (en) * 2020-04-29 2021-11-04 Shanghai Harvest Intelligence Technology Co., Ltd. Method And Apparatus For Image Restoration, Storage Medium And Terminal
WO2021218238A1 (en) * 2020-04-29 2021-11-04 华为技术有限公司 Image processing method and image processing apparatus
CN116311462A (en) * 2023-03-27 2023-06-23 电子科技大学 Facial image restoration and recognition method combining context information and VGG19


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIALUN PENG et al.: "Generating Diverse Structure for Image Inpainting With Hierarchical VQ-VAE", pages 10770-10779 *

Also Published As

Publication number Publication date
CN114331904B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
Zhong et al. An end-to-end dense-inceptionnet for image copy-move forgery detection
CN111626300B (en) Image segmentation method and modeling method of image semantic segmentation model based on context perception
CN108520503B (en) Face defect image restoration method based on autoencoder and generative adversarial network
Peng et al. Visda: The visual domain adaptation challenge
CN111260740B (en) Text-to-image generation method based on generative adversarial network
CN109241982B (en) Target detection method based on deep and shallow layer convolutional neural network
CN111860171B (en) Method and system for detecting irregular-shaped target in large-scale remote sensing image
CN111861945B (en) Text-guided image restoration method and system
Liu et al. Learning human pose models from synthesized data for robust RGB-D action recognition
CN109284767B (en) Pedestrian retrieval method based on augmented sample and multi-flow layer
CN113361250A (en) Bidirectional text image generation method and system based on semantic consistency
CN111460980A (en) Multi-scale detection method for small-target pedestrian based on multi-semantic feature fusion
CN114092926B (en) License plate positioning and identifying method in complex environment
CN111541900B (en) Security and protection video compression method, device, equipment and storage medium based on GAN
CN115222998B (en) Image classification method
CN112036260A (en) Expression recognition method and system for multi-scale sub-block aggregation in natural environment
Zhang et al. Deep RGB-D saliency detection without depth
CN112329771A (en) Building material sample identification method based on deep learning
Gao et al. Adaptive random down-sampling data augmentation and area attention pooling for low resolution face recognition
CN113096133A (en) Method for constructing semantic segmentation network based on attention mechanism
CN117238019A (en) Video facial expression category identification method and system based on space-time relative transformation
CN114331904B (en) Face shielding recognition method
CN116797681A (en) Text-to-image generation method and system for progressive multi-granularity semantic information fusion
Kasi et al. A deep learning based cross model text to image generation using DC-GAN
CN113128461B (en) Pedestrian re-recognition performance improving method based on human body key point mining full-scale features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant