CN112906668B - Face information identification method based on convolutional neural network


Info

Publication number
CN112906668B
Authority
CN
China
Prior art keywords
face
picture
data
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110375280.4A
Other languages
Chinese (zh)
Other versions
CN112906668A (en)
Inventor
任金鹏
刘云翔
原鑫鑫
熊婷婷
肖岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Technology
Original Assignee
Shanghai Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Technology
Priority to CN202110375280.4A
Publication of CN112906668A
Application granted
Publication of CN112906668B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces, e.g. facial parts, sketches or expressions; estimating age from face image; using age information for improving recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision and discloses a face information identification method based on a convolutional neural network. The method overcomes the inability of traditional approaches to handle the influence of position, rotation and similar variations on classification, and predicts a person's gender and age from the face more accurately.

Description

Face information identification method based on convolutional neural network
Technical Field
The invention relates to the technical field of computer vision, in particular to a face information identification method based on a convolutional neural network.
Background
At present, cutting-edge technologies such as deep learning and artificial intelligence are steadily entering people's daily lives; even the small vending machines seen everywhere now support face-scan payment and similar modes. Face detection and recognition are thus well within reach, yet identifying gender and age from a face remains a relatively blank area of research. A face carries a large amount of useful information, and this information is stable, natural and unique, making it an effective means of identifying a person. Face recognition technology is also an indispensable component of human-computer information interaction: because it requires no physical contact, it is highly user-friendly, which is why in recent years it has taken a large share of the market and developed rapidly.
For face-based information identification, extracting facial information features from an acquired picture or video is the precondition of recognition, and the accuracy of that feature extraction greatly affects the recognition result. Conversely, on the basis of highly accurate face detection, identifying information from face pictures has broad application scenarios.
Disclosure of Invention
Aiming at the gap in gender and age judgment within face recognition, the invention provides a face information identification method based on a convolutional neural network.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A face information identification method based on a convolutional neural network comprises the following steps:
s1, collecting face pictures, classifying and marking the face pictures, storing the marked results in corresponding CSV files, and sorting and storing the face pictures according to age and gender classification to obtain an Adience face data set;
s2, preprocessing a face picture in the face data set of the science, judging whether the picture has a face or not according to a pre-trained face detection model MTCNN, and marking the face position if the picture has the face;
s3, cutting off the detected face area, and then enhancing the blurred image by utilizing color histogram equalization;
s4, selecting pictures with labels meeting a specified value interval according to the requirement for each face picture, for example, discarding the non-conforming face pictures in a specified age range;
s5, carrying out a series of random operations including cutting, rotating and overturning on the screened face pictures, so that limited picture data can generate more equivalent picture data;
s6, designing a face recognition convolutional neural network model for the face picture, the gender and the age label, and naming the face recognition convolutional neural network model as Remon_Net;
s7, inputting the processed face picture data and the calibrated face information classification result into a designed face recognition convolutional neural network model Remon_Net for parameter training until convergence;
s8, carrying out face information prediction on the video input stream by using a trained face recognition convolutional neural network model Remon_Net, and carrying out frame selection on targets belonging to a predetermined category to obtain a face region and category information.
Preferably, the step S2 includes the following steps:
s21, using the existing Adience face data set, integrating the data, removing invalid files and integrating tag files;
s22, building an existing face detection model MTCNN, and inputting the integrated face data set into the model for training until parameters converge to obtain the trained face detection model MTCNN;
s23, applying the trained face detection model to the Adience face data set in the step S1.
Preferably, the step S3 includes the following steps:
s31, intercepting an original face picture including a hair part for the detected face region;
s32, enhancing the partially blurred image by adopting a color image histogram equalization method;
s33, for the histogram equalization of the color image in the step S32, when the nonlinear extension of the image occurs, the pixel values are redistributed, and the finally output image is a discontinuous histogram with smooth top end;
and S34, carrying out equalization processing on three channels of the color image, and finally merging and outputting.
Preferably, the step S6 includes the steps of:
s61, designing eight layers of convolutions of a face recognition convolutional neural network model Remon_Net, extracting basic features from the first four layers of convolutions, and extracting advanced features from the last four layers of convolutions;
s62, selecting a convolution kernel with a small size for the eight-layer convolution;
s63, adopting a Relu function for the activation function between the convolutions of each layer, and using a learning rate adjustment function to monitor the change of the loss value so as to adjust the learning rate;
s64, the output of the characteristic extraction network is shared to two classifiers, namely an age classifier and a gender classifier, the age classifier finally outputs an age interval, and the gender classifier outputs the gender label of the face picture.
Preferably, the step S8 includes the steps of:
s81, receiving unknown face pictures or video stream data acquired by a camera, intercepting the video stream data according to time frames, and storing data records of a time sequence into a computer for input;
s82, for the image data, using a trained face detection model MTCNN to detect the face and regress the face area, discarding the image data if the face is not detected, otherwise, storing the data;
s83, inputting all face areas in the picture into a trained face recognition convolutional neural network model Remon_Net for prediction, and outputting face information data of each face area;
and S84, integrating all detection results, corresponding to the position of each face region, including the attributes of the face size, the age and the sex, and returning and outputting the real time in a video stream window.
Compared with the prior art, the invention has the beneficial effects that:
according to the invention, the original face image is firstly subjected to processing such as segmentation and alignment, the influence of the background and the angle on the classification effect is avoided, the segmented and aligned face data is input through a designed network model, an eight-layer convolutional neural network is used for the feature extraction part, and then two fully connected modules are used as classifiers for gender identification and age identification. The invention adopts the convolutional neural network model to extract the characteristics, and overcomes the defect that the traditional method can not solve the influence of position, rotation and the like on the classification effect, so the method provided by the invention can more accurately predict the gender and age of the person through the face. Meanwhile, the output of the face feature extraction network is directly shared to two parallel classifier modules in the face recognition convolutional neural network model remote_Net, so that redundant calculation amount and overlong prediction time caused by classification work by using two independent models are avoided, and the problem of low real-time performance of the traditional image classification in face information recognition is solved to a certain extent.
Of course, it is not necessary for any one product to practice the invention to achieve all of the advantages set forth above at the same time.
Drawings
FIG. 1 is a flow chart of a face information recognition method based on convolutional neural network;
FIG. 2 is a schematic diagram of the face recognition convolutional neural network model Remon_Net of the present invention.
Detailed Description
The following describes the embodiments of the present invention further with reference to the drawings. The description of these embodiments is provided to assist understanding of the present invention, but is not intended to limit the present invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Referring to fig. 1, a face information recognition method based on a convolutional neural network includes the steps of:
s1, collecting face pictures, classifying and marking the face pictures, storing the marked results in corresponding CSV files, and sorting and storing the face pictures according to age and gender classification to obtain an Adience face data set;
s2, preprocessing a face picture in the face data set of the science, judging whether the picture has a face or not according to a pre-trained face detection model MTCNN, and marking the face position if the picture has the face;
s21, using the existing Adience face data set, integrating the data, removing invalid files and integrating tag files;
s22, building an existing face detection model MTCNN, and inputting the integrated face data set into the model for training until parameters converge to obtain the trained face detection model MTCNN;
s23, applying the trained face detection model to the Adience face data set in the step S1.
The accuracy of the trained face detection model can exceed 94%.
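As an illustration of steps S21-S23, the following minimal sketch runs a pretrained MTCNN over a picture using the open-source `mtcnn` Python package. The patent does not name a particular implementation, so the package choice, the 0.9 confidence cut-off and the helper name are assumptions.

```python
# A minimal sketch of face detection with a pretrained MTCNN (steps S21-S23).
# The `mtcnn` package is one common implementation; the confidence threshold
# and helper name below are illustrative assumptions, not from the patent.
import cv2
from mtcnn import MTCNN

detector = MTCNN()  # loads pretrained P-Net / R-Net / O-Net weights

def detect_face_boxes(image_path, min_conf=0.9):
    """Return [x, y, w, h] boxes for faces found in the picture, if any."""
    bgr = cv2.imread(image_path)
    if bgr is None:
        return []                               # unreadable or invalid file
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # MTCNN expects RGB input
    return [d["box"] for d in detector.detect_faces(rgb)
            if d["confidence"] >= min_conf]     # discard weak detections
```

Pictures for which this returns an empty list correspond to the "no face" case and are discarded in step S82 below.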
S3, cropping out the detected face region, and then enhancing blurred images by color histogram equalization;
Cropping the face region removes complex backgrounds, retains the effective features of the picture, and reduces redundant features.
S31, for the detected face region, cropping the original face picture so as to include the hair;
To keep more picture information, the detection box is expanded to a certain extent so that the hair information is retained.
S32, enhancing partially blurred images by a color-image histogram equalization method;
Enhancing the partially blurred images in turn makes the subject's features more distinct and clear.
S33, in the color-image histogram equalization of step S32, redistributing pixel values when nonlinear stretching of the image occurs, so that the histogram of the final output image is discontinuous with a smooth top;
Under certain conditions the counts of the distinct gray levels tend to balance out and the contrast of previously low-contrast regions becomes more pronounced, which is why the histogram of the final output image is discontinuous with a smooth top.
And S34, carrying out equalization processing on the three channels of the color image separately, and finally merging and outputting the result.
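A minimal sketch of this per-channel equalization with OpenCV follows; the function name is a hypothetical helper, since the patent does not prescribe a library.

```python
# Sketch of step S34: equalize each color channel separately, then merge.
import cv2

def equalize_color(image_bgr):
    """Histogram-equalize the three channels of a BGR image and re-merge them."""
    b, g, r = cv2.split(image_bgr)              # split into the three channels
    return cv2.merge([cv2.equalizeHist(b),      # equalize each channel on its own
                      cv2.equalizeHist(g),
                      cv2.equalizeHist(r)])
```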
S4, for each face picture, selecting only pictures whose labels fall within a specified value interval, for example discarding face pictures outside a specified age range;
s5, carrying out a series of random operations including cutting, rotating and overturning on the screened face pictures, so that limited picture data can generate more equivalent picture data;
more equivalent picture data can prevent the over-fitting problem in the model training process, namely, the method is effective only for selecting the trained face picture and is ineffective for unknown face recognition.
S6, designing a face recognition convolutional neural network model for the face pictures and their gender and age labels, named Remon_Net;
S61, designing the eight convolution layers of the face recognition convolutional neural network model Remon_Net, where the first four convolution layers extract basic features and the last four extract advanced features;
Basic features are local features such as the facial organs; advanced features are the texture lines of those organs.
S62, selecting small convolution kernels for all eight convolution layers;
The eight convolution layers realize the feature extraction and classification functions, and choosing small convolution kernels, such as 3×3, reduces the amount of computation during training and speeds up face recognition. For example, with C input and C output channels, two stacked 3×3 convolutions cover the same 5×5 receptive field with 18C² weights instead of the 25C² needed by a single 5×5 kernel.
S63, adopting the ReLU function as the activation function between convolution layers, and using a learning-rate adjustment function to monitor the change of the loss value and adjust the learning rate accordingly;
The ReLU function is used mainly to add non-linearity, which the model otherwise expresses poorly. ReLU judges whether the output of the current node reaches a threshold: output that reaches the threshold is passed to the next layer, and output that does not is suppressed.
S64, sharing the output of the feature extraction network with two classifiers, an age classifier and a gender classifier; the age classifier finally outputs an age interval, and the gender classifier outputs the gender label of the face picture.
Referring to fig. 2, the face recognition convolutional neural network model Remon_Net comprises a feature extraction network on the left and two classification modules on the right. The feature extraction network has eight layers, each consisting of a convolutional layer, a regularization layer and an activation function layer; the numbers beneath the layers denote the channel counts of the convolutional layers. The output of the feature extraction network is downsampled by a max pooling layer and flattened to one dimension, so the two classifier modules attach through fully connected layers, whose accompanying numbers denote the node counts. The final outputs are the classifications: gender has 2 classes and age 8 classes, i.e. the model outputs the likelihood of each gender and of each age group.
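The layout of fig. 2 can be sketched in PyTorch as below. The channel widths, pooling schedule and fully connected sizes are assumptions; only the overall structure, eight conv + regularization + activation blocks, max-pool downsampling and flattening, and two parallel heads with 2 and 8 outputs, comes from the text.

```python
# A minimal PyTorch sketch of the Remon_Net layout in fig. 2. Channel widths,
# pooling positions and head sizes are assumptions, not from the patent.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """One feature-extraction layer: convolution + regularization + activation."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # small 3x3 kernel (S62)
        nn.BatchNorm2d(c_out),                             # regularization layer
        nn.ReLU(inplace=True),                             # activation layer (S63)
    )

class RemonNet(nn.Module):
    def __init__(self, n_age_groups=8):
        super().__init__()
        widths = [3, 32, 32, 64, 64, 128, 128, 256, 256]   # assumed channel counts
        layers = []
        for i in range(8):                                 # eight conv layers (S61)
            layers.append(conv_block(widths[i], widths[i + 1]))
            if i % 2 == 1:
                layers.append(nn.MaxPool2d(2))             # periodic downsampling
        self.features = nn.Sequential(*layers)
        self.pool = nn.Sequential(nn.AdaptiveMaxPool2d(1), # final max pooling
                                  nn.Flatten())            # flatten to one dimension
        self.gender_head = nn.Sequential(                  # 2-class gender output
            nn.Linear(256, 128), nn.ReLU(inplace=True), nn.Linear(128, 2))
        self.age_head = nn.Sequential(                     # 8-class age output
            nn.Linear(256, 128), nn.ReLU(inplace=True), nn.Linear(128, n_age_groups))

    def forward(self, x):
        feats = self.pool(self.features(x))                # shared features (S64)
        return self.gender_head(feats), self.age_head(feats)
```

Sharing one backbone between the two heads is exactly what avoids the duplicated computation of two independent models, as noted in the advantages above.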
S7, inputting the processed face picture data and the calibrated face information classification results into the designed face recognition convolutional neural network model Remon_Net for parameter training until convergence;
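A sketch of this training step under the same assumptions: the two heads are trained jointly with a summed cross-entropy loss, and a plateau scheduler stands in for the learning-rate adjustment function of step S63. The optimizer, loss weighting and epoch count are not specified by the patent.

```python
# Sketch of step S7: joint parameter training of Remon_Net until convergence.
# The optimizer, summed loss and scheduler settings are illustrative choices.
import torch
import torch.nn as nn

model = RemonNet()                                   # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)   # S63: cut lr when loss stalls
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over (image, gender_label, age_label) batches; returns mean loss."""
    total, batches = 0.0, 0
    for images, gender_y, age_y in loader:           # `loader` is a hypothetical
        optimizer.zero_grad()                        # DataLoader over the data set
        gender_out, age_out = model(images)
        loss = criterion(gender_out, gender_y) + criterion(age_out, age_y)
        loss.backward()
        optimizer.step()
        total, batches = total + loss.item(), batches + 1
    return total / max(batches, 1)

# for epoch in range(50):                            # epoch count is illustrative
#     scheduler.step(train_epoch(train_loader))      # monitor loss, adjust lr
```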
s8, carrying out face information prediction on the video input stream by using a trained face recognition convolutional neural network model Remon_Net, and carrying out frame selection on targets belonging to a predetermined category to obtain a face region and category information.
"framing targets belonging to a predetermined category" means that only faces are detected, and information categories of the faces are discriminated so as to be in line with the faces and the categories are framed in the detection effect only when they are within the range.
S81, receiving video stream data acquired by a camera, intercepting the video stream by time frames, and storing the time-sequenced data records in computer memory as input, each record corresponding to one frame of image data;
s82, for the image data, using a trained face detection model MTCNN to detect the face and regress the face area, discarding the image data if the face is not detected, otherwise, storing the data;
s83, inputting all face areas in the picture into a trained face recognition convolutional neural network model Remon_Net for prediction, and outputting face information data of each face area;
and S84, integrating all detection results, corresponding to the position of each face region, including the attributes of the face size, the age and the sex, and returning and outputting the real time in a video stream window.
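Steps S81-S84 together form the prediction pipeline sketched below with OpenCV, reusing the `detector` and `model` objects from the earlier sketches. The label strings, confidence cut-off and 227×227 preprocessing are assumptions; the eight age intervals shown are the ones commonly used with the Adience data set.

```python
# Sketch of steps S81-S84: read frames, detect faces with MTCNN, classify each
# face with Remon_Net, and draw the box plus gender/age back onto the window.
# Reuses `detector` and `model` from the sketches above; labels are assumptions.
import cv2
import torch

AGE_GROUPS = ["0-2", "4-6", "8-13", "15-20",         # the eight intervals commonly
              "25-32", "38-43", "48-53", "60+"]      # used with the Adience data set
GENDERS = ["female", "male"]                         # assumed label order

cap = cv2.VideoCapture(0)                            # S81: camera input stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for det in detector.detect_faces(rgb):           # S82: detect and regress faces
        if det["confidence"] < 0.9:
            continue
        x, y, w, h = det["box"]
        x, y = max(x, 0), max(y, 0)                  # clamp boxes to the frame
        face = cv2.resize(rgb[y:y + h, x:x + w], (227, 227))
        tensor = torch.from_numpy(face).permute(2, 0, 1).float().unsqueeze(0) / 255
        with torch.no_grad():                        # S83: predict face information
            gender_out, age_out = model(tensor)
        label = (GENDERS[gender_out.argmax().item()] + ", "
                 + AGE_GROUPS[age_out.argmax().item()])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("face info", frame)                   # S84: real-time video output
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```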
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and yet fall within the scope of the invention.

Claims (1)

1. A face information identification method based on a convolutional neural network, characterized by comprising the following steps:
S1, collecting face pictures, classifying and labeling them, storing the labeling results in corresponding CSV files, and sorting and storing the face pictures by age and gender classification to obtain the Adience face data set;
S2, preprocessing the face pictures in the Adience face data set, judging with a pre-trained MTCNN face detection model whether each picture contains a face, and marking the face position if it does;
S3, cropping out the detected face region, and then enhancing blurred images by color histogram equalization;
S4, for each face picture, selecting only pictures whose labels fall within a specified value interval, and discarding non-conforming face pictures;
S5, carrying out a series of random operations, including cropping, rotation and flipping, on the screened face pictures, so that limited picture data generates more equivalent picture data;
S6, designing a face recognition convolutional neural network model for the face pictures and their gender and age labels, named Remon_Net;
S7, inputting the processed face picture data and the calibrated face information classification results into the designed face recognition convolutional neural network model Remon_Net for parameter training until convergence;
S8, carrying out face information prediction on the video input stream with the trained face recognition convolutional neural network model Remon_Net, and frame-selecting targets belonging to a predetermined category to obtain the face region and category information;
the step S2 includes the following steps:
s21, using the existing Adience face data set, integrating the data, removing invalid files and integrating tag files;
s22, building an existing face detection model MTCNN, and inputting the integrated face data set into the model for training until parameters converge to obtain the trained face detection model MTCNN;
s23, applying the trained face detection model to the Adience face data set in the step S1;
step S3 comprises the steps of:
s31, intercepting an original face picture including a hair part for the detected face region;
s32, enhancing the partially blurred image by adopting a color image histogram equalization method;
s33, for the histogram equalization of the color image in the step S32, when the nonlinear extension of the image occurs, the pixel values are redistributed, and the finally output image is a discontinuous histogram with smooth top end;
s34, carrying out equalization treatment on three channels of the color image, and finally merging and outputting;
step S6 includes the steps of:
s61, designing eight layers of convolutions of a face recognition convolutional neural network model Remon_Net, extracting basic features from the first four layers of convolutions, and extracting advanced features from the last four layers of convolutions;
s62, selecting a convolution kernel with a small size for the eight-layer convolution;
s63, adopting a Relu function for the activation function between the convolutions of each layer, and using a learning rate adjustment function to monitor the change of the loss value so as to adjust the learning rate;
s64, sharing the output of the characteristic extraction network to two classifiers, namely an age classifier and a gender classifier, wherein the age classifier finally outputs an age interval, and the gender classifier outputs the gender label of the face picture;
step S8 includes the steps of:
s81, receiving unknown face pictures or video stream data acquired by a camera, intercepting the video stream data according to time frames, and storing data records of a time sequence into a computer for input;
s82, for the image data, using a trained face detection model MTCNN to detect the face and regress the face area, discarding the image data if the face is not detected, otherwise, storing the data;
s83, inputting all face areas in the picture into a trained face recognition convolutional neural network model Remon_Net for prediction, and outputting face information data of each face area;
and S84, integrating all detection results, corresponding to the position of each face region, including the attributes of the face size, the age and the sex, and returning and outputting the real time in a video stream window.
CN202110375280.4A, filed 2021-04-07 (priority date 2021-04-07): Face information identification method based on convolutional neural network. Status: Active. Granted publication: CN112906668B (en).

Priority Applications (1)

Application Number: CN202110375280.4A · Priority Date: 2021-04-07 · Filing Date: 2021-04-07 · Title: Face information identification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number: CN202110375280.4A · Priority Date: 2021-04-07 · Filing Date: 2021-04-07 · Title: Face information identification method based on convolutional neural network

Publications (2)

Publication Number · Publication Date
CN112906668A (en) · 2021-06-04
CN112906668B (en) · 2023-08-25

Family

ID=76110095

Family Applications (1)

Application Number: CN202110375280.4A · Title: Face information identification method based on convolutional neural network · Priority Date: 2021-04-07 · Filing Date: 2021-04-07 · Status: Active

Country Status (1)

Country Link
CN (1) CN112906668B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529402A (en) * 2016-09-27 2017-03-22 中国科学院自动化研究所 Multi-task learning convolutional neural network-based face attribute analysis method
WO2019109526A1 (en) * 2017-12-06 2019-06-13 平安科技(深圳)有限公司 Method and device for age recognition of face image, storage medium
WO2020114118A1 (en) * 2018-12-07 2020-06-11 深圳光启空间技术有限公司 Facial attribute identification method and device, storage medium and processor
CN111611849A (en) * 2020-04-08 2020-09-01 广东工业大学 Face recognition system for access control equipment
CN112200008A (en) * 2020-09-15 2021-01-08 青岛邃智信息科技有限公司 Face attribute recognition method in community monitoring scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Face Age Classification Based on Deep Convolutional Neural Networks; Li Chaoqi (李超琪), Wang Shaoyu (王绍宇); Intelligent Computer and Applications, no. 3; full text *

Also Published As

Publication number · Publication date
CN112906668A (en) · 2021-06-04

Similar Documents

Publication · Publication Date · Title
CN108304788B (en) Face recognition method based on deep neural network
CN106919920B (en) Scene recognition method based on convolution characteristics and space vision bag-of-words model
Zhang et al. Improving multiview face detection with multi-task deep convolutional neural networks
CN112784763B (en) Expression recognition method and system based on local and overall feature adaptive fusion
CN112215180B (en) Living body detection method and device
CN106485214A (en) A kind of eyes based on convolutional neural networks and mouth state identification method
CN111368764B (en) False video detection method based on computer vision and deep learning algorithm
CN109858467B (en) Face recognition method and device based on key point region feature fusion
CN113011253B (en) Facial expression recognition method, device, equipment and storage medium based on ResNeXt network
CN110674730A (en) Monocular-based face silence living body detection method
Gragnaniello et al. Biometric spoofing detection by a domain-aware convolutional neural network
CN109063626A (en) Dynamic human face recognition methods and device
CN111353447A (en) Human skeleton behavior identification method based on graph convolution network
CN106485226A (en) A kind of video pedestrian detection method based on neutral net
Alafif et al. On detecting partially occluded faces with pose variations
CN109159129A (en) A kind of intelligence company robot based on facial expression recognition
Kumar et al. Facial emotion recognition and detection using cnn
CN112906668B (en) Face information identification method based on convolutional neural network
CN112488165A (en) Infrared pedestrian identification method and system based on deep learning model
Yifei et al. Flower image classification based on improved convolutional neural network
KR20180092453A (en) Face recognition method Using convolutional neural network and stereo image
Elmansori et al. An enhanced face detection method using skin color and back-propagation neural network
Singla et al. Age and gender detection using Deep Learning
Depuru et al. Hybrid CNNLBP using facial emotion recognition based on deep learning approach
Aina et al. Gesture recognition system for Nigerian tribal greeting postures using support vector machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant