CN107180225A - Method for recognizing the facial expressions of cartoon characters - Google Patents

Method for recognizing the facial expressions of cartoon characters Download PDF

Info

Publication number
CN107180225A
CN107180225A CN201710257911.6A CN201710257911A CN107180225A CN 107180225 A CN107180225 A CN 107180225A
Authority
CN
China
Prior art keywords
cartoon
layer
pooling
facial expression
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710257911.6A
Other languages
Chinese (zh)
Inventor
邓诗雨
刘龙至
张伟彬
李嘉恒
肖玉可
林泽宏
刘梓熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201710257911.6A priority Critical patent/CN107180225A/en
Publication of CN107180225A publication Critical patent/CN107180225A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for recognizing the facial expressions of cartoon characters, comprising: obtaining a picture of a cartoon character's facial expression and preprocessing it into a standard-format picture; building, with adjustments for the features of cartoon facial expressions, a deep convolutional neural network for cartoon facial expression recognition; performing feature matching on the picture with the trained deep convolutional neural network, the recognition output comprising the identification probability of each expression; and returning the higher-probability results to the user. The method improves the stability and efficiency of cartoon-character expression recognition and achieves a higher recognition rate.

Description

Method for recognizing the facial expressions of cartoon characters
Technical field
The present invention relates to the field of image recognition and machine learning, and more particularly to a method for recognizing the facial expressions of cartoon characters.
Background technology
Facial expression recognition refers to extracting a specific emotional state from an image or video. With the development of the technology and the enrichment of facial expression databases, facial expression recognition has become increasingly mature and accurate. Its key step is facial feature extraction: based on large existing facial expression databases, feature-extraction methods extract expressive features from the data, and facial expressions are then classified. An efficient, stable, and highly accurate facial expression recognition system has great practical value in both daily life and industry.
Cartoon technology has developed over the course of a century, and the cartoon industry has flourished as never before in recent years, with new cartoon characters emerging constantly. As the technology has matured, the expressions of cartoon characters have become ever more vivid; they largely derive from human facial expressions while retaining characteristics of the cartoon characters themselves. In 1971, psychologists Ekman and Friesen proposed six principal human emotions, each reflecting a unique psychological state with a unique expression: anger, happiness, sadness, surprise, disgust, and neutrality. With today's mature cartoon techniques, cartoon facial expressions can essentially cover all the expressions that real humans possess. Cartoon-character expression recognition combines facial expression recognition technology with cartoon technology and is an important application of expression recognition in the cartoon industry; capturing and detecting cartoon characters' expressions by means such as deep learning and feature extraction will yield enormous application value in the cartoon, film, and television industries. Although there has already been much research on facial expression recognition, techniques specifically optimized for and applied to cartoon characters' facial expressions remain uncommon.
Deep learning technology originates from research on artificial neural networks. A common deep learning structure is a multilayer perceptron containing many hidden layers. Deep learning forms more abstract high-level attribute categories or features by combining low-level features, so as to discover distributed feature representations of the data. Deep learning is a new field within machine learning research; its motivation is to build and simulate neural networks that analyze and learn like the human brain, imitating the brain's mechanisms to interpret data such as images, sound, and text.
Like other machine learning methods, deep learning methods can be divided into supervised and unsupervised learning, and the learning models built under the two frameworks differ greatly. For example, convolutional neural networks (CNNs) are deep machine learning models under supervised learning, while deep belief nets (DBNs) are machine learning models under unsupervised learning.
Summary of the invention
It is an object of the present invention to overcome the shortcomings and deficiencies of the prior art by providing a method for recognizing the facial expressions of cartoon characters that can significantly improve the accuracy of cartoon-character expression recognition.
The object of the present invention is realized by the following technical scheme. A method for recognizing the facial expressions of cartoon characters comprises the following steps:
Training stage:
S1. Extract the expressions of each cartoon character role from the cartoon facial expression database FERG to build a picture library, classify and integrate the library, and divide the cartoon facial expressions into six classes: angry, happy, sad, surprised, disgusted, and neutral;
S2. Preprocess the cartoon expression pictures;
S3. Divide the preprocessed pictures into test samples and training samples, then convert the training set and the test set into LMDB format respectively;
S4. Train the deep convolutional neural network with the resulting training set and test set.
For the recognition stage, the trained convolutional neural network is modified: the LMDB-type data input of the data layer is replaced by a data input described with dim parameters, and the output layer Softmax With Loss of the training network is replaced by Softmax, so that the output changes from loss to prob.
Recognition stage:
S5. Select an image of the cartoon facial expression to be recognized;
S6. Preprocess the image;
S7. Perform feature matching between the preprocessed image and the trained convolutional neural network model adapted to cartoon facial expressions;
S8. Recognize the cartoon character picture with the trained deep convolutional neural network and output the recognition result.
Preferably, when training the deep convolutional neural network, the initial learning rate is set to 0.0001 with the "step" policy: every 4000 training iterations the learning rate decreases by 0.00001, and the maximum number of training iterations is set to 100000.
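The step learning-rate policy described above can be sketched in a few lines. This is an illustrative reading of the patent's wording (subtract 0.00001 every 4000 iterations), not the patent's actual solver configuration; the clamping at zero is an added assumption, since a literal subtraction would turn negative before iteration 100000.

```python
# Sketch of the "step" learning-rate policy described in the text:
# start at 0.0001 and subtract 0.00001 after every 4000 iterations.
BASE_LR = 0.0001
STEP_SIZE = 4000
DECREMENT = 0.00001
MAX_ITER = 100000

def learning_rate(iteration: int) -> float:
    """Learning rate at a given iteration under the linear step policy."""
    lr = BASE_LR - DECREMENT * (iteration // STEP_SIZE)
    return max(lr, 0.0)  # assumption: the rate is clamped at zero

schedule = [learning_rate(i) for i in range(0, MAX_ITER, STEP_SIZE)]
```

Note that Caffe's built-in "step" policy is multiplicative (base_lr * gamma^(iter/stepsize)); the subtractive rule above follows the patent's literal description.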
Preferably, the specific structure of the deep convolutional neural network comprises:
data layer - convolutional layer 1 - local response normalization layer 1 - pooling layer 1 - convolutional layer 2 - pooling layer 2 - local response normalization layer 2 - convolutional layer 3 - convolutional layer 4 - pooling layer 4 - several fully connected layers, with an Accuracy layer and a Loss layer describing the network's current training characteristics added on top of the last fully connected layer; meanwhile, the data layer at the bottom also outputs directly to the Accuracy and Loss layers.
Specifically, ReLU is used as the activation function to introduce nonlinearity for all convolutional layers and/or fully connected layers.
Specifically, the deep convolutional neural network alternates between max pooling and average pooling.
Further, pooling layer 1 uses max pooling, pooling layer 2 uses average pooling, and pooling layer 4 uses max pooling.
The pooling window size of pooling layer 1 is 3, and its pooling stride is 1.
Specifically, a Dropout layer is added on the fully connected layers, temporarily discarding neural network units with a certain probability.
Specifically, the deep convolutional neural network comprises 3 sequentially connected fully connected layers.
Specifically, the Accuracy layer outputs the accuracy rate; the network is tested once every 100 training iterations, with 80 test iterations;
the Loss layer outputs the loss; from the trend of the loss, the current training state of the network can be judged.
Preferably, the training set and the test set are processed with a certain batch size; file names and labels are extracted from the preprocessed cartoon facial expression data set, and the labeled data set is used as the input of the deep convolutional neural network.
Preferably, the preprocessing of pictures in steps S2 and S6 comprises: resizing to 256*256 pixels, and converting color pictures into grayscale images with gray values in [0,255].
Preferably, in step S3 the number of test samples accounts for 8%-12% of the total picture library.
Preferably, in step S5 the cartoon character picture is calibrated before being recognized.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The present invention combines deep learning with image recognition technology. For cartoon characters it can significantly improve the accuracy of expression recognition; even for widely differing cartoon characters and for images of relatively low resolution, it still achieves a satisfactory expression recognition rate, giving it strong practicality.
The present invention can be used on any of multiple terminal devices, such as PCs, portable computers, notebook computers, and smartphones.
Brief description of the drawings
Fig. 1 is the flow chart of cartoon-character expression recognition in the embodiment;
Fig. 2 shows example images from the cartoon facial expression database used to train the model;
Fig. 3 is the concrete structure diagram of the convolutional neural network used for recognition;
Fig. 4 shows example images of the cartoon facial expressions selected for testing.
Embodiments
The present invention is described in further detail below with reference to embodiments and accompanying drawings, but the embodiments of the present invention are not limited thereto.
A method for recognizing the facial expressions of cartoon characters, as shown in Fig. 1, combines facial expression recognition technology with the facial expressions of cartoon characters in the animation industry and tailors a dedicated recognition method for them, ensuring a certain stability and accuracy. The method is applicable to all kinds of occasions and terminal devices, including PCs, portable computers, notebook computers, and smartphones. Specifically, the method comprises the following steps:
Step 1. Extract the expressions of each cartoon character role from the cartoon facial expression database FERG to build a picture library (example images are shown in Fig. 2), classify and integrate the library, and divide the cartoon facial expressions into six classes: angry, happy, sad, surprised, disgusted, and neutral.
Step 2. Preprocess the cartoon expression pictures: resize them to 256*256 pixels and convert color pictures into grayscale images with gray values in [0,255].
Step 3. Divide the processed pictures into test samples and training samples, with the test samples accounting for 8%-12% of the total picture library, then convert the training set and the test set into LMDB format respectively.
Step 4. Train the deep convolutional neural network (CNN) with the resulting training set and test set. The initial learning rate is set to 0.0001 with the "step" policy: every 4000 training iterations the learning rate decreases by 0.00001, to better suit the cartoon expression recognition process, and the maximum number of training iterations is set to 100000. The structure of the deep convolutional neural network is shown in Fig. 3; the specific network structure comprises:
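Step 2 can be sketched in pure Python. The patent does not give its preprocessing code or specify the RGB-to-gray formula, so the luma weights and the nearest-neighbor resize below are assumptions; real pipelines would decode images with an image library, and nested pixel lists stand in for decoded image data here.

```python
# Illustrative sketch of Step 2: resize to 256*256 and convert RGB
# pixels to gray values in [0, 255].

def to_gray(rgb):
    """ITU-R BT.601 luma, one common RGB-to-gray formula
    (an assumption; the patent does not specify the conversion)."""
    r, g, b = rgb
    return min(255, round(0.299 * r + 0.587 * g + 0.114 * b))

def resize_nearest(pixels, out_w=256, out_h=256):
    """Nearest-neighbor resize of a 2D list of pixel values."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

def preprocess(rgb_pixels):
    """Grayscale conversion followed by resizing to the standard format."""
    gray = [[to_gray(p) for p in row] for row in rgb_pixels]
    return resize_nearest(gray)
```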
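The split in Step 3 can be sketched as follows. The 10% test fraction is one choice within the 8%-12% range the patent specifies; the shuffle, the seed, and the file names are illustrative assumptions (the patent does not describe how samples are assigned).

```python
# Illustrative sketch of Step 3: hold out roughly 10% of the picture
# library as test samples, within the patent's 8%-12% range.
import random

def split_library(samples, test_fraction=0.10, seed=0):
    """Shuffle and split into (training samples, test samples)."""
    assert 0.08 <= test_fraction <= 0.12  # range stated in the patent
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]

train_set, test_set = split_library([f"img_{i}.png" for i in range(1000)])
```

Each split would then be written to its own LMDB database as the patent describes.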
Data layer: the batch size is 32 for the training set and 4 for the test set. File names and labels are extracted from the preprocessed cartoon facial expression data set, and the labeled data set serves as the input of the data layer, which forms the bottom of the whole network.
Convolutional layer 1 sits on top of the data layer, with a kernel size of 11 and a kernel stride of 4.
On top of convolutional layer 1, a local response normalization (LRN) layer Norm1 is added; local response normalization helps improve network training performance.
ReLU is used as the activation function to introduce nonlinearity for convolutional layer 1.
Pooling layer 1 sits on top of local response normalization layer 1, with a pooling window size of 3, max pooling, and a pooling stride of 1. Although setting the pooling stride to the minimum consumes more training time and hardware resources, it yields the highest training accuracy.
This network adopts an alternating pooling scheme: the pooling layers do not use only max pooling or only average pooling, but combine the two modes.
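The two pooling modes that alternate in this network can be illustrated with a small pure-Python function (a sketch for clarity, not the patent's implementation; frameworks such as Caffe implement pooling as a layer over 4D blobs):

```python
# Max pooling and average pooling over a square window sliding with a
# given stride, applied to a 2D feature map (list of lists).

def pool2d(mat, window=3, stride=2, mode="max"):
    h, w = len(mat), len(mat[0])
    out = []
    for y in range(0, h - window + 1, stride):
        row = []
        for x in range(0, w - window + 1, stride):
            patch = [mat[y + dy][x + dx]
                     for dy in range(window) for dx in range(window)]
            row.append(max(patch) if mode == "max"
                       else sum(patch) / len(patch))
        out.append(row)
    return out

feature_map = [[1, 2, 3, 4],
               [5, 6, 7, 8],
               [9, 10, 11, 12],
               [13, 14, 15, 16]]
print(pool2d(feature_map, window=2, stride=2, mode="max"))  # [[6, 8], [14, 16]]
print(pool2d(feature_map, window=2, stride=2, mode="avg"))  # [[3.5, 5.5], [11.5, 13.5]]
```

Max pooling keeps the strongest activation in each window, while average pooling smooths it; the network alternates between them across its pooling layers.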
Convolutional layer 2 sits on top of pooling layer 1, with a kernel size of 7.
ReLU is used as the activation function to introduce nonlinearity for convolutional layer 2.
Pooling layer 2 sits on top of convolutional layer 2, using average pooling with a pooling window size of 3 and a pooling stride of 2.
On top of pooling layer 2, a local response normalization layer Norm2 is added.
Convolutional layer 3 sits on top of the Norm2 layer, with a kernel size of 3.
ReLU is used as the activation function to introduce nonlinearity for convolutional layer 3.
Convolutional layer 4 sits on top of convolutional layer 3, with a kernel size of 3.
Pooling layer 4 sits on top of convolutional layer 4, using max pooling with a pooling window size of 3 and a pooling stride of 2.
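As a sanity check, the spatial size of a 256*256 input can be traced through the layers listed above with the usual formula out = floor((in - kernel) / stride) + 1. Zero padding and a stride of 1 for conv2, conv3, and conv4 are assumptions, since the patent does not state them.

```python
# Trace the spatial size of a 256*256 input through the described
# convolution and pooling layers (padding 0 and unstated strides of 1
# are assumptions; the patent gives only kernel/window sizes and the
# strides noted below).

def out_size(n, kernel, stride):
    return (n - kernel) // stride + 1

n = 256
n = out_size(n, 11, 4)  # conv1: kernel 11, stride 4
n = out_size(n, 3, 1)   # pool1: window 3, stride 1
n = out_size(n, 7, 1)   # conv2: kernel 7 (stride assumed 1)
n = out_size(n, 3, 2)   # pool2: window 3, stride 2
n = out_size(n, 3, 1)   # conv3: kernel 3 (stride assumed 1)
n = out_size(n, 3, 1)   # conv4: kernel 3 (stride assumed 1)
n = out_size(n, 3, 2)   # pool4: window 3, stride 2
print(n)  # 10 under these assumptions
```

Under these assumptions, pooling layer 4 hands a 10*10 feature map per channel to the fully connected layers.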
Fully connected layer 5 (IP5) sits on top of pooling layer 4 and has an output size of 4096. To prevent overfitting caused by insufficient data, a Dropout layer is added on this fully connected layer, temporarily discarding neural network units with a probability of 0.5.
Fully connected layer 6 (IP6) sits on top of fully connected layer 5 (IP5) and also has an output size of 4096; similarly, a Dropout layer is added on fully connected layer 6.
Fully connected layer 7 (IP7) sits on top of fully connected layer 6 (IP6).
Fully connected layer 7 (IP7) has an output size of 6; that is, the input data set is divided into six classes. On top of the last fully connected layer, the Accuracy and Loss layers describing the network's current training characteristics are added.
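The Dropout behavior used on the fully connected layers can be sketched as follows. This is illustrative, not the patent's implementation; the "inverted dropout" rescaling by 1/(1-p) is an added assumption (a common convention so that expected activations are unchanged at inference time).

```python
# Illustrative Dropout: during training, each unit's activation is
# zeroed with probability p = 0.5; surviving activations are rescaled
# by 1/(1-p) (inverted-dropout convention, an assumption here).
import random

def dropout(activations, p=0.5, training=True, rng=random.random):
    if not training:
        return list(activations)  # inference: units pass through unchanged
    keep = 1.0 - p
    return [a / keep if rng() >= p else 0.0 for a in activations]

units = [0.3, 1.2, 0.7, 2.0]
dropped = dropout(units, p=0.5)          # training-time pass
unchanged = dropout(units, training=False)
```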
Accuracy layer: outputs the accuracy rate; the network is tested once every 100 training iterations, with 80 test iterations.
Loss layer: outputs the loss; from the trend of the loss, the current training state of the network can be judged. When both the training loss and the test loss keep decreasing, the network is still learning.
Meanwhile, the data layer at the bottom also outputs directly to the Accuracy and Loss layers.
For recognition of a specific cartoon character image by the trained convolutional neural network, the LMDB-type data input of the data layer is replaced by a data input described with 4 dim parameters, and the output layer Softmax With Loss of the training network is replaced by Softmax: during training the output is the loss, while during testing the output is prob.
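The Softmax output used at recognition time turns the six class scores from the last fully connected layer into a probability distribution over the six expression classes. The class names follow the patent; the logit values below are made up for illustration.

```python
# Softmax over the six expression-class scores (logits) produced by IP7.
import math

CLASSES = ["angry", "happy", "sad", "surprised", "disgusted", "neutral"]

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 0.1, 0.3, 0.2, 0.1, 0.5]   # example scores, not real outputs
prob = dict(zip(CLASSES, softmax(logits)))
top = max(prob, key=prob.get)             # highest-probability expression
```

Sorting `prob` from high to low gives exactly the ranked probability list that the method returns to the user.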
Step 5. The calibrated cartoon character picture is recognized by the trained deep convolutional neural network, and the recognition result is output, shown in text format on the screen as the probabilities corresponding to the five most probable expressions, sorted from high to low. Specifically:
The user uploads a facial expression image of a cartoon character. Any commonly used contemporary animated character may be used; the image shows one of the character's usual expressions (one of the six expressions the model requires: angry, happy, sad, surprised, disgusted, or neutral). Two cartoon facial expression pictures were randomly selected (see Fig. 4): one cartoon character showing an angry expression and one showing a happy expression.
The image is preprocessed into the standard-format picture: a 256*256-pixel image, with color pictures converted into grayscale images with gray values in [0,255], ready for the subsequent recognition.
The standardized input image is recognized by the trained deep convolutional neural network, and the recognition result is output, presented as the probabilities occupied by the top five expression classes.
Both selected pictures yielded satisfactory recognition results; the specific results are shown in the table below:
Table 1. Recognition results for two randomly selected cartoon facial expression pictures
The test results for each of the six expression classes of the FERG picture library used in training are shown in the following table:
Table 2. Test results for each of the six expression classes of the FERG picture library used in training
          Angry   Happy   Sad     Surprised  Disgusted  Neutral
Angry     0.9807  0.0014  0.0006  0.0001     0.0029     0.0143
Happy     0.0000  0.9206  0.0068  0.0001     0.0004     0.0722
Sad       0.0000  0.0015  0.8711  0.0000     0.1258     0.0016
Surprised 0.0000  0.0001  0.0506  0.9394     0.0001     0.0008
Disgusted 0.0002  0.0021  0.0958  0.0396     0.8615     0.0008
Neutral   0.0002  0.0856  0.0215  0.0011     0.0093     0.8806
Testing shows that even with various characters that differ greatly, expression recognition achieves a rather high accuracy rate, demonstrating strong practicality.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be regarded as an equivalent replacement and is included within the protection scope of the present invention.

Claims (10)

1. A method for recognizing the facial expressions of cartoon characters, characterized by comprising the following steps:
Training stage:
S1. extracting the expressions of each cartoon character role from the cartoon facial expression database FERG to build a picture library, classifying and integrating the library, and classifying the cartoon facial expressions;
S2. preprocessing the cartoon expression pictures;
S3. dividing the preprocessed pictures into test samples and training samples, then converting the training set and the test set into LMDB format respectively;
S4. training a deep convolutional neural network with the resulting training set and test set;
wherein, for the recognition stage, the trained convolutional neural network has the LMDB-type data input of its data layer replaced by a data input described with dim parameters, and the output layer Softmax With Loss of the training network replaced by Softmax, the output changing from loss to prob;
Recognition stage:
S5. selecting an image of a cartoon facial expression to be recognized;
S6. preprocessing the image;
S7. performing feature matching between the preprocessed image and the trained convolutional neural network model adapted to cartoon facial expressions;
S8. recognizing the cartoon character picture with the trained deep convolutional neural network and outputting the recognition result.
2. The method for recognizing the facial expressions of cartoon characters according to claim 1, characterized in that when training the deep convolutional neural network, the initial learning rate is set to 0.0001 with the "step" policy: every 4000 training iterations the learning rate decreases by 0.00001, and the maximum number of training iterations is set to 100000.
3. The method for recognizing the facial expressions of cartoon characters according to claim 1, characterized in that the specific structure of the deep convolutional neural network comprises:
data layer - first convolutional layer - first local response normalization layer - first pooling layer - second convolutional layer - second pooling layer - second local response normalization layer - third convolutional layer - fourth convolutional layer - fourth pooling layer - multiple fully connected layers, with an Accuracy layer and a Loss layer describing the network's current training characteristics added on top of the last fully connected layer; meanwhile, the data layer at the bottom also outputs directly to the Accuracy and Loss layers.
4. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that ReLU is used as the activation function to introduce nonlinearity for all convolutional layers and/or fully connected layers.
5. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that the deep convolutional neural network alternates between max pooling and average pooling.
6. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that the first pooling layer uses max pooling, the second pooling layer uses average pooling, and the fourth pooling layer uses max pooling.
7. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that the pooling window size of the first pooling layer is 3 and its pooling stride is 1.
8. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that a Dropout layer is added on the fully connected layers, temporarily discarding neural network units with a certain probability.
9. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that the deep convolutional neural network comprises 3 sequentially connected fully connected layers.
10. The method for recognizing the facial expressions of cartoon characters according to claim 3, characterized in that in step S5 the cartoon character picture is calibrated before being recognized.
CN201710257911.6A 2017-04-19 2017-04-19 Method for recognizing the facial expressions of cartoon characters Pending CN107180225A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710257911.6A CN107180225A (en) 2017-04-19 2017-04-19 Method for recognizing the facial expressions of cartoon characters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710257911.6A CN107180225A (en) 2017-04-19 2017-04-19 Method for recognizing the facial expressions of cartoon characters

Publications (1)

Publication Number Publication Date
CN107180225A true CN107180225A (en) 2017-09-19

Family

ID=59831957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710257911.6A Pending CN107180225A (en) 2017-04-19 2017-04-19 Method for recognizing the facial expressions of cartoon characters

Country Status (1)

Country Link
CN (1) CN107180225A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742117A (en) * 2017-11-15 2018-02-27 北京工业大学 Facial expression recognition method based on an end-to-end model
CN108009581A (en) * 2017-11-30 2018-05-08 中国地质大学(武汉) CNN-based crack detection method, device and storage device
CN108062170A (en) * 2017-12-15 2018-05-22 南京师范大学 Multi-class human posture recognition method based on convolutional neural networks and intelligent terminals
CN108491866A (en) * 2018-03-06 2018-09-04 平安科技(深圳)有限公司 Pornographic image identification method, electronic device and readable storage medium
CN110276365A (en) * 2018-03-16 2019-09-24 中国科学院遥感与数字地球研究所 Training method and classification method of a convolutional neural network for SAR image sea-ice classification
CN111227789A (en) * 2018-11-29 2020-06-05 百度在线网络技术(北京)有限公司 Human health monitoring method and device
CN113469950A (en) * 2021-06-08 2021-10-01 海南电网有限责任公司电力科学研究院 Method for diagnosing abnormal heating defects of composite insulators based on deep learning


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334839A (en) * 2007-06-29 2008-12-31 佳能株式会社 Image-processing apparatus and method
US20160275341A1 (en) * 2015-03-18 2016-09-22 Adobe Systems Incorporated Facial Expression Capture for Character Animation
CN105447473A (en) * 2015-12-14 2016-03-30 江苏大学 PCANet-CNN-based arbitrary attitude facial expression recognition method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KUANG LIU: "Facial Expression Recognition with CNN Ensemble", International Conference on Cyberworlds *
卢官明: "A convolutional neural network for facial expression recognition", Journal of Nanjing University of Posts and Telecommunications *
陈向震: "Research on facial expression recognition algorithms based on deep learning", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742117A (en) * 2017-11-15 2018-02-27 北京工业大学 Facial expression recognition method based on an end-to-end model
CN108009581A (en) * 2017-11-30 2018-05-08 中国地质大学(武汉) CNN-based crack detection method, device and storage device
CN108062170A (en) * 2017-12-15 2018-05-22 南京师范大学 Multi-class human posture recognition method based on convolutional neural networks and intelligent terminals
CN108491866A (en) * 2018-03-06 2018-09-04 平安科技(深圳)有限公司 Pornographic image identification method, electronic device and readable storage medium
CN108491866B (en) * 2018-03-06 2022-09-13 平安科技(深圳)有限公司 Pornographic picture identification method, electronic device and readable storage medium
CN110276365A (en) * 2018-03-16 2019-09-24 中国科学院遥感与数字地球研究所 Training method and classification method of a convolutional neural network for SAR image sea-ice classification
CN111227789A (en) * 2018-11-29 2020-06-05 百度在线网络技术(北京)有限公司 Human health monitoring method and device
CN113469950A (en) * 2021-06-08 2021-10-01 海南电网有限责任公司电力科学研究院 Method for diagnosing abnormal heating defects of composite insulators based on deep learning

Similar Documents

Publication Publication Date Title
CN107180225A (en) Method for recognizing the facial expressions of cartoon characters
Chen et al. Global context-aware progressive aggregation network for salient object detection
CN110399821B (en) Customer satisfaction acquisition method based on facial expression recognition
Liu Research on the application of multimedia elements in visual communication art under the Internet background
CN110532900A (en) Facial expression recognition method based on U-Net and LS-CNN
CN111832546B (en) Lightweight natural scene text recognition method
CN108304823A (en) Facial expression recognition method based on a dual-convolution CNN and a long short-term memory network
CN106407889A (en) Video human-interaction action recognition method based on an optical-flow deep learning model
Sun et al. Using facial expression to detect emotion in e-learning system: A deep learning method
CN106295506A (en) Age recognition method based on integrated convolutional neural networks
CN105654066A (en) Vehicle identification method and device
CN104679863A (en) Deep-learning-based method and system for image retrieval by image
CN110110719A (en) Object detection method based on attention-region convolutional neural networks
CN107016415A (en) Color-image semantic classification method based on a fully convolutional network
CN110263822A (en) Image sentiment analysis method based on multi-task learning
CN109785227A (en) Facial emotion color transfer method based on convolutional neural networks
Chao The fractal artistic design based on interactive genetic algorithm
CN110826510A (en) Three-dimensional teaching classroom implementation method based on expression and emotion computing
CN116758477A (en) Kitchen personnel dressing detection method based on an improved YOLOv7 model
CN111553424A (en) CGAN-based image data balancing and classification method
Wang The Impact of Animation and Film English Education Environment on Students' Psychological Health
CN106845391B (en) Atmosphere recognition method and system in a home environment
CN107967468A (en) Auxiliary control system based on a driverless vehicle
CN108280400A (en) Facial expression recognition method based on a deep residual network
Zhu et al. Recognition and analysis of kawaii style for fashion clothing through deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170919