CN110866962A - Virtual portrait and expression synchronization method based on convolutional neural network - Google Patents

Virtual portrait and expression synchronization method based on convolutional neural network Download PDF

Info

Publication number
CN110866962A
Authority
CN
China
Prior art keywords
portrait
expression
neural network
convolutional neural
portrait data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911138699.7A
Other languages
Chinese (zh)
Other versions
CN110866962B (en)
Inventor
吕云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Weiai New Economic And Technological Research Institute Co Ltd
Original Assignee
Chengdu Weiai New Economic And Technological Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Weiai New Economic And Technological Research Institute Co Ltd filed Critical Chengdu Weiai New Economic And Technological Research Institute Co Ltd
Priority to CN201911138699.7A priority Critical patent/CN110866962B/en
Publication of CN110866962A publication Critical patent/CN110866962A/en
Application granted granted Critical
Publication of CN110866962B publication Critical patent/CN110866962B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a virtual portrait and expression synchronization method based on a convolutional neural network, which comprises the following steps: constructing a portrait expression with three-dimensional software and constructing dynamic feature points on the portrait expression; acquiring portrait data with a camera and preprocessing the data to obtain preprocessed portrait data; constructing a convolutional neural network and training it with manually labelled portrait data; inputting the preprocessed portrait data into the trained convolutional neural network to identify the feature points of the portrait data; and matching the dynamic feature points against the feature points of the portrait data to obtain the synchronized expression of the portrait data. The invention matches the dynamic feature points with the facial feature points of the input portrait data and moves the dynamic feature points according to the matching result, thereby accurately synchronizing the portrait expression with the virtual expression.

Description

Virtual portrait and expression synchronization method based on convolutional neural network
Technical Field
The invention belongs to the field of virtual portrait and expression synchronization, and particularly relates to a virtual portrait and expression synchronization method based on a convolutional neural network.
Background
With the development of information technology, social software has proliferated and users communicate ever more frequently, yet text alone cannot fully convey a person's emotions. Existing social software therefore provides a variety of emoticon/sticker packs through which users express their emotions. In the prior art, however, these expressions are fixed, generally showing a static expression of one part of a face, and they are supplied by vendors, so they cannot be synchronized with the user's expression in real time, and searching for a suitable one is time-consuming.
Disclosure of Invention
To address the above defects in the prior art, the present invention provides a virtual portrait and expression synchronization method based on a convolutional neural network, which solves the problem that a user's expression and the virtual expression cannot be synchronized in real time.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a virtual portrait and expression synchronization method based on a convolutional neural network is characterized by comprising the following steps:
s1, constructing a portrait expression through three-dimensional software, and constructing dynamic feature points on the portrait expression;
s2, acquiring portrait data through a camera, and preprocessing the portrait data to obtain preprocessed portrait data;
s3, constructing a convolutional neural network, and training the convolutional neural network by a method of manually marking portrait data;
s4, inputting the preprocessed portrait data into the trained convolutional neural network, and identifying the feature points of the portrait data to obtain the feature points of the portrait data;
and S5, matching the dynamic characteristic points according to the characteristic points of the portrait data to obtain the synchronous expression of the portrait data.
Further, the specific method for constructing the dynamic feature points on the portrait expression in step S1 is as follows: constructing a portrait expression through three-dimensional software, and constructing dynamic feature points at the mouth, nose, eyes and eyebrows of the portrait expression.
Further, the step S2 includes the following sub-steps:
s2.1, acquiring portrait data through a camera to obtain portrait data;
s2.2, carrying out normalization preprocessing on the portrait data to obtain preprocessed portrait data.
Further, the convolutional neural network in step S3 includes an input layer, a first convolutional layer, a second convolutional layer, an average pooling layer, a third convolutional layer, a first maximum pooling layer, a fourth convolutional layer, a second maximum pooling layer, a fully-connected layer, and an output layer, which are connected in sequence;
the first convolutional layer has a 3 × 3 kernel and a stride of 2; the average pooling layer has a 2 × 2 window and a stride of 2; the second, third and fourth convolutional layers all have 3 × 3 kernels and a stride of 1.
Further, the step S3 includes the following sub-steps:
s3.1, constructing a convolutional neural network, and collecting a plurality of training portrait data;
s3.2, marking the characteristic points of the portrait data in a manual marking mode to obtain label image data;
s3.3, inputting the training portrait data and the label image data into a convolutional neural network;
S3.4, training the convolutional neural network by taking the matching degree between the feature points of the original training portrait data and those of the label images as the loss value, with minimization of the loss value as the objective;
S3.5, optimizing the network parameters of the convolutional neural network with the Adam algorithm; training continues until the loss value is less than 0.5, at which point the current network parameters are stored as the final network parameters, giving the trained convolutional neural network.
Further, the expressions in the training portrait data in step S3.1 include a mouth-skimming (grimacing) expression, a smiling expression, a laughing expression, a frowning expression, a mouth-opening expression, a blinking expression, and a pouting expression.
Further, the step S5 includes the following sub-steps:
s5.1, matching the dynamic characteristic points with the characteristic points of the portrait data according to the positions of the characteristic points of the portrait data;
s5.2, judging whether the matching degree of the dynamic characteristic points and the characteristic points of the portrait data exceeds 98%, if so, entering the step S5.3, otherwise, returning to the step S5.1;
and S5.3, changing the position of the dynamic feature point according to the matching result and the portrait data feature point to finish the synchronous expression of the portrait data.
Further, the step S5.1 comprises the following sub-steps:
s5.1.1, dividing face regions of the portrait data and the portrait expressions, wherein the face regions comprise eyebrows, a nose, a mouth, eyes and other regions;
s5.1.2, matching the portrait data with the feature points in the same facial area in the portrait expression according to the frame information of the facial area to obtain a matching result.
The invention has the beneficial effects that:
(1) The invention constructs dynamic feature points at the mouth, nose, eyes and eyebrows of the portrait expression and completes the synchronization of the virtual portrait and the expression through these dynamic feature points; because the dynamic feature points are placed at the parts of the face most important for displaying expression, the set of expressions that can be synchronized is very rich.
(2) After the portrait data are collected, they are normalized, which reduces the subsequent amount of computation and improves the efficiency of expression synchronization.
(3) The invention trains the convolutional neural network with a variety of expressions, ensuring the accuracy of the network in identifying the feature points of the portrait data.
(4) By constructing and training the convolutional neural network, the invention can accurately identify the feature points of the portrait data.
(5) The invention sets dynamic feature points, identifies the facial feature points of the input portrait data, matches the dynamic feature points with the facial feature points of the input portrait data, and moves the dynamic feature points according to the matching result, thereby accurately completing the synchronization of the portrait expression and the virtual expression.
Drawings
Fig. 1 is a flowchart of a virtual portrait and expression synchronization method based on a convolutional neural network according to the present invention.
Fig. 2 is a structural diagram of a convolutional neural network according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate the understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and everything produced using the inventive concept is protected.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a virtual portrait and expression synchronization method based on a convolutional neural network includes the following steps:
s1, constructing a portrait expression through three-dimensional software, and constructing dynamic feature points on the portrait expression;
s2, acquiring portrait data through a camera, and preprocessing the portrait data to obtain preprocessed portrait data;
s3, constructing a convolutional neural network, and training the convolutional neural network by a method of manually marking portrait data;
s4, inputting the preprocessed portrait data into the trained convolutional neural network, and identifying the feature points of the portrait data to obtain the feature points of the portrait data;
and S5, matching the dynamic characteristic points according to the characteristic points of the portrait data to obtain the synchronous expression of the portrait data.
In this embodiment, the portrait expression is constructed with the three-dimensional modelling software FaceGen Modeller, and the camera mainly captures the user's facial expression when collecting portrait data.
The specific method for constructing the dynamic feature points on the portrait expression in step S1 is as follows: and constructing a portrait expression through three-dimensional software, and constructing dynamic feature points at the mouth, nose, eyes and eyebrows of the portrait expression.
The step S2 includes the following sub-steps:
s2.1, acquiring portrait data through a camera to obtain portrait data;
s2.2, carrying out normalization preprocessing on the portrait data to obtain preprocessed portrait data.
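The patent does not specify the normalization scheme used in step S2.2. The following is a minimal sketch of one common choice, min-max scaling of pixel values to [0, 1] after resizing to a fixed input size; the 128 × 128 target size, the use of OpenCV, and the function name preprocess_portrait are illustrative assumptions rather than part of the original disclosure.

```python
import cv2
import numpy as np

def preprocess_portrait(frame: np.ndarray, size: int = 128) -> np.ndarray:
    """Resize a captured frame and min-max normalize pixel values to [0, 1].

    The target size and the normalization formula are assumptions; the patent
    only states that the collected portrait data is normalized.
    """
    resized = cv2.resize(frame, (size, size), interpolation=cv2.INTER_AREA)
    resized = resized.astype(np.float32)
    lo, hi = resized.min(), resized.max()
    if hi > lo:
        resized = (resized - lo) / (hi - lo)
    else:
        resized = np.zeros_like(resized)
    return resized
```

A single captured frame would then be passed as preprocess_portrait(frame) before feature point recognition.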
As shown in fig. 2, the convolutional neural network in step S3 includes an input layer, a first convolutional layer, a second convolutional layer, an average pooling layer, a third convolutional layer, a first maximum pooling layer, a fourth convolutional layer, a second maximum pooling layer, a fully-connected layer, and an output layer, which are connected in sequence;
the first convolutional layer has a 3 × 3 kernel and a stride of 2; the average pooling layer has a 2 × 2 window and a stride of 2; the second, third and fourth convolutional layers all have 3 × 3 kernels and a stride of 1.
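For illustration only, the layer order and the stated kernel sizes and strides can be written down as a PyTorch module such as the sketch below; the channel widths, the 2 × 2 max-pooling windows, the ReLU activations, the 128 × 128 input resolution and the 68-landmark output are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

class ExpressionKeypointCNN(nn.Module):
    """CNN following the layer order described in the patent:
    conv1 (3x3, stride 2) -> conv2 (3x3, stride 1) -> 2x2 average pooling ->
    conv3 (3x3, stride 1) -> max pooling -> conv4 (3x3, stride 1) ->
    max pooling -> fully connected -> output.
    Channel widths, pooling windows, input size and output size are assumed.
    """

    def __init__(self, in_channels: int = 3, num_landmarks: int = 68):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),  # first convolutional layer
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),           # second convolutional layer
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=2, stride=2),                           # average pooling layer
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),           # third convolutional layer
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),                           # first max pooling layer
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),          # fourth convolutional layer
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),                           # second max pooling layer
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),        # fully connected layer (assumes a 128x128 input)
            nn.ReLU(inplace=True),
            nn.Linear(256, num_landmarks * 2),  # output layer: (x, y) per feature point
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))
```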
The step S3 includes the following sub-steps:
s3.1, constructing a convolutional neural network, and collecting a plurality of training portrait data;
s3.2, marking the characteristic points of the portrait data in a manual marking mode to obtain label image data;
s3.3, inputting the training portrait data and the label image data into a convolutional neural network;
S3.4, training the convolutional neural network by taking the matching degree between the feature points of the original training portrait data and those of the label images as the loss value, with minimization of the loss value as the objective;
S3.5, optimizing the network parameters of the convolutional neural network with the Adam algorithm; training continues until the loss value is less than 0.5, at which point the current network parameters are stored as the final network parameters, giving the trained convolutional neural network.
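A minimal training-loop sketch for these steps is shown below. Since the patent does not define the "matching degree" loss precisely, a mean-squared error between predicted and manually labelled feature point coordinates is used as a stand-in, and the batch size, learning rate and epoch cap are assumptions; only the Adam optimizer and the loss-below-0.5 stopping rule come from the text.

```python
import torch
from torch.utils.data import DataLoader

def train_until_converged(model, dataset, loss_threshold: float = 0.5,
                          lr: float = 1e-3, max_epochs: int = 200):
    """Train with Adam until the average loss falls below the threshold
    (0.5 in the patent). MSE over landmark coordinates stands in for the
    unspecified matching-degree loss."""
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()

    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for images, landmarks in loader:       # landmarks: (B, num_landmarks * 2)
            optimizer.zero_grad()
            pred = model(images)
            loss = criterion(pred, landmarks)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item() * images.size(0)
        epoch_loss /= len(dataset)
        if epoch_loss < loss_threshold:        # stopping criterion from step S3.5
            break
    return model
```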
The expressions in the training portrait data in step S3.1 include a mouth-skimming (grimacing) expression, a smiling expression, a laughing expression, a frowning expression, a mouth-opening expression, a blinking expression and a pouting expression.
In this embodiment, face detection is performed before the portrait data is input into the trained convolutional neural network; the specific method is as follows (a code sketch of this sliding-window procedure is given after the steps below):
A1, setting the detection window to 24 × 24 pixels, the translation step to 2 pixels, k to 1, and the initial position of the detection window to the upper-left corner of the portrait data;
A2, inputting the sub-image in the detection window into a cascade classifier for face detection;
A3, translating the detection window 2 pixels to the right, and inputting the sub-image in the detection window into the cascade classifier for face detection;
A4, repeating step A3 until the detection window reaches the right boundary of the portrait data; judging whether the sub-images contain faces, and if so, recording the information of those sub-images, otherwise entering step A5;
A5, translating the detection window downwards by 2 × k pixels and performing face detection with the method of steps A2-A4;
A6, incrementing the count value k by one and repeating step A5;
A7, setting a detection factor x for each sub-image in which a face was detected, with x initialized to 1;
A8, judging whether the overlapping area of each pair of face-containing sub-images exceeds 75% of the sub-image area; if so, incrementing the detection factor x of the corresponding sub-images by 1, otherwise entering step A9;
A9, eliminating the sub-images whose detection factor x is smaller than 2, and weighting and merging the detection windows of the remaining sub-images according to their detection factors to obtain the face detection result.
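The sketch below mirrors steps A1-A9 as written, including the 2 × k downward translation and the 75% overlap rule. The classify callback stands in for the cascade classifier, which the patent does not detail, and the single fixed 24 × 24 scale and the weighted-average merge of window positions are assumptions.

```python
import numpy as np

def sliding_window_face_detect(image: np.ndarray, classify, win: int = 24, step: int = 2):
    """Slide a 24x24 window in 2-pixel horizontal steps (A1-A4), move down by
    2*k after each row (A5-A6), then merge heavily overlapping hits (A7-A9).

    `classify` takes a 24x24 sub-image and returns True if a face is detected.
    """
    h, w = image.shape[:2]
    hits = []
    k, y = 1, 0
    while y + win <= h:                        # A5/A6: next row is 2*k pixels lower
        for x in range(0, w - win + 1, step):  # A2-A4: scan the row left to right
            if classify(image[y:y + win, x:x + win]):
                hits.append((x, y))
        y += step * k
        k += 1

    # A7-A8: detection factor counts windows overlapping by more than 75%
    factors = [1] * len(hits)
    for i in range(len(hits)):
        for j in range(i + 1, len(hits)):
            dx = win - abs(hits[i][0] - hits[j][0])
            dy = win - abs(hits[i][1] - hits[j][1])
            if dx > 0 and dy > 0 and dx * dy > 0.75 * win * win:
                factors[i] += 1
                factors[j] += 1

    # A9: discard windows with factor < 2 and merge the rest by weighted average
    kept = [(p, f) for p, f in zip(hits, factors) if f >= 2]
    if not kept:
        return None
    total = sum(f for _, f in kept)
    cx = sum(p[0] * f for p, f in kept) / total
    cy = sum(p[1] * f for p, f in kept) / total
    return int(cx), int(cy), win, win
```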
In this embodiment, if the face detection result shows that the portrait data does not contain a face, the portrait data is discarded.
The step S5 includes the following sub-steps:
s5.1, matching the dynamic characteristic points with the characteristic points of the portrait data according to the positions of the characteristic points of the portrait data;
s5.2, judging whether the matching degree of the dynamic characteristic points and the characteristic points of the portrait data exceeds 98%, if so, entering the step S5.3, otherwise, returning to the step S5.1;
and S5.3, changing the position of the dynamic feature point according to the matching result and the portrait data feature point to finish the synchronous expression of the portrait data.
Step S5.1 comprises the following sub-steps:
s5.1.1, dividing face regions of the portrait data and the portrait expressions, wherein the face regions comprise eyebrows, a nose, a mouth, eyes and other regions;
s5.1.2, matching the portrait data with the feature points in the same facial area in the portrait expression according to the frame information of the facial area to obtain a matching result.
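A region-by-region matching sketch is given below. The patent does not define how the matching degree is computed or how the dynamic feature points are moved, so the distance-based degree, the region names and the direct copying of detected positions onto the dynamic points are illustrative assumptions; only the per-region matching and the 98% threshold come from steps S5.1-S5.3.

```python
import numpy as np

# Face regions assumed for illustration; the patent divides the face into
# eyebrows, nose, mouth, eyes and other regions.
REGIONS = ("eyebrows", "nose", "mouth", "eyes", "other")

def match_and_drive(dynamic_pts: dict, detected_pts: dict, threshold: float = 0.98):
    """Compare the virtual portrait's dynamic feature points with the detected
    feature points per facial region (S5.1), compute a simple matching degree,
    and move the dynamic points once the degree exceeds 98% (S5.2-S5.3).

    Both arguments map region name -> (N, 2) arrays of normalized (x, y)
    coordinates; the distance-based matching degree is a stand-in for the
    patent's unspecified measure.
    """
    degrees = []
    for region in REGIONS:
        a, b = dynamic_pts.get(region), detected_pts.get(region)
        if a is None or b is None or len(a) != len(b):
            continue
        # Matching degree: 1 minus the mean normalized point-to-point distance.
        dist = np.linalg.norm(np.asarray(a) - np.asarray(b), axis=1).mean()
        degrees.append(max(0.0, 1.0 - dist))
    degree = float(np.mean(degrees)) if degrees else 0.0

    if degree > threshold:
        # S5.3: update the dynamic feature points so the virtual expression
        # follows the captured expression.
        for region in REGIONS:
            if region in dynamic_pts and region in detected_pts:
                dynamic_pts[region] = np.asarray(detected_pts[region], dtype=float)
    return degree
```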
Face detection is carried out before the portrait data is input into the trained convolutional neural network, and portrait data that contains no face is discarded; this reduces the amount of computation in the whole process, and the face detection result accelerates the feature point recognition.
The invention constructs dynamic feature points at the mouth, nose, eyes and eyebrows of the portrait expression and completes the synchronization of the virtual portrait and the expression through these dynamic feature points; because the dynamic feature points are placed at the parts of the face most important for displaying expression, the set of expressions that can be synchronized is very rich. After the portrait data are collected, they are normalized, which reduces the subsequent amount of computation and improves the efficiency of expression synchronization.
The invention trains the convolutional neural network with a variety of expressions, ensuring the accuracy of the network in identifying the feature points of the portrait data. By constructing and training the convolutional neural network, the invention can accurately identify the feature points of the portrait data. The invention sets dynamic feature points, identifies the facial feature points of the input portrait data, matches the dynamic feature points with those facial feature points, and moves the dynamic feature points according to the matching result, thereby accurately completing the synchronization of the portrait expression with the virtual expression.

Claims (8)

1. A virtual portrait and expression synchronization method based on a convolutional neural network is characterized by comprising the following steps:
s1, constructing a portrait expression through three-dimensional software, and constructing dynamic feature points on the portrait expression;
s2, acquiring portrait data through a camera, and preprocessing the portrait data to obtain preprocessed portrait data;
s3, constructing a convolutional neural network, and training the convolutional neural network by a method of manually marking portrait data;
s4, inputting the preprocessed portrait data into the trained convolutional neural network, and identifying the feature points of the portrait data to obtain the feature points of the portrait data;
and S5, matching the dynamic characteristic points according to the characteristic points of the portrait data to obtain the synchronous expression of the portrait data.
2. The virtual portrait and expression synchronization method based on convolutional neural network of claim 1, wherein the specific method for constructing dynamic feature points on the portrait expression in step S1 is as follows: and constructing a portrait expression through three-dimensional software, and constructing dynamic feature points at the mouth, nose, eyes and eyebrows of the portrait expression.
3. The virtual portrait and expression synchronization method based on convolutional neural network of claim 1, wherein the step S2 includes the following substeps:
s2.1, acquiring portrait data through a camera to obtain portrait data;
s2.2, carrying out normalization preprocessing on the portrait data to obtain preprocessed portrait data.
4. The virtual portrait and expression synchronization method based on convolutional neural network of claim 1, wherein the convolutional neural network in step S3 includes an input layer, a first convolutional layer, a second convolutional layer, an average pooling layer, a third convolutional layer, a first maximum pooling layer, a fourth convolutional layer, a second maximum pooling layer, a fully-connected layer and an output layer which are connected in sequence;
the first convolutional layer has a 3 × 3 kernel and a stride of 2; the average pooling layer has a 2 × 2 window and a stride of 2; the second, third and fourth convolutional layers all have 3 × 3 kernels and a stride of 1.
5. The virtual portrait and expression synchronization method based on convolutional neural network of claim 1, wherein the step S3 includes the following substeps:
s3.1, constructing a convolutional neural network, and collecting a plurality of training portrait data;
s3.2, marking the characteristic points of the portrait data in a manual marking mode to obtain label image data;
s3.3, inputting the training portrait data and the label image data into a convolutional neural network;
S3.4, training the convolutional neural network by taking the matching degree between the feature points of the original training portrait data and those of the label images as the loss value, with minimization of the loss value as the objective;
S3.5, optimizing the network parameters of the convolutional neural network with the Adam algorithm, training until the loss value is less than 0.5, and storing the network parameters at that moment as the final network parameters, so as to obtain the trained convolutional neural network.
6. The convolutional neural network-based virtual portrait and expression synchronization method as claimed in claim 5, wherein the expression in the training portrait data in step S3.1 includes a mouth-skimming expression, a smiling expression, a laughing expression, a frowning expression, a mouth-opening expression, a blinking expression and a puckering expression.
7. The virtual portrait and expression synchronization method based on convolutional neural network of claim 1, wherein the step S5 includes the following substeps:
s5.1, matching the dynamic characteristic points with the characteristic points of the portrait data according to the positions of the characteristic points of the portrait data;
s5.2, judging whether the matching degree of the dynamic characteristic points and the characteristic points of the portrait data exceeds 98%, if so, entering the step S5.3, otherwise, returning to the step S5.1;
and S5.3, changing the position of the dynamic feature point according to the matching result and the portrait data feature point to finish the synchronous expression of the portrait data.
8. The virtual portrait and expression synchronization method based on the convolutional neural network as claimed in claim 1, wherein the step S5.1 comprises the following substeps:
s5.1.1, dividing face regions of the portrait data and the portrait expressions, wherein the face regions comprise eyebrows, a nose, a mouth, eyes and other regions;
s5.1.2, matching the portrait data with the feature points in the same facial area in the portrait expression according to the frame information of the facial area to obtain a matching result.
CN201911138699.7A 2019-11-20 2019-11-20 Virtual portrait and expression synchronization method based on convolutional neural network Active CN110866962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911138699.7A CN110866962B (en) 2019-11-20 2019-11-20 Virtual portrait and expression synchronization method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911138699.7A CN110866962B (en) 2019-11-20 2019-11-20 Virtual portrait and expression synchronization method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN110866962A true CN110866962A (en) 2020-03-06
CN110866962B CN110866962B (en) 2023-06-16

Family

ID=69655684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911138699.7A Active CN110866962B (en) 2019-11-20 2019-11-20 Virtual portrait and expression synchronization method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110866962B (en)


Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998013782A1 (en) * 1996-09-26 1998-04-02 Interval Research Corporation Affect-based robot communication methods and systems
CN101620669A (en) * 2008-07-01 2010-01-06 邹采荣 Method for synchronously recognizing identities and expressions of human faces
US20110029471A1 (en) * 2009-07-30 2011-02-03 Nec Laboratories America, Inc. Dynamically configurable, multi-ported co-processor for convolutional neural networks
CN101944163A (en) * 2010-09-25 2011-01-12 德信互动科技(北京)有限公司 Method for realizing expression synchronization of game character through capturing face expression
US20150170021A1 (en) * 2013-12-18 2015-06-18 Marc Lupon Reconfigurable processing unit
CN105512624A (en) * 2015-12-01 2016-04-20 天津中科智能识别产业技术研究院有限公司 Smile face recognition method and device for human face image
CN106096538A (en) * 2016-06-08 2016-11-09 中国科学院自动化研究所 Face identification method based on sequencing neural network model and device
CN107122705A (en) * 2017-03-17 2017-09-01 中国科学院自动化研究所 Face critical point detection method based on three-dimensional face model
US20180349758A1 (en) * 2017-06-06 2018-12-06 Via Alliance Semiconductor Co., Ltd. Computation method and device used in a convolutional neural network
CN107423707A (en) * 2017-07-25 2017-12-01 深圳帕罗人工智能科技有限公司 A kind of face Emotion identification method based under complex environment
CN107729872A (en) * 2017-11-02 2018-02-23 北方工业大学 Facial expression recognition method and device based on deep learning
CN108256426A (en) * 2017-12-15 2018-07-06 安徽四创电子股份有限公司 A kind of facial expression recognizing method based on convolutional neural networks
US20190272462A1 (en) * 2018-02-28 2019-09-05 Honda Research Institute Europe Gmbh Unsupervised learning of metric representations from slow features
CN108304826A (en) * 2018-03-01 2018-07-20 河海大学 Facial expression recognizing method based on convolutional neural networks
CN108764031A (en) * 2018-04-17 2018-11-06 平安科技(深圳)有限公司 Identify method, apparatus, computer equipment and the storage medium of face
CN109753922A (en) * 2018-12-29 2019-05-14 北京建筑大学 Anthropomorphic robot expression recognition method based on dense convolutional neural networks
CN109934204A (en) * 2019-03-22 2019-06-25 重庆邮电大学 A kind of facial expression recognizing method based on convolutional neural networks
CN110414305A (en) * 2019-04-23 2019-11-05 苏州闪驰数控***集成有限公司 Artificial intelligence convolutional neural networks face identification system
CN110175534A (en) * 2019-05-08 2019-08-27 长春师范大学 Teaching assisting system based on multitask concatenated convolutional neural network
CN110400251A (en) * 2019-06-13 2019-11-01 深圳追一科技有限公司 Method for processing video frequency, device, terminal device and storage medium
CN209462413U (en) * 2019-07-12 2019-10-01 成都威爱新经济技术研究院有限公司 A kind of 5G smart classroom system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"《基于卷积神经网络的人脸表情识别》" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111347845A (en) * 2020-03-17 2020-06-30 北京百度网讯科技有限公司 Electrochromic glass adjusting method and device and electronic equipment
CN111347845B (en) * 2020-03-17 2021-09-21 北京百度网讯科技有限公司 Electrochromic glass adjusting method and device and electronic equipment
CN111429567A (en) * 2020-03-23 2020-07-17 成都威爱新经济技术研究院有限公司 Digital virtual human eyeball real environment reflection method
CN113379880A (en) * 2021-07-02 2021-09-10 福建天晴在线互动科技有限公司 Automatic expression production method and device
CN113379880B (en) * 2021-07-02 2023-08-11 福建天晴在线互动科技有限公司 Expression automatic production method and device

Also Published As

Publication number Publication date
CN110866962B (en) 2023-06-16

Similar Documents

Publication Publication Date Title
CN109376582B (en) Interactive face cartoon method based on generation of confrontation network
CN108564007B (en) Emotion recognition method and device based on expression recognition
WO2019174439A1 (en) Image recognition method and apparatus, and terminal and storage medium
EP3885965B1 (en) Image recognition method based on micro facial expressions, apparatus and related device
CN107333071A (en) Video processing method and device, electronic equipment and storage medium
CN110929569B (en) Face recognition method, device, equipment and storage medium
CN110866962B (en) Virtual portrait and expression synchronization method based on convolutional neural network
CN110309254A (en) Intelligent robot and man-machine interaction method
CN104361316B (en) Dimension emotion recognition method based on multi-scale time sequence modeling
CN102567716B (en) Face synthetic system and implementation method
CN107341435A (en) Processing method, device and the terminal device of video image
CN112800903A (en) Dynamic expression recognition method and system based on space-time diagram convolutional neural network
CN109299690B (en) Method capable of improving video real-time face recognition precision
CN113920568B (en) Face and human body posture emotion recognition method based on video image
CN107016046A (en) The intelligent robot dialogue method and system of view-based access control model displaying
CN111680550B (en) Emotion information identification method and device, storage medium and computer equipment
CN111108508B (en) Face emotion recognition method, intelligent device and computer readable storage medium
CN107911643A (en) Show the method and apparatus of scene special effect in a kind of video communication
CN109410138B (en) Method, device and system for modifying double chin
CN109598210A (en) A kind of image processing method and device
WO2021042850A1 (en) Item recommending method and related device
CN111476878A (en) 3D face generation control method and device, computer equipment and storage medium
CN111666845A (en) Small sample deep learning multi-mode sign language recognition method based on key frame sampling
CN114565602A (en) Image identification method and device based on multi-channel fusion and storage medium
CN113191216A (en) Multi-person real-time action recognition method and system based on gesture recognition and C3D network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant