CN116030526A - Emotion recognition method, system and storage medium based on multitask deep learning - Google Patents

Emotion recognition method, system and storage medium based on multitask deep learning

Info

Publication number
CN116030526A
CN116030526A
Authority
CN
China
Prior art keywords
network
emotion
vector
label
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310165454.3A
Other languages
Chinese (zh)
Other versions
CN116030526B (en)
Inventor
王金凤
郑志燊
苏志坚
黄可
李杏圆
许健恒
尤茵茵
刘星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202310165454.3A priority Critical patent/CN116030526B/en
Publication of CN116030526A publication Critical patent/CN116030526A/en
Application granted granted Critical
Publication of CN116030526B publication Critical patent/CN116030526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an emotion recognition method, system and storage medium based on multi-task deep learning. The method comprises the following steps: acquiring an emotion data set, wherein the emotion data set comprises a plurality of classroom student images in which the emotion state labels of all students are annotated; constructing a multi-task deep learning network model comprising an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features from the classroom student images with the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vectors with the image reconstruction network; predicting emotion state labels from the characterization vectors with the multi-task label generation network; training the multi-task deep learning network model with the emotion data set, calculating the loss function and updating the weights of the model; and inputting the image to be recognized into the trained multi-task deep learning network model to predict the emotion state labels. The invention improves the accuracy of emotion recognition through the constructed network model.

Description

Emotion recognition method, system and storage medium based on multitask deep learning
Technical Field
The present invention relates to the field of image processing technologies, and in particular to an emotion recognition method, system, computer device and storage medium based on multi-task deep learning.
Background
Emotion education is an important component of quality-oriented education: by drawing on affective factors in education and teaching, teachers can guide students to perceive and understand, thereby achieving the goal of well-rounded education. In actual teaching, however, the phenomenon of emphasizing knowledge while neglecting emotion is serious: teachers tend to rely on cognitive factors alone and overlook the importance of emotional factors. As a result, a teacher may conduct teaching activities strictly according to his or her own teaching design, without considering how well students accept and respond to the knowledge being taught; students, in turn, lose interest in such one-way lecturing and cannot devote themselves to study. This negative teacher-student interaction makes teaching inefficient. In the traditional teaching process, the analysis and study of teacher-student emotional interaction mostly rely on surveys and interviews, with intervention and adjustment carried out by psychological and educational means; such approaches involve many subjective factors and their effect is not obvious. For this reason, teachers and students lack a mechanism for recognizing and evaluating interactive emotions, and so cannot clearly perceive the current teaching state.
With the development of multimedia, computer and artificial intelligence technologies, school classrooms have become increasingly informatized and intelligent. In the classroom, students' emotional states are an important indicator of their learning states; automatically revealing these states through computer analysis facilitates better emotional interaction between teachers and students and improves the quality of teaching. In a smart classroom, analyzing the emotional state of a student is a challenging task, and it is important for enhancing teacher-student interaction and determining students' classroom participation. To obtain students' emotional states, conventional methods include recognizing emotional states from electroencephalogram signal detection, recognizing them from students' facial expressions, and so on. However, such conventional methods do not recognize and comprehensively judge students' emotional states from multiple states and multiple kinds of information about the students.
Disclosure of Invention
To remedy the defects of the prior art, the invention provides an emotion recognition method and system based on multi-task deep learning. A multi-task deep learning network model is built around several tasks related to student emotion; by combining an image reconstruction network and a multi-task label generation network, effective feature information associated with the tasks can be extracted, so that the model has stronger feature learning capability, which effectively improves its accuracy in emotion state recognition. In addition, a feature domain discrimination network is added to the multi-task deep learning network model to obtain a multi-task self-adversarial deep learning network model; the feature domain discrimination network and the image characterization learning network form an adversarial network. By performing self-adversarial training on this model, the problem that the feature vector generated by the image reconstruction network is highly correlated with one of the feature vectors generated in the multi-task label generation network and weakly correlated with the others is alleviated, further improving the model's accuracy in emotion state recognition.
A first object of the present invention is to provide an emotion recognition method based on multi-task deep learning.
A second object of the present invention is to provide an emotion recognition system based on multi-task deep learning.
A third object of the present invention is to provide a computer device.
A fourth object of the present invention is to provide a storage medium.
The first object of the present invention can be achieved by adopting the following technical scheme:
an emotion recognition method based on multi-task deep learning, the method comprising:
acquiring an emotion data set; the emotion data set comprises a plurality of classroom student images, and emotion state labels of all students are marked in the classroom student images;
constructing a multi-task deep learning network model; the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features of the classroom student images by using the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vector by using the image reconstruction network; predicting an emotion state label according to the characterization vector by utilizing the multi-task label generating network;
training the multi-task deep learning network model with the emotion data set, calculating the loss function, performing standard gradient back propagation, and updating the weights of the network model;
inputting the image to be recognized into the trained multi-task deep learning network model, and predicting the emotion state label.
Further, a feature domain discrimination network is added to the multi-task deep learning network model to obtain a multi-task self-adversarial deep learning network model; the feature domain discrimination network and the image characterization learning network form an adversarial network;
and the loss function of the feature domain discrimination network is calculated, negative gradient back propagation is performed, and the weights of the network model are updated.
Further, a truth label is obtained from the characterization vector and the specific-domain feature vectors generated by the multi-task label generation network; the specific-domain feature vectors comprise an emotion domain feature vector, a head-up state domain feature vector and a sight direction domain feature vector;
inputting the characterization vector and a specific-domain feature vector into the feature domain discrimination network to obtain a discrimination result;
calculating the loss function of the feature domain discrimination network from the truth label and the discrimination result; and optimizing the weights of the image characterization learning network according to the calculated value of the loss function.
Further, the feature domain discrimination network comprises a self-attention layer, layer normalization, linear layers, batch normalization and a ReLU layer;
the discrimination results comprise an emotion feature domain discrimination result, a head-up state feature domain discrimination result and a sight direction feature domain discrimination result;
inputting the characterization vector and a specific-domain feature vector into the feature domain discrimination network to obtain a discrimination result comprises the following steps:
concatenating the characterization vector and the emotion domain feature vector, inputting the concatenated vector into the self-attention layer, and extracting associated features through the self-attention mechanism; adding the output of the self-attention layer to the concatenated vector, and processing the summed vector sequentially through a layer normalization layer, a linear layer, a batch normalization layer, a ReLU layer and a linear layer to obtain the emotion feature domain discrimination result;
similarly, concatenating the characterization vector and the head-up state domain feature vector and inputting the concatenated vector into the feature domain discrimination network to obtain the head-up state feature domain discrimination result;
and similarly, concatenating the characterization vector and the sight direction domain feature vector and inputting the concatenated vector into the feature domain discrimination network to obtain the sight direction feature domain discrimination result.
Further, the multi-task label generation network comprises an emotion label generation network, a head-up state label generation network and a sight direction label generation network;
obtaining the truth label from the characterization vector and the specific-domain feature vectors generated by the multi-task label generation network comprises the following steps:
the characterization vector and the emotion domain feature vector are each processed in turn by the FC layer and the Softmax layer of the emotion label generation network to obtain two results; if the two results are consistent, the characterization vector and the emotion domain feature vector belong to the same specific domain, and the truth label is 1; otherwise they do not belong to the same specific domain, and the truth label is 0;
similarly, the characterization vector and the head-up state domain feature vector are each processed by the FC layer and the Softmax layer of the head-up state label generation network, and the value of the truth label is obtained from the two results;
and similarly, the characterization vector and the sight direction domain feature vector are each processed by the FC layer and the Softmax layer of the sight direction label generation network, and the value of the truth label is obtained from the two results.
Further, the emotion label generation network comprises an emotion-specific domain feature learning network comprising a self-attention layer, layer normalization, linear layers and a ReLU layer;
the multi-task label generation network generating a specific-domain feature vector comprises:
inputting the characterization vector into the self-attention layer; performing a first addition of the vector output by the self-attention layer and the characterization vector; processing the summed vector sequentially through layer normalization, a linear layer, a ReLU layer and a linear layer; performing a second addition of the processed vector and the vector output by the layer normalization; and applying layer normalization to the result of the second addition to obtain the emotion domain feature vector;
the head-up state label generation network and the sight direction label generation network have the same structure as the emotion label generation network, and the head-up state domain feature vector and the sight direction domain feature vector are obtained following the generation process of the emotion domain feature vector.
Further, the predicted emotion state labels comprise a predicted emotion label, a predicted head-up state label and a predicted sight direction label;
the emotion domain feature vector is processed sequentially through the FC layer and the Softmax layer of the emotion label generation network to obtain the predicted emotion label;
the head-up state domain feature vector is processed sequentially through the FC layer and the Softmax layer of the head-up state label generation network to obtain the predicted head-up state label;
and the sight direction domain feature vector is processed sequentially through the FC layer and the Softmax layer of the sight direction label generation network to obtain the predicted sight direction label.
Further, before extracting features from the classroom student images with the image characterization learning network, the classroom student images are preprocessed, comprising:
automatically identifying the positions of the students in the image with a target detection model;
cropping the images of a plurality of students from the classroom student image according to the students' positions in the image;
and performing image enhancement on the cropped images to obtain enhanced images.
Further, the image reconstruction network is a deconvolutional neural network formed by alternating deconvolution layers and activation functions;
and the characterization vector is input into the image reconstruction network, whose output reconstructed image has the same size as the enhanced image.
The second object of the invention can be achieved by adopting the following technical scheme:
an emotion recognition system based on multi-task deep learning, the system comprising:
the data set acquisition module is used for acquiring an emotion data set; the emotion data set comprises a plurality of classroom student images, and emotion state labels of all students are marked in the classroom student images;
The multi-task deep learning network model construction module is used for constructing a multi-task deep learning network model; the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features of the classroom student images by using the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vector by using the image reconstruction network; predicting an emotion state label according to the characterization vector by utilizing the multi-task label generating network;
the network model training module is used for training the multi-task deep learning network model with the emotion data set, calculating the loss function, performing standard gradient back propagation, and updating the weights of the network model;
and the emotion state label recognition module is used for inputting the image to be recognized into the trained multi-task deep learning network model and predicting the emotion state label.
The third object of the present invention can be achieved by adopting the following technical scheme:
a computer device comprising a processor and a memory storing a program executable by the processor, wherein the processor implements the above emotion recognition method when executing the program stored in the memory.
The fourth object of the present invention can be achieved by adopting the following technical scheme:
a storage medium storing a program which, when executed by a processor, implements the emotion recognition method described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the method provided by the invention, a multi-task deep learning network model is constructed in which the image reconstruction network and the multi-task label generation network are used in combination, so that effective feature information associated with multiple tasks can be extracted; the model therefore has stronger feature learning capability, and its accuracy in emotion state recognition is effectively improved. In addition, a feature domain discrimination network is added to the multi-task deep learning network model to obtain a multi-task self-adversarial deep learning network model; the feature domain discrimination network and the image characterization learning network form an adversarial network. By performing self-adversarial training on this model, the problem that the feature vector generated by the image reconstruction network is highly correlated with one of the feature vectors generated in the multi-task label generation network and weakly correlated with the others is alleviated, further improving the model's accuracy in emotion state recognition.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from the structures shown in these drawings without inventive effort by a person skilled in the art.
Fig. 1 is a flowchart of an emotion recognition method based on multitasking deep learning according to embodiment 1 of the present invention.
Fig. 2 is a schematic structural diagram of a multi-task deep learning network model according to embodiment 1 of the present invention.
Fig. 3 is a block diagram of the Transformer layer of embodiment 1 of the present invention.
Fig. 4 is a schematic structural diagram of the multi-task self-adversarial deep learning network model according to embodiment 1 of the present invention.
Fig. 5 is a block diagram showing the structure of the feature domain discrimination network according to embodiment 1 of the present invention.
Fig. 6 is a block diagram of the emotion recognition system based on multi-task deep learning according to embodiment 3 of the present invention.
Fig. 7 is a block diagram showing the structure of a computer device according to embodiment 4 of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by those skilled in the art without making any inventive effort based on the embodiments of the present invention are within the scope of protection of the present invention. It should be understood that the description of the specific embodiments is intended for purposes of illustration only and is not intended to limit the scope of the present application.
Example 1:
Most school classrooms are equipped with surveillance cameras, through which images of students in class can be obtained. This embodiment combines deep learning and computer vision technology to construct a deep learning model that automatically locates students in classroom student images and recognizes their emotional states.
As shown in fig. 1, the emotion recognition method based on multi-task deep learning provided in this embodiment includes the following steps:
s101, acquiring a plurality of classroom student images, automatically identifying the positions of students in the images, and labeling emotion state labels, head-up state labels and sight line direction labels of the students to obtain an emotion data set of the classroom students.
Image samples of students in class are collected for a plurality of classes, and the pixel position $(x_1, y_1, x_2, y_2)$ of each student is annotated in the image through a target detection model, where $(x_1, y_1)$ is the upper-left corner coordinate point of the student image and $(x_2, y_2)$ is the lower-right corner coordinate point. The target detection model adopted in this embodiment is the YOLOv7 model, which is prior art.
Each student is annotated with an emotion label, comprising 4 types: calm, engaged, confused and bored; a head-up state label, comprising 2 types: head-up and head-down; and a sight direction label, comprising 4 types: direct view, left, right and down.
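For concreteness, the following is a minimal sketch of how such annotations might be encoded for training; the class orderings and the function name are illustrative assumptions, since the patent fixes only the label taxonomy (4 emotion classes, 2 head-up classes, 4 sight direction classes).

```python
# Illustrative encodings for the three annotation tasks. The class orderings and
# names are assumptions for demonstration; the patent fixes only the number of
# classes (4 emotions, 2 head-up states, 4 sight directions).
EMOTION_CLASSES = ["calm", "engaged", "confused", "bored"]
HEAD_UP_CLASSES = ["head_up", "head_down"]
GAZE_CLASSES = ["direct", "left", "right", "down"]

def encode_annotation(emotion: str, head_state: str, gaze: str) -> tuple[int, int, int]:
    """Map one student's string labels to the integer class indices used in training."""
    return (EMOTION_CLASSES.index(emotion),
            HEAD_UP_CLASSES.index(head_state),
            GAZE_CLASSES.index(gaze))

# Example: a student annotated as engaged, head up, looking straight ahead.
assert encode_annotation("engaged", "head_up", "direct") == (1, 0, 0)
```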
S102, constructing a multi-task deep learning network model.
As shown in Fig. 2, the multi-task deep learning network model includes an image characterization learning network, an image reconstruction network, and a multi-task label generation network.
Further, step S102 includes:
(1) The image characterization learning network.
Before feature extraction is performed with the image characterization learning network, the classroom student images are preprocessed as follows:
a sub-image is cropped out according to the pixel coordinate position $(x_1, y_1, x_2, y_2)$ of the student in the classroom student image, and the sub-image is then enhanced using image enhancement to obtain the input image $X$.
The image enhancement methods adopted in this embodiment include: randomly changing the brightness, contrast and saturation of the image; random horizontal flipping; random conversion to grayscale; and random Gaussian blur.
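The following is a minimal preprocessing sketch, assuming PIL images and torchvision transforms; the 48x48 target size follows the experimental setup in Embodiment 2, and the augmentation parameters are illustrative rather than values fixed by the patent.

```python
import torch
from PIL import Image
from torchvision import transforms

# A minimal preprocessing sketch, assuming PIL images and torchvision transforms.
# The 48x48 size follows the experimental setup in Embodiment 2; the augmentation
# parameters below are illustrative, not values fixed by the patent.
augment = transforms.Compose([
    transforms.Resize((48, 48)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomGrayscale(p=0.2),                         # random conversion to grayscale
    transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),  # random Gaussian blur
    transforms.ToTensor(),
])

def crop_and_enhance(frame: Image.Image, box: tuple[int, int, int, int]) -> torch.Tensor:
    """Crop one student's sub-image from the classroom frame using the detector's
    (x1, y1, x2, y2) box, then apply the enhancements to obtain the input image X."""
    x1, y1, x2, y2 = box
    return augment(frame.crop((x1, y1, x2, y2)))
```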
The image characterization learning network may employ different convolutional neural networks.
Specifically, in one embodiment a VGG19 convolutional neural network structure is employed. VGG19 comprises 16 convolutional layers, 5 pooling layers and 3 fully connected layers; the convolutional layers all use 3x3 convolution kernels, the pooling layers have a stride of 2, and the fully connected layers map the final feature map to a classification result vector. The image characterization learning network adopts only the backbone formed by the VGG19 convolutional and pooling layers.
Specifically, in one embodiment a ResNet18 convolutional neural network structure is employed. ResNet18 comprises 17 convolutional layers and one fully connected layer, where a convolutional layer, a BN layer and a ReLU activation function form a basic convolution block. ResNet18 adopts residual connections, which add the feature map output by a convolution block to the feature map input to it; this residual method effectively alleviates the network degradation problem. The image characterization learning network uses only the backbone formed by the ResNet18 convolutional layers.
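A sketch of the characterization network under the ResNet18 variant is given below; keeping torchvision's ResNet18 up to and including its global average pooling, and the resulting 512-dimensional characterization vector, are assumptions consistent with the description above.

```python
import torch
import torch.nn as nn
from torchvision import models

# A sketch of the image characterization learning network under the ResNet18
# variant: only the convolutional backbone and global average pooling are kept
# (the classification FC layer is dropped), and the pooled feature map is
# flattened into the characterization vector R (512-dimensional here).
class CharacterizationNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # trained from scratch on the emotion data
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.features(x)        # (B, 512, 1, 1) after global average pooling
        return torch.flatten(r, 1)  # characterization vector R, shape (B, 512)
```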
The image characterization learning network extracts features from the input image $X$, mapping the image data $X$ to a characterization vector $R$; the characterization vector is taken as the representation of the image $X$ in the feature vector space.
(2) An image reconstruction network.
The image reconstruction network performs image reconstruction on the characterization vector $R$, mapping the characterization vector $R$ to image data $\hat{X}$ with the same width and height as the input image data $X$.
The image reconstruction network is a deconvolutional neural network formed by alternating deconvolution layers and activation functions; deconvolution can output images of progressively higher resolution until the output image size is the same as that of the real image.
In this embodiment, the deconvolutional neural network employs 5 deconvolution layers; each deconvolution layer uses a 3x3 convolution kernel with a stride of 2, and each is followed by a BN layer and a LeakyReLU activation function. Each deconvolution operation doubles the width and height of the feature map, until it is enlarged to the same width and height as the original input image $X$.
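The following sketch illustrates such a deconvolutional reconstruction network; the channel schedule and the reshaping of the characterization vector into a 2x2 seed map are assumptions (with a 2x2 seed, five stride-2 stages yield a 64x64 output, so a final resize or crop to the network's input resolution, e.g. 48x48, may be needed in practice).

```python
import torch
import torch.nn as nn

# A minimal sketch of the deconvolutional reconstruction network: five
# ConvTranspose2d layers (3x3 kernels, stride 2), each followed by BN and
# LeakyReLU, so each stage doubles the spatial size. The channel schedule and
# the 2x2 seed map are illustrative assumptions.
class ReconstructionNet(nn.Module):
    def __init__(self, r_dim: int = 512, out_channels: int = 3):
        super().__init__()
        self.seed_c = r_dim // 4                     # 512 -> a (128, 2, 2) seed map
        chans = [self.seed_c, 128, 64, 32, 16]
        blocks = []
        for i in range(5):
            in_c = chans[i]
            out_c = chans[i + 1] if i < 4 else out_channels
            blocks += [
                nn.ConvTranspose2d(in_c, out_c, kernel_size=3, stride=2,
                                   padding=1, output_padding=1),  # doubles H and W
                nn.BatchNorm2d(out_c),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        self.deconv = nn.Sequential(*blocks)

    def forward(self, r: torch.Tensor) -> torch.Tensor:
        x = r.view(r.size(0), self.seed_c, 2, 2)     # reshape R into a seed feature map
        return self.deconv(x)                        # reconstructed image, (B, 3, 64, 64)
```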
(3) The multi-task label generation network.
The multi-task label generation network comprises an emotion label generation network, a head-up state label generation network and a sight direction label generation network.
(3-1) The emotion label generation network.
The emotion label generation network comprises an emotion-specific domain feature learning network, an FC (fully connected) layer and a Softmax layer. The emotion-specific domain feature learning network adopts a Transformer layer comprising 1 self-attention (SelfAttention) layer, 2 layer normalizations, 2 linear layers and 1 linear rectification function (ReLU layer), as shown in Fig. 3. The SelfAttention layer extracts features related to the specific domain through the self-attention mechanism; layer normalization normalizes the features; and the Linear and ReLU layers apply nonlinear transformations to the features extracted by the self-attention layer to extract the emotion-specific domain features.
The characterization vector $R$ is input into the emotion-specific domain feature learning network, which maps the characterization vector $R$ to the emotion-specific domain feature vector $F_e$; $F_e$ then passes through the FC layer and the Softmax layer to generate the predicted emotion label $\hat{y}^{e}$, whose dimension is 4.
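A sketch of the emotion label generation network following the wiring described above (one self-attention layer, two layer normalizations, two linear layers, one ReLU, two residual additions, and the FC head) is given below; treating the characterization vector as a single-token sequence for nn.MultiheadAttention, along with the head count and hidden width, are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A sketch of the emotion label generation network: a Transformer-style block
# (self-attention, two residual additions, two layer norms, two linear layers,
# one ReLU) producing the emotion-specific domain feature F_e, followed by the
# FC head. Head count and hidden width are assumptions.
class EmotionLabelNet(nn.Module):
    def __init__(self, r_dim: int = 512, hidden: int = 1024, n_classes: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(r_dim, num_heads=4, batch_first=True)
        self.norm1 = nn.LayerNorm(r_dim)
        self.ff = nn.Sequential(nn.Linear(r_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, r_dim))
        self.norm2 = nn.LayerNorm(r_dim)
        self.fc = nn.Linear(r_dim, n_classes)         # FC layer of the label head

    def forward(self, r: torch.Tensor):
        x = r.unsqueeze(1)                            # (B, 1, r_dim): one-token sequence
        attn_out, _ = self.attn(x, x, x)
        h = self.norm1(x + attn_out)                  # first addition + layer normalization
        f = self.norm2(h + self.ff(h)).squeeze(1)     # second addition + layer norm -> F_e
        logits = self.fc(f)                           # Softmax is applied at the loss/inference stage
        return f, logits                              # logits.softmax(-1) gives the predicted label y^e
```

The head-up state and sight direction heads would reuse this structure with independent parameters and n_classes of 2 and 4 respectively.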
(3-2) The head-up state label generation network.
The head-up state label generation network comprises a head-up-state-specific domain feature learning network, an FC layer and a Softmax layer. The head-up-state-specific domain feature learning network has the same structure as that in the emotion label generation network, but uses independent network parameters that are not shared with other tasks.
The characterization vector $R$ is input into the head-up-state-specific domain feature learning network, which maps the characterization vector $R$ to the head-up state domain feature vector $F_h$; the predicted head-up state label $\hat{y}^{h}$ is then generated through the FC layer and the Softmax layer, and its dimension is 2.
(3-3) The sight direction label generation network.
The sight direction label generation network comprises a sight-specific domain feature learning network, an FC layer and a Softmax layer. The sight-specific domain feature learning network has the same structure as that in the emotion label generation network, but uses independent network parameters that are not shared with other tasks.
The characterization vector $R$ is input into the sight-specific domain feature learning network, which maps the characterization vector to the sight direction domain feature vector $F_s$; the predicted sight direction label $\hat{y}^{s}$ is then generated through the FC layer and the Softmax layer, and its dimension is 4.
The specific-domain feature vectors comprise the feature vectors $F_e$, $F_h$ and $F_s$.
S103, training a multi-task deep learning network model by using the student emotion data set.
The loss function of the multi-task deep learning network model comprises the loss function of the image reconstruction network and the loss functions of the multi-task label generation network, where:
the loss function $L_{rec}$ of the image reconstruction network is:

$L_{rec} = \| X - \hat{X} \|_2^2$

where $X$ is the preprocessed image and $\hat{X}$ is the image output by the image reconstruction network;
the loss functions of the multi-task label generation network comprise the loss function $L_e$ of the emotion label generation network, the loss function $L_h$ of the head-up state label generation network and the loss function $L_s$ of the sight direction label generation network, where:

$L_e = -\frac{1}{N} \sum_{i=1}^{N} y_i^{e} \log \hat{y}_i^{e}$

$L_h = -\frac{1}{N} \sum_{i=1}^{N} y_i^{h} \log \hat{y}_i^{h}$

$L_s = -\frac{1}{N} \sum_{i=1}^{N} y_i^{s} \log \hat{y}_i^{s}$

where $N$ is the number of students in the image sample; $y_i^{e}$, $y_i^{h}$ and $y_i^{s}$ are the annotated emotion label, head-up state label and sight direction label of the $i$-th student in the image sample; and $\hat{y}_i^{e}$, $\hat{y}_i^{h}$ and $\hat{y}_i^{s}$ are the predicted emotion label, predicted head-up state label and predicted sight direction label of the $i$-th student generated by the multi-task label generation network;
the loss function $L$ of the multi-task deep learning network model is then:

$L = L_{rec} + L_e + L_h + L_s$
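Assuming the reconstruction term is a mean-squared error and the label terms are cross-entropies over the Softmax outputs, as the formulas above suggest, the combined loss might be computed as follows (raw logits are passed to F.cross_entropy, which folds in the Softmax for numerical stability):

```python
import torch
import torch.nn.functional as F

# A sketch of the combined training loss L = L_rec + L_e + L_h + L_s, assuming
# a mean-squared-error reconstruction term and cross-entropy label terms summed
# with equal weights. Raw logits (pre-Softmax) are expected by F.cross_entropy.
def multitask_loss(x, x_rec, emo_logits, head_logits, gaze_logits,
                   emo_y, head_y, gaze_y) -> torch.Tensor:
    l_rec = F.mse_loss(x_rec, x)                # image reconstruction loss L_rec
    l_e = F.cross_entropy(emo_logits, emo_y)    # emotion label loss L_e
    l_h = F.cross_entropy(head_logits, head_y)  # head-up state label loss L_h
    l_s = F.cross_entropy(gaze_logits, gaze_y)  # sight direction label loss L_s
    return l_rec + l_e + l_h + l_s
```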
The process of training the multi-task deep learning network model mainly comprises the following steps: preprocessing the student sample images and inputting the preprocessed images into the network model; extracting features from the preprocessed image with the image characterization learning network to obtain the characterization vector $R$ of the image; processing the characterization vector $R$ through the emotion label generation network, the head-up state label generation network and the sight direction label generation network respectively, to obtain the predicted emotion label $\hat{y}^{e}$, the predicted head-up state label $\hat{y}^{h}$ and the predicted sight direction label $\hat{y}^{s}$; processing the characterization vector through the image reconstruction network to obtain reconstructed image data; and calculating the value of the loss function from the preprocessed image and the reconstructed image data, together with the predicted emotion label, predicted head-up state label, predicted sight direction label and the emotion state labels annotated in the image, then performing standard gradient back propagation to update the network weights.
S104, inputting the image to be recognized into the trained multi-task deep learning network model, and predicting the emotion state labels.
The position of each person in the image to be recognized is automatically identified with the target detection model; images of the individual persons are cropped from the image to be recognized according to their positions; image enhancement is performed on the cropped images to obtain enhanced images; and the enhanced images are input into the trained multi-task deep learning network model, which outputs each person's emotion state label.
To alleviate the problem that the feature vector generated by the image reconstruction network is highly correlated with one of the feature vectors generated in the multi-task label generation network and weakly correlated with the others, this embodiment adds a feature domain discrimination network to the multi-task deep learning network model to obtain a multi-task self-adversarial deep learning network model, as shown in Fig. 4.
The feature domain discrimination network is used to discriminate whether the characterization vector output by the image characterization learning network and a specific-domain feature vector output by a specific-domain feature learning network belong to the same specific domain.
As shown in Fig. 5, the feature domain discrimination network comprises three identical structures, each comprising 1 self-attention layer, 1 LayerNorm layer, 2 Linear layers, 1 batch normalization (BN) layer and 1 ReLU layer. The SelfAttention layer extracts related features from the feature pair input to the feature domain discrimination network through the self-attention mechanism; the features are normalized by the LayerNorm layer, undergo one nonlinear transformation through a Linear layer, the BN layer and the ReLU layer, and the discrimination result is finally output through a Linear layer.
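A sketch of one of the three identical discriminator branches is given below; the input width, head count and hidden size are assumptions, and a single logit is produced as the same-domain discrimination score.

```python
import torch
import torch.nn as nn

# A sketch of one of the three identical branches of the feature domain
# discrimination network: self-attention over the concatenated pair [R; F] with
# a residual addition, LayerNorm, then Linear -> BN -> ReLU -> Linear producing
# a single discrimination logit. The widths and head count are assumptions.
class DomainDiscriminator(nn.Module):
    def __init__(self, dim: int = 1024):  # dim = len(R) + len(F); F must match R's size
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Sequential(
            nn.Linear(dim, dim // 2),
            nn.BatchNorm1d(dim // 2),
            nn.ReLU(),
            nn.Linear(dim // 2, 1),  # discrimination logit: same specific domain or not
        )

    def forward(self, r: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([r, f], dim=-1).unsqueeze(1)  # concatenate [R; F] as one token
        attn_out, _ = self.attn(pair, pair, pair)      # self-attention over the pair
        h = self.norm(pair + attn_out).squeeze(1)      # residual addition + LayerNorm
        return self.head(h).squeeze(-1)                # higher score -> same domain
```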
The input of the feature domain discrimination network is obtained by concatenating the characterization vector with a specific-domain feature vector, giving $[R; F_e]$, $[R; F_h]$ and $[R; F_s]$. The concatenated feature vectors are processed by the three structures of the feature domain discrimination network, which output the discrimination results $\hat{d}^{e}$, $\hat{d}^{h}$ and $\hat{d}^{s}$, where $e$ denotes emotion, $h$ denotes head-up state and $s$ denotes sight direction. The discrimination results, combined with the feature domain discrimination truth labels $d^{e}$, $d^{h}$ and $d^{s}$, are used to calculate the loss function of the discrimination network portion, and the negative gradient is back-propagated to update the network weights.
The truth label $d^{j}$, which indicates whether the characterization vector and the feature vector generated in a specific-domain feature learning network belong to the same domain, is calculated as follows:

$d^{j} = \begin{cases} 1, & G_j(R) = G_j(F_j) \\ 0, & \text{otherwise} \end{cases}, \quad j \in \{e, h, s\}$

where $G_e$, $G_h$ and $G_s$ denote the FC layer and Softmax layer of the emotion label generation network, the head-up state label generation network and the sight direction label generation network respectively; $F_j$ is a specific-domain feature vector, and $F_j$ is required to have the same dimension as $R$. The characterization vector and the specific-domain feature vector are each input into the FC layer and Softmax layer in the multi-task label generation network; if the output generated label values are consistent, i.e. the prediction results of $G_j(R)$ and $G_j(F_j)$ are equal, the truth label is 1; otherwise the truth label is 0.
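A minimal sketch of this truth-label rule follows: the characterization vector and the specific-domain feature vector are pushed through the same FC + Softmax head of the corresponding label generation network, and the pair is labeled 1 only when the two predicted classes agree. The helper name head_fc is an assumption standing for that FC layer.

```python
import torch

# A minimal sketch of the truth-label rule above, assuming `head_fc` is the FC
# layer of the corresponding label generation network (emotion, head-up state
# or sight direction): d = 1 if R and F yield the same predicted class, else 0.
@torch.no_grad()
def domain_truth_label(r: torch.Tensor, f: torch.Tensor,
                       head_fc: torch.nn.Module) -> torch.Tensor:
    pred_r = head_fc(r).softmax(dim=-1).argmax(dim=-1)  # class predicted from R
    pred_f = head_fc(f).softmax(dim=-1).argmax(dim=-1)  # class predicted from F
    return (pred_r == pred_f).float()                   # truth label d per sample
```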
The image characterization learning network and the feature domain discrimination network form an adversarial network. The goal of the image characterization learning network is to output a common, domain-invariant characterization vector that confuses the feature domain discrimination network as far as possible; the goal of the feature domain discrimination network is to discriminate whether the domain-invariant feature vector and a feature vector generated in a specific-domain feature learning network belong to the same specific domain, finally outputting 1 or 0 to indicate yes and no respectively. Through self-adversarial learning, the image characterization learning network can output domain-invariant features, i.e. characterization vectors, which can be applied to specific tasks after being mapped to specific-domain features by the specific-domain feature learning networks. The self-adversarial learning method decouples the domain-invariant feature vector output by the image characterization learning network from the specific tasks. Compared with the traditional approach in which multiple tasks share the same characterization vector for task-specific prediction, the self-adversarial method effectively alleviates the image characterization learning network's preference for any one task.
The loss function of the feature domain discrimination network comprises the discrimination loss functions $L_d^{e}$, $L_d^{h}$ and $L_d^{s}$, where:

$L_d^{e} = -\frac{1}{N} \sum_{i=1}^{N} \left[ d_i^{e} \log \hat{d}_i^{e} + (1 - d_i^{e}) \log (1 - \hat{d}_i^{e}) \right]$

$L_d^{h} = -\frac{1}{N} \sum_{i=1}^{N} \left[ d_i^{h} \log \hat{d}_i^{h} + (1 - d_i^{h}) \log (1 - \hat{d}_i^{h}) \right]$

$L_d^{s} = -\frac{1}{N} \sum_{i=1}^{N} \left[ d_i^{s} \log \hat{d}_i^{s} + (1 - d_i^{s}) \log (1 - \hat{d}_i^{s}) \right]$

where $d_i^{e}$, $d_i^{h}$ and $d_i^{s}$ are the truth labels of whether the characterization vector $R$ of the $i$-th student and the specific-domain features $F_e$, $F_h$ and $F_s$ belong to the same domain, and $\hat{d}_i^{e}$, $\hat{d}_i^{h}$ and $\hat{d}_i^{s}$ are the corresponding discrimination results output by the feature domain discrimination network for the emotion domain feature vector, the head-up state domain feature vector and the sight direction domain feature vector.

The loss function $L_D$ of the feature domain discrimination network is:

$L_D = L_d^{e} + L_d^{h} + L_d^{s}$
the process of training the multi-task self-countermeasure deep learning network model comprises the following steps:
firstly, calculating a loss function of a multi-task deep learning network model, performing standard gradient back propagation, and updating network weights;
then, calculating a loss function of the feature domain identification network, carrying out negative gradient back propagation, and updating the network weight; and stopping model training when the loss value tends to be stable and converged, so as to obtain a trained multi-task self-countermeasure deep learning network model.
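A sketch of this two-phase update is given below, assuming separate PyTorch optimizers for the full model, the discriminators and the characterization (encoder) network; multitask_loss is the loss sketched earlier, while disc_loss_fn, the model's output signature and the encoder attribute are assumed names.

```python
import torch

# A sketch of the two-phase update, assuming: model(x) returns the characterization
# vector r, the reconstruction x_rec, the three head logits `preds`, and the three
# specific-domain feature vectors `feats`; `multitask_loss` is the loss sketched
# earlier; `disc_loss_fn` is an assumed helper computing the binary cross-entropy
# discrimination loss L_D; and `model.encoder` is the image characterization network.
def train_step(model, discriminators, opt_model, opt_disc, opt_enc, batch,
               disc_loss_fn):
    x, emo_y, head_y, gaze_y = batch

    # Phase 1: standard gradient back-propagation of the multi-task loss.
    r, x_rec, preds, feats = model(x)
    loss_task = multitask_loss(x, x_rec, *preds, emo_y, head_y, gaze_y)
    opt_model.zero_grad()
    loss_task.backward()
    opt_model.step()

    # Phase 2: discrimination loss. The positive gradient updates the
    # discriminators; the sign-flipped (negative) gradient updates the encoder,
    # realizing the self-adversarial objective.
    r, x_rec, preds, feats = model(x)
    loss_disc = disc_loss_fn(discriminators, r, feats)
    opt_disc.zero_grad()
    opt_enc.zero_grad()
    loss_disc.backward()
    opt_disc.step()                          # discriminators descend on L_D
    for p in model.encoder.parameters():
        if p.grad is not None:
            p.grad.neg_()                    # flip sign: encoder ascends on L_D
    opt_enc.step()
    return loss_task.item(), loss_disc.item()
```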
Those skilled in the art will appreciate that all or part of the steps in a method implementing the above embodiments may be implemented by a program to instruct related hardware, and the corresponding program may be stored in a computer readable storage medium.
It should be noted that although the method operations of the above embodiments are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all illustrated operations must be performed, to achieve desirable results. Rather, the depicted steps may be executed in a different order. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one, and/or one step may be decomposed into multiple steps.
Example 2:
In this embodiment, the effectiveness of the emotion recognition method provided in Embodiment 1 is evaluated in terms of emotion recognition accuracy.
First, to verify the effectiveness, this embodiment is compared with single-task algorithms on public data. Since the public data carry only emotion labels, this embodiment uses only the multi-task label generation network (emotion recognition task) and the image reconstruction network (image reconstruction task) to form the multi-task learning.
Table 1. Algorithm comparison on the CK+ dataset
(The table contents are provided as an image in the original publication.)
The F1 score is the harmonic mean of the model's precision and recall.
This embodiment was run on the CK+ dataset. The CK+ dataset was collected in a laboratory environment in 2010. It consists of 593 video sequences from 123 subjects, from which images with more pronounced expressions were selected for the experiment: 981 images in total, covering 7 basic facial expression categories. Since the CK+ dataset contains relatively few images, this embodiment uses ten-fold cross-validation. The data are partitioned into training and test sets at a 9:1 ratio, and the average accuracy and average F1 score over the ten folds are used as evaluation indexes, as shown in Table 1. The experimental results show that the performance of the network model improves after the additional task is added.
Table 2. Algorithm comparison on the FER2013 dataset
(The table contents are provided as an image in the original publication.)
This embodiment also performed experiments on the FER2013 dataset. FER2013 is a facial expression dataset introduced at ICML in 2013, comprising 35,886 facial expression images covering 7 basic expressions. Because the FER2013 images were crawled from the web, the faces differ greatly in pose, angle and age, and closely resemble the facial states found in real scenes, so the dataset is well suited to facial expression detection in real environments. In the experiments on FER2013, to better verify the advantages of the multi-task network model, the model's performance was finally verified on the FER2013 test set; the results are shown in Table 2. The experimental results show that adding the image reconstruction task further strengthens the network model's ability to learn expression features and improves its performance.
Then, to verify the effectiveness, this embodiment is compared with single-task algorithms on a self-built dataset. The training data consist of two parts: data collected online and data produced in-house. The produced part comprises various student pictures obtained from classroom video streams, 3000 pictures in total. The dataset is divided into training, validation and test sets at a ratio of 5:2.5:2.5, and accuracy and F1 score are used as evaluation indexes. Unlike the public datasets, the self-built dataset carries additional annotations for head-up state recognition and sight direction recognition. The implementation uses an Intel [email protected] processor, 16 GB RAM, 12 GB video memory and a GeForce RTX 3060 graphics card; picture sizes are unified to 48x48 before input into the network. The initial learning rate is set to 0.01 and the number of iterations to 250; before each iteration, image data enhancement is applied to change the direction, angle, brightness, saturation, exposure, hue and so on of the pictures to generate new training pictures.
Table 3. Algorithm comparison on the self-built dataset
(The table contents are provided as an image in the original publication.)
In this embodiment, the VGG19 and ResNet18 algorithms are first fully trained on the self-built dataset to obtain optimal models, which are then tested on the test set data; the final performance indexes are shown in Table 3. Then, using the additional annotation data, this embodiment constructs the multi-task deep learning network model and the multi-task self-adversarial deep learning network model respectively, fully trains them on the self-built dataset, and tests the optimal models on the test set data. The accuracy and F1 score of the final multi-task self-adversarial method are improved over both the single-task and multi-task learning methods, showing that the method proposed in this embodiment is effective.
Example 3:
as shown in fig. 6, the present embodiment provides an emotion recognition system based on multi-task deep learning, which includes a data set acquisition module 601, a multi-task deep learning network model construction module 602, a network model training module 603, and an emotion state label recognition module 604, wherein:
a data set acquisition module 601, configured to acquire an emotion data set; the emotion data set comprises a plurality of classroom student images, and emotion state labels of all students are marked in the classroom student images;
A multi-task deep learning network model construction module 602, configured to construct a multi-task deep learning network model; the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features of the classroom student images by using the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vector by using the image reconstruction network; predicting an emotion state label according to the characterization vector by utilizing the multi-task label generating network;
the network model training module 603 is configured to train a multi-task deep learning network model by using the emotion data set, calculate a loss function, perform standard gradient back propagation, and update the weight of the network model;
the emotion state label recognition module 604 is configured to input an image to be recognized into the trained multi-task deep learning network model, and predict an emotion state label.
For the specific implementation of each module in this embodiment, reference may be made to Embodiment 1 above, which is not repeated here. It should be noted that the system provided in this embodiment is illustrated only by the division of the above functional modules; in practical applications, the above functions may be allocated to different functional modules as needed, i.e. the internal structure may be divided into different functional modules to perform all or part of the functions described above.
Example 4:
This embodiment provides a computer device, which may be a computer. As shown in Fig. 7, its components are connected through a system bus 701. The processor 702 provides computing and control capabilities; the memory comprises a nonvolatile storage medium 706 and an internal memory 707, where the nonvolatile storage medium 706 stores an operating system, a computer program and a database, and the internal memory 707 provides an environment for running the operating system and computer program in the nonvolatile storage medium. When the processor 702 executes the computer program stored in the memory, the emotion recognition method of Embodiment 1 above is implemented as follows:
acquiring an emotion data set; the emotion data set comprises a plurality of classroom student images, and emotion state labels of all students are marked in the classroom student images;
constructing a multi-task deep learning network model; the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features of the classroom student images by using the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vector by using the image reconstruction network; predicting an emotion state label according to the characterization vector by utilizing the multi-task label generating network;
training the multi-task deep learning network model with the emotion data set, calculating the loss function, performing standard gradient back propagation, and updating the weights of the network model;
inputting the image to be recognized into the trained multi-task deep learning network model, and predicting the emotion state label.
Example 5:
the present embodiment provides a storage medium, which is a computer-readable storage medium storing a computer program that, when executed by a processor, implements the emotion recognition method of embodiment 1 described above, as follows:
acquiring an emotion data set; the emotion data set comprises a plurality of classroom student images, and emotion state labels of all students are marked in the classroom student images;
constructing a multi-task deep learning network model; the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features of the classroom student images by using the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vector by using the image reconstruction network; predicting an emotion state label according to the characterization vector by utilizing the multi-task label generating network;
training the multi-task deep learning network model with the emotion data set, calculating the loss function, performing standard gradient back propagation, and updating the weights of the network model;
inputting the image to be recognized into the trained multi-task deep learning network model, and predicting the emotion state label.
The computer readable storage medium of the present embodiment may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above-mentioned embodiments are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art according to the technical solution and inventive concept of the present invention, within the scope disclosed by the present invention, falls within the protection scope of the present invention.

Claims (10)

1. An emotion recognition method based on multi-task deep learning, characterized in that the method comprises the following steps:
acquiring an emotion data set; the emotion data set comprises a plurality of classroom student images, and emotion state labels of all students are marked in the classroom student images;
constructing a multi-task deep learning network model; the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; extracting features of the classroom student images by using the image characterization learning network to obtain characterization vectors; performing image reconstruction on the characterization vector by using the image reconstruction network; predicting an emotion state label according to the characterization vector by utilizing the multi-task label generating network;
training a multi-task deep learning network model by using the emotion data set, calculating a loss function, carrying out standard gradient back propagation, and updating the weight of the network model;
inputting the image to be recognized into the trained multi-task deep learning network model, and predicting the emotion state label.
2. The emotion recognition method according to claim 1, wherein a feature domain discrimination network is added to the multi-task deep learning network model to obtain a multi-task self-adversarial deep learning network model; the feature domain discrimination network and the image characterization learning network form an adversarial network;
and the loss function of the feature domain discrimination network is calculated, negative gradient back propagation is performed, and the weights of the network model are updated.
3. The emotion recognition method according to claim 2, wherein a truth label is obtained from the characterization vector and the specific-domain feature vectors generated by the multi-task label generation network; the specific-domain feature vectors comprise an emotion domain feature vector, a head-up state domain feature vector and a sight direction domain feature vector;
the characterization vector and a specific-domain feature vector are input into the feature domain discrimination network to obtain a discrimination result;
the loss function of the feature domain discrimination network is calculated from the truth label and the discrimination result; and the weights of the image characterization learning network are optimized according to the calculated value of the loss function.
4. The emotion recognition method of claim 3, wherein the feature domain discrimination network comprises a self-attention layer, layer normalization, linear layers, batch normalization and a ReLU layer;
the discrimination results comprise an emotion feature domain discrimination result, a head-up state feature domain discrimination result and a sight direction feature domain discrimination result;
inputting the characterization vector and a specific-domain feature vector into the feature domain discrimination network to obtain a discrimination result comprises the following steps:
concatenating the characterization vector and the emotion domain feature vector, inputting the concatenated vector into the self-attention layer, and extracting associated features through the self-attention mechanism; adding the output of the self-attention layer to the concatenated vector, and processing the summed vector sequentially through a layer normalization layer, a linear layer, a batch normalization layer, a ReLU layer and a linear layer to obtain the emotion feature domain discrimination result;
similarly, concatenating the characterization vector and the head-up state domain feature vector and inputting the concatenated vector into the feature domain discrimination network to obtain the head-up state feature domain discrimination result;
and similarly, concatenating the characterization vector and the sight direction domain feature vector and inputting the concatenated vector into the feature domain discrimination network to obtain the sight direction feature domain discrimination result.
5. The emotion recognition method according to claim 3, wherein the multi-task label generation network comprises an emotion label generation network, a head-up state label generation network and a gaze direction label generation network;
obtaining the truth label according to the characterization vector and the specific domain feature vector generated by the multi-task label generation network comprises the following steps:
processing the characterization vector and the emotion domain feature vector separately, each sequentially through the FC layer and the Softmax layer of the emotion label generation network, to obtain two results; if the two results are consistent, the characterization vector and the emotion domain feature vector belong to the same specific domain and the truth label is 1; otherwise they do not belong to the same specific domain and the truth label is 0;
similarly, processing the characterization vector and the head-up state domain feature vector separately through the FC layer and the Softmax layer of the head-up state label generation network, and obtaining the value of the truth label from the two results;
and similarly, processing the characterization vector and the gaze direction domain feature vector separately through the FC layer and the Softmax layer of the gaze direction label generation network, and obtaining the value of the truth label from the two results.
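A compact sketch of this truth-label rule, where label_head stands for any FC + Softmax head of the kind recited in claim 7, and "consistent" is read as the two heads assigning the same argmax class:

import torch

def truth_label(label_head, z, d):
    # 1 when both vectors are assigned the same class (same specific domain),
    # 0 otherwise, per claim 5; no gradient is needed for the label itself.
    with torch.no_grad():
        same = label_head(z).argmax(dim=-1) == label_head(d).argmax(dim=-1)
    return same.float()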
6. The emotion recognition method according to claim 5, wherein the emotion label generation network comprises an emotion specific domain feature learning network, which comprises a self-attention layer, layer normalization, linear layers and a ReLU layer;
generating the specific domain feature vector by the multi-task label generation network comprises the following steps:
inputting the characterization vector into the self-attention layer; performing a first addition of the output of the self-attention layer and the characterization vector; processing the result of the first addition sequentially through layer normalization, a linear layer, a ReLU layer and a linear layer; performing a second addition of the processed vector and the output of the layer normalization; and applying layer normalization to the result of the second addition to obtain the emotion domain feature vector;
the head-up state label generation network and the gaze direction label generation network have the same structure as the emotion label generation network, and the head-up state domain feature vector and the gaze direction domain feature vector are obtained by the same process as the emotion domain feature vector.
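This specific domain feature learning network reads like a transformer block; a sketch under that interpretation (dimension and head count assumed):

import torch.nn as nn

class DomainFeatureLearner(nn.Module):
    """Sketch of claim 6: self-attention, a first residual add, layer
    normalization, linear -> ReLU -> linear, a second add with the normalized
    vector, and a final layer normalization."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, z):
        s = z.unsqueeze(1)                  # length-1 sequence for attention
        a, _ = self.attn(s, s, s)
        h = self.norm1(a.squeeze(1) + z)    # first add, then layer normalization
        return self.norm2(self.ff(h) + h)   # second add with the normalized vector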
7. The emotion recognition method according to claim 6, wherein the predicted emotion state labels comprise a predicted emotion label, a predicted head-up state label and a predicted gaze direction label;
processing the emotion domain feature vector sequentially through the FC layer and the Softmax layer of the emotion label generation network to obtain the predicted emotion label;
processing the head-up state domain feature vector sequentially through the FC layer and the Softmax layer of the head-up state label generation network to obtain the predicted head-up state label;
and processing the gaze direction domain feature vector sequentially through the FC layer and the Softmax layer of the gaze direction label generation network to obtain the predicted gaze direction label.
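A minimal sketch of such an FC + Softmax head; the class counts in the usage lines are illustrative assumptions, since the claims do not enumerate the label sets:

import torch
import torch.nn as nn

class LabelHead(nn.Module):
    """FC + Softmax prediction head of claim 7 (sketch)."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, d):
        return torch.softmax(self.fc(d), dim=-1)   # class probabilities

emotion_head = LabelHead(128, 7)    # e.g. seven emotion categories (assumed)
head_up_head = LabelHead(128, 2)    # head up / head down
gaze_head = LabelHead(128, 9)       # e.g. nine gaze direction bins (assumed)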
8. The emotion recognition method according to any one of claims 1 to 7, wherein, before the image characterization learning network performs feature extraction on the classroom student images, the classroom student images are preprocessed as follows:
automatically locating the students in each classroom student image by using a target detection model;
cropping images of the individual students from the classroom student image according to the detected positions;
and performing image enhancement on the cropped images to obtain enhanced images.
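An illustrative preprocessing pipeline, substituting torchvision's Faster R-CNN person detector for the unspecified target detection model and a simple resize-and-jitter transform for the unspecified image enhancement; every concrete choice here (detector, input size, thresholds) is an assumption:

import torch
import torchvision
from torchvision import transforms

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

enhance = transforms.Compose([            # illustrative enhancement pipeline
    transforms.ToPILImage(),
    transforms.Resize((112, 112)),        # assumed network input size
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

def preprocess(classroom_image):          # (3, H, W) float tensor in [0, 1]
    with torch.no_grad():
        det = detector([classroom_image])[0]
    crops = []
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if label.item() == 1 and score.item() > 0.8:   # COCO class 1 = person
            x1, y1, x2, y2 = box.int().tolist()
            crops.append(enhance(classroom_image[:, y1:y2, x1:x2]))
    return crops                           # one enhanced image per student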
9. The emotion recognition method according to claim 8, wherein the image reconstruction network is a deconvolutional neural network consisting of deconvolution layers alternating with activation functions;
the characterization vector is input into the image reconstruction network, and the output reconstructed image has the same size as the enhanced image.
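A sketch of such a deconvolutional reconstruction network, assuming a 128-dimensional characterization vector and a 112x112 enhanced image (both assumptions):

import torch.nn as nn

class ReconstructionDecoder(nn.Module):
    """Deconvolutional network of claim 9 (sketch): ConvTranspose2d layers
    alternating with activations, upsampling back to the enhanced-image size."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(dim, 256 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 14x14
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 28x28
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 56x56
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 112x112
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 256, 7, 7))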
10. An emotion recognition system based on multi-task deep learning, the system comprising:
a data set acquisition module, used for acquiring an emotion data set, wherein the emotion data set comprises a plurality of classroom student images in which the emotion state labels of all students are marked;
a multi-task deep learning network model construction module, used for constructing a multi-task deep learning network model, wherein the multi-task deep learning network model comprises an image characterization learning network, an image reconstruction network and a multi-task label generation network; the image characterization learning network extracts features from the classroom student images to obtain a characterization vector; the image reconstruction network reconstructs the image from the characterization vector; and the multi-task label generation network predicts an emotion state label from the characterization vector;
a network model training module, used for training the multi-task deep learning network model with the emotion data set, calculating a loss function, performing standard gradient back-propagation, and updating the weights of the network model;
and an emotion state label recognition module, used for inputting the image to be recognized into the trained multi-task deep learning network model and predicting its emotion state label.
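Finally, a hypothetical training step tying the previous sketches together: the reconstruction and label losses receive standard gradients, while the discrimination loss reaches the characterization network through grad_reverse, matching the standard/negative gradient split of claims 1 and 2. All module and dictionary names refer to the earlier sketches and are assumptions:

import torch
import torch.nn.functional as F

def train_step(images, targets, encoder, decoder, learners, heads, discs, optimizer):
    # targets: dict of class indices per task; learners/heads/discs: dicts of
    # DomainFeatureLearner, LabelHead and FeatureDomainDiscriminator modules.
    z = encoder(images)
    loss = F.mse_loss(decoder(z), images)                    # reconstruction loss
    for task in ("emotion", "head_up", "gaze"):
        d_feat = learners[task](z)
        probs = heads[task](d_feat)                          # FC + Softmax output
        loss = loss + F.nll_loss(torch.log(probs.clamp_min(1e-8)), targets[task])
        truth = truth_label(heads[task], z, d_feat)          # claim 5 truth label
        loss = loss + discrimination_loss(z, d_feat, truth, discs[task])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()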
CN202310165454.3A 2023-02-27 2023-02-27 Emotion recognition method, system and storage medium based on multitask deep learning Active CN116030526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310165454.3A CN116030526B (en) 2023-02-27 2023-02-27 Emotion recognition method, system and storage medium based on multitask deep learning


Publications (2)

Publication Number Publication Date
CN116030526A 2023-04-28
CN116030526B 2023-08-15

Family

ID=86074155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310165454.3A Active CN116030526B (en) 2023-02-27 2023-02-27 Emotion recognition method, system and storage medium based on multitask deep learning

Country Status (1)

Country Link
CN (1) CN116030526B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017177830A (en) * 2016-03-28 2017-10-05 株式会社電通 Expression recording system
CN110263822A (en) * 2019-05-29 2019-09-20 广东工业大学 A kind of Image emotional semantic analysis method based on multi-task learning mode
CN112613552A (en) * 2020-12-18 2021-04-06 北京工业大学 Convolutional neural network emotion image classification method combining emotion category attention loss
CN112784798A (en) * 2021-02-01 2021-05-11 东南大学 Multi-modal emotion recognition method based on feature-time attention mechanism
CN114927144A (en) * 2022-05-19 2022-08-19 南京工业大学 Voice emotion recognition method based on attention mechanism and multi-task learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Shangwang; LIU Chengwei; ZHANG Aili: "Real-time facial expression and gender classification based on depthwise separable convolutional neural networks", Journal of Computer Applications, no. 04, pages 990-995 *


Similar Documents

Publication Publication Date Title
Yang et al. MTD-Net: Learning to detect deepfakes images by multi-scale texture difference
Xuan et al. Cross-modal attention network for temporal inconsistent audio-visual event localization
Ahmad et al. A novel deep learning-based online proctoring system using face recognition, eye blinking, and object detection techniques
Tang et al. Automatic facial expression analysis of students in teaching environments
Yang et al. Student in-class behaviors detection and analysis system based on CBAM-YOLOv5
CN116522212B (en) Lie detection method, device, equipment and medium based on image text fusion
Bühler et al. Automated hand-raising detection in classroom videos: A view-invariant and occlusion-robust machine learning approach
CN116030526B (en) Emotion recognition method, system and storage medium based on multitask deep learning
CN116127350A (en) Learning concentration monitoring method based on Transformer network
Moreira et al. Neuromorphic event-based face identity recognition
CN115659221A (en) Teaching quality assessment method and device and computer readable storage medium
CN115116117A (en) Learning input data acquisition method based on multi-mode fusion network
Naqvi et al. Advancements in Facial Expression-Based Automatic Emotion Identification Using Deep Learning
Ishrak et al. Explainable Deepfake Video Detection using Convolutional Neural Network and CapsuleNet
Zhao et al. A novel dataset based on indoor teacher-student interactive mode using AIoT
Anekar et al. Exploring Emotion and Sentiment Landscape of Depression: A Multimodal Analysis Approach
Agnihotri DeepFake Detection using Deep Neural Networks
Shou et al. A Method for Analyzing Learning Sentiment Based on Classroom Time‐Series Images
Sukamto et al. Learners mood detection using Convolutional Neural Network (CNN)
Shen Analysis and Research on the Characteristics of Modern English Classroom Learners’ Concentration Based on Deep Learning
Wu et al. The Application and Optimization of Deep Learning in Recognizing Student Learning Emotions.
Patil et al. Predicting Engagement and Boredom in Online Video Lectures
Sowjanya et al. Decoding Student Emotions: An Advanced CNN Approach for Behavior Analysis Application using Uniform Local Binary Pattern
Alkhalisy et al. Abnormal Behavior Detection in Online Exams Using Deep Learning and Data Augmentation Techniques.
Abdelkawy et al. Measuring Student Behavioral Engagement using Histogram of Actions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant