CN111985243B - Emotion model training method, emotion analysis device and storage medium - Google Patents

Emotion model training method, emotion analysis device and storage medium

Info

Publication number
CN111985243B
Authority
CN
China
Prior art keywords
emotion
data
generator
output
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910436077.6A
Other languages
Chinese (zh)
Other versions
CN111985243A (en)
Inventor
柳圆圆
李家乐
闫兴安
汤煜
曹彬
何威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Suzhou Software Technology Co Ltd
Priority to CN201910436077.6A
Publication of CN111985243A
Application granted
Publication of CN111985243B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an emotion model training method, an emotion analysis method and device, and a storage medium. The method comprises the following steps: inputting sample data to a generator and outputting emotion characteristics, wherein the sample data comprises text data and/or picture data; taking the emotion characteristics as the input of a discriminator and the data type as the output of the discriminator, training the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics; taking the emotion characteristics as the input of an emotion classifier and the emotion type as the output of the emotion classifier, training the ability of the emotion classifier to output the corresponding emotion type from the input emotion characteristics; and taking the sample data as the input of the generator, the emotion characteristics as the output of the generator, and the data type and the emotion type as adjustment parameters of the generator, training the ability of the generator to output the emotion characteristics of the input data. By applying the scheme of the invention, emotion analysis can be performed on two different types of data.

Description

Emotion model training method, emotion analysis device and storage medium
Technical Field
The present invention relates to emotion analysis technologies, and in particular, to a training method for emotion models, an emotion analysis method, an emotion analysis device, and a storage medium.
Background
Emotion analysis was first applied in the field of text processing: it classifies the emotion polarity of text carrying subjective emotional tendencies, with positive classes such as happiness and pleasure and negative classes such as sadness, depression and anger, and it plays an important role in information retrieval and recommendation systems. With the development of internet technology, more and more kinds of data are available for research, and the scope of emotion analysis has gradually expanded to data of multiple modes such as pictures, voice and video. For example, emotion analysis of pictures mainly observes the emotion polarity evoked when people see a picture: if viewers feel happiness or other positive emotions, the emotion conveyed by the picture is positive; otherwise, the emotion conveyed by the picture is negative. In the related art, different emotion analysis methods are adopted for different kinds of data, and no method exists that can perform emotion analysis on two different kinds of data.
Disclosure of Invention
The embodiment of the invention provides an emotion model training method, an emotion analysis method and device, and a storage medium, which can perform emotion classification on text as well as on pictures.
The technical scheme of the embodiment of the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a training method for an emotion model, where the emotion model includes a generator and an emotion classifier, and the method includes:
inputting sample data to the generator, and outputting emotion characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, training the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics;
taking the emotion characteristics output by the generator as the input of the emotion classifier and the emotion type corresponding to the emotion characteristics as the output of the emotion classifier, training the ability of the emotion classifier to output the corresponding emotion type from the input emotion characteristics;
and taking the sample data as the input of the generator, the emotion characteristics as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjustment parameters of the generator, training the ability of the generator to output the emotion characteristics of the input data.
In the above aspect, before the inputting the sample data to the generator, the method further includes:
obtaining vectors corresponding to source data, wherein the source data comprises picture source data and/or text source data;
and carrying out truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
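As a minimal illustrative sketch (the patent does not provide code), the truncation or filling step that normalizes a source vector to the preset data size might look like the following; tail-side zero-padding is an assumption:

```python
import numpy as np

def pad_or_truncate(vec, target_len, pad_value=0.0):
    """Truncate or fill a 1-D vector so it has the preset data size."""
    vec = np.asarray(vec, dtype=np.float32)
    if vec.shape[0] >= target_len:
        return vec[:target_len]                      # truncation processing
    pad = np.full(target_len - vec.shape[0], pad_value, dtype=np.float32)
    return np.concatenate([vec, pad])                # filling processing
```

The same idea extends to per-word or per-pixel vectors when the sample tensor is assembled.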
In the above scheme, the inputting the sample data to the generator and outputting the emotion feature corresponding to the sample data includes:
when the input sample data is text data, obtaining a word vector sequence corresponding to the text data by using a neural network model of the generator;
assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator;
and carrying out weighted summation on the word vectors in the word vector sequence and the assigned attention probabilities to obtain a text vector corresponding to the text data.
In the above scheme, the inputting the sample data to the generator and outputting the emotion feature corresponding to the sample data includes:
when the input sample data is picture data, obtaining a plurality of feature vectors corresponding to the picture data by using a neural network model of the generator;
assigning attention probabilities to the plurality of feature vectors using an attention model of the generator;
and carrying out weighted summation on each feature vector and the assigned attention probability to obtain a picture vector corresponding to the picture data.
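Both the text branch and the picture branch above use the same attention-weighted pooling. A minimal numpy sketch follows; scoring each vector by a dot product with a learned query vector is an assumption, since the patent does not specify how the attention probabilities are computed:

```python
import numpy as np

def softmax(scores):
    """Normalize scores into attention probabilities that sum to 1."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def attention_pool(vectors, query):
    """vectors: (m, n) word vectors or picture feature vectors; query: (n,).

    Returns the attention-weighted sum of the vectors, i.e. the text vector
    or picture vector output by the generator's attention model."""
    scores = vectors @ query        # one score per word/feature vector
    probs = softmax(scores)         # assigned attention probabilities
    return probs @ vectors          # weighted summation -> (n,) vector
```

The pooled vector has the same dimension regardless of whether the input was a word vector sequence or picture feature vectors, which is what lets one discriminator and one emotion classifier consume both.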
In the above solution, taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, the training of the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics includes:
initializing a first full connection layer and a second full connection layer which are included by the discriminator;
constructing a training sample set, wherein the training sample set comprises the emotion characteristics and the data types of the sample data output by the generator;
initializing a loss function constructed based on the discriminator input, the discriminator output, and the discriminator parameters;
and taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, training the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics.
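A numpy sketch of a discriminator with a first and a second fully connected layer, plus a binary cross-entropy loss over the data type. The layer sizes, tanh activation, and sigmoid output are illustrative assumptions, not details given by the patent:

```python
import numpy as np

rng = np.random.default_rng(42)

def init_discriminator(feat_dim, hidden_dim):
    """Initialize the first and second fully connected layers."""
    return {
        "W1": rng.normal(0.0, 0.1, (feat_dim, hidden_dim)), "b1": np.zeros(hidden_dim),
        "W2": rng.normal(0.0, 0.1, (hidden_dim, 1)),        "b2": np.zeros(1),
    }

def discriminate(params, feat):
    """Map an emotion feature vector to P(data type = picture)."""
    h = np.tanh(feat @ params["W1"] + params["b1"])        # first fully connected layer
    logit = (h @ params["W2"] + params["b2"]).item()       # second fully connected layer
    return 1.0 / (1.0 + np.exp(-logit))                    # sigmoid over the data type

def type_loss(p, label):
    """Binary cross-entropy between predicted and true data type (0 = text, 1 = picture)."""
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))
```

Training would repeatedly run `discriminate` on generator outputs from the training sample set and descend on `type_loss`.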
In the above solution, taking the sample data as the input of the generator, the emotion characteristics as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjustment parameters of the generator, the training of the ability of the generator to output the emotion characteristics of the input data includes:
initializing an input layer, an intermediate layer and an output layer included in the generator;
constructing a training sample set, wherein the training sample set comprises sample data and emotion characteristics corresponding to the sample data;
initializing a loss function constructed based on an input of the generator, an output of the generator, and model training parameters including: the data type output by the discriminator and the emotion type output by the emotion classifier;
and training, through a gradient descent algorithm, the ability of the generator to output the emotion characteristics of the input data, with the sample data as input and the emotion characteristics as output.
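The adjustment parameters above can be sketched as a combined generator objective: minimize the emotion classification loss while making the discriminator unable to tell the data type apart (the min-max game). The additive form and the trade-off weight `lam` are assumptions for illustration; the patent does not give the exact loss:

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true emotion type."""
    return -np.log(probs[label])

def generator_objective(d_prob_true_type, emotion_probs, emotion_label, lam=1.0):
    """Loss the generator descends on by gradient descent.

    d_prob_true_type: the discriminator's probability for the TRUE data type;
    driving it down means the discriminator is fooled, which is rewarded."""
    emotion_term = cross_entropy(np.asarray(emotion_probs), emotion_label)
    adversarial_term = np.log(d_prob_true_type + 1e-12)   # lower when the discriminator fails
    return emotion_term + lam * adversarial_term
```

A fooled discriminator lowers the generator's loss, which is what pushes text features and picture features toward a shared, type-indistinguishable space.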
In a second aspect, an embodiment of the present invention provides an emotion analysis method based on an emotion model, including:
acquiring an emotion model obtained through training; wherein:
the emotion model comprises a generator and an emotion classifier;
the generator is trained by taking sample data as input of the generator and emotion characteristics as output of the generator;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as input of the emotion classifier and emotion types corresponding to the emotion characteristics as output of the emotion classifier;
inputting data to be analyzed to the generator obtained through training, and outputting emotion characteristics corresponding to the data to be analyzed;
and inputting the emotion characteristics output by the generator to the emotion classifier obtained through training, and outputting emotion types corresponding to the emotion characteristics.
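The two-stage inference path reduces to feature extraction followed by classification. In the sketch below, the callables are hypothetical stand-ins for the trained generator and emotion classifier, used only to show the data flow:

```python
def analyze_emotion(generator, emotion_classifier, data):
    """Run the trained pipeline: data -> emotion characteristics -> emotion type."""
    features = generator(data)             # same generator handles text or picture data
    return emotion_classifier(features)    # e.g. "positive" / "negative"

# toy stand-ins for illustration only (not the trained models)
toy_generator = lambda data: sum(data) / len(data)
toy_classifier = lambda feat: "positive" if feat > 0 else "negative"
```

Because the generator was trained adversarially against the data-type discriminator, the same `analyze_emotion` call serves both modalities.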
In a third aspect, an embodiment of the present invention provides a training apparatus for an emotion model, where the emotion model includes a generator and an emotion classifier, including:
the first output unit is used for inputting sample data to the generator and outputting emotion characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
the first training unit is used for taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, and training the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics;
the second training unit is used for taking the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier, and training the ability of the emotion classifier to output the corresponding emotion type from the input emotion characteristics;
the third training unit is used for taking the sample data as the input of the generator, the emotion characteristics as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjustment parameters of the generator, and training the ability of the generator to output the emotion characteristics of the input data.
In the above scheme, the device further includes: the preprocessing unit is used for acquiring vectors corresponding to source data, wherein the source data comprises picture source data and/or text source data; and carrying out truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
In the above scheme, the first output unit is specifically configured to obtain, when the input sample data is text data, a word vector sequence corresponding to the text data using a neural network model of the generator; assign an attention probability to each word vector in the word vector sequence using an attention model of the generator; and carry out weighted summation on the word vectors in the word vector sequence and the assigned attention probabilities to obtain a text vector corresponding to the text data;
and when the input sample data is picture data, obtain a plurality of feature vectors corresponding to the picture data using a neural network model of the generator; assign attention probabilities to the plurality of feature vectors using an attention model of the generator; and carry out weighted summation on each feature vector and the assigned attention probability to obtain a picture vector corresponding to the picture data.
In the above scheme, the first training unit is specifically configured to initialize a first full connection layer and a second full connection layer included in the discriminator; construct a training sample set, wherein the training sample set comprises the emotion characteristics output by the generator and the data types of the sample data; initialize a loss function constructed based on the discriminator input, the discriminator output, and the discriminator parameters; and, taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, train the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics.
In the above scheme, the third training unit is specifically configured to initialize an input layer, an intermediate layer and an output layer included in the generator; construct a training sample set, wherein the training sample set comprises sample data and emotion characteristics corresponding to the sample data; initialize a loss function constructed based on the input of the generator, the output of the generator, and model training parameters, the model training parameters including the data type output by the discriminator and the emotion type output by the emotion classifier; and train, through a gradient descent algorithm, the ability of the generator to output the emotion characteristics of the input data, with the sample data as input and the emotion characteristics as output.
In a fourth aspect, an embodiment of the present invention provides an emotion analysis device based on an emotion model, including:
the first acquisition unit is used for acquiring the emotion model obtained through training; wherein:
the emotion model comprises a generator and an emotion classifier;
the generator is trained by taking sample data as input of the generator and emotion characteristics as output of the generator;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as input of the emotion classifier and emotion types corresponding to the emotion characteristics as output of the emotion classifier;
the first input unit is used for inputting data to be analyzed to the generator obtained by training;
the second output unit is used for outputting emotion characteristics corresponding to the data to be analyzed;
the second input unit is used for inputting the emotion characteristics output by the generator to the emotion classifier obtained through training;
and the third output unit is used for outputting the emotion type corresponding to the emotion characteristics.
In a fifth aspect, an embodiment of the present invention provides a training apparatus for an emotion model, including: a memory and a processor; wherein:
the memory is used for storing a computer program capable of running on the processor;
the processor is used for executing the training method of the emotion model in the above scheme when running the computer program.
In a sixth aspect, an embodiment of the present invention provides a storage medium storing a computer program, where the computer program, when executed by at least one processor, implements the training method of the emotion model in the above scheme.
The embodiment of the invention provides a training method for an emotion model, where the emotion model comprises a generator and an emotion classifier, and the method comprises the following steps: inputting sample data to the generator, and outputting emotion characteristics corresponding to the sample data, wherein the sample data comprises text data and/or picture data; taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, training the ability of the discriminator to discriminate the corresponding data type from the input emotion characteristics; taking the emotion characteristics output by the generator as the input of the emotion classifier and the emotion type corresponding to the emotion characteristics as the output of the emotion classifier, training the ability of the emotion classifier to output the corresponding emotion type from the input emotion characteristics; and taking the sample data as the input of the generator, the emotion characteristics as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjustment parameters of the generator, training the ability of the generator to output the emotion characteristics of the input data.
In this way, in the embodiment of the invention, the data type output by the discriminator and the emotion type output by the emotion classifier are used as the adjustment parameters when training the generator's ability to output the emotion characteristics of the input data. The generator and the discriminator form a generative adversarial network, adjusted according to the idea of the min-max two-player game, which reduces the difference between picture data and text data and enables the generator to accurately extract the characteristics of both. The emotion model obtained by the training method provided by the embodiment of the invention can therefore accurately perform emotion analysis on the two kinds of data.
Drawings
FIG. 1 is a schematic diagram of an emotion analysis system architecture based on emotion models according to an embodiment of the present invention;
FIG. 2 is a schematic hardware structure of a training device for emotion models according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a training method of emotion models according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of text source data processing according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of picture source data processing according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a text vector according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a generator provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of a discriminator according to an embodiment of the present invention;
FIG. 9 is a schematic flow chart of an emotion analysis method based on an emotion model according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a composition structure of a training device for emotion models according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a composition structure of an emotion analysis device based on an emotion model according to an embodiment of the present invention.
Detailed Description
The present invention will be further described in detail with reference to the accompanying drawings in order to make the objects, technical solutions and advantages of the present invention more apparent. The described embodiments should not be construed as limiting the present invention; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third" and the like are merely used to distinguish similar objects and do not represent a specific ordering of the objects, it being understood that the "first", "second", "third" may be interchanged with a specific order or sequence, as permitted, to enable embodiments of the invention described herein to be practiced otherwise than as illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
In the related art, whether a traditional machine learning method or a deep learning method is used, a classifier is trained on a large amount of labeled picture data or text data, and emotion classification is then performed on the unlabeled data in the test set. That is, the prior art schemes are suitable only for picture data or only for text data; if a classifier trained on picture data is forcibly applied to text data, the accuracy of emotion analysis based on that model drops drastically. The reasons are as follows: on the one hand, pictures are two-dimensional data while text is one-dimensional data, so a large difference exists between the two; on the other hand, a direct connection is difficult to establish between the low-level visual features of pictures and high-level emotion semantics, leaving a huge semantic gap, while the expression of text is usually more ambiguous and the emotion polarity it conveys can be accurately judged only in combination with context. Feature extraction methods designed specifically for pictures or for text therefore often carry certain domain limitations, and their classification performance is usually poor on other kinds of data.
Based on the above, an embodiment of the invention provides a training method for an emotion model, in which the generator and the discriminator form a generative adversarial network and are adjusted using the idea of the min-max two-player game. This reduces the difference between picture data and text data and enables the generator to accurately extract the characteristics of both, so that the emotion model obtained by the training method provided by the embodiment of the invention can accurately perform emotion analysis on the two kinds of data.
First, the emotion analysis system based on the emotion model provided by the embodiment of the present invention is described. Referring to fig. 1, the emotion analysis system includes a terminal and/or a server; both the training method of the emotion model and the emotion analysis method based on the emotion model in the embodiment of the present invention may be implemented by the terminal alone, by the server alone, or by the terminal and the server in cooperation.
In one application scenario, an emotion analysis application (APP) is installed on the terminal and a training device for the emotion model is deployed on the server. The server trains the emotion model based on the training method provided by the embodiment of the invention; a user inputs text and/or pictures in the emotion analysis APP on the terminal, the APP submits them to the server, the emotion model on the server analyzes the submitted text and/or pictures, and the analysis result is returned to the terminal.
In another application scenario, the emotion analysis APP and a training device for the emotion model are both installed on the terminal. The terminal trains the emotion model based on the training method provided by the embodiment of the invention and then uses the trained emotion model to update the emotion analysis APP; when a user inputs text and/or a picture in the updated APP, the emotion model analyzes it and outputs the corresponding analysis result.
Next, the training device for emotion models and the emotion analysis device based on emotion models according to embodiments of the present invention are described. Both may be implemented in various forms: for example, by a terminal such as a smart phone, tablet computer or desktop computer alone, or cooperatively by a terminal and a server. Both devices may be implemented in hardware or in a combination of hardware and software. Taking the training device for emotion models as an example, its structure is described in detail below; it will be understood that fig. 2 shows only an exemplary structure, not the entire structure, and that part or all of the structure shown in fig. 2 may be implemented as needed.
The training device 100 for emotion models provided by the embodiment of the invention comprises: at least one processor 101, a memory 102, a user interface 103, and at least one network interface 104. The various components of training apparatus 100 for emotion models are coupled together via bus system 105. It is understood that the bus system 105 is used to enable connected communications between these components. The bus system 105 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 105 in fig. 2.
The user interface 103 may include, among other things, a display, keyboard, mouse, trackball, click wheel, keys, buttons, touch pad, or touch screen, etc.
It will be appreciated that the memory 102 may be either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory.
Memory 102 in embodiments of the present invention is used to store various types of data to support the operation of training apparatus 100 for emotion models. Examples of such data include: any executable instructions, such as executable instructions, for operation on the emotion model training device 100, a program implementing the emotion model training method of an embodiment of the present invention may be included in the executable instructions.
The training method of the emotion model disclosed by the embodiment of the invention can be applied to the processor 101 or realized by the processor 101. The processor 101 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the training method of emotion models may be accomplished by instructions in the form of integrated logic circuits of hardware or software in processor 101. The processor 101 may be a general purpose processor, a digital signal processor (DSP, Digital Signal Processor), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 101 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method provided by the embodiment of the invention can be directly embodied in the hardware of the decoding processor or can be implemented by combining hardware and software modules in the decoding processor. The software module may be located in a storage medium, where the storage medium is located in the memory 102, and the processor 101 reads information in the memory 102, and in combination with its hardware, performs the steps of the training method for emotion models provided in the embodiment of the present invention.
The following describes a training method of an emotion model provided by the embodiment of the present invention, where the emotion model includes a generator and an emotion classifier, and fig. 3 is a schematic implementation flow diagram of the training method of an emotion model provided by the embodiment of the present invention, and the method mainly includes the following steps:
step 201: inputting sample data to a generator, and outputting emotion characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data.
In some embodiments, the sample data may be obtained by: obtaining vectors corresponding to source data, wherein the source data comprises picture source data and/or text source data; and carrying out truncation processing or filling processing on the vector corresponding to the source data to obtain sample data with preset data size.
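The truncation or filling step above can be sketched as follows; this is an illustrative helper (the function name and zero pad value are assumptions, not part of the embodiment), fitting a vector to the preset data size:

```python
import numpy as np

def fit_to_size(vec, target_len, pad_value=0.0):
    """Truncate or zero-fill a 1-D vector to a preset length (illustrative sketch)."""
    vec = np.asarray(vec, dtype=float)
    if len(vec) >= target_len:
        return vec[:target_len]                      # truncation processing
    padding = np.full(target_len - len(vec), pad_value)
    return np.concatenate([vec, padding])            # filling processing

short = fit_to_size([1.0, 2.0], 4)        # padded    -> [1., 2., 0., 0.]
long_ = fit_to_size([1, 2, 3, 4, 5], 4)   # truncated -> [1., 2., 3., 4.]
```

The same idea extends dimension-wise to word sequences (truncate or pad to m words) and pictures (crop or pad to m×n pixels).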
When the source data is text source data, a preset number of words is obtained through truncation processing or filling processing; word vectors corresponding to each word in the processed text are obtained using the word2vec technique; the obtained word vectors are ordered in different ways to obtain different text vectors; and the different text vectors are combined to obtain sample data of the preset data size. As shown in fig. 4, assuming that the number of words after processing is m, the dimension of the word vectors is n, and the word vectors are ordered in k different ways, sample data with a data size of m×n×k is obtained. Here, word2vec is a correlation model for generating word vectors, and each word can be mapped to a vector through training. When the source data is picture source data, pictures of uniform size are obtained through cropping processing or filling processing, and the pixel values of all color channels are extracted as sample data. As shown in fig. 5, assuming that the processed picture size is m×n and the number of color channels is k, one sample with a data size of m×n×k is obtained.
In practical application, the word vectors used by the word2vec technique can be obtained as follows: emotion words, such as beautiful, sad, and happy, are extracted from the text source data to form an emotion dictionary based on the text source data; this emotion dictionary is then combined with the synonyms (SynsetTerms) in SentiWordNet and with adjective noun pairs (ANP, Adjective Noun Pairs) into a text emotion dictionary. SentiWordNet is a lexical resource established for sentiment analysis and contains more than 110,000 records, where one record has the format: POS (part of speech, comprising 4 classes: noun n, adjective a, verb v, and adverb r), ID (entry number), PosScore (positive emotion value), NegScore (negative emotion value), SynsetTerms (synonym entry names), and Gloss (notes). ANP is a data set of labels sorted by Borth et al., which is a collection of adjective noun pairs, for example: beautiful flowers, sad eyes, etc. The text emotion dictionary obtained by combining the emotion words extracted from the text source data, the synonyms in SentiWordNet, and the ANP is in fact a set of words or phrases, such as {beautiful, sad, happy, good, beautiful flowers, sad eyes}; the word2vec technique is used to train on this set to obtain the word vector corresponding to each word in the set, where the word vector dimension of each word is the same.
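The merging of the three word sources into a single text emotion dictionary amounts to a set union, sketched below; the word lists are illustrative stand-ins, and the subsequent word2vec training on the resulting set (e.g. with a library such as gensim) is omitted:

```python
# Illustrative sketch: merge three word sources into one text emotion dictionary.
corpus_emotion_words = {"beautiful", "sad", "happy"}   # emotion words extracted from text source data
sentiwordnet_synonyms = {"good", "joyful"}             # SynsetTerms from SentiWordNet (illustrative subset)
anp_pairs = {"beautiful flowers", "sad eyes"}          # adjective noun pairs (illustrative subset)

# The text emotion dictionary is the union of the three sets of words/phrases.
text_emotion_dictionary = corpus_emotion_words | sentiwordnet_synonyms | anp_pairs
```

Each entry of the resulting set would then be mapped to a word vector of fixed dimension n by word2vec training.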
In practical applications, the resulting text word vectors may be ordered as follows. Assuming the number of words obtained after the truncation or filling processing is m, the processed text is expressed as d = (word_1, word_2, …, word_m), where word_i represents a word in the text; the word vectors corresponding to the processed text are then [v_1, v_2, …, v_m], where v_m represents the word vector corresponding to word_m and each word vector has dimension n. If sample data with a data size of m×n×k is required, the word vectors are ordered in k different ways to obtain k text vectors. The 1st ordering may arrange the word vectors in forward order of appearance in the text, giving the text vector vec_1 = [v_1, v_2, …, v_m]; the 2nd ordering may be the reverse of the order of appearance, giving vec_2 = [v_m, v_{m-1}, …, v_1]; the k-th ordering may place the last k words of the text at the head of the document, the order of the other words unchanged, giving vec_k = [v_{m-k+1}, …, v_{m-1}, v_m, v_1, v_2, …, v_{m-k}]. As shown in fig. 6, combining the k text vectors yields one sample with a data size of m×n×k, which may be expressed as [vec_1, vec_2, …, vec_k].
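The k-ordering construction can be sketched as below. This is a hedged reading of the scheme (the rule used for orderings beyond the first two, moving the last i words to the front for channel i, is one possible interpretation; the function name is hypothetical):

```python
import numpy as np

def build_sample(word_vectors, k):
    """Stack k orderings of an (m, n) word-vector matrix into an m*n*k sample (sketch)."""
    m, n = word_vectors.shape
    channels = []
    for i in range(k):
        if i == 0:
            order = np.arange(m)                                  # forward order of appearance
        elif i == 1:
            order = np.arange(m)[::-1]                            # reverse order
        else:
            # move the last i words to the head, keep the rest unchanged
            order = np.concatenate([np.arange(m - i, m), np.arange(m - i)])
        channels.append(word_vectors[order])
    return np.stack(channels, axis=-1)                            # shape (m, n, k)

vecs = np.arange(12, dtype=float).reshape(4, 3)   # m=4 words, word-vector dimension n=3
sample = build_sample(vecs, k=3)                  # one m*n*k sample, here (4, 3, 3)
```

The resulting tensor has the same shape as a k-channel picture of size m×n, which is what lets one generator consume both data types.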
According to the embodiment of the invention, the text data and the picture data can be processed into the data with the same size by carrying out the truncation processing or the filling processing on the source data, so that the training of a model is facilitated. In addition, the embodiment of the invention also provides a new text vector generation method, which is characterized in that word vectors are arranged according to different sequences to obtain different text vectors, and the different text vectors are combined together to obtain multi-dimensional sample data, so that the characteristics of the text data can be enriched.
In some embodiments, when the input sample data is text data, a word vector sequence corresponding to the text data is obtained using a neural network model of the generator; assigning an attention probability to a word vector in the sequence of word vectors using the attention model of the generator; and carrying out weighted summation on word vectors in the vector sequence and the distributed attention probability to obtain text vectors corresponding to the text data.
In some embodiments, when the input sample data is picture data, deriving a plurality of feature vectors corresponding to the picture data using a neural network model of a generator; assigning attention probabilities to a plurality of said feature vectors using an attention model of the generator; and carrying out weighted summation on each characteristic vector and the allocated attention probability to obtain the picture vector of the corresponding picture data.
It should be noted that fig. 7 shows an optional generator according to an embodiment of the present invention. The generator is a convolutional neural network based on an attention model and includes a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, and an attention model output layer. The attention model weights the feature vectors output by the second pooling layer of the neural network: when the input sample data is text data, the word vectors of the text are weighted; when the input sample data is picture data, the feature vectors of the picture data are weighted.
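The weighting performed by the attention output layer can be sketched as follows; this is a minimal illustration, not the embodiment's exact architecture, and the learned query vector and function names are assumptions:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, query):
    """Weight feature vectors by attention probabilities and sum them (sketch).

    features: (m, n) -- m feature vectors from the second pooling layer
    query:    (n,)   -- hypothetical learned attention parameter
    """
    scores = features @ query          # one alignment score per feature vector
    probs = softmax(scores)            # attention probabilities, sum to 1
    return probs @ features, probs     # weighted sum = pooled emotion feature

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))    # e.g. 5 word/region vectors of dimension 8
q = rng.standard_normal(8)
pooled, probs = attention_pool(feats, q)
```

Training would drive `probs` to assign higher weight to emotion-relevant dimensions (emotion words in text; mouth/eye regions in face pictures) and lower weight elsewhere.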
In general, the ideal outcome of the attention model is as follows. For text, the weights of the feature dimensions related to the final emotion label, such as emotion words, are increased, while the weights of irrelevant features (such as sentence subjects) are decreased. For pictures, the subject content of the picture is generally considered to have a greater impact on the emotion label; for example, if the picture is a face picture, the weights in the face region, especially the mouth and eye regions, are increased, and the weights elsewhere are decreased.
According to the embodiment of the invention, the attention model is added into the generator, and the weight of the feature with larger influence on the emotion label is increased through the attention model, so that more proper emotion features can be extracted, and the accuracy of emotion classification is further improved.
It should be noted that, when the input sample data is text data, the output emotion features usually correspond to emotion words, such as good, bad, and any negative words that may be present; when the input sample data is picture data, the output emotion features usually correspond to features such as the pixels and brightness of the picture. The output emotion features are reflected in feature vectors as values in different dimensions (columns); they tend to be implicit features and cannot be placed in one-to-one correspondence with actual factors. The format of the emotion features is a one-dimensional vector, and the value of each dimension lies between 0 and 1.
Step 202: the emotion characteristics output by the generator are used as input of the discriminator, the data types of the sample data are used as output of the discriminator, and the discriminator is trained to discriminate the performance of the corresponding data types according to the input emotion characteristics.
Here, the emotion features output by the generator are input into the discriminator, and the discriminator outputs the label of the corresponding data type according to the input emotion features; for example, a picture corresponds to an output of '1' and text corresponds to an output of '0'. The label of the data type provided by the user is used as the standard result of the discriminator output, and the discriminator is trained to discriminate the corresponding data type according to the input emotion features.
In some embodiments, a first fully connected layer and a second fully connected layer included in the discriminator are initialized; a training sample set is constructed, the training sample set comprising the emotion features output by the generator and the data types of the sample data; a loss function constructed based on the discriminator input, the discriminator output, and the discriminator parameters is initialized; and, with the emotion features output by the generator as the input of the discriminator and the data types of the sample data as the output of the discriminator, the discriminator is trained to discriminate the corresponding data type according to the input emotion features.
It should be noted that, as shown in fig. 8, the discriminator provided in the embodiment of the present invention includes two fully connected layers: the first fully connected layer uses a rectified linear unit (ReLU, Rectified Linear Unit) as its activation function, and the second fully connected layer uses a Softmax function as its activation function; the Softmax function yields the probability that the data type of the sample data is picture data or text data.
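A minimal numpy sketch of such a two-layer discriminator follows; the layer sizes and randomly drawn weights are illustrative placeholders for the trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def discriminator(emotion_feature, w1, b1, w2, b2):
    """Two fully connected layers: ReLU hidden layer, Softmax output (sketch).

    Returns a 2-vector: probability the feature came from picture data vs. text data.
    """
    hidden = relu(emotion_feature @ w1 + b1)   # first fully connected layer + ReLU
    logits = hidden @ w2 + b2                  # second fully connected layer
    return softmax(logits)                     # probability over the two data types

rng = np.random.default_rng(1)
f = rng.random(16)                             # emotion feature, each dimension in [0, 1]
p = discriminator(f,
                  rng.standard_normal((16, 8)), np.zeros(8),
                  rng.standard_normal((8, 2)), np.zeros(2))
```

During training, `p` is compared against the '1'/'0' data-type label to compute the discriminator loss.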
Step 203: and training the emotion classifier to output the performance of the corresponding emotion type according to the input emotion characteristics by taking the emotion characteristics output by the generator as the input of the emotion classifier and taking the emotion type of the corresponding emotion characteristics as the output of the emotion classifier.
Here, the emotion type of the corresponding emotion feature is the emotion type of the sample data, the emotion feature output by the generator is input to the emotion classifier, the emotion classifier outputs a label of the corresponding emotion type according to the input emotion feature, for example, positive emotion outputs 1, negative emotion outputs 0, the emotion type of the sample data provided by the user is used as a standard result output by the emotion classifier, and the emotion classifier is trained to output performance of the corresponding emotion type according to the input emotion feature. Typically, the activation function of the emotion classifier also selects a Softmax function to obtain the probability that the emotion type corresponding to the sample data belongs to the positive class or the negative class.
Step 204: the sample data is used as the input of the generator, the emotion characteristics are used as the output of the generator, the data type output by the discriminator and the emotion type output by the emotion classifier are used as the adjusting parameters of the generator, and the training generator outputs the performance of the emotion characteristics of the input data according to the input data.
In some embodiments, an input layer, an intermediate layer, and an output layer included in the generator are initialized; a training sample set is constructed, the training sample set comprising sample data and the emotion features corresponding to the sample data; a loss function constructed based on the input of the generator, the output of the generator, and model training parameters is initialized, the model training parameters comprising the data type output by the discriminator and the emotion type output by the emotion classifier; and, with the sample data as input and the emotion features as output, a gradient descent algorithm is used to train the generator to output the emotion features of the input data according to the input data.
Here, the loss function constructed based on the input of the generator, the output of the generator, and the model training parameters includes the loss of the emotion classifier, the loss of the discriminator, and the L2 norm of the parameters; the loss function is expressed as:

Loss = L_classifier + L_discriminator + γ·L2(w)

where L_classifier is the loss of the emotion classifier, L_discriminator is the loss of the discriminator, L2(w) is the L2 norm of the parameters, γ is the regularization coefficient, and w denotes the parameters whose value is to make L_classifier + L_discriminator take its minimum;

the objective function is:

w* = argmin_w(Loss)
when the loss function value is smaller than the preset loss threshold value, training is stopped, and the preset loss threshold value is an empirical value (such as 0.001).
In practical application, training the generator with a gradient descent algorithm to output the emotion features of the input data according to the input data can be realized with the Adam algorithm. The Adam algorithm is a first-order optimization algorithm that can replace the traditional stochastic gradient descent process and iteratively updates neural network weights based on training data.
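A single Adam update step can be sketched as follows (standard default hyperparameters; not specified by the embodiment, so treat them as assumptions):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and its square,
    with bias correction, then a scaled parameter update."""
    m = b1 * m + (1 - b1) * grad            # first moment estimate (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2       # second moment estimate (uncentered variance)
    m_hat = m / (1 - b1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)               # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, -1.0])
m = np.zeros(2)
v = np.zeros(2)
w, m, v = adam_step(w, np.array([0.5, -0.5]), m, v, t=1)
```

On the first step the bias correction makes the update magnitude approximately the learning rate in each coordinate, regardless of the gradient scale.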
According to the embodiment of the invention, the generator is trained with the data type output by the discriminator and the emotion type output by the emotion classifier as training parameters, so that the difference between picture data and text data can be reduced and features can be extracted from the two different data types; meanwhile, the emotion features extracted by the generator are beneficial to the classification of the emotion classifier, which improves the accuracy of emotion classification.
As shown in fig. 9, an embodiment of the present invention provides an emotion analysis method based on an emotion model, including:
step 301: and obtaining the emotion model obtained through training.
Wherein the emotion model comprises a generator and an emotion classifier; the generator is obtained by training with sample data as input of the generator and emotion characteristics as output of the generator; the emotion classifier is obtained by training by taking the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier.
It should be noted that, the emotion model in the embodiment of the present invention is obtained by training according to the training method of the emotion model.
Step 302: inputting the data to be analyzed to a generator obtained by training, and outputting emotion characteristics corresponding to the data to be analyzed.
Step 303: and inputting the emotion characteristics output by the generator to the emotion classifier obtained by training, and outputting emotion types corresponding to the emotion characteristics.
The embodiment of the invention also provides a training device for the emotion model, the emotion model comprises a generator and an emotion classifier, as shown in fig. 10, fig. 10 is a schematic diagram of a composition structure of the training device for the emotion model, which comprises:
A first output unit 401, configured to input sample data to the generator, and output emotion features corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
a first training unit 402, configured to train the discriminator to determine the performance of the corresponding data type according to the input emotion feature, with the emotion feature output by the generator as an input of the discriminator and the data type of the sample data as an output of the discriminator;
a second training unit 403, configured to train the emotion classifier to output performance of the corresponding emotion type according to the input emotion features, with the emotion features output by the generator as input of the emotion classifier and emotion types of the corresponding emotion features as output of the emotion classifier;
and the third training unit 404 is configured to train the generator to output performance of the emotion feature of the input data according to the input data, with the sample data as input of the generator, with the emotion feature as output of the generator, and with the data type output by the discriminator and the emotion type output by the emotion classifier as adjustment parameters of the generator.
In some embodiments, the training device of emotion models further comprises: the preprocessing unit is used for acquiring vectors corresponding to source data, wherein the source data comprises picture source data and/or text source data, and performing truncation processing or filling processing on the vectors corresponding to the source data to obtain sample data with preset data size.
In some embodiments, the first output unit 401 is specifically configured to obtain, when the input sample data is picture data, a plurality of feature vectors corresponding to the picture data using a neural network model of the generator; assigning an attention probability to the plurality of feature vectors using an attention model of the generator; weighting and summing each characteristic vector and the allocated attention probability to obtain a picture vector corresponding to the picture data; when the input sample data is text data, obtaining a word vector sequence corresponding to the text data by using a neural network model of the generator; assigning an attention probability to a word vector in the sequence of word vectors using the attention model of the generator; and carrying out weighted summation on word vectors in the vector sequence and the distributed attention probability to obtain text vectors corresponding to the text data.
In some embodiments, the first training unit 402 is specifically configured to initialize a first fully-connected layer and a second fully-connected layer included in the arbiter; constructing a training sample set, wherein the training sample set comprises emotion characteristics and data types of sample data output by a generator; initializing a loss function constructed based on the arbiter input, the arbiter output, and the arbiter parameters; the emotion characteristics output by the generator are used as input of the discriminator, the data types of the sample data are used as output of the discriminator, and the discriminator is trained to discriminate the performance of the corresponding data types according to the input emotion characteristics.
In some embodiments, the third training unit 404 is specifically configured to initialize an input layer, an intermediate layer, and an output layer included in the generator; constructing a training sample set, wherein the training sample set comprises sample data and emotion characteristics corresponding to the sample data; initializing a loss function constructed based on the input of the generator, the output of the generator, and model training parameters including: the data type output by the discriminator and the emotion type output by the emotion classifier; sample data is taken as input, emotion characteristics are taken as output, and a gradient descent algorithm training generator is adopted to output the performance of the emotion characteristics of the input data according to the input data.
The embodiment of the invention also provides an emotion analysis device based on an emotion model, as shown in fig. 11, fig. 11 is a schematic diagram of a composition structure of the emotion analysis device based on an emotion model, which includes:
a first obtaining unit 501, configured to obtain an emotion model obtained by training; wherein the emotion model comprises a generator and an emotion classifier; the generator is obtained by training with sample data as input of the generator and emotion characteristics as output of the generator; the emotion classifier is obtained by training by taking the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier;
A first input unit 502, configured to input data to be analyzed to a training-obtained generator;
a second output unit 503, configured to output emotion features corresponding to data to be analyzed;
a second input unit 504, configured to input the emotion features output by the generator to the trained emotion classifier;
and a third output unit 505, configured to output an emotion type corresponding to the emotion feature.
An embodiment of the present invention provides a storage medium storing executable instructions, where the executable instructions are stored, which when executed by a processor, cause the processor to perform a method for training an emotion model provided by an embodiment of the present invention, for example, a method as shown in fig. 3.
In some embodiments, the storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM; it may also be any of various devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to files in a file system, may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a hypertext markup language (HTML, hyper Text Markup Language) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
In summary, the embodiment of the invention provides a new text vector generation method that can enrich the features of text, and converts text data and picture data into data of the same size through truncation processing or filling processing, which facilitates input to the model. In addition, the embodiment of the invention provides a training scheme based on a generative adversarial network composed of a generator and a discriminator: the generator and the discriminator are adjusted according to the idea of a minimax two-player game, and the difference between pictures and text is reduced, so that the model is applicable to two different data types. Furthermore, the embodiment of the invention adds an attention model to the generator so that it focuses on the features with greater influence on the emotion type, improving the effectiveness of emotion feature extraction and thereby the accuracy of emotion classification.
The foregoing is merely exemplary embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (15)

1. A method of training an emotion model, the emotion model comprising a generator and an emotion classifier, the method comprising:
inputting sample data to the generator, and outputting emotion characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
training the discriminator to discriminate the performance of the corresponding data type according to the input emotion characteristics by taking the emotion characteristics output by the generator as the input of the discriminator and taking the data type of the sample data as the output of the discriminator;
training the emotion classifier to output performance of the corresponding emotion type according to the input emotion characteristics by taking the emotion characteristics output by the generator as input of the emotion classifier and emotion type corresponding to the emotion characteristics as output of the emotion classifier;
and training the performance of the generator for outputting the emotion characteristics of the input data according to the input data by taking the sample data as the input of the generator and the emotion characteristics as the output of the generator and the data type output by the discriminator and the emotion type output by the emotion classifier.
2. The method of claim 1, wherein prior to inputting sample data to the generator, further comprising:
obtaining vectors corresponding to source data, wherein the source data comprises picture source data and/or text source data;
and carrying out truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
3. The method of claim 1, wherein the inputting sample data to the generator and outputting emotion features corresponding to the sample data comprises:
when the input sample data is text data, obtaining a word vector sequence corresponding to the text data by using a neural network model of the generator;
assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator;
and carrying out weighted summation on word vectors in the vector sequence and the distributed attention probability to obtain a text vector corresponding to the text data.
4. The method of claim 1, wherein the inputting sample data to the generator and outputting emotion features corresponding to the sample data comprises:
When the input sample data is picture data, obtaining a plurality of feature vectors corresponding to the picture data by using a neural network model of the generator;
assigning attention probabilities to a plurality of the feature vectors using an attention model of the generator;
and carrying out weighted summation on each characteristic vector and the allocated attention probability to obtain a picture vector of the corresponding picture data.
5. The method of claim 1, wherein the training the arbiter to discriminate the performance of the corresponding data type based on the input emotion features, with the emotion features of the generator output as input to the arbiter and the data type of the sample data as output from the arbiter, comprises:
initializing a first full connection layer and a second full connection layer which are included by the discriminator;
constructing a training sample set, wherein the training sample set comprises the emotion characteristics and the data types of the sample data output by the generator;
initializing a loss function constructed based on the arbiter input, the arbiter output, and the arbiter parameters;
and training the discriminator to discriminate the performance of the corresponding data type according to the input emotion characteristics by taking the emotion characteristics output by the generator as the input of the discriminator and taking the data type of the sample data as the output of the discriminator.
6. The method of claim 1, wherein the training the generator to output the performance of the emotion feature of the input data according to the input data with the sample data as the input of the generator, the emotion feature as the output of the generator, the data type output by the discriminator, and the emotion type output by the emotion classifier comprises:
initializing an input layer, an intermediate layer and an output layer included in the generator;
constructing a training sample set, wherein the training sample set comprises sample data;
initializing a loss function constructed based on an input of the generator, an output of the generator, and model training parameters including: the data type output by the discriminator and the emotion type output by the emotion classifier; the model training parameters are predicted values for calculating a loss function;
and training the performance of the generator for outputting the emotion characteristics of the input data according to the input data by taking the sample data as input and the emotion characteristics as output through a gradient descent algorithm.
7. A emotion analysis method based on emotion models, the method comprising:
Acquiring an emotion model obtained through training; wherein,
the emotion model comprises a generator and an emotion classifier;
the generator is obtained by training the sample data serving as the input of the generator and the emotion characteristics serving as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as input of the emotion classifier and emotion types corresponding to the emotion characteristics as output of the emotion classifier;
inputting data to be analyzed to the generator obtained through training, and outputting emotion characteristics corresponding to the data to be analyzed;
and inputting the emotion characteristics output by the generator to the emotion classifier obtained through training, and outputting emotion types corresponding to the emotion characteristics.
8. A training apparatus for emotion models, the emotion models comprising a generator and an emotion classifier, the apparatus comprising:
the first output unit is used for inputting sample data to the generator and outputting emotion characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
the first training unit is used for taking the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, and training the performance of the discriminator in discriminating the corresponding data type according to the input emotion characteristics;
the second training unit is used for taking the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier, and training the performance of the emotion classifier in outputting the corresponding emotion type according to the input emotion characteristics;
and the third training unit is used for taking the sample data as the input of the generator and the emotion characteristics as the output of the generator, and training the performance of the generator in outputting the emotion characteristics of the input data according to the input data, using the data type output by the discriminator and the emotion type output by the emotion classifier.
9. The apparatus of claim 8, wherein the apparatus further comprises:
the preprocessing unit is used for acquiring vectors corresponding to source data, wherein the source data comprises picture source data and/or text source data; and carrying out truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
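The truncation-or-filling step of the preprocessing unit can be sketched like this; the pad value of 0.0 is an assumption, since the claim only requires that every sample reach a preset data size.

```python
def fit_to_size(vec, size, pad_value=0.0):
    """Truncate a source-data vector that exceeds the preset data size,
    or pad one that falls short, so all samples have uniform length."""
    if len(vec) >= size:
        return vec[:size]                              # truncation processing
    return vec + [pad_value] * (size - len(vec))       # filling processing
```

For example, `fit_to_size([1, 2, 3, 4], 6)` fills to `[1, 2, 3, 4, 0.0, 0.0]`, while `fit_to_size([1, 2, 3, 4], 2)` truncates to `[1, 2]`.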
10. The apparatus of claim 8, wherein
the first output unit is specifically configured to, when the input sample data is text data, obtain a word vector sequence corresponding to the text data using a neural network model of the generator; assign an attention probability to each word vector in the word vector sequence using an attention model of the generator; and carry out weighted summation of the word vectors in the word vector sequence with the assigned attention probabilities to obtain a text vector corresponding to the text data;
and, when the input sample data is picture data, obtain a plurality of feature vectors corresponding to the picture data using a neural network model of the generator; assign attention probabilities to the plurality of feature vectors using an attention model of the generator; and carry out weighted summation of each feature vector with its assigned attention probability to obtain a picture vector corresponding to the picture data.
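Both branches of the claim above perform the same attention-weighted summation, collapsing a sequence of word vectors or picture feature vectors into a single fixed-size vector. A minimal sketch follows; the dot-product scoring against a query vector is an assumption, since the claim only specifies assigning probabilities and summing with those weights.

```python
import numpy as np

def attention_pool(vectors, query):
    """Collapse a sequence of word/feature vectors of shape (n, d)
    into a single (d,) text or picture vector by attention."""
    scores = vectors @ query                  # one relevance score per vector
    probs = np.exp(scores - scores.max())     # softmax -> attention probabilities
    probs /= probs.sum()                      # probabilities sum to 1
    return probs @ vectors                    # weighted summation of the vectors
```

Subtracting `scores.max()` before exponentiating keeps the softmax numerically stable without changing the resulting probabilities.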
11. The apparatus of claim 8, wherein
the first training unit is specifically configured to initialize a first fully-connected layer and a second fully-connected layer included in the discriminator; construct a training sample set, wherein the training sample set comprises the emotion characteristics output by the generator and the data types of the sample data; initialize a loss function constructed based on the discriminator input, the discriminator output, and the discriminator parameters; and train the performance of the discriminator in discriminating the corresponding data type according to the input emotion characteristics, with the emotion characteristics output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator.
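A sketch of the discriminator just described: two randomly initialized fully-connected layers, a forward pass from emotion feature to predicted data type, and a loss built from the discriminator's input-derived prediction, its target output, and (implicitly) its parameters. Layer sizes, the `tanh` activation, and the binary text-vs-picture encoding are assumptions for illustration.

```python
import numpy as np

def init_discriminator(d_in, d_hidden, rng):
    """Initialize the first and second fully-connected layers."""
    return {"W1": rng.normal(0.0, 0.1, (d_in, d_hidden)), "b1": np.zeros(d_hidden),
            "W2": rng.normal(0.0, 0.1, (d_hidden, 1)),    "b2": np.zeros(1)}

def discriminate(params, feat):
    """Forward pass: emotion feature in, probability that the
    feature came from text data (vs. picture data) out."""
    h = np.tanh(feat @ params["W1"] + params["b1"])   # first fully-connected layer
    logit = h @ params["W2"] + params["b2"]           # second fully-connected layer
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid -> data-type probability

def bce_loss(p, y):
    """Binary cross-entropy between predicted data type p and label y."""
    return float(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```

Training the discriminator on this loss while the generator trains against it is the adversarial loop: the discriminator sharpens its data-type guesses, and the generator learns features that defeat them.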
12. The apparatus of claim 8, wherein
the third training unit is specifically configured to initialize an input layer, an intermediate layer, and an output layer included in the generator; construct a training sample set, wherein the training sample set comprises the sample data; initialize a loss function constructed based on the input of the generator, the output of the generator, and model training parameters, the model training parameters comprising the data type output by the discriminator and the emotion type output by the emotion classifier and serving as predicted values for calculating the loss function; and train, through a gradient descent algorithm with the sample data as input and the emotion characteristics as output, the performance of the generator in outputting the emotion characteristics of the input data according to the input data.
13. An emotion analysis device based on an emotion model, the device comprising:
the first acquisition unit is used for acquiring the emotion model obtained through training; wherein:
the emotion model comprises a generator and an emotion classifier;
the generator is obtained by training with the sample data as the input of the generator and the emotion characteristics as the output of the generator, using the data type output by the discriminator and the emotion type output by the emotion classifier;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier;
the first input unit is used for inputting data to be analyzed to the generator obtained by training;
the second output unit is used for outputting emotion characteristics corresponding to the data to be analyzed;
the second input unit is used for inputting the emotion characteristics output by the generator to the emotion classifier obtained through training;
and the third output unit is used for outputting the emotion type corresponding to the emotion characteristics.
14. A training device for emotion models, the device comprising: a memory and a processor; wherein:
the memory is used for storing a computer program capable of running on the processor;
the processor, when running the computer program, is configured to perform the training method of emotion models according to any of claims 1 to 6.
15. A storage medium storing a computer program which, when executed by at least one processor, implements the method of training an emotion model according to any one of claims 1 to 6.
CN201910436077.6A 2019-05-23 2019-05-23 Emotion model training method, emotion analysis device and storage medium Active CN111985243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436077.6A CN111985243B (en) 2019-05-23 2019-05-23 Emotion model training method, emotion analysis device and storage medium

Publications (2)

Publication Number Publication Date
CN111985243A CN111985243A (en) 2020-11-24
CN111985243B true CN111985243B (en) 2023-09-08

Family

ID=73437203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436077.6A Active CN111985243B (en) 2019-05-23 2019-05-23 Emotion model training method, emotion analysis device and storage medium

Country Status (1)

Country Link
CN (1) CN111985243B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112885432A (en) * 2021-02-06 2021-06-01 北京色彩情绪健康科技发展有限公司 Emotion analysis and management system
CN113010675A (en) * 2021-03-12 2021-06-22 出门问问信息科技有限公司 Method and device for classifying text information based on GAN and storage medium
CN116418686A (en) * 2021-12-31 2023-07-11 华为技术有限公司 Model data processing method and device
CN115982473B (en) * 2023-03-21 2023-06-23 环球数科集团有限公司 Public opinion analysis arrangement system based on AIGC

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A heterogeneous-transfer image sentiment polarity analysis method based on multi-modal deep latent association
CN107818084A (en) * 2017-10-11 2018-03-20 北京众荟信息技术股份有限公司 A sentiment analysis method fusing review images
WO2018060993A1 (en) * 2016-09-27 2018-04-05 Faception Ltd. Method and system for personality-weighted emotion analysis
CN108108849A (en) * 2017-12-31 2018-06-01 厦门大学 A microblog emotion prediction method based on weakly supervised multi-modal deep learning
CN108388544A (en) * 2018-02-10 2018-08-10 桂林电子科技大学 An image-text fusion microblog sentiment analysis method based on deep learning
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A multi-modal image-text emotion recognition method based on deep learning
CN108959322A (en) * 2017-05-25 2018-12-07 富士通株式会社 Information processing method and device for generating images from text
CN109117482A (en) * 2018-09-17 2019-01-01 武汉大学 An adversarial sample generation method for Chinese text sentiment tendency detection
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 A cross-modal retrieval method based on generative adversarial networks
CN109344879A (en) * 2018-09-07 2019-02-15 华南理工大学 A decomposed-convolution method based on a text-image adversarial network model
CN109376775A (en) * 2018-10-11 2019-02-22 南开大学 A multi-modal sentiment analysis method for online news
CN109753566A (en) * 2019-01-09 2019-05-14 大连民族大学 A model training method for cross-domain sentiment analysis based on convolutional neural networks
CN109783798A (en) * 2018-12-12 2019-05-21 平安科技(深圳)有限公司 Method, apparatus, terminal, and storage medium for adding pictures to text information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11205103B2 (en) * 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018060993A1 (en) * 2016-09-27 2018-04-05 Faception Ltd. Method and system for personality-weighted emotion analysis
CN108959322A (en) * 2017-05-25 2018-12-07 富士通株式会社 Information processing method and device for generating images from text
CN107818084A (en) * 2017-10-11 2018-03-20 北京众荟信息技术股份有限公司 A sentiment analysis method fusing review images
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A heterogeneous-transfer image sentiment polarity analysis method based on multi-modal deep latent association
CN108108849A (en) * 2017-12-31 2018-06-01 厦门大学 A microblog emotion prediction method based on weakly supervised multi-modal deep learning
CN108388544A (en) * 2018-02-10 2018-08-10 桂林电子科技大学 An image-text fusion microblog sentiment analysis method based on deep learning
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A multi-modal image-text emotion recognition method based on deep learning
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 A cross-modal retrieval method based on generative adversarial networks
CN109344879A (en) * 2018-09-07 2019-02-15 华南理工大学 A decomposed-convolution method based on a text-image adversarial network model
CN109117482A (en) * 2018-09-17 2019-01-01 武汉大学 An adversarial sample generation method for Chinese text sentiment tendency detection
CN109376775A (en) * 2018-10-11 2019-02-22 南开大学 A multi-modal sentiment analysis method for online news
CN109783798A (en) * 2018-12-12 2019-05-21 平安科技(深圳)有限公司 Method, apparatus, terminal, and storage medium for adding pictures to text information
CN109753566A (en) * 2019-01-09 2019-05-14 大连民族大学 A model training method for cross-domain sentiment analysis based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A review of cross-modal retrieval models and feature extraction based on representation learning; Li Zhiyi, Huang Zifeng, Xu Xiaomian; Journal of the China Society for Scientific and Technical Information; Vol. 37, No. 4; pp. 422-435 *

Also Published As

Publication number Publication date
CN111985243A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN111191078B (en) Video information processing method and device based on video information processing model
Keneshloo et al. Deep reinforcement learning for sequence-to-sequence models
CN111027327B (en) Machine reading understanding method, device, storage medium and device
CN111985243B (en) Emotion model training method, emotion analysis device and storage medium
CN112131350B (en) Text label determining method, device, terminal and readable storage medium
US11720761B2 (en) Systems and methods for intelligent routing of source content for translation services
CN113011186B (en) Named entity recognition method, named entity recognition device, named entity recognition equipment and computer readable storage medium
CN114676234A (en) Model training method and related equipment
CN113505193A (en) Data processing method and related equipment
CN111767394A (en) Abstract extraction method and device based on artificial intelligence expert system
Huang et al. C-Rnn: a fine-grained language model for image captioning
CN111414561A (en) Method and apparatus for presenting information
CN114358203A (en) Training method and device for image description sentence generation module and electronic equipment
CN112101042A (en) Text emotion recognition method and device, terminal device and storage medium
CN112507124A (en) Chapter-level event causal relationship extraction method based on graph model
CN116320607A (en) Intelligent video generation method, device, equipment and medium
CN111445545B (en) Text transfer mapping method and device, storage medium and electronic equipment
Long et al. Cross-domain personalized image captioning
CN116881446A (en) Semantic classification method, device, equipment and storage medium thereof
CN115525757A (en) Contract abstract generation method and device and contract key information extraction model training method
CN115240713A (en) Voice emotion recognition method and device based on multi-modal features and contrast learning
CN114722832A (en) Abstract extraction method, device, equipment and storage medium
CN112784573A (en) Text emotion content analysis method, device and equipment and storage medium
Dehaqi et al. Adversarial image caption generator network
CN112988963B (en) User intention prediction method, device, equipment and medium based on multi-flow nodes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant