CN111985243A - Emotion model training method, emotion analysis device and storage medium - Google Patents

Emotion model training method, emotion analysis device and storage medium

Info

Publication number
CN111985243A
Authority
CN
China
Prior art keywords
data
generator
emotion
output
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910436077.6A
Other languages
Chinese (zh)
Other versions
CN111985243B (en)
Inventor
柳圆圆
李家乐
闫兴安
汤煜
曹彬
何威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Suzhou Software Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201910436077.6A
Publication of CN111985243A
Application granted
Publication of CN111985243B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an emotion model training method, an emotion analysis method and device, and a storage medium. The method comprises the following steps: inputting sample data to a generator and outputting emotional features, wherein the sample data comprises text data and/or picture data; taking the emotional features as the input of a discriminator and the data type as the output of the discriminator, and training the discriminator to discriminate the corresponding data type from the input emotional features; taking the emotional features as the input of an emotion classifier and the emotion types as the output of the emotion classifier, and training the emotion classifier to output the emotion type corresponding to the input emotional features; and taking the sample data as the input of the generator, the emotional features as the output of the generator, and the data type and the emotion type as adjusting parameters of the generator, training the generator to output the emotional features of the input data. By applying the scheme of the invention, emotion analysis can be performed on two different types of data.

Description

Emotion model training method, emotion analysis device and storage medium
Technical Field
The invention relates to an emotion analysis technology, in particular to an emotion model training method, an emotion analysis device and a storage medium.
Background
Emotion analysis was first applied in the field of text processing to classify the polarity of emotion in text carrying subjective emotional tendencies: positive categories such as happiness and joy, and negative categories such as sadness, depression and anger. Such classification plays an important role in information retrieval and recommendation systems. With the development of internet technology, more and more kinds of data are available for research, and the scope of emotion analysis has gradually expanded to data of multiple modalities such as pictures, voice and video. Picture emotion analysis, for example, mainly observes the polarity of people's emotional reaction after seeing a picture: if people feel positive emotions such as happiness, the emotion conveyed by the picture is positive; otherwise, the emotion conveyed by the picture is negative. In the related art, different emotion analysis methods are adopted for different types of data, but no method exists that can perform emotion analysis on two different types of data.
Disclosure of Invention
The embodiments of the invention provide an emotion model training method, an emotion analysis method, a device and a storage medium, which can perform emotion classification on both text and pictures.
The technical scheme of the embodiment of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a training method for an emotion model, where the emotion model includes a generator and an emotion classifier, and the method includes:
inputting sample data to the generator, and outputting emotional characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
taking the emotional features output by the generator as the input of a discriminator and the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the corresponding data type from the input emotional features;
taking the emotional features output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotional features as the output of the emotion classifier, and training the emotion classifier to output the emotion type corresponding to the input emotional features;
and taking the sample data as the input of the generator, the emotional features as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as adjusting parameters of the generator, training the generator to output the emotional features of the input data.
In the foregoing solution, before the inputting the sample data to the generator, the method further includes:
obtaining a vector corresponding to source data, wherein the source data comprises picture source data and/or text source data;
and performing truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
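The truncation-or-filling step above can be sketched as follows. This is a minimal NumPy sketch; the function name `fit_to_size`, the zero padding and the example preset length are illustrative assumptions rather than details stated in the patent:

```python
import numpy as np

def fit_to_size(vectors: np.ndarray, preset_len: int) -> np.ndarray:
    """Truncate or zero-pad a (length, dim) vector sequence to preset_len rows.

    Hypothetical helper illustrating the truncation/filling preprocessing;
    the patent does not specify the padding value or the preset size.
    """
    length, dim = vectors.shape
    if length >= preset_len:
        return vectors[:preset_len]              # truncation processing
    pad = np.zeros((preset_len - length, dim), dtype=vectors.dtype)
    return np.vstack([vectors, pad])             # filling processing

print(fit_to_size(np.ones((3, 4)), 5).shape)  # (5, 4): padded up
print(fit_to_size(np.ones((8, 4)), 5).shape)  # (5, 4): truncated down
```

Either way the output has the preset data size, so text and picture samples of varying length can share one generator input layer.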
In the above scheme, the inputting sample data to the generator and outputting emotional features corresponding to the sample data includes:
when the input sample data is text data, obtaining a word vector sequence corresponding to the text data by using a neural network model of the generator;
assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator;
and carrying out weighted summation on the word vectors in the word vector sequence and the allocated attention probabilities to obtain the text vector corresponding to the text data.
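The three steps above (word vectors, attention probabilities, weighted summation) can be sketched as one pooling function. The dot-product score against a query vector is an assumption; the patent does not specify how the attention model computes its probabilities:

```python
import numpy as np

def attention_pool(word_vectors: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Assign a softmax attention probability to each word vector and return
    the weighted summation (the text vector). The dot-product score against
    a query vector is an assumed scoring function."""
    scores = word_vectors @ query                  # one score per word
    scores = scores - scores.max()                 # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()  # attention probabilities
    return probs @ word_vectors                    # weighted summation

rng = np.random.default_rng(0)
words = rng.normal(size=(6, 8))   # 6 word vectors, dimensionality 8
query = rng.normal(size=8)
text_vector = attention_pool(words, query)
print(text_vector.shape)  # (8,)
```

The same pooling applies unchanged to the picture branch below, with region feature vectors in place of word vectors.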
In the above scheme, the inputting sample data to the generator and outputting emotional features corresponding to the sample data includes:
when the input sample data is picture data, obtaining a plurality of characteristic vectors corresponding to the picture data by using a neural network model of the generator;
assigning an attention probability to each of the feature vectors using an attention model of the generator;
and carrying out weighted summation on each feature vector and the allocated attention probability to obtain a picture vector corresponding to the picture data.
In the above solution, taking the emotional features output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the corresponding data type from the input emotional features includes:
initializing a first full connection layer and a second full connection layer which are included by the discriminator;
constructing a training sample set, wherein the training sample set comprises the emotional features output by the generator and the data types of the sample data;
initializing a loss function constructed based on the discriminator input, the discriminator output and the discriminator parameter;
and taking the emotional features output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the corresponding data type from the input emotional features.
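A minimal sketch of such a discriminator, with two full connection layers mapping an emotional feature to the probability that it came from text data, and a binary cross-entropy loss, might look as follows. The layer sizes, tanh activation and sigmoid output are assumptions; the patent only states that the discriminator has a first and a second full connection layer and a loss function over its input, output and parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Discriminator:
    """Two full connection layers mapping an emotional feature to the
    probability that it came from text data (vs. picture data).
    Layer sizes and activations are illustrative assumptions."""
    def __init__(self, feat_dim, hidden_dim):
        self.w1 = rng.normal(scale=0.1, size=(feat_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.w2 = rng.normal(scale=0.1, size=(hidden_dim, 1))
        self.b2 = np.zeros(1)

    def forward(self, feat):
        h = np.tanh(feat @ self.w1 + self.b1)   # first full connection layer
        return sigmoid(h @ self.w2 + self.b2)   # second full connection layer

def bce_loss(pred, label):
    """Binary cross-entropy between predicted and true data type (0 or 1)."""
    return float(-(label * np.log(pred) + (1 - label) * np.log(1 - pred)))

d = Discriminator(feat_dim=16, hidden_dim=8)
feature = rng.normal(size=16)        # emotional feature from the generator
p_text = d.forward(feature)          # probability the feature is from text
print(bce_loss(p_text[0], 1) > 0)    # loss is positive for any 0 < p < 1
```

Training the discriminator drives this loss down; training the generator (below) drives it back up, which is the adversarial part of the scheme.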
In the above solution, taking the sample data as the input of the generator, the emotional features as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjusting parameters of the generator, and training the generator to output the emotional features of the input data includes:
initializing an input layer, an intermediate layer and an output layer included in the generator;
constructing a training sample set, wherein the training sample set comprises sample data and emotional characteristics corresponding to the sample data;
initializing a loss function constructed based on an input of the generator, an output of the generator, and model training parameters, the model training parameters including: the data type output by the discriminator and the emotion type output by the emotion classifier;
and taking the sample data as input and the emotional features as output, training the generator, by a gradient descent algorithm, to output the emotional features of the input data.
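The two adjusting parameters can be folded into a single generator objective in the spirit of the minimax game: reward correct emotion classification while penalising a discriminator that can still tell the data types apart. The combination below and the weight `adv_weight` are illustrative assumptions, shown with a plain gradient descent step:

```python
import numpy as np

def generator_loss(emotion_loss: float, discriminator_loss: float,
                   adv_weight: float = 0.5) -> float:
    """Combined generator objective (illustrative assumption): minimise the
    emotion classification loss while maximising the discriminator's loss,
    i.e. the minimax game that makes text and picture features
    indistinguishable."""
    return emotion_loss - adv_weight * discriminator_loss

def sgd_step(theta: np.ndarray, grad: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One gradient descent update on the generator parameters."""
    return theta - lr * grad

loss = generator_loss(emotion_loss=1.0, discriminator_loss=0.4)
theta = sgd_step(np.array([1.0, -2.0]), np.array([0.5, 0.5]), lr=0.1)
print(loss, theta)
```

In practice the generator and discriminator updates alternate, so each network adapts to the other's current state.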
In a second aspect, an embodiment of the present invention provides an emotion analysis method based on an emotion model, including:
acquiring an emotion model obtained by training; wherein:
the emotion model comprises a generator and an emotion classifier;
the generator is obtained by training with sample data as the input of the generator and emotional features as the output of the generator;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier;
inputting data to be analyzed to the generator obtained by training, and outputting emotional features corresponding to the data to be analyzed;
and inputting the emotional characteristics output by the generator to the trained emotional classifier, and outputting the emotional types corresponding to the emotional characteristics.
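At inference time the second aspect reduces to composing the two trained components. The sketch below uses hypothetical toy stand-ins for the trained generator and emotion classifier:

```python
def analyse_emotion(data, generator, classifier):
    """Trained emotion model inference: data -> emotional feature -> emotion type."""
    feature = generator(data)     # trained generator extracts the emotional feature
    return classifier(feature)    # trained emotion classifier outputs the emotion type

# Hypothetical toy stand-ins for the trained components:
toy_generator = lambda text: len(text)                          # "feature" = text length
toy_classifier = lambda f: "positive" if f > 5 else "negative"
print(analyse_emotion("what a lovely day", toy_generator, toy_classifier))  # positive
```

Because the generator was trained on both data types, the same pipeline serves text and pictures without switching models.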
In a third aspect, an embodiment of the present invention provides an apparatus for training an emotion model, where the emotion model includes a generator and an emotion classifier, and the apparatus includes:
the first output unit is used for inputting sample data to the generator and outputting emotional characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
the first training unit is used for taking the emotional features output by the generator as the input of a discriminator and the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the corresponding data type from the input emotional features;
the second training unit is used for taking the emotional features output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotional features as the output of the emotion classifier, and training the emotion classifier to output the emotion type corresponding to the input emotional features;
and the third training unit is used for taking the sample data as the input of the generator, the emotional features as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjusting parameters of the generator, and training the generator to output the emotional features of the input data.
In the above scheme, the apparatus further comprises: the preprocessing unit is used for acquiring a vector corresponding to source data, wherein the source data comprises picture source data and/or text source data; and performing truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
In the foregoing solution, the first output unit is specifically configured to, when the input sample data is text data, obtain a word vector sequence corresponding to the text data by using a neural network model of the generator; assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator; carrying out weighted summation on the word vectors in the vector sequence and the allocated attention probability to obtain text vectors corresponding to the text data;
when the input sample data is picture data, obtaining a plurality of feature vectors corresponding to the picture data by using a neural network model of the generator; assigning an attention probability to each of the feature vectors using an attention model of the generator; and carrying out weighted summation on each feature vector and the allocated attention probability to obtain a picture vector corresponding to the picture data.
In the above scheme, the first training unit is specifically configured to initialize a first full connection layer and a second full connection layer included in the discriminator; construct a training sample set, wherein the training sample set comprises the emotional features output by the generator and the data types of the sample data; initialize a loss function constructed based on the discriminator input, the discriminator output and the discriminator parameters; and take the emotional features output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, and train the discriminator to discriminate the corresponding data type from the input emotional features.
In the above scheme, the third training unit is specifically configured to initialize an input layer, an intermediate layer and an output layer included in the generator; construct a training sample set, wherein the training sample set comprises sample data and the emotional features corresponding to the sample data; initialize a loss function constructed based on the input of the generator, the output of the generator and model training parameters, the model training parameters including the data type output by the discriminator and the emotion type output by the emotion classifier; and take the sample data as input and the emotional features as output, and train the generator, by a gradient descent algorithm, to output the emotional features of the input data.
In a fourth aspect, an embodiment of the present invention provides an emotion analysis apparatus based on an emotion model, including:
the first acquisition unit is used for acquiring the emotion model obtained by training; wherein:
the emotion model comprises a generator and an emotion classifier;
the generator is obtained by training with sample data as the input of the generator and emotional features as the output of the generator;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier;
a first input unit for inputting data to be analyzed to the generator obtained by training;
the second output unit is used for outputting the emotional characteristics corresponding to the data to be analyzed;
the second input unit is used for inputting the emotion characteristics output by the generator to the emotion classifier obtained by training;
and the third output unit is used for outputting the emotion types corresponding to the emotion characteristics.
In a fifth aspect, an embodiment of the present invention provides an emotion model training apparatus, including: a memory and a processor; wherein:
the memory for storing a computer program operable on the processor;
the processor is used for executing the emotion model training method in the above scheme when running the computer program.
In a sixth aspect, an embodiment of the present invention provides a storage medium, where the storage medium stores a computer program, and when the computer program is executed by at least one processor, the method for training an emotion model in the above aspects is implemented.
The embodiment of the invention provides an emotion model comprising a generator and an emotion classifier, and the training method comprises the following steps: inputting sample data to the generator, and outputting emotional features corresponding to the sample data, wherein the sample data comprises text data and/or picture data; taking the emotional features output by the generator as the input of a discriminator and the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the corresponding data type from the input emotional features; taking the emotional features output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotional features as the output of the emotion classifier, and training the emotion classifier to output the emotion type corresponding to the input emotional features; and taking the sample data as the input of the generator, the emotional features as the output of the generator, and the data type output by the discriminator and the emotion type output by the emotion classifier as the adjusting parameters of the generator, training the generator to output the emotional features of the input data.
In this way, the data type output by the discriminator and the emotion type output by the emotion classifier are used as the adjusting parameters of the generator, and the generator is trained to output the emotional features of input data. The generator and the discriminator form a generative adversarial network and are adjusted using the idea of the minimax two-player game, so that the difference between picture data and text data can be reduced and the generator can accurately extract the features of both picture data and text data. The emotion model obtained by the training method provided by the embodiment of the invention can therefore accurately perform emotion analysis on both types of data.
Drawings
FIG. 1 is a schematic structural diagram of an emotion analysis system architecture based on an emotion model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of an emotion model training apparatus provided in an embodiment of the present invention;
FIG. 3 is a flow chart of a method for training an emotion model according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating text source data processing according to an embodiment of the invention;
FIG. 5 is a flow chart of processing of picture source data according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a structure of a text vector according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a generator according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a discriminator provided in an embodiment of the present invention;
FIG. 9 is a flowchart illustrating an emotion analysis method based on an emotion model according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating a structure of an apparatus for training an emotion model according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a composition structure of an emotion analysis apparatus based on an emotion model according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second" and "third" are used only to distinguish similar objects and do not denote a particular order; where permissible, the specific order or sequence may be interchanged, so that the embodiments of the invention described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
In the related art, whether a traditional machine learning method or a deep learning method is used, a classifier is trained separately on a large amount of labeled picture data or labeled text data, and the unlabeled data in a test set is then subjected to emotion classification. That is to say, the prior art is applicable only to picture data or only to text data; if a classifier trained on picture data is forcibly applied to text data, the accuracy of emotion analysis based on that model drops sharply. The reasons are as follows: on the one hand, pictures are two-dimensional data while text is one-dimensional data, and a large difference exists between the two; on the other hand, it is difficult to establish a direct connection between the low-level visual features of a picture and its high-level emotional semantics, leaving a large semantic gap, while the expression of text is usually more implicit and the emotional polarity it conveys can generally be judged accurately only in combination with context. A feature extraction method designed for pictures or for text is therefore often limited to certain fields and generally performs poorly in classifying other types of data.
Based on this, the embodiment of the invention provides an emotion model training method in which a generator and a discriminator form a generative adversarial network, and the generator and the discriminator are adjusted using the idea of the minimax two-player game. This reduces the difference between picture data and text data, enables the generator to accurately extract the features of both, and allows the emotion model obtained by the training method provided by the embodiment of the invention to accurately perform emotion analysis on both types of data.
First, the emotion-model-based emotion analysis system provided in the embodiment of the present invention is described. Referring to fig. 1, the emotion analysis system includes a terminal and/or a server; both the emotion model training method and the emotion-model-based emotion analysis method of the embodiments of the present invention may be carried out by a terminal, by a server, or by a terminal and a server cooperatively.
In one application scenario, an emotion analysis application (APP) is installed on a terminal and an emotion model training device is arranged on a server. The server trains an emotion model based on the emotion model training method provided by the embodiment of the invention. A user inputs text and/or pictures in the emotion analysis APP on the terminal, the emotion analysis APP submits them to the server, the emotion model on the server analyzes the submitted text and/or pictures, and the analysis result is returned to the terminal.
In another application scenario, a terminal is provided with both an emotion analysis APP and an emotion model training device. The terminal trains an emotion model based on the emotion model training method provided by the embodiment of the invention, and the trained emotion model is then used to update the emotion analysis APP on the terminal. When a user inputs text and/or a picture in the updated emotion analysis APP, the emotion model analyzes the text or picture and outputs a corresponding analysis result.
Next, the emotion model training apparatus and the emotion-model-based emotion analysis apparatus of the embodiments of the present invention are described. Both can be implemented in various forms: independently by a terminal such as a smart phone, tablet computer or desktop computer, or cooperatively by a terminal and a server. They may be implemented in hardware or in a combination of hardware and software. Taking the emotion model training apparatus as an example, its structure is described in detail below. It is understood that fig. 2 shows only an exemplary structure, not the complete structure, and part or all of the structure shown in fig. 2 may be implemented as needed.
The emotion model training apparatus 100 provided in the embodiment of the present invention includes: at least one processor 101, a memory 102, a user interface 103, and at least one network interface 104. The various components of the emotion model training apparatus 100 are coupled together by a bus system 105. It will be appreciated that the bus system 105 is used to enable connection and communication among these components. In addition to a data bus, the bus system 105 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 105 in FIG. 2.
The user interface 103 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 102 can be volatile memory or nonvolatile memory, and can also include both volatile and nonvolatile memory.
The memory 102 in the embodiment of the present invention is used for storing various types of data to support the operation of the emotion model training apparatus 100. Examples of such data include executable instructions for operating on the emotion model training apparatus 100; a program implementing the emotion model training method of the embodiment of the present invention may be included in these executable instructions.
The emotion model training method disclosed by the embodiment of the invention can be applied to the processor 101 or implemented by the processor 101. The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the emotion model training method may be completed by integrated logic circuits of hardware or by instructions in the form of software in the processor 101. The processor 101 may be a general purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The processor 101 may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor. The steps of the method provided by the embodiment of the present invention may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium in the memory 102; the processor 101 reads the information in the memory 102 and, in combination with its hardware, completes the steps of the emotion model training method provided by the embodiment of the present invention.
The following describes a method for training an emotion model provided in an embodiment of the present invention. The emotion model includes a generator and an emotion classifier. Fig. 3 is a schematic diagram of the implementation flow of the method for training an emotion model provided in an embodiment of the present invention; the method mainly includes the following steps:
step 201: inputting sample data to a generator, and outputting emotional characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data.
In some embodiments, sample data may be obtained by: obtaining a vector corresponding to source data, wherein the source data comprises picture source data and/or text source data; and performing truncation processing or filling processing on the vector corresponding to the source data to obtain sample data with a preset data size.
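As a concrete illustration of the truncation/filling step described above, the following is a minimal Python sketch. The function name, the list-of-lists representation of the vectors, and the choice of zero vectors as filling are assumptions made for illustration, not details fixed by the patent text.

```python
def fit_to_length(vectors, target_len, dim):
    """Truncate or zero-pad a list of vectors to exactly target_len rows.

    `vectors` is a list of length-`dim` lists (e.g. word vectors or pixel
    rows). Names and signature are illustrative only.
    """
    if len(vectors) >= target_len:
        return vectors[:target_len]              # truncation processing
    padding = [[0.0] * dim] * (target_len - len(vectors))
    return vectors + padding                     # filling processing

# a sequence of 3 two-dimensional vectors truncated to 2 rows
sample = fit_to_length([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], 2, 2)
# a sequence of 1 vector filled up to 3 rows with zero vectors
long_sample = fit_to_length([[1.0, 2.0]], 3, 2)
```

Applying the same target size to text word-vector sequences and to picture pixel arrays is what makes the two data types comparable at the model input.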
It should be noted that, when the source data is text source data, a preset number of words is obtained through truncation processing or filling processing, word vectors corresponding to each word in the processed text are obtained using the word2vec technique, the obtained word vectors are ordered in different manners to obtain different corresponding text vectors, and the different text vectors are combined to obtain sample data of a preset data size. As shown in fig. 4, assuming that the number of processed words is m and the dimensionality of a word vector is n, sorting the word vectors in k different manners yields sample data of data size m × n × k. Here, word2vec is a model for generating word vectors; through training, each word can be mapped to one vector. When the source data is picture source data, pictures of uniform size are obtained through cropping processing or filling processing, and the pixel values of all color channels are extracted as sample data. As shown in fig. 5, assuming that the processed picture size is m × n and the number of color channels is k, sample data of data size m × n × k can be obtained.
In practical application, obtaining the corresponding word vectors using the word2vec technique may be implemented as follows: emotion words are extracted from the text source data, for example: beautiful, sad, happy, etc., to form an emotion dictionary based on the text source data; this emotion dictionary is then combined with the synonyms (SynsetTerms) in SentiWordNet and with Adjective Noun Pairs (ANP) into a text emotion dictionary. Here, SentiWordNet is a lexical resource built for sentiment analysis containing about 110,000 records, where the format of one record is: POS (part of speech, covering 4 classes: noun n, adjective a, verb v, and adverb r), ID (lemma number), PosScore (positive sentiment value), NegScore (negative sentiment value), SynsetTerms (synonym lemma names), and Gloss (comments); ANP is a dataset collated and labeled by Borth et al., which is a collection of adjective-noun pairs, for example: beautiful flowers, sad eyes, etc. The text emotion dictionary obtained by combining the emotion words extracted from the text source data, the synonyms in SentiWordNet, and the ANP is in effect a set of words or phrases, for example a set such as { beautiful, sad, happy, good, beautiful flowers, sad eyes }; this set is trained with the word2vec technique to obtain the word vector corresponding to each word in the set, where the word vector dimension of every word is the same.
In practical applications, the resulting text word vectors may be ordered as follows. Assuming that the number of words obtained after the truncation processing or the padding processing is m, the processed text is represented as d = (word1, word2, …, wordm), where wordi denotes the i-th word in the text; the word vectors corresponding to the processed text are then [v1, v2, …, vm], where vm is the word vector corresponding to wordm, and each word vector has dimension n. Assuming that sample data of data size m × n × k needs to be obtained, the word vectors are ordered in k ordering manners to obtain k text vectors. The 1st ordering manner may arrange the word vectors in the forward order of their appearance in the text, giving the text vector vec1 = [v1, v2, …, vm]; the 2nd ordering manner may arrange the words in the reverse order of their appearance in the text, giving vec2 = [vm, vm-1, …, v1]; the k-th ordering manner may place the last k words of the text at the head of the document while leaving the order of the other words unchanged, giving veck = [vm-k+1, …, vm-1, vm, v1, v2, …, vm-k]. As shown in fig. 6, combining the k text vectors obtained in this way yields sample data of data size m × n × k, which can be represented as [vec1, vec2, …, veck].
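The ordering scheme can be sketched in a few lines of Python. The 1st (forward) and 2nd (reverse) orderings and the k-th ordering (last k words moved to the head) follow the description above; how orderings 3 through k−1 are formed is not fixed by the text, so this sketch assumes the same move-last-j-words-to-front rule for each intermediate j. Names are illustrative.

```python
def build_text_sample(word_vectors, k):
    """Produce k orderings of the m word vectors, i.e. k text vectors
    that together form an m x n x k sample.

    Ordering 1: original order. Ordering 2: reversed order.
    Ordering j (3 <= j <= k): last j words moved to the head, the order
    of the remaining words unchanged (assumed generalization).
    """
    m = len(word_vectors)
    sample = [list(word_vectors), list(reversed(word_vectors))]
    for j in range(3, k + 1):
        sample.append(word_vectors[m - j:] + word_vectors[:m - j])
    return sample  # k text vectors, each of shape m x n
```

For example, with m = 4 one-dimensional word vectors [v1, v2, v3, v4] and k = 3, the third text vector is [v2, v3, v4, v1].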
According to the embodiment of the invention, by performing truncation processing or filling processing on the source data, text data and picture data can both be processed into data of the same size, which facilitates training of the model. In addition, the embodiment of the invention also provides a new text vector generation method: different text vectors are obtained by arranging the word vectors in different orders and are combined into multi-dimensional sample data, thereby enriching the features of the text data.
In some embodiments, when the input sample data is text data, a word vector sequence corresponding to the text data is obtained by using a neural network model of the generator; assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator; and carrying out weighted summation on the word vectors in the vector sequence and the allocated attention probability to obtain the text vector corresponding to the text data.
In some embodiments, when the input sample data is picture data, using a neural network model of a generator to obtain a plurality of feature vectors corresponding to the picture data; assigning an attention probability to a plurality of the feature vectors using an attention model of a generator; and carrying out weighted summation on each feature vector and the allocated attention probability to obtain a picture vector corresponding to the picture data.
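The attention step described in the two embodiments above, assigning a probability to each vector and taking the weighted sum, can be sketched as follows. The scoring function here (a plain sum of components) is a stand-in for the learned attention layer of the generator; it is an assumption for illustration only.

```python
import math

def attention_pool(features):
    """Softmax-attention pooling over a list of feature (or word) vectors.

    Scores each vector, normalizes the scores into attention
    probabilities with a softmax, and returns (probabilities,
    weighted sum of the vectors).
    """
    scores = [sum(v) for v in features]        # placeholder scoring function
    mx = max(scores)                           # subtract max for stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    dim = len(features[0])
    pooled = [sum(p * v[d] for p, v in zip(probs, features))
              for d in range(dim)]
    return probs, pooled
```

The same routine covers both cases: for text data the inputs are the word vectors of the sequence, and for picture data they are the feature vectors produced by the convolutional layers.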
It should be noted that fig. 7 shows an optional generator according to an embodiment of the present invention. The generator is a convolutional neural network based on an attention model, and includes a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, and an attention model output layer. The attention model weights the feature vectors output by the second pooling layer of the neural network: when the input sample data is text data, the weighting is applied to the word vectors of the text; when the input sample data is picture data, the weighting is applied to the feature vectors of the picture data.
In general, the ideal result of the attention model is as follows: for text, the weight is increased for feature dimensions related to the final emotion label, such as the emotion words of the text, and decreased for unrelated features (such as the subject of a sentence); for pictures, it is generally considered that the main content of the picture has a greater influence on the emotion label. For example, if the picture is a face picture, the weights in the face region, especially the mouth and eye regions, are increased, and the weights elsewhere are decreased.
According to the embodiment of the invention, the attention model is added into the generator, the weight of the features which have larger influence on the emotion label is increased through the attention model, more appropriate emotion features can be extracted, and the accuracy of emotion classification is further improved.
It should be noted that the emotional features output by the generator are correlated with the type of the input sample data. When the input sample data is text data, the output emotional features are usually related to emotion words, such as good, bad, and possible negative words; when the input sample data is picture data, the output emotional features are usually features such as the pixel values and brightness of the picture. In the feature vector, the output emotional features are reflected as values in different dimensions (columns); they tend to be implicit features that cannot be placed in one-to-one correspondence with particular actual factors. The format of the emotional features is a one-dimensional vector, and the value of each dimension is between 0 and 1.
Step 202: and taking the emotional features output by the generator as the input of the discriminator, taking the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the performance of the corresponding data type according to the input emotional features.
Here, the emotional features output by the generator are input into the discriminator, and the discriminator outputs the label of the corresponding data type according to the input emotional features; for example, a picture corresponds to an output of "1" and text corresponds to an output of "0". The data type labels provided by the user are taken as the standard result of the discriminator output, and the discriminator is trained to discriminate the corresponding data type according to the input emotional features.
In some embodiments, the initialization arbiter comprises a first fully connected layer and a second fully connected layer; constructing a training sample set, wherein the training sample set comprises the emotional characteristics output by the generator and the data type of the sample data; initializing a loss function constructed based on the discriminator input, the discriminator output and the discriminator parameter; and taking the emotional features output by the generator as the input of the discriminator, taking the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the performance of the corresponding data type according to the input emotional features.
It should be noted that, as shown in fig. 8, the discriminator according to the embodiment of the present invention includes two fully connected layers: the first fully connected layer uses a Rectified Linear Unit (ReLU) as its activation function, and the second fully connected layer uses a Softmax function as its activation function; the probability that the data type of the sample data belongs to picture data or text data is obtained through the Softmax function.
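A forward pass through such a two-layer discriminator can be sketched as follows. The weights are assumed to be given (in training they would be learned); the helper names and the tiny dimensions are illustrative only.

```python
import math

def dense(x, weights, bias):
    """Fully connected layer: one row of `weights` per output neuron."""
    return [sum(wi * xi for wi, xi in zip(w, x)) + b
            for w, b in zip(weights, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

def softmax(x):
    mx = max(x)
    exps = [math.exp(v - mx) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

def discriminate(feature, w1, b1, w2, b2):
    """First fully connected layer with ReLU, second with Softmax over
    the two data-type classes (picture vs. text)."""
    hidden = relu(dense(feature, w1, b1))
    return softmax(dense(hidden, w2, b2))
```

The Softmax output is a two-element probability vector; the larger entry indicates whether the emotional feature is judged to come from picture data or text data.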
Step 203: and taking the emotional characteristics output by the generator as the input of an emotional classifier, taking the emotional types corresponding to the emotional characteristics as the output of the emotional classifier, and training the emotional classifier to output the performance corresponding to the emotional types according to the input emotional characteristics.
The emotion classifier outputs the label of the corresponding emotion type according to the input emotion characteristics, for example, an output of "1" for positive emotion and an output of "0" for negative emotion. The emotion type of the sample data provided by the user is taken as the standard result of the emotion classifier output, and the emotion classifier is trained to output the corresponding emotion type according to the input emotion characteristics. Generally, the activation function of the emotion classifier is also a Softmax function, which yields the probability that the emotion type corresponding to the sample data belongs to the positive class or the negative class.
Step 204: and training the performance of the generator for outputting the emotional characteristics of the input data according to the input data by taking the sample data as the input of the generator, taking the emotional characteristics as the output of the generator, taking the data type output by the discriminator and the emotional type output by the emotional classifier as the adjusting parameters of the generator.
In some embodiments, the initialization generator includes an input layer, an intermediate layer, and an output layer; constructing a training sample set, wherein the training sample set comprises sample data and emotional characteristics corresponding to the sample data; initializing a loss function constructed based on an input of the generator, an output of the generator, and model training parameters, the model training parameters including: the data type output by the discriminator and the emotion type output by the emotion classifier; and taking the sample data as input and the emotional characteristics as output, and outputting the performance of the emotional characteristics of the input data according to the input data by adopting a gradient descent algorithm training generator.
Here, the loss function constructed based on the input of the generator, the output of the generator, and the model training parameters includes the loss of the emotion classifier, the loss of the discriminator, and the L2 norm of the parameters, and is expressed as:

Loss = Lclassifier + Ldiscriminator + γ · L2(w)

where Lclassifier is the loss of the emotion classifier, Ldiscriminator is the loss of the discriminator, L2(w) is the L2 norm of the parameter w, and w takes the value that minimizes Lclassifier + Ldiscriminator;

the objective function is:

w* = argminw(Loss)
and stopping training when the loss function value is smaller than a preset loss threshold value, wherein the preset loss threshold value is an empirical value (such as 0.001).
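Numerically, the combined loss above is a straightforward sum. The sketch below assumes the L2 term is the squared L2 norm of the parameters (a common regularization choice; the patent text does not specify squared vs. unsquared), with the two component losses passed in as already-computed scalars.

```python
def total_loss(l_classifier, l_discriminator, weights, gamma=0.01):
    """Loss = Lclassifier + Ldiscriminator + gamma * ||w||^2.

    `weights` is a flat list of model parameters; `gamma` is the
    regularization coefficient (the value here is illustrative).
    """
    l2 = sum(w * w for w in weights)
    return l_classifier + l_discriminator + gamma * l2

def should_stop(loss_value, threshold=0.001):
    """Stop training once the loss falls below the empirical threshold."""
    return loss_value < threshold
```

For example, with classifier loss 1.0, discriminator loss 2.0, parameters [3.0, 4.0] and gamma = 0.1, the total loss is 1.0 + 2.0 + 0.1 × 25 = 5.5.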
In practical application, training the generator with a gradient descent algorithm to output the emotional characteristics of the input data may be implemented using the Adam algorithm. Adam is a first-order optimization algorithm that can replace the traditional stochastic gradient descent process; it iteratively updates the weights of the neural network based on training data.
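For reference, a single Adam update for one scalar weight looks as follows. The hyperparameter values are the commonly used defaults, not values taken from the patent.

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar weight.

    Maintains exponential moving averages of the gradient (m) and of the
    squared gradient (v), applies bias correction for step t >= 1, and
    takes a step scaled by the corrected ratio.
    """
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

In a real training loop the same update is applied element-wise to every weight tensor of the generator, with m, v, and t carried between iterations.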
According to the embodiment of the invention, the data type output by the discriminator and the emotion type output by the emotion classifier are used as training parameters of the generator. This can reduce the difference between picture data and text data so that features can be extracted from the two different data types; the emotional features extracted by the generator then benefit the classification performed by the emotion classifier, improving the accuracy of emotion classification.
As shown in fig. 9, an embodiment of the present invention provides an emotion analysis method based on an emotion model, including:
step 301: and acquiring the emotion model obtained by training.
The emotion model comprises a generator and an emotion classifier; the generator is obtained by training by taking the sample data as the input of the generator and the emotional characteristics as the output of the generator; the emotion classifier is obtained by training with the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier.
The emotion model in the embodiment of the present invention is obtained by training according to the above-described emotion model training method.
Step 302: and inputting the data to be analyzed to a generator obtained by training, and outputting the emotional characteristics corresponding to the data to be analyzed.
Step 303: and inputting the emotional characteristics output by the generator to the trained emotional classifier, and outputting the emotional types corresponding to the emotional characteristics.
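Steps 302 and 303 together form the inference pipeline: the trained generator extracts the emotional features, and the trained emotion classifier maps them to an emotion type. A minimal sketch, with toy stand-ins for the two trained models passed in as plain callables:

```python
def analyze(data, generator, classifier):
    """Emotion analysis at inference time: generator -> feature vector,
    classifier -> emotion type label."""
    feature = generator(data)          # step 302
    return classifier(feature)         # step 303

# toy stand-ins (NOT the trained models): the "generator" emits the text
# length as a 1-d feature, the "classifier" thresholds it into 1/0
toy_generator = lambda d: [len(d)]
toy_classifier = lambda f: 1 if f[0] > 3 else 0
label = analyze("good", toy_generator, toy_classifier)
```

In practice `generator` and `classifier` would be the networks obtained from the training method above.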
An embodiment of the present invention further provides a training apparatus for an emotion model, where the emotion model includes a generator and an emotion classifier, as shown in fig. 10, fig. 10 is a schematic diagram of a composition structure of the training apparatus for an emotion model provided in an embodiment of the present invention, and includes:
A first output unit 401, configured to input sample data to the generator, and output an emotional feature corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
a first training unit 402, configured to use the emotional features output by the generator as input of a discriminator, use the data type of the sample data as output of the discriminator, and train the discriminator to discriminate performance of a corresponding data type according to the input emotional features;
a second training unit 403, configured to use the emotion features output by the generator as input of an emotion classifier, use emotion types corresponding to the emotion features as output of the emotion classifier, and train the emotion classifier to output performance corresponding to the emotion types according to the input emotion features;
and a third training unit 404, configured to train the performance of the generator to output the emotional feature of the input data according to the input data, with the sample data as the input of the generator, the emotional feature as the output of the generator, and the data type output by the discriminator and the emotional type output by the emotional classifier as the adjustment parameter of the generator.
In some embodiments, the apparatus for training an emotion model further includes: the preprocessing unit is used for acquiring a vector corresponding to source data, wherein the source data comprises picture source data and/or text source data, and performing truncation processing or filling processing on the vector corresponding to the source data to obtain sample data with a preset data size.
In some embodiments, the first output unit 401 is specifically configured to, when the input sample data is picture data, obtain a plurality of feature vectors corresponding to the picture data by using a neural network model of a generator; assigning an attention probability to the plurality of feature vectors using an attention model of the generator; carrying out weighted summation on each feature vector and the distributed attention probability to obtain a picture vector corresponding to the picture data; when the input sample data is text data, obtaining a word vector sequence corresponding to the text data by using a neural network model of a generator; assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator; and carrying out weighted summation on the word vectors in the vector sequence and the allocated attention probability to obtain the text vector corresponding to the text data.
In some embodiments, the first training unit 402 is specifically configured to initialize a first fully-connected layer and a second fully-connected layer included in the arbiter; constructing a training sample set, wherein the training sample set comprises the emotional characteristics output by the generator and the data type of the sample data; initializing a loss function constructed based on the discriminator input, the discriminator output and the discriminator parameter; and taking the emotional features output by the generator as the input of the discriminator, taking the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the performance of the corresponding data type according to the input emotional features.
In some embodiments, the third training unit 404 is specifically configured to initialize the input layer, the middle layer, and the output layer included in the generator; constructing a training sample set, wherein the training sample set comprises sample data and emotional characteristics corresponding to the sample data; initializing a loss function constructed based on an input of a generator, an output of the generator, and model training parameters, the model training parameters comprising: the data type output by the discriminator and the emotion type output by the emotion classifier; and taking the sample data as input and the emotional characteristics as output, and adopting a gradient descent algorithm to train the performance of the generator for outputting the emotional characteristics of the input data according to the input data.
An embodiment of the present invention further provides an emotion analysis apparatus based on an emotion model, as shown in fig. 11, where fig. 11 is a schematic diagram of a composition structure of the emotion analysis apparatus based on an emotion model provided in an embodiment of the present invention, and the schematic diagram includes:
a first obtaining unit 501, configured to obtain an emotion model obtained through training; the emotion model comprises a generator and an emotion classifier; the generator is obtained by training by taking the sample data as the input of the generator and the emotional characteristics as the output of the generator; the emotion classifier is obtained by training with the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier;
A first input unit 502 for inputting data to be analyzed to a trained generator;
a second output unit 503, configured to output an emotional feature corresponding to the data to be analyzed;
a second input unit 504, configured to input the emotion features output by the generator to the trained emotion classifier;
and a third output unit 505, configured to output an emotion type corresponding to the emotion feature.
Embodiments of the present invention provide a storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the emotion model training method, for example, the method shown in fig. 3.
In some embodiments, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the embodiment of the present invention provides a new text vector generation method that can enrich the features of a text, and converts text data and picture data into data of the same size by means of truncation processing or filling processing, which is beneficial to model input. In addition, the embodiment of the invention provides a training mode based on a generative adversarial network formed by the generator and the discriminator: the generator and the discriminator are adjusted through the idea of a minimax game, and the difference between pictures and text is reduced, so that the model is suitable for the two different data types. Furthermore, the attention model is added to the generator so that the generator focuses on the features that strongly influence the emotion types, which improves the effectiveness of emotion feature extraction and further improves the accuracy of emotion classification.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (15)

1. A method for training an emotion model, wherein the emotion model comprises a generator and an emotion classifier, and the method comprises the following steps:
inputting sample data to the generator, and outputting emotional characteristics corresponding to the sample data; wherein the sample data comprises text data and/or picture data;
taking the emotional features output by the generator as the input of a discriminator, taking the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the performance of the corresponding data type according to the input emotional features;
taking the emotional features output by the generator as the input of the emotional classifier, taking the emotional types corresponding to the emotional features as the output of the emotional classifier, and training the emotional classifier to output the performance corresponding to the emotional types according to the input emotional features;
and training the performance of the generator for outputting the emotional characteristics of the input data according to the input data by taking the sample data as the input of the generator, taking the emotional characteristics as the output of the generator, taking the data type output by the discriminator and the emotional type output by the emotion classifier as the adjusting parameters of the generator.
2. The method of claim 1, wherein before inputting sample data to the producer, further comprising:
obtaining a vector corresponding to source data, wherein the source data comprises picture source data and/or text source data;
and performing truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data with the preset data size.
3. The method of claim 1, wherein inputting sample data to the generator and outputting emotional features corresponding to the sample data comprises:
when the input sample data is text data, obtaining a word vector sequence corresponding to the text data by using a neural network model of the generator;
assigning an attention probability to a word vector in the sequence of word vectors using an attention model of the generator;
and carrying out weighted summation on the word vectors in the vector sequence and the allocated attention probability to obtain the text vector corresponding to the text data.
4. The method of claim 1, wherein inputting sample data to the generator and outputting emotional features corresponding to the sample data comprises:
When the input sample data is picture data, obtaining a plurality of characteristic vectors corresponding to the picture data by using a neural network model of the generator;
assigning an attention probability to a plurality of the feature vectors using an attention model of the generator;
and carrying out weighted summation on each feature vector and the allocated attention probability to obtain a picture vector corresponding to the picture data.
5. The method of claim 1, wherein the training the discriminator to discriminate the performance of the corresponding data type according to the input emotional features with the emotional features output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator comprises:
initializing a first full connection layer and a second full connection layer which are included by the discriminator;
constructing a training sample set, wherein the training sample set comprises the emotional features output by the generator and the data types of the sample data;
initializing a loss function constructed based on the discriminator input, the discriminator output and the discriminator parameter;
and taking the emotional features output by the generator as the input of a discriminator, taking the data type of the sample data as the output of the discriminator, and training the discriminator to discriminate the performance of the corresponding data type according to the input emotional features.
6. The method of claim 1, wherein training the performance of the generator to output the emotion feature of the input data according to the input data by taking the sample data as an input of the generator, the emotion feature as an output of the generator, and the data type output by the arbiter and the emotion type output by the emotion classifier as adjustment parameters of the generator comprises:
initializing an input layer, an intermediate layer and an output layer which are included by the generator;
constructing a training sample set, wherein the training sample set comprises sample data and emotional characteristics corresponding to the sample data;
initializing a loss function constructed based on an input of the generator, an output of the generator, and model training parameters, the model training parameters including: the data type output by the discriminator and the emotion type output by the emotion classifier;
and taking the sample data as input and the emotional characteristic as output, and training the generator to output the performance of the emotional characteristic of the input data according to the input data by adopting a gradient descent algorithm.
7. An emotion analysis method based on an emotion model, characterized by comprising:
Acquiring an emotion model obtained by training; wherein the content of the first and second substances,
the emotion model comprises a generator and an emotion classifier;
the generator is obtained by training by taking sample data as input of the generator and emotional characteristics as output of the generator;
the emotion classifier is obtained by training with the emotion characteristics output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotion characteristics as the output of the emotion classifier;
inputting data to be analyzed to the generator obtained by training, and outputting emotional characteristics corresponding to the data to be analyzed;
and inputting the emotional characteristics output by the generator to the trained emotional classifier, and outputting the emotional types corresponding to the emotional characteristics.
8. An apparatus for training an emotion model, wherein the emotion model comprises a generator and an emotion classifier, the apparatus comprising:
a first output unit, configured to input sample data to the generator and output emotional features corresponding to the sample data, wherein the sample data comprises text data and/or picture data;
a first training unit, configured to take the emotional features output by the generator as the input of a discriminator and the data type of the sample data as the output of the discriminator, and train the discriminator's performance in discriminating the corresponding data type from the input emotional features;
a second training unit, configured to take the emotional features output by the generator as the input of the emotion classifier and the emotion type corresponding to the emotional features as the output of the emotion classifier, and train the emotion classifier's performance in outputting the corresponding emotion type from the input emotional features;
and a third training unit, configured to take the sample data as the input of the generator and the emotional features as the output of the generator, take the data type output by the discriminator and the emotion type output by the emotion classifier as adjusting parameters of the generator, and train the generator's performance in outputting the emotional features of the input data.
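The three training units of claim 8 map onto one iteration per sample: the discriminator and emotion classifier are fitted on the generator's current features, and their outputs then adjust the generator. A structural sketch in which the three update callables are assumptions (the claim does not specify their implementation):

```python
def train_step(sample, data_type, emotion, generator,
               update_discriminator, update_classifier, update_generator):
    """One iteration mirroring the first, second, and third training units."""
    features = generator(sample)                       # first output unit
    d_out = update_discriminator(features, data_type)  # first training unit
    c_out = update_classifier(features, emotion)       # second training unit
    # Third training unit: D and C outputs act as adjusting parameters for G.
    update_generator(sample, features, d_out, c_out)
    return d_out, c_out
```

Iterating `train_step` over the sample set alternates the adversarial (discriminator) and supervised (classifier) signals that shape the generator.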
9. The apparatus of claim 8, further comprising:
a preprocessing unit, configured to acquire a vector corresponding to source data, wherein the source data comprises picture source data and/or text source data, and perform truncation processing or filling processing on the vector corresponding to the source data to obtain the sample data of a preset data size.
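The preprocessing unit's truncation/filling step can be sketched directly; the pad value of 0 is an assumption, as the claim only fixes the preset output size:

```python
def to_fixed_size(vector, size, pad_value=0.0):
    """Truncate or fill a source-data vector to the preset data size."""
    if len(vector) >= size:
        return vector[:size]                            # truncation processing
    return vector + [pad_value] * (size - len(vector))  # filling processing
```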
10. The apparatus of claim 8, wherein
the first output unit is specifically configured to: when the input sample data is text data, obtain a word vector sequence corresponding to the text data by using a neural network model of the generator; assign an attention probability to each word vector in the word vector sequence by using an attention model of the generator; and perform weighted summation on the word vectors and the assigned attention probabilities to obtain a text vector corresponding to the text data;
and when the input sample data is picture data, obtain a plurality of feature vectors corresponding to the picture data by using the neural network model of the generator; assign an attention probability to each of the plurality of feature vectors by using the attention model of the generator; and perform weighted summation on the feature vectors and the assigned attention probabilities to obtain a picture vector corresponding to the picture data.
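The attention step of claim 10 is the same for word vectors and picture feature vectors: turn attention scores into probabilities, then take the probability-weighted sum. A sketch assuming a softmax over raw scores (the claim does not fix how the probabilities are computed):

```python
import math

def attention_pool(vectors, scores):
    """Weighted summation of vectors by softmax-normalized attention scores."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]     # attention probabilities
    dim = len(vectors[0])
    # Weighted sum across vectors, dimension by dimension.
    return [sum(p * v[i] for p, v in zip(probs, vectors)) for i in range(dim)]
```

Equal scores reduce to a plain average; a dominant score makes the pooled vector approach that vector, which is how attention emphasizes emotion-bearing words or image regions.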
11. The apparatus of claim 8, wherein
the first training unit is specifically configured to: initialize a first fully connected layer and a second fully connected layer included in the discriminator; construct a training sample set comprising the emotional features output by the generator and the data types of the sample data; initialize a loss function constructed based on the input of the discriminator, the output of the discriminator, and the discriminator parameters; and, taking the emotional features output by the generator as the input of the discriminator and the data type of the sample data as the output of the discriminator, train the discriminator's performance in discriminating the corresponding data type from the input emotional features.
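Claim 11's discriminator is two fully connected layers over the emotional features. A minimal forward pass; the layer widths, the ReLU/sigmoid activations, and the uniform initialization are illustrative assumptions:

```python
import math
import random

def init_fc(n_in, n_out, rng):
    """Initialize one fully connected layer (weights and zero biases)."""
    weights = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
               for _ in range(n_out)]
    return weights, [0.0] * n_out

def fc(x, weights, biases, act):
    """Apply one fully connected layer followed by an activation."""
    return [act(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def discriminator(features, params):
    """Two-FC-layer discriminator: emotional features -> P(data type is text)."""
    (w1, b1), (w2, b2) = params
    hidden = fc(features, w1, b1, lambda z: max(0.0, z))             # first FC
    return fc(hidden, w2, b2, lambda z: 1 / (1 + math.exp(-z)))[0]   # second FC
```

Training would then fit `params` with the loss of claim 11, labels being the data type (text or picture) of each sample.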
12. The apparatus of claim 8, wherein
the third training unit is specifically configured to: initialize an input layer, an intermediate layer, and an output layer included in the generator; construct a training sample set comprising the sample data and the emotional features corresponding to the sample data; initialize a loss function constructed based on the input of the generator, the output of the generator, and model training parameters, the model training parameters comprising the data type output by the discriminator and the emotion type output by the emotion classifier; and, taking the sample data as the input and the emotional features as the output, train, by a gradient descent algorithm, the generator's performance in outputting the emotional features of the input data.
13. An emotion analysis apparatus based on an emotion model, the apparatus comprising:
a first acquisition unit, configured to acquire a trained emotion model, wherein:
the emotion model comprises a generator and an emotion classifier;
the generator is trained by taking sample data as the input of the generator and emotional features as the output of the generator;
the emotion classifier is trained by taking the emotional features output by the generator as the input of the emotion classifier and the emotion types corresponding to the emotional features as the output of the emotion classifier;
a first input unit, configured to input data to be analyzed to the trained generator;
a second output unit, configured to output the emotional features corresponding to the data to be analyzed;
a second input unit, configured to input the emotional features output by the generator to the trained emotion classifier;
and a third output unit, configured to output the emotion type corresponding to the emotional features.
14. An apparatus for training an emotion model, the apparatus comprising: a memory and a processor, wherein:
the memory is configured to store a computer program operable on the processor;
and the processor is configured to perform the method for training an emotion model according to any one of claims 1 to 6 when executing the computer program.
15. A storage medium storing a computer program which, when executed by at least one processor, implements the method for training an emotion model according to any one of claims 1 to 6.
CN201910436077.6A 2019-05-23 2019-05-23 Emotion model training method, emotion analysis device and storage medium Active CN111985243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436077.6A CN111985243B (en) 2019-05-23 2019-05-23 Emotion model training method, emotion analysis device and storage medium


Publications (2)

Publication Number Publication Date
CN111985243A true CN111985243A (en) 2020-11-24
CN111985243B CN111985243B (en) 2023-09-08

Family

ID=73437203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436077.6A Active CN111985243B (en) 2019-05-23 2019-05-23 Emotion model training method, emotion analysis device and storage medium

Country Status (1)

Country Link
CN (1) CN111985243B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112885432A (en) * 2021-02-06 2021-06-01 北京色彩情绪健康科技发展有限公司 Emotion analysis and management system
CN113010675A (en) * 2021-03-12 2021-06-22 出门问问信息科技有限公司 Method and device for classifying text information based on GAN and storage medium
CN115982473A (en) * 2023-03-21 2023-04-18 环球数科集团有限公司 AIGC-based public opinion analysis arrangement system
WO2023125985A1 (en) * 2021-12-31 2023-07-06 华为技术有限公司 Data processing method and apparatus for model

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth
CN107818084A (en) * 2017-10-11 2018-03-20 北京众荟信息技术股份有限公司 A kind of sentiment analysis method for merging comment figure
WO2018060993A1 (en) * 2016-09-27 2018-04-05 Faception Ltd. Method and system for personality-weighted emotion analysis
CN108108849A (en) * 2017-12-31 2018-06-01 厦门大学 A kind of microblog emotional Forecasting Methodology based on Weakly supervised multi-modal deep learning
US20180165554A1 (en) * 2016-12-09 2018-06-14 The Research Foundation For The State University Of New York Semisupervised autoencoder for sentiment analysis
CN108388544A (en) * 2018-02-10 2018-08-10 桂林电子科技大学 A kind of picture and text fusion microblog emotional analysis method based on deep learning
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A kind of multi-modal emotion identification method of picture and text based on deep learning
CN108959322A (en) * 2017-05-25 2018-12-07 富士通株式会社 Information processing method and device based on text generation image
CN109117482A (en) * 2018-09-17 2019-01-01 武汉大学 A kind of confrontation sample generating method towards the detection of Chinese text emotion tendency
CN109213876A (en) * 2018-08-02 2019-01-15 宁夏大学 Based on the cross-module state search method for generating confrontation network
CN109344879A (en) * 2018-09-07 2019-02-15 华南理工大学 A kind of decomposition convolution method fighting network model based on text-image
CN109376775A (en) * 2018-10-11 2019-02-22 南开大学 The multi-modal sentiment analysis method of online news
CN109753566A (en) * 2019-01-09 2019-05-14 大连民族大学 The model training method of cross-cutting sentiment analysis based on convolutional neural networks
CN109783798A (en) * 2018-12-12 2019-05-21 平安科技(深圳)有限公司 Method, apparatus, terminal and the storage medium of text information addition picture

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JIUXIANG GU, JIANFEI CAI, SHAFIQ R. JOTY, LI NIU, GANG WANG: "Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval With Generative Models", PROCEEDINGS OF THE IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), pages 7181 - 7189 *
W. -C. CHEN, C. -W. CHEN AND M. -C. HU: "Syncgan: Synchronize the Latent Spaces of Cross-Modal Generative Adversarial Networks", 2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), pages 1 - 6 *
YUXIN PENG, JINWEI QI, YUXIN YUAN: "CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning", ARXIV, pages 1 - 13 *
彭小江: "A Survey of Affective Computing Based on Multimodal Information", Journal of Hengyang Normal University (衡阳师范学院学报), no. 03, pages 37 - 42 *
李志义, 黄子风, 许晓绵: "A Review of Cross-modal Retrieval Models and Feature Extraction Based on Representation Learning", Journal of the China Society for Scientific and Technical Information (情报学报), vol. 37, no. 4, pages 422 - 435 *
陈师哲; 王帅; 金琴: "Multimodal Emotion Recognition in Multi-cultural Scenarios", Journal of Software (软件学报), no. 04, pages 168 - 178 *


Also Published As

Publication number Publication date
CN111985243B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
KR102577514B1 (en) Method, apparatus for text generation, device and storage medium
CN111191078B (en) Video information processing method and device based on video information processing model
CN112131350B (en) Text label determining method, device, terminal and readable storage medium
CN111985243B (en) Emotion model training method, emotion analysis device and storage medium
CN113011186B (en) Named entity recognition method, named entity recognition device, named entity recognition equipment and computer readable storage medium
AU2016256764A1 (en) Semantic natural language vector space for image captioning
US20230004732A1 (en) Systems and Methods for Intelligent Routing of Source Content for Translation Services
CN111739520B (en) Speech recognition model training method, speech recognition method and device
CN112732916A (en) BERT-based multi-feature fusion fuzzy text classification model
Huang et al. Character-level convolutional network for text classification applied to chinese corpus
CN112347787A (en) Method, device and equipment for classifying aspect level emotion and readable storage medium
CN112287672A (en) Text intention recognition method and device, electronic equipment and storage medium
Banik et al. Gru based named entity recognition system for bangla online newspapers
CN114385806A (en) Text summarization method and system based on deep learning
Patel et al. Dynamic lexicon generation for natural scene images
CN112507124A (en) Chapter-level event causal relationship extraction method based on graph model
Chaudhuri Visual and text sentiment analysis through hierarchical deep learning networks
CN114722832A (en) Abstract extraction method, device, equipment and storage medium
CN113486143A (en) User portrait generation method based on multi-level text representation and model fusion
Dilawari et al. Neural attention model for abstractive text summarization using linguistic feature space
Garg et al. Textual description generation for visual content using neural networks
CN111566665B (en) Apparatus and method for applying image coding recognition in natural language processing
Amrutha et al. Effortless and beneficial processing of natural languages using transformers
CN110377915B (en) Text emotion analysis method and device, storage medium and equipment
Chen et al. Emotion recognition in videos via fusing multimodal features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant