CN113326695B - Emotion polarity analysis method based on transfer learning - Google Patents

Emotion polarity analysis method based on transfer learning

Info

Publication number
CN113326695B
Authority
CN
China
Prior art keywords
model
emotion polarity
sentence
formula
polarity analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110455888.8A
Other languages
Chinese (zh)
Other versions
CN113326695A (en)
Inventor
杨鹏
任炳先
周华健
于晓潭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110455888.8A priority Critical patent/CN113326695B/en
Publication of CN113326695A publication Critical patent/CN113326695A/en
Application granted granted Critical
Publication of CN113326695B publication Critical patent/CN113326695B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/237 Lexical tools
    • G06F 40/247 Thesauruses; Synonyms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses an emotion polarity analysis method based on transfer learning. A text ordering model is first constructed and trained. An emotion polarity analysis model is then constructed, and the relevant parameters obtained by training the text ordering model are migrated to the corresponding positions in the emotion polarity analysis model. Finally, the migrated model continues training on the emotion polarity analysis data set. By introducing domain prior knowledge into the emotion polarity analysis model through transfer learning, the invention improves the domain adaptability of the model, helps obtain higher-quality sentence features, and thereby improves the classification accuracy of the model.

Description

Emotion polarity analysis method based on transfer learning
Technical Field
The invention relates to an emotion polarity analysis method based on transfer learning, and belongs to the fields of the Internet and natural language processing.
Background
With the continuous development of mobile Internet technology, major news media and the general public alike tend to publish their views on and attitudes toward real-world events in social networks. Emotion polarity analysis automatically acquires the emotional tendency or emotion category of a text using the theory of natural language processing, and has great practical value.
At present, research on emotion polarity analysis at home and abroad has produced abundant results that are of reference significance for the research work of the invention. Existing emotion polarity analysis methods mainly fall into 3 classes: methods based on emotion dictionaries, methods based on machine learning, and methods based on deep learning. Emotion-dictionary-based methods introduce expert knowledge into the emotion value calculation and are suitable for scenarios lacking large-scale corpora, but the dictionary requires continuous expansion and ports poorly to new domains. Compared with emotion-dictionary-based methods, machine-learning-based methods offer simple modeling and better extensibility and portability; however, they require a high-quality labeled data set and thus incur labeling cost. Deep-learning-based methods train the classifier with a neural network model. Compared with emotion dictionaries and classical machine learning, deep learning models have stronger expressive power and achieve better classification metrics. In recent years, with the development of pre-trained language models, deep-learning-based approaches have achieved still better results. However, in specific application scenarios, existing deep learning models still have shortcomings. First, the lack of sentence-level pre-training tasks in existing language models leaves room for improvement in their logical perception and semantic expression capabilities. In addition, for emotion analysis of social comments, the comment content is brief and informal, so sentence features are sparse and noisy, and the classification robustness of the emotion polarity analysis model is insufficient.
Aiming at the insufficient logical perception, semantic expression capability, and classification robustness of current deep learning models, the invention provides an emotion polarity analysis method based on transfer learning. On the one hand, the method uses transfer learning: the logical perception and semantic expression capability of the model is first trained on a text ordering task, and the relevant model parameters are then migrated into the emotion polarity analysis model. Through transfer learning, the model obtains domain prior knowledge and high-quality sentence features, improving its classification accuracy. On the other hand, after the emotion polarity analysis model extracts sentence features, it further reduces feature noise in combination with an attention mechanism, which improves the classification robustness of the model.
Disclosure of Invention
Aiming at the problems and the defects in the existing emotion polarity analysis technology, the invention provides an emotion polarity analysis method based on transfer learning, which introduces field priori knowledge into an emotion polarity analysis model based on transfer learning and can improve the classification accuracy of the model. Meanwhile, the invention reduces noise interference for the emotion polarity analysis model based on the attention mechanism, and can improve the classification robustness of the model.
In order to achieve the above object, the technical scheme of the invention is as follows. In the emotion polarity analysis method based on transfer learning, the positions of characters or words in comment texts are first disturbed according to a certain proportion. A text ordering model is then constructed and trained, with the disturbed sentences as input and the sentences with normal word order as output. Next, an emotion polarity analysis model is constructed, and the relevant parameters of the text ordering model are migrated to the corresponding positions in the emotion polarity analysis model. Finally, the migrated emotion polarity analysis model continues training. By introducing prior knowledge into the emotion polarity analysis model through transfer learning, the method improves the domain adaptability of the model, obtains higher-quality sentence features, and improves the classification accuracy of the model.
The emotion polarity analysis method based on transfer learning mainly comprises 4 steps, and specifically comprises the following steps:
And 1, constructing a sentence pair data set. The word position of each sentence in the emotion polarity analysis data set is disturbed according to a set proportion (the proportion size is determined according to the comparison experiment result), meanwhile, the sentences before disturbance are reserved, and each group of disturbed sentences and sentences with normal word order form one piece of training data in the new data set.
And 2, training a text ordering model. Constructing a text ordering model based on a seq2seq mode, firstly taking a disturbed sentence as a model input, and extracting sentence characteristics by using an encoder; then decoding word by word, predicting and outputting words according to the decoding characteristics of the current time step; finally, comparing the model output with characters at positions corresponding to the normal language order, and training model parameters based on the cross entropy loss function.
And 3, parameter migration. Firstly, constructing an emotion polarity analysis model, using the same coding structure as that of the text ordering model, and then migrating coding parameters and word vector parameters of the text ordering model into the emotion polarity analysis model.
And 4, training an emotion polarity analysis model. First, comment text from the emotion polarity analysis data set is input and sentence features are extracted with the encoder; then a convolutional neural network, a recurrent neural network, and an attention mechanism further extract the local features, global features, and final noise-reduced features of the sentence; finally, the features are classified.
Compared with the prior art, the invention has the following technical effects:
1. Based on transfer learning, the method first constructs a text ordering model and then migrates the parameters learned by the text ordering model to the emotion polarity analysis model. This effectively remedies the lack of sentence-level pre-training tasks in current language models, improves the logical perception and semantic expression capability of the model, and yields higher-quality sentence feature representations. In the embodiment, the model after transfer learning improves the classification accuracy metric by 3.7%, verifying the effectiveness of the scheme.
2. The emotion polarity analysis model first uses a CNN (convolutional neural network) to extract local sentence features, then uses a BiGRU (bidirectional gated recurrent unit) to extract global sentence features, and finally uses an attention mechanism to reduce feature noise, reducing the interference of the informal nature of social text on model classification. Ablation experiments in the embodiment show that adding the attention mechanism achieves higher classification robustness than classifying the global features directly.
Drawings
FIG. 1 is an overall frame diagram of an embodiment of the present invention.
Fig. 2 is a diagram of a text ranking model framework according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an emotion polarity analysis model according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated below in conjunction with specific embodiments in order to enhance the understanding and appreciation of the invention.
Example 1: the overall workflow of the emotion polarity analysis method based on transfer learning is shown in Fig. 1. A text ordering model is first constructed as shown in Fig. 2; this model remedies the lack of a sentence-level pre-training task in current language models and improves the logical perception and semantic expression capability of the model. An emotion polarity analysis model is then constructed as shown in Fig. 3, and the relevant parameters obtained by training the text ordering model are migrated to the corresponding positions in the emotion polarity analysis model. Finally, the migrated model continues training on the emotion polarity analysis data set. The specific implementation steps are as follows:
And 1, constructing a sentence pair data set. The sentences in the emotion polarity analysis data set are disturbed according to a set proportion (set to 25% in this embodiment after comparison experiments), the sentences before disturbance are retained, and each pair consisting of a disturbed sentence and its normal-word-order counterpart serves as one piece of training data in the new data set. For example, a sentence in normal word order (in the original Chinese) means "all the medical staff have worked hard" (辛苦, "hard-working"); in the disturbed sentence the two characters 辛 ("pungent") and 苦 ("bitter") are moved apart, so the context of both characters changes.
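As a rough illustration of step 1, the sketch below perturbs a fixed fraction of character positions in a sentence and pairs the result with the original. The 25% ratio follows the embodiment, but the function name and the exact shuffling procedure are illustrative assumptions, not the patent's precise algorithm.

```python
import random

def perturb(tokens, ratio=0.25, seed=0):
    """Shuffle a fraction of token positions (25% in the embodiment),
    leaving the original sentence available as the training target."""
    rng = random.Random(seed)
    n = len(tokens)
    k = max(2, int(n * ratio))           # need at least 2 positions to move
    idx = rng.sample(range(n), k)        # positions chosen for disturbance
    shuffled = idx[:]
    while shuffled == idx:               # ensure the order actually changes
        rng.shuffle(shuffled)
    out = tokens[:]
    for src, dst in zip(idx, shuffled):  # permute the chosen positions
        out[dst] = tokens[src]
    return out

sentence = list("所有医护人员都辛苦了")
pair = (perturb(sentence), sentence)     # (disturbed input, normal-order target)
```

Each such pair becomes one training example for the text ordering model of step 2.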
And 2, training a text ordering model. A text ordering model is constructed in the seq2seq mode: the disturbed sentence is first taken as model input and sentence features are extracted with the encoder; decoding then proceeds word by word, predicting the output word from the decoding features of the current time step; finally, the model output is compared with the normal word order and the model parameters are trained with the logarithmic (cross-entropy) loss function. This step is divided into 3 sub-steps, implemented as follows.
Substep 2-1, sentence coding. In this embodiment, Bert is used to extract text coding features; for convenience of description, the meanings of the model-related symbols are summarized in Table 1. First, a text sequence X = (token_1, token_2, …, token_m) of length m is input, the embedding Emb_i of each token is extracted according to its index id_i in vocab, and e_i is calculated; s denotes the text matrix formed by the whole sentence. The calculations of e_i and s are shown in formulas (1) and (2), where position denotes the position encoding and segment the segment encoding.

TABLE 1 Model-related symbol meanings

token: each word in the data set
n: total number of tokens in the data set
h: word vector dimension of a token
Emb: embedding matrix, shape n × h
vocab: token dictionary, mapping token to index id

e_i = Bert(Emb_i + segment_i + position_i) (1)
s = (e_1, e_2, …, e_m) (2)
s is then fed into a coding model containing 12 Transformer layers to extract the final encoded output S. LN in formula (3) denotes the layer normalization operation and MSA the multi-head self-attention operation. Taking the z-th layer as an example, the output s_{z-1} of the previous layer is first processed by MSA, followed by the residual connection and LN to obtain ŝ_z, as shown in formula (3); finally, the FFN processes ŝ_z, and the layer output s_z is obtained by again combining the residual and LN, as shown in formula (4). The calculation of the FFN is shown in formula (5), where W_1, b_1, W_2, b_2 are learnable model parameters.

ŝ_z = LN(MSA(s_{z-1}) + s_{z-1}) (3)
s_z = LN(FFN(ŝ_z) + ŝ_z) (4)
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2 (5)
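The residual + LN + FFN wiring of formulas (3)–(5) can be sketched numerically. The NumPy toy below uses a single attention head in place of the 12-layer multi-head MSA, with random weights; all parameter shapes here are assumptions for illustration, not Bert's actual configuration.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """LN: normalize each row to zero mean and unit variance."""
    mu = x.mean(-1, keepdims=True)
    sd = x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def self_attention(s, Wq, Wk, Wv):
    """Single-head stand-in for MSA: scaled dot-product attention."""
    q, k, v = s @ Wq, s @ Wk, s @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)              # softmax over keys
    return w @ v

def ffn(x, W1, b1, W2, b2):
    """Formula (5): FFN(x) = max(0, xW1 + b1)W2 + b2."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def encoder_layer(s, params):
    s_hat = layer_norm(self_attention(s, *params["attn"]) + s)  # formula (3)
    return layer_norm(ffn(s_hat, *params["ffn"]) + s_hat)       # formula (4)

rng = np.random.default_rng(0)
m, h = 6, 16                                       # toy sentence length / width
params = {"attn": [rng.normal(size=(h, h)) * 0.1 for _ in range(3)],
          "ffn": [rng.normal(size=(h, 4 * h)) * 0.1, np.zeros(4 * h),
                  rng.normal(size=(4 * h, h)) * 0.1, np.zeros(h)]}
S = encoder_layer(rng.normal(size=(m, h)), params)
```

Stacking twelve such layers (each with its own parameters) yields the encoder described in the text.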
And 2-2, sentence decoding. The GRU is used as the basic unit of the decoding network; the decoding process is shown in formulas (6) and (7). In formula (6), d_{t-1} denotes the input and h_{t-1} the hidden output of the previous step; d_0 corresponds to CLS (a special character in vocab marking the start of a sentence). Formula (7) describes the hidden-layer initialization of the decoding process: the encoded output S is first average-pooled and then passed through a linear layer to obtain the initial hidden input h_0, where W_s, b_s are learnable model parameters.

h_t = GRU(d_{t-1}, h_{t-1}) (6)
h_0 = W_s · avg(S) + b_s (7)
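Formulas (6) and (7) amount to mean-pooling the encoder output for h_0 and then stepping a GRU cell. The NumPy sketch below uses a simplified bias-free GRU cell with one weight matrix per gate; the shapes and the stand-in CLS embedding are assumptions, not the patent's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(d_prev, h_prev, P):
    """One decoding step, formula (6): h_t = GRU(d_{t-1}, h_{t-1})."""
    x = np.concatenate([d_prev, h_prev])
    z = sigmoid(x @ P["Wz"])                       # update gate
    r = sigmoid(x @ P["Wr"])                       # reset gate
    h_tilde = np.tanh(np.concatenate([d_prev, r * h_prev]) @ P["Wh"])
    return (1 - z) * h_prev + z * h_tilde

def init_hidden(S, Ws, bs):
    """Formula (7): average-pool the encoder output, then a linear layer."""
    return S.mean(axis=0) @ Ws + bs

rng = np.random.default_rng(1)
h, m = 16, 6
S = rng.normal(size=(m, h))                        # encoder output
P = {k: rng.normal(size=(2 * h, h)) * 0.1 for k in ("Wz", "Wr", "Wh")}
h0 = init_hidden(S, rng.normal(size=(h, h)) * 0.1, np.zeros(h))
d0 = rng.normal(size=h)                            # stands in for the CLS embedding
h1 = gru_step(d0, h0, P)
```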
And 2-3, prediction output. The decoder hidden output h_t of each step serves as the query, and the encoded output S as the keys and values; the context vector context is calculated with dot-product attention as shown in formula (8). Then context and h_t are concatenated as the final feature of the current decoding step, and the concatenated feature is processed by a linear transformation and the softmax function to obtain the model's predicted probability distribution p, as shown in formula (9), where W_p, b_p are learnable model parameters. Finally, the model's logarithmic loss is calculated from the predicted value p and the true value y, as shown in formula (10), where m here denotes the size of the dictionary vocab.

context = Attention(h_t, S, S) (8)
p = softmax(W_p[context, h_t] + b_p) (9)
loss = −Σ_{i=1}^{m} y_i · log p_i (10)
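The prediction step of formulas (8)–(10) can be sketched as follows: dot-product attention over the encoder output, concatenation with the decoder state, a linear + softmax layer, and the cross-entropy (logarithmic) loss against the true token. Vocabulary size and weight shapes are toy assumptions.

```python
import numpy as np

def attention(h_t, S):
    """Formula (8): h_t is the query; S supplies both keys and values."""
    scores = h_t @ S.T
    w = np.exp(scores - scores.max())
    w = w / w.sum()                               # softmax attention weights
    return w @ S                                  # context vector

def predict(h_t, S, Wp, bp):
    """Formula (9): concatenate [context, h_t], then linear + softmax."""
    feat = np.concatenate([attention(h_t, S), h_t])
    logits = feat @ Wp + bp
    p = np.exp(logits - logits.max())
    return p / p.sum()

def log_loss(p, y_index):
    """Formula (10): cross-entropy against the one-hot true token."""
    return -np.log(p[y_index])

rng = np.random.default_rng(2)
h, m, vocab = 16, 6, 50                           # toy sizes
S, h_t = rng.normal(size=(m, h)), rng.normal(size=h)
p = predict(h_t, S, rng.normal(size=(2 * h, vocab)) * 0.1, np.zeros(vocab))
loss = log_loss(p, y_index=7)                     # 7 is an arbitrary true token id
```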
And 3, parameter migration. After the training of the text ordering model is completed, the emotion polarity analysis model is built by using the same coding structure as the text ordering model. And then migrating the coding model parameters and the word vector parameters of the text ordering model into the emotion polarity analysis model.
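Mechanically, step 3 is a filtered parameter copy: encoder and word-vector parameters move across, while the ordering model's decoder and the sentiment model's classifier are untouched. The sketch below mimics this on plain dicts; the `encoder.`/`embedding.` name prefixes are hypothetical (a real framework would use its own state-dict key names).

```python
def migrate(source_params, target_params, prefixes=("embedding.", "encoder.")):
    """Copy word-vector and encoder parameters from the trained text
    ordering model into the emotion polarity analysis model."""
    moved = []
    for name, value in source_params.items():
        if name.startswith(prefixes) and name in target_params:
            target_params[name] = value
            moved.append(name)
    return moved

# Toy parameter dictionaries standing in for the two models' weights.
ordering_model = {"embedding.weight": [1, 2], "encoder.layer0.W": [3],
                  "decoder.W": [4]}
sentiment_model = {"embedding.weight": [0, 0], "encoder.layer0.W": [0],
                   "classifier.W": [9]}
moved = migrate(ordering_model, sentiment_model)
```

After migration only the shared encoder/embedding entries change; the classifier parameters are then trained from scratch in step 4.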
And 4, training an emotion polarity analysis model. Based on the emotion polarity analysis data set, text is first input to the model and sentence features are extracted with the encoder; then the convolutional neural network, recurrent neural network, and attention mechanism extract the local features, global features, and final noise-reduced features of the sentence; finally, the features are classified. This step is divided into 5 sub-steps, implemented as follows.
Sub-step 4-1, sentence coding. Since the coding model is completely consistent with the text ordering model, the coding process of the emotion polarity model is the same as that of the substep 2-1. The encoded output is denoted by S.
And sub-step 4-2, local feature extraction. A one-dimensional convolutional network extracts the local feature representation T of the encoded output S, as shown in formula (11); the calculation at each step, T_i, is shown in formula (12), where W and b are learnable model parameters, × denotes the convolution operation, i indexes the i-th step, and k is the convolution kernel width (this embodiment uses a kernel of width 2).

T = Conv(S) (11)
T_i = tanh(W × S_{i:i+k−1} + b) (12)
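Formula (12) is a tanh-activated 1-D convolution over width-k windows of S. A minimal NumPy sketch, with the channel count and weight shapes assumed for illustration:

```python
import numpy as np

def conv_local(S, W, b):
    """Formula (12): T_i = tanh(W × S_{i:i+k-1} + b). W flattens a width-k
    window of rows of S into c output channels; k is inferred from W's shape."""
    h = S.shape[1]
    k = W.shape[0] // h                     # kernel width (2 in the embodiment)
    windows = [S[i:i + k].ravel() for i in range(len(S) - k + 1)]
    return np.tanh(np.array(windows) @ W + b)

rng = np.random.default_rng(3)
m, h, c = 6, 16, 8                          # sentence length, hidden size, channels
S = rng.normal(size=(m, h))                 # encoder output
T = conv_local(S, rng.normal(size=(2 * h, c)) * 0.1, np.zeros(c))
```

With a width-2 kernel, a length-m input yields N_t = m − 1 local feature vectors, which feed the Bi-GRU of sub-step 4-3.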
And sub-step 4-3, global feature extraction. The Bi-GRU extracts the sentence-level global feature C, as shown in formulas (13) and (14). The bidirectional GRU consists of a forward GRU that reads T from left to right to produce →h_j, and a backward GRU that reads T from right to left to produce ←h_j; N_t denotes the length of the preceding convolution output, and →h_j and ←h_j denote the hidden outputs of the two directions at step j of the GRU model. Concatenating →h_j and ←h_j gives the feature h_j of each step, as shown in formula (15).

→h_j = GRU_fwd(T_j), j ∈ [1, N_t] (13)
←h_j = GRU_bwd(T_j), j ∈ [N_t, 1] (14)
h_j = [→h_j, ←h_j] (15)
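Formulas (13)–(15) run the same GRU recurrence in both directions over the convolution output T and concatenate per-step states. The sketch below reuses a simplified bias-free GRU cell (shapes assumed; a real Bi-GRU would use separate forward and backward weights).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_seq(T, P, reverse=False):
    """Run a GRU over T; reverse=True gives the backward pass (formula 14)."""
    h = np.zeros(P["Wh"].shape[1])
    out = []
    steps = reversed(range(len(T))) if reverse else range(len(T))
    for j in steps:
        x = np.concatenate([T[j], h])
        z, r = sigmoid(x @ P["Wz"]), sigmoid(x @ P["Wr"])
        h = (1 - z) * h + z * np.tanh(np.concatenate([T[j], r * h]) @ P["Wh"])
        out.append(h)
    if reverse:
        out.reverse()                       # restore left-to-right step order
    return np.array(out)

rng = np.random.default_rng(4)
nt, c, hid = 5, 8, 12                       # N_t, conv channels, GRU hidden size
T = rng.normal(size=(nt, c))
P = {k: rng.normal(size=(c + hid, hid)) * 0.1 for k in ("Wz", "Wr", "Wh")}
fwd, bwd = gru_seq(T, P), gru_seq(T, P, reverse=True)
C = np.concatenate([fwd, bwd], axis=1)      # formula (15): h_j = [→h_j, ←h_j]
```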
And sub-step 4-4, attention noise reduction. Real comment text can be quite noisy, and it is generally undesirable for all parts of a sentence to contribute indiscriminately to the model's final prediction. The text feature C is therefore further optimized in combination with the attention mechanism. h_j is passed through a perceptron (MLP) to obtain v_j, as in formula (16), where W_a and b_a are learnable model parameters. The importance of each step is measured by the similarity between v_j and the context vector, as shown in formula (17) (the normalized weight is denoted a_j). The final sentence semantic feature C_a is calculated by weighted summation, as shown in formula (18).

v_j = tanh(W_a h_j + b_a) (16)
a_j = exp(v_j^T · u) / Σ_{k=1}^{N_t} exp(v_k^T · u) (17)
C_a = Σ_{j=1}^{N_t} a_j · h_j (18)
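Formulas (16)–(18) score each step with an MLP, normalize the scores with a softmax against a context vector, and take the weighted sum. In the sketch below the context vector `u` is assumed learnable (the source text only says importance is measured by similarity to the context), and all shapes are toy values.

```python
import numpy as np

def attention_pool(C, Wa, ba, u):
    """Formulas (16)-(18): MLP scoring, softmax weights, weighted sum."""
    v = np.tanh(C @ Wa + ba)                # formula (16)
    scores = v @ u                          # similarity to context vector u
    a = np.exp(scores - scores.max())
    a = a / a.sum()                         # formula (17): attention weights
    return a @ C                            # formula (18): C_a as weighted sum

rng = np.random.default_rng(5)
nt, d = 5, 24                               # N_t steps, Bi-GRU feature width
C = rng.normal(size=(nt, d))                # global features from sub-step 4-3
Ca = attention_pool(C, rng.normal(size=(d, d)) * 0.1, np.zeros(d),
                    rng.normal(size=d))
```

Since the weights form a convex combination, C_a always lies within the per-coordinate range of the step features, which is what damps the contribution of noisy steps.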
Sub-step 4-5, emotion polarity classification. The sentence feature C_a first undergoes a linear transformation and the softmax function to obtain the model's classification probability distribution p, as shown in formula (19), where W_p and b_p are learnable model parameters; the model's logarithmic loss is then calculated from p and the true label y, as shown in formula (20), where n denotes the number of categories.

p = softmax(W_p C_a + b_p) (19)
loss = −Σ_{i=1}^{n} y_i · log p_i (20)
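The classification head of formulas (19) and (20) is a single linear + softmax layer with cross-entropy loss. A minimal sketch, assuming three polarity classes (e.g. positive / neutral / negative) and toy weight shapes:

```python
import numpy as np

def classify(Ca, Wp, bp):
    """Formula (19): linear layer + softmax over polarity classes."""
    z = Ca @ Wp + bp
    p = np.exp(z - z.max())
    return p / p.sum()

def log_loss(p, y):
    """Formula (20): cross-entropy with one-hot label y over n classes."""
    return -float(np.sum(y * np.log(p)))

rng = np.random.default_rng(6)
d, n = 24, 3                                # feature width, number of classes
Ca = rng.normal(size=d)                     # noise-reduced sentence feature
p = classify(Ca, rng.normal(size=(d, n)) * 0.1, np.zeros(n))
y = np.eye(n)[0]                            # one-hot true label
loss = log_loss(p, y)
```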
It is to be understood that these examples are for illustration only and do not limit the scope of the application; after reading the application, modifications in various equivalent forms fall within the scope defined by the appended claims.

Claims (4)

1. An emotion polarity analysis method based on transfer learning is characterized by comprising the following steps:
step 1, constructing a sentence pair data set,
Step 2, training a text ordering model,
Step 3, parameter migration,
Training an emotion polarity analysis model;
step 3, parameter migration, namely firstly constructing an emotion polarity analysis model, using the same coding structure as that of a text ordering model, and then migrating coding parameters and word vector parameters of the text ordering model into the emotion polarity analysis model;
step 2, training a text ordering model, which specifically comprises the following steps:
In the substep 2-1, sentence coding, extracting text coding features by using Bert, and the meanings of the related symbols of the model are as follows:
token: each word in the dataset;
n: total number of token in the dataset;
h: word vector dimensions of Token;
Emb: embedding matrix, shape: n × h;
vocab: a token dictionary, token: index id;
first inputting a text sequence X = (token_1, token_2, …, token_m) of length m, extracting the embedding Emb_i of each token according to its index id_i in vocab, calculating e_i, and using s to denote the text matrix formed by the whole sentence; the calculations of e_i and s are shown in formulas (1) and (2), where position denotes the position encoding and segment the segment encoding,

e_i = Bert(Emb_i + segment_i + position_i) (1)
s = (e_1, e_2, …, e_m) (2)
then s is fed into a coding model containing 12 Transformer layers to extract the final encoded output S; LN in formula (3) is the layer normalization operation and MSA the multi-head self-attention operation; taking the z-th layer as an example, the output s_{z-1} of the previous layer is first processed by MSA, followed by the residual and LN operations to obtain ŝ_z, as shown in formula (3); finally the FFN processes ŝ_z and the layer output s_z is obtained by combining the residual and LN, as shown in formula (4); the calculation of the FFN is shown in formula (5), where W_1, b_1, W_2, b_2 are learnable model parameters,

ŝ_z = LN(MSA(s_{z-1}) + s_{z-1}) (3)
s_z = LN(FFN(ŝ_z) + ŝ_z) (4)
FFN(x) = max(0, xW_1 + b_1)W_2 + b_2 (5)
in sub-step 2-2, sentence decoding, the GRU is used as the basic unit of the decoding network; the decoding process is shown in formulas (6) and (7); in formula (6), d_{t-1} denotes the input and h_{t-1} the hidden output of the previous step, with d_0 corresponding to CLS as input; formula (7) denotes the hidden-layer initialization of the decoding process: the encoded output S is first average-pooled and then passed through a linear layer to obtain the initial hidden input h_0, where W_s, b_s are learnable model parameters,

h_t = GRU(d_{t-1}, h_{t-1}) (6)
h_0 = W_s · avg(S) + b_s (7)
in sub-step 2-3, prediction output, the decoder hidden output h_t of each step is taken as the query and the encoded output S as the keys and values; the context vector context is calculated with dot-product attention as shown in formula (8); context and h_t are then concatenated as the final feature of the current decoding step, and the concatenated feature is processed by a linear transformation and the softmax function to obtain the model's predicted probability distribution p, as shown in formula (9), where W_p, b_p are learnable model parameters; finally the model's logarithmic loss is calculated from the predicted value p and the true value y as shown in formula (10), where m denotes the size of the dictionary vocab,

context = Attention(h_t, S, S) (8)
p = softmax(W_p[context, h_t] + b_p) (9)
loss = −Σ_{i=1}^{m} y_i · log p_i (10)
Wherein, step 4, training emotion polarity analysis model, concretely as follows,
Step 4-1, sentence coding, wherein the coding model is completely consistent with the text ordering model, so that the coding process of the emotion polarity model is the same as that of step 2-1, and S is used for representing coding output;
sub-step 4-2, extracting local features: a one-dimensional convolutional network extracts the local feature representation T of the encoded output S, as shown in formula (11); the calculation T_i at each step is shown in formula (12), where W and b are learnable model parameters, × denotes the convolution operation, i denotes the i-th step, and k denotes the convolution kernel width,

T = Conv(S) (11)
T_i = tanh(W × S_{i:i+k−1} + b) (12)
sub-step 4-3, extracting global features: the Bi-GRU extracts the sentence-level global feature C, as shown in formulas (13) and (14); the bidirectional GRU comprises a forward GRU reading T from left to right to generate →h_j and a backward GRU reading T from right to left to generate ←h_j, where N_t denotes the length of the preceding convolution output and →h_j, ←h_j denote the hidden outputs of the two directions at step j of the GRU model; →h_j and ←h_j are concatenated to obtain the per-step feature h_j, as shown in formula (15),

→h_j = GRU_fwd(T_j), j ∈ [1, N_t] (13)
←h_j = GRU_bwd(T_j), j ∈ [N_t, 1] (14)
h_j = [→h_j, ←h_j] (15)
sub-step 4-4, attention noise reduction: the text feature C is further optimized in combination with the attention mechanism; h_j is passed through a perceptron MLP to obtain v_j, as shown in formula (16), where W_a and b_a are learnable model parameters; the importance of each step is measured by the similarity between v_j and the context vector, as shown in formula (17) (the normalized weight is denoted a_j); the final sentence semantic feature C_a is calculated by weighted summation, as shown in formula (18),

v_j = tanh(W_a h_j + b_a) (16)
a_j = exp(v_j^T · u) / Σ_{k=1}^{N_t} exp(v_k^T · u) (17)
C_a = Σ_{j=1}^{N_t} a_j · h_j (18)
sub-step 4-5, emotion polarity classification: the sentence feature C_a first undergoes a linear transformation and the softmax function to obtain the model's classification probability distribution p, as shown in formula (19), where W_p and b_p are learnable model parameters; the model's logarithmic loss is then calculated from p and the true label y, as shown in formula (20), where n denotes the number of categories,

p = softmax(W_p C_a + b_p) (19)
loss = −Σ_{i=1}^{n} y_i · log p_i (20)
2. The method for emotion polarity analysis based on transfer learning of claim 1,
Step 1, constructing a sentence pair data set: the word positions of each sentence in the emotion polarity analysis data set are disturbed according to a set proportion while the sentences before disturbance are retained, and each pair of a disturbed sentence and its normal-word-order sentence forms one piece of training data in the new data set.
3. The method for emotion polarity analysis based on transfer learning of claim 1,
Step 2, training a text ordering model, constructing the text ordering model based on a seq2seq mode, firstly taking a disturbed sentence as a model input, and extracting sentence characteristics by using an encoder; then decoding word by word, predicting and outputting words according to the decoding characteristics of the current time step; finally, comparing the model output with characters at positions corresponding to the normal language order, and training model parameters based on the cross entropy loss function.
4. The emotion polarity analysis method based on transfer learning according to claim 1, wherein in step 4, training the emotion polarity analysis model, comment text from the emotion polarity analysis data set is input and sentence features are extracted by the encoder; then the convolutional neural network, recurrent neural network, and attention mechanism further extract the local features, global features, and final noise-reduced features of the sentence; finally, the features are classified.
CN202110455888.8A 2021-04-26 2021-04-26 Emotion polarity analysis method based on transfer learning Active CN113326695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110455888.8A CN113326695B (en) 2021-04-26 2021-04-26 Emotion polarity analysis method based on transfer learning

Publications (2)

Publication Number Publication Date
CN113326695A CN113326695A (en) 2021-08-31
CN113326695B true CN113326695B (en) 2024-04-26

Family

ID=77413792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110455888.8A Active CN113326695B (en) 2021-04-26 2021-04-26 Emotion polarity analysis method based on transfer learning

Country Status (1)

Country Link
CN (1) CN113326695B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113836919A (en) * 2021-09-30 2021-12-24 中国建筑第七工程局有限公司 Building industry text error correction method based on transfer learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth
CN110334187A (en) * 2019-07-09 2019-10-15 昆明理工大学 Burmese sentiment analysis method and device based on transfer learning
WO2021051598A1 (en) * 2019-09-19 2021-03-25 平安科技(深圳)有限公司 Text sentiment analysis model training method, apparatus and device, and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on BERT-based text sentiment analysis methods; 方英兰 et al.; 信息技术与信息化 (Information Technology and Informatization); 2020-02-28 (No. 02); full text *
A hierarchical attention network sentiment analysis algorithm based on transfer learning; 曲昭伟 et al.; 计算机应用 (Journal of Computer Applications); 2018-07-19 (No. 11); full text *

Similar Documents

Publication Publication Date Title
CN107133213B (en) Method and system for automatically extracting text abstract based on algorithm
CN107832400B (en) A kind of method that location-based LSTM and CNN conjunctive model carries out relationship classification
CN110059188B (en) Chinese emotion analysis method based on bidirectional time convolution network
CN113239181B (en) Scientific and technological literature citation recommendation method based on deep learning
CN109325112B (en) A kind of across language sentiment analysis method and apparatus based on emoji
CN111274398B (en) Method and system for analyzing comment emotion of aspect-level user product
CN112667818B (en) GCN and multi-granularity attention fused user comment sentiment analysis method and system
CN110321563B (en) Text emotion analysis method based on hybrid supervision model
CN111858932A (en) Multiple-feature Chinese and English emotion classification method and system based on Transformer
CN111143563A (en) Text classification method based on integration of BERT, LSTM and CNN
CN114757182A (en) BERT short text sentiment analysis method for improving training mode
WO2023134083A1 (en) Text-based sentiment classification method and apparatus, and computer device and storage medium
CN110472245B (en) Multi-label emotion intensity prediction method based on hierarchical convolutional neural network
CN113065344A (en) Cross-corpus emotion recognition method based on transfer learning and attention mechanism
CN114462420A (en) False news detection method based on feature fusion model
CN115759119B (en) Financial text emotion analysis method, system, medium and equipment
CN111540470B (en) Social network depression tendency detection model based on BERT transfer learning and training method thereof
CN112287106A (en) Online comment emotion classification method based on dual-channel hybrid neural network
CN114004220A (en) Text emotion reason identification method based on CPC-ANN
CN116049387A (en) Short text classification method, device and medium based on graph convolution
CN113468854A (en) Multi-document automatic abstract generation method
CN116010553A (en) Viewpoint retrieval system based on two-way coding and accurate matching signals
CN113326695B (en) Emotion polarity analysis method based on transfer learning
CN113806528A (en) Topic detection method and device based on BERT model and storage medium
CN113159831A (en) Comment text sentiment analysis method based on improved capsule network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant