CN116757195B - Implicit emotion recognition method based on prompt learning - Google Patents


Info

Publication number
CN116757195B
CN116757195B (application CN202310746692.3A)
Authority
CN
China
Prior art keywords
prompt
seed word
template
category
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310746692.3A
Other languages
Chinese (zh)
Other versions
CN116757195A (en)
Inventor
卜坤
刘远超
刘秉权
孙承杰
单丽莉
林磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202310746692.3A priority Critical patent/CN116757195B/en
Publication of CN116757195A publication Critical patent/CN116757195A/en
Application granted granted Critical
Publication of CN116757195B publication Critical patent/CN116757195B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/279 - Recognition of textual entities
    • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/205 - Parsing
    • G06F 40/216 - Parsing using statistical methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/30 - Semantic analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses an implicit emotion recognition method based on prompt learning, which comprises the following steps: acquiring emotion text and performing topic identification, then selecting the parent category corresponding to the seed words based on the identified topic; based on that parent category, continuing category detection on the seed words using clustering and cosine similarity over unlabeled comments to obtain the sub-category corresponding to the seed words; and constructing a prompt template, embedding the sub-category corresponding to the seed words into the prompt template, training it, combining the trained prompt template with a pre-trained language model, and recognizing the corresponding implicit emotion. Compared with a manually constructed template, the constructed prompt template matches the pre-trained language model better, and learning efficiency is improved by actively adapting the downstream task to the large pre-trained language model.

Description

Implicit emotion recognition method based on prompt learning
Technical Field
The invention belongs to the technical fields of artificial intelligence, natural language processing and emotion analysis, and particularly relates to an implicit emotion recognition method based on prompt learning.
Background
Emotion analysis is a classical task in the field of natural language processing. Since the early work on coarse-grained text emotion analysis, great progress has been made. Data that contain no explicit emotion words yet still convey an emotional tendency are defined as implicit emotion data, and the tendency they express is defined as implicit emotion. A large amount of existing work focuses on explicit emotion; compared with explicit emotion, implicit emotion is widely distributed in the real world and has greater mining potential. Because the object an implicit emotion points to is unclear, and judging its emotion polarity requires introducing external knowledge, these technical difficulties have become research hotspots in recent years.
To capture implicit emotion, one prior scheme searches a large corpus with a large pre-trained model and uses the SCAPT model to learn emotion knowledge from that corpus, performing supervised contrastive pre-training on a large corpus retrieved from in-domain language resources. This scheme demands substantial computing resources, and the downstream task is not brought close to the large pre-trained model, so the knowledge inside the model cannot be exploited efficiently. Another existing scheme builds an emotion detection model on event representations, on the premise that events are first extracted from the text, so end-to-end implicit emotion analysis cannot be realized. Yet another scheme proposes a correction algorithm that finds implicit features directly from the co-occurrence between implicit features and words, but it performs no semantic disambiguation, cannot inject the large amount of information contained in unlabeled data into the model, and does not explore the relation between different parts of speech and implicit features.
Therefore, in view of the problems in the prior art, it is highly desirable to provide an implicit emotion recognition method based on prompt learning, so as to solve the problem of analyzing implicit emotion data (data containing no explicit emotion words) under low-resource working conditions.
Disclosure of Invention
The invention aims to provide an implicit emotion recognition method based on prompt learning so as to solve the problems in the prior art.
In order to achieve the above object, the present invention provides an implicit emotion recognition method based on prompt learning, comprising the following steps:
Acquiring emotion texts to perform topic identification, and selecting a parent category corresponding to a seed word based on the identified topic;
based on the parent category corresponding to the seed word, adopting the clustering and cosine similarity of unlabeled comments to continue category detection on the seed word, and obtaining the sub-category corresponding to the seed word;
Constructing a prompt template, embedding the sub-category corresponding to the seed word into the prompt template, training, combining the trained prompt template with a pre-training language model, and identifying the corresponding implicit emotion.
Optionally, the process of performing category detection on the seed words based on clustering and cosine similarity of unlabeled comments includes: obtaining the cosine similarity between the seed word to be detected and each seed word in the target category, and taking the average value; performing word embedding on a data set with a BERT model, obtaining a group of unlabeled sentences from the data set, computing the Euclidean distances between the word-embedding averages of the unlabeled sentences, clustering the unlabeled sentences based on those Euclidean distances with a k-means clustering algorithm, and obtaining the cosine similarity of each category; and comparing the cosine similarity of the seed word to be detected with the cosine similarity of each category to obtain the sub-category corresponding to the seed word to be detected.
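The averaged-cosine-similarity step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the two-dimensional toy vectors stand in for BERT embeddings, and the function names are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def category_similarity(word_vec, seed_vecs):
    # Average cosine similarity between a candidate seed word and
    # every seed word already assigned to a target category.
    return sum(cosine(word_vec, s) for s in seed_vecs) / len(seed_vecs)

# Toy embeddings (real ones would come from BERT).
candidate = [1.0, 0.0]
food_seeds = [[0.9, 0.1], [1.0, 0.2]]
service_seeds = [[0.0, 1.0], [0.1, 0.9]]

scores = {
    "food": category_similarity(candidate, food_seeds),
    "service": category_similarity(candidate, service_seeds),
}
best = max(scores, key=scores.get)  # category with highest average similarity
```

In a full system the same averaging would be applied per category, and the clustering step described above would refine the assignment.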
Optionally, the expression of the prompt template is as follows:
T=[x][v1][v2]...[vm][mask]
Wherein T is the prompt template, x is the category corresponding to the seed word, v1, v2, …, vm are pseudo tokens, and [mask] is the prediction result.
Optionally, the character string in the prompt template includes a first vacancy and a second vacancy, the first vacancy is used for inputting a category corresponding to the seed word, and the second vacancy is used for filling a corresponding prediction result obtained based on the pre-training language model.
Optionally, the pre-trained language model dynamically adjusts the prompt template in the semantic space.
Optionally, training the prompt template includes: obtaining the vocabulary of the pre-trained model, randomly designating one word in the vocabulary for each pseudo token, taking the embedding of the corresponding word as the initialization of that pseudo token, and completing the training of each pseudo token and its corresponding parameters, thereby obtaining a trained prompt template.
Optionally, the process of identifying the category corresponding to the seed word includes: inputting the category corresponding to the seed word into a trained prompt template, analyzing and identifying the category corresponding to the seed word based on a pre-training language model to obtain a corresponding prediction result, filling the corresponding prediction result into the trained prompt template, predicting the probability of the corresponding prediction result based on the pre-training language model, and outputting the final implicit emotion.
The invention has the technical effects that:
(1) The implicit emotion recognition method based on prompt learning provided by the invention can automatically switch between different fields, determine the corresponding seed words, and apply prompt-learning technology to perform implicit emotion recognition. A traditional manually constructed template, being fixed in form, cannot flexibly match different data; compared with a manually constructed template, the prompt template constructed by this method matches the pre-trained language model better, and the prior knowledge in the pre-trained language model is used more efficiently by actively adapting the downstream task to the large-scale pre-trained language model, thereby improving learning efficiency.
(2) The implicit emotion recognition method based on prompt learning reduces the requirements on computing resources and data resources, and can be realized under the working condition of low resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
FIG. 1 is a schematic diagram of text body recognition seed words (restaurant type is taken as an example) in an embodiment of the invention;
FIG. 2 is a schematic flow chart of category detection by using the clustering and cosine similarity of unlabeled comments in an embodiment of the invention;
FIG. 3 is a schematic diagram of implicit emotion recognition based on a prompt template in an embodiment of the present invention;
FIG. 4 is a schematic diagram of model performance under different coefficients α and different k values in an embodiment of the present invention, where (a) shows model performance under different coefficients α using the F1 score as an index, and (b) shows model performance under different k values using the F1 score as an index.
Detailed Description
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Example 1
The embodiment provides an implicit emotion recognition method based on prompt learning, which comprises the following steps: acquiring emotion text and performing topic identification, then obtaining seed words based on the identified topic; performing category detection on the seed words based on clustering and cosine similarity of unlabeled comments to obtain the categories corresponding to the seed words; and constructing a prompt template, training it, combining the trained prompt template with a pre-trained language model, identifying the category corresponding to the seed words, and outputting the corresponding implicit emotion.
As a specific embodiment, as shown in fig. 1, firstly, a text is acquired, and text topic identification is performed by using a text topic identification model; then determining seed words in different fields according to the identified subjects; as shown in fig. 2, the category detection task is accomplished with clustering and cosine similarity of unlabeled reviews.
In practice, taking the catering category as an example, similarity is defined as the average of the cosine similarity values between a sentence and each seed word belonging to that category. Word embedding is performed on the Yelp dataset with BERT, and a set of unlabeled sentences is obtained from the Yelp dataset. A k-means clustering algorithm computes the Euclidean distances between the word-embedding averages of the words in each sentence and clusters the unlabeled sentences into k classes based on these distances. For each cluster, a similarity value is calculated for each category. A set of seed words is automatically selected for each category to represent it.
As a specific embodiment, this embodiment first selects 5 seed words to represent each category. The seed-word selection method is as follows: perform TF-IDF (Term Frequency-Inverse Document Frequency) statistics and data cleaning on the comment data of each category, and take the top five words. Specifically, after word-frequency statistics the data are cleaned manually, including deleting modal particles (words carrying no real meaning) and deleting words that may cause category confusion (for example, a word A that appears in multiple categories cannot serve as a semantic feature word and should be deleted).
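The TF-IDF seed-word selection can be sketched as below. The smoothed IDF variant, the helper name `tfidf_top_words`, and the toy review strings are assumptions for illustration; the manual cleaning step is approximated here by a stopword set.

```python
import math
from collections import Counter

def tfidf_top_words(docs, top_n=5, stopwords=frozenset()):
    # docs: raw comment strings for ONE category.
    # Score each word by term frequency times a smoothed inverse
    # document frequency, drop stopwords, keep the top_n words.
    tokenised = [d.lower().split() for d in docs]
    n = len(tokenised)
    df = Counter()
    for toks in tokenised:
        df.update(set(toks))          # document frequency
    tf = Counter(t for toks in tokenised for t in toks)  # term frequency
    scores = {w: tf[w] * (math.log((1 + n) / (1 + df[w])) + 1)
              for w in tf if w not in stopwords}
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [w for w, _ in ranked[:top_n]]

reviews = ["the pasta was cold", "pasta pasta great taste", "the service was slow"]
seeds = tfidf_top_words(reviews, top_n=5, stopwords={"the", "was"})
```

In practice the stopword set would be replaced by the manual cleaning described above (removing modal particles and category-ambiguous words).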
Next, in order to quantitatively describe the similarity between an input sentence and a category, this embodiment calculates the average cosine similarity score between the input sentence and the category's seed-word group. The process is as follows:
This embodiment defines sentSim_ci(x) as the quantization index for similarity:
sentSim_ci(x) = (1/n) Σ_{j=1..n} cos(emb(x), emb(s_j))
where c_i represents the i-th category, x is the input sentence, and s_j is the j-th of the n seed words of category c_i.
The different categories can be better distinguished by adding a sigmoid function to the corresponding category detection model:
sentScore_i = sigmoid(sentSim_ci(x))
At this point, each input sentence x has a similarity measure value for every category.
The unlabeled corpus is clustered with the k-means algorithm, using the Euclidean distances between word-vector means:
min Σ_k Σ_{x∈u_k} ||emb(x) − μ_k||²
where u_k denotes the k-th cluster and μ_k its centroid. In analogy to the sentence similarity above, this embodiment again incorporates a sigmoid function, computing a per-category similarity value for each cluster:
clustScore_i = sigmoid(clustSim_ci(u_k))
where clustSim_ci(u_k) is the average cosine similarity between the sentences of cluster u_k and the seed words of category c_i.
Now, given a test sentence, its sentScore vector is first computed and then interpolated with the clustScore of the sentence's nearest cluster, as shown in the following equation. The nearest cluster is found by locating the centroid closest to the sentence, according to the Euclidean distance between the sentence's word-embedding mean and each cluster centroid.
score = α · sentScore_N + (1 − α) · clustScore_N
where sentScore_N and clustScore_N are the L2-normalized score vectors of the sentence and of its cluster, respectively. As shown in fig. 4, simulation experiments select the coefficient α = 0.7 (fig. 4(a)) and the number of clusters k = 17 (fig. 4(b)), which achieve a better classification effect.
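The interpolation step can be sketched as follows. The α = 0.7 value follows the text; the toy score vectors and helper names are illustrative assumptions.

```python
import math

def l2_normalize(v):
    # Scale a vector to unit L2 norm (leave zero vectors untouched).
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def interpolated_score(sent_score, clust_score, alpha=0.7):
    # score = alpha * sentScore_N + (1 - alpha) * clustScore_N,
    # where both per-category score vectors are L2-normalized first.
    s = l2_normalize(sent_score)
    c = l2_normalize(clust_score)
    return [alpha * a + (1 - alpha) * b for a, b in zip(s, c)]

sent = [0.8, 0.2, 0.1]   # sentence-level scores per category (toy values)
clust = [0.3, 0.6, 0.4]  # scores of the sentence's nearest cluster
score = interpolated_score(sent, clust)
pred = score.index(max(score))  # index of the winning category
```

With these toy values the sentence-level evidence dominates, so category 0 wins even though the cluster favors category 1.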
As shown in FIG. 3, because the aspect has already been extracted, the problem has been converted into ordinary fine-grained emotion analysis, and a trainable prompt template is used to complete the emotion prediction.
Here the obtained aspect is the subject to which the emotion corresponds. Example one: "The screen of this computer is clear, but the battery is ordinary." The aspect corresponding to "clear" is the screen, and the aspect corresponding to "ordinary" is the battery; both the emotion words and the aspects are explicit, so this belongs to explicit analysis. The invention mainly solves implicit emotion analysis. Example two: "The passing attendant walked straight on after spraying water on me." This corpus contains no emotion word, a typical implicit emotion, yet the event of "spraying water on me" reflects a negative emotion toward the restaurant service. "Restaurant service" plays the role the aspects played in example one: the emotion (SENTIMENT) corresponding to the aspect (restaurant service) is negative, which forms the final output of the model: restaurant service - negative emotion.
The subject (aspect) of the corpus has already been detected by the category detection module to be restaurant service. Once the aspect is obtained, the problem becomes an ordinary explicit emotion recognition problem, solved by combining the template with the pre-trained language model. The model of this embodiment employs a trainable vectorized template that exists within the semantic space.
As shown in fig. 3, the original text is entered on the left and the trainable prompt template on the right. This embodiment converts the prompt template into continuous vectors that can be optimized: turning the template into vectors living in the semantic space makes optimization easier. The formal expression of the continuous template is shown in formula 1:
T=[x][v1][v2]...[vm][mask] (1)
Wherein T is the prompt template, x is the category corresponding to the seed word, v1, v2, …, vm are pseudo tokens, and [mask] is the prediction result.
Here each v_m is a pseudo token with no actual meaning; it is essentially a vector. On this basis, the pre-trained language model can dynamically adjust the prompt template in the semantic space. There are several ways to initialize each pseudo token. The most common is random initialization, e.g. from a normal or uniform distribution. Alternatively, the existing embedding table of the pre-trained model can be used: each pseudo token randomly designates a word in the vocabulary and takes the embedding of that word as its initialization. In either case, every pseudo token and its corresponding parameters are trained during the training process. The similarity between the representation of the test sentence and the prototype vector of each category is computed, and the label with the maximum similarity is taken as the prediction result, i.e. the pre-trained model's cloze filling of [mask].
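The two pseudo-token initialization options described above can be sketched as follows. The embedding dimension, toy vocabulary, and function name are illustrative assumptions; a real system would read the pre-trained model's actual embedding table, and the resulting vectors would then be updated by gradient descent as trainable parameters.

```python
import random

random.seed(0)
EMB_DIM = 8
vocab = ["good", "bad", "food", "service", "is", "the"]
# Toy embedding table standing in for the pre-trained model's table.
embedding_table = {w: [random.gauss(0, 0.02) for _ in range(EMB_DIM)]
                   for w in vocab}

def init_pseudo_tokens(m, mode="vocab"):
    # Initialize m pseudo-token vectors [v1]...[vm] for the template
    # T = [x][v1][v2]...[vm][mask]. Two options from the description:
    # pure random init, or copying the embedding of a random vocab word.
    if mode == "random":
        return [[random.gauss(0, 0.02) for _ in range(EMB_DIM)]
                for _ in range(m)]
    words = [random.choice(vocab) for _ in range(m)]
    return [list(embedding_table[w]) for w in words]

pseudo = init_pseudo_tokens(3)  # vocab-based initialization of [v1][v2][v3]
```

Either way the pseudo tokens start as ordinary vectors and acquire meaning only through training.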
Here [mask] is determined by the prediction of the pre-trained language model (obtained as open source), which completes the task in a cloze-filling manner.
As a specific embodiment, the pre-trained language model selected in this embodiment is K-BERT, because implicit emotion analysis also contains data such as: "this computer is as fast as a leopard / as slow as a tortoise." The pre-trained language model must understand the representational semantics of "leopard" and "tortoise": fast and slow. Only then can the model produce correct answers in the subsequent prompt template, so an enhanced pre-trained language model is selected. This type of knowledge belongs to common-sense knowledge. Ordinary pre-trained models lack such domain knowledge, so this embodiment uses the K-BERT pre-trained language model to inject domain knowledge into sentences. K-BERT is an open-source model compatible with the BERT model; domain knowledge can be conveniently injected into it without re-pre-training. In addition, the model introduces soft position encoding and a visible matrix to limit the influence of knowledge and overcome knowledge noise.
In this embodiment the model makes its judgment as follows. For example, the original text is "The passing attendant walked straight on after spraying water on me"; combined with the prompt template, the pre-trained language model fills Yes and Negative into the [mask] positions. The prompt template may be understood as follows (since a trainable prompt template is used in this example, a similar manual template is shown for ease of understanding):
The service is bad? [Yes].
It is [Negative].
The improvement of the prompt template in this embodiment is that it is trainable, and switching between different fields is completed automatically. For example, when the text "this duration is one hour" is entered, text topic identification recognizes the digital-product field: the text topic identification model switches automatically from restaurant food to digital products, and the seed words switch to the corresponding field, realizing implicit emotion analysis across different fields.
Prompt learning can be understood as follows: there exists a function f_prompt(·) that modifies the input x into x′ = f_prompt(x). The prompt template is defined as a string spliced after the input corpus, with two slots preset in the string: one for the input corpus [text], and one to be filled with the prediction result [mask] generated by the language model. The quality of the template directly affects the effect of the final model; the prompt template is denoted T.
Prediction result mapping: [mask] is mapped into the answer field y to determine the corresponding category. The academic community generally holds that manual templates depend heavily on the designer's experience and cannot guarantee the effect. The technical details of automatic template construction can be summarized as follows: given the original input, a number of additional discrete characters are defined to compose the template, the probability of the word filling [mask] is predicted by the pre-trained language model, and y, i.e. the prediction result, is determined by an optimization algorithm (including but not limited to gradient search). For example: the pre-trained model computes that the vector replacing [mask] has similarity 0.6 with "yes" and 0.2 with "apple", so "yes" is filled in; the prediction result is the one obtained through the trained pre-trained language model.
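The mask-to-label mapping above (often called a verbalizer) can be sketched as follows. The function name, the toy probabilities (echoing the 0.6 vs 0.2 example in the text), and the yes/no answer words are illustrative assumptions, not the patent's exact procedure.

```python
def verbalize(mask_probs, verbalizer):
    # mask_probs: probability (or similarity) the pre-trained LM assigns
    # to each vocabulary word at the [mask] position.
    # verbalizer: mapping from answer words to emotion labels (field y).
    # Pick the answer word with the highest score and return its label.
    best_word = max(verbalizer, key=lambda w: mask_probs.get(w, 0.0))
    return verbalizer[best_word], best_word

# Toy scores: the vector replacing [mask] matches "yes" with 0.6 and
# an unrelated word like "apple" with only 0.2.
mask_probs = {"yes": 0.6, "no": 0.15, "apple": 0.2}
verbalizer = {"yes": "Negative", "no": "Positive"}  # "service is bad? yes" -> Negative
label, word = verbalize(mask_probs, verbalizer)
```

Only words that appear in the verbalizer compete, so unrelated high-scoring words such as "apple" cannot become a label.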
Here y is Yes, Negative, etc., as described above. It can be understood colloquially as: "[mask] will be mapped to a corresponding category determined in the field y"; the word filling [mask] starts from a randomly initialized vocabulary entry, and y can be yes, sure, OK, etc., as determined by the pre-trained language model.
A manually defined template is fixed in form, whereas the trainable prompt template provided by this embodiment can better elicit the knowledge in the pre-trained language model. Adding the prompt template helps determine the implicit aspect, and applying the trainable vectorized prompt template brings the task closer to the pre-trained language model, obtaining a better effect and determining the model output.
At present, with the development of pre-trained language models, parameter volumes have grown enormously, and the demands on data, hardware and training time keep increasing. Moreover, labeled implicit emotion data are far scarcer than labeled explicit emotion data, and fine-tuning a pre-trained language model on a small dataset easily over-fits. In summary, the method of this embodiment, which combines a prompt template with a pre-trained language model for implicit emotion recognition, has the following advantages. First, the text topic identification model automatically switches between different fields, determines the corresponding seed words, and applies prompt learning to perform implicit emotion recognition; compared with a manually constructed template, the prompt template matches the pre-trained language model better, and learning efficiency is improved by actively adapting the downstream task to the large-scale pre-trained language model. Second, prompt learning alleviates the training bottleneck of data scarcity, providing a solution that facilitates innovation and application of pre-trained language models. Third, the requirements on computing and data resources are reduced, so the method can be implemented under low-resource working conditions.
The present application is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present application are intended to be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (6)

1. An implicit emotion recognition method based on prompt learning, characterized by comprising the following steps:
Acquiring emotion texts to perform topic identification, and selecting a parent category corresponding to a seed word based on the identified topic;
based on the parent category corresponding to the seed word, adopting the clustering and cosine similarity of unlabeled comments to continue category detection on the seed word, and obtaining the sub-category corresponding to the seed word;
constructing a prompt template, embedding the sub-category corresponding to the seed word into the prompt template, training, combining the trained prompt template with a pre-training language model, and identifying the corresponding implicit emotion;
The process of category detection of the seed words based on clustering and cosine similarity of unlabeled comments comprises the following steps: obtaining the cosine similarity between the seed word to be detected and each seed word in the target category, and taking the average value; performing word embedding on a Yelp data set with a BERT model, obtaining a group of unlabeled sentences from the Yelp data set, computing the Euclidean distances between the word-embedding averages of the unlabeled sentences, clustering the unlabeled sentences based on those Euclidean distances with a k-means clustering algorithm, and obtaining the cosine similarity of each category; and comparing the cosine similarity of the seed word to be detected with the cosine similarity of each category to obtain the sub-category corresponding to the seed word to be detected.
2. The implicit emotion recognition method based on prompt learning of claim 1, characterized in that,
The expression of the prompt template is shown as follows:
T=[x][v1][v2]...[vm][mask]
Wherein T is the prompt template, x is the category corresponding to the seed word, v1, v2, …, vm are pseudo tokens, and [mask] is the prediction result.
3. The implicit emotion recognition method based on prompt learning of claim 2, characterized in that,
The character strings in the prompt template comprise a first vacancy and a second vacancy, wherein the first vacancy is used for inputting a category corresponding to the seed word, and the second vacancy is used for filling a corresponding prediction result obtained based on the pre-training language model.
4. The implicit emotion recognition method based on prompt learning of claim 2, characterized in that,
The pre-training language model dynamically adjusts the prompt template in a semantic space.
5. The implicit emotion recognition method based on prompt learning of claim 1, characterized in that,
The training process of the prompt template comprises: obtaining the vocabulary of the pre-trained model, randomly designating one word in the vocabulary for each pseudo token, taking the embedding of the corresponding word as the initialization of the pseudo token, and completing the training of each pseudo token and its corresponding parameters to obtain a trained prompt template.
6. The implicit emotion recognition method based on prompt learning of claim 1, characterized in that,
The process for identifying the category corresponding to the seed word comprises the following steps: inputting the category corresponding to the seed word into a trained prompt template, analyzing and identifying the category corresponding to the seed word based on a pre-training language model to obtain a corresponding prediction result, filling the corresponding prediction result into the trained prompt template, predicting the probability of the corresponding prediction result based on the pre-training language model, and outputting the final implicit emotion.
CN202310746692.3A 2023-06-25 2023-06-25 Implicit emotion recognition method based on prompt learning Active CN116757195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310746692.3A CN116757195B (en) 2023-06-25 2023-06-25 Implicit emotion recognition method based on prompt learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310746692.3A CN116757195B (en) 2023-06-25 2023-06-25 Implicit emotion recognition method based on prompt learning

Publications (2)

Publication Number Publication Date
CN116757195A CN116757195A (en) 2023-09-15
CN116757195B true CN116757195B (en) 2024-06-14

Family

ID=87949340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310746692.3A Active CN116757195B (en) 2023-06-25 2023-06-25 Implicit emotion recognition method based on prompt learning

Country Status (1)

Country Link
CN (1) CN116757195B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117592483B (en) * 2023-11-21 2024-05-28 Hefei University of Technology Implicit emotion analysis method and device based on thinking tree

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109670039A (en) * 2018-11-20 2019-04-23 South China Normal University Semi-supervised e-commerce review sentiment analysis method based on tripartite graph and clustering

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US11379668B2 (en) * 2018-07-12 2022-07-05 Samsung Electronics Co., Ltd. Topic models with sentiment priors based on distributed representations
CN112417157B (en) * 2020-12-15 2022-04-26 South China Normal University Emotion classification method of text attribute words based on deep learning network
CN113535957B (en) * 2021-07-27 2022-08-02 Harbin Institute of Technology Conversation emotion recognition network model system based on dual knowledge interaction and multitask learning, construction method, equipment and storage medium
CN115098675A (en) * 2022-06-20 2022-09-23 Chongqing University of Science and Technology Emotion triple generation method based on multi-class table filling

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN109670039A (en) * 2018-11-20 2019-04-23 South China Normal University Semi-supervised e-commerce review sentiment analysis method based on tripartite graph and clustering

Non-Patent Citations (1)

Title
Sentiment analysis algorithm based on convolutional neural networks; Li Jiali; Feng Huamin; Pan Yang; Xu Zhili; Liu Biao; Computer Applications and Software (04); pp. 293-298 *

Also Published As

Publication number Publication date
CN116757195A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN110929030B (en) Text abstract and emotion classification combined training method
Wang et al. Application of convolutional neural network in natural language processing
Fu et al. Aligning where to see and what to tell: Image captioning with region-based attention and scene-specific contexts
CN108733792B (en) Entity relation extraction method
CN110245229B (en) Deep learning theme emotion classification method based on data enhancement
CN113239700A (en) Text semantic matching device, system, method and storage medium for improving BERT
CN107315737A Semantic logic processing method and system
CN110321563B (en) Text emotion analysis method based on hybrid supervision model
CN108416065A Image-to-sentence description generation system and method based on hierarchical neural network
CN111274790B (en) Chapter-level event embedding method and device based on syntactic dependency graph
CN110502610A (en) Intelligent sound endorsement method, device and medium based on text semantic similarity
CN111222318B (en) Trigger word recognition method based on double-channel bidirectional LSTM-CRF network
CN112818118B (en) Reverse translation-based Chinese humor classification model construction method
US10915756B2 (en) Method and apparatus for determining (raw) video materials for news
Li et al. UD_BBC: Named entity recognition in social network combined BERT-BiLSTM-CRF with active learning
US11755668B1 (en) Apparatus and method of performance matching
CN112052318A (en) Semantic recognition method and device, computer equipment and storage medium
CN113705238B (en) Method and system for analyzing aspect level emotion based on BERT and aspect feature positioning model
CN116757195B (en) Implicit emotion recognition method based on prompt learning
CN111145914B (en) Method and device for determining text entity of lung cancer clinical disease seed bank
CN112183106A (en) Semantic understanding method and device based on phoneme association and deep learning
CN115858750A (en) Power grid technical standard intelligent question-answering method and system based on natural language processing
Seilsepour et al. Self-supervised sentiment classification based on semantic similarity measures and contextual embedding using metaheuristic optimizer
CN110377753B (en) Relation extraction method and device based on relation trigger word and GRU model
CN116978367A (en) Speech recognition method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant