CN116451706A - Emotion recognition method and device based on specific words, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116451706A
CN116451706A (application CN202310443836.8A)
Authority
CN
China
Prior art keywords
representation
matrix
information
hidden layer
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310443836.8A
Other languages
Chinese (zh)
Inventor
刘羲
周涵
舒畅
陈又新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310443836.8A priority Critical patent/CN116451706A/en
Publication of CN116451706A publication Critical patent/CN116451706A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/042Knowledge-based neural networks; Logical representations of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to artificial intelligence in the field of financial technology, and discloses an emotion recognition method based on specific words, which comprises the following steps: performing weight calculation on the specific word representation of the emotion text to be identified, and generating sentence representation features according to the weight calculation result; performing information convolution on the adjacency matrix and the hidden layer information matrix respectively to obtain an adjacency information vector and a hidden layer information vector, and splicing the adjacency information vector, the hidden layer information vector and the priori syntax information to obtain priori syntax features; and fusing the hidden layer representation features, the sentence representation features and the priori syntax features, performing probability calculation on the fused result to obtain a fusion probability, and performing emotion analysis to obtain an emotion recognition result. In addition, the invention also relates to blockchain technology, and the specific word representation can be stored in nodes of a blockchain. The invention also provides an emotion recognition device based on specific words, electronic equipment and a storage medium. The invention can improve the accuracy of emotion recognition in the financial field.

Description

Emotion recognition method and device based on specific words, electronic equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to a method and apparatus for emotion recognition based on specific words, an electronic device, and a storage medium.
Background
In the field of natural language processing, emotion recognition has long been one of the most important tasks. Emotion recognition of text analyzes and processes subjective, emotionally colored text to obtain the emotional tendency it contains, such as positive, negative or neutral. In the field of financial technology, transaction dialogues between business personnel and clients carry different emotional tendencies, and identifying these tendencies has a key effect on the success or failure of financial transactions. Because a text sentence may contain a plurality of different specific words, and the emotion polarity of each word may be different, this affects emotion recognition of the whole sentence, so the emotion type associated with each specific word needs to be recognized accurately.
Existing emotion recognition methods for specific words generally extract the features of each word in a sentence with neural network models such as long short-term memory (LSTM) models, and then perform emotion recognition according to the part-of-speech correlation between the features of each word and the features of the specific word. However, such methods rely only on this word-level correlation and ignore the syntactic structure of the sentence, so their recognition accuracy is limited. Therefore, there is a need for an emotion recognition method with higher accuracy.
Disclosure of Invention
The invention provides an emotion recognition method and device based on specific words, an electronic device and a storage medium, aiming to improve the accuracy of emotion recognition.
In order to achieve the above object, the present invention provides an emotion recognition method based on specific words, including:
acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model, and extracting hidden layer representation features in the preset double-layer representation model;
extracting specific word representation in the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result;
constructing a corresponding adjacency matrix according to the pre-acquired priori syntax information, and performing linear transformation on the hidden layer representation features to obtain a hidden information matrix;
respectively inputting the adjacent matrix and the hidden layer information matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacent information vector and a hidden layer information vector, performing initial splicing processing on the adjacent information vector and the hidden layer information vector to obtain initial splicing characteristics, and performing re-splicing processing on the initial splicing characteristics and the priori syntax information to obtain priori syntax characteristics;
And carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the priori syntax features, inputting a result obtained after feature fusion into a linear layer to carry out probability calculation to obtain fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
Optionally, the weight calculation is performed on the specific word representation based on the attention mechanism to obtain a weight calculation result, and sentence representation features are generated according to the weight calculation result, including:
removing the specific word characterization from the hidden layer representation features, and splicing the remaining word characterizations to obtain residual characterization features;
calculating the attention score corresponding to the specific word token according to an attention score calculation formula in the attention mechanism and the residual token feature, and taking the attention score as a weight calculation result;
and carrying out pooling operation on the specific word representation according to the weight calculation result to obtain sentence representation characteristics.
Optionally, the constructing a corresponding adjacency matrix according to the pre-acquired prior syntax information includes:
Identifying a plurality of information entities in the prior syntax information and entity relationships between a plurality of the information entities;
and storing the entity relation in a preset two-dimensional array to obtain an adjacency matrix.
Optionally, the inputting the adjacency matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacency information vector includes:
acquiring a preset convolution kernel, and multiplying the adjacent matrix and the convolution kernel by using a transformation formula after Fourier transformation to obtain a feature matrix;
and performing Fourier inverse transformation processing on the feature matrix to obtain an adjacent information vector.
Optionally, the transformation formula is:
g*x = U(U^T g · U^T x)
wherein g is the convolution kernel, x is the adjacency matrix, U is the orthogonal matrix of the Fourier transform, and U^T denotes the transpose of the orthogonal matrix.
Optionally, the performing linear transformation on the hidden layer representation feature to obtain a hidden information matrix includes:
and identifying the matrix size of the adjacent matrix, and converting the hidden layer representation characteristic into a matrix with the same matrix size as the adjacent matrix by utilizing a linear layer to obtain a hidden information matrix.
Optionally, the extracting the specific word representation in the hidden layer representation feature includes:
Performing text word segmentation on the emotion text to be recognized to obtain a plurality of word segments to be recognized;
and identifying a specific word in the plurality of to-be-identified segmented words, and taking the representation feature corresponding to the specific word in the hidden layer representation feature as a specific word representation.
In order to solve the above problems, the present invention also provides an emotion recognition device based on a specific word, the device comprising:
the feature extraction module is used for acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model and extracting hidden layer representation features in the preset double-layer representation model;
the weight calculation module is used for extracting specific word representation in the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result;
the linear transformation module is used for constructing a corresponding adjacent matrix according to the pre-acquired priori syntax information, and carrying out linear transformation on the hidden layer representation characteristics to obtain a hidden information matrix;
the emotion analysis module is used for respectively inputting the adjacent matrix and the hidden layer information matrix into a preset graph convolution network to carry out information convolution processing to obtain an adjacent information vector and a hidden layer information vector, carrying out initial splicing processing on the adjacent information vector and the hidden layer information vector to obtain initial splicing characteristics, carrying out splicing processing on the initial splicing characteristics and the priori syntactic information again to obtain priori syntactic characteristics, carrying out primary fusion on the hidden layer representation characteristics and the sentence representation characteristics to obtain initial fusion characteristics, carrying out characteristic fusion on the initial fusion characteristics and the priori syntactic characteristics, inputting a result obtained by characteristic fusion into a linear layer to carry out probability calculation to obtain fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the specific word-based emotion recognition method described above.
In order to solve the above-mentioned problems, the present invention also provides a storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-mentioned specific word-based emotion recognition method.
According to the embodiment of the invention, hidden layer representation features corresponding to the emotion text to be recognized are extracted, weight calculation is performed on the specific word representation based on an attention mechanism, and sentence representation features are generated according to the weight calculation result; priori syntax features are obtained by using a graph convolution network, relieving the influence of the priori syntax information; the hidden layer representation features, the sentence representation features and the priori syntax features are fused, and the feature fusion result is input into a linear layer for probability calculation to obtain a fusion probability, which makes the feature fusion result more accurate and involves more dimensions; and emotion analysis is performed according to the fusion probability and a preset emotion reference library to obtain the emotion recognition result. Therefore, the emotion recognition method and device based on specific words, the electronic device and the storage medium provided by the invention can solve the problem of low emotion recognition accuracy.
Drawings
FIG. 1 is a schematic flow chart of an emotion recognition method based on specific words according to an embodiment of the present invention;
FIG. 2 is a detailed flow chart of one of the steps shown in FIG. 1;
FIG. 3 is a functional block diagram of an emotion recognition device based on specific words according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device for implementing the emotion recognition method based on specific words according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides an emotion recognition method based on specific words. The execution subject of the emotion recognition method based on specific words includes, but is not limited to, at least one of a server, a terminal and the like that can be configured to execute the method provided by the embodiment of the application. In other words, the emotion recognition method based on specific words may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms.
Referring to fig. 1, a schematic flow chart of an emotion recognition method based on specific words according to an embodiment of the present invention is shown. In this embodiment, the emotion recognition method based on specific words includes the following steps S1 to S5:
S1, acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model, and extracting hidden layer representation features in the preset double-layer representation model.
In the embodiment of the invention, the emotion text to be recognized is sentence text that requires emotion recognition, and the double-layer representation model (Bidirectional Encoder Representations from Transformers, BERT) is a pre-trained language representation model which, unlike earlier approaches, is not pre-trained with a traditional unidirectional language model or by shallowly splicing two unidirectional language models. The hidden layer representation features in the preset double-layer representation model are the hidden states output by the last network layer after the emotion text to be recognized passes through the multi-layer structure of the double-layer representation model.
Wherein the hidden layer representation feature is represented by H.
For example, in the field of financial technology, the emotion text to be recognized is sentence text requiring emotion recognition in a conversation between a customer and a business person during a financial transaction, such as the customer saying: "I am very happy to buy this product A that you recommend."
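As a rough illustration of this step, the sketch below extracts such last-layer hidden states with a pre-trained BERT model. The Hugging Face transformers API and the bert-base-chinese checkpoint are assumptions made for the example; the patent itself only requires a preset pre-trained double-layer representation model.

```python
# Hedged sketch: obtain the last-layer hidden states H of the emotion text to be recognized.
# "bert-base-chinese" and the transformers API are illustrative assumptions, not the
# patent's prescribed implementation.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")

text = "我很开心，买了你推荐的这款产品A。"   # emotion text to be recognized
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

H = outputs.last_hidden_state.squeeze(0)   # hidden layer representation features, (seq_len, 768)
print(H.shape)
```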
S2, extracting specific word representation in the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result.
In the embodiment of the invention, since the emotion text to be recognized is a sentence text, and a sentence text may contain a plurality of different specific words, a specific word refers to a word that cannot be replaced by other words; most specific words are proper nouns or phrases that can only be used in specific situations and language contexts. Since the emotion polarities of different specific words may be different, the specific word characterizations in the hidden layer representation features need to be extracted for subsequent data processing.
Specifically, the extracting the specific word representation in the hidden layer representation feature includes:
Performing text word segmentation on the emotion text to be recognized to obtain a plurality of word segments to be recognized;
and identifying a specific word in the plurality of to-be-identified segmented words, and taking the representation feature corresponding to the specific word in the hidden layer representation feature as a specific word representation.
In detail, text word segmentation is performed on the emotion text to be recognized by a text word segmenter to obtain a plurality of word segments to be recognized, where the word segments to be recognized may be single words, meaningless words, compound words and the like; the specific words among the word segments to be recognized are identified, and the representation features corresponding to the specific words in the hidden layer representation features are used as the specific word representation.
For example, the specific word is characterized as h_i, and the word characterizations corresponding to the words to be recognized other than the specific word are h_1, h_2, …, h_{i-1}, h_{i+1}, …, h_n.
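The sketch below illustrates one way to isolate the specific word characterization h_i and the remaining characterizations from word-level features. The jieba segmenter, the example specific word and the assumption that H holds one row per word segment are illustrative simplifications (real BERT features are subword-level and would need an alignment step).

```python
# Hedged sketch: pick out the specific word characterization h_i from word-level features H.
# jieba is used only as an example segmenter; the patent does not prescribe a segmenter.
import jieba
import torch

text = "我很开心，买了你推荐的这款产品A。"
words = jieba.lcut(text)                       # word segments to be recognized
H = torch.randn(len(words), 768)               # assumed word-level hidden layer features

i = words.index("产品") if "产品" in words else 0   # index of the specific word (fallback for robustness)
h_i = H[i]                                     # specific word characterization
H_o = torch.cat([H[:i], H[i + 1:]])            # remaining word characterizations, spliced
```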
Further, referring to fig. 2, the weight calculation is performed on the specific word representation based on the attention mechanism to obtain a weight calculation result, and sentence representation features are generated according to the weight calculation result, which includes the following steps S21-S23:
S21, splicing the word characterizations that remain after the specific word characterization is removed from the hidden layer representation features to obtain residual characterization features;
S22, calculating the attention score corresponding to the specific word token according to an attention score calculation formula in the attention mechanism and the residual token characteristics, and taking the attention score as a weight calculation result;
S23, performing a pooling operation on the specific word representation according to the weight calculation result to obtain sentence representation features.
In detail, the word characterizations other than the specific word characterization are denoted as h_1, h_2, …, h_{i-1}, h_{i+1}, …, h_n, and splicing the word characterizations with the specific word characterization removed from the hidden layer representation features gives the residual characterization features H_o = (h_1, h_2, …, h_{i-1}, h_{i+1}, …, h_n). The pooling operation is a max pooling operation.
Preferably, the attention mechanism (Attention Mechanism) can provide a neural network with the ability to concentrate on a subset of its inputs, and attention can be applied to any type of input regardless of its shape. In situations where computing power is limited, the attention mechanism is a resource allocation scheme and the main means of solving the information overload problem, allocating computing resources to more important tasks. In this scheme, the attention mechanism is used to calculate the weight relation between the specific word and its context words, and the optimized weights are updated automatically.
Specifically, the attention score calculation formula is:
S = softmax(h_i × H_o) × H_o
wherein S is the attention score, h_i is the specific word characterization, H_o is the residual characterization features, i denotes the index of the specific word characterization, and o denotes the index of the residual characterization features.
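A minimal sketch of steps S22-S23 under this formula is given below. The tensor shapes of h_i and H_o, and the way the max pooling combines the specific word characterization with the weight calculation result, are not fully specified in the patent, so this is only one plausible reading.

```python
# Hedged sketch of S = softmax(h_i x H_o) x H_o followed by max pooling (steps S22-S23).
# h_i: (d,) specific word characterization; H_o: (n-1, d) residual characterization features.
import torch
import torch.nn.functional as F

def sentence_representation(h_i: torch.Tensor, H_o: torch.Tensor) -> torch.Tensor:
    scores = F.softmax(H_o @ h_i, dim=0)     # attention of the specific word over context words
    S = scores @ H_o                         # weight calculation result (attention score), shape (d,)
    # S23: pool the specific word characterization with the weights; an element-wise
    # maximum of h_i and S is used here as one plausible reading of "max pooling".
    return torch.maximum(h_i, S)

sentence_feature = sentence_representation(torch.randn(768), torch.randn(12, 768))
```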
S3, constructing a corresponding adjacent matrix according to the pre-acquired priori syntactic information, and performing linear transformation on the hidden layer representation features to obtain a hidden information matrix.
In the embodiment of the invention, the priori syntax information refers to syntax information of sentences of emotion texts to be identified.
Specifically, the constructing a corresponding adjacency matrix according to the pre-acquired prior syntax information includes:
identifying a plurality of information entities in the prior syntax information and entity relationships between a plurality of the information entities;
and storing the entity relation in a preset two-dimensional array to obtain an adjacency matrix.
In detail, the data corresponding to the entity relationship is stored in a two-dimensional array, which is called an adjacency matrix. The adjacency matrix is divided into a directed graph adjacency matrix and an undirected graph adjacency matrix, and the adjacency matrix in the scheme is an undirected adjacency matrix.
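The sketch below shows such a construction, assuming the entity relationships are dependency edges (pairs of word indices) from a syntactic parse; the self-loops on the diagonal are a common graph-convolution convention and are an added assumption, not something the patent prescribes.

```python
# Hedged sketch: store entity relationships (assumed to be dependency edges) in a
# two-dimensional array to obtain an undirected adjacency matrix.
import numpy as np

def build_adjacency(num_nodes: int, edges: list[tuple[int, int]]) -> np.ndarray:
    A = np.eye(num_nodes)          # self-loops keep each node's own information (assumption)
    for head, dependent in edges:
        A[head, dependent] = 1.0   # undirected graph: symmetric entries
        A[dependent, head] = 1.0
    return A

# e.g. edges produced by a dependency parse of the emotion text
A = build_adjacency(6, [(0, 1), (1, 2), (2, 4), (3, 4), (4, 5)])
print(A)
```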
Further, the performing linear transformation on the hidden layer representation feature to obtain a hidden information matrix includes:
And identifying the matrix size of the adjacent matrix, and converting the hidden layer representation characteristic into a matrix with the same matrix size as the adjacent matrix by utilizing a linear layer to obtain a hidden information matrix.
Since the hidden information matrix and the adjacency matrix need to be fused later, the hidden layer representation features are linearly transformed to obtain the hidden information matrix, which improves the accuracy and efficiency of the fusion.
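A minimal sketch of this transformation is shown below; realizing it as a single nn.Linear layer whose output width equals the matrix size of the adjacency matrix is an assumption for illustration.

```python
# Hedged sketch: linearly transform the hidden layer representation features H (n x d)
# into a hidden information matrix with the same size as the n x n adjacency matrix.
import torch
import torch.nn as nn

n, d = 6, 768                        # n word nodes (size of the adjacency matrix), assumed BERT width
H = torch.randn(n, d)                # hidden layer representation features
to_matrix = nn.Linear(d, n)          # linear layer sized from the adjacency matrix
hidden_info_matrix = to_matrix(H)    # hidden information matrix, shape (n, n)
```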
S4, respectively inputting the adjacent matrix and the hidden layer information matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacent information vector and a hidden layer information vector, performing initial splicing processing on the adjacent information vector and the hidden layer information vector to obtain initial splicing characteristics, and performing re-splicing processing on the initial splicing characteristics and the priori syntax information to obtain priori syntax characteristics.
In the embodiment of the invention, the graph convolution network can further extract features that fuse the syntax information. A graph convolutional neural network is a neural network architecture for operating on graph data.
Specifically, the inputting the adjacency matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacency information vector includes:
acquiring a preset convolution kernel, and multiplying the adjacent matrix and the convolution kernel by using a transformation formula after Fourier transformation to obtain a feature matrix;
and performing Fourier inverse transformation processing on the feature matrix to obtain an adjacent information vector.
In detail, the convolution kernel is a filter function.
Specifically, the transformation formula is:
g*x = U(U^T g · U^T x)
wherein g is the convolution kernel, x is the adjacency matrix, U is the orthogonal matrix of the Fourier transform, and U^T denotes the transpose of the orthogonal matrix.
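The sketch below evaluates this formula on a toy adjacency matrix. The patent does not state how the orthogonal matrix U is obtained; here it is taken from the eigendecomposition of the graph Laplacian L = D - A (the usual graph Fourier basis), the convolution kernel g is assumed to have the same shape as the adjacency matrix, and the dot is read as an element-wise product in the spectral domain.

```python
# Hedged sketch of g * x = U (U^T g · U^T x), with x the adjacency matrix.
import numpy as np

def graph_conv(A: np.ndarray, g: np.ndarray) -> np.ndarray:
    D = np.diag(A.sum(axis=1))
    L = D - A                          # graph Laplacian (assumption about how U is derived)
    _, U = np.linalg.eigh(L)           # columns of U: orthogonal graph Fourier basis
    feature = (U.T @ g) * (U.T @ A)    # spectral-domain product -> feature matrix
    return U @ feature                 # inverse Fourier transform -> adjacency information

n = 4
A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # toy undirected adjacency matrix
g = np.random.rand(n, n)                            # preset convolution kernel (assumed n x n)
adjacency_info = graph_conv(A, g)
```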
Further, the method for inputting the hidden information matrix into the preset graph convolution network to perform information convolution processing to obtain the hidden information vector is consistent with the method for inputting the adjacent matrix into the preset graph convolution network to perform information convolution processing to obtain the adjacent information vector, which is not described herein.
Specifically, initial splicing processing is performed on the adjacency information vector and the hidden layer information vector to obtain initial splicing features, and the initial splicing features and the priori syntax information are spliced again to obtain priori syntax features.
S5, carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the priori syntax features, inputting a feature fusion result into a linear layer to carry out probability calculation to obtain fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
In the embodiment of the invention, the hidden layer representation features and the sentence representation features are first fused to obtain initial fusion features, the initial fusion features and the priori syntax features are then fused to obtain a feature fusion result, and the feature fusion result is input into a linear layer for probability calculation to obtain a fusion probability. The linear layer contains an activation function, which may be a softmax function or a sigmoid function, and the fusion probability is obtained from the activation function. Since the preset emotion reference library contains different probability values and the emotion types corresponding to those probability values, emotion analysis can be performed according to the fusion probability and the preset emotion reference library to obtain an emotion recognition result.
In detail, the emotion recognition result includes different evaluations of emotion text to be recognized, such as negative evaluation, neutral evaluation, or positive evaluation.
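A compact sketch of step S5 under these descriptions is given below. Concatenation is used for both fusion steps, a three-class library (negative, neutral, positive) stands in for the preset emotion reference library, and the feature dimensions are placeholders; all of these are illustrative assumptions rather than the patent's prescribed implementation.

```python
# Hedged sketch of step S5: fuse the features, compute the fusion probability with a
# linear layer plus softmax, and look up the emotion type in a reference library.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 768
hidden_feat = torch.randn(d)       # pooled hidden layer representation feature
sentence_feat = torch.randn(d)     # sentence representation feature
syntax_feat = torch.randn(d)       # priori syntax feature

initial_fusion = torch.cat([hidden_feat, sentence_feat])   # primary fusion
fused = torch.cat([initial_fusion, syntax_feat])           # feature fusion result

linear = nn.Linear(fused.numel(), 3)                       # linear layer over 3 emotion classes
fusion_prob = F.softmax(linear(fused), dim=-1)             # fusion probability

emotion_reference = {0: "negative", 1: "neutral", 2: "positive"}   # preset emotion reference library
emotion_result = emotion_reference[int(fusion_prob.argmax())]
print(emotion_result, fusion_prob.tolist())
```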
According to the embodiment of the invention, hidden layer representation features corresponding to the emotion text to be recognized are extracted, weight calculation is performed on the specific word representation based on an attention mechanism, and sentence representation features are generated according to the weight calculation result; priori syntax features are obtained by using a graph convolution network, relieving the influence of the priori syntax information; the hidden layer representation features, the sentence representation features and the priori syntax features are fused, and the feature fusion result is input into a linear layer for probability calculation to obtain a fusion probability, which makes the feature fusion result more accurate and involves more dimensions; and emotion analysis is performed according to the fusion probability and a preset emotion reference library to obtain the emotion recognition result. Therefore, the emotion recognition method based on specific words provided by the invention can solve the problem of low emotion recognition accuracy.
Fig. 3 is a functional block diagram of an emotion recognition device based on a specific word according to an embodiment of the present invention.
The emotion recognition device 100 based on specific words according to the present invention may be installed in an electronic apparatus. Depending on the implemented functions, the emotion recognition device 100 based on specific words may include a feature extraction module 101, a weight calculation module 102, a linear transformation module 103, and an emotion analysis module 104. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments that are stored in the memory of the electronic device, can be executed by the processor of the electronic device, and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the feature extraction module 101 is configured to obtain an emotion text to be identified, input the emotion text to be identified into a preset double-layer representation model, and extract hidden layer representation features in the preset double-layer representation model;
the weight calculation module 102 is configured to extract a specific word representation in the hidden layer representation feature, perform weight calculation on the specific word representation based on an attention mechanism, obtain a weight calculation result, and generate a sentence representation feature according to the weight calculation result;
the linear transformation module 103 is configured to construct a corresponding adjacency matrix according to pre-acquired prior syntax information, and perform linear transformation on the hidden layer representation feature to obtain a hidden information matrix;
the emotion analysis module 104 is configured to input the adjacency matrix and the hidden layer information matrix into a preset graph convolution network respectively to perform information convolution processing to obtain an adjacency information vector and a hidden layer information vector, perform initial splicing processing on the adjacency information vector and the hidden layer information vector to obtain initial splicing features, splice the initial splicing features and the priori syntax information again to obtain priori syntax features, perform primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, perform feature fusion on the initial fusion features and the priori syntax features, input the result after feature fusion into a linear layer to perform probability calculation to obtain a fusion probability, and perform emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
In detail, the specific embodiments of each module of the emotion recognition device 100 based on specific words are as follows:
step one, acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model, and extracting hidden layer representation features in the preset double-layer representation model.
In the embodiment of the invention, the emotion text to be recognized is sentence text that requires emotion recognition, and the double-layer characterization model is a pre-trained language characterization model which, unlike earlier approaches, is not pre-trained with a traditional unidirectional language model or by shallowly splicing two unidirectional language models. The hidden layer representation features in the preset double-layer representation model are the hidden states output by the last network layer after the emotion text to be recognized passes through the multi-layer structure of the double-layer representation model.
Wherein the hidden layer representation feature is represented by H.
For example, in the field of financial technology, the emotion text to be recognized is sentence text requiring emotion recognition in a conversation between a customer and a business person during a financial transaction, such as the customer saying: "I am very happy to buy this product A that you recommend."
Extracting specific word representation in the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result.
In the embodiment of the invention, since the emotion text to be recognized is a sentence text, and a sentence text may contain a plurality of different specific words, a specific word refers to a word that cannot be replaced by other words; most specific words are proper nouns or phrases that can only be used in specific situations and language contexts. Since the emotion polarities of different specific words may be different, the specific word characterizations in the hidden layer representation features need to be extracted for subsequent data processing.
Specifically, the extracting the specific word representation in the hidden layer representation feature includes:
performing text word segmentation on the emotion text to be recognized to obtain a plurality of word segments to be recognized;
and identifying a specific word in the plurality of to-be-identified segmented words, and taking the representation feature corresponding to the specific word in the hidden layer representation feature as a specific word representation.
In detail, text word segmentation is performed on the emotion text to be recognized by a text word segmenter to obtain a plurality of word segments to be recognized, where the word segments to be recognized may be single words, meaningless words, compound words and the like; the specific words among the word segments to be recognized are identified, and the representation features corresponding to the specific words in the hidden layer representation features are used as the specific word representation.
For example, the specific word is characterized as h_i, and the word characterizations corresponding to the words to be recognized other than the specific word are h_1, h_2, …, h_{i-1}, h_{i+1}, …, h_n.
Further, the weight calculation is performed on the specific word representation based on the attention mechanism, and sentence representation features are generated according to the weight calculation result, including:
removing the specific word characterization from the hidden layer representation features, and splicing the remaining word characterizations to obtain residual characterization features;
calculating the attention score corresponding to the specific word token according to an attention score calculation formula in the attention mechanism and the residual token feature, and taking the attention score as a weight calculation result;
and carrying out pooling operation on the specific word representation according to the weight calculation result to obtain sentence representation characteristics.
In detail, the word characterizations other than the specific word characterization are denoted as h_1, h_2, …, h_{i-1}, h_{i+1}, …, h_n, and splicing the word characterizations with the specific word characterization removed from the hidden layer representation features gives the residual characterization features H_o = (h_1, h_2, …, h_{i-1}, h_{i+1}, …, h_n). The pooling operation is a max pooling operation.
Preferably, the attention mechanism can provide a neural network with the ability to focus on a subset of its inputs, and attention can be applied to any type of input regardless of its shape. In situations where computing power is limited, the attention mechanism is a resource allocation scheme and the main means of solving the information overload problem, allocating computing resources to more important tasks. In this scheme, the attention mechanism is used to calculate the weight relation between the specific word and its context words, and the optimized weights are updated automatically.
Specifically, the attention score calculation formula is:
S = softmax(h_i × H_o) × H_o
wherein S is the attention score, h_i is the specific word characterization, H_o is the residual characterization features, i denotes the index of the specific word characterization, and o denotes the index of the residual characterization features.
Thirdly, constructing a corresponding adjacent matrix according to the pre-acquired priori syntactic information, and performing linear transformation on the hidden layer representation features to obtain a hidden information matrix.
In the embodiment of the invention, the priori syntax information refers to syntax information of sentences of emotion texts to be identified.
Specifically, the constructing a corresponding adjacency matrix according to the pre-acquired prior syntax information includes:
identifying a plurality of information entities in the prior syntax information and entity relationships between a plurality of the information entities;
and storing the entity relation in a preset two-dimensional array to obtain an adjacency matrix.
In detail, the data corresponding to the entity relationship is stored in a two-dimensional array, which is called an adjacency matrix. The adjacency matrix is divided into a directed graph adjacency matrix and an undirected graph adjacency matrix, and the adjacency matrix in the scheme is an undirected adjacency matrix.
Further, the performing linear transformation on the hidden layer representation feature to obtain a hidden information matrix includes:
And identifying the matrix size of the adjacent matrix, and converting the hidden layer representation characteristic into a matrix with the same matrix size as the adjacent matrix by utilizing a linear layer to obtain a hidden information matrix.
Since the hidden information matrix and the adjacency matrix need to be fused later, the hidden layer representation features are linearly transformed to obtain the hidden information matrix, which improves the accuracy and efficiency of the fusion.
And step four, respectively inputting the adjacent matrix and the hidden layer information matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacent information vector and a hidden layer information vector, performing initial splicing processing on the adjacent information vector and the hidden layer information vector to obtain initial splicing characteristics, and performing re-splicing processing on the initial splicing characteristics and the priori syntax information to obtain priori syntax characteristics.
In the embodiment of the invention, the graph convolution network can further extract features that fuse the syntax information. A graph convolutional neural network is a neural network architecture for operating on graph data.
Specifically, the inputting the adjacency matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacency information vector includes:
acquiring a preset convolution kernel, and multiplying the adjacent matrix and the convolution kernel by using a transformation formula after Fourier transformation to obtain a feature matrix;
and performing Fourier inverse transformation processing on the feature matrix to obtain an adjacent information vector.
In detail, the convolution kernel is a filter function.
Specifically, the transformation formula is:
g*x = U(U^T g · U^T x)
wherein g is the convolution kernel, x is the adjacency matrix, U is the orthogonal matrix of the Fourier transform, and U^T denotes the transpose of the orthogonal matrix.
Further, the method for inputting the hidden information matrix into the preset graph convolution network to perform information convolution processing to obtain the hidden information vector is consistent with the method for inputting the adjacent matrix into the preset graph convolution network to perform information convolution processing to obtain the adjacent information vector, which is not described herein.
Specifically, initial splicing processing is performed on the adjacency information vector and the hidden layer information vector to obtain initial splicing features, and the initial splicing features and the priori syntax information are spliced again to obtain priori syntax features.
Fifthly, carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the priori syntax features, inputting a feature fusion result into a linear layer to carry out probability calculation to obtain fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
In the embodiment of the invention, the hidden layer representation features and the sentence representation features are first fused to obtain initial fusion features, the initial fusion features and the priori syntax features are then fused to obtain a feature fusion result, and the feature fusion result is input into a linear layer for probability calculation to obtain a fusion probability. The linear layer contains an activation function, which may be a softmax function or a sigmoid function, and the fusion probability is obtained from the activation function. Since the preset emotion reference library contains different probability values and the emotion types corresponding to those probability values, emotion analysis can be performed according to the fusion probability and the preset emotion reference library to obtain an emotion recognition result.
In detail, the emotion recognition result includes different evaluations of emotion text to be recognized, such as negative evaluation, neutral evaluation, or positive evaluation.
According to the embodiment of the invention, hidden layer representation features corresponding to the emotion text to be recognized are extracted, weight calculation is performed on the specific word representation based on an attention mechanism, and sentence representation features are generated according to the weight calculation result; priori syntax features are obtained by using a graph convolution network, relieving the influence of the priori syntax information; the hidden layer representation features, the sentence representation features and the priori syntax features are fused, and the feature fusion result is input into a linear layer for probability calculation to obtain a fusion probability, which makes the feature fusion result more accurate and involves more dimensions; and emotion analysis is performed according to the fusion probability and a preset emotion reference library to obtain the emotion recognition result. Therefore, the emotion recognition device based on specific words provided by the invention can solve the problem of low emotion recognition accuracy.
Fig. 4 is a schematic structural diagram of an electronic device for implementing a specific word-based emotion recognition method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program stored in the memory 11 and executable on the processor 10, such as an emotion recognition program based on specific words.
The processor 10 may be formed by an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be formed by a plurality of integrated circuits packaged with the same function or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and so on. The processor 10 is the control unit (Control Unit) of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, runs or executes programs or modules stored in the memory 11 (for example, the emotion recognition program based on specific words, etc.), and invokes data stored in the memory 11 to perform various functions of the electronic device and process data.
The memory 11 includes at least one type of readable storage medium including flash memory, a removable hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, such as a mobile hard disk of the electronic device. The memory 11 may in other embodiments also be an external storage device of the electronic device, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only for storing application software installed in an electronic device and various types of data, such as codes of emotion recognition programs based on specific words, but also for temporarily storing data that has been output or is to be output.
The communication bus 12 may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., a WI-FI interface, a Bluetooth interface, etc.), typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a display (Display) or an input unit such as a keyboard (Keyboard); optionally, the user interface may also be a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also be appropriately referred to as a display screen or display unit, and is used for displaying information processed in the electronic device and for displaying a visual user interface.
Fig. 4 shows only an electronic device with some of its components; it will be understood by those skilled in the art that the structure shown in fig. 4 does not constitute a limitation of the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
It should be understood that the described embodiments are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The emotion recognition program based on a specific word stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions, which when executed in the processor 10, can realize:
Acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model, and extracting hidden layer representation features in the preset double-layer representation model;
extracting specific word representation in the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result;
constructing a corresponding adjacency matrix according to the pre-acquired priori syntax information, and performing linear transformation on the hidden layer representation features to obtain a hidden information matrix;
respectively inputting the adjacent matrix and the hidden layer information matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacent information vector and a hidden layer information vector, performing initial splicing processing on the adjacent information vector and the hidden layer information vector to obtain initial splicing characteristics, and performing re-splicing processing on the initial splicing characteristics and the priori syntax information to obtain priori syntax characteristics;
and carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the priori syntax features, inputting a result obtained after feature fusion into a linear layer to carry out probability calculation to obtain fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
In particular, the specific implementation method of the above instructions by the processor 10 may refer to the description of the relevant steps in the corresponding embodiment of the drawings, which is not repeated herein.
Further, the modules/units integrated in the electronic device 1 may be stored in a storage medium if they are implemented in the form of software functional units and sold or used as separate products. The storage medium may be volatile or nonvolatile. For example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The present invention also provides a storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model, and extracting hidden layer representation features in the preset double-layer representation model;
extracting specific word representation in the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result;
Constructing a corresponding adjacency matrix according to the pre-acquired priori syntax information, and performing linear transformation on the hidden layer representation features to obtain a hidden information matrix;
respectively inputting the adjacent matrix and the hidden layer information matrix into a preset graph convolution network to perform information convolution processing to obtain an adjacent information vector and a hidden layer information vector, performing initial splicing processing on the adjacent information vector and the hidden layer information vector to obtain initial splicing characteristics, and performing re-splicing processing on the initial splicing characteristics and the priori syntax information to obtain priori syntax characteristics;
and carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the priori syntax features, inputting a result obtained after feature fusion into a linear layer to carry out probability calculation to obtain fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the modules is merely a logical functional division, and other manners of division may be adopted in actual implementation.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated and linked by cryptographic means, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The embodiments of the present application may acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims may also be implemented by a single unit or means in software or hardware. The terms first, second, and the like are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications and equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope.

Claims (10)

1. An emotion recognition method based on a specific word, the method comprising:
acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model, and extracting hidden layer representation features in the preset double-layer representation model;
extracting a specific word representation from the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result;
constructing a corresponding adjacency matrix according to the pre-acquired prior syntax information, and performing linear transformation on the hidden layer representation features to obtain a hidden layer information matrix;
respectively inputting the adjacency matrix and the hidden layer information matrix into a preset graph convolution network for information convolution processing to obtain an adjacency information vector and a hidden layer information vector, performing initial splicing processing on the adjacency information vector and the hidden layer information vector to obtain initial splicing features, and performing re-splicing processing on the initial splicing features and the prior syntax information to obtain prior syntax features;
and carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the prior syntax features, inputting the result of the feature fusion into a linear layer for probability calculation to obtain a fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
2. The emotion recognition method based on a specific word of claim 1, wherein said carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result, comprises:
removing the word representations belonging to the specific word representation from the hidden layer representation features, and splicing the remaining word representations to obtain a residual representation feature;
calculating the attention score corresponding to the specific word representation according to an attention score calculation formula of the attention mechanism and the residual representation feature, and taking the attention score as the weight calculation result;
and carrying out a pooling operation on the specific word representation according to the weight calculation result to obtain the sentence representation features.
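A minimal sketch of the weighting and pooling described in claim 2 follows. The dot-product score against the mean of the residual representation feature and the weighted-sum pooling are assumptions; the claim does not name the exact attention score calculation formula.

import torch

d = 16
# Hypothetical tokens: indices 1 and 2 form the specific word, the rest are residual.
hidden_repr = torch.randn(6, d)                 # hidden layer representation features
specific_word_repr = hidden_repr[1:3]           # specific word representation (2 sub-words)
residual_repr = torch.cat([hidden_repr[:1], hidden_repr[3:]])  # residual representation feature

# Attention score of each specific-word vector against the residual context.
scores = specific_word_repr @ residual_repr.mean(dim=0)   # shape (2,)
weights = torch.softmax(scores / d ** 0.5, dim=0)          # weight calculation result

# Pooling: weighted sum of the specific word representation.
sentence_repr_features = (weights.unsqueeze(-1) * specific_word_repr).sum(dim=0)
print(sentence_repr_features.shape)             # (d,)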
3. The emotion recognition method based on a specific word as set forth in claim 1, wherein said constructing a corresponding adjacency matrix according to the pre-acquired prior syntax information comprises:
identifying a plurality of information entities in the prior syntax information and the entity relationships among the plurality of information entities;
and storing the entity relationships in a preset two-dimensional array to obtain the adjacency matrix.
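A minimal sketch of building the adjacency matrix from such relations follows. It assumes the prior syntax information has already been reduced to (head, dependent) index pairs, for example from a dependency parse, and that relations are treated as undirected; neither assumption is stated in the claim.

import numpy as np

# Hypothetical entity relations over 5 information entities: (head, dependent) pairs.
entity_relations = [(0, 1), (1, 2), (2, 4), (4, 3)]
num_entities = 5

# Store the relations in a two-dimensional array to obtain the adjacency matrix.
adjacency = np.zeros((num_entities, num_entities), dtype=np.float32)
for head, dependent in entity_relations:
    adjacency[head, dependent] = 1.0
    adjacency[dependent, head] = 1.0   # undirected relation (an assumption)

print(adjacency)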
4. The emotion recognition method based on a specific word as set forth in claim 1, wherein said inputting the adjacency matrix into a preset graph convolution network for information convolution processing to obtain an adjacency information vector comprises:
acquiring a preset convolution kernel, and multiplying the Fourier-transformed adjacency matrix and convolution kernel using a transformation formula to obtain a feature matrix;
and performing an inverse Fourier transform on the feature matrix to obtain the adjacency information vector.
5. The emotion recognition method of claim 4, wherein the transformation formula is:
g*x = U(U^T g · U^T x)
wherein g is the convolution kernel, x is the adjacency matrix, U is the orthogonal matrix of the Fourier transform, and U^T denotes the transpose of the orthogonal matrix.
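The formula above is the standard spectral form of graph convolution: transform into the Fourier domain with U^T, multiply element-wise, and transform back with U. In the minimal numerical sketch below, U is assumed to come from the eigendecomposition of the graph Laplacian built from the adjacency matrix, and g is a hypothetical kernel over the nodes; the claims do not state how either is obtained.

import numpy as np

# Small symmetric adjacency matrix, playing the role of x in the formula.
x = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Assumption: the Fourier basis U is the eigenvector matrix of the graph Laplacian.
degree = np.diag(x.sum(axis=1))
laplacian = degree - x
_, U = np.linalg.eigh(laplacian)           # U is orthogonal; columns are eigenvectors

g = np.array([0.5, 1.0, 0.2])              # hypothetical convolution kernel over nodes

# g * x = U(U^T g . U^T x): filter in the spectral domain, then transform back.
spectral = (U.T @ g)[:, None] * (U.T @ x)  # element-wise product in the Fourier domain
conv = U @ spectral                        # inverse Fourier transform
print(conv)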
6. The emotion recognition method based on a specific word as set forth in claim 1, wherein said performing linear transformation on the hidden layer representation features to obtain a hidden layer information matrix comprises:
identifying the matrix size of the adjacency matrix, and converting the hidden layer representation features into a matrix of the same size as the adjacency matrix by using a linear layer to obtain the hidden layer information matrix.
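A minimal sketch of this linear transformation follows, assuming the hidden layer representation features have shape (n, hidden_dim) and the adjacency matrix is n x n, so that the linear layer maps hidden_dim down to n; the exact shapes are assumptions for illustration.

import torch
import torch.nn as nn

n, hidden_dim = 5, 768                       # hypothetical sizes
adjacency = torch.zeros(n, n)
hidden_repr = torch.randn(n, hidden_dim)     # hidden layer representation features

# Identify the matrix size of the adjacency matrix and project to match it.
target_size = adjacency.shape[-1]
linear = nn.Linear(hidden_dim, target_size)
hidden_layer_info_matrix = linear(hidden_repr)   # shape (n, n), same as the adjacency matrix

print(hidden_layer_info_matrix.shape, adjacency.shape)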
7. The emotion recognition method based on a specific word of claim 1, wherein said extracting a specific word representation from the hidden layer representation features comprises:
performing text word segmentation on the emotion text to be identified to obtain a plurality of word segments to be identified;
and identifying a specific word among the plurality of word segments to be identified, and taking the representation feature corresponding to the specific word in the hidden layer representation features as the specific word representation.
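A minimal sketch of this extraction follows. Whitespace splitting stands in for real word segmentation, the specific-word list is hypothetical, and the hidden layer representation features are assumed to be aligned one vector per word segment; in practice the tokenizer of the double-layer representation model would be used.

import torch

text = "the service was slow but the food was great"
specific_words = {"slow", "great"}           # hypothetical specific-word list

# Text word segmentation (whitespace split stands in for a real tokenizer).
segments = text.split()

# Hypothetical hidden layer representation features, one vector per segment.
hidden_repr = torch.randn(len(segments), 16)

# Take the representation features of the specific words as the specific word representation.
indices = [i for i, w in enumerate(segments) if w in specific_words]
specific_word_repr = hidden_repr[indices]
print(indices, specific_word_repr.shape)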
8. An emotion recognition device based on a specific word, the device comprising:
the feature extraction module is used for acquiring an emotion text to be identified, inputting the emotion text to be identified into a preset double-layer representation model and extracting hidden layer representation features in the preset double-layer representation model;
the weight calculation module is used for extracting a specific word representation from the hidden layer representation features, carrying out weight calculation on the specific word representation based on an attention mechanism to obtain a weight calculation result, and generating sentence representation features according to the weight calculation result;
the linear transformation module is used for constructing a corresponding adjacency matrix according to the pre-acquired prior syntax information, and carrying out linear transformation on the hidden layer representation features to obtain a hidden layer information matrix;
the emotion analysis module is used for respectively inputting the adjacency matrix and the hidden layer information matrix into a preset graph convolution network for information convolution processing to obtain an adjacency information vector and a hidden layer information vector, carrying out initial splicing processing on the adjacency information vector and the hidden layer information vector to obtain initial splicing features, carrying out re-splicing processing on the initial splicing features and the prior syntax information to obtain prior syntax features, carrying out primary fusion on the hidden layer representation features and the sentence representation features to obtain initial fusion features, carrying out feature fusion on the initial fusion features and the prior syntax features, inputting the result of the feature fusion into a linear layer for probability calculation to obtain a fusion probability, and carrying out emotion analysis according to the fusion probability and a preset emotion reference library to obtain an emotion recognition result.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the specific word-based emotion recognition method of any one of claims 1 to 7.
10. A storage medium storing a computer program which, when executed by a processor, implements the specific word-based emotion recognition method of any one of claims 1 to 7.
CN202310443836.8A 2023-04-13 2023-04-13 Emotion recognition method and device based on specific words, electronic equipment and storage medium Pending CN116451706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310443836.8A CN116451706A (en) 2023-04-13 2023-04-13 Emotion recognition method and device based on specific words, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310443836.8A CN116451706A (en) 2023-04-13 2023-04-13 Emotion recognition method and device based on specific words, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116451706A true CN116451706A (en) 2023-07-18

Family

ID=87130053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310443836.8A Pending CN116451706A (en) 2023-04-13 2023-04-13 Emotion recognition method and device based on specific words, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116451706A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination