CN111414478A - Social network emotion modeling method based on deep recurrent neural network - Google Patents


Info

Publication number
CN111414478A
CN111414478A
Authority
CN
China
Prior art keywords
model
deep
user
emotion
social network
Prior art date
Legal status
Granted
Application number
CN202010174687.6A
Other languages
Chinese (zh)
Other versions
CN111414478B (en)
Inventor
王晓慧
Current Assignee
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN202010174687.6A priority Critical patent/CN111414478B/en
Publication of CN111414478A publication Critical patent/CN111414478A/en
Application granted granted Critical
Publication of CN111414478B publication Critical patent/CN111414478B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G06F16/353 Clustering; Classification into predefined classes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides a social network emotion modeling method based on a deep recurrent neural network. The method comprises: processing heterogeneous social network data based on an attention model; constructing a deep LSTM-based long-term memory model, including constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models; and inputting the processed data into the constructed deep LSTM-based long-term memory model, and outputting the emotional states of users in the social network at different moments.

Description

Social network emotion modeling method based on deep recurrent neural network
Technical Field
The invention relates to the technical field of multi-modal affective computing, and in particular to a social network emotion modeling method based on a deep recurrent neural network.
Background
With the development of the internet, understanding human emotion through social networks has become a research hotspot across disciplines such as sociology, psychology and computer science, and is a core problem of affective computing with important research significance. Media push all kinds of information, including news, on social networks; if a person's emotion can be accurately analyzed and understood, precise personalized intelligent recommendation can be performed through the social platform. Social network emotion research therefore has important practical value.
There have been studies that analyze a user's emotional state from user behaviors in a social network, such as microblogs, geographical locations and phone records, and that predict user personality from the structure of the social network and user behaviors. Studies based on Facebook data indicate that users' emotions on a social network are closely related to their social activities and interactions. Sociological studies have shown that emotion is social in nature: how a person feels depends on whom they are in contact with and how close or distant those relationships are. Existing researchers have built emotion influence and propagation models from users' tags, microblogs, published articles and so on.
Deep neural networks are well suited to processing the various heterogeneous data in a social network, enabling modeling from heterogeneous data to emotion. Besides traditional static neural networks, the Recurrent Neural Network (RNN) has received growing attention in recent years. Compared with a static neural network, it adds recurrent feedback, i.e. a state memory function: when processing the current input, the historical state is also taken as input, so that historical information is "remembered", a behavior pattern closer to the human brain. However, the traditional RNN does not handle long-term dependencies well; for example, the probability of a word appearing in natural language may depend on information from much earlier in the sequence. The Long Short-Term Memory (LSTM) network introduces gating units that control how information is retained and passed on, so that long-term memory is transferred effectively, and has become the mainstream recurrent model.
Shizhe Chen and Qin Jin proposed directly adopting the classic LSTM for emotion modeling on top of various features. The model structure is relatively simple and the various features need to be extracted manually; moreover, compared with a social network, plain audio and video data are more regular, and their analysis involves fewer factors.
Starting from the new deep belief network training method proposed by Geoffrey Hinton and his student Simon Osindero in 2006, deep learning has developed rapidly. Compared with earlier shallow learners, deep learners have superior feature learning ability, describe data more essentially, and can learn more complex concepts. Many feature extraction steps that previously required manual coding have been completely replaced by homogeneous networks in deep learning, greatly reducing the difficulty of developing new algorithms for specific tasks. The study of Alex Krizhevsky et al. showed that, given sufficient training data, deep networks often extract better feature representations than manually elaborated ones. Research on deep learning continues to surge in academia, and the participation of large enterprises such as Google and Microsoft has pushed it further, e.g. the famous Google Brain and Microsoft's 152-layer residual network. DeepMind used deep neural networks to build an AI that automatically learns to play video games from raw pixel input without manual labeling, eventually surpassing human players. AlphaGo realized a high-quality Go AI with deep networks, defeating the European champion 5:0, a great breakthrough in Go (Weiqi), a game traditionally considered beyond computers.
The method takes emotion cognition in the social network as its starting point and establishes a social network emotion analysis model based on the recurrent neural network: the input is social network data, including heterogeneous data such as text, images, video and network relations, and the output is the emotional states of users in the social network at different moments.
Disclosure of Invention
The invention aims to provide a social network emotion modeling method based on a deep recurrent neural network, which predicts the emotional states of a user at different moments from heterogeneous data such as text, images, video and network relations in the social network, solves the key problem of social network emotion computing, and provides a model basis for applications such as intelligent advertising and recommendation. In addition, the method organically combines the social network, affective computing, deep learning, memory neural networks, attention models and so on. Unlike most existing research on social network emotion, it does not require manually built models of emotion change and influence, avoids many prior assumptions, and has natural advantages in data generality and accuracy.
To solve the above technical problem, an embodiment of the present invention provides the following solutions:
a social network emotion modeling method based on a deep recurrent neural network comprises the following steps:
processing the social network heterogeneous data based on the attention model;
constructing a long-term memory model based on a deep LSTM, which comprises constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models;
and inputting the processed data into the constructed deep LSTM-based long-term memory model, and outputting the emotional states of the users in the social network at different moments.
Preferably, the social network heterogeneous data comprises text, images, audio, video, network relations in the social network.
Preferably, the step of processing the social network heterogeneous data based on the attention model comprises:
extracting information meeting importance distribution from the social network heterogeneous data according to requirements by using an attention model according to the current state, wherein the information comprises the following steps:
generating importance distribution of all heterogeneous data by combining a user emotion state vector with data rough representation and performing sparse sampling, wherein the data rough representation comprises vectorization representation of a label, a title and a thumbnail;
vectorizing the extracted information and generating a compact vector representation for input into a subsequent model.
Wherein, for an image, an AutoEncoder is used to generate a compact vector representation;
for audio, an LSTM-based AutoEncoder is used to generate a compact vector representation;
for video, a single picture is processed with an AutoEncoder, and the resulting sequence is then processed with the audio method;
for text, a word vector is used for representation.
Preferably, the step of constructing a generalized deep neural network residual structure includes:
adding a path from an input end to an internal node on the basic deep neural network structure;
and adding shortcut connections between arbitrary nodes.
Preferably, the step of building a deep LSTM model includes:
constructing an emotion change time sequence model and an influence association time sequence model;
the transfer relationship of each variable is as follows:

$z_{t+1} = \sigma(f_z(H_t, X_{t+1}, R_{t+1}))$

$r_{t+1} = \sigma(f_r(H_t, X_{t+1}, R_{t+1}))$

$\tilde{H}_{t+1} = f(r_{t+1} \odot H_t, X_{t+1}, R_{t+1})$

$H_{t+1} = (1 - z_{t+1}) \odot H_t + z_{t+1} \odot \tilde{H}_{t+1}$

wherein $\sigma$ is the activation function, whose result takes values in $[0, 1]$; $X_{t+1}, R_{t+1}$ are the processed input data; $z_{t+1}, r_{t+1}$ are the two activation amounts generated from the previous state $H_t$ through $\sigma$; and $\tilde{H}_{t+1}$ is the new intermediate state generated by the generalized deep neural network $f$.
Preferably, the step of constructing a deep recurrent neural network fusing multiple LSTM models comprises:
representing with a classical RNN time-series data stream and modeling based on the emotion change time-series model and the influence association time-series model, where each of the two models also takes the state of the other as input;
wherein the modeling and prediction are performed with the following quantities:

$X$ and $I$ respectively denote observed data and processed data; $H$, $A$ and $R$ denote state vectors; $f$ denotes the various mapping functions; $\theta$ denotes the model parameters.

$X_i^t$ denotes the $d_m$ classes of observation data of user $i$ at time $t$; $I_i^t$ denotes the summary vector of the data observed for user $i$ at time $t$, output by $f_{AT}$; $R_{i,j}^t$ is the interaction state vector between user $i$ and friend $j$ at time $t$.

$H_i^t$, the emotional state vector of user $i$ at time $t$, is mapped through an output layer into understandable information $E_i^t$ covering joy, anger, sorrow and grief; this mapping is denoted by the function $f_E$ and is implemented with the deep neural network residual structure.

$A_{i,j}^t$ is the influence state vector of friend $j$ on user $i$ at time $t$, and $\tilde{A}_i^t$ denotes the aggregation vector of the friends' influence on user $i$ at time $t$.

$f_A$ infers the influence state vector of two users at the next moment from their past influence state vectors, their current emotional state vectors and the interaction state vector; $f_{AT}$ is the attention model, which is matched with $H_i^t$ to screen information; $\tilde{A}_i^t = \sum_{j \in Nei(u_i)} A_{i,j}^t$ integrates the influence of other users on user $i$, where $Nei(u_i)$ denotes the associated users of user $i$ in the social network; $f_H$ takes the current emotional state vector of user $i$, the current behavior state vector and the state vector of influence from other users as input, and predicts the emotional state vector at the next moment.
The scheme of the invention at least comprises the following beneficial effects:
the invention fully combines the advantages of processing various heterogeneous data by deep learning and fully simulates the cognitive characteristics of human brain in memory, thereby providing a new idea for processing the emotion analysis problem. Different from traditional social network emotion calculation, the method disclosed by the invention gets rid of dependence on manual assumption and modeling, automatically extracts relevant features and establishes a relationship model of emotion transfer and influence change, avoids deviation between a manual model and an actual situation, and enhances the popularization capability of the system. The method does not depend on specific emotion space, does not need to be retrained in the occasion of switching the emotion space or adding a new emotion type, and reduces the expense. Meanwhile, mass dynamic social network data are automatically screened, downloaded and subjected to incremental learning without being limited by a small amount of static data, and deviation caused by manual steps is avoided. In addition, the multi-class emotion problems are processed by directly utilizing the emotion space, and the problem that the emotion problems are indirectly processed by being split into a plurality of two-class emotion problems is not needed.
Drawings
FIG. 1 is a flowchart of a social network emotion modeling method based on a deep recurrent neural network according to an embodiment of the present invention;
FIGS. 2a and 2b are schematic diagrams of a basic deep neural network structure and a deep neural network residual structure in an embodiment of the present invention, respectively;
FIG. 3 is a schematic illustration of a deep LSTM model in an embodiment of the invention;
FIG. 4 is a schematic diagram of a deep recurrent neural network fusing multiple LSTM models in an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The embodiment of the invention provides a social network emotion modeling method based on a deep recurrent neural network, which comprises the following steps of:
processing the social network heterogeneous data based on the attention model;
constructing a long-term memory model based on a deep LSTM, which comprises constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models;
and inputting the processed data into the constructed deep LSTM-based long-term memory model, and outputting the emotional states of the users in the social network at different moments.
The invention fully combines the advantages of deep learning in processing various heterogeneous data and simulates the cognitive characteristics of human memory, providing a new approach to emotion analysis. Unlike traditional social network emotion computing, the method is free of dependence on manual assumptions and modeling: it automatically extracts relevant features and establishes a relational model of emotion transfer and influence change, avoiding deviation between a manual model and reality and enhancing the generalization ability of the system.
The method does not depend on a specific emotion space and needs no retraining when the emotion space is switched or a new emotion type is added, reducing overhead. Meanwhile, massive dynamic social network data can be automatically screened, downloaded and learned incrementally, without being limited to a small amount of static data, avoiding deviation introduced by manual steps. In addition, multi-class emotion problems are handled directly in the emotion space, without being split into multiple binary classification problems.
Further, the social network heterogeneous data comprises heterogeneous data such as text, images, audio, video and network relations in the social network. In the invention, the social network heterogeneous data is the input, and the constructed model outputs the emotional states of the user at different moments.
Further, the step of processing the social network heterogeneous data based on the attention model comprises:
extracting information meeting importance distribution from the social network heterogeneous data according to requirements by using an attention model according to the current state, wherein the information comprises the following steps:
generating importance distribution of all heterogeneous data by combining a user emotion state vector with data rough representation and performing sparse sampling, wherein the data rough representation comprises vectorization representation of a label, a title and a thumbnail;
vectorizing the extracted information and generating a compact vector representation for input into a subsequent model.
The social network heterogeneous data contains various heterogeneous information such as pictures, audio, video and text, which must first be screened, aggregated and normalized. In the embodiment of the invention, the importance distribution over all data is generated by combining the user's emotion vector with a rough representation of the data, for sparse sampling; the importance distribution is obtained by deep network modeling. In addition, for large media data such as pictures and videos, part of the content is selectively skipped according to the importance distribution, saving resources. The information is then vectorized and spliced into a compact vector representation.
Specifically, for images an AutoEncoder is used to generate the compact vector representation; for audio an LSTM-based AutoEncoder is used; for video, each single picture is processed with the AutoEncoder and the resulting sequence is processed with the audio method; for text, word vectors are used. Outputs of other weak classifiers may also be referenced to make full use of previous research results.
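As an illustrative sketch only, here is how such a compact vector representation could be produced for an image with an autoencoder. The single-hidden-layer architecture, dimensions and training loop are assumptions, not the patent's actual configuration; the LSTM-based AutoEncoder for audio would wrap the same idea around a recurrent encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_code = 64, 8          # e.g. a flattened 8x8 patch -> 8-dim code (assumed sizes)
W_enc = rng.normal(0, 0.1, (d_code, d_in))
W_dec = rng.normal(0, 0.1, (d_in, d_code))

def encode(x):
    return np.tanh(W_enc @ x)          # the compact vector representation

def decode(z):
    return W_dec @ z                   # reconstruction of the input

def train_step(x, lr=0.01):
    """One gradient step on the squared reconstruction error."""
    global W_enc, W_dec
    z = encode(x)
    x_hat = decode(z)
    err = x_hat - x                                  # d(loss)/d(x_hat)
    grad_dec = np.outer(err, z)
    grad_z = W_dec.T @ err
    grad_enc = np.outer(grad_z * (1 - z**2), x)      # tanh' = 1 - tanh^2
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
    return float((err**2).mean())

x = rng.normal(size=d_in)
losses = [train_step(x) for _ in range(200)]
code = encode(x)                        # 8-dim vector fed to the subsequent model
```

After training, only `code` is passed on, so heterogeneous inputs of different sizes all enter the later time-series model as fixed-length vectors.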
Further, the step of constructing the generalized deep neural network residual structure includes:
adding a path from an input end to an internal node on the basic deep neural network structure;
and short-circuiting any node.
On the basic deep neural network structure, the hidden-layer layout and the placement of the shortcut edges are determined experimentally. Converting the learning of a function into the learning of its residual greatly improves the learning efficiency of the deep neural network.
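The generalized residual structure described above can be sketched as follows; the layer sizes, ReLU activations and the particular placement of the input-to-node path are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16                                # assumed common layer width
W1 = rng.normal(0, 0.1, (d, d))
W2 = rng.normal(0, 0.1, (d, d))
W_in = rng.normal(0, 0.1, (d, d))     # extra path: input -> internal node

def relu(v):
    return np.maximum(v, 0.0)

def residual_block(x):
    h1 = relu(W1 @ x)
    # internal node also receives the input directly (the added input-to-node path)
    h2 = relu(W2 @ h1 + W_in @ x)
    # identity shortcut around the whole block: the layers learn the residual F(x) = y - x
    return h2 + x

x = rng.normal(size=d)
y = residual_block(x)
```

With all weights at zero the block reduces to the identity, which is exactly why residual learning eases training of deep stacks: a layer only needs to learn a correction on top of the shortcut.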
Further, the step of building a deep LSTM model includes:
and constructing an emotion change time sequence model and an influence association time sequence model.
Taking the user emotion change and influence change relations as the core modules is the basis of the whole time-series network. The deep LSTM module is designed by combining the advantages of the LSTM (long-range dependency modeling and ease of training) with the strong expressive capability of deep networks.
FIG. 3 is a schematic diagram of the deep LSTM model according to an embodiment of the present invention, where the transfer relationship of the variables is:

$z_{t+1} = \sigma(f_z(H_t, X_{t+1}, R_{t+1}))$

$r_{t+1} = \sigma(f_r(H_t, X_{t+1}, R_{t+1}))$

$\tilde{H}_{t+1} = f(r_{t+1} \odot H_t, X_{t+1}, R_{t+1})$

$H_{t+1} = (1 - z_{t+1}) \odot H_t + z_{t+1} \odot \tilde{H}_{t+1}$

wherein $\sigma$ is the activation function, whose result takes values in $[0, 1]$. Compared with the classic LSTM, the linear parts are replaced by deep residual neural networks and the state is more compact. $X_{t+1}, R_{t+1}$ are the processed input data; $z_{t+1}, r_{t+1}$ are the two activation amounts generated from the previous state $H_t$ through $\sigma$; $\tilde{H}_{t+1}$ is the new intermediate state generated by the generalized deep neural network $f$.

The inputs $X_{t+1}, R_{t+1}$ (post-processing observation data) and the previous state $H_t$ pass through $\sigma$ to produce the two activation amounts $z_{t+1}, r_{t+1}$, which respectively modulate the contribution of state $H_t$ to the new intermediate state $\tilde{H}_{t+1}$ (generated by the deep network $f$), and the contributions of the new intermediate state $\tilde{H}_{t+1}$ and state $H_t$ to the final new state.
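One step of the transfer relationship above can be sketched as follows. The generalized deep network $f$ and the gate maps $f_z$, $f_r$ are stubbed here with single linear-plus-nonlinearity layers, and all dimensions and weights are illustrative assumptions rather than the patent's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
d_h, d_x, d_r = 8, 6, 4               # assumed sizes of H, X, R
d_cat = d_h + d_x + d_r

W_z = rng.normal(0, 0.3, (d_h, d_cat))
W_r = rng.normal(0, 0.3, (d_h, d_cat))
W_f = rng.normal(0, 0.3, (d_h, d_cat))

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def deep_lstm_step(H_t, X_t1, R_t1):
    cat = np.concatenate([H_t, X_t1, R_t1])
    z = sigmoid(W_z @ cat)             # update gate, values in (0, 1)
    r = sigmoid(W_r @ cat)             # reset gate, values in (0, 1)
    # new intermediate state; r gates how much of H_t enters f
    H_tilde = np.tanh(W_f @ np.concatenate([r * H_t, X_t1, R_t1]))
    # blend the old state and the new intermediate state
    H_t1 = (1 - z) * H_t + z * H_tilde
    return H_t1, z, r

H_t = rng.normal(size=d_h)
X_t1 = rng.normal(size=d_x)
R_t1 = rng.normal(size=d_r)
H_t1, z, r = deep_lstm_step(H_t, X_t1, R_t1)
```

In the described method each stubbed layer would itself be a deep residual network, but the gating logic, and hence the long-term memory behavior, is exactly the one shown.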
Further, the step of constructing a deep recurrent neural network fusing multiple LSTM models comprises:
representing with a classical RNN time-series data stream and modeling based on the emotion change time-series model and the influence association time-series model, where each of the two models also takes the state of the other as input.
Specifically, as shown in fig. 4, taking the relationship between user i and friend j as an example, a classical RNN time-series data stream is used for representation, where the core functions of the emotion change time-series model and the influence association time-series model, $f_H$ and $f_A$, are implemented with the deep LSTM model. FIG. 4 shows the transfer relationship of the data streams at two adjacent moments t and t+1.
Wherein $X$ and $I$ respectively denote observed data and processed data; $H$, $A$ and $R$ denote state vectors; $f$ denotes the various mapping functions; $\theta$ denotes the model parameters.

$X_i^t$ denotes the $d_m$ classes of observation data of user $i$ at time $t$; $I_i^t$ denotes the summary vector of the data observed for user $i$ at time $t$, output by $f_{AT}$; $R_{i,j}^t$ is the interaction state vector between user $i$ and friend $j$ at time $t$, e.g. $j$ leaving a message for $i$.

$H_i^t$, the emotional state vector of user $i$ at time $t$, is mapped through an output layer into understandable information $E_i^t$ covering joy, anger, sorrow and grief; this mapping is denoted by the function $f_E$ and is implemented with the deep neural network residual structure. $H_i^t$ is richer in information than $E_i^t$ and may even contain the information of $E_i^t$.

$A_{i,j}^t$ is the influence state vector of friend $j$ on user $i$ at time $t$; similar to $H_i^t$, it can also hold rich information, encoding past influence in addition to the current influence strength. $\tilde{A}_i^t$ denotes the aggregation vector of the friends' influence on user $i$ at time $t$.

$f_A$ infers the influence state vector of two users at the next moment from their past influence state vectors, their current emotional state vectors and the interaction state vector; $f_{AT}$ is the attention model, which is matched with $H_i^t$ to screen information; $\tilde{A}_i^t = \sum_{j \in Nei(u_i)} A_{i,j}^t$ integrates the influence of other users on user $i$, where $Nei(u_i)$ denotes the associated users of user $i$ in the social network; $f_H$ takes the current emotional state vector of user $i$, the current behavior state vector and the state vector of influence from other users as input, and predicts the emotional state vector at the next moment.
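A single fused time step for one user can be sketched end to end as follows. Every mapping function is stubbed with a random linear or tanh layer (in the described method each would be a deep LSTM or residual network), and all dimensions, the two-friend setting and the sum aggregation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8                                  # common vector size (assumption)
n_items, n_friends = 5, 2

H_i = rng.normal(size=d)               # emotional state of user i at time t
data = rng.normal(size=(n_items, d))   # rough representations of observed items
R = rng.normal(size=(n_friends, d))    # interaction states R_ij at time t
A = rng.normal(size=(n_friends, d))    # influence states A_ij at time t

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# f_AT: importance distribution from matching H_i against the data, then a summary
w = softmax(data @ H_i)                # attention weights over observed items
I_i = w @ data                         # summary vector I_i^t

# f_A: next influence state from (A_ij, H_i, R_ij)
W_A = rng.normal(0, 0.3, (d, 3 * d))
A_next = np.array([np.tanh(W_A @ np.concatenate([A[j], H_i, R[j]]))
                   for j in range(n_friends)])

A_agg = A_next.sum(axis=0)             # aggregate over Nei(u_i)

# f_H: next emotional state from (H_i, I_i, aggregated influence)
W_H = rng.normal(0, 0.3, (d, 3 * d))
H_next = np.tanh(W_H @ np.concatenate([H_i, I_i, A_agg]))
```

Iterating this step over t yields the emotional state trajectory of each user, which is what the model finally outputs.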
In conclusion, the invention fully combines the advantages of deep learning in processing various heterogeneous data and simulates the cognitive characteristics of human memory, providing a new approach to emotion analysis. Unlike traditional social network emotion computing, the method is free of dependence on manual assumptions and modeling: it automatically extracts relevant features and establishes a relational model of emotion transfer and influence change, avoiding deviation between a manual model and reality and enhancing the generalization ability of the system. The method does not depend on a specific emotion space and needs no retraining when the emotion space is switched or a new emotion type is added, reducing overhead. Meanwhile, massive dynamic social network data can be automatically screened, downloaded and learned incrementally, without being limited to a small amount of static data, avoiding deviation introduced by manual steps. In addition, multi-class emotion problems are handled directly in the emotion space, without being split into multiple binary classification problems.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (5)

1. A social network emotion modeling method based on a deep recurrent neural network, characterized by comprising the following steps:
processing the social network heterogeneous data based on the attention model;
constructing a long-term memory model based on a deep LSTM, which comprises constructing a generalized deep neural network residual structure, constructing a deep LSTM model, and constructing a deep recurrent neural network fusing multiple LSTM models;
and inputting the processed data into a constructed long-term memory model based on a depth L STM, and outputting to obtain the emotional states of the users at different moments in the social network.
2. The social network emotion modeling method of claim 1, wherein the step of processing the heterogeneous social network data based on the attention model comprises:
using the attention model to extract, according to the current state, the information satisfying the importance distribution from the heterogeneous social network data, which comprises:
combining the user emotional state vector with a coarse representation of the data to generate the importance distribution over all heterogeneous data and performing sparse sampling, wherein the coarse representation of the data comprises vectorized representations of the label, the title and the thumbnail; and
vectorizing the extracted information to generate a compact vector representation as input to the subsequent model;
wherein, for an image, an AutoEncoder is used to generate the compact vector representation;
for audio, an LSTM-based AutoEncoder is used to generate the compact vector representation;
for video, each single frame is first processed with an AutoEncoder, and the resulting sequence is then processed with the method used for audio; and
for text, word vectors are used for the representation.
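The compact representations of claim 2 can be sketched with a minimal dense AutoEncoder in NumPy. This is an illustrative toy, not the patented implementation: the layer sizes, the single hidden layer and the tanh activation are all assumptions, and the claimed LSTM-based AutoEncoder for audio would replace the dense encoder with a recurrent one.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_autoencoder(n_in, n_hidden):
    # Tiny dense AutoEncoder: the encoder compresses an input into a
    # compact vector; the decoder reconstructs the input from that vector.
    # Untrained random weights, for shape illustration only.
    W_enc = rng.normal(0.0, 0.1, (n_hidden, n_in))
    W_dec = rng.normal(0.0, 0.1, (n_in, n_hidden))
    encode = lambda x: np.tanh(W_enc @ x)   # compact vector representation
    decode = lambda h: W_dec @ h            # reconstruction
    return encode, decode

# e.g. a flattened 8x8 thumbnail compressed into a 16-dimensional vector
encode, decode = make_autoencoder(64, 16)
x = rng.normal(size=64)
z = encode(x)       # compact vector fed to the subsequent recurrent model
x_hat = decode(z)   # reconstruction used only during AutoEncoder training
```

After training the reconstruction error down, only the encoder output z would be kept and passed to the long-term memory model.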
3. The social network emotion modeling method of claim 1, wherein the step of constructing the generalized deep neural network residual structure comprises:
adding paths from the input end to the internal nodes on top of the basic deep neural network structure; and
adding short-circuit (shortcut) connections between arbitrary nodes.
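The generalized residual structure of claim 3 can be illustrated by a toy NumPy forward pass. The function name, the additive way of combining connections, and the toy element-wise layers are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def generalized_residual_forward(x, layers, input_taps, shortcuts):
    # Forward pass through a stack of layers where, beyond the plain chain,
    # (a) the raw input is added at the internal nodes listed in input_taps
    #     (paths from the input end to internal nodes), and
    # (b) any earlier node output can be short-circuited to a later node
    #     via the (src, dst) index pairs in shortcuts.
    outputs = [x]                 # node 0 is the input itself
    h = x
    for i, layer in enumerate(layers, start=1):
        h = layer(h)
        if i in input_taps:       # path from the input end to node i
            h = h + x
        for src, dst in shortcuts:
            if dst == i:          # short-circuit from node src to node i
                h = h + outputs[src]
        outputs.append(h)
    return h

# Three toy "layers" (element-wise transforms) on a 4-dimensional vector
layers = [np.tanh, lambda v: 0.5 * v, np.tanh]
x = np.ones(4)
y = generalized_residual_forward(x, layers, input_taps={2}, shortcuts=[(1, 3)])
```

Setting input_taps and shortcuts to empty recovers a plain feed-forward chain, so the structure strictly generalizes the basic network.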
4. The social network emotion modeling method of claim 3, wherein the step of constructing the deep LSTM model comprises:
constructing an emotion change time sequence model and an influence association time sequence model;
the transfer relationship of each variable is as follows:

z_{t+1} = σ( f_z(X_{t+1}, R_{t+1}, H_t) )
r_{t+1} = σ( f_r(X_{t+1}, R_{t+1}, H_t) )
H̃_{t+1} = φ( X_{t+1}, R_{t+1}, r_{t+1} ⊙ H_t )
H_{t+1} = (1 - z_{t+1}) ⊙ H_t + z_{t+1} ⊙ H̃_{t+1}

wherein σ is the activation function, whose result takes values in [0, 1]; X_{t+1} and R_{t+1} are the processed input data; z_{t+1} and r_{t+1} are the two activation quantities generated from the previous state H_t through σ; and H̃_{t+1} is the new intermediate state generated by the generalized deep neural network residual structure φ.
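The transfer relations of claim 4 describe a GRU-style gated state update. A minimal NumPy sketch follows; the concatenated inputs, the learned weight matrices and the tanh stand-in for the generalized residual network are assumptions made only to produce a runnable illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma(a):
    # activation function whose result takes values in [0, 1]
    return 1.0 / (1.0 + np.exp(-a))

def gated_step(H_t, X_t1, R_t1, Wz, Wr, Wh):
    # One gated update: gates z and r are produced from the previous state
    # H_t together with the processed inputs X_{t+1}, R_{t+1}; tanh stands
    # in for the claimed residual network that builds the intermediate state.
    v = np.concatenate([X_t1, R_t1, H_t])
    z = sigma(Wz @ v)                      # activation quantity z_{t+1}
    r = sigma(Wr @ v)                      # activation quantity r_{t+1}
    u = np.concatenate([X_t1, R_t1, r * H_t])
    H_tilde = np.tanh(Wh @ u)              # new intermediate state
    return (1.0 - z) * H_t + z * H_tilde   # next state H_{t+1}

d_x, d_r, d_h = 3, 2, 4                    # illustrative dimensions
shape = (d_h, d_x + d_r + d_h)
Wz, Wr, Wh = (rng.normal(0, 0.1, shape) for _ in range(3))
H1 = gated_step(np.zeros(d_h), rng.normal(size=d_x),
                rng.normal(size=d_r), Wz, Wr, Wh)
```

Because z lies in (0, 1), the update interpolates between keeping the old state and adopting the new intermediate state, which is what gives the chain its long-term memory behavior.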
5. The social network emotion modeling method of claim 4, wherein the step of constructing the deep recurrent neural network fusing a plurality of LSTM models comprises:
adopting the classical RNN time-series data flow for the representation, and modeling on the basis of the emotion change time sequence model and the influence association time sequence model, wherein each of the two models also depends on the state of the other as input;
wherein the modeling and prediction are performed with the following quantities:
X and I respectively denote the observed data and the processed data; H, A and R denote state vectors; f denotes the various mapping functions; and θ is the model parameter;
X^{dm}_{i,t} denotes the d_m-th class of observation data of user i at time t;
I_{i,t} denotes the summary vector of the data observed for user i at time t, and is output by f_AT;
A^{j}_{i,t} denotes the interaction state vector between user i and the friend j of user i at time t;
H_{i,t} denotes the emotional state vector of user i at time t, and is mapped through an output layer into understandable information covering joy, anger, sorrow and happiness; this mapping is expressed by a function f_out and is realized with the deep neural network residual structure;
R^{j}_{i,t} denotes the influence state vector of the friend j of user i on user i at time t;
R̄_{i,t} denotes the aggregation vector of the influence of the friends on user i at time t;
f_R infers the influence state vector of two users at the next moment from their past influence state vector, their current emotional state vectors and their interaction state vector;
f_AT is the attention model for the user, and matches H_{i,t} with the information in order to screen the information;
f_agg integrates the influence of the other users on user i, wherein Nei(u_i) denotes the associated users of user i in the social network; and
f_H takes the current emotional state vector of user i, the current behavior state vector and the state vector of the influence from the other users as input, and predicts the emotional state vector at the next moment.
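The fused recurrence of claim 5 can be sketched as two interleaved state updates, each consuming the other's previous state. Linear-plus-tanh maps stand in for the claimed mappings f_H and f_R, and all names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

d = 4                                   # shared state dimension (illustrative)
W_h = rng.normal(0, 0.1, (d, 3 * d))    # linear stand-in for the mapping f_H
W_r = rng.normal(0, 0.1, (d, 2 * d))    # linear stand-in for the mapping f_R

def f_H(H, I, R):
    # emotion change model: own state, summarized observations I_{i,t},
    # and the aggregated influence of other users
    return np.tanh(W_h @ np.concatenate([H, I, R]))

def f_R(R, H):
    # influence association model: own state plus the emotion state
    # produced by the other model
    return np.tanh(W_r @ np.concatenate([R, H]))

def coupled_step(H, R, I):
    # One time step of the fused recurrent network: the two chains are
    # interleaved, each depending on the other's previous state.
    return f_H(H, I, R), f_R(R, H)

H, R = np.zeros(d), np.zeros(d)
for t in range(3):                      # unroll a short sequence
    H, R = coupled_step(H, R, rng.normal(size=d))
```

Unrolling the pair of updates over time yields the deep recurrent network that fuses the emotion change model and the influence association model into one trainable structure.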
CN202010174687.6A 2020-03-13 2020-03-13 Social network emotion modeling method based on deep cyclic neural network Active CN111414478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010174687.6A CN111414478B (en) 2020-03-13 2020-03-13 Social network emotion modeling method based on deep cyclic neural network


Publications (2)

Publication Number Publication Date
CN111414478A true CN111414478A (en) 2020-07-14
CN111414478B CN111414478B (en) 2023-11-17

Family

ID=71492941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010174687.6A Active CN111414478B (en) 2020-03-13 2020-03-13 Social network emotion modeling method based on deep cyclic neural network

Country Status (1)

Country Link
CN (1) CN111414478B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016101688A1 (en) * 2014-12-25 2016-06-30 清华大学 Continuous voice recognition method based on deep long-and-short-term memory recurrent neural network
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth
CN107808168A (en) * 2017-10-31 2018-03-16 北京科技大学 A kind of social network user behavior prediction method based on strong or weak relation
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A kind of multi-modal emotion identification method of picture and text based on deep learning
CN109508375A (en) * 2018-11-19 2019-03-22 重庆邮电大学 A kind of social affective classification method based on multi-modal fusion


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327631A (en) * 2021-07-15 2021-08-31 广州虎牙科技有限公司 Emotion recognition model training method, emotion recognition method and emotion recognition device
CN113609306A (en) * 2021-08-04 2021-11-05 北京邮电大学 Social network link prediction method and system for resisting residual image variation self-encoder
CN113609306B (en) * 2021-08-04 2024-04-23 北京邮电大学 Social network link prediction method and system for anti-residual diagram variation self-encoder


Similar Documents

Publication Publication Date Title
CN108133038B (en) Entity level emotion classification system and method based on dynamic memory network
Hayles Translating media: Why we should rethink textuality
CN113051916B (en) Interactive microblog text emotion mining method based on emotion offset perception in social network
CN113127624B (en) Question-answer model training method and device
CN111581966A (en) Context feature fusion aspect level emotion classification method and device
Liu et al. Sentiment recognition for short annotated GIFs using visual-textual fusion
CN113064968B (en) Social media emotion analysis method and system based on tensor fusion network
Mehta et al. Automated 3D sign language caption generation for video
Zhang et al. A BERT fine-tuning model for targeted sentiment analysis of Chinese online course reviews
CN111414478A (en) Social network emotion modeling method based on deep cycle neural network
CN114969282B (en) Intelligent interaction method based on rich media knowledge graph multi-modal emotion analysis model
Sharif et al. Vision to language: Methods, metrics and datasets
Liebert Communicative strategies of popularization of science (including science exhibitions, museums, magazines)
CN114443846A (en) Classification method and device based on multi-level text abnormal composition and electronic equipment
CN114444481A (en) Sentiment analysis and generation method of news comments
CN113627550A (en) Image-text emotion analysis method based on multi-mode fusion
CN117271745A (en) Information processing method and device, computing equipment and storage medium
CN113343712A (en) Social text emotional tendency analysis method and system based on heterogeneous graph
CN117349402A (en) Emotion cause pair identification method and system based on machine reading understanding
CN112132075A (en) Method and medium for processing image-text content
Rodriguez et al. How important is motion in sign language translation?
CN113741759B (en) Comment information display method and device, computer equipment and storage medium
CN115359486A (en) Method and system for determining custom information in document image
CN118014086B (en) Data processing method, device, equipment, storage medium and product
Montes Mora et al. Promoting the self-efficacy of deaf people through the application of translator of signifiers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant