CN116662656A - Movie recommendation method based on collaborative enhancement and graph attention neural network - Google Patents
Movie recommendation method based on collaborative enhancement and graph attention neural network
- Publication number
- CN116662656A (application CN202310623642.6A)
- Authority
- CN
- China
- Prior art keywords
- movie
- entity
- layer
- user
- knowledge graph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
A movie recommendation method based on collaborative enhancement and a graph attention neural network comprises the following steps: first, compute the collaborative movie neighbor set of each user and of each movie to obtain the corresponding collaborative interaction embedding vectors; then map the entities in these sets onto the movie knowledge graph and propagate over it to obtain the corresponding per-layer knowledge graph embedding vectors and the multi-hop knowledge graph embedding vectors; finally, combine the movie knowledge graph embedding vectors with the collaborative interaction embedding vectors to obtain the final user embedding vector and movie embedding vector, compute the predicted probability that the user clicks on the movie, and rank the recommended movies for the user by this probability from high to low. The invention fully combines collaborative information with knowledge graph information, achieving high accuracy and a good recommendation effect.
Description
Technical Field
The invention relates to the field of recommendation systems, and in particular to a movie recommendation method based on collaborative enhancement and a graph attention neural network.
Background
In recent years, people have become increasingly used to watching movies on the internet, and the rise of movie streaming platforms has brought great convenience. But as the number of movies grows, it becomes difficult for users to quickly find movies that interest them. Movie recommendation systems were therefore developed and are a current research hotspot. Collaborative filtering, a typical personalized recommendation model, exploits collaborative signals among users and recommends based on interaction information between users and items; it is widely applied in many large recommendation scenarios. It assumes that similar users often share latent preferences, and it is intuitive, effective, and highly interpretable.
However, movie recommendation scenarios often suffer from sparse data. Given a large movie catalog, the movies a user has not watched vastly outnumber those they have watched. At the same time, newly registered users create a cold-start problem for the recommendation system. Facing these problems, recommendation with collaborative filtering alone does not perform well. Movie knowledge graphs, as carriers of rich movie semantic-structure and relationship information, have therefore often been applied in recent years to assist recommendation. A movie knowledge graph connects otherwise independent movie entities through different types of relations and extends movie attribute information, thereby capturing latent semantic correlations among movies and improving the accuracy and interpretability of recommendation prediction. Node and edge information in the movie knowledge graph can be captured effectively with a graph neural network. This method therefore extends collaborative filtering recommendation by using a graph neural network to mine movie knowledge graph information, which effectively alleviates data sparsity and cold start and improves recommendation quality.
Disclosure of Invention
In order to overcome the failure of existing movie recommendation methods to fully exploit interaction information, collaborative information, and auxiliary information, which leads to data sparsity and cold-start problems, and to improve the accuracy of the recommendation algorithm, the invention provides a movie recommendation method based on collaborative enhancement and a graph attention neural network.
The technical scheme adopted to solve the technical problem is as follows: a movie recommendation method based on collaborative enhancement and a graph attention neural network, comprising the following specific steps:
step 1: extract the feature information of each entity and the relation information among entities in the movie data set to be processed, and construct a movie knowledge graph with the entities as nodes and the entity relations as edges between nodes, where the nodes comprise: movie name nodes, actor nodes, theme nodes, genre nodes, and year nodes; obtain the user set and the movie set from the user-movie interaction matrix;
step 2: preprocess the movie knowledge graph data with the knowledge graph representation learning algorithm TransH to obtain initial embedding vectors for the entities and relations of the movie knowledge graph;
step 3: for any user in the user set, compute the initial interactive movie entity set from the interaction information, and compute the user's interactive embedding vector from this set;
step 4: map the user's initial interactive movie entity set onto the movie knowledge graph for propagation, and compute the embedding vector of each layer l and the head entity embedding vectors after random sampling;
step 5: compute the user's final multi-layer embedding vector on the movie knowledge graph from the head entity embedding vectors of step 4;
step 6: compute the user's embedding vector from the interactive embedding vector of step 3, the per-layer embedding vectors of step 4, and the multi-layer embedding vector of step 5; traverse the user set, repeating steps 3 to 6, to compute the embedding vectors of all users;
step 7: take a movie from the movie set, compute its initial collaborative movie entity set from the collaborative interaction information, and compute the movie's collaborative embedding vector from this set;
step 8: map the movie's collaborative interactive movie entity set onto the movie knowledge graph for propagation, and compute the embedding vector of each layer l and the head entity embedding vectors after random sampling;
step 9: compute the movie's final multi-layer embedding vector on the movie knowledge graph from the head entity embedding vectors of step 8;
step 10: compute the movie's embedding vector from the movie's own embedding vector, the collaborative embedding vector of step 7, the per-layer embedding vectors of step 8, and the multi-layer embedding vector of step 9; traverse the movie set, repeating steps 7 to 10, to compute the embedding vectors of all movies;
step 11: predict the probability that a user watches a movie from the user and movie embedding vectors, compute the loss function, and stop training when the loss value falls below the set minimum loss value; for a user in the set, rank movies by predicted score from high to low and take the top K, where K is the set number of recommended movies.
In step 1, the user-movie interaction matrix is Y = {y_uv | u ∈ U, v ∈ V}, where U = {u_1, u_2, ..., u_m} denotes the user set and V = {v_1, v_2, ..., v_n} denotes the movie set; y_uv = 1 if user u has watched movie v, otherwise y_uv = 0;
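The interaction matrix of step 1 can be sketched as follows; the function and data names (`build_interaction_matrix`, `watch_log`) are illustrative, not from the patent.

```python
import numpy as np

def build_interaction_matrix(watch_log, num_users, num_movies):
    # y_uv = 1 if user u has watched movie v, else 0
    Y = np.zeros((num_users, num_movies), dtype=np.int8)
    for u, v in watch_log:
        Y[u, v] = 1
    return Y

# Toy log: user 0 watched movies 2 and 3, user 1 watched movie 2.
Y = build_interaction_matrix([(0, 2), (0, 3), (1, 2)], num_users=2, num_movies=4)
```

This matrix is the only input the collaborative-neighbor computations in steps 3 and 7 need.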
In step 1, each movie in the user-movie interaction matrix is aligned with its corresponding entity on the movie knowledge graph to obtain the movie's neighbor attribute features on the movie knowledge graph.
In step 2, the entities and relations in the knowledge graph are mapped into a low-dimensional vector space by the knowledge graph representation learning algorithm TransH to obtain initial embedding vectors. The movie knowledge graph G consists of triples (h, r, t), where h ∈ ε, r ∈ η, and t ∈ ε are respectively the head entity, relation, and tail entity of a triple, ε = {e_1, e_2, ..., e_A} denotes the movie entity set, and η = {r_1, r_2, ..., r_B} denotes the set of relations between movie entities;
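The triple structure of the movie knowledge graph G can be illustrated with a minimal sketch; the example entities and relations are assumed for illustration only, and indexing triples by head entity is what lets the layer-wise propagation of steps 4 and 5 walk from a layer-l entity to its layer-(l+1) neighbors.

```python
from collections import defaultdict

# Example (h, r, t) triples; names are illustrative, not from the patent's data set.
triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "genre", "Sci-Fi"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
]

# Index triples by head entity for hop-by-hop expansion.
by_head = defaultdict(list)
for h, r, t in triples:
    by_head[h].append((r, t))
```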
in step 3, for any user u, the initial interactive movie entity set S_u^0 = {e | v → e, y_uv = 1} is obtained, and the interactive embedding vector of user u is computed by aggregating the embedding vectors of the entities in this set, where v → e denotes finding, on the movie knowledge graph, the movie entity e corresponding to a movie v that user u interacted with, and e_i is the embedding vector of one movie entity e_i in the set S_u^0.
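A hedged sketch of step 3: the patent's exact aggregation formula did not survive in this text, so a simple mean over the entity embedding vectors of the set is assumed here; the names are illustrative.

```python
import numpy as np

def interactive_embedding(entity_ids, emb):
    # Aggregate the embedding vectors of the entities in S_u^0.
    # Assumption: mean pooling; the patent's formula image was not recoverable.
    return emb[entity_ids].mean(axis=0)

emb = np.arange(12, dtype=float).reshape(4, 3)  # 4 entities, dimension d = 3
e_u = interactive_embedding([0, 2], emb)        # entities user u interacted with
```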
In step 4, the initial interactive movie entity set of step 3 is mapped onto the movie knowledge graph for propagation, and the corresponding attention weight of each triple is computed from the head entity embedding vector and the relation embedding vector, with the relation weight

λ_r = σ(W_2 ReLU(W_1 z_0 + b_1) + b_2) (4)

where e_h is the embedding vector of each head entity, e_r is the embedding vector of the corresponding relation, W and b are trainable weights and biases; ReLU is a nonlinear activation function, σ is the Sigmoid activation function, and the triples considered are those of the l-th layer whose head entities correspond to user u on the movie knowledge graph;
in step 4, the attention weights are used to compute the embedding vector of the l-th layer for user u on the movie knowledge graph as an attention-weighted sum of tail entity embedding vectors, taken over all tail entities in the l-th layer of the movie knowledge graph, where L is the total hop count of the movie knowledge graph, e_t denotes a tail entity of the l-th layer, and each entity e contributes its corresponding embedding vector; the initial interactive movie entity set of user u corresponds to layer 0, the entities connected to layer 0 on the movie knowledge graph form layer 1, and so on for the other layers;
in step 4, based on the randomly sampled tail entity set, the attention weights are used to compute the head entity embedding vector of each layer of the movie knowledge graph, where the sampled tail entity set is drawn at random from all tail entities of the head entities in the l-th layer; the layer-l head entity embedding vector and the layer-l tail entity embedding vector are combined by a stitching (concatenation) operation, and σ is a nonlinear activation function.
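Equation (4) for the relation weight λ_r survives in the text and can be sketched directly. The composition of z_0 as a concatenation of the head-entity and relation embeddings is an assumption (the patent names both vectors, but the exact definition of z_0 was lost in extraction).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relation_weight(e_h, e_r, W1, b1, W2, b2):
    # Equation (4): lambda_r = sigma(W2 ReLU(W1 z0 + b1) + b2).
    # Assumption: z0 concatenates the head-entity and relation embeddings.
    z0 = np.concatenate([e_h, e_r])
    return sigmoid(W2 @ relu(W1 @ z0 + b1) + b2)

# Tiny deterministic example: W2 = 0 forces sigmoid(0) = 0.5.
lam = relation_weight(
    e_h=np.array([1.0, 0.0]), e_r=np.array([0.0, 1.0]),
    W1=np.ones((1, 4)), b1=np.zeros(1),
    W2=np.zeros((1, 1)), b2=np.zeros(1),
)
```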
In step 5, the user's multi-layer embedding vector on the movie knowledge graph is computed from the head entity embedding vectors of step 4 as an attention-weighted combination over the layers, where the attention weight is computed with respect to the initial embedding vector of movie v.
In step 6, the embedding vector of user u is computed by stitching (concatenating) the interactive embedding vector of step 3, the per-layer embedding vectors of step 4, and the multi-layer embedding vector of step 5. Traverse the user set U, repeating steps 3 to 6, to compute the embedding vectors of all users;
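The "stitching" of step 6 reads as plain vector concatenation; a minimal sketch with illustrative names:

```python
import numpy as np

def final_user_embedding(e_interactive, layer_embs, e_multilayer):
    # Concatenate the interactive embedding, the per-layer embeddings,
    # and the multi-layer embedding into one user vector.
    return np.concatenate([e_interactive] + list(layer_embs) + [e_multilayer])

# Dummy 3-dimensional components with two propagation layers.
e_u = final_user_embedding(np.ones(3), [np.zeros(3), np.zeros(3)], np.ones(3))
```

The resulting dimensionality is (2 + number of layers) times the base embedding dimension, which is why the prediction layer must match it.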
in step 7, for any movie v, the initial collaborative movie entity set is obtained, and the collaborative embedding vector of movie v is computed from this set, where the collaborative neighbors of v are the movies v_a that share co-interacting users with v, and v_a → e denotes finding, on the movie knowledge graph, the movie entity e corresponding to a collaborative neighbor of movie v.
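Step 7's collaborative neighbor set (movies sharing co-interacting users with v) can be read off the interaction matrix Y; a sketch with illustrative names:

```python
import numpy as np

def collaborative_neighbors(Y, v):
    # Movies watched by the users who also watched movie v (v itself excluded).
    users = np.nonzero(Y[:, v])[0]                # users who watched v
    movies = np.nonzero(Y[users].any(axis=0))[0]  # everything those users watched
    return set(int(m) for m in movies) - {v}

# Toy matrix: rows are users, columns are movies.
Y = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 0, 1]])
neigh = collaborative_neighbors(Y, 0)
```

Here users 0 and 1 watched movie 0, and between them they also watched movies 1 and 2, so those become movie 0's collaborative neighbors.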
In step 8, the initial collaborative movie entity set of step 7 is mapped onto the movie knowledge graph for propagation, and the corresponding attention weight of each triple is computed from the head entity embedding vector and the relation embedding vector, with the relation weight

λ_r = σ(W_2 ReLU(W_1 z_0 + b_1) + b_2) (4)

where e_h is the embedding vector of each head entity, e_r is the embedding vector of the corresponding relation, W and b are trainable weights and biases; ReLU is a nonlinear activation function, σ is the Sigmoid activation function, and the triples considered are those of the l-th layer whose head entities correspond to movie v on the movie knowledge graph;
in step 8, the attention weights are used to compute the embedding vector of the l-th layer for movie v on the movie knowledge graph as an attention-weighted sum of tail entity embedding vectors, taken over all tail entities in the l-th layer of the movie knowledge graph, where L is the total hop count of the movie knowledge graph, e_t denotes a tail entity of the l-th layer, and each entity e contributes its corresponding embedding vector; the initial collaborative movie entity set of movie v corresponds to layer 0, the entities connected to layer 0 on the movie knowledge graph form layer 1, and so on for the other layers;
in step 8, based on the randomly sampled tail entity set, the attention weights are used to compute the head entity embedding vector of each layer of the movie knowledge graph, where the sampled tail entity set is drawn at random from all tail entities of the head entities in the l-th layer; the layer-l head entity embedding vector and the layer-l tail entity embedding vector are combined by a stitching (concatenation) operation, and σ is a nonlinear activation function.
In step 9, the movie's multi-layer embedding vector on the movie knowledge graph is computed from the head entity embedding vectors of step 8 as an attention-weighted combination over the layers, where the attention weight is computed with respect to the initial embedding vector of movie v.
In step 10, the embedding vector of movie v is computed by stitching (concatenating) the movie's own embedding vector, the collaborative embedding vector of step 7, the per-layer embedding vectors of step 8, and the multi-layer embedding vector of step 9. Traverse the movie set V, repeating steps 7 to 10, to compute the embedding vectors of all movies;
In step 11, the loss function is computed as the cross-entropy between the predictions and the observed interactions plus an L2 regularization term on the model parameters, where σ is the Sigmoid activation function, y^+ and y^- denote the positive and negative samples respectively, y_uv is the real interaction of user u with movie v, and ŷ_uv is the predicted probability that user u watches movie v.
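A plausible reading of the step 11 loss, sketched as cross-entropy plus L2 regularization; the regularization weight `lam` is an assumed hyperparameter, since the exact parameterization is garbled in this text.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy between real interactions y_uv and predictions y_hat_uv.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def total_loss(y_true, y_pred, params, lam=1e-4):
    # Cross-entropy over positive/negative samples plus an L2 term on parameters.
    return cross_entropy(y_true, y_pred) + lam * sum(np.sum(p ** 2) for p in params)
```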
The technical concept of the invention is as follows: the semantic and relation information of movies in the movie knowledge graph is learned through a graph attention network; collaborative information, interaction information, and knowledge graph information are all used for recommendation, so that user and movie embedding vectors carrying richer information are learned and the prediction is more accurate.
The invention has the advantage that, by introducing collaborative information and knowledge graph information into the movie recommendation system, the latent long-range interests of users and the latent features of movies are mined; the recommendation accuracy is high, and the problems of data sparsity and cold start are alleviated to a certain extent.
Drawings
Fig. 1 is a schematic diagram of a movie recommendation system composed of a user-movie interaction graph and a movie knowledge graph, where a circle represents a user entity, a rectangle represents a movie entity, and a diamond represents a knowledge graph entity; the dashed boxes contain, respectively, the interactive movie neighbors of a user and the collaborative interactive movie neighbors of a movie.
Fig. 2 is a flow chart of the method of the present invention.
Detailed Description
For further explanation of the embodiments, the invention is described below with reference to the drawings.
Referring to fig. 2, a movie recommendation method based on collaborative enhancement and a graph attention neural network includes the following steps:
step 1: extract the feature information of each entity and the relation information among entities in the movie data set to be processed, and construct a movie knowledge graph with the entities as nodes and the entity relations as edges between nodes, where the nodes comprise: movie name nodes, actor nodes, theme nodes, genre nodes, and year nodes; obtain the user set and the movie set from the user-movie interaction matrix;
step 2: preprocess the movie knowledge graph data with the knowledge graph representation learning algorithm TransH to obtain initial embedding vectors for the entities and relations of the movie knowledge graph;
step 3: for any user in the user set, compute the initial interactive movie entity set from the interaction information, and compute the user's interactive embedding vector from this set;
step 4: map the user's initial interactive movie entity set onto the movie knowledge graph for propagation, and compute the embedding vector of each layer l and the head entity embedding vectors after random sampling;
step 5: compute the user's final multi-layer embedding vector on the movie knowledge graph from the head entity embedding vectors of step 4;
step 6: compute the user's embedding vector from the interactive embedding vector of step 3, the per-layer embedding vectors of step 4, and the multi-layer embedding vector of step 5; traverse the user set, repeating steps 3 to 6, to compute the embedding vectors of all users;
step 7: take a movie from the movie set, compute its initial collaborative movie entity set from the collaborative interaction information, and compute the movie's collaborative embedding vector from this set;
step 8: map the movie's collaborative interactive movie entity set onto the movie knowledge graph for propagation, and compute the embedding vector of each layer l and the head entity embedding vectors after random sampling;
step 9: compute the movie's final multi-layer embedding vector on the movie knowledge graph from the head entity embedding vectors of step 8;
step 10: compute the movie's embedding vector from the movie's own embedding vector, the collaborative embedding vector of step 7, the per-layer embedding vectors of step 8, and the multi-layer embedding vector of step 9; traverse the movie set, repeating steps 7 to 10, to compute the embedding vectors of all movies;
step 11: predict the probability that a user watches a movie from the user and movie embedding vectors, compute the loss function, and stop training when the loss value falls below the set minimum loss value; for a user in the set, rank movies by predicted score from high to low and take the top K, where K is the set number of recommended movies.
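The ranking at the end of step 11 can be sketched as follows; the scores are dummy values and `top_k_movies` is an illustrative name.

```python
import numpy as np

def top_k_movies(scores, k):
    # Rank movies by predicted click probability from high to low, keep top K.
    return np.argsort(scores)[::-1][:k]

# Dummy predicted probabilities for 4 candidate movies.
recs = top_k_movies(np.array([0.1, 0.9, 0.4, 0.7]), k=2)
```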
In step 1, the user-movie interaction matrix is Y = {y_uv | u ∈ U, v ∈ V}, where U = {u_1, u_2, ..., u_m} denotes the user set and V = {v_1, v_2, ..., v_n} denotes the movie set; y_uv = 1 if user u has watched movie v, otherwise y_uv = 0;
In step 1, each movie in the user-movie interaction matrix is aligned with its corresponding entity on the movie knowledge graph to obtain the movie's neighbor attribute features on the movie knowledge graph.
In step 2, the entities and relations in the knowledge graph are mapped into a low-dimensional vector space by the knowledge graph representation learning algorithm TransH to obtain initial embedding vectors. The movie knowledge graph G consists of triples (h, r, t), where h ∈ ε, r ∈ η, and t ∈ ε are respectively the head entity, relation, and tail entity of a triple, ε = {e_1, e_2, ..., e_A} denotes the movie entity set, and η = {r_1, r_2, ..., r_B} denotes the set of relations between movie entities;
in step 3, for any user u, the initial interactive movie entity set S_u^0 = {e | v → e, y_uv = 1} is obtained, and the interactive embedding vector of user u is computed by aggregating the embedding vectors of the entities in this set, where v → e denotes finding, on the movie knowledge graph, the movie entity e corresponding to a movie v that user u interacted with, and e_i is the embedding vector of one movie entity e_i in the set S_u^0.
In step 4, the initial interactive movie entity set of step 3 is mapped onto the movie knowledge graph for propagation, and the corresponding attention weight of each triple is computed from the head entity embedding vector and the relation embedding vector, with the relation weight

λ_r = σ(W_2 ReLU(W_1 z_0 + b_1) + b_2) (4)

where e_h is the embedding vector of each head entity, e_r is the embedding vector of the corresponding relation, W and b are trainable weights and biases; ReLU is a nonlinear activation function, σ is the Sigmoid activation function, and the triples considered are those of the l-th layer whose head entities correspond to user u on the movie knowledge graph;
in step 4, the attention weights are used to compute the embedding vector of the l-th layer for user u on the movie knowledge graph as an attention-weighted sum of tail entity embedding vectors, taken over all tail entities in the l-th layer of the movie knowledge graph, where L is the total hop count of the movie knowledge graph, e_t denotes a tail entity of the l-th layer, and each entity e contributes its corresponding embedding vector; as shown in FIG. 1, the initial interactive movie entity set of user u within the dashed box corresponds to layer 0, the entities connected to layer 0 on the movie knowledge graph form layer 1, and so on for the other layers;
in step 4, based on the randomly sampled tail entity set, the attention weights are used to compute the head entity embedding vector of each layer of the movie knowledge graph, where the sampled tail entity set is drawn at random from all tail entities of the head entities in the l-th layer; the layer-l head entity embedding vector and the layer-l tail entity embedding vector are combined by a stitching (concatenation) operation, and σ is a nonlinear activation function.
In step 5, the user's multi-layer embedding vector on the movie knowledge graph is computed from the head entity embedding vectors of step 4 as an attention-weighted combination over the layers, where the attention weight is computed with respect to the initial embedding vector of movie v.
In step 6, the embedding vector of user u is computed by stitching (concatenating) the interactive embedding vector of step 3, the per-layer embedding vectors of step 4, and the multi-layer embedding vector of step 5. Traverse the user set U, repeating steps 3 to 6, to compute the embedding vectors of all users;
in step 7, for any movie v, the initial collaborative movie entity set is obtained, and the collaborative embedding vector of movie v is computed from this set, where the collaborative neighbors of v are the movies v_a that share co-interacting users with v, and v_a → e denotes finding, on the movie knowledge graph, the movie entity e corresponding to a collaborative neighbor of movie v.
In step 8, the initial collaborative movie entity set of step 7 is mapped onto the movie knowledge graph for propagation, and the corresponding attention weight of each triple is computed from the head entity embedding vector and the relation embedding vector, with the relation weight

λ_r = σ(W_2 ReLU(W_1 z_0 + b_1) + b_2) (4)

where e_h is the embedding vector of each head entity, e_r is the embedding vector of the corresponding relation, W and b are trainable weights and biases; ReLU is a nonlinear activation function, σ is the Sigmoid activation function, and the triples considered are those of the l-th layer whose head entities correspond to movie v on the movie knowledge graph;
in step 8, the attention weights are used to compute the embedding vector of the l-th layer for movie v on the movie knowledge graph as an attention-weighted sum of tail entity embedding vectors, taken over all tail entities in the l-th layer of the movie knowledge graph, where L is the total hop count of the movie knowledge graph, e_t denotes a tail entity of the l-th layer, and each entity e contributes its corresponding embedding vector; as shown in FIG. 1, the initial collaborative movie entity set of movie v within the dashed box corresponds to layer 0, the entities connected to layer 0 on the movie knowledge graph form layer 1, and so on for the other layers;
in step 8, based on the randomly sampled tail entity set, the attention weights are used to compute the head entity embedding vector of each layer of the movie knowledge graph, where the sampled tail entity set is drawn at random from all tail entities of the head entities in the l-th layer; the layer-l head entity embedding vector and the layer-l tail entity embedding vector are combined by a stitching (concatenation) operation, and σ is a nonlinear activation function.
In step 9, the movie's multi-layer embedding vector on the movie knowledge graph is computed from the head entity embedding vectors of step 8 as an attention-weighted combination over the layers, where the attention weight is computed with respect to the initial embedding vector of movie v.
In step 10, the embedding vector of movie v is computed by stitching (concatenating) the movie's own embedding vector, the collaborative embedding vector of step 7, the per-layer embedding vectors of step 8, and the multi-layer embedding vector of step 9. Traverse the movie set V, repeating steps 7 to 10, to compute the embedding vectors of all movies;
In step 11, the loss function is computed as the cross-entropy between the predictions and the observed interactions plus an L2 regularization term on the model parameters, where σ is the Sigmoid activation function and y^+ and y^- denote the positive and negative samples respectively; as in FIG. 1, a pair of user u and movie v is taken from the interaction matrix Y to compute ŷ_uv, where y_uv is the real interaction of user u with movie v.
The implementation steps described above set out the invention in detail. Any modifications and changes made to the invention fall within its spirit and the scope of the appended claims.
Claims (10)
1. A movie recommendation method based on collaborative enhancement and a graph annotation intention neural network comprises the following steps:
step 1: extracting feature information of each entity and relation information among entities in the movie data set to be processed, and constructing a movie knowledge graph with the entities as nodes and the entity relations as edges between nodes, wherein the nodes comprise: movie name nodes, actor nodes, theme nodes, genre nodes, and year nodes; acquiring the user set and the movie set according to a user-movie interaction matrix;
step 2: preprocessing the movie knowledge graph data with the knowledge graph representation learning algorithm TransH to obtain initial embedding vectors of the entities and relations of the movie knowledge graph;
step 3: for any user in the user set, calculating an initial interactive movie entity set from the interaction information, and calculating the interactive embedding vector of the user according to the set;
step 4: mapping the initial interactive movie entity set of the user onto the movie knowledge graph for propagation, and calculating the embedding vector of the l-th layer and the head entity embedding vectors after random sampling;
step 5: calculating the final multi-layer embedding vector of the user on the movie knowledge graph from the head entity embedding vectors of step 4;
step 6: calculating the embedding vector of the user from the interactive embedding vector in step 3, the per-layer embedding vectors in step 4, and the multi-layer embedding vector in step 5; traversing the user set, repeating steps 3 to 6, and calculating the embedding vectors of all users;
step 7: taking one movie in the movie set, calculating an initial collaborative movie entity set from the collaborative interaction information, and calculating the collaborative embedding vector of the movie according to the set;
step 8: mapping the collaborative movie entity set of the movie onto the movie knowledge graph for propagation, and calculating the embedding vector of the l-th layer and the head entity embedding vectors after random sampling;
step 9: calculating the final multi-layer embedding vector of the movie on the movie knowledge graph from the head entity embedding vectors of step 8;
step 10: calculating the embedding vector of the movie from the movie's own embedding vector, the collaborative embedding vector in step 7, the per-layer embedding vectors in step 8, and the multi-layer embedding vector in step 9; traversing the movie set, repeating steps 7 to 10, and calculating the embedding vectors of all movies;
step 11: predicting the probability that the user watches the movie from the embedding vectors of the user and the movie, calculating a loss function, and ending the calculation when the loss value is smaller than the set minimum loss value; for each user in the set, recommending the top K movies ranked by predicted score from high to low, where K is the set number of recommended movies.
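The ranking stage of step 11 can be sketched as follows. The dot-product scorer is an assumption (the claim only states that scores come from the user and movie embedding vectors); names and shapes are illustrative.

```python
import numpy as np

def recommend_top_k(user_emb, movie_embs, movie_ids, k=10):
    """Rank candidate movies for one user by predicted score and
    return the top-K ids, as in step 11 (dot-product scorer assumed)."""
    scores = movie_embs @ user_emb      # one score per candidate movie
    order = np.argsort(-scores)         # indices sorted high to low
    return [movie_ids[i] for i in order[:k]]
```

For example, a user embedding aligned with one movie's embedding ranks that movie first.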
2. The method of claim 1, wherein in step 1 the user-movie interaction matrix is Y = {y_uv | u ∈ U, v ∈ V}, where U = {u_1, u_2, ..., u_m} denotes the user set and V = {v_1, v_2, ..., v_n} denotes the movie set; y_uv = 1 if the user has watched the movie, otherwise y_uv = 0;
in step 1, the movies in the user-movie interaction matrix are aligned with the corresponding entities on the movie knowledge graph to obtain the corresponding neighbor attribute features of the movies on the movie knowledge graph.
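Building the interaction matrix Y of claim 2 from a watch log can be sketched as below; the `watch_log` list of (user, movie) pairs is a hypothetical input format, not part of the claims.

```python
import numpy as np

def build_interaction_matrix(watch_log, users, movies):
    """Build Y = {y_uv}: y_uv = 1 if user u has watched movie v, else 0.
    `watch_log` is a hypothetical list of (user, movie) pairs."""
    u_idx = {u: i for i, u in enumerate(users)}
    v_idx = {v: j for j, v in enumerate(movies)}
    Y = np.zeros((len(users), len(movies)), dtype=int)
    for u, v in watch_log:
        Y[u_idx[u], v_idx[v]] = 1
    return Y
```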
3. The method of claim 1, wherein in step 2 the entities and relations in the knowledge graph are mapped into a low-dimensional vector space by the knowledge graph representation learning algorithm TransH to obtain the initial embedding vectors; the movie knowledge graph G is composed of triples (h, r, t), where h ∈ ε, r ∈ η and t ∈ ε are respectively the head entity, the relation and the tail entity of a triple, ε = {e_1, e_2, ..., e_A} denotes the movie entity set, and η = {r_1, r_2, ..., r_B} denotes the set of relations between movie entities;
in step 3, for any user u, the initial interactive movie entity set is obtained, and the interactive embedding vector of user u is calculated according to the set
where v → e denotes finding the corresponding movie entity e on the movie knowledge graph for each interactive movie of user u, and e_i is one movie entity in the set with its corresponding embedding vector.
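The two ingredients of claim 3 can be sketched together: the standard TransH plausibility score (hyperplane projection of head and tail, then translation by the relation vector), and one possible aggregation for the user's interactive embedding. The mean aggregation is an assumption, since the claim's formula is not reproduced in the text.

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH score for a triple (h, r, t): project head and tail onto the
    relation hyperplane with unit normal w_r, then measure
    || h_perp + d_r - t_perp ||^2 (lower means more plausible)."""
    w_r = w_r / np.linalg.norm(w_r)          # unit normal of the hyperplane
    h_perp = h - np.dot(w_r, h) * w_r        # projection of the head entity
    t_perp = t - np.dot(w_r, t) * w_r        # projection of the tail entity
    return float(np.sum((h_perp + d_r - t_perp) ** 2))

def interactive_embedding(entity_embs):
    """Interactive embedding of a user from the embeddings of the movie
    entities the user interacted with (mean aggregation is an assumption)."""
    return np.mean(entity_embs, axis=0)
```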
4. The method of claim 1, wherein in step 4 the initial interactive movie entity set of step 3 is mapped onto the movie knowledge graph for propagation, and the corresponding attention weights are calculated
where
λ_r = σ(W_2 ReLU(W_1 z_0 + b_1) + b_2) (4)
the head entity embedding vectors and the corresponding relation embedding vectors enter the weight computation, and W and b are trainable weights and biases; ReLU is a nonlinear activation function, σ is a Sigmoid activation function, and the triples whose head entities correspond to user u on the movie knowledge graph form the triple set of the l-th layer;
in step 4, the attention weights are used to calculate the embedding vector of the l-th layer of user u on the movie knowledge graph
where the set consists of all tail entities in the l-th layer of the movie knowledge graph, L is the total hop count of the movie knowledge graph, e_t denotes a tail entity, one of the tail entities in the l-th layer, and each entity e has a corresponding embedding vector; the initial interactive movie entity set of user u corresponds to layer 0, the entities connected to layer 0 on the movie knowledge graph form layer 1, and the other layers follow by analogy;
in step 4, according to the randomly sampled tail entity set, the attention weights are used to calculate the head entity embedding vector of each layer of the movie knowledge graph
where the tail entity set is obtained by randomly sampling from all tail entities connected to the head entities of the l-th layer; the head entity embedding vector of layer l is concatenated with the tail entity embedding vector of layer l, and σ is a nonlinear activation function.
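The two computations of claim 4 can be sketched as below: equation (4) for the attention weight, and the head-entity update that concatenates the head embedding with sampled tail information before a nonlinear activation. The construction of z_0 and the mean aggregate over sampled tails are assumptions; the claim's own formulas are images not reproduced in the text.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weight(z0, W1, b1, W2, b2):
    """Equation (4): lambda_r = sigma(W2 ReLU(W1 z0 + b1) + b2); z0 is an
    assumed concatenation of head-entity and relation embeddings."""
    return sigmoid(W2 @ relu(W1 @ z0 + b1) + b2)

def update_head(head_emb, sampled_tail_embs, W, b):
    """Head-entity update: concatenate the head embedding with an aggregate
    of randomly sampled tail embeddings, then apply a nonlinear activation
    (mean aggregate assumed)."""
    tail_agg = np.mean(sampled_tail_embs, axis=0)
    z = np.concatenate([head_emb, tail_agg])   # the splicing operation
    return sigmoid(W @ z + b)
```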
5. The method of claim 1, wherein in step 5 the multi-layer embedding vector on the movie knowledge graph is calculated from the head entity embedding vectors of step 4
where the attention weight of each layer is computed against the initial embedding vector of movie v.
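A sketch of the claim-5 combination: per-layer embeddings are merged with attention weights scored against the initial embedding of movie v. The dot-product score and softmax normalisation are assumptions, since the claim's formula is not reproduced.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # stabilised softmax
    return e / e.sum()

def multilayer_embedding(layer_embs, v0):
    """Combine per-layer embeddings into one multi-layer embedding with
    attention weights scored against the initial embedding v0 of movie v
    (dot-product score and softmax are assumptions)."""
    layer_embs = np.asarray(layer_embs)
    weights = softmax(layer_embs @ v0)   # one attention weight per layer
    return weights @ layer_embs          # weighted sum over layers
```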
6. The method of claim 5, wherein the embedding vector of user u is calculated in step 6 by concatenating the interactive embedding vector in step 3, the per-layer embedding vectors in step 4, and the multi-layer embedding vector in step 5
traversing the user set U, repeating steps 3 to 6, and calculating the embedding vectors of all users;
in step 7, for any movie v, the initial collaborative movie entity set is obtained, and the collaborative embedding vector of movie v is calculated according to the set
where v_a denotes a collaborative neighbor of movie v, that is, a movie that shares co-interacting users with v, and v_a → e denotes finding the corresponding movie entity e on the movie knowledge graph for each collaborative neighbor of movie v.
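The collaborative-neighbor relation of claim 6 follows directly from the interaction matrix Y of claim 2, as sketched below (a minimal illustration; the patent does not fix this exact routine).

```python
import numpy as np

def collaborative_neighbors(Y, v):
    """Collaborative neighbors of movie v: all movies that share at least
    one interacting user with v, read off the interaction matrix Y."""
    users_of_v = np.where(Y[:, v] == 1)[0]   # users who watched movie v
    neighbors = set()
    for u in users_of_v:
        neighbors.update(np.where(Y[u] == 1)[0])  # other movies they watched
    neighbors.discard(v)                     # a movie is not its own neighbor
    return sorted(neighbors)
```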
7. The method of claim 6, wherein in step 8 the initial collaborative movie entity set of step 7 is mapped onto the movie knowledge graph for propagation, and the corresponding attention weights are calculated
where
λ_r = σ(W_2 ReLU(W_1 z_0 + b_1) + b_2) (4)
the head entity embedding vectors and the corresponding relation embedding vectors enter the weight computation, and W and b are trainable weights and biases; ReLU is a nonlinear activation function, σ is a Sigmoid activation function, and the triples whose head entities correspond to movie v on the movie knowledge graph form the triple set of the l-th layer;
in step 8, the attention weights are used to calculate the embedding vector of the l-th layer of movie v on the movie knowledge graph
where the set consists of all tail entities in the l-th layer of the movie knowledge graph, L is the total hop count of the movie knowledge graph, e_t denotes a tail entity, one of the tail entities in the l-th layer, and each entity e has a corresponding embedding vector; the initial collaborative movie entity set of movie v corresponds to layer 0, the entities connected to layer 0 on the movie knowledge graph form layer 1, and the other layers follow by analogy;
in step 8, according to the randomly sampled tail entity set, the attention weights are used to calculate the head entity embedding vector of each layer of the movie knowledge graph
where the tail entity set is obtained by randomly sampling from all tail entities connected to the head entities of the l-th layer; the head entity embedding vector of layer l is concatenated with the tail entity embedding vector of layer l, and σ is a nonlinear activation function.
8. The method of claim 7, wherein in step 9 the multi-layer embedding vector of the movie on the movie knowledge graph is calculated from the head entity embedding vectors of step 8
where the attention weight of each layer is computed against the initial embedding vector of movie v.
9. The method of claim 1, wherein the embedding vector of movie v is calculated in step 10 by concatenating the movie's own embedding vector, the collaborative embedding vector in step 7, the per-layer embedding vectors in step 8, and the multi-layer embedding vector in step 9
and traversing the movie set V, repeating steps 7 to 10, and calculating the embedding vectors of all movies.
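The claim-9 splicing step can be sketched as a plain concatenation of the four kinds of vectors; whether the claimed method applies a further projection after concatenation is not stated, so none is shown here.

```python
import numpy as np

def movie_embedding(self_emb, collab_emb, layer_embs, multilayer_emb):
    """Final embedding of movie v: concatenation of its own embedding,
    the collaborative embedding, each per-layer embedding, and the
    multi-layer embedding (plain concatenation assumed)."""
    parts = [self_emb, collab_emb] + list(layer_embs) + [multilayer_emb]
    return np.concatenate(parts)
```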
10. The method of claim 1, wherein the loss function is calculated in step 11
where σ is the Sigmoid activation function; y+ and y- denote the positive and negative samples respectively, the first term is the cross-entropy loss, and the term parameterized by α is an L2 regularization term; y_uv is the true interaction of user u with movie v, and ŷ_uv is the predicted probability that user u watches movie v.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310623642.6A CN116662656A (en) | 2023-05-29 | 2023-05-29 | Movie recommendation method based on collaborative enhancement and graph annotation intention neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116662656A true CN116662656A (en) | 2023-08-29 |
Family
ID=87709131
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117171449A (en) * | 2023-09-21 | 2023-12-05 | 西南石油大学 | Recommendation method based on graph neural network |
CN117171449B (en) * | 2023-09-21 | 2024-03-19 | 西南石油大学 | Recommendation method based on graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||