CN115953215A - Search type recommendation method based on time and graph structure - Google Patents


Info

Publication number
CN115953215A
CN115953215A (application CN202211533857.0A; granted as CN115953215B)
Authority
CN
China
Prior art keywords
user
item
search
history
users
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211533857.0A
Other languages
Chinese (zh)
Other versions
CN115953215B (en)
Inventor
Zheng Lei
Chai Huacan
Chen Xianyu
Jin Jiarui
Zhang Weinan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202211533857.0A priority Critical patent/CN115953215B/en
Publication of CN115953215A publication Critical patent/CN115953215A/en
Application granted granted Critical
Publication of CN115953215B publication Critical patent/CN115953215B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a retrieval-based recommendation method built on time and graph structure, relating to the field of recommender systems, and comprising the following steps: collecting user behavior history data from an internet platform; encoding the users and items in the history data with an inner product neural network and computing embedding vectors for both; building a graph over the historical items, sampling from it, and learning item embeddings that are fed to the model as item features; retrieving similar items and similar users from the user history data based on user and item characteristics, and retrieving items similar to the target item from the histories of the similar users; feeding the retrieved user history sequence into a time-aware model to obtain a hidden-state sequence; feeding the hidden-state sequence into a multilayer perceptron for prediction and back-propagating the loss; and applying the trained model in a recommendation algorithm. The proposed method can process large-scale user history sequences in an online environment and effectively improves recommendation efficiency and performance.

Description

Search type recommendation method based on time and graph structure
Technical Field
The invention relates to the field of recommendation systems, in particular to a search type recommendation method based on time and graph structures.
Background
As the length of user behavior histories grows rapidly, helping users find the goods they are interested in among the enormous number of candidate goods on internet platforms has become a central problem in the field of recommender systems. Classical recommendation methods, such as collaborative filtering and factorization models, mainly model a user's overall interest in order to surface items of interest. Such models, however, rarely explore the user's needs within a specific period of time, which is in fact one of the most important factors shaping those needs. Recent work therefore tries to capture the user's sequential patterns with memory networks, recurrent neural networks, or temporal point process models, but when the user's history sequence is very long, most of these methods cannot be applied in a practical internet environment because of their computational complexity and storage requirements.
Classical recommendation algorithms typically infer the item a user wants from a rich user-item interaction history, usually organized as a table, a sequence, or a graph. However, as Qin Jiarui et al. observe in "User Behavior Retrieval for Click-Through Rate Prediction", published at the international conference on information retrieval, the ever-growing accumulation of user behavior data makes it very difficult to train a model on the whole user log under online computation constraints. One possible approach is to focus only on the user's recent history, substituting short-term history for long-term history when generating personalized recommendations. Recent papers point out, however (e.g., "Search-based User Interest Modeling with Lifelong Sequential Behavior Data for Click-Through Rate Prediction" at the conference on information and knowledge management), that these approaches fail to encode periodic and long-term dependencies in user demand, so attending only to recent history is in fact a sub-optimal solution. Both papers therefore suggest performing hard or soft retrieval over the entire user history. Qin Jiarui et al. further propose learning a query-construction module over context data by reinforcement learning and then using it to retrieve relevant behaviors with the BM25 relevance function; the search-based interest modeling paper instead matches related items by hard retrieval over item categories and by soft retrieval over item representation vectors. In this way, a model can use relevant items retrieved from the whole behavioral history to make recommendations. Existing retrieval-based methods, however, ignore the time intervals in the user's behavioral history.
Moreover, the retrieved related items generally carry only positive feedback (e.g., the user clicked on the item), which leaves an opportunity to consider related items with both positive and negative feedback, so that the entire user sequence is fully utilized.
Papers such as "What to Do Next: Modeling User Behaviors by Time-LSTM", published at the international joint conference on artificial intelligence, and Hosseini's "Recurrent Poisson Factorization for Temporal Recommendation" in the journal of knowledge and data engineering capture user behavior better by exploiting the time intervals between actions, which conventional sequence architectures cannot use. One direction controls the update of short-term and long-term interest through dedicated time gates; for example, Zhao Pengpeng's spatio-temporal LSTM model for next point-of-interest recommendation proposes a distance gate on top of a recurrent neural network to control the update of short-term and long-term points of interest. Another way to use time-interval information is to describe the user's sequence history with a point process, i.e., the discrete times in the user history are modeled in continuous time; for example, "The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process" by Mei Hongyuan et al., published at the conference on neural information processing systems, proposes a Hawkes process in which past events influence future predictions in complex and realistic ways. Besides the high computational complexity and time cost of these methods, feeding long user sequences directly into the model also introduces very large noise (interfering information), which makes it infeasible to capture rich sequential patterns directly from the user log.
In recommender systems, social networks, drug discovery, mathematical programming, and other fields, representation vectors learned for the nodes of a graph serve as a basic data module inside models, so deep learning methods have found wide application here. Mainstream graph representation learning mainly establishes generalized node-proximity relations, either by modeling the adjacency matrix of the graph or by random walks, and fits them to the observed proximity relations. DeepWalk is one of the earliest models based on random walks: it gathers a sufficient number of node sequences by randomly walking on the graph, treats them as sentences of a natural language, and feeds them to a skip-gram model to learn node embedding vectors. node2vec is a variant of DeepWalk that turns the walk into a biased random walk controlled by two parameters, p and q. A further line of work enhances graph representation learning with side information: one such model builds an item graph from the consecutive item-visit behaviors of hundreds of millions of internet users, samples node sequences by random walks, and trains node embedding vectors with a feature-aware skip-gram model. Wang Hongwei's "Learning Graph Representation with Generative Adversarial Nets", published in the journal of knowledge and data engineering, learns graph representations through an adversarial network: the generator performs a learnable, restartable policy walk on the graph and samples node pairs consisting of the walk's center node and the last node before restart, while the discriminator learns to judge whether a pair of nodes is connected by an edge; the node embedding vectors are learned through this adversarial process.
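The biased random walk underlying node2vec, as described above, can be sketched as follows; the adjacency-list representation and parameter names are illustrative, not taken from the patent:

```python
import random

def biased_walk(adj, start, length, p=1.0, q=1.0):
    """node2vec-style biased random walk (sketch).

    adj: dict mapping node -> list of neighbors.
    p penalizes immediately returning to the previous node;
    q biases toward (q < 1) or away from (q > 1) outward exploration.
    """
    walk = [start]
    prev = None
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj.get(cur, [])
        if not nbrs:
            break
        weights = []
        for nxt in nbrs:
            if nxt == prev:
                weights.append(1.0 / p)      # returning to the previous node
            elif prev is not None and nxt in adj.get(prev, []):
                weights.append(1.0)          # staying close (shared neighbor)
            else:
                weights.append(1.0 / q)      # moving outward
        prev = cur
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk
```

The sampled walks would then be fed, like sentences, to a skip-gram model to learn node embeddings.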
To summarize the state of research at home and abroad: the recommendation algorithms widely used in industry struggle with the excessive online computation incurred when user behavior histories become very long, and common algorithms do not exploit the behavior history effectively, ignoring the time intervals between user actions and the user's negative feedback. In addition, recent time-aware algorithms generally cannot cope with the large noise introduced by feeding time intervals into the model directly.
Those skilled in the art are therefore devoted to developing a model that handles user historical behavior more efficiently and perceives time more reasonably, and that can outperform conventional recommendation algorithms in more situations.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a retrieval-based prediction model that combines the feedback information on items in the user history data with the historical sequence information, so as to improve the model's performance when predicting for long-tail users.
To achieve the above object, the present invention provides a retrieval-based recommendation method built on time and graph structure, characterized in that the method comprises the following steps:
s101: collecting historical data of user behaviors on an internet platform, encoding the users and items in the historical data with an inner product neural network, and computing an embedding vector for each user and each item with that network;
s103: building a graph over the user's historical items, sampling from it, and learning on it to capture the association information among items, and feeding the item embedding vectors into the model as item features;
s105: based on the user and item features, retrieving items similar to a target item from the user history data with a search model;
s107: obtaining users similar to the target user with the search model, and retrieving items similar to the target item from the historical data of those similar users;
s109: feeding the retrieved user history sequence into a time-aware model and obtaining a hidden-state sequence after a gated recurrent unit and an attention mechanism, the user history sequence comprising the user's historical feedback information and historical time intervals;
s111: feeding the hidden-state sequence into a multilayer perceptron for prediction, and back-propagating according to a first loss function;
s113: applying the trained model in a recommendation algorithm.
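The steps above can be sketched as a single pipeline; every component name below is illustrative, standing in for the sub-models the patent describes, not an API from the patent itself:

```python
def run_pipeline(history, target_user, target_item, model):
    """Sketch of steps S101-S113; `model` bundles the sub-models."""
    # S101: encode users and items in the history into embedding vectors
    user_vecs, item_vecs = model.encode(history)
    # S103: graph over historical items -> item features
    item_feats = model.graph_embed(history, item_vecs)
    # S105: retrieve items similar to the target item from the user's history
    retrieved = model.retrieve_items(target_item, history[target_user], item_feats)
    # S107: retrieve similar users, then similar items from their histories
    for u in model.retrieve_users(target_user, history, user_vecs):
        retrieved += model.retrieve_items(target_item, history[u], item_feats)
    # S109: time-aware model turns the retrieved sequence into hidden states
    hidden = model.time_aware(retrieved)
    # S111: multilayer perceptron produces the prediction
    # (training would back-propagate the first loss function here)
    return model.predict(hidden)
```

Any object providing these six callables can be plugged in; the later steps detail what each one computes.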
Further, in step S101, the embedding vectors are computed with the inner product neural network as follows. (The formula appears only as an image in the original document and is not reproduced here.) In the formula, the embedding vector of the nth item is computed from the item's attribute vector through learnable weight parameters of the inner product neural network, where the pth entry of the attribute vector denotes the pth attribute of the nth item.
Further, in step S103, when the item embedding vectors are obtained, an extended algorithm that enhances graph representation learning is adopted, and a side-information-enhanced graph embedding model is used to model the item sequences in the user history.
Further, when the user's historical items are organized into a graph in step S103, positive and negative samples are drawn on the graph, the positive samples are resampled by random walk, and the item category is used as side information; the positive samples, negative samples, and side information are fed into the side-information-enhanced graph embedding model for learning, yielding the item embedding vectors.
Further, the random walk is a walking scheme used in node embedding learning; after the positive and negative samples are drawn, training proceeds with a second loss function. (Both the item-embedding formula and the second loss function appear only as images in the original document and are not reproduced here.) In them, the embedding vector of the nth item is learned together with a learnable weight parameter, and the loss is computed over negative samples drawn for each item i_m.
Further, in step S105, the search model uses an adaptive retrieval algorithm that supports both hard search and soft search: a hard search retrieves items whose category exactly matches that of the target item, while a soft search retrieves items by representation similarity without requiring the categories to match. The adaptive retrieval algorithm computes the similarity between the target item's category and the categories of the user's historical items. (The two similarity formulas appear only as images in the original document and are not reproduced here.) In them, a set of items whose category is similar to the target item's is selected from the mth user's history; H_m denotes the mth user's history, a subset of which denotes the user's recent history, and i_n is the identification number of the nth item. The formulas compare the category of the nth target item with the categories of candidate items through the items' attribute vectors, whose pth entry is the pth attribute of the nth item. A temperature parameter τ controls the required degree of similarity: the larger τ is, the closer to exact similarity the match must be, and τ is decreased from large to small over the course of training.
Further, in step S107, the similarity between users is computed from the items of the same category appearing in the histories of the target user and of other users; the similar users of the target user are then retrieved, and the historical behaviors of those similar users are supplemented as side information. (The two user-similarity formulas appear only as images in the original document and are not reproduced here.) In them, a set of users similar to the mth user is selected from the total user set U; H_m denotes the mth user's history and u_m the mth user's identification number, and the formulas compare the category profile of the mth target user with those of candidate users through the users' embedding vectors. A temperature parameter ι controls the required degree of similarity: the larger ι is, the closer to exact similarity the match must be, and ι is decreased sharply over the course of training.
Further, in step S109, a preliminary hidden state is computed by the gated recurrent unit, the final hidden state is then computed by the attention mechanism, and the set of hidden states is aggregated by an aggregation function.
The gated recurrent unit computes a candidate state c′_t and a gate f′_t (their formulas appear only as images in the original document) and updates the preliminary hidden state as
h′_t = f′_t ⊙ c′_t + (1 − f′_t) ⊙ h′_{t−1}.
The attention mechanism computes an attention weight α′_t and a candidate state c_t (again shown only as images in the original), rescales the gate as
f_t = α′_t · f′_t,
and produces the final hidden state
h_t = f_t ⊙ c_t + (1 − f_t) ⊙ h′_t.
The hidden states are then combined by an aggregation function (shown only as an image in the original). In these formulas, the input at step t concatenates the feature vector of the tth item with the user's feedback vector for that item; h′_t and h′_{t−1} are preliminary hidden states and h_t is the final hidden state; W_x and U_x are weight parameters; Δt is the time interval between the two items i_{t−1} and i_t; and de(·) is a heuristic decay function, taken as de(Δt) = 1/Δt when the time interval is short and de(Δt) = 1/log(e + Δt) when it is long. The aggregated result is a set of hidden states.
Further, in step S111, the multilayer perceptron is a neural network combining perceptron layers with nonlinear activation functions. (The network formula and the first loss function appear only as images in the original document and are not reproduced here.) The prediction is computed from the hidden-state set of the current user together with the hidden-state set of the current user's similar users.
Further, step S113 further comprises testing the recommendation algorithm, the test environment comprising recording test results on public data sets and in online experiments, and comparing the results of the proposed recommendation method against other models.
Compared with the prior art, the preferred embodiments of the invention have the following beneficial effects:
1. The proposed retrieval- and time-aware sequential recommendation algorithm transitions gradually from hard search to soft search, making the retrieval process more reasonable; compared with classical sequential recommendation algorithms (including time-aware ones), it can process large-scale user history sequences in an online environment and effectively improves recommendation efficiency and performance.
2. The method feeds the user's historical feedback into the model as side information, improving the model's effectiveness; it models the user-item structure with graph representation learning and captures the associations between items by learning the item embedding vectors.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic flow chart of a preferred embodiment of the present invention;
FIG. 2 is a comparison of experimental results on offline public data sets for a preferred embodiment of the present invention;
FIG. 3 is a comparison of experimental results in an online scenario for a preferred embodiment of the present invention;
FIG. 4 is a comparison of experimental results of two configurations of the proposed algorithm in the online scenario for a preferred embodiment of the present invention;
FIG. 5 is a diagram of the bucketed statistical comparison between the extended algorithm and the original algorithm in the online scenario for a preferred embodiment of the present invention.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be made clear and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
In the drawings, structurally identical elements are denoted by the same reference numerals, and structurally or functionally similar elements are denoted by similar reference numerals throughout. The sizes and thicknesses of the components shown in the drawings are drawn arbitrarily; the invention does not limit the size or thickness of any component, and thicknesses may be exaggerated where appropriate for clarity.
As shown in fig. 1, the retrieval-based recommendation method built on time and graph structure provided in an embodiment of the present invention comprises the following steps:
S101: collecting historical data of user behaviors on an internet platform, encoding the users and items in the historical data with an inner product neural network, and computing embedding vectors for the users and items with that network.
The embedding vectors are computed with the inner product neural network as follows. (The formula appears only as an image in the original document and is not reproduced here.) In the formula, the embedding vector of the nth item is computed from the item's attribute vector through learnable weight parameters of the inner product neural network, where the pth entry of the attribute vector denotes the pth attribute of the nth item.
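The encoding step can be illustrated with a product-based sketch. The patent's exact formula is an image and is not reproduced, so the field layout, weight shape, and activation below are assumptions:

```python
import numpy as np

def pnn_embed(field_vecs, W):
    """Inner-product-style item encoding (sketch; the patent's exact
    formula is not reproduced in the text, so this is illustrative).

    field_vecs: (F, d) array, one embedding per attribute field.
    W: (out_dim, F*d + F*(F-1)//2) weight matrix.
    Combines the raw field vectors with all pairwise inner products
    between fields into a single item embedding.
    """
    F, d = field_vecs.shape
    linear = field_vecs.reshape(-1)              # flattened field vectors
    inner = [field_vecs[i] @ field_vecs[j]       # pairwise inner products
             for i in range(F) for j in range(i + 1, F)]
    z = np.concatenate([linear, np.array(inner)])
    return np.tanh(W @ z)                        # nonlinear projection
```

With F attribute fields of dimension d, the input to the projection has F*d linear terms plus F*(F-1)/2 inner-product terms.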
S103: the method comprises the steps of mapping, sampling and learning historical articles of a user, capturing correlation information among the articles, and inputting embedded vectors of the articles into a model together as article features.
When the embedded vector of the object is obtained, an expansion algorithm for enhancing by graph characteristic learning is adopted, and the object sequence in the user history is modeled by using an additional information enhanced graph embedded model. When the historical articles of the user are mapped, sampling positive samples and negative samples in the map, resampling the positive samples through random walk, using the article types as additional information, inputting the positive samples, the negative samples and the additional information into an additional information enhancement map embedding model for learning, and obtaining the embedding vectors of the articles. When the positive sample is resampled by random walk, the random walk is a walk mode in node embedding learning, the second loss function is used for training after the positive sample and the negative sample are sampled, and an article embedding vector is calculated by adopting the following formula:
Figure BDA00039754385400000713
the second loss function is:
Figure BDA0003975438540000079
/>
wherein ,
Figure BDA00039754385400000710
an embedded vector representing the nth item>
Figure BDA00039754385400000711
Is a learnable weight parameter, <' > is>
Figure BDA00039754385400000712
Represents i m A negative sample of the article.
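The positive/negative sampling objective described above has the standard skip-gram-with-negative-sampling form, which can be sketched as follows (the patent's second loss function is an image, so this is the standard form it appears to describe, not a verbatim reproduction):

```python
import numpy as np

def sgns_loss(center, positive, negatives):
    """Skip-gram with negative sampling (sketch).

    center, positive: (d,) embedding vectors of a node and a
    co-occurring (positive) node from a random walk.
    negatives: (k, d) embeddings of k sampled negative nodes.
    Pushes the positive pair's inner product up and each
    negative pair's inner product down.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    loss = -np.log(sigmoid(center @ positive))       # attract positive
    for neg in negatives:
        loss -= np.log(sigmoid(-center @ neg))       # repel negatives
    return float(loss)
```

Gradients of this loss with respect to the embeddings drive the learning described in the step above.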
S105: similar items for the target item are retrieved in the user history data using a search model based on the user and item characteristics.
Items similar to the target item are retrieved from the user history by calculating a similarity between the category of the target item and the category of the items of the user history. The calculation of the similarity of the articles is self-adaptive, and the hard similarity (which requires the articles to be completely the same in extreme cases) which requires high is gradually transited to the soft similarity (which requires the articles to be completely dissimilar in extreme cases) along with the training process. The search model described above uses an adaptive search algorithm that supports both hard and soft searches, hard search being a search in which the types of search items are completely the same, soft search being a search in which the types of search items are completely dissimilar, the adaptive search algorithm calculating the similarity of the target item type and the user history item type as follows:
Figure BDA0003975438540000081
Figure BDA0003975438540000082
wherein ,
Figure BDA0003975438540000083
item set H representing a similar category to the target item in the history of the mth user m Represents the history of the mth user>
Figure BDA0003975438540000084
Representing recent history in the mth user, i n Indicates the identification number, which stands for the nth item>
Figure BDA0003975438540000085
Indicates the type of the nth target item>
Figure BDA0003975438540000086
Type, x, of search item representing nth object item i An attribute vector representing an item, wherein>
Figure BDA0003975438540000087
A pth item in the attribute vector representing the nth item; tau is used to control the degree of similarity, the greater tau, the closer tau is to similarity, and tau decreases from large to small during training.
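One way to realize the hard-to-soft transition is to blend an exact-category score with an embedding similarity under a weight driven by τ. The blending scheme and names below are illustrative assumptions, since the patent's formulas are not reproduced in the text:

```python
import numpy as np

def retrieve(target_cat, target_vec, history, tau):
    """Adaptive retrieval sketch (illustrative, not the patent's formula).

    history: list of (category, vector) pairs.
    With large tau the exact category match dominates (hard search);
    as tau is annealed toward 0, the embedding-similarity term takes
    over (soft search). Returns the index of the best-scoring item.
    """
    scores = []
    for cat, vec in history:
        hard = 1.0 if cat == target_cat else 0.0
        cos = vec @ target_vec / (np.linalg.norm(vec)
                                  * np.linalg.norm(target_vec) + 1e-9)
        soft = (cos + 1.0) / 2.0          # cosine mapped into [0, 1]
        w = tau / (1.0 + tau)             # annealing weight in [0, 1)
        scores.append(w * hard + (1.0 - w) * soft)
    return int(np.argmax(scores))
```

Annealing τ downward during training shifts the retrieval criterion smoothly from the hard to the soft regime.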
S107: similar users of the target user are obtained by using the search model, and similar items of the target items are retrieved from historical data of the similar users.
Similarity between the users is calculated by calculating the same kind of articles in the histories of the target user and other users, so that similar users of the target user are retrieved, and the historical behaviors of the similar users are supplemented as additional information.
The similarity between users is calculated as follows:
Figure BDA0003975438540000088
Figure BDA0003975438540000089
wherein ,
Figure BDA00039754385400000810
represents a set of users similar to the mth user, U represents the total users, H m Represents the history of the mth user, u m An identification number representing the mth user, and->
Figure BDA00039754385400000811
Indicates the kind of the mth target user>
Figure BDA00039754385400000812
Indicates the type of the retrieving user of the mth target user, based on the status of the retrieving user>
Figure BDA00039754385400000813
And an embedded vector representing the mth user, wherein the iota is used for controlling the similarity degree, the larger the iota is, the closer the iota is to the similarity, and the iota is greatly reduced in the training process.
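The same-category overlap between two histories can be sketched with a Jaccard measure over category sets; this is an illustrative stand-in for the patent's (unreproduced) user-similarity formulas:

```python
def user_similarity(hist_a, hist_b):
    """Similarity of two users from shared item categories (sketch):
    Jaccard overlap of the category sets of their histories."""
    a, b = set(hist_a), set(hist_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def similar_users(target_hist, all_hists, k=2):
    """Return the ids of the k users most similar to the target.

    all_hists: dict mapping user id -> list of item categories.
    """
    ranked = sorted(all_hists,
                    key=lambda u: user_similarity(target_hist, all_hists[u]),
                    reverse=True)
    return ranked[:k]
```

The histories of the returned users would then be searched for items similar to the target item, as step S107 describes.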
S109: and inputting the retrieved user history sequence into a time perception model, and obtaining a hidden state sequence after passing through a gate cycle unit and an attention mechanism, wherein the user history sequence comprises user history feedback information, user history time interval and other information.
And calculating by the gate cycle unit to obtain a preliminary result of the hidden state, further calculating by an attention mechanism to obtain a final hidden state, and aggregating the hidden state set by an aggregation function.
Wherein, the door cycle unit adopts the following calculation formula:
Figure BDA0003975438540000091
Figure BDA0003975438540000092
Figure BDA0003975438540000093
h′ t =f′ t ⊙c′ t +(1-f′ t )⊙h′ t-1
the attention mechanism adopts the following calculation formula:
Figure BDA0003975438540000094
Figure BDA0003975438540000095
Figure BDA0003975438540000096
f t =α′ t ·f′ t
Figure BDA0003975438540000097
h t =f t ⊙c t +(1-f t )⊙h′ t
the aggregation function is:
Figure BDA0003975438540000098
wherein ,
Figure BDA0003975438540000099
represents the characteristic vector of the tth item>
Figure BDA00039754385400000910
A user feedback vector representing the user's contribution to the tth item,
Figure BDA00039754385400000911
is->
Figure BDA00039754385400000912
and />
Figure BDA00039754385400000913
Vector of spliced, h' t and h′t-1 Is a preliminary hidden state, h t Is in a hidden state, W x and Ux For the weighting parameter, Δ t is two items i t-1 and it The time interval between, de (-) is a heuristic decay algorithm, taking de (Δ t) =1/Δ t when the time interval is short, taking de (Δ t) =1/log (e + Δ t) when the time interval is long,
Figure BDA00039754385400000914
Figure BDA00039754385400000915
representing a set of hidden states.
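The time-decayed gated update that the text does spell out can be sketched as follows; the short/long threshold and the way the gate is modulated by de(Δt) are assumptions, since only the decay function and the convex update h′_t = f ⊙ c + (1 − f) ⊙ h_{t−1} are given in the text:

```python
import numpy as np

def decay(dt, threshold=60.0):
    """Heuristic time decay de(.) from the text: 1/dt for short
    intervals, 1/log(e + dt) for long ones. The threshold separating
    the two regimes is an assumption."""
    return 1.0 / dt if dt < threshold else 1.0 / np.log(np.e + dt)

def time_gate_step(h_prev, c_t, f_t, dt):
    """One gated update h'_t = f (.) c_t + (1 - f) (.) h_prev, with
    the gate modulated by the time-interval decay so that events after
    a long gap contribute less (illustrative modulation)."""
    f = np.clip(f_t * decay(dt), 0.0, 1.0)   # decay-weighted gate
    return f * c_t + (1.0 - f) * h_prev      # convex combination
```

Iterating this step over the retrieved sequence produces the preliminary hidden states that the attention mechanism then refines.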
S111: and inputting the hidden state sequence into a multilayer perceptron to predict, and performing back propagation according to a first loss function.
The multilayer perceptron is a neural network combining an intelligent perceptron and a nonlinear function, and the neural network is as follows:
Figure BDA00039754385400000916
the first loss function is:
Figure BDA00039754385400000917
wherein ,
Figure BDA00039754385400000918
is a hidden state set of similar users of the current user, is based on>
Figure BDA00039754385400000919
Figure BDA0003975438540000101
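A minimal forward pass of such a perceptron, with a binary cross-entropy loss as the usual form of a click-prediction "first loss function", can be sketched as follows (the layer sizes and activations are assumptions, since the patent's network formula is an image):

```python
import numpy as np

def mlp_predict(h, W1, b1, w2, b2):
    """Two-layer perceptron with a nonlinear activation (sketch).
    h: aggregated hidden-state vector; returns a click probability."""
    z = np.tanh(W1 @ h + b1)                        # hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))     # sigmoid output

def log_loss(y, p, eps=1e-12):
    """Binary cross-entropy between label y in {0, 1} and prediction p;
    this is the quantity back-propagation would minimize."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
```

In training, gradients of log_loss with respect to W1, b1, w2, b2 (and the upstream models) would be back-propagated, as step S111 states.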
S113: and applying the trained algorithm model to a recommendation algorithm.
And testing the recommendation algorithm, wherein the testing comprises recording a test result based on a public data set and an online experiment, and comparing the result difference of the recommendation method and other models.
The retrieval-based recommendation method built on time and graph structure provided in the embodiment of the present invention has the following technical effects:
1. The invention provides a retrieval- and time-aware sequential recommendation algorithm that, compared with classical sequential recommendation algorithms (including time-aware ones), can process large-scale user history sequences in an online environment.
2. The invention provides an adaptive retrieval algorithm that transitions gradually from hard search to soft search, making the retrieval process more reasonable.
3. The invention feeds the user's historical feedback into the model as side information; models using this technique achieve better experimental results.
4. The invention models the user-item structure with graph representation learning and captures the associations between items by learning the item embedding vectors.
5. On three classes of classical data sets drawn from real internet applications, the method outperforms advanced domestic and foreign recommendation algorithms, and in an online recommendation scenario it effectively improves recommendation efficiency and performance.
As shown in fig. 2 to fig. 5, to verify the experimental results of the search-based recommendation method based on time and graph structure provided by the embodiment of the present invention, comparative experiments were run on three real offline public data sets and in a real online recommendation scenario. For the offline public data sets, the comparison baselines are three reference methods commonly used in recommendation: the deep interest network, the deep interest evolution network, and search-based user behavior modeling.
Comparative tests were carried out on three public data sets, Tmall, Alipay, and Taobao, and in the "Guess You Like" recommendation scenario of China Merchants Bank; the first three come from real internet platform applications, and "Guess You Like" is an online scenario of great industrial importance. In the offline data set comparison experiments, the main evaluation metrics are the area under the ROC curve (AUC), accuracy (ACC), and logarithmic loss (Logloss); "our algorithm-" is the configuration without the label trick, and "our algorithm+" is the configuration with graph representation learning. For the "Guess You Like" online recommendation scenario, the main metrics are click-through rate (CTR), AUC, average item click rate (AIC), and user click rate (CUR); when comparing the extended algorithm with the original algorithm, the average click count (CPC) is also used, where metrics with the w/o subscript are results obtained after removing users with no clicks. The extended experiment is reported with bucketed statistics, comparing the results of users with different history lengths under the two algorithms (the CTR difference is the extended algorithm's CTR minus the original algorithm's CTR); the overall comparative results are shown in 1, 2, 3, 4. It can be observed that, compared with the baseline recommendation algorithms, the invention obtains better results on the benefit metrics, indicating that the technical scheme of the invention is more effective; the extended algorithm also outperforms the original algorithm.
The present invention will be described in detail below with reference to preferred embodiments thereof.
As shown in fig. 1, the implementation process of the present invention includes the following steps:
Step one: collect real user behavior history on an internet platform, and encode users and items with an inner product neural network for unified processing.
In this step, the basic elements are defined as follows:
(1) user identification number u and item identification number i: vectors representing the identification number unique to each different user and item, where u_m denotes the identification number of the mth user and i_n denotes the identification number of the nth item;
(2) user attribute vector x_u and item attribute vector x_i: the attribute vectors of the user and the item, where x_{u_m}^p denotes the pth entry of the attribute vector of the mth user and x_{i_n}^p denotes the pth entry of the attribute vector of the nth item;
(3) user embedding vector e_u and item embedding vector e_i: the embedding vectors of the user and the item, where e_{u_m} denotes the embedding vector of the mth user and e_{i_n} denotes the embedding vector of the nth item;
On this basis, the inner product neural network computes the embedding vector:
Figure BDA0003975438540000115
where
Figure BDA0003975438540000116
and
Figure BDA0003975438540000117
denote weights learnable in the calculation process, and
Figure BDA0003975438540000118
is computed in the same way as above.
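The inner-product-network formula above is rendered only as an image in this text, so the following is a rough sketch of the idea rather than the patent's exact computation: an embedding is produced from an attribute vector through learnable weights, combining a linear term with an element-wise product term and a nonlinearity. All names, shapes, and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_product_embedding(x, W1, W2):
    """Sketch of an inner-product-network embedding.

    x  : (p,)   attribute vector of one user or item
    W1 : (d, p) first-order (linear) weights
    W2 : (d, p) weights feeding the element-wise product term
    """
    first_order = W1 @ x                        # linear signal
    second_order = (W2 @ x) * (W2 @ x)          # product-based interaction term
    return np.tanh(first_order + second_order)  # nonlinear embedding

p, d = 8, 4                                     # attribute and embedding sizes
x = rng.normal(size=p)
e = inner_product_embedding(x, rng.normal(size=(d, p)), rng.normal(size=(d, p)))
print(e.shape)  # (4,)
```

The same function would be applied to both user and item attribute vectors, each with its own weights.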
Step two: apply an extension algorithm enhanced with graph representation learning: build a graph from the user's historical items, sample and learn on it to capture the association information among items, and feed the learned item embedding vectors into the model as item features.
The item sequence in the user history is modeled with an additional-information-enhanced graph embedding model to obtain item embedding vectors. A graph is built over the items in the user history; positive and negative samples are drawn from the graph, with the positive samples resampled via random walks; the item category serves as the additional information. The positive samples, negative samples, and additional information are fed into the additional-information-enhanced graph embedding model to learn embedding vectors, which are then treated as additional item features. The walk used for resampling is the walk of node embedding learning, and the item embedding vector is computed as follows:
Figure BDA0003975438540000119
where
Figure BDA00039754385400001110
denotes the embedding vector of the nth item and
Figure BDA00039754385400001111
is a learnable weight parameter. The second loss function is:
Figure BDA00039754385400001112
where
Figure BDA00039754385400001113
denotes a negative sample of item i_m.
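Since the exact graph-building and sampling scheme is only described at a high level, a minimal uniform random-walk sampler over an item co-occurrence graph (the resampling step for positive samples) might be sketched as follows; the adjacency list and walk parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walks(adj, num_passes, walk_len):
    """Uniform random walks over an item graph given as an adjacency list."""
    walks = []
    for _ in range(num_passes):
        for start in adj:                       # one walk from every node
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                nbrs = adj[cur]
                if not nbrs:                    # dead end: stop this walk early
                    break
                cur = nbrs[rng.integers(len(nbrs))]
                walk.append(cur)
            walks.append(walk)
    return walks

# Toy co-occurrence graph over four items.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj, num_passes=2, walk_len=4)
print(len(walks))  # 8 (two passes over four start nodes)
```

Windows over these walks would supply positive pairs, with negatives drawn from non-neighbors, before the embedding model with category side information is trained.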
Step three: based on the user and item features obtained in steps one and two, apply a search-based model to the target user and target item, using an adaptive algorithm to retrieve, from the user's history, items with high similarity to the target item.
The user and item encodings obtained in steps one and two are used by an adaptive search-based module to retrieve items in the user history that are similar to the target item. Similar items are retrieved by computing the similarity between the category of the target item and the categories of the user's historical items. The similarity computation is adaptive: as training proceeds, it transitions gradually from a demanding hard similarity (in the extreme, requiring items to be identical) to a soft similarity (in the extreme, allowing items to be entirely dissimilar). Under this adaptive algorithm, the whole framework is trained end to end. Besides computing the similarity between the target item's category and the categories of historical items, the adaptive algorithm is also used to find similar users: the similarity between users is computed from the items of the same category in the histories of the target user and other users, users similar to the target user are retrieved, and their historical behavior is added as supplementary information, which yields better results.
On this basis, the basic elements are defined as follows:
(1) item attribute vector x_i: the attribute vector of the item, where x_{i_n}^p denotes the pth entry of the attribute vector of the nth item;
(2) user embedding vector e_u: the embedding vector of the user, where e_{u_m} denotes the embedding vector of the mth user;
(3) user category c_u and item category c_i: the categories of users and items, where
Figure BDA0003975438540000123
denotes the category of the mth target user,
Figure BDA0003975438540000124
denotes the search-user category of the mth target user,
Figure BDA0003975438540000125
denotes the category of the nth target item, and
Figure BDA0003975438540000126
denotes the search-item category of the nth target item;
(4) user identification number u and item identification number i: vectors representing the identification number unique to each different user and item, where u_m denotes the identification number of the mth user and i_n denotes the identification number of the nth item;
On this basis, similar items are retrieved:
Figure BDA0003975438540000127
Figure BDA0003975438540000128
where
Figure BDA0003975438540000129
denotes the set of items in the mth user's history whose category is similar to the target item's, H_m denotes the history of the mth user,
Figure BDA00039754385400001210
denotes the recent history of the mth user, and τ controls the degree of hard versus soft similarity: the larger τ is, the closer the behavior is to hard similarity, and τ is decreased from large to small during training.
The method for finding similar users is analogous:
Figure BDA00039754385400001211
Figure BDA00039754385400001212
where
Figure BDA00039754385400001213
denotes the set of users similar to the mth user, U denotes the set of all users, and ι controls the degree of hard versus soft similarity: the larger ι is, the closer the behavior is to hard similarity, and ι decreases during training.
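A minimal sketch of the adaptive hard-to-soft retrieval idea (the patent's exact similarity formulas are rendered as images): similarity is modeled here as a softmax over category-match scores scaled by a temperature-like parameter τ, which is decayed from large to small during training. The softmax form and the binary match score are assumptions for illustration.

```python
import numpy as np

def soft_retrieve(target_cat, history_cats, tau, top_k):
    """Retrieve history items by category similarity to the target item.

    Large tau -> weights concentrate on exact category matches (hard search);
    small tau -> weights spread over all history items (soft search).
    Decaying tau during training reproduces the hard-to-soft transition.
    """
    match = np.array([1.0 if c == target_cat else 0.0 for c in history_cats])
    logits = tau * match
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                    # softmax over the history
    return np.argsort(-weights)[:top_k], weights

history = ["book", "food", "book", "toy", "book"]
idx, w = soft_retrieve("book", history, tau=10.0, top_k=3)
print(sorted(idx.tolist()))  # [0, 2, 4] — the three "book" positions
```

With a large τ early in training this behaves like hard search over identical categories; lowering τ over epochs smoothly relaxes it, which is what makes the framework trainable end to end.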
Step four: similarly to step three, use the adaptive algorithm to obtain users with high similarity to the target user, and retrieve items with high similarity to the target item from those users' histories.
For a given user u_m and a given target item i_n, step three retrieves a similar item set
Figure BDA0003975438540000131
and similar users
Figure BDA0003975438540000132
whose retrieved histories can be written as
Figure BDA0003975438540000133
For each item in
Figure BDA0003975438540000134
user u_m's feedback on the item (clicked or not, etc.) is known. The item's feature vector is concatenated with the user's click feedback and fed into a gated recurrent unit to learn useful sequential patterns. After the gated recurrent unit, an attention mechanism models the change in user interest, taking the time intervals into account as a feature. The hidden state sequence produced by the gated recurrent unit and the attention mechanism is passed through a designed aggregation function to obtain a vector representing the sequence. On this basis, the following basic elements are defined:
(1) item feature vector
Figure BDA0003975438540000135
the feature vector of the tth item;
(2) user feedback vector
Figure BDA0003975438540000136
the embedding vector obtained from the user's feedback on the tth item;
Inside the gated recurrent unit, the hidden state is computed by:
Figure BDA0003975438540000137
Figure BDA0003975438540000138
Figure BDA0003975438540000139
h'_t = f'_t ⊙ c'_t + (1 - f'_t) ⊙ h'_{t-1}
In these formulas, W_x and U_x are learnable weight parameters,
Figure BDA00039754385400001310
is the concatenation of
Figure BDA00039754385400001311
and
Figure BDA00039754385400001312
and h'_t and h'_{t-1} are preliminary hidden states. After the gated recurrent unit, the preliminary hidden state is further processed by the following attention mechanism:
Figure BDA00039754385400001313
Figure BDA00039754385400001314
Figure BDA00039754385400001315
f_t = α'_t · f'_t
Figure BDA00039754385400001316
h_t = f_t ⊙ c_t + (1 - f_t) ⊙ h'_t
In these formulas, W_x is a learnable weight parameter and h_t is the hidden state. Δt is the time interval between two items i_{t-1} and i_t, and de(·) is a heuristic decay function, taken as de(Δt) = 1/Δt when the time interval is short and de(Δt) = 1/log(e + Δt) when the time interval is long.
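Since the gate and attention formulas above are rendered as images, the following is only an illustrative sketch of the time-aware update: a GRU-style gate damped by the heuristic decay de(Δt) described in the text. The gate shapes, the 60-second threshold separating "short" from "long" intervals, and the way the decay rescales the gate are all assumptions.

```python
import math
import numpy as np

def time_decay(dt, short_threshold=60.0):
    """Heuristic decay from the text: 1/dt for short gaps, 1/log(e+dt) for
    long ones. The 60-second threshold is an assumption for illustration."""
    return 1.0 / dt if dt < short_threshold else 1.0 / math.log(math.e + dt)

def step(h_prev, x, dt, Wf, Uf, Wc, Uc):
    """One GRU-style update whose gate is rescaled by the time decay."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    f = sigmoid(Wf @ x + Uf @ h_prev)           # preliminary gate f'_t
    c = np.tanh(Wc @ x + Uc @ h_prev)           # candidate state c'_t
    f = f * time_decay(dt)                      # attention-style rescaling by de(Δt)
    return f * c + (1.0 - f) * h_prev           # h_t = f ⊙ c + (1 - f) ⊙ h_{t-1}

rng = np.random.default_rng(2)
d = 4
h = np.zeros(d)
for dt in [5.0, 3600.0]:                        # one short and one long gap
    h = step(h, rng.normal(size=d), dt,
             *(rng.normal(size=(d, d)) for _ in range(4)))
print(h.shape)  # (4,)
```

The effect is that items separated from their successor by a long gap contribute less to the new hidden state, which is the time-awareness the step describes.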
For each set
Figure BDA00039754385400001317
we obtain a series of hidden states
Figure BDA00039754385400001318
Each set of hidden states can be aggregated by an aggregation function as follows:
Figure BDA0003975438540000141
where
Figure BDA0003975438540000142
For users similar to the current user, the same method is used to compute
Figure BDA0003975438540000143
where
Figure BDA0003975438540000144
Step five: feed the retrieved user history sequences into a time-aware model, with the user's historical feedback and historical time intervals as input features.
The neural network ultimately used for prediction combines linear perceptron layers with nonlinear activation functions, as follows:
Figure BDA0003975438540000145
the first loss function used therein is:
Figure BDA0003975438540000146
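A minimal sketch of the final prediction step under stated assumptions (the actual network and first loss function are rendered as images above): a two-layer perceptron maps the aggregated hidden state to a click probability, trained with binary cross-entropy, the usual choice for click prediction of this kind.

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_predict(h, W1, b1, w2, b2):
    """Two-layer perceptron: ReLU hidden layer, then a sigmoid click score."""
    z = np.maximum(0.0, W1 @ h + b1)            # nonlinear hidden layer
    logit = w2 @ z + b2
    return 1.0 / (1.0 + np.exp(-logit))         # predicted click probability

def bce_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy, a standard 'first loss' for click prediction."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

d, hdim = 8, 16                                 # hidden-state and layer sizes
h = rng.normal(size=d)
p = mlp_predict(h, rng.normal(size=(hdim, d)), np.zeros(hdim),
                rng.normal(size=hdim), 0.0)
print(0.0 < p < 1.0)  # True
```

Back-propagating this loss through the MLP, the attention-augmented recurrent unit, and the adaptive search module is what makes the framework end to end.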
and step six, inputting the obtained hidden state into a multilayer perceptron for prediction, and performing back propagation according to the first loss function.
Step seven: apply the trained algorithm model to the public data sets and an online experiment, record the experimental results, and compare them with the results of other models.
For the search-based recommendation method based on time and graph structure provided by the embodiment of the invention, comparative experiments were performed on three real offline public data sets and in a real online recommendation scenario; the test results are shown in fig. 2 to fig. 5. The offline public data sets are three applications from real internet platforms, namely Tmall, Alipay, and Taobao, and the online scenario is the "Guess You Like" recommendation scenario of China Merchants Bank (an online scenario of great industrial importance). For the offline public data sets, the comparison baselines are three reference methods commonly used in recommendation: the deep interest network, the deep interest evolution network, and search-based user behavior modeling. In the offline data set comparison experiments, the main evaluation metrics are the area under the ROC curve (AUC), accuracy (ACC), and logarithmic loss (Logloss); "our algorithm-" is the configuration without the label trick, and "our algorithm+" is the configuration with graph representation learning. For the "Guess You Like" online recommendation scenario, the main metrics are click-through rate (CTR), AUC, average item click rate (AIC), and user click rate (CUR); when comparing the extended algorithm with the original algorithm, the average click count (CPC) is also used, where metrics with the w/o subscript are results obtained after removing users with no clicks. The extended experiment is reported with bucketed statistics, comparing the results of users with different history lengths under the two algorithms (the CTR difference is the extended algorithm's CTR minus the original algorithm's CTR). As can be observed from fig. 2 to fig. 5, the invention obtains better results on the benefit metrics than the baseline recommendation algorithms, indicating that the technical scheme of the invention is more effective; the extended algorithm also outperforms the original algorithm.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A search-based recommendation method based on time and graph structure is characterized by comprising the following steps:
s101: collecting historical data of internet platform user behaviors, coding the user and an article in the historical data by using an inner product neural network, and calculating an embedded vector of the user and an embedded vector of the article by using the inner product neural network;
s103: building a graph over the user's historical items, sampling and learning on the graph to capture the association information among items, and inputting the learned item embedding vectors into a model as item features;
s105: based on the user and the item features, using a search model to retrieve similar items of a target item in the user history data;
s107: acquiring similar users of the target user by using the search model, and retrieving similar items of the target item from historical data of the similar users;
s109: inputting the retrieved user history sequence into a time-aware model, and obtaining a hidden state sequence after a gated recurrent unit and an attention mechanism, wherein the user history sequence comprises user history feedback information and user history time intervals;
s111: inputting the hidden state sequence into a multilayer perceptron to predict, and performing back propagation according to a first loss function;
s113: and applying the trained algorithm model to a recommendation algorithm.
2. The method of claim 1, wherein in step S101, when calculating the embedding vector using the inner product neural network, the following is adopted:
Figure FDA0003975438530000011
where
Figure FDA0003975438530000012
denotes the embedding vector of the nth item,
Figure FDA0003975438530000013
denotes the attribute vector of the nth item,
Figure FDA0003975438530000014
and
Figure FDA0003975438530000015
denote weights in the calculation process, and
Figure FDA0003975438530000016
denotes the pth entry of the attribute vector of the nth item.
3. The method according to claim 2, wherein in step S103, when obtaining the embedding vector of the item, an expansion algorithm enhanced by graph feature learning is adopted, and an additional information enhanced graph embedding model is used to model the item sequence in the user history.
4. The method according to claim 3, wherein, when a graph is built over the user's historical items in step S103, positive and negative samples are sampled in the graph, the positive samples are resampled by random walk, the item category is used as additional information, and the positive samples, the negative samples, and the additional information are input into the additional-information-enhanced graph embedding model for learning, so as to obtain the embedding vector of the item.
5. The method of claim 4, wherein the random walk is a walk in node-embedding learning, trained using the second loss function after the positive and negative samples are sampled, the item-embedding vector being calculated using the following formula:
Figure FDA0003975438530000021
the second loss function is:
Figure FDA0003975438530000022
where
Figure FDA0003975438530000023
denotes the embedding vector of the nth item,
Figure FDA0003975438530000024
is a weight parameter, and
Figure FDA0003975438530000025
denotes a negative sample of item i_m.
6. The method according to claim 5, wherein in step S105 the search model uses an adaptive search algorithm supporting hard search and soft search, where hard search retrieves only items of the same category and soft search also admits items of different categories; the adaptive search algorithm calculates the similarity between the target item's category and the categories of the user's historical items as follows:
Figure FDA0003975438530000026
Figure FDA0003975438530000027
where
Figure FDA0003975438530000028
denotes the set of items in the mth user's history whose category is similar to the target item's, H_m denotes the history of the mth user,
Figure FDA0003975438530000029
denotes the recent history of the mth user, i_n denotes the identification number of the nth item,
Figure FDA00039754385300000210
denotes the category of the nth target item,
Figure FDA00039754385300000211
denotes the search-item category of the nth target item, x_i denotes the attribute vector of an item, and
Figure FDA00039754385300000212
denotes the pth entry of the attribute vector of the nth item; τ controls the degree of similarity: the larger τ is, the closer the behavior is to hard similarity, and τ decreases from large to small during training.
7. The method according to claim 6, wherein in step S107 the similarity between users is calculated from the items of the same category in the histories of the target user and other users, similar users of the target user are then retrieved, and the historical behavior of the similar users is supplemented as additional information; the similarity between users is calculated as follows:
Figure FDA0003975438530000031
Figure FDA0003975438530000032
where
Figure FDA00039754385300000311
denotes the set of users similar to the mth user, U denotes the set of all users, H_m denotes the history of the mth user, u_m denotes the identification number of the mth user,
Figure FDA00039754385300000312
denotes the category of the mth target user,
Figure FDA00039754385300000313
denotes the search-user category of the mth target user, and
Figure FDA00039754385300000314
denotes the embedding vector of the mth user; ι controls the degree of similarity: the larger ι is, the closer the behavior is to hard similarity, and ι decreases during training.
8. The method according to claim 7, wherein in step S109 a preliminary hidden state result is obtained by the gated recurrent unit, the preliminary result is further processed by the attention mechanism, and the set of hidden states is aggregated by an aggregation function;
wherein the gated recurrent unit adopts the following calculation formulas:
Figure FDA0003975438530000033
Figure FDA0003975438530000034
Figure FDA0003975438530000035
h'_t = f'_t ⊙ c'_t + (1 - f'_t) ⊙ h'_{t-1}
the attention mechanism adopts the following calculation formula:
Figure FDA0003975438530000036
Figure FDA0003975438530000037
Figure FDA0003975438530000038
f_t = α'_t · f'_t
Figure FDA0003975438530000039
h_t = f_t ⊙ c_t + (1 - f_t) ⊙ h'_t
the aggregation function is:
Figure FDA00039754385300000310
where
Figure FDA00039754385300000315
denotes the feature vector of the tth item,
Figure FDA00039754385300000316
denotes the user feedback vector for the tth item,
Figure FDA00039754385300000317
is the concatenation of
Figure FDA00039754385300000318
and
Figure FDA00039754385300000319
h'_t and h'_{t-1} are preliminary hidden states, h_t is the hidden state, W_x and U_x are weight parameters, Δt is the time interval between two items i_{t-1} and i_t, and de(·) is a heuristic decay function, taken as de(Δt) = 1/Δt when the time interval is short and de(Δt) = 1/log(e + Δt) when the time interval is long;
Figure FDA0003975438530000041
Figure FDA0003975438530000042
denote the set of hidden states.
9. The method according to claim 8, wherein in step S111 the multilayer perceptron is a neural network combining linear perceptron layers and nonlinear activation functions, the neural network being:
Figure FDA0003975438530000043
the first loss function is:
Figure FDA0003975438530000044
where
Figure FDA0003975438530000046
is the hidden state set of users similar to the current user,
Figure FDA0003975438530000047
Figure FDA0003975438530000045
10. The method of claim 9, wherein step S113 further comprises testing the recommendation algorithm, the testing comprising recording test results on public data sets and in online experiments, and comparing the results of the recommendation algorithm with those of other models.
CN202211533857.0A 2022-12-01 2022-12-01 Search type recommendation method based on time and graph structure Active CN115953215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211533857.0A CN115953215B (en) 2022-12-01 2022-12-01 Search type recommendation method based on time and graph structure


Publications (2)

Publication Number Publication Date
CN115953215A true CN115953215A (en) 2023-04-11
CN115953215B CN115953215B (en) 2023-09-05

Family

ID=87286678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211533857.0A Active CN115953215B (en) 2022-12-01 2022-12-01 Search type recommendation method based on time and graph structure

Country Status (1)

Country Link
CN (1) CN115953215B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191240A (en) * 2018-08-14 2019-01-11 北京九狐时代智能科技有限公司 A kind of method and apparatus carrying out commercial product recommending
CN109522474A (en) * 2018-10-19 2019-03-26 上海交通大学 Recommended method based on interaction sequence data mining depth user's similitude
CN110516160A (en) * 2019-08-30 2019-11-29 中国科学院自动化研究所 User modeling method, the sequence of recommendation method of knowledge based map
CN111127142A (en) * 2019-12-16 2020-05-08 东北大学秦皇岛分校 Article recommendation method based on generalized neural attention
CN113190751A (en) * 2021-05-10 2021-07-30 南京理工大学 Recommendation algorithm for generating fused keywords
CN113722583A (en) * 2021-07-31 2021-11-30 华为技术有限公司 Recommendation method, recommendation model training method and related products
US20220207587A1 (en) * 2020-12-30 2022-06-30 Beijing Wodong Tianjun Information Technology Co., Ltd. System and method for product recommendation based on multimodal fashion knowledge graph
CN114693397A (en) * 2022-03-16 2022-07-01 电子科技大学 Multi-view multi-modal commodity recommendation method based on attention neural network
CN115221387A (en) * 2022-07-13 2022-10-21 全拓科技(杭州)股份有限公司 Enterprise information integration method based on deep neural network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LIN, JING et al.: "FISSA: Fusing Item Similarity Models with Self-Attention Networks for Sequential Recommendation", RecSys 2020: 14th ACM Conference on Recommender Systems, pages 130-139 *
YANGYANG XU et al.: "SSSER: Spatiotemporal Sequential and Social Embedding Rank for Successive Point-of-Interest Recommendation", IEEE Access, vol. 7, pages 156804-156823, XP011757021, DOI: 10.1109/ACCESS.2019.2950061 *
PAN Guanyuan: "Improvement of the neural collaborative filtering model and its application in recommendation ***", China Master's Theses Full-text Database, Information Science and Technology, no. 1, pages 138-3336 *
JIANG Zhongyuan et al.: "A survey of community privacy in social networks", Chinese Journal of Network and Information Security, vol. 7, no. 2, pages 10-21 *
JIA Weitao: "Research on recommendation algorithms based on users' dynamic interests", China Master's Theses Full-text Database, Information Science and Technology, no. 5, pages 138-1508 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116522011A (en) * 2023-05-16 2023-08-01 深圳九星互动科技有限公司 Big data-based pushing method and pushing system
CN116522011B (en) * 2023-05-16 2024-02-13 深圳九星互动科技有限公司 Big data-based pushing method and pushing system

Also Published As

Publication number Publication date
CN115953215B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
Wu et al. Modeling the evolution of users’ preferences and social links in social networking services
Lin et al. A survey on reinforcement learning for recommender systems
Chen et al. Hybrid-order gated graph neural network for session-based recommendation
CN113806630B (en) Attention-based multi-view feature fusion cross-domain recommendation method and device
Chulyadyo et al. A personalized recommender system from probabilistic relational model and users’ preferences
CN105760443A (en) Project recommending system, device and method
CN112632296B (en) Knowledge graph-based paper recommendation method and system with interpretability and terminal
CN114639483A (en) Electronic medical record retrieval method and device based on graph neural network
Gui et al. Mention recommendation in twitter with cooperative multi-agent reinforcement learning
CN115953215B (en) Search type recommendation method based on time and graph structure
Abugabah et al. Dynamic graph attention-aware networks for session-based recommendation
Liu et al. Efficient hyperparameters optimization through model-based reinforcement learning and meta-learning
Yang et al. Intellitag: An intelligent cloud customer service system based on tag recommendation
CN113449176A (en) Recommendation method and device based on knowledge graph
Yan et al. Modeling long-and short-term user behaviors for sequential recommendation with deep neural networks
CN113256024A (en) User behavior prediction method fusing group behaviors
He et al. Using Cognitive Interest Graph and Knowledge-activated Attention for Learning Resource Recommendation
Xu et al. Similarmf: a social recommender system using an embedding method
Jia et al. A self-supervised learning framework for sequential recommendation
Hou et al. Hierarchical Transition-Aware Graph Attention Network for Session-based Recommendation
Yan et al. A short-term forecasting model with inhibiting normal distribution noise of sale series
Anari et al. Optimizing membership functions using learning automata for fuzzy association rule mining
CN116521972B (en) Information prediction method, device, electronic equipment and storage medium
Lu et al. Current Interest Enhanced Graph Neural Networks for Session-based Recommendation
Hartatik et al. A Comparison of BAT and firefly algorithm in neighborhood based collaborative filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant