CN114282687A - Multi-task time sequence recommendation method based on factorization machine - Google Patents

Multi-task time sequence recommendation method based on factorization machine

Info

Publication number
CN114282687A
Authority
CN
China
Prior art keywords
time sequence
embedding
model
feature
result
Prior art date
Legal status
Granted
Application number
CN202111667759.1A
Other languages
Chinese (zh)
Other versions
CN114282687B (en)
Inventor
卢暾
应亦周
顾宁
李东胜
张鹏
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University
Priority to CN202111667759.1A
Publication of CN114282687A
Application granted
Publication of CN114282687B
Legal status: Active

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention belongs to the technical field of time-series recommendation, and particularly relates to a multi-task time-series recommendation method based on a factorization machine. The method comprises the following specific steps: process the data according to the requirements of the different recommendation tasks and obtain a similarity matrix; take the user's static features and the dynamic features screened by the similarity matrix as the inputs of the static and dynamic tasks of the model; pass each task through an embedding layer, an attention mechanism, a factorization interaction layer, and a linear layer to obtain its final result; update the model parameters according to the results and the loss, and keep training until the convergence stopping condition is reached; save the model, load new data into it, and obtain the top-N recommendation. The invention aims to improve the practicality and accuracy of the factorization machine model in time-series recommendation scenarios by combining multi-task learning, the attention mechanism, and other techniques with the factorization machine, thereby improving its effect in practical time-series recommendation tasks.

Description

Multi-task time sequence recommendation method based on factorization machine
Technical Field
The invention belongs to the technical field of time-series recommendation, and particularly relates to a multi-task time-series recommendation method based on a factorization machine.
Background
The factorization machine (FM) [1] is among the most classical methods in the recommendation field. Proposed by Steffen Rendle in 2010, it solves the feature-combination problem on large-scale sparse data, and its recommendation effect is a marked improvement over traditional algorithms such as linear regression (LR) and SVM-class methods [2]. To better capture interactions between features, many variants of the factorization machine have been derived, such as NFM [3], DeepFM [4], and xDeepFM [5].
However, in practical application scenarios, with the rise of the e-commerce and music-entertainment industries, the temporal order of user behaviors has become ever more important, so time-series recommendation has attracted extensive attention inside and outside industry in recent years; its core problem is to design recommendation models around the temporal order of user behaviors. The conventional factorization machine and its corresponding depth and width extensions do not consider temporal information. Meanwhile, most existing sequential recommendation algorithms, such as the classical ARMA [6], RNN [7], and GRU [8] models, focus on the transition structure between successive actions and the influence of historical events on the current prediction; they largely ignore the user's fixed context information, the feature interactions among that information, and its interactions with temporal features, and feature transfer over long sequences requires large data storage space and causes large-scale computation.
Disclosure of Invention
The invention aims to provide a factorization-machine-based time-series recommendation method that requires little computation and delivers a good recommendation effect.
The invention provides a time-series recommendation method based on a factorization machine, which combines the first- and second-order interactions of the original factorization machine model [1] with a time-series model [9]. The method learns the context information of the non-temporal features on the one hand and the temporal information contained in the time sequence on the other, then combines the two parts of information for further learning to improve the recommendation effect; from the multi-dimensional angles of breadth and depth, it solves the problem that the original factorization machine performs poorly on time-series recommendation. The method comprises the following specific steps.
Step 1: arranging and splitting original data into non-time-sequence static context and historical time sequence information of a user, and generating a user-article matrix on the basis; the user input characteristics are divided into two parts, namely user static characteristics and user time sequence characteristics: [ s ] of1,s2,...,sn,d1,d2,...,dm,target];
Wherein s is the initial letter of English static, n dimensions are provided in total to represent non-time sequence characteristics, d is the initial letter of English dynamic to represent time sequence characteristics, m dimensions are provided in total, and target represents a target object.
Step 2: and (3) performing importance screening on the time sequences segmented in the step (1) according to cosine similarity of the user-item matrix in the step (1), and reserving the most relevant time sequences within a limited time sequence input length:
Figure BDA0003451554300000021
where sim is the similarity function, I is the article, I1,I2Two items are represented, r represents a score,
Figure BDA0003451554300000022
and respectively scoring the certain item i by the user a and the user b.
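To make the screening concrete, here is a minimal Python sketch (our own illustration, not code from the patent; the function names and the toy matrix are assumptions) that computes the item-item cosine similarity from a user-item matrix and keeps only the history items most similar to the target item:

```python
import numpy as np

def item_cosine_similarity(user_item: np.ndarray) -> np.ndarray:
    """Cosine similarity between the item columns of a user-item rating matrix."""
    norms = np.linalg.norm(user_item, axis=0, keepdims=True)
    norms[norms == 0.0] = 1.0                  # guard against unrated items
    unit = user_item / norms
    return unit.T @ unit                       # shape: (num_items, num_items)

def screen_history(history, target_item, sim, max_len):
    """Keep the max_len history items most similar to the target item."""
    ranked = sorted(history, key=lambda i: sim[target_item, i], reverse=True)
    return ranked[:max_len]

# Toy usage: 3 users x 4 items; keep the 2 history items closest to item 0.
R = np.array([[5, 3, 0, 1], [4, 0, 0, 1], [1, 1, 5, 4]], dtype=float)
print(screen_history([1, 2, 3], target_item=0, sim=item_cosine_similarity(R), max_len=2))
```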
Step 3: Perform two-dimensional feature embedding on the non-temporal features and the screened temporal features:

$$E_{static} = \mathrm{Embedding}(D_{feature}, D_{embedding})$$
$$w_i = \mathrm{Embedding}_w(f_i; \theta)$$
$$v_i = \mathrm{Embedding}_v(f_i; \theta)$$

where $E_{static}$ is the result of embedding the non-temporal features, $D_{feature}$ denotes the feature dimension that needs embedding, $D_{embedding}$ denotes the dimension of the embedding result to be obtained, $f$ (the initial of "feature") denotes a feature, $\theta$ is a hyperparameter of the embedding process, $w_i$ are the first-order parameters the model needs to learn, and $v_i$ are the second-order parameters the model needs to learn.
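As an illustration of how the two embedding tables can be realized, here is a hedged PyTorch sketch (class and argument names are ours; it assumes each feature value has been discretized to an integer id): the first table yields the scalar first-order weights $w_i$, the second the $k$-dimensional latent vectors $v_i$.

```python
import torch
import torch.nn as nn

class FMEmbedding(nn.Module):
    """First-order weights w_i (dimension 1) and second-order latent vectors v_i
    (dimension k), both realized as lookup tables over discrete feature ids."""
    def __init__(self, num_feature_values: int, k: int):
        super().__init__()
        self.first_order = nn.Embedding(num_feature_values, 1)   # yields w_i
        self.second_order = nn.Embedding(num_feature_values, k)  # yields v_i

    def forward(self, feature_ids: torch.Tensor):
        # feature_ids: (batch, n) integer ids of the active features
        return self.first_order(feature_ids), self.second_order(feature_ids)
```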
Step 4: Use the factorization machine to perform first- and second-order feature interaction on the non-temporal embedding results:

$$\hat{y}_{static} = w_0 + \sum_{i=1}^{N} w_i f_i + \sum_{i=1}^{N} \sum_{j=i+1}^{N} \langle v_i, v_j \rangle\, f_i f_j$$

where $f$, as above, denotes the input features, $w$ are the weights of the linear part, $w_0$ is the global weight, $N$ denotes the number of input features, $k$ denotes the dimension of the latent vector $V$ used to obtain the second-order interactions in the factorization machine, and $\hat{y}_{static}$ is the prediction result for the non-temporal features $f_i$.
And 5: one-dimensional embedding is performed on the timing characteristics:
Etime=Embedding(Dfeature,Dembedding)
Figure BDA0003451554300000027
wherein E istimeIndicating the result of the timing characteristic Embedding, similar to that in step 3, DfeatureAnd DEmbeddingRespectively representing the feature dimension needing Embedding and the dimension of the finally obtained Embedding result, wherein theta is a hyper parameter of Embedding, t represents a time sequence feature, and u is a parameter needing to be learned by the model.
Step 6: the weights of the embedded items in the time sequence are extracted by using a self-attention mechanism, and then a factorizer is pushed for interaction:
Figure BDA0003451554300000028
Figure BDA0003451554300000029
wherein T is Attention weight, K, Q, V are parameters in the self-Attention mechanism, WQ、Wk、WVLinear transformation parameters of Q, K, V, dkDenotes the dimension of K, finally
Figure BDA0003451554300000031
Is tiThe prediction result of the time.
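A compact sketch of this attention step, assuming single-head scaled dot-product self-attention as in the formula above (the class name and the bias-free linear maps are our choices):

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """T = softmax(Q K^T / sqrt(d_k)) V, with Q, K, V linear maps of the input."""
    def __init__(self, d_model: int, d_k: int):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_k, bias=False)  # W^Q
        self.w_k = nn.Linear(d_model, d_k, bias=False)  # W^K
        self.w_v = nn.Linear(d_model, d_k, bias=False)  # W^V
        self.d_k = d_k

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        # e: (batch, seq_len, d_model) embedded time-series items
        q, k, v = self.w_q(e), self.w_k(e), self.w_v(e)
        weights = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)
        return weights @ v
```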
And 7: splicing the embedding of the non-time sequence and the time sequence characteristics, and acquiring the combination characteristics of the non-time sequence and the time sequence by using a self-attention mechanism and a factorization machine:
Ecross=[Estatic,Etime]
Figure BDA0003451554300000033
Figure BDA0003451554300000034
in step 6, C is the weight learned from attention,
Figure BDA0003451554300000035
is the prediction result of the cross feature c.
And 8: integrating the results of the step 4, the step 6 and the step 7 by adopting a linear layer to finally obtain a prediction result:
Figure BDA0003451554300000036
wherein the content of the first and second substances,
Figure BDA0003451554300000037
showing step 4, step 6 and step 7 respectively
Figure BDA0003451554300000038
And
Figure BDA0003451554300000039
and step 9: comparing the results of the step 4, the step 6, the step 7 and the step 8 with the target result, and improving the training effect and efficiency by the intervention auxiliary loss:
Figure BDA00034515543000000310
Figure BDA00034515543000000311
Figure BDA00034515543000000312
Figure BDA00034515543000000313
wherein, w1,w2,w3Loss of parts for which the model needs to be updated separately1,Loss2,Loss3The final Loss is Loss.
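A sketch of the combined objective, assuming MSE as the base criterion (the patent does not pin one down; the function name is ours):

```python
import torch
import torch.nn.functional as F

def multi_task_loss(y_static, y_time, y_cross, y_final, target, w1, w2, w3):
    """Main loss on the integrated prediction plus weighted auxiliary losses
    on the three task heads (MSE chosen here for illustration)."""
    loss1 = F.mse_loss(y_static, target)   # non-temporal head (step 4)
    loss2 = F.mse_loss(y_time, target)     # temporal head (step 6)
    loss3 = F.mse_loss(y_cross, target)    # cross head (step 7)
    return F.mse_loss(y_final, target) + w1 * loss1 + w2 * loss2 + w3 * loss3
```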
Step 10: and (5) updating the model hyper-parameter according to the loss in the step 9, continuously training, finally reaching the training convergence condition, and finishing the training and storing the model.
Step 11: and (3) for new data, obtaining model input and user-item matrix input into the loaded model stored in the step 10 by using the mathematical method in the step 1, and sequencing obtained results to obtain a recommendation result.
The invention adopts a framework similar to multi-task learning and learns the temporal and non-temporal features of user interactions separately, which avoids the noise problem common in time-series recommendation, while the intermediate cross layer effectively prevents the loss of interaction information. Moreover, the method is based on the factorization machine and makes full use of its advantages: it can integrate the context features in the data, improves the accuracy of the recommendation results, and alleviates the data-sparsity problem common in recommendation systems. Meanwhile, the training process makes deft use of a self-attention mechanism to capture the importance of the time-series data, further improving the time-series recommendation effect. Finally, the parallel structure of the invention can learn the temporal, non-temporal, and cross features in parallel, which makes it highly efficient for time-series recommendation.
Drawings
FIG. 1 is a block diagram of the method of the present invention.
FIG. 2 is a diagram of the model architecture of the present invention.
FIG. 3 is a schematic diagram of the auxiliary losses in the present invention.
FIG. 4 is a flow chart of the method of the present invention.
Detailed Description
The method of the present invention is further described below with reference to the accompanying drawings and the summary of the invention.
Since the recommendation involves different tasks whose specific data processing differs, the data input formats for the two cases (Rank and Regression) are first explained here:
format 1: for the Rank task, the data are scored, so that the data only need to be screened, and users with too few historical interactions are eliminated:
[s1,s2,...,sn,d1,d2,...,dm,target]
format 2: for the Regression task, data has no label column, and the positive example and the N negative examples are bound as the minimum unit of input by randomly and negatively sampling the N samples:
[[s1,s2,...,sn,d1,d2,...,dm,PositiveTarget],
[s1,s2,...,sn,d1,d2,...,dm,NegtiveTarget1],
[s1,s2,...,sn,d1,d2,...,dm,NegtiveTarget2],
[s1,s2,...,sn,d1,d2,...,dm,NegtiveTargetn]]
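A minimal sketch of how such an input unit could be assembled by random negative sampling (the function and variable names are ours, not the patent's):

```python
import random

def regression_unit(static_feats, dynamic_feats, positive, all_items, n_neg):
    """Bind one positive example with n_neg randomly sampled negatives
    into the minimum input unit described above."""
    pool = [item for item in all_items if item != positive]
    negatives = random.sample(pool, n_neg)
    rows = [static_feats + dynamic_feats + [positive]]
    rows += [static_feats + dynamic_feats + [neg] for neg in negatives]
    return rows

# Toy usage: 2 static features, 3 history items, positive item 7, 2 negatives.
print(regression_unit([0, 1], [3, 5, 6], positive=7, all_items=range(10), n_neg=2))
```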
After the input data have been generated in the specified format, the original data are split into non-temporal features and temporal features as in step 1;
the user-item matrix required in step 2 can be obtained while the input data are generated; then the items most similar to the item at the user's current time are selected from the whole time sequence according to cosine similarity;
next comes the model training part. The model architecture of the invention is shown in FIG. 2, which gives the details of the model framework. The architecture comprises a non-temporal feature learning unit, a temporal feature learning unit, and a cross learning unit. The non-temporal learning part applies the interaction method of the factorization machine to the embedded non-temporal input to obtain the result for the non-temporal features (the leftmost part of the figure); the temporal learning part learns attention weights over the embedded temporal features through the self-attention mechanism and then feeds the result into the factorization machine for second-order interaction learning to obtain the prediction result of the temporal part (the rightmost part of the figure); the cross learning unit concatenates the temporal and non-temporal features and feeds them into the self-attention mechanism and a second-order factorization machine to obtain the cross-learning prediction result (the middle part of the figure); the last layer integrates the three results into the final comprehensive prediction (see the skeleton sketch below).
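The skeleton below ties the three units and the linear integration layer together. It is a sketch under our own naming, reusing the FMEmbedding and SelfAttention classes sketched earlier; fm_pairwise applies the factorization machine's second-order interaction to a set of embedded vectors:

```python
import torch
import torch.nn as nn

def fm_pairwise(h: torch.Tensor) -> torch.Tensor:
    # Second-order FM interaction over a set of vectors h: (batch, n, k).
    return 0.5 * (h.sum(dim=1).pow(2) - h.pow(2).sum(dim=1)).sum(dim=-1)

class MultiTaskFM(nn.Module):
    """Three-branch skeleton: static FM branch (left), self-attention + FM
    time-series branch (right), and cross branch over the concatenated
    embeddings (middle), integrated by a final linear layer."""
    def __init__(self, num_feature_values: int, num_items: int, k: int, d_k: int):
        super().__init__()
        self.static_emb = FMEmbedding(num_feature_values, k)
        self.time_emb = nn.Embedding(num_items, k)
        self.attn_time = SelfAttention(k, d_k)
        self.attn_cross = SelfAttention(k, d_k)
        self.integrate = nn.Linear(3, 1)

    def forward(self, static_ids, item_ids):
        w, v = self.static_emb(static_ids)            # (b, n, 1), (b, n, k)
        y_static = w.sum(dim=(1, 2)) + fm_pairwise(v)
        e_time = self.time_emb(item_ids)              # (b, m, k)
        y_time = fm_pairwise(self.attn_time(e_time))
        e_cross = torch.cat([v, e_time], dim=1)       # concatenate the embeddings
        y_cross = fm_pairwise(self.attn_cross(e_cross))
        y = self.integrate(torch.stack([y_static, y_time, y_cross], dim=-1))
        return y_static, y_time, y_cross, y.squeeze(-1)
```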
Accordingly, in the training process, the non-temporal embedding of step 3 differs from the one-hot encoded input of a conventional factorization machine: the result after embedding is regarded as the weights, the first-order interaction uses the first-order embedding of dimension N, and the second-order interaction uses the second-order embedding of dimension N×k (k should be chosen so that as much second-order interaction information as possible can be mined while the model retains strong generalization ability).
With one-hot encoding, only the inputs with value 1 and their weights need to be computed, so step 4 merely looks up the weights at the corresponding positions of the embedding result. This further simplifies the computation, improves model efficiency, and yields the prediction result y1 of the first task.
In step 5, the time-series data are embedded; the resulting embedding, and its concatenation with the embedding from step 3, are fed in turn into the Self-Attention layer and factorization machine layer of their respective tasks, yielding the prediction results y2 and y3 of the second and third tasks.
The last layer of the model is a linear integration layer; to further improve the training result and speed, auxiliary losses are added to it, and the comprehensive result y of y1, y2, and y3 is obtained by regression.
Once the model reaches the training termination condition, it is saved. For new user data, the same data processing as in steps 1 and 2 is carried out, the saved model is loaded to obtain the item recommendation results, and the top-N items are selected for recommendation.
Reference documents:
[1] Rendle S. Factorization machines[C]. 2010 IEEE International Conference on Data Mining. IEEE, 2010: 995-1000.
[2] Zhang H, et al. SVM-KNN: Discriminative nearest neighbor classification for visual category recognition[C]. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). IEEE, 2006(2): 2126-2136.
[3] He X, Chua T S. Neural factorization machines for sparse predictive analytics[C]. Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2017: 355-364.
[4] Guo H, Tang R, Ye Y, et al. DeepFM: a factorization-machine based neural network for CTR prediction[J]. arXiv preprint arXiv:1703.04247, 2017.
[5] Lian J, Zhou X, Zhang F, et al. xDeepFM: Combining explicit and implicit feature interactions for recommender systems[C]. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018: 1754-1763.
[6] Benjamin M A, Rigby R A, Stasinopoulos D M. Generalized autoregressive moving average models[J]. Journal of the American Statistical Association, 2003, 98(461): 214-223.
[7] Zaremba W, Sutskever I, Vinyals O. Recurrent neural network regularization[J]. arXiv preprint arXiv:1409.2329, 2014.
[8] Dey R, Salem F M. Gate-variants of gated recurrent unit (GRU) neural networks[C]. 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2017.
[9] Wang S, Hu L, Wang Y, et al. Sequential recommender systems: challenges, progress and prospects[J]. arXiv preprint arXiv:2001.04830, 2019.

Claims (1)

1. A multi-task time-series recommendation method based on a factorization machine, characterized in that the first-order and second-order interactions of the original factorization machine model are combined with a time-series model; on the one hand the context information of the non-temporal features is learned, on the other hand the temporal information contained in the time sequence is learned, and the two parts of information are then combined for further learning to improve the recommendation effect, solving, from the multi-dimensional angles of breadth and depth, the problem that the original factorization machine performs poorly on time-series recommendation; the method comprises the following specific steps:
Step 1: arrange and split the original data into the user's non-temporal static context and historical time-series information, and generate a user-item matrix on this basis; the user input features are divided into two parts, the user's static features and the user's temporal features:
[s1,s2,...,sn,d1,d2,...,dm,target];
wherein s denotes a non-temporal feature, with n dimensions in total; d denotes a temporal feature, with m dimensions in total; and target denotes the target item;
Step 2: perform importance screening on the time sequence split in step 1 according to the cosine similarity computed from the user-item matrix of step 1, and retain the most relevant sequence items within the limited sequence input length:

$$\mathrm{sim}(I_1, I_2) = \frac{\sum_{u} r_{u,I_1}\, r_{u,I_2}}{\sqrt{\sum_{u} r_{u,I_1}^2}\; \sqrt{\sum_{u} r_{u,I_2}^2}}$$

where sim is the similarity function, $I$ denotes an item, $I_1$ and $I_2$ denote two items, $r$ denotes a score, and $r_{a,i}$ and $r_{b,i}$ are the scores of user $a$ and user $b$, respectively, on a certain item $i$;
Step 3: perform two-dimensional feature embedding on the non-temporal features and the screened temporal features:

$$E_{static} = \mathrm{Embedding}(D_{feature}, D_{embedding})$$
$$w_i = \mathrm{Embedding}_w(f_i; \theta)$$
$$v_i = \mathrm{Embedding}_v(f_i; \theta)$$

where $E_{static}$ is the result of embedding the non-temporal features, $D_{feature}$ denotes the feature dimension that needs embedding, $D_{embedding}$ denotes the dimension of the embedding result to be obtained, $f$ denotes a feature, $\theta$ is a hyperparameter of the embedding process, $w_i$ are the first-order parameters the model needs to learn, and $v_i$ are the second-order parameters the model needs to learn;
Step 4: use the factorization machine to perform first- and second-order feature interaction on the non-temporal embedding results:

$$\hat{y}_{static} = w_0 + \sum_{i=1}^{N} w_i f_i + \sum_{i=1}^{N} \sum_{j=i+1}^{N} \langle v_i, v_j \rangle\, f_i f_j$$

where $f$, as above, denotes the input features, $w$ are the weights of the linear part, $w_0$ is the global weight, $N$ denotes the number of input features, $k$ denotes the dimension of the latent vector $V$ used to obtain the second-order interactions in the factorization machine, and $\hat{y}_{static}$ is the prediction result for the non-temporal features $f_i$;
Step 5: perform one-dimensional embedding on the temporal features:

$$E_{time} = \mathrm{Embedding}(D_{feature}, D_{embedding})$$
$$u_i = \mathrm{Embedding}(t_i; \theta)$$

where $E_{time}$ denotes the result of embedding the temporal features; similar to step 3, $D_{feature}$ and $D_{embedding}$ denote the feature dimension that needs embedding and the dimension of the final embedding result, respectively; $\theta$ is the embedding hyperparameter, $t$ denotes a temporal feature, and $u$ is a parameter the model needs to learn;
Step 6: use a self-attention mechanism to extract the weights of the embedded items in the time sequence, which are then fed into the factorization machine for interaction:

$$Q = E_{time} W^Q, \quad K = E_{time} W^K, \quad V = E_{time} W^V$$
$$T = \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$$

where $T$ is the attention weight; $Q$, $K$, $V$ are the parameters of the self-attention mechanism; $W^Q$, $W^K$, $W^V$ are the linear transformation parameters of $Q$, $K$, $V$; $d_k$ denotes the dimension of $K$; and finally $\hat{y}_{time}$, obtained by feeding $T$ into the factorization machine, is the prediction result for the temporal features $t_i$;
Step 7: concatenate the embeddings of the non-temporal and temporal features, and use the self-attention mechanism and the factorization machine to obtain the combined features of the two:

$$E_{cross} = [E_{static}, E_{time}]$$
$$C = \mathrm{Attention}(E_{cross} W^Q, E_{cross} W^K, E_{cross} W^V)$$
$$\hat{y}_{cross} = \mathrm{FM}(C)$$

where, as in step 6, $C$ is the weight learned by self-attention, and $\hat{y}_{cross}$ is the prediction result for the cross features $c$;
Step 8: use a linear layer to integrate the results of step 4, step 6, and step 7 to obtain the final prediction:

$$\hat{y} = \mathrm{Linear}\big([\hat{y}_{static}, \hat{y}_{time}, \hat{y}_{cross}]\big)$$

where $\hat{y}_{static}$, $\hat{y}_{time}$, and $\hat{y}_{cross}$ denote the results of step 4, step 6, and step 7, respectively;
Step 9: compare the results of step 4, step 6, step 7, and step 8 with the target result, and introduce auxiliary losses to improve the training effect and efficiency:

$$Loss_1 = \ell(\hat{y}_{static}, y), \quad Loss_2 = \ell(\hat{y}_{time}, y), \quad Loss_3 = \ell(\hat{y}_{cross}, y)$$
$$Loss = \ell(\hat{y}, y) + w_1 Loss_1 + w_2 Loss_2 + w_3 Loss_3$$

where $\ell$ is the base loss criterion, $w_1$, $w_2$, $w_3$ are the weights of the part losses $Loss_1$, $Loss_2$, $Loss_3$ that the model needs to update separately, and the final loss is $Loss$;
Step 10: update the model parameters according to the loss in step 9 and continue training until the training convergence condition is reached; then finish training and save the model;
Step 11: for new data, use the data processing of step 1 to obtain the model input and the user-item matrix, feed them into the loaded model saved in step 10, and sort the resulting scores to obtain the recommendation result.
CN202111667759.1A 2021-12-31 2021-12-31 Multi-task time sequence recommendation method based on factorization machine Active CN114282687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111667759.1A CN114282687B (en) 2021-12-31 2021-12-31 Multi-task time sequence recommendation method based on factorization machine

Publications (2)

Publication Number Publication Date
CN114282687A 2022-04-05
CN114282687B 2023-03-07

Family

ID=80879523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111667759.1A Active CN114282687B (en) 2021-12-31 2021-12-31 Multi-task time sequence recommendation method based on factorization machine

Country Status (1)

Country Link
CN (1) CN114282687B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177579A (en) * 2019-12-17 2020-05-19 浙江大学 Integrated diversity enhanced ultra-deep factorization machine model and construction method and application thereof
CN111859142A (en) * 2020-07-28 2020-10-30 山东大学 Cross-equipment migration recommendation system based on interconnection and intercommunication home platform and working method thereof
CN112115371A (en) * 2020-09-30 2020-12-22 山东建筑大学 Neural attention mechanism mobile phone application recommendation model based on factorization machine
CN112732936A (en) * 2021-01-11 2021-04-30 电子科技大学 Radio and television program recommendation method based on knowledge graph and user microscopic behaviors
CN112883288A (en) * 2021-03-09 2021-06-01 东南大学 Software reviewer hybrid recommendation method based on deep learning and multi-Agent optimization
CN113139850A (en) * 2021-04-26 2021-07-20 西安电子科技大学 Commodity recommendation model for relieving data sparsity and commodity cold start
CN113420421A (en) * 2021-05-28 2021-09-21 西安邮电大学 QoS prediction method based on time sequence regularization tensor decomposition in moving edge calculation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DUAN LI: "A hybrid intelligent service recommendation by latent semantics and explicit ratings", International Journal of Intelligent Systems *
RAVAKHAH MAHDI: "Balanced hierarchical max margin matrix factorization for recommendation system" *
刘亦欣: "A temporal-context recommendation model fusing attention and deep factorization machines" (融合注意力与深度因子分解机的时间上下文推荐模型), Computer and Modernization (《计算机与现代化》) *
燕彩蓉: "Research on the width and depth extensions of factorization machine models" (因子分解机模型的宽度和深度扩展研究), Journal of Software (《软件学报》) *

Also Published As

Publication number Publication date
CN114282687B (en) 2023-03-07

Similar Documents

Publication Publication Date Title
Middlehurst et al. HIVE-COTE 2.0: a new meta ensemble for time series classification
Anderson et al. Physical representation-based predicate optimization for a visual analytics database
US11605019B2 (en) Visually guided machine-learning language model
Rakotomamonjy Sparse support vector infinite push
Shamsolmoali et al. High-dimensional multimedia classification using deep CNN and extended residual units
CN114186084B (en) Online multi-mode Hash retrieval method, system, storage medium and equipment
Xing et al. Few-shot single-view 3d reconstruction with memory prior contrastive network
CN110727872A (en) Method and device for mining ambiguous selection behavior based on implicit feedback
CN111080551B (en) Multi-label image complement method based on depth convolution feature and semantic neighbor
CN111079011A (en) Deep learning-based information recommendation method
Zhao et al. Fint: field-aware interaction neural network for ctr prediction
Khoshaba et al. Machine learning algorithms in Bigdata analysis and its applications: A Review
Zhao et al. Binary multi-view sparse subspace clustering
CN114282687B (en) Multi-task time sequence recommendation method based on factorization machine
CN116071119B (en) Model-agnostic inverse fact interpretation method based on multi-behavior recommendation model
He et al. Multilabel classification by exploiting data‐driven pair‐wise label dependence
Dhoot et al. Efficient Dimensionality Reduction for Big Data Using Clustering Technique
Xu et al. Social image refinement and annotation via weakly-supervised variational auto-encoder
CN109614581A (en) The Non-negative Matrix Factorization clustering method locally learnt based on antithesis
Hua et al. Cross-modal correlation learning with deep convolutional architecture
Di Deep interest network for taobao advertising data click-through rate prediction
Valem et al. Rank flow embedding for unsupervised and semi-supervised manifold learning
Lin et al. MOD: A deep mixture model with online knowledge distillation for large scale video temporal concept localization
Riana Deep Neural Network for Click-Through Rate Prediction
Pal et al. Random partition based adaptive distributed kernelized SVM for big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant