CN115391665A - Video recommendation method and device, electronic equipment and storage medium - Google Patents

Info

Publication number
CN115391665A
CN115391665A
Authority
CN
China
Prior art keywords
video
recommendation
candidate
sequence
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211168200.9A
Other languages
Chinese (zh)
Inventor
姚倩媛
施雯
段勇
郑聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202211168200.9A priority Critical patent/CN115391665A/en
Publication of CN115391665A publication Critical patent/CN115391665A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G06F16/7847 Retrieval characterised by using metadata automatically derived from the content, using low-level visual features of the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a video recommendation method and apparatus, an electronic device, and a storage medium, wherein the method comprises the following steps: processing each video in a candidate video pool by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain, for each video, a user feedback parameter and a content similarity; performing a video recommendation sequence search in the candidate video pool based on the user feedback parameters, the content similarities, and a preset graph search algorithm, to obtain a preset number of candidate video recommendation sequences and a recommendation label for each candidate sequence; and detecting a target video recommendation sequence among the preset number of candidate sequences according to the recommendation labels, and recommending videos in the order given by the target video recommendation sequence. The method and apparatus solve the problems that, owing to the multi-objective and multi-modal characteristics of recommended videos, video recommendation cannot meet the diversity requirement and the recommendation effect is poor.

Description

Video recommendation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of intelligent analysis technologies, and in particular, to a video recommendation method and apparatus, an electronic device, and a storage medium.
Background
In the related art, video recommendation scenarios often have multi-objective and multi-modal characteristics. Specifically, "multi-objective" means that the business objectives are rich, including indexes such as click-through rate, playing duration, and revenue conversion; "multi-modal" means that the recommended content is of various types, including the following modalities: long video, short video, live broadcast, advertisement, and book.
In the related art, the mainstream video recommendation scenario is the video waterfall stream. For the re-ranking stage of the waterfall stream, a diversity algorithm is applied to the highly scored, finely ranked videos, for example the Determinantal Point Process (DPP) algorithm or the Maximal Marginal Relevance (MMR) algorithm, which select a suitable recommendation list according to the ranking scores and the similarity between videos. However, a video recommendation list obtained in this way has the following problems. First, a multi-objective recommendation scenario cannot be accommodated: in video recommendation, the order of the videos may affect clicks, viewing duration, and revenue. For example, if a video that only a high-level (VIP) user can watch is ranked in the first few positions and recommended to a non-VIP user, the non-VIP user cannot watch the video after clicking it and may stop browsing the subsequent videos, so the user experience is poor. Second, because the recommended videos are multi-modal, the scores and similarities of long videos, short videos, live broadcasts, and advertisements are difficult to measure in the same dimension, so the diversity requirements of users are not met and the recommendation effect is poor.
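As an illustration of the diversity re-ranking mentioned above, MMR can be sketched as a greedy trade-off between an item's ranking score and its maximum similarity to the items already selected. This is a minimal sketch with made-up scores; the patent itself gives neither this formula nor code:

```python
# Illustrative MMR re-ranking sketch; scores and similarity values are invented.
def mmr_rerank(scores, sim, k, lam=0.7):
    """Greedily select k items maximizing lam*score - (1-lam)*max similarity
    to the items already selected (Maximal Marginal Relevance)."""
    candidates = set(range(len(scores)))
    selected = []
    while candidates and len(selected) < k:
        def mmr_value(i):
            # Redundancy: highest similarity to any already-selected item.
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * scores[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_value)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a balanced trade-off (λ = 0.5), two near-duplicate high-scoring items are split up: after the top item is chosen, the second pick goes to a dissimilar lower-scoring item, which is exactly the diversity effect described above.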
For the problems in the related art that, owing to the multi-objective and multi-modal characteristics of recommended videos, video recommendation cannot meet the diversity requirement and the recommendation effect is poor, no effective solution has yet been proposed.
Disclosure of Invention
The application provides a video recommendation method and apparatus, an electronic device, and a storage medium, to at least solve the problems in the related art that, owing to the multi-objective and multi-modal characteristics of recommended videos, video recommendation cannot meet the diversity requirement and the recommendation effect is poor.
In a first aspect, the present application provides a video recommendation method, including: processing each video in a candidate video pool by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain a user feedback parameter corresponding to each video and content similarity corresponding to each video, wherein the user feedback parameter is used for representing multiple target satisfaction degrees corresponding to the videos; based on the user feedback parameters, the content similarity and a preset graph search algorithm, performing video recommendation sequence search in the candidate video pool to obtain a preset number of candidate video recommendation sequences and recommendation labels corresponding to the candidate video recommendation sequences, wherein the recommendation labels are used for representing the recommendation scores of the corresponding candidate video recommendation sequences; and detecting a target video recommendation sequence in the preset number of candidate video recommendation sequences according to the recommendation label, and recommending videos according to the arrangement sequence of the videos corresponding to the target video recommendation sequence.
In a second aspect, the present application provides a video recommendation apparatus, including:
the processing module is used for processing each video in the candidate video pool by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain a user feedback parameter corresponding to each video and content similarity corresponding to each video, wherein the user feedback parameter is used for representing various target satisfaction degrees corresponding to the videos;
the retrieval module is used for searching video recommendation sequences in the candidate video pool based on the user feedback parameters, the content similarity and a preset graph search algorithm to obtain a preset number of candidate video recommendation sequences and recommendation labels corresponding to the candidate video recommendation sequences, wherein the recommendation labels are used for representing the recommendation scores of the corresponding candidate video recommendation sequences;
and the determining module is used for detecting a target video recommendation sequence in a preset number of candidate video recommendation sequences according to the recommendation labels and recommending videos according to the arrangement sequence of videos corresponding to the target video recommendation sequence.
In a third aspect, an electronic device is provided, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
the processor is configured to implement the steps of the video recommendation method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the video recommendation method according to any one of the embodiments of the first aspect.
Compared with the related art, the embodiments provide a video recommendation method and apparatus, an electronic device, and a storage medium. Each video in a candidate video pool is processed by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain, for each video, a user feedback parameter representing the multiple target satisfaction degrees of the video, and a content similarity. Based on the user feedback parameters, the content similarities, and a preset graph search algorithm, a video recommendation sequence search is performed in the candidate video pool to obtain a preset number of candidate video recommendation sequences and their recommendation labels, where a recommendation label represents the recommendation score of the corresponding sequence. A target video recommendation sequence is then detected among the preset number of candidate sequences according to the recommendation labels, and videos are recommended in the order given by the target sequence. This solves the problems that, owing to the multi-objective and multi-modal characteristics of recommended videos, the diversity requirement cannot be met and the recommendation effect is poor, and achieves the advantages that the recommended videos meet the diversity requirement of users and the recommendation effect is improved.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below; other drawings can obviously be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flowchart of a video recommendation method according to an embodiment of the present application;
fig. 2 is a block diagram of a video recommendation apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Before describing the embodiments of the present application, the related art related to the embodiments of the present application is explained as follows:
A Click-Through Rate (CTR) prediction model is a binary classification model that predicts the probability of a user clicking a video. Early CTR prediction models adopted the Logistic Regression (LR) model; however, the structure of the LR model is too simple, and its assumption of independence between features does not hold in an actual recommendation scenario, where features are correlated. The Factorization Machine (FM) model therefore extends the LR model by adding the calculation of direct pairwise feature interactions, but the FM model only captures interactions between two features. The DeepFM model was consequently proposed: it performs feature crossing both in a low-order component (FM) and in a high-order component (Deep), so that the model's computation and its results better fit the actual recommendation scenario, in which there are many features and correlations may exist between two or more of them.
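The pairwise interaction term that FM adds on top of LR is Σ_{i<j} ⟨v_i, v_j⟩ x_i x_j over latent factor vectors v_i, and it can be computed in linear time with the well-known sum-of-squares identity. A minimal numeric sketch follows; the latent-factor matrix V here is illustrative, not taken from the patent:

```python
import numpy as np

def fm_second_order(x, V):
    """FM pairwise interaction term sum_{i<j} <v_i, v_j> x_i x_j, computed in
    O(n*k) via the sum-of-squares identity instead of the O(n^2 * k) double loop.
    x: feature vector of length n; V: latent factor matrix of shape (n, k)."""
    xv = x @ V                      # shape (k,): sum_i x_i * v_i
    sum_sq = xv ** 2                # (sum_i x_i v_i)^2, per factor dimension
    sq_sum = (x ** 2) @ (V ** 2)    # sum_i (x_i v_i)^2, per factor dimension
    return 0.5 * float(np.sum(sum_sq - sq_sum))
```

This is the same quantity the naive double loop over feature pairs would produce, which is why FM scales to the large sparse feature spaces typical of CTR models.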
Graph Neural Networks (GNNs) are a class of deep-learning-based methods for processing graph-domain information. Owing to their good performance and interpretability, GNNs have recently become a widely used method of graph analysis. The graph neural networks involved in this application include the graph convolutional network GCN, the graph attention network GAT, and the graph sampling-and-aggregation model GraphSAGE, described below.
The Graph Convolutional Network (GCN) is a natural extension of convolutional networks (ConvNets) to graph structures; the GCN uses local receptive fields, shared weights, and down-sampling in the spatial domain, is stable and invariant with respect to displacement, scaling, and distortion, and can extract the spatial features of an image well.
The Graph Attention Network (GAT) performs a weighted summation of neighboring node features through an attention mechanism, where the weights depend entirely on the node features and are independent of the graph structure. The core difference between GAT and GCN lies in how the feature representations of neighbor nodes at distance 1 are collected and accumulated: GAT replaces the fixed normalization operation of GCN with an attention mechanism; essentially, GAT simply swaps GCN's normalization function for a neighbor-feature aggregation function that uses attention weights. GAT has the following advantages: each node in the graph can assign different weights to its neighboring nodes according to their features; and, after computing attention, GAT only involves neighboring nodes, i.e. nodes sharing an edge, and does not need information about the whole graph.
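The neighbor aggregation just described — attention weights computed from node features alone, over 1-hop neighbors only — can be sketched for a single node as follows. This is a toy single-head version: the learned projection matrix and the LeakyReLU on the attention logits of the full GAT layer are omitted, and the attention vector `a` is an assumption of the sketch:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def gat_aggregate(h_node, h_neighbors, a):
    """GAT-style aggregation for one node: attention logits are computed only
    from the node and neighbor features (via the vector a), then softmax-
    normalized over the 1-hop neighbors, and used to weight the neighbor
    features. No information beyond the 1-hop neighborhood is needed."""
    logits = np.array([a @ np.concatenate([h_node, h_n]) for h_n in h_neighbors])
    alpha = softmax(logits)                           # weights sum to 1
    return (alpha[:, None] * np.asarray(h_neighbors)).sum(axis=0)
```

With a zero attention vector the weights degenerate to a uniform average of the neighbors, which makes the contrast with GCN's fixed normalization easy to see: GAT's weights move away from that average as the features dictate.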
The graph sampling-and-aggregation model (Graph SAmple and aggreGatE, GraphSAGE for short) consists of sampling and aggregation: first, neighbors are sampled using the connection information between nodes, and then the information of the neighboring nodes is fused step by step through multi-layer aggregation functions; the fused information is used to predict node labels. GraphSAGE is inductive: the framework can handle nodes newly added to a graph, or use the graph knowledge learned before to infer labels on a new graph. By using node feature information and structural information at the same time, it learns a mapping for producing graph embeddings, and can efficiently use the attribute information of a node to generate an embedding for a new node. GraphSAGE can simultaneously learn the topological structure of each node's neighborhood and the distribution of node features within that neighborhood; because it stores the mapping that generates the embedding (a learned function with strong generalization ability), it is more scalable and performs better on node classification and link prediction problems.
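The sample-and-aggregate step can be sketched for one node as follows. This uses a mean aggregator and omits the trained weight matrices and nonlinearity of the real layer; the graph and feature values are illustrative:

```python
import random
import numpy as np

def sage_layer(node, features, adj, num_samples, rng=random.Random(0)):
    """One GraphSAGE step for one node: sample a fixed-size set of neighbors
    from the adjacency lists, mean-aggregate their features, and concatenate
    the aggregate with the node's own feature vector (trained weights omitted,
    so this sketch only shows the data flow, not a learned embedding)."""
    neighbors = adj.get(node, [])
    sampled = rng.sample(neighbors, min(num_samples, len(neighbors)))
    if sampled:
        agg = np.mean([features[n] for n in sampled], axis=0)
    else:
        agg = np.zeros_like(features[node])  # isolated node: nothing to aggregate
    return np.concatenate([features[node], agg])
```

Because the layer is a function of (node feature, sampled neighborhood) rather than of a fixed graph, the same learned function can be applied to nodes that were never seen during training — the inductive property the paragraph above describes.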
Beam search optimizes the search space on the basis of breadth-first search (similar to pruning) so as to reduce memory consumption. It is a heuristic search belonging to the best-first family of algorithms in the optimization field; a best-first algorithm is a graph search algorithm that sorts all candidate solutions according to a heuristic rule measuring how close each solution is to the target solution. Beam search differs from the best-first algorithm in that it keeps only some of the solutions as candidates, whereas the best-first algorithm keeps all of them. Specifically:
Beam search builds a search tree with breadth-first search: at each layer it generates a set of partial solutions, ranks them, and keeps the best K as candidates, where K is called the beam width; only these K solutions are expanded further, so the larger the beam width, the fewer solutions are pruned. In a typical application, beam search is used at inference time in seq2seq models to find the best decoding result. Suppose the beam width is 2, the vocabulary contains 3 tokens (a, b, c), and the current sequences after the first step are a and c. When generating the 2nd token, each current sequence is combined with every token in the vocabulary, giving the 6 new sequences aa, ab, ac, ca, cb, and cc; the 2 sequences with the highest scores, say aa and cb, are then kept as the current sequences, and the process is repeated until an end symbol is produced; finally, the 2 sequences with the highest scores are output.
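The decoding example above (beam width 2, vocabulary {a, b, c}) can be sketched as follows. For simplicity the per-token scores here are context-independent log-probability-like numbers — an assumption of this toy, not of real seq2seq decoding, where each step's scores depend on the prefix:

```python
def beam_search(start_scores, step_scores, width, length):
    """Toy beam search over a small token vocabulary. Scores play the role of
    log-probabilities and a sequence's score is their sum; after each step only
    the top-`width` partial sequences survive (the rest are pruned)."""
    # Initialize the beam with the best `width` single-token sequences.
    beams = sorted(start_scores.items(), key=lambda kv: -kv[1])[:width]
    for _ in range(length - 1):
        # Expand every kept sequence with every vocabulary token...
        expanded = [(seq + tok, score + step_scores[tok])
                    for seq, score in beams
                    for tok in step_scores]
        # ...then keep only the `width` highest-scoring sequences.
        beams = sorted(expanded, key=lambda kv: -kv[1])[:width]
    return beams
```

With width 2 and 3 tokens, each step expands 2 sequences into 6 and prunes back to 2, exactly as in the aa/ab/ac/ca/cb/cc example above.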
The techniques described in this application may be used for retrieval, re-ranking, and pushing in video, advertisement, and long- and short-video recommendation.
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a flowchart illustrating a video recommendation method according to an embodiment of the present application. As shown in fig. 1, an embodiment of the present application provides a video recommendation method, which includes the following steps:
step S101, each video in the candidate video pool is processed by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm, and user feedback parameters corresponding to each video and content similarity corresponding to each video are obtained, wherein the user feedback parameters are used for representing various target satisfaction degrees corresponding to the videos.
In this embodiment, before the steps of the video recommendation method are executed, a plurality of estimation models need to be constructed according to the different business objectives, including at least the following: a CTR estimation model, a duration estimation model, a revenue estimation model, and a slide-down probability estimation model; the target satisfaction degree, i.e. the target satisfaction rate, corresponding to each video is predicted through these estimation models.
In this embodiment, the CTR estimation model is used to predict the probability that a user clicks the corresponding video, that is, the click-through rate. When the CTR estimation model is constructed, a label indicates whether the corresponding video was clicked by the user, for example: label 0 means not clicked, and label 1 means clicked. The features used to construct the CTR estimation model include user features and video features, the depth of the neural network is set to three layers, and the loss function is the cross-entropy loss. The duration estimation model is used to predict the probability that the playing duration of a video exceeds a set threshold; its network structure, samples, and features are similar to those of the CTR estimation model, differing only in the configured labels, for example: long video and live content played for more than 2 minutes is defined as effectively played and labeled 1; short video and advertisement content played for more than 15 seconds is likewise labeled 1; all remaining cases in this duration classification model are labeled 0. The revenue estimation model is used to predict the probability that a video brings revenue conversion, where the revenue conversion index is set as a revenue index attributable to the corresponding user on the corresponding video; its network structure, samples, and features are again similar to those of the CTR estimation model, with label 1 if there is revenue and label 0 otherwise. The slide-down estimation model is used to estimate the probability that the user continues to slide down while browsing after the corresponding video, and is related to the browsing depth; its label is set as follows: if the video is the last one the user browsed, the label is 0; if a browsable video still follows it, the label is 1. The slide-down probability predicted by the model is also related to the exposure rate of a video, for example: even if the click-through rate, effective playing rate, and revenue conversion probability of a video are all high, if the slide-down probability of the preceding video is low, the exposure rate of the video is low. That is, the slide-down estimation model also characterizes the exposure rate of the corresponding video; in the embodiments of the application, the exposure rate represents the probability that the corresponding video is browsed or played, and may also be referred to as the probability of video exposure.
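The label rules for the duration and slide-down models can be sketched directly. The function and field names are illustrative, not from the patent; the thresholds of 2 minutes and 15 seconds are taken from the text:

```python
def play_label(content_type, play_seconds):
    """Effective-play label per the rules above: long video / live content
    counts as effectively played after more than 2 minutes, short video /
    advertisement content after more than 15 seconds; everything else is 0."""
    threshold = 120 if content_type in ("long_video", "live") else 15
    return 1 if play_seconds > threshold else 0

def slide_label(position, last_browsed_position):
    """Slide-down label: 0 if this was the last video the user browsed,
    1 if a browsable video still followed it."""
    return 0 if position >= last_browsed_position else 1
```

Labels built this way feed the binary cross-entropy training of the corresponding estimation models described above.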
In this embodiment, after the plurality of target estimation models are constructed, a heterogeneous graph neural network is also constructed. The heterogeneous graph convolution network algorithm adopted is one of the following: the graph convolutional network GCN, the graph attention network GAT, or the graph sampling-and-aggregation model GraphSAGE. One of these networks is trained, and, combined with the existing knowledge-graph features, the nodes of the heterogeneous graph neural network are set to the multi-modal contents, for example long and short videos, advertisements, and live broadcasts, thereby forming a heterogeneous graph network used to determine the content similarity of the corresponding videos.
In this embodiment, after the plurality of estimation models and the heterogeneous graph network have been constructed, they are applied in turn to each video in the candidate video pool to obtain the user feedback parameters and the video content similarity, providing the data basis for calculating, in the subsequent steps, the recommendation score based on the target satisfaction degrees corresponding to the user feedback parameters and on the video content similarity.
Step S102, based on user feedback parameters, content similarity and a preset graph search algorithm, video recommendation sequence search is carried out in a candidate video pool, and a preset number of candidate video recommendation sequences and recommendation labels corresponding to the candidate video recommendation sequences are obtained, wherein the recommendation labels are used for representing recommendation scores of the corresponding candidate video recommendation sequences.
In the present embodiment, the graph search algorithm is the beam-search algorithm. A recommendation score for each video is calculated from the obtained user feedback parameters; a set number of videos (the set number corresponds to the beam width of the beam search algorithm, for example k) are then selected based on the recommendation scores as the candidate video set for the current time step, and at each subsequent time step the same number of videos are again selected according to their recommendation scores as the candidate video set for that step. At the current time step, the exposure rate used to calculate the recommendation score of a current candidate video is the product of the exposure rate used for the corresponding candidate video at the previous time step and the exposure rate of the current candidate video itself. After a candidate video set is selected, it is combined with the previously selected candidate video sets until a plurality of candidate video recommendation sequences are formed; the recommendation score of each candidate video recommendation sequence, that is, its recommendation label, is then determined from the recommendation scores of all videos in the sequence and the mean of their similarities.
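The per-step computation described above — candidate expansion with beam width k, where the exposure carried into a step is the product of the slide-down probabilities of the videos placed earlier — can be sketched as follows. The score combination (simply summing the per-objective estimates) and all field names are assumptions of this sketch, not the patent's exact formula:

```python
def sequence_beam_search(videos, k, length):
    """Beam search over a candidate video pool. Each video dict carries
    per-objective estimates; a video's contribution at step t is its summed
    objective estimates scaled by the cumulative exposure, i.e. the product of
    the slide-down probabilities of the videos placed before it. Only the top-k
    partial sequences survive each step."""
    beams = [([], 0.0, 1.0)]  # (sequence, score, cumulative exposure)
    for _ in range(length):
        expanded = []
        for seq, score, exposure in beams:
            for vid, v in videos.items():
                if vid in seq:
                    continue  # a video appears at most once per sequence
                gain = exposure * (v["ctr"] + v["play"] + v["revenue"])
                expanded.append((seq + [vid], score + gain,
                                 exposure * v["slide"]))
        beams = sorted(expanded, key=lambda b: -b[1])[:k]
    return beams
```

Note the penalty this structure encodes: a video with a low slide-down probability shrinks the exposure, and hence the attainable score, of everything placed after it, which is why sequence order matters in the multi-objective setting.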
In this embodiment, when videos are selected at each time step, the beam-search algorithm keeps only the K videos with the highest recommendation scores, which reduces the amount of retrieval.
Step S103, detecting a target video recommendation sequence in a preset number of candidate video recommendation sequences according to the recommendation label, and recommending videos according to the arrangement sequence of videos corresponding to the target video recommendation sequence.
In this embodiment, after the recommendation label of each candidate video recommendation sequence is determined, Top-N screening is performed according to the labels, and the candidate video recommendation sequence whose label corresponds to the highest recommendation score is selected as the final target video recommendation sequence.
Through steps S101 to S103, each video in the candidate video pool is processed with a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain, for each video, a user feedback parameter and a content similarity, the user feedback parameter representing the multiple target satisfaction degrees of the video. Based on the user feedback parameters, the content similarity and a preset graph search algorithm, a video recommendation sequence search is performed in the candidate video pool to obtain a preset number of candidate video recommendation sequences and a recommendation label for each sequence, the recommendation label representing the recommendation score of the corresponding candidate video recommendation sequence. According to the recommendation labels, the target video recommendation sequence is detected among the preset number of candidate video recommendation sequences, and video recommendation is performed according to the arrangement order of the videos in the target video recommendation sequence. This solves the problems that, owing to the multi-target and multi-modal characteristics of recommended videos, video recommendation cannot satisfy the diversity requirement and the recommendation effect is poor, and achieves the advantages that the recommended videos satisfy the user's diversity requirement and the recommendation effect is improved.
It should be noted that, in the embodiment of the present application, video recommendation may be completed in the following steps: first, an estimation model is constructed, and the multi-target satisfaction degree and exposure probability of each video are measured with a multi-target estimation model; second, a heterogeneous graph network is constructed, mainly fusing a knowledge graph with the video representation to obtain a vector for each video and calculate similarities; finally, sequence retrieval is performed, i.e. a beam-search method is used to retrieve an optimal recommendation list.
In some embodiments, detecting the target video recommendation sequence among the preset number of candidate video recommendation sequences according to the recommendation label in step S103 may be implemented as follows: the candidate video recommendation sequence with the largest recommendation score is selected by ordering the recommendation scores from high to low, and the target video recommendation sequence is determined to include that candidate video recommendation sequence.
In this embodiment, after the recommendation label of each candidate video recommendation sequence has been determined, top-N screening is performed according to the labels, and the candidate video recommendation sequence whose label carries the highest recommendation score is selected as the final target video recommendation sequence.
In order to determine the user feedback parameter of each video in the candidate video pool, in some embodiments, before each video in the candidate video pool is processed with the preset target estimation model and the preset heterogeneous graph convolution network algorithm, the following steps are further performed:
step 21, determining multiple service targets corresponding to the video, and constructing a task label for each service target, wherein the service targets include at least one of the following: video click-through rate, video playing duration, video revenue conversion rate and video exposure rate, and the task label is used to determine whether the corresponding service target of the video meets the standard;
step 22, taking the task labels as the corresponding supervision labels, training and generating multiple target estimation models based on a deep and cross neural network, wherein the target estimation models include at least one of the following: a click-through-rate estimation model, a duration estimation model, a revenue estimation model and an exposure-rate estimation model.
Through the above steps, multiple service targets of the video are determined and a task label is constructed for each of them, the service targets including at least one of video click-through rate, video playing duration, video revenue conversion rate and video exposure rate, with each task label used to determine whether the corresponding service target of the video meets the standard; the task labels are then used as supervision labels to train multiple target estimation models based on a deep and cross neural network, including a click-through-rate estimation model, a duration estimation model, a revenue estimation model and an exposure-rate estimation model. A plurality of estimation models is thereby built, so that the target satisfaction degree of each video can be predicted through these models.
In some embodiments, processing each video in the candidate video pool with the preset target estimation model in step S101 to obtain the user feedback parameter of each video may be implemented as follows: the click-through-rate estimation model, the duration estimation model, the revenue estimation model and the exposure-rate estimation model are respectively used to estimate the corresponding service target of each video, and the user feedback parameters are determined to include the following estimated target satisfaction degrees: click-through rate, effective playing probability, revenue conversion probability and exposure rate.
In this embodiment, each video is subjected to estimation processing through a multi-target estimation model, so that the multi-target satisfaction and the exposure probability of each video are measured.
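The collection of the per-video user feedback parameter from the four estimation models could be sketched as below; the function name, dictionary keys and the stand-in constant predictors are assumptions introduced for illustration, since the patent does not specify the model interface.

```python
def predict_user_feedback(video, models):
    """Build the user feedback parameter of one video by running each
    pre-trained estimation model on it (hypothetical API)."""
    return {
        "ctr": models["ctr"](video),        # click-through rate
        "le": models["duration"](video),    # effective playing probability
        "rcr": models["revenue"](video),    # revenue conversion probability
        "er": models["exposure"](video),    # exposure rate
    }

# Stand-in constant predictors; real models would be the trained
# deep-and-cross networks described above.
models = {name: (lambda v, x=x: x) for name, x in
          [("ctr", 0.12), ("duration", 0.55), ("revenue", 0.03), ("exposure", 0.8)]}
feedback = predict_user_feedback({"id": "v1"}, models)
```

Each video in the candidate pool would be passed through the same four predictors, yielding one feedback dictionary per video for the subsequent sequence search.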
In some embodiments, performing the video recommendation sequence search in the candidate video pool in step S102, based on the user feedback parameters, the content similarity and the preset graph search algorithm, to obtain the preset number of candidate video recommendation sequences and the recommendation label of each candidate video recommendation sequence, may be implemented with the following steps:
step 31, determining the application exposure rate of each video at the current time step, and determining the current recommendation score of the current candidate video based on the application exposure rate and the other target satisfaction degrees in the user feedback parameters, wherein the application exposure rate is determined from the exposure rate in the user feedback parameter of the current candidate video and the application exposure rate of the video selected at the previous time step.
In this embodiment, the application exposure rate is the exposure rate that actually participates in calculating the corresponding current recommendation score. It depends on the exposure rate in the user feedback parameter of the video itself and on the exposure rate that the preceding video in the same candidate video recommendation sequence used, at the previous time step, to calculate its own recommendation score. In a specific application, the exposure rate used to calculate the current recommendation score equals the product of these two values. For example: if the exposure rate used by a video already selected into the corresponding candidate output sequence is α, and the exposure rate in the user feedback parameter of the next video is γ, then the next video completes the calculation of its current recommendation score with an application exposure rate of α×γ. In this embodiment, a time step is one search round of the sequence search performed with the beam-search algorithm, and the number of time steps is determined by the length of the video recommendation sequence to be produced; for example, if an ordered list of M videos must be recommended to the user from the candidate video set N, the number of time steps is M.
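The α×γ recursion just described is a one-line product; the sketch below uses illustrative variable names not taken from the patent.

```python
def application_exposure(prev_application_er, video_er):
    """Application exposure rate at the current time step: the product of
    the previous step's application exposure rate (alpha) and the current
    video's own predicted exposure rate (gamma)."""
    return prev_application_er * video_er

alpha, gamma = 0.8, 0.5
er = application_exposure(alpha, gamma)  # 0.8 * 0.5 = 0.4
```

The next video in the sequence would then use `er` as its α when its successor's score is computed, so the factor decays along the sequence.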
In some of these alternative embodiments, the current recommendation score, F (i), is calculated as follows:
F(i)=a×ctr(i)×ER(i)+b×Le(i)×ER(i)+c×RCR(i)×ER(i)
wherein i denotes the current time step; a, b and c are the corresponding set hyper-parameters; ctr(i) denotes the click-through rate of the corresponding video; Le(i) denotes the effective playing probability of the corresponding video; RCR(i) denotes the revenue conversion probability of the corresponding video; and ER(i) denotes the application exposure rate at the current time step.
With this formula, the current recommendation score of the current candidate video is determined based on the application exposure rate and the other target satisfaction degrees in the user feedback parameters.
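The formula above factors as ER(i)·(a·ctr + b·Le + c·RCR), which a direct sketch makes explicit; the default hyper-parameter values here are illustrative, not from the patent.

```python
def recommendation_score(ctr, le, rcr, er, a=1.0, b=1.0, c=1.0):
    """F(i) = a*ctr*ER + b*Le*ER + c*RCR*ER, with ER the application
    exposure rate; a, b, c are the tunable hyper-parameters."""
    return er * (a * ctr + b * le + c * rcr)

score = recommendation_score(ctr=0.1, le=0.5, rcr=0.05, er=0.8)  # 0.8 * 0.65
```

Since every term carries the same ER(i) factor, a low application exposure rate suppresses the whole score regardless of the other satisfaction degrees.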
step 32, retrieving a set number of videos from the current candidate videos according to the current recommendation score to obtain the current candidate output sequence of the current time step, and combining the current candidate output sequence with all historical candidate output sequences retrieved at all time steps before the current one, so as to generate the set number of extended video sequences, wherein the current candidate videos do not include any video already in the historical candidate output sequences, and the number of videos in each extended video sequence equals the step number of the current time step.
In some optional embodiments, retrieving a set number of videos from the current candidate videos according to the current recommendation score to obtain the current candidate output sequence of the current time step is implemented as follows: the set number of videos is selected from the current candidate videos in descending order of current recommendation score, yielding the current candidate output sequence.
In the present embodiment, the set number is the beam width (beam size) configured for the beam search, for example K; the K videos with the highest current recommendation scores are selected as the current candidate output sequence of the current time step.
step 33, repeatedly executing the step of retrieving the current candidate output sequence of the corresponding time step according to the current recommendation score of that time step, and repeatedly executing the step of combining the corresponding current candidate output sequence with all retrieved historical candidate output sequences, until each extended video sequence includes the preset number of videos, thereby obtaining the preset number of candidate video recommendation sequences.
In this embodiment, when the target video recommendation sequence to be recommended is long, the number of beam-search steps to execute is determined accordingly; the search stops when the number of time steps equals the length of the target video recommendation sequence. Of course, in some optional embodiments, the search also stops when, for a set target sequence length, no video meeting the requirement can be retrieved while selecting the next candidate output sequence. In this embodiment, after the search for the current candidate output sequence is completed at each time step, the output current candidate output sequence is combined with the at least one historical candidate output sequence obtained before; that is, before the search for the multiple candidate video recommendation sequences is complete, each newly obtained current candidate output sequence is added to the historical candidate output sequences, which are themselves formed by combining the preceding single candidate output sequences in order.
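Steps 31 to 33 together amount to a standard beam search over the candidate pool. A minimal self-contained sketch is given below, assuming per-video feedback tuples (ctr, le, rcr, er), equal hyper-parameter weights, and the α×γ exposure recursion; the data and names are illustrative, not from the patent.

```python
import heapq

def beam_search_sequences(pool_feedback, seq_len, beam_width, a=1.0, b=1.0, c=1.0):
    """Beam search as in steps 31-33, simplified (content similarity omitted).
    pool_feedback maps video id -> (ctr, le, rcr, er).
    Each beam entry is (cumulative_score, sequence, application_er)."""
    beams = [(0.0, [], 1.0)]
    for _ in range(seq_len):                      # one time step per position
        expanded = []
        for total, seq, app_er in beams:
            for vid, (ctr, le, rcr, er) in pool_feedback.items():
                if vid in seq:                    # a video may appear only once
                    continue
                new_er = app_er * er              # alpha * gamma recursion
                score = new_er * (a * ctr + b * le + c * rcr)
                expanded.append((total + score, seq + [vid], new_er))
        beams = heapq.nlargest(beam_width, expanded, key=lambda x: x[0])
    return beams

pool = {"v1": (0.2, 0.6, 0.05, 0.9),
        "v2": (0.1, 0.5, 0.02, 0.8),
        "v3": (0.3, 0.4, 0.10, 0.7)}
result = beam_search_sequences(pool, seq_len=2, beam_width=2)
```

Only `beam_width` partial sequences survive each time step, so the retrieval amount stays linear in the sequence length rather than exponential.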
step 34, determining the total recommendation score of the corresponding candidate video recommendation sequence from the sum of all current recommendation scores of that sequence and the mean of the content similarities of all its videos, wherein the recommendation label includes the total recommendation score.
In this embodiment, after the plurality of candidate video recommendation sequences has been retrieved, one of them must be selected; therefore the total recommendation score of each candidate video recommendation sequence is calculated, and the sequence with the highest total recommendation score is selected as the final recommendation sequence.
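The sequence-level scoring of step 34 can be sketched as follows; combining the score sum and the similarity mean by simple addition is our reading of the patent text, and the inputs are illustrative.

```python
def sequence_total_score(step_scores, similarities):
    """Total score of one candidate sequence (step 34): the sum of its
    per-step recommendation scores plus the mean content similarity of
    its videos (combination rule assumed to be additive)."""
    return sum(step_scores) + sum(similarities) / len(similarities)

# Per-step scores and per-video similarities for one candidate sequence
total = sequence_total_score([0.52, 0.31], [0.6, 0.8])
```

The candidate sequence with the largest `total` would then be returned as the target video recommendation sequence in step S103.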
Through the above steps, the application exposure rate of each video at the current time step is determined, and the current recommendation score of the current candidate video is determined based on the application exposure rate and the other target satisfaction degrees in the user feedback parameters, the application exposure rate being determined from the exposure rate in the user feedback parameter of the current candidate video and the application exposure rate of the video selected at the previous time step. A set number of videos is retrieved from the current candidate videos according to the current recommendation score to obtain the current candidate output sequence of the current time step, and that sequence is combined with all historical candidate output sequences retrieved at earlier time steps to generate the set number of extended video sequences, where the current candidate videos exclude any video already in the historical candidate output sequences and each extended video sequence contains as many videos as the step number of the current time step. The retrieval of the current candidate output sequence for each time step according to its current recommendation score, and the combination of that sequence with all retrieved historical candidate output sequences, are repeated until each extended video sequence includes the preset number of videos, yielding the preset number of candidate video recommendation sequences. The total recommendation score of each candidate video recommendation sequence is then determined from the sum of all its current recommendation scores and the mean of the content similarities of all its videos, the recommendation label including the total recommendation score. In this way the beam-search algorithm, combined with the obtained per-video user feedback parameters and video content similarities, produces a recommended video sequence that is better at the sequence level, satisfies the performance requirements of the corresponding device, is optimal among those retrieved, meets the user's diversity requirement and improves the recommendation effect.
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The present embodiment further provides a video recommendation apparatus, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 2 is a block diagram of a video recommendation apparatus according to an embodiment of the present application, and as shown in fig. 2, the apparatus includes:
the processing module 21 is configured to process each video in the candidate video pool by using a preset target prediction model and a preset heterogeneous graph convolution network algorithm to obtain a user feedback parameter corresponding to each video and content similarity corresponding to each video, where the user feedback parameter is used to represent multiple target satisfaction degrees corresponding to the videos;
the retrieval module 22 is coupled to the processing module 21 and configured to search for video recommendation sequences in a candidate video pool based on the user feedback parameters, the content similarity and a preset graph search algorithm to obtain a preset number of candidate video recommendation sequences and recommendation labels corresponding to each candidate video recommendation sequence, where the recommendation labels are used to represent the recommendation scores of the corresponding candidate video recommendation sequences;
the determining module 23 is coupled to the retrieving module 22, and configured to detect a target video recommendation sequence from a preset number of candidate video recommendation sequences according to the recommendation tag, and perform video recommendation according to an arrangement sequence of videos corresponding to the target video recommendation sequence.
According to the video recommendation device, each video in the candidate video pool is processed by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm, so that a user feedback parameter corresponding to each video and content similarity corresponding to each video are obtained, wherein the user feedback parameter is used for representing multiple target satisfaction degrees corresponding to the videos; based on user feedback parameters, content similarity and a preset graph search algorithm, performing video recommendation sequence search in a candidate video pool to obtain a preset number of candidate video recommendation sequences and recommendation labels corresponding to the candidate video recommendation sequences, wherein the recommendation labels are used for representing the recommendation scores of the corresponding candidate video recommendation sequences; according to the recommendation labels, target video recommendation sequences are detected in the preset number of candidate video recommendation sequences, and video recommendation is performed according to the arrangement sequence of videos corresponding to the target video recommendation sequences, so that the problems that diversity requirements cannot be met and video recommendation effects are poor due to multi-target and multi-mode characteristics of recommended videos are solved, and the advantages that the recommended videos meet the requirements of user diversity and the recommendation effects are improved are achieved.
In some embodiments, the determining module 23 is further configured to select the candidate video recommendation sequences with the largest recommendation score in the order of the recommendation scores from high to low, and determine that the target video recommendation sequence includes the candidate video recommendation sequence with the largest recommendation score.
In some embodiments, before each video in the candidate video pool is processed with the preset target estimation model and the preset heterogeneous graph convolution network algorithm, the apparatus is further configured to determine multiple service targets corresponding to the videos and to construct a task label for each service target, where the service targets include at least one of the following: video click-through rate, video playing duration, video revenue conversion rate and video exposure rate, and the task label is used to determine whether the corresponding service target of the video meets the standard; the task labels are taken as the corresponding supervision labels to train and generate multiple target estimation models based on a deep and cross neural network, where the target estimation models include at least one of the following: a click-through-rate estimation model, a duration estimation model, a revenue estimation model and an exposure-rate estimation model.
In some embodiments, the processing module 21 is further configured to predict a service target corresponding to each video by using a click-through rate prediction model, a duration prediction model, a revenue prediction model, and an exposure rate prediction model, respectively, and determine that the user feedback parameters include predicted satisfaction degrees of the following targets: click rate, effective playing probability, probability of revenue conversion, exposure rate.
In some embodiments, the retrieving module 22 further comprises:
the first determining unit is used for determining the application exposure rate corresponding to each video in the current time step and determining the current recommended score corresponding to the current candidate video based on the application exposure rate and the satisfaction degree of other targets in the user feedback parameters, wherein the application exposure rate is determined according to the exposure rate in the user feedback parameters corresponding to the current candidate video and the application exposure rate corresponding to the video selected in the previous time step;
the first retrieval unit is coupled with the first determination unit and used for retrieving a set number of videos from the current candidate videos according to the current recommendation score to obtain a current candidate output sequence corresponding to the current time step, and combining the current candidate output sequence with all historical candidate output sequences retrieved at all time steps before the current time step to generate a set number of extended video sequences, wherein the current candidate videos do not include any video in the historical candidate output sequences, and the number of videos of each extended video sequence is equal to the number of steps corresponding to the current time step;
the first processing unit is coupled with the first retrieval unit and is configured to repeatedly execute the step of retrieving the current candidate output sequence of the corresponding time step according to the current recommendation score of that time step, and to repeatedly execute the step of combining the corresponding current candidate output sequence with all retrieved historical candidate output sequences, until each extended video sequence includes a preset number of videos, thereby obtaining a preset number of candidate video recommendation sequences;
and the first calculating unit is coupled with the first processing unit and determines the total recommendation score corresponding to the corresponding candidate video recommendation sequence according to the sum of all current recommendation scores corresponding to each candidate video recommendation sequence and the mean value of the content similarity corresponding to all videos, wherein the recommendation label comprises the total recommendation score.
In some embodiments, the first retrieving unit is further configured to select a set number of videos from the current candidate videos in an order from high to low of the current recommendation score, so as to obtain a current candidate output sequence.
In some of these embodiments, the first determining unit is configured to calculate the current recommendation score, F (i), according to the following formula:
F(i)=a×ctr(i)×ER(i)+b×Le(i)×ER(i)+c×RCR(i)×ER(i)
wherein i denotes the current time step; a, b and c are the corresponding set hyper-parameters; ctr(i) denotes the click-through rate of the corresponding video; Le(i) denotes the effective playing probability of the corresponding video; RCR(i) denotes the revenue conversion probability of the corresponding video; and ER(i) denotes the application exposure rate at the current time step.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 3, an embodiment of the present application provides an electronic device, which includes a processor 31, a communication interface 32, a memory 33, and a communication bus 34, where the processor 31, the communication interface 32, and the memory 33 complete mutual communication through the communication bus 34,
a memory 33 for storing a computer program;
the processor 31, when executing the program stored in the memory 33, implements the method steps of fig. 1.
The processor in the electronic device implements the method steps in fig. 1; the technical effect is consistent with that of executing the video recommendation method of fig. 1 in the foregoing embodiment, and is not described again here.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
The communication interface is used for communication between the terminal and other devices.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
The present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the video recommendation method provided in any one of the foregoing method embodiments.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the video recommendation method of any of the above embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method for video recommendation, comprising:
processing each video in a candidate video pool by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain a user feedback parameter corresponding to each video and content similarity corresponding to each video, wherein the user feedback parameter is used for representing multiple target satisfaction degrees corresponding to the videos;
based on the user feedback parameters, the content similarity and a preset graph search algorithm, searching video recommendation sequences in the candidate video pool to obtain a preset number of candidate video recommendation sequences and recommendation labels corresponding to the candidate video recommendation sequences, wherein the recommendation labels are used for representing the recommendation scores of the corresponding candidate video recommendation sequences;
and detecting a target video recommendation sequence in the preset number of candidate video recommendation sequences according to the recommendation label, and recommending videos according to the arrangement sequence of the videos corresponding to the target video recommendation sequence.
2. The method of claim 1, wherein detecting a target video recommendation sequence among a preset number of the candidate video recommendation sequences according to the recommendation tag comprises: and selecting the candidate video recommendation sequence with the maximum recommendation score according to the sequence of the recommendation scores from high to low, and determining that the target video recommendation sequence comprises the candidate video recommendation sequence with the maximum recommendation score.
3. The method of claim 1, wherein before processing each video in the candidate video pool by using a preset target prediction model and a preset heterogeneous graph convolution network algorithm, the method further comprises:
determining a plurality of business targets corresponding to the video, and constructing a task label corresponding to each business target, wherein the business target comprises one of the following: a video click-through rate, a video playing duration, a video revenue conversion rate and a video exposure rate; the task label is used for determining whether the business target corresponding to the video reaches the standard;
and training and generating a plurality of target estimation models by taking the task labels as corresponding labels, based on a deep and cross neural network, wherein the target estimation models comprise one of the following models: a click-through-rate estimation model, a duration estimation model, a revenue estimation model and an exposure-rate estimation model.
4. The method according to claim 3, wherein processing each video in the candidate video pool by using a preset target estimation model to obtain a user feedback parameter corresponding to each video comprises:
respectively estimating the business target corresponding to each video by using the click rate estimation model, the duration estimation model, the income estimation model and the exposure rate estimation model, and determining that the user feedback parameters comprise the following estimated target satisfaction degrees: click-through rate, effective play probability, revenue conversion probability and exposure rate.
5. The method of claim 4, wherein performing video recommendation sequence search in the candidate video pool based on the user feedback parameters, the content similarity, and a preset graph search algorithm to obtain a preset number of candidate video recommendation sequences and a recommendation label corresponding to each candidate video recommendation sequence comprises:
determining an application exposure rate corresponding to each current candidate video at the current time step, and determining a current recommendation score corresponding to the current candidate video based on the application exposure rate and the other target satisfaction degrees in the user feedback parameters, wherein the application exposure rate is determined according to the exposure rate in the user feedback parameters corresponding to the current candidate video and the application exposure rate corresponding to the video selected at the previous time step;
retrieving a set number of videos from the current candidate videos according to the current recommendation score to obtain a current candidate output sequence corresponding to the current time step, and combining the current candidate output sequence with all history candidate output sequences retrieved at all time steps before the current time step to generate a set number of extended video sequences, wherein the current candidate videos do not include any videos in the history candidate output sequences, and the number of videos in each extended video sequence is equal to the number of steps corresponding to the current time step;
repeatedly executing the step of retrieving the current candidate output sequence corresponding to each subsequent time step according to the current recommendation score of that time step, and the step of combining the corresponding current candidate output sequence with all retrieved historical candidate output sequences, until each extended video sequence comprises the preset number of videos, thereby obtaining the preset number of candidate video recommendation sequences;
and determining a total recommendation score corresponding to the candidate video recommendation sequence according to the sum of all the current recommendation scores corresponding to each candidate video recommendation sequence and the mean value of the content similarity corresponding to all the videos, wherein the recommendation label comprises the total recommendation score.
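Claims 5 and 8 together describe a beam search over the candidate pool: at each time step every surviving partial sequence is extended with the as-yet-unused candidate videos, and only the highest-scoring extensions are kept. A simplified sketch, with a hypothetical per-step score function standing in for F(i) (the real score combines the estimated satisfaction degrees and the application exposure rate; the toy parameters and the multiplicative 0.8 decay are illustrative assumptions):

```python
def beam_search(videos, score_fn, beam_width, seq_len):
    """Expand each partial sequence with every unused video,
    keeping the beam_width highest-scoring extensions per step."""
    beams = [([], 0.0)]  # (sequence, cumulative score)
    for _ in range(seq_len):
        expanded = []
        for seq, total in beams:
            for v in videos:
                if v in seq:
                    continue  # current candidates exclude already-selected videos
                expanded.append((seq + [v], total + score_fn(v, len(seq))))
        expanded.sort(key=lambda t: t[1], reverse=True)
        beams = expanded[:beam_width]  # prune to the beam width
    return beams

# toy per-video base scores (hypothetical values)
params = {"a": 0.9, "b": 0.5, "c": 0.7, "d": 0.2}
beams = beam_search(list(params), lambda v, step: params[v] * 0.8 ** step,
                    beam_width=2, seq_len=3)
best_seq, best_score = beams[0]
print(best_seq)  # ['a', 'c', 'b']
```

The decaying per-step weight mimics the effect of the application exposure rate shrinking along the sequence, which is why the greedy order ("a" then "c") can still be reshuffled by later steps in a real run with less uniform scores.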
6. The method of claim 5, wherein retrieving a set number of videos from the current candidate videos according to the current recommendation score to obtain a current candidate output sequence corresponding to the current time step comprises: and selecting a set number of videos from the current candidate videos according to the sequence of the current recommendation score from high to low to obtain the current candidate output sequence.
7. The method of claim 5, wherein determining a current recommendation score corresponding to the current candidate video based on the application exposure and other of the target satisfaction levels in the user feedback parameters comprises:
calculating the current recommendation score F (i) as follows:
F(i)=a×ctr(i)×ER(i)+b×Le(i)×ER(i)+c×RCR(i)×ER(i)
wherein i represents the current time step, a, b and c are respectively corresponding set hyper-parameters, ctr (i) represents the click rate of the corresponding video, le (i) represents the effective playing probability of the corresponding video, RCR (i) represents the probability of income conversion of the corresponding video, and ER (i) represents the application exposure rate corresponding to the current time step.
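The scoring formula of claim 7 is a weighted sum in which the application exposure rate ER(i) multiplies every term, so it can be factored out. A minimal sketch (all numeric estimates below are hypothetical, not from the patent):

```python
def recommendation_score(ctr, le, rcr, er, a=1.0, b=1.0, c=1.0):
    """F(i) = a*ctr(i)*ER(i) + b*Le(i)*ER(i) + c*RCR(i)*ER(i),
    factored as (a*ctr + b*Le + c*RCR) * ER."""
    return (a * ctr + b * le + c * rcr) * er

# hypothetical per-video estimates from the four prediction models
f = recommendation_score(ctr=0.12, le=0.45, rcr=0.03, er=0.8)
print(round(f, 3))  # 0.48
```

The hyper-parameters a, b, c trade off the click, play-duration and revenue objectives; claim 5's ER(i) update (from the previous step's selected video) would be applied before each call.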
8. The method of claim 1, wherein the graph search algorithm comprises a beam search algorithm.
9. The method of claim 1, wherein the heterogeneous graph convolutional network algorithm comprises one of the following: a graph convolutional network GCN, a graph attention network GAT, and a graph sampling and aggregation model GraphSAGE.
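Claim 9 allows any of GCN, GAT or GraphSAGE for computing content similarity over the video graph. A minimal NumPy sketch of a single GCN propagation layer (the graph, features and weights are illustrative toy values, not from the patent); pairwise content similarity could then be taken as, e.g., cosine similarity between the resulting node embeddings:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)  # ReLU activation

# tiny video graph: 3 videos, edges 0-1 and 1-2 (illustrative)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.eye(3)             # one-hot node features
weight = np.ones((3, 2))      # 3 -> 2 dimensional projection (toy weights)
h = gcn_layer(adj, feats, weight)
print(h.shape)  # (3, 2)
```

A heterogeneous variant would keep separate adjacency matrices (e.g. video-tag, video-user edges) and aggregate per edge type before combining, as GAT or GraphSAGE would with attention or neighbor sampling respectively.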
10. A video recommendation apparatus, comprising:
the processing module is used for processing each video in the candidate video pool by using a preset target estimation model and a preset heterogeneous graph convolution network algorithm to obtain a user feedback parameter corresponding to each video and content similarity corresponding to each video, wherein the user feedback parameter is used for representing multiple target satisfaction degrees corresponding to the videos;
the retrieval module is used for searching video recommendation sequences in the candidate video pool based on the user feedback parameters, the content similarity and a preset graph search algorithm to obtain a preset number of candidate video recommendation sequences and recommendation labels corresponding to the candidate video recommendation sequences, wherein the recommendation labels are used for representing the recommendation scores of the corresponding candidate video recommendation sequences;
and the determining module is used for detecting a target video recommendation sequence in the preset number of candidate video recommendation sequences according to the recommendation label and recommending videos according to the video arrangement sequence corresponding to the target video recommendation sequence.
11. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the video recommendation method of any one of claims 1-9 when executing the program stored in the memory.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video recommendation method according to any one of claims 1-9.
CN202211168200.9A 2022-09-23 2022-09-23 Video recommendation method and device, electronic equipment and storage medium Pending CN115391665A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211168200.9A CN115391665A (en) 2022-09-23 2022-09-23 Video recommendation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115391665A true CN115391665A (en) 2022-11-25

Family

ID=84129297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211168200.9A Pending CN115391665A (en) 2022-09-23 2022-09-23 Video recommendation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115391665A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116701706A (en) * 2023-07-29 2023-09-05 腾讯科技(深圳)有限公司 Data processing method, device, equipment and medium based on artificial intelligence
CN116701706B (en) * 2023-07-29 2023-09-29 腾讯科技(深圳)有限公司 Data processing method, device, equipment and medium based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN110837602B (en) User recommendation method based on representation learning and multi-mode convolutional neural network
CN108804633B (en) Content recommendation method based on behavior semantic knowledge network
CN111061946B (en) Method, device, electronic equipment and storage medium for recommending scenerized content
CN105488024B (en) The abstracting method and device of Web page subject sentence
US20180341696A1 (en) Method and system for detecting overlapping communities based on similarity between nodes in social network
CN110909182B (en) Multimedia resource searching method, device, computer equipment and storage medium
CN105991397B (en) Information dissemination method and device
CN110457581A (en) A kind of information recommended method, device, electronic equipment and storage medium
CN109325182B (en) Information pushing method and device based on session, computer equipment and storage medium
CN112989169B (en) Target object identification method, information recommendation method, device, equipment and medium
CN112199600A (en) Target object identification method and device
CN107153656A (en) A kind of information search method and device
CN113011471A (en) Social group dividing method, social group dividing system and related devices
CN112364245B (en) Top-K movie recommendation method based on heterogeneous information network embedding
CN115391665A (en) Video recommendation method and device, electronic equipment and storage medium
CN111582448B (en) Weight training method and device, computer equipment and storage medium
CN110851708B (en) Negative sample extraction method, device, computer equipment and storage medium
Annam et al. Entropy based informative content density approach for efficient web content extraction
CN113312523B (en) Dictionary generation and search keyword recommendation method and device and server
CN112183069B (en) Keyword construction method and system based on historical keyword put-in data
CN114722313A (en) Search result sorting method, device, equipment and storage medium
CN111291904B (en) Preference prediction method and device and computer equipment
CN110719224B (en) Topological potential community detection method based on label propagation
CN112328835A (en) Method and device for generating vector representation of object, electronic equipment and storage medium
CN113901056A (en) Interface recommendation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination