CN116821475A - Video recommendation method and device based on client data and computer equipment - Google Patents


Info

Publication number
CN116821475A
Authority
CN
China
Prior art keywords
video
client
candidate
similarity
videos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310572613.1A
Other languages
Chinese (zh)
Other versions
CN116821475B (en)
Inventor
陆殿军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Honey Network Technology Co ltd
Original Assignee
Guangzhou Honey Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Honey Network Technology Co ltd filed Critical Guangzhou Honey Network Technology Co ltd
Priority to CN202310572613.1A priority Critical patent/CN116821475B/en
Publication of CN116821475A publication Critical patent/CN116821475A/en
Application granted granted Critical
Publication of CN116821475B publication Critical patent/CN116821475B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application belongs to the field of artificial intelligence, and relates to a video recommendation method, apparatus, computer equipment, and storage medium based on client data. The method comprises the following steps: acquiring a client portrait of a client, wherein the client portrait comprises an interest tag and a viewing habit tag, and the viewing habit tag is generated based on the client's viewing habit data on historical videos; obtaining a video portrait of each candidate video, wherein each video portrait comprises a content tag and a play style tag, the content tag corresponds to the interest tag, and the play style tag corresponds to the viewing habit tag; respectively calculating the similarity between the client portrait and the video portrait of each candidate video; obtaining a video score of each candidate video; calculating a recommendation score of each candidate video according to the similarity and the video score corresponding to each candidate video; and selecting target videos from the candidate videos according to the obtained recommendation scores, and recommending videos to the client according to the selected target videos. The method and device improve the accuracy of video recommendation.

Description

Video recommendation method and device based on client data and computer equipment
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a video recommendation method, apparatus, computer device, and storage medium based on client data.
Background
With the development of internet technology, video websites (including video applications) have also developed rapidly, and browsing videos has become an integral part of online entertainment. To attract customers, a video website may make video recommendations to a customer. Current video recommendation techniques label videos in advance; such labels indicate the content of a video, typically its subject matter, the persons it involves, and so on. The server of the video website predicts the types of videos a customer is interested in from the labels of the videos the customer has watched, and recommends highly popular videos of those types. However, such video recommendation techniques have reached a bottleneck and can produce a Matthew effect in video popularity, where already-popular videos are recommended ever more often, which degrades the quality of the recommendations.
Disclosure of Invention
The embodiment of the application aims to provide a video recommendation method, a video recommendation device, computer equipment and a storage medium based on client data so as to improve the accuracy of video recommendation.
In order to solve the above technical problems, the embodiments of the present application provide a video recommendation method based on client data, which adopts the following technical scheme:
acquiring a customer portrait of a customer, wherein the customer portrait comprises an interest tag and a viewing habit tag, and the viewing habit tag is generated based on the viewing habit data of the customer on a historical video;
obtaining video portraits of each candidate video, wherein each video portrait comprises a content label and a playing style label, the content label corresponds to the interest label, and the playing style label corresponds to the watching habit label;
respectively calculating the similarity between the client portrait and the video portrait of each candidate video;
obtaining video scores of the candidate videos;
calculating recommendation scores of the candidate videos according to the similarity and the video scores corresponding to the candidate videos;
and selecting target videos from the candidate videos according to the obtained recommendation scores, and recommending the videos to the clients according to the selected target videos.
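Taken together, the claimed steps amount to a rank-and-select pipeline. The sketch below is a minimal illustration of that flow, assuming the similarities and video scores have already been computed; the `Candidate` type, the combination weights, and `top_k` are hypothetical choices, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Illustrative shape for one candidate video's precomputed factors."""
    video_id: str
    similarity: float   # similarity between the client portrait and this video's portrait
    video_score: float  # precomputed video quality score

def recommend(candidates, top_k=10, w_sim=0.7, w_score=0.3):
    """Rank candidate videos by a recommendation score combining portrait
    similarity and video quality, then select the target videos.
    The weights here are illustrative, not specified by the patent."""
    scored = [
        (w_sim * c.similarity + w_score * c.video_score, c.video_id)
        for c in candidates
    ]
    scored.sort(reverse=True)  # highest recommendation score first
    return [video_id for _, video_id in scored[:top_k]]
```

The selected IDs would then be pushed to the client application when a video recommendation instruction arrives.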
Further, before the step of obtaining the customer representation of the customer, the method further includes:
obtaining viewing history data of a client on a history video, wherein the viewing history data comprises content tags and video style tags of the history video, a playing source of the history video, operation behavior data and viewing habit data of the client on the history video;
And generating a customer portrait of the customer according to the viewing history data.
Further, the step of generating a customer representation of the customer based on the viewing history data includes:
generating a tendency label of the client to the historical video according to the operation behavior data;
clustering the content tags, the video style tags and the viewing habit data to obtain interest tags and viewing habit tags;
adding weights to the interest tags and the viewing habit tags according to the tendency labels and play sources of the historical videos corresponding to the interest tags and the viewing habit tags, wherein the added weights are used for calculating the similarity between the client portrait and the video portrait of each candidate video;
and generating the customer portrait of the customer according to the interest tag, the viewing habit tag and the corresponding weight.
Further, the step of calculating the similarity between the client portrait and the video portraits of the candidate videos respectively includes:
for each candidate video representation, constructing an interest tag in the customer representation and a content tag in the video representation as a first tag combination;
constructing a viewing habit tag in the client portrait and a play style tag in the video portrait as a second tag combination;
and inputting the first tag combination and the second tag combination into a portrait evaluation model to obtain the similarity between the client portrait and the video portrait.
Further, the step of calculating the similarity between the client portrait and the video portraits of the candidate videos respectively includes:
converting interest tags in the customer representation into a first sequence and viewing habit tags in the customer representation into a second sequence;
converting content tags in the video images into a third sequence and converting play style tags in the video images into a fourth sequence for each candidate video image;
calculating a first Euclidean distance between the first sequence and the third sequence, and setting the first Euclidean distance as a first similarity;
calculating a second Euclidean distance between the second sequence and the fourth sequence, and setting the second Euclidean distance as a second similarity;
and calculating the similarity between the client portrait and the video portrait according to the first similarity and the second similarity.
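A minimal sketch of this distance-based variant, assuming the tag sequences have already been converted to equal-length numeric vectors. Note that the patent sets each Euclidean distance directly as a "similarity", so under this convention a smaller value means closer portraits; the 50/50 combination weights are an assumption, as the patent does not fix how the two similarities are combined.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def portrait_similarity(interest_seq, habit_seq, content_seq, style_seq,
                        w1=0.5, w2=0.5):
    """First similarity: interest tags (client) vs content tags (video).
    Second similarity: viewing habit tags (client) vs play style tags (video).
    Combined with illustrative weights w1/w2; smaller = closer portraits
    under the patent's distance-as-similarity convention."""
    first_similarity = euclidean(interest_seq, content_seq)
    second_similarity = euclidean(habit_seq, style_seq)
    return w1 * first_similarity + w2 * second_similarity
```

A practical system might instead map each distance d to 1 / (1 + d) so that larger values mean more similar, which composes more naturally with the later recommendation score.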
Further, before the step of obtaining the video score of each candidate video, the method further includes:
for each candidate video, acquiring multi-dimensional evaluation information of the candidate video, wherein the multi-dimensional evaluation information comprises publisher information, audio information, image information and content information;
for each type of evaluation information, processing the evaluation information through a preset evaluation model to obtain an evaluation score of the evaluation information;
and calculating the video scores of the candidate videos according to the evaluation scores.
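The patent does not specify how the per-dimension evaluation scores are aggregated into a single video score; a weighted mean, as sketched below, is one plausible reading. The dimension names mirror the four kinds of evaluation information above, and the default equal weights are illustrative.

```python
def video_score(eval_scores, weights=None):
    """Aggregate per-dimension evaluation scores (e.g. publisher, audio,
    image, content) into one video score via a weighted mean.
    With weights=None every dimension counts equally."""
    if weights is None:
        weights = {dim: 1.0 for dim in eval_scores}
    total_weight = sum(weights[dim] for dim in eval_scores)
    return sum(eval_scores[dim] * weights[dim] for dim in eval_scores) / total_weight
```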
Further, the step of calculating the recommendation score of each candidate video according to the similarity and the video score corresponding to each candidate video includes:
according to a preset mapping rule, mapping the similarity and video score corresponding to each candidate video respectively to obtain standard similarity and standard video score;
and calculating the recommendation score of each candidate video for the client according to the standard similarity and the standard video score of each candidate video.
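One plausible "preset mapping rule" is min-max normalization onto a common scale, after which the two standardized factors can be combined linearly into the recommendation score. The value ranges and combination weights below are assumptions, not taken from the patent.

```python
def min_max(value, lo, hi):
    """Map a raw value into [0, 1]; one common choice of mapping rule."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def recommendation_score(similarity, video_score,
                         sim_range=(0.0, 1.0), score_range=(0.0, 100.0),
                         w_sim=0.6, w_score=0.4):
    """Normalize the similarity and video score onto a common scale
    (standard similarity, standard video score), then combine them.
    Ranges and weights are illustrative."""
    std_sim = min_max(similarity, *sim_range)
    std_score = min_max(video_score, *score_range)
    return w_sim * std_sim + w_score * std_score
```

Mapping both factors onto one scale before combining prevents whichever factor has the larger raw range from dominating the recommendation score.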
In order to solve the above technical problems, the embodiment of the present application further provides a video recommendation device based on client data, which adopts the following technical scheme:
The client portrait acquisition module is used for acquiring client portraits of clients, wherein the client portraits comprise interest tags and viewing habit tags, and the viewing habit tags are generated based on the viewing habit data of the clients on historical videos;
the video portrait acquisition module is used for acquiring video portraits of each candidate video, wherein each video portrait comprises a content tag and a playing style tag, the content tag corresponds to the interest tag, and the playing style tag corresponds to the viewing habit tag;
the similarity calculation module is used for calculating the similarity between the client portrait and the video portraits of the candidate videos respectively;
the video score acquisition module is used for acquiring the video scores of the candidate videos;
the recommendation score calculating module is used for calculating recommendation scores of the candidate videos according to the similarity and the video scores corresponding to the candidate videos;
and the selecting and recommending module is used for selecting target videos from the candidate videos according to the obtained recommending scores and recommending the videos to the clients according to the selected target videos.
To solve the above technical problem, the embodiments of the present application further provide a computer device, where the computer device includes a memory and a processor, where the memory stores computer readable instructions, and the processor executes the computer readable instructions to implement the steps of the video recommendation method based on client data as described above.
To solve the above technical problem, embodiments of the present application further provide a computer readable storage medium having computer readable instructions stored thereon, which when executed by a processor implement the steps of the video recommendation method based on client data as described above.
Compared with the prior art, the embodiments of the present application have the following main beneficial effects: a client portrait of the client is obtained, comprising an interest tag and a viewing habit tag; the viewing habit tag is generated based on the client's viewing habit data on historical videos, focuses on features of video production and of the client's browsing habits, and can therefore record the client's video preferences more comprehensively; a video portrait of each candidate video is obtained, comprising a content tag and a play style tag, where the content tag corresponds to the interest tag and the play style tag corresponds to the viewing habit tag, together describing the characteristics of the candidate video; the similarity between the client portrait and the video portrait of each candidate video is calculated, where a higher similarity means the candidate video is closer to the client's preferences; each candidate video also has a video score describing its quality, and a recommendation score is calculated for each candidate video from its similarity and video score, representing numerically the video's recommendation value to the client; target videos are selected from the candidate videos according to the recommendation scores, and videos are recommended to the client accordingly. The application considers both the client's viewing habits and video quality, so the evaluation factors are more comprehensive, thereby improving the accuracy of video recommendation.
Drawings
In order to more clearly illustrate the solution of the present application, a brief description will be given below of the drawings required for the description of the embodiments of the present application, it being apparent that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without the exercise of inventive effort for a person of ordinary skill in the art.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a client data based video recommendation method in accordance with the present application;
FIG. 3 is a flow chart of one embodiment of generating a customer portrait in accordance with the present application;
FIG. 4 is a flow chart of one embodiment of step S208 of FIG. 3;
FIG. 5 is a flow chart of one embodiment of step S203 of FIG. 2;
FIG. 6 is a flow chart of another embodiment of step S203 in FIG. 2;
FIG. 7 is a schematic diagram of one embodiment of a client data based video recommendation device in accordance with the present application;
FIG. 8 is a schematic structural view of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that, the video recommendation method based on the client data provided by the embodiment of the present application is generally executed by a server, and accordingly, the video recommendation device based on the client data is generally disposed in the server.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow chart of one embodiment of a client data based video recommendation method in accordance with the present application is shown. The video recommendation method based on the client data comprises the following steps:
Step S201, a customer portrait of a customer is acquired, the customer portrait comprises interest labels and viewing habit labels, and the viewing habit labels are generated based on viewing habit data of the customer on historical videos.
In this embodiment, the electronic device (for example, the server shown in fig. 1) on which the video recommendation method based on client data runs may communicate with the terminal device through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, 3G/4G/5G connections, Wi-Fi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra wideband) connections, and other wireless connection methods now known or developed in the future.
Specifically, a customer portrait of the customer is obtained. The customer portrait is generated in advance and includes interest tags and viewing habit tags. The interest tags represent elements the customer is interested in, including the subject matter of a video (e.g., "comedy movie"), the persons or objects a video involves (e.g., the actor "Zhang San"), activities (e.g., "XXX photo contest"), and so on. The viewing habit tags are generated based on the customer's viewing habit data on historical videos, where the historical videos are videos the customer has watched in the past and the viewing habit data are data related to the customer's viewing habits; viewing habit tags generated from such data can reflect the customer's video preferences and interests, but not the subject matter, persons, objects, or activities a video involves. Viewing habit tags instead focus on features of video production: for example, a video may present its highlights and key moments right at the beginning, or build up slowly; its pace may be gentle, or fast and tense. A customer may prefer certain of these features when browsing videos, and the viewing habit tags record such preferences. In addition, a customer may skip around or fast-forward while watching; different viewing modes can be associated with production features of a video and can likewise be recorded in the viewing habit tags. Because a customer may have different viewing habits for videos under different interest tags, viewing habit tags may also be nested under interest tags.
Step S202, obtaining video portraits of each candidate video, wherein each video portrait comprises a content label and a playing style label, the content label corresponds to an interest label, and the playing style label corresponds to a viewing habit label.
The candidate video may be a currently existing video in the video website, and the server needs to select the candidate video to push to the client.
Specifically, video portraits of each candidate video are acquired, and the video portraits can be automatically or manually added and generated when the video is uploaded to a video website. The video portraits include content tags corresponding to interest tags of the client portraits and play style tags corresponding to viewing habit tags. Corresponding here means that two types of tags can share the same full set of tags, just one type describing a customer representation and one type describing a video representation.
Step S203, the similarity between the customer portrait and the video portrait of each candidate video is calculated.
Specifically, the server determines the similarity between the client portrait and the video portrait of each candidate video according to a preset portrait similarity evaluation strategy. The similarity is a numerical value whose magnitude indicates how similar, or close, two portraits are.
Step S204, obtaining video scores of the candidate videos.
Specifically, the server also calculates video scores of the candidate videos in advance, wherein the video scores are video quality scores of the candidate videos and are calculated in advance according to information of multiple dimensions.
Step S205, according to the similarity and video scores corresponding to the candidate videos, calculating the recommendation scores of the candidate videos.
Specifically, currently, each candidate video already has its similarity between its video representation and the customer representation, as well as its video score. The server may calculate a recommendation score based on the similarity and video scores that each candidate video has. The recommendation score is a numerical value, and is used for representing the recommendation value of the candidate video to the client, and the higher the recommendation score is, the more worth recommending the candidate video to the client.
Step S206, selecting target videos from the candidate videos according to the obtained recommendation scores, and recommending the videos to the clients according to the selected target videos.
Specifically, target videos are selected from the candidate videos according to the recommendation scores; there may be more than one target video. When a video recommendation instruction is received, the selected target videos are sent to the client application held by the customer, where they are displayed and played, thereby completing the video recommendation. The video recommendation instruction may be an instruction triggered by the website front end when the customer opens the video website through the client application.
In this embodiment, the client portrait of the client is obtained; the client portrait comprises an interest tag and a viewing habit tag, and the viewing habit tag, generated based on the client's viewing habit data on historical videos, focuses on features of video production and of the client's browsing habits, so the client's video preferences can be recorded more comprehensively; the video portrait of each candidate video is obtained, comprising a content tag and a play style tag, where the content tag corresponds to the interest tag and the play style tag corresponds to the viewing habit tag, together describing the characteristics of the candidate video; the similarity between the client portrait and the video portrait of each candidate video is calculated, where a higher similarity means the candidate video is closer to the client's preferences; each candidate video has a video score describing its quality, and a recommendation score is calculated for each candidate video from its similarity and video score, representing numerically its recommendation value to the client; target videos are selected from the candidate videos according to the recommendation scores, and videos are recommended to the client accordingly. The application considers both the client's viewing habits and video quality, making the evaluation factors more comprehensive and thereby improving the accuracy of video recommendation.
Further, as shown in fig. 3, before the step S201, a step of generating a customer portrait may be further included, where the step of generating a customer portrait includes:
in step S207, viewing history data of the client on the history video is obtained, where the viewing history data includes content tags and video style tags of the history video, play sources of the history video, operation behavior data of the client on the history video, and viewing habit data.
Step S208, generating the customer portrait of the customer according to the viewing history data.
Specifically, the server needs to generate a customer representation of the customer. Firstly, viewing history data of a client on a history video is obtained, wherein the history video can be video which is recommended to the client and watched by the client according to a video recommendation method based on the client data in the embodiments of the application (the video recommendation method based on the client data in the application can be executed for a plurality of times, so that timeliness and accuracy of video recommendation are improved), or can be video which is recommended to the client according to other video recommendation methods and watched by the client.
The viewing history data includes the content tags and video style tags of the historical videos, the play sources of the historical videos, and the client's operation behavior data and viewing habit data on the historical videos. The content tag records the subject matter of a video and the persons, objects, activities, and so on that it involves. The video style tag records features of video production. Each historical video the customer watched has a play source, which records how the customer came to watch it: automatic recommendation by the video website, finding the video through search, finding it in the watch history and watching it again, or finding it in the favorites or likes column and watching it again.
A customer's viewing of videos generates operation behavior data and viewing habit data. Actions such as liking, favoriting, commenting on, and forwarding a video constitute the operation behavior data. The customer's preferences regarding video production features and video playback while browsing constitute the viewing habit data.
The server processes the viewing history data to generate a customer representation of the customer.
In this embodiment, viewing history data of a client on a history video is obtained, including content tags and video style tags of the history video, play sources of the history video, operation behavior data and viewing habit data of the client on the history video, and viewing of the client on the history video is recorded from multiple dimensions, so that accuracy of client portraits generated according to the viewing history data is ensured.
Further, as shown in fig. 4, the step S208 may include:
step S2081, according to the operation behavior data, generating a tendency label of the client to the historical video.
Specifically, the operation behavior data includes actions such as liking, clicking, favoriting, commenting on, posting bullet comments on, forwarding, and tipping the historical video by the client, which can to a certain extent represent the client's subjective tendency toward and preference for the historical video; a tendency label of the client toward the historical video can therefore be generated from the operation behavior data. For example, when a client likes, favorites, and forwards a historical video and leaves a comment such as "I like it, the video is thorough and very good", a tendency label of "likes very much" can be generated; when a client dislikes a video and leaves a comment such as "a complete mess, no logic at all", a tendency label of "dislikes" can be generated.
The trend labels may be generated from a rule base or model employing artificial intelligence.
And step S2082, clustering the content tags, the video style tags and the viewing habit data to obtain interest tags and viewing habit tags.
Specifically, the server clusters all content tags, video style tags and viewing habit data to obtain interest tags and viewing habit tags, wherein the interest tags can be obtained by clustering the content tags and the video style tags, and the viewing habit tags can be obtained by clustering the video style tags and the viewing habit data.
Interest tags and viewing habit tags may be associated with certain historical videos. In one embodiment, if the number of historical videos with which an interest tag or viewing habit tag is associated is small, e.g., less than a preset number threshold, that tag may be deleted. This ensures that the retained interest tags and viewing habit tags are backed by more historical videos; more samples ensure the accuracy of the tags.
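The pruning step above can be sketched as a simple support filter. The `tag -> videos` mapping shape and the default threshold below are illustrative assumptions; the patent only speaks of a preset number threshold.

```python
def prune_tags(tag_to_videos, min_support=3):
    """Drop interest/viewing-habit tags associated with fewer historical
    videos than the preset number threshold, keeping only tags backed
    by enough samples to be reliable."""
    return {tag: videos for tag, videos in tag_to_videos.items()
            if len(videos) >= min_support}
```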
Step S2083, adding weights to the interest tags and the viewing habit tags according to the tendency labels and play sources of the historical videos corresponding to the interest tags and the viewing habit tags, wherein the added weights are used for calculating the similarity between the customer portrait and the video portrait of each candidate video.
Specifically, interest tags and viewing habit tags may be associated with historical videos, which in turn carry tendency labels and play sources. When a tendency label indicates that the client favors a historical video, the associated interest and viewing habit tags are important, the information they carry deserves attention, and a higher weight can be added. The play source shows how the client came to watch the video; if it shows that the client found the historical video through search, the watch history, the favorites, or the likes column, the client is likely to have a stronger interest in it, the information carried by those tags again deserves attention, and a higher weight can likewise be added. The weights may then participate in the calculation of the similarity between the customer portrait and the video portrait of each candidate video.
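A minimal sketch of one possible weighting scheme consistent with this step: tags tied to favorable tendency labels or to active play sources (search, watch history, favorites) receive larger weights. All boost values below are assumptions; the patent only states the direction of the adjustment.

```python
# Illustrative multiplicative boosts; the patent does not fix any values.
TENDENCY_BOOST = {"very_like": 1.5, "like": 1.2, "neutral": 1.0, "dislike": 0.5}
SOURCE_BOOST = {"auto_recommend": 1.0, "search": 1.3,
                "watch_history": 1.2, "favorites": 1.4}

def tag_weight(tendency, play_source, base=1.0):
    """Weight for an interest/viewing-habit tag, derived from the tendency
    label and play source of its associated historical video; later used
    when computing portrait similarity."""
    return base * TENDENCY_BOOST[tendency] * SOURCE_BOOST[play_source]
```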
Step S2084, generating a client portrait of the client according to the interest tags, the viewing habit tags and the weights corresponding to the interest tags and the viewing habit tags.
Specifically, the server may generate a client portrait of the client based on the interest tags, the viewing habit tags, and their respective weights.
In this embodiment, a tendency tag reflecting the client's subjective attitude toward the historical videos is generated according to the operation behavior data; the content tags, the video style tags and the viewing habit data are clustered to obtain interest tags and viewing habit tags; weights are added to the interest tags and the viewing habit tags according to the tendency tags and the play sources of the historical videos corresponding to them, so as to distinguish the importance of different tags; and a client portrait describing the client is generated according to the interest tags, the viewing habit tags and their corresponding weights, for subsequent video recommendation.
Further, as shown in fig. 5, in an embodiment, the step S203 may include:
step S2031, for each video portrait of the candidate video, constructs an interest tag in the client portrait and a content tag in the video portrait as a first tag combination.
Specifically, since the interest tag in the client portrait corresponds to the content tag in the video portrait, the interest tag in the client portrait and the content tag in the video portrait are constructed as a first tag combination for the video portrait of each candidate video.
The interest tag may further include tags of a plurality of sub-categories (such as a subject of a video, a person, an object, an activity, etc. related to the video), the content tag may also include a plurality of sub-categories, and the interest tag and the sub-category corresponding to the content tag may be combined together to obtain a first tag combination according to the combination of the plurality of sub-categories. If multiple tags are also included in the sub-class, the tags under the sub-class may be ordered by pinyin such that the tags in the tag combination have a certain order.
Step S2032, the viewing habit tag in the client portrait and the playing style tag in the video portrait are constructed as a second tag combination.
Specifically, the viewing habit tag in the client portrait and the playing style tag in the video portrait are combined together to obtain a second tag combination.
Step S2033, inputting the first tag combination and the second tag combination into the portrait assessment model to obtain the similarity between the client portrait and the video portrait.

Specifically, the first tag combination and the second tag combination are input into a portrait assessment model. The portrait assessment model can be constructed based on random forests or GBDT (Gradient Boosting Decision Tree), or can be obtained by deep learning based on an artificial intelligence model. The portrait assessment model may output the similarity between the client portrait and the video portrait.
The interest tags and viewing habit tags in the tag combinations may carry weights, and the portrait assessment model may incorporate these weights into the calculation.
In this embodiment, for the video portrait of each candidate video, the interest tag in the client portrait and the content tag in the video portrait are constructed as a first tag combination, and the viewing habit tag in the client portrait and the playing style tag in the video portrait are constructed as a second tag combination; the resulting tag combinations are input into a portrait assessment model to calculate the similarity between the client portrait and the video portrait from the model.
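As a sketch of how a tag combination might be turned into numeric features that such a portrait assessment model could consume, the following uses a weighted overlap and a coverage ratio; these feature definitions are illustrative assumptions, not the application's prescribed encoding:

```python
def combination_features(client_tags, video_tags, weights=None):
    """Encode one tag combination (e.g. interest tags paired with content
    tags) as numeric features for a portrait assessment model.

    Returns (weighted_overlap, coverage):
      weighted_overlap -- sum of weights of tags shared by both sides
      coverage         -- fraction of the client's tags that matched
    """
    weights = weights or {}
    shared = set(client_tags) & set(video_tags)
    overlap = sum(weights.get(t, 1.0) for t in shared)
    coverage = len(shared) / len(client_tags) if client_tags else 0.0
    return overlap, coverage


# First tag combination: interest tags vs. content tags,
# with a weight previously attached to the "cooking" tag.
features = combination_features(
    ["cooking", "travel"], ["cooking", "vlog"], weights={"cooking": 1.5})
```

A trained model (random forest, GBDT, or a deep network) would then map such feature pairs, one per tag combination, to a similarity value.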
Further, as shown in fig. 6, in another embodiment, the step S203 may include:
step S2034 converts the interest tags in the customer portrait into a first sequence and converts the viewing habit tags in the customer portrait into a second sequence.
Step S2035, for each candidate video, converts the content tag in the video representation into a third sequence and converts the play style tag in the video representation into a fourth sequence.
Specifically, the application also provides another calculation mode of the similarity. The server converts the interest tags in the client image into a first sequence and the viewing habit tags in the client image into a second sequence.
For each candidate video, converting the content tag in the video representation to a third sequence and converting the play style tag in the video representation to a fourth sequence.
Each obtained sequence may be a vector sequence, the text tags being converted into vector sequences according to a preset conversion rule.
As mentioned above, some tags, such as interest tags, may in turn contain tags of multiple subclasses. Therefore, the labels may be arranged in a predetermined order (for example, an arrangement order of the sub-class labels is specified) and then converted into a vector sequence.
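One simple conversion rule of this kind is a multi-hot encoding over a fixed, sorted vocabulary, so that the tag order is deterministic; the vocabulary and tags below are illustrative, and sorting here stands in for the predetermined sub-class ordering mentioned above:

```python
def tags_to_sequence(tags, vocabulary):
    """Convert text tags into a fixed-length vector sequence.

    Sorting the vocabulary fixes the dimension order, and each slot is 1.0
    if the corresponding tag is present -- a simple multi-hot conversion
    rule.
    """
    present = set(tags)
    return [1.0 if tag in present else 0.0 for tag in sorted(vocabulary)]


first_sequence = tags_to_sequence(
    ["travel", "cooking"], vocabulary=["cooking", "music", "travel"])
```

Because both the client portrait tags and the video portrait tags are encoded against the same vocabulary, the resulting sequences have equal length and can be compared by a distance measure.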
Step S2036, a first euclidean distance between the first sequence and the third sequence is calculated, and the first euclidean distance is set to the first similarity.
Step S2037, a second euclidean distance between the second sequence and the fourth sequence is calculated, and the second euclidean distance is set to a second similarity.
Step S2038, calculating the similarity between the client portrait and the video portrait based on the first similarity and the second similarity.
Specifically, the Euclidean distance between the first sequence and the third sequence is calculated as the first Euclidean distance, and the first Euclidean distance is set as the first similarity. The Euclidean distance between the second sequence and the fourth sequence is calculated as the second Euclidean distance, and the second Euclidean distance is set as the second similarity. If a tag carries a weight, the weight also needs to participate in the calculation of the Euclidean distance, for example by scaling the squared difference of the corresponding term in the Euclidean distance according to the weight.
The similarity between the client portrait and the video portrait is then calculated from the first similarity and the second similarity; in one embodiment, the first similarity and the second similarity are added to obtain the similarity between the client portrait and the video portrait.
In one embodiment, the cosine similarity between the first sequence and the third sequence may also be calculated as the first similarity, and the cosine similarity between the second sequence and the fourth sequence as the second similarity, with the similarity between the client portrait and the video portrait then calculated from them.
In this embodiment, the tags in the portraits are converted into vector sequences, and the Euclidean distances between the vector sequences are calculated to realize a quantitative evaluation of the similarity between the client portrait and the video portrait.
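The distance computation of steps S2036 to S2038, including the weighted variant and the simple-sum combination of the embodiment above, can be sketched as follows; the per-dimension weights are optional:

```python
import math

def weighted_euclidean(a, b, weights=None):
    """Euclidean distance between two equal-length vector sequences; an
    optional per-dimension weight scales the squared difference of the
    corresponding term."""
    weights = weights or [1.0] * len(a)
    return math.sqrt(sum(w * (x - y) ** 2
                         for x, y, w in zip(a, b, weights)))


def portrait_similarity(first_seq, third_seq, second_seq, fourth_seq):
    # first/second sequences come from the client portrait,
    # third/fourth from the candidate video's portrait
    first_similarity = weighted_euclidean(first_seq, third_seq)
    second_similarity = weighted_euclidean(second_seq, fourth_seq)
    return first_similarity + second_similarity  # one embodiment: simple sum


d = weighted_euclidean([0.0, 0.0], [3.0, 4.0])
s = portrait_similarity([0.0, 0.0], [3.0, 4.0], [0.0], [2.0])
```

Cosine similarity could be substituted for `weighted_euclidean` as the alternative embodiment describes, without changing the surrounding structure.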
Further, before the step S204, the method may further include: for each candidate video, acquiring multi-dimensional evaluation information of the candidate video, wherein the multi-dimensional evaluation information comprises publisher information, audio information, image information and content information; for each type of evaluation information, processing the evaluation information through a preset evaluation model to obtain an evaluation score of the evaluation information; and calculating the video scores of the candidate videos according to the evaluation scores.
Specifically, when scoring a candidate video, multi-dimensional evaluation information of the candidate video is first acquired, including publisher information, audio information, image information and content information. The publisher information may be information related to the publisher/producer of the candidate video, such as the video scores of videos the publisher published in the past and the heat data of those videos. The audio information includes the audio quality of the candidate video (e.g., clarity, noise conditions, etc.) and the rating of the music used (which may be taken from external music rating tables), and so on. The image information may be the cover image of the candidate video and screenshots from within the video. The content information may be text information such as the speech and subtitles in the candidate video.
For each kind of evaluation information, the evaluation information is processed through a preset evaluation model; the evaluation model may be a rule model, an artificial-intelligence-based model, or a combination of the two. Different kinds of evaluation information are processed by different evaluation models, each outputting an evaluation score. For example, for publisher information and audio information, the evaluation score may be calculated using a rule model; for image information, the image quality (such as image clarity) can be detected and whether sensitive content appears in the image can be identified through an artificial intelligence model, with a corresponding evaluation score generated from the detection results; and the content information is subjected to text content detection and scoring through an artificial intelligence model to obtain its evaluation score.
And finally, calculating the video scores of the candidate videos according to the evaluation scores, for example, the evaluation scores of various evaluation information can be added to obtain the video scores of the candidate videos.
In this embodiment, multi-dimensional evaluation information of candidate videos is obtained, including publisher information, audio information, image information and content information; and processing each piece of evaluation information through a corresponding evaluation model to obtain evaluation scores, and calculating video scores of the candidate videos according to each evaluation score, so that the candidate videos are comprehensively scored from multiple dimensions, and the accuracy of the video scores is ensured.
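Dispatching each kind of evaluation information to its own evaluation model and summing the resulting scores can be sketched as below; the evaluator functions are simplified stand-ins for the rule models and AI models described above, and the field names are hypothetical:

```python
def video_score(evaluation_info, evaluators):
    """Process each kind of evaluation information with its own evaluation
    model and add the evaluation scores together (one embodiment simply
    sums them)."""
    return sum(evaluators[kind](info)
               for kind, info in evaluation_info.items())


# Stand-in evaluators; real ones would be rule models or AI models.
evaluators = {
    "publisher": lambda info: min(info["past_avg_score"], 10.0),
    "audio":     lambda info: 10.0 if info["clear"] else 4.0,
}
score = video_score(
    {"publisher": {"past_avg_score": 8.0}, "audio": {"clear": True}},
    evaluators)
```

Registering one evaluator per information kind keeps the dimensions independent, so a new dimension (e.g. image information) only requires adding one entry to the dictionary.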
Further, the step S205 may include: according to a preset mapping rule, mapping the similarity and video score corresponding to each candidate video respectively to obtain standard similarity and standard video score; and calculating the recommendation score of each candidate video for the client according to the standard similarity and the standard video score of each candidate video.
Specifically, the similarity and the video score corresponding to each candidate video are two different kinds of numerical information, so mapping processing needs to be performed on them according to a preset mapping rule. The mapping processing can be understood as converting the similarity/video score into a preset interval, obtaining the standard similarity and the standard video score.
According to the standard similarity and the standard video score of each candidate video, the recommendation score of each candidate video for the client can be calculated, for example, the standard similarity and the standard video score are directly added or weighted and summed to obtain the recommendation score.
In this embodiment, mapping processing is performed on the similarity and the video score corresponding to each candidate video to obtain a standard similarity and a standard video score, and a recommendation score of each candidate video for a client is calculated according to the standard similarity and the standard video score, so as to realize evaluation calculation of whether the candidate video is worth pushing to the client.
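A sketch of this mapping and scoring, assuming min-max scaling as the preset mapping rule; the value ranges and the weight are illustrative choices, not values fixed by the application:

```python
def to_standard(value, lo, hi, target=(0.0, 1.0)):
    """Map a raw value from its range [lo, hi] into a preset target
    interval (min-max scaling), so that similarity and video score become
    directly comparable."""
    if hi == lo:
        return target[0]
    ratio = (value - lo) / (hi - lo)
    return target[0] + ratio * (target[1] - target[0])


def recommendation_score(similarity, video_score,
                         sim_range, score_range, sim_weight=0.5):
    """Weighted sum of the standard similarity and the standard video
    score (sim_weight = 0.5 reduces to direct addition up to a constant
    factor)."""
    std_sim = to_standard(similarity, *sim_range)
    std_score = to_standard(video_score, *score_range)
    return sim_weight * std_sim + (1.0 - sim_weight) * std_score


r = recommendation_score(5.0, 80.0, sim_range=(0.0, 10.0),
                         score_range=(0.0, 100.0))
```

The candidate videos can then be ranked by `recommendation_score`, and the top-ranked ones selected as target videos for recommendation.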
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by computer readable instructions stored in a computer readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 7, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a video recommendation apparatus based on client data, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus is specifically applicable to various electronic devices.
As shown in fig. 7, the video recommendation device 300 based on client data according to the present embodiment includes: a customer representation acquisition module 301, a video representation acquisition module 302, a similarity calculation module 303, a video score acquisition module 304, a recommendation score calculation module 305, and a selection recommendation module 306, wherein:
the client portrait acquisition module 301 is configured to acquire a client portrait of a client, where the client portrait includes an interest tag and a viewing habit tag, and the viewing habit tag is generated based on viewing habit data of the client on a historical video.
The video portrait acquisition module 302 is configured to acquire video portraits of candidate videos, where each video portrait includes a content tag and a play style tag, and the content tag corresponds to an interest tag and the play style tag corresponds to a viewing habit tag.
And the similarity calculation module 303 is used for calculating the similarity between the client portrait and the video images of the candidate videos respectively.
The video score obtaining module 304 is configured to obtain a video score of each candidate video.
The recommendation score calculating module 305 is configured to calculate a recommendation score of each candidate video according to the similarity and the video score corresponding to each candidate video.
The selecting recommending module 306 is configured to select a target video from the candidate videos according to the obtained recommendation score, and perform video recommendation on the client according to the selected target video.
In this embodiment, a client portrait of the client is obtained; the client portrait includes an interest tag and a viewing habit tag, the viewing habit tag being generated based on the client's viewing habit data on historical videos, so that both the characteristics of the videos and the client's browsing habits are captured, and the client's video preferences can be recorded more comprehensively. Video portraits of the candidate videos are obtained; each video portrait includes a content tag and a playing style tag, where the content tag corresponds to the interest tag and the playing style tag corresponds to the viewing habit tag, describing the characteristics of the candidate video. The similarity between the client portrait and the video portrait of each candidate video is calculated; the higher the similarity, the closer the candidate video is to the client's preference. Each candidate video has a video score describing its quality, and a recommendation score is calculated from the similarity and video score of each candidate video, expressing as a numerical value the candidate video's recommendation value to the client. Target videos are selected from the candidate videos according to the recommendation scores, and video recommendation is performed for the client according to the target videos. The application considers the client's viewing habits and the video quality, so the evaluation factors are more comprehensive, thereby improving the accuracy of video recommendation.
In some optional implementations of the present embodiment, the video recommendation device 300 based on the client data may further include: the system comprises a history acquisition module and a client portrait generation module, wherein:
the history acquisition module is used for acquiring the viewing history data of the client on the history video, wherein the viewing history data comprises content tags and video style tags of the history video, play sources of the history video, operation behavior data and viewing habit data of the client on the history video.
And the client portrait generation module is used for generating client portraits of clients according to the viewing history data.
In this embodiment, viewing history data of a client on a history video is obtained, including content tags and video style tags of the history video, play sources of the history video, operation behavior data and viewing habit data of the client on the history video, and viewing of the client on the history video is recorded from multiple dimensions, so that accuracy of client portraits generated according to the viewing history data is ensured.
In some alternative implementations of the present embodiment, the customer representation generation module may include: the system comprises a tendency label generation sub-module, a clustering sub-module, a weight adding sub-module and a portrait generation sub-module, wherein:
And the tendency tag generation sub-module is used for generating tendency tags reflecting the client's subjective attitude toward the historical videos according to the operation behavior data.
And the clustering sub-module is used for clustering the content tags, the video style tags and the viewing habit data to obtain interest tags and viewing habit tags.
The weight adding sub-module is used for adding weights to the interest tags and the viewing habit tags according to the trend tags and the play sources of the historical videos corresponding to the interest tags and the viewing habit tags, and the added weights are used for calculating the similarity between the customer portrait and the video portraits of the candidate videos.
And the portrait generation sub-module is used for generating a customer portrait of the customer according to the interest tags, the viewing habit tags and the corresponding weights.
In this embodiment, a tendency tag reflecting the client's subjective attitude toward the historical videos is generated according to the operation behavior data; the content tags, the video style tags and the viewing habit data are clustered to obtain interest tags and viewing habit tags; weights are added to the interest tags and the viewing habit tags according to the tendency tags and the play sources of the historical videos corresponding to them, so as to distinguish the importance of different tags; and a client portrait describing the client is generated according to the interest tags, the viewing habit tags and their corresponding weights, for subsequent video recommendation.
In some alternative implementations of the present embodiment, the similarity calculation module 303 may include: the first building sub-module, the second building sub-module and the combined input sub-module, wherein:
a first construction sub-module, for constructing, for the video portrait of each candidate video, the interest tag in the client portrait and the content tag in the video portrait as a first tag combination.

And the second construction sub-module is used for constructing the viewing habit tag in the client portrait and the playing style tag in the video portrait as a second tag combination.
And the combination input sub-module is used for inputting the first label combination and the second label combination into the portrait assessment model to obtain the similarity between the client portrait and the video portrait.
In this embodiment, for the video portrait of each candidate video, the interest tag in the client portrait and the content tag in the video portrait are constructed as a first tag combination, and the viewing habit tag in the client portrait and the playing style tag in the video portrait are constructed as a second tag combination; the resulting tag combinations are input into a portrait assessment model to calculate the similarity between the client portrait and the video portrait from the model.
In some optional implementations of the present embodiment, the similarity calculation module 303 may further include: customer portrait conversion submodule, video portrait conversion submodule, first computation submodule, second computation submodule and similarity computation submodule, wherein:
And the client portrait conversion sub-module is used for converting interest labels in the client portrait into a first sequence and converting viewing habit labels in the client portrait into a second sequence.
A video portrait conversion sub-module, for converting, for the video portrait of each candidate video, the content tag in the video portrait into a third sequence and the playing style tag in the video portrait into a fourth sequence.
The first calculation sub-module is used for calculating a first Euclidean distance between the first sequence and the third sequence, and setting the first Euclidean distance as a first similarity.
And the second computing sub-module is used for computing a second Euclidean distance between the second sequence and the fourth sequence and setting the second Euclidean distance as a second similarity.
And the similarity calculation submodule is used for calculating the similarity between the client portrait and the video portrait according to the first similarity and the second similarity.
In this embodiment, the tags in the portraits are converted into vector sequences, and the Euclidean distances between the vector sequences are calculated to realize a quantitative evaluation of the similarity between the client portrait and the video portrait.
In some optional implementations of the present embodiment, the video recommendation device 300 based on the client data may include: the system comprises an information acquisition module, an evaluation score calculation module and a scoring calculation module, wherein:
And the information acquisition module is used for acquiring multi-dimensional evaluation information of the candidate videos for each candidate video, wherein the multi-dimensional evaluation information comprises publisher information, audio information, image information and content information.
And the evaluation score calculation module is used for processing the evaluation information through a preset evaluation model for each type of evaluation information to obtain the evaluation score of the evaluation information.
And the scoring calculation module is used for calculating the video scores of the candidate videos according to the evaluation scores.
In this embodiment, multi-dimensional evaluation information of candidate videos is obtained, including publisher information, audio information, image information and content information; and processing each piece of evaluation information through a corresponding evaluation model to obtain evaluation scores, and calculating video scores of the candidate videos according to each evaluation score, so that the candidate videos are comprehensively scored from multiple dimensions, and the accuracy of the video scores is ensured.
In some alternative implementations of the present embodiment, the recommendation score calculation module 305 may include: a mapping processing sub-module and a scoring computation sub-module, wherein:
and the mapping processing sub-module is used for respectively mapping the similarity and the video score corresponding to each candidate video according to a preset mapping rule to obtain the standard similarity and the standard video score.
And the score calculating sub-module is used for calculating the recommendation score of each candidate video for the client according to the standard similarity and the standard video score of each candidate video.
In this embodiment, mapping processing is performed on the similarity and the video score corresponding to each candidate video to obtain a standard similarity and a standard video score, and a recommendation score of each candidate video for a client is calculated according to the standard similarity and the standard video score, so as to realize evaluation calculation of whether the candidate video is worth pushing to the client.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 8, fig. 8 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 communicatively connected to each other via a system bus. It should be noted that only a computer device 4 having components 41-43 is shown in the figures, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device herein is a device capable of automatically performing numerical calculations and/or information processing in accordance with predetermined or stored instructions, the hardware of which includes, but is not limited to, microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, etc.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or a memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card or the like provided on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit of the computer device 4 and an external storage device. In this embodiment, the memory 41 is typically used for storing the operating system and various application software installed on the computer device 4, such as computer readable instructions of the video recommendation method based on client data. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or process data, such as computer readable instructions for executing the video recommendation method based on the client data.
The network interface 43 may comprise a wireless network interface or a wired network interface, which network interface 43 is typically used for establishing a communication connection between the computer device 4 and other electronic devices.
The computer device provided in this embodiment may perform the video recommendation method based on the client data. The video recommendation method based on the client data herein may be the video recommendation method based on the client data of the above-described respective embodiments.
In this embodiment, a client portrait of the client is obtained; the client portrait includes an interest tag and a viewing habit tag, the viewing habit tag being generated based on the client's viewing habit data on historical videos, so that both the characteristics of the videos and the client's browsing habits are captured, and the client's video preferences can be recorded more comprehensively. Video portraits of the candidate videos are obtained; each video portrait includes a content tag and a playing style tag, where the content tag corresponds to the interest tag and the playing style tag corresponds to the viewing habit tag, describing the characteristics of the candidate video. The similarity between the client portrait and the video portrait of each candidate video is calculated; the higher the similarity, the closer the candidate video is to the client's preference. Each candidate video has a video score describing its quality, and a recommendation score is calculated from the similarity and video score of each candidate video, expressing as a numerical value the candidate video's recommendation value to the client. Target videos are selected from the candidate videos according to the recommendation scores, and video recommendation is performed for the client according to the target videos. The application considers the client's viewing habits and the video quality, so the evaluation factors are more comprehensive, thereby improving the accuracy of video recommendation.
The present application also provides another embodiment, namely, a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of a video recommendation method based on client data as described above.
In this embodiment, a client portrait of the client is obtained; the client portrait includes an interest tag and a viewing habit tag, the viewing habit tag being generated based on the client's viewing habit data on historical videos, so that both the characteristics of the videos and the client's browsing habits are captured, and the client's video preferences can be recorded more comprehensively. Video portraits of the candidate videos are obtained; each video portrait includes a content tag and a playing style tag, where the content tag corresponds to the interest tag and the playing style tag corresponds to the viewing habit tag, describing the characteristics of the candidate video. The similarity between the client portrait and the video portrait of each candidate video is calculated; the higher the similarity, the closer the candidate video is to the client's preference. Each candidate video has a video score describing its quality, and a recommendation score is calculated from the similarity and video score of each candidate video, expressing as a numerical value the candidate video's recommendation value to the client. Target videos are selected from the candidate videos according to the recommendation scores, and video recommendation is performed for the client according to the target videos. The application considers the client's viewing habits and the video quality, so the evaluation factors are more comprehensive, thereby improving the accuracy of video recommendation.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone, though in many cases the former is preferable. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods of the embodiments of the present application.
It is apparent that the above-described embodiments are only some embodiments of the present application, not all of them; the preferred embodiments of the application are shown in the drawings, which do not limit the scope of the claims. This application may be embodied in many different forms; these embodiments are provided so that the disclosure will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their features replaced by equivalents. All equivalent structures made using the contents of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of protection of the application.
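The overall flow of the method described in the embodiments above can be sketched as follows. This is a minimal illustration only: the Jaccard tag similarity, the equal 0.5/0.5 weights, and all tag and field names are assumptions standing in for the portrait-matching and quality-scoring steps of the application, not the application's actual models.

```python
# Sketch of the recommendation flow: match a client portrait against video
# portraits, blend similarity with a video quality score, pick top videos.
# The Jaccard similarity, the 0.5/0.5 weights, and all example tags are
# illustrative assumptions.

def tag_similarity(client_tags, video_tags):
    """Jaccard overlap between a client tag set and a video tag set."""
    if not client_tags or not video_tags:
        return 0.0
    return len(client_tags & video_tags) / len(client_tags | video_tags)

def recommend(client_portrait, candidates, top_k=2):
    scored = []
    for video in candidates:
        interest_sim = tag_similarity(client_portrait["interest"], video["content"])
        habit_sim = tag_similarity(client_portrait["habit"], video["style"])
        similarity = 0.5 * interest_sim + 0.5 * habit_sim
        # Recommendation score blends portrait similarity with video quality.
        score = 0.5 * similarity + 0.5 * video["quality"]
        scored.append((score, video["id"]))
    scored.sort(reverse=True)
    return [vid for _, vid in scored[:top_k]]

client = {"interest": {"sports", "news"}, "habit": {"short", "fast-cut"}}
videos = [
    {"id": "v1", "content": {"sports"}, "style": {"short"}, "quality": 0.9},
    {"id": "v2", "content": {"cooking"}, "style": {"long"}, "quality": 0.8},
    {"id": "v3", "content": {"news", "sports"}, "style": {"short", "fast-cut"}, "quality": 0.6},
]
print(recommend(client, videos))  # → ['v3', 'v1']
```

Note that v3 outranks v1 despite its lower quality score because its portrait match is perfect, which is exactly the trade-off the recommendation score is meant to express.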

Claims (10)

1. A video recommendation method based on client data, comprising the steps of:
acquiring a client portrait of a client, wherein the client portrait comprises an interest tag and a viewing habit tag, and the viewing habit tag is generated based on the client's viewing habit data on historical videos;
obtaining a video portrait of each candidate video, wherein each video portrait comprises a content tag and a playing style tag, the content tag corresponding to the interest tag and the playing style tag corresponding to the viewing habit tag;
respectively calculating the similarity between the client portrait and the video portrait of each candidate video;
obtaining a video score of each candidate video;
calculating a recommendation score of each candidate video according to the similarity and the video score corresponding to the candidate video;
and selecting target videos from the candidate videos according to the obtained recommendation scores, and performing video recommendation to the client according to the selected target videos.
2. The video recommendation method based on client data according to claim 1, further comprising, prior to the step of acquiring a client portrait of the client:
obtaining viewing history data of the client on historical videos, wherein the viewing history data comprises content tags and video style tags of the historical videos, playing sources of the historical videos, operation behavior data, and the client's viewing habit data on the historical videos;
and generating the client portrait of the client according to the viewing history data.
3. The video recommendation method based on client data according to claim 2, wherein the step of generating the client portrait of the client according to the viewing history data comprises:
generating tendency tags of the client for the historical videos according to the operation behavior data;
clustering the content tags, the video style tags, and the viewing habit data to obtain interest tags and viewing habit tags;
adding weights to the interest tags and the viewing habit tags according to the tendency tags and playing sources of the historical videos corresponding to the interest tags and the viewing habit tags, wherein the added weights are used in calculating the similarity between the client portrait and the video portrait of each candidate video;
and generating the client portrait of the client according to the interest tags, the viewing habit tags, and the corresponding weights.
4. The video recommendation method based on client data according to claim 1, wherein the step of respectively calculating the similarity between the client portrait and the video portrait of each candidate video comprises:
for each candidate video portrait, constructing the interest tag in the client portrait and the content tag in the video portrait as a first tag combination;
constructing the viewing habit tag in the client portrait and the playing style tag in the video portrait as a second tag combination;
and inputting the first tag combination and the second tag combination into a portrait evaluation model to obtain the similarity between the client portrait and the video portrait.
5. The video recommendation method based on client data according to claim 1, wherein the step of respectively calculating the similarity between the client portrait and the video portrait of each candidate video comprises:
converting the interest tag in the client portrait into a first sequence and the viewing habit tag in the client portrait into a second sequence;
for each candidate video portrait, converting the content tag in the video portrait into a third sequence and the playing style tag in the video portrait into a fourth sequence;
calculating a first Euclidean distance between the first sequence and the third sequence, and setting the first Euclidean distance as a first similarity;
calculating a second Euclidean distance between the second sequence and the fourth sequence, and setting the second Euclidean distance as a second similarity;
and calculating the similarity between the client portrait and the video portrait according to the first similarity and the second similarity.
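The sequence-based similarity of claim 5 can be sketched as follows. The tag-to-vector encoding, the 1/(1+d) distance-to-similarity mapping, and the equal 0.5 weights are illustrative assumptions; the claim itself only fixes the two Euclidean distances over the four sequences.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length tag-weight sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def portrait_similarity(interest_seq, habit_seq, content_seq, style_seq):
    # First similarity: distance between the interest and content sequences.
    d1 = euclidean(interest_seq, content_seq)
    # Second similarity: distance between the habit and playing style sequences.
    d2 = euclidean(habit_seq, style_seq)
    # Combine: a smaller distance yields a higher similarity. The 1/(1+d)
    # mapping and the equal 0.5 weights are illustrative assumptions.
    return 0.5 / (1.0 + d1) + 0.5 / (1.0 + d2)

sim = portrait_similarity([1.0, 0.0], [0.5, 0.5], [1.0, 0.0], [0.5, 0.5])
print(sim)  # identical sequences give the maximum value 1.0
```

Identical client and video sequences give zero distance in both dimensions and hence the maximum combined similarity.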
6. The video recommendation method based on client data according to claim 1, further comprising, prior to the step of obtaining a video score of each candidate video:
for each candidate video, acquiring multi-dimensional evaluation information of the candidate video, wherein the multi-dimensional evaluation information comprises publisher information, audio information, image information, and content information;
for each type of evaluation information, processing the evaluation information through a preset evaluation model to obtain an evaluation score of the evaluation information;
and calculating the video score of the candidate video according to the obtained evaluation scores.
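The aggregation step of claim 6 can be sketched as a weighted average. The per-dimension "preset evaluation models" are stood in for by precomputed scores, and the dimension weights are illustrative assumptions; the claim does not fix a particular aggregation formula.

```python
# Sketch of multi-dimensional video scoring: combine per-dimension
# evaluation scores (publisher, audio, image, content) into one video
# score. The weights below are illustrative assumptions.

EVAL_WEIGHTS = {"publisher": 0.2, "audio": 0.2, "image": 0.3, "content": 0.3}

def video_score(evaluation_scores):
    """Weighted average of per-dimension evaluation scores in [0, 1]."""
    total = sum(EVAL_WEIGHTS[dim] * score for dim, score in evaluation_scores.items())
    return total / sum(EVAL_WEIGHTS[dim] for dim in evaluation_scores)

scores = {"publisher": 0.8, "audio": 0.6, "image": 0.9, "content": 0.7}
print(video_score(scores))  # ≈ 0.76
```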
7. The video recommendation method based on client data according to claim 1, wherein the step of calculating a recommendation score of each candidate video according to the similarity and the video score corresponding to the candidate video comprises:
mapping the similarity and the video score corresponding to each candidate video respectively according to a preset mapping rule to obtain a standard similarity and a standard video score;
and calculating the recommendation score of each candidate video for the client according to the standard similarity and the standard video score of the candidate video.
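The mapping step of claim 7 puts similarities and video scores, which may live on different scales, onto a common scale before combining them. The sketch below assumes min-max normalization for the "preset mapping rule" and a 0.6/0.4 blend for the final score; both are illustrative choices, not specified by the claim.

```python
# Sketch of claim 7: map raw similarities and raw video scores onto a
# common [0, 1] scale, then blend them into recommendation scores.
# Min-max normalization and the 0.6/0.4 weights are assumptions.

def min_max(values):
    """Map a list of values onto [0, 1]; constant lists map to all 1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def recommendation_scores(similarities, video_scores):
    std_sim = min_max(similarities)      # standard similarities
    std_score = min_max(video_scores)    # standard video scores
    return [0.6 * s + 0.4 * q for s, q in zip(std_sim, std_score)]

print(recommendation_scores([0.2, 0.8, 0.5], [60.0, 40.0, 80.0]))
# ≈ [0.2, 0.6, 0.7]
```

Without the mapping, a video score on a 0-100 scale would dominate a similarity on a 0-1 scale, which is why the claim standardizes both quantities first.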
8. A video recommendation device based on client data, comprising:
a client portrait acquisition module, configured to acquire a client portrait of a client, wherein the client portrait comprises an interest tag and a viewing habit tag, and the viewing habit tag is generated based on the client's viewing habit data on historical videos;
a video portrait acquisition module, configured to acquire a video portrait of each candidate video, wherein each video portrait comprises a content tag and a playing style tag, the content tag corresponding to the interest tag and the playing style tag corresponding to the viewing habit tag;
a similarity calculation module, configured to respectively calculate the similarity between the client portrait and the video portrait of each candidate video;
a video score acquisition module, configured to obtain a video score of each candidate video;
a recommendation score calculation module, configured to calculate a recommendation score of each candidate video according to the similarity and the video score corresponding to the candidate video;
and a selection and recommendation module, configured to select target videos from the candidate videos according to the obtained recommendation scores and to perform video recommendation to the client according to the selected target videos.
9. A computer device comprising a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, implement the steps of the video recommendation method based on client data according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor, implement the steps of the video recommendation method based on client data according to any one of claims 1 to 7.
CN202310572613.1A 2023-05-19 2023-05-19 Video recommendation method and device based on client data and computer equipment Active CN116821475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310572613.1A CN116821475B (en) 2023-05-19 2023-05-19 Video recommendation method and device based on client data and computer equipment

Publications (2)

Publication Number Publication Date
CN116821475A true CN116821475A (en) 2023-09-29
CN116821475B CN116821475B (en) 2024-02-02

Family

ID=88111827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310572613.1A Active CN116821475B (en) 2023-05-19 2023-05-19 Video recommendation method and device based on client data and computer equipment

Country Status (1)

Country Link
CN (1) CN116821475B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107888950A (en) * 2017-11-09 2018-04-06 福州瑞芯微电子股份有限公司 A kind of method and system for recommending video
CN109151500A (en) * 2018-09-29 2019-01-04 北京数美时代科技有限公司 A kind of main broadcaster's recommended method, system and computer equipment for net cast
CN110941740A (en) * 2019-11-08 2020-03-31 腾讯科技(深圳)有限公司 Video recommendation method and computer-readable storage medium
CN113569135A (en) * 2021-06-30 2021-10-29 深圳市东信时代信息技术有限公司 User portrait based recommendation method and device, computer equipment and storage medium
WO2022037011A1 (en) * 2020-08-20 2022-02-24 连尚(新昌)网络科技有限公司 Method and device for providing video information
CN114282054A (en) * 2020-09-28 2022-04-05 苏宁云计算有限公司 Video recommendation method and device, computer equipment and storage medium
CN114339417A (en) * 2021-12-30 2022-04-12 未来电视有限公司 Video recommendation method, terminal device and readable storage medium
CN114419501A (en) * 2022-01-11 2022-04-29 平安普惠企业管理有限公司 Video recommendation method and device, computer equipment and storage medium
CN114996517A (en) * 2022-07-07 2022-09-02 黄军 Big data-based teaching video recommendation method and device and computer equipment
CN116055809A (en) * 2022-12-31 2023-05-02 企知道科技有限公司 Video information display method, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHUMEET BALUJA et al.: "Video suggestion and discovery for youtube: taking random walks through the view graph", WWW '08: Proceedings of the 17th International Conference on World Wide Web, page 895 *
CHU SHANSHAN: "Research and Application of Recommendation Algorithms for Video Websites", China Master's Theses Full-text Database, Information Science and Technology, no. 10, pages 138-966 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119231A (en) * 2023-10-13 2023-11-24 深圳市知酷信息技术有限公司 Video charging regulation and control system based on block chain
CN117651165A (en) * 2023-10-20 2024-03-05 广州太棒了传媒科技有限公司 Video recommendation method and device based on client data
CN117651165B (en) * 2023-10-20 2024-05-24 力恒信息科技(广州)有限公司 Video recommendation method and device based on client data

Also Published As

Publication number Publication date
CN116821475B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
US11644947B1 (en) Graphical user interfaces and systems for presenting content summaries
CN109819284B (en) Short video recommendation method and device, computer equipment and storage medium
CN110020411B (en) Image-text content generation method and equipment
JP6745384B2 (en) Method and apparatus for pushing information
CN116821475B (en) Video recommendation method and device based on client data and computer equipment
US20170097984A1 (en) Method and system for generating a knowledge representation
US11200241B2 (en) Search query enhancement with context analysis
CN111818370B (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
US20190332605A1 (en) Methods, systems and techniques for ranking blended content retrieved from multiple disparate content sources
CN108959323B (en) Video classification method and device
WO2020215977A1 (en) System, method and device for displaying information
CN113781149B (en) Information recommendation method and device, computer readable storage medium and electronic equipment
US8725795B1 (en) Content segment optimization techniques
CN112040339A (en) Method and device for making video data, computer equipment and storage medium
US20200057821A1 (en) Generating a platform-based representative image for a digital video
US20160371598A1 (en) Unified attractiveness prediction framework based on content impact factor
US11880423B2 (en) Machine learned curating of videos for selection and display
CN115964520A (en) Metadata tag identification
CN112035740B (en) Project use time length prediction method, device, equipment and storage medium
CN110555135A (en) Content recommendation method, content recommendation device and electronic equipment
CN109408725B (en) Method and apparatus for determining user interest
CN114119078A (en) Target resource determination method, device, electronic equipment and medium
CN110555131B (en) Content recommendation method, content recommendation device and electronic equipment
CN111753107A (en) Resource display method, device, equipment and storage medium
US11907508B1 (en) Content analytics as part of content creation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 3105, No. 490 Tianhe Road, Tianhe District, Guangzhou City, Guangdong Province, 510000

Patentee after: Guangzhou Honey Network Technology Co.,Ltd.

Country or region after: China

Address before: Room 101, Room A63, 1st Floor, No. 1023 Gaopu Road, Tianhe District, Guangzhou City, Guangdong Province, 510000 (office only)

Patentee before: Guangzhou Honey Network Technology Co.,Ltd.

Country or region before: China
