CN111078944B - Video content heat prediction method and device - Google Patents



Publication number
CN111078944B
Authority
CN
China
Legal status: Active
Application number
CN201811214009.7A
Other languages
Chinese (zh)
Other versions
CN111078944A (en)
Inventor
陈步华
梁洁
陈戈
庄一嵘
唐宏
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN201811214009.7A priority Critical patent/CN111078944B/en
Publication of CN111078944A publication Critical patent/CN111078944A/en
Application granted granted Critical
Publication of CN111078944B publication Critical patent/CN111078944B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The disclosure provides a method and a device for predicting video content heat, and relates to the field of data communication. The method comprises the following steps: determining a bullet screen emotion quantization value, a comment emotion quantization value and a content mode correlation coefficient value of video content; determining a heat loss function of the video content; and predicting the heat value of the video content based on the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function. Using these three values together with the heat loss function, the method can accurately predict the change trend of video content heat, so that video content with a high heat value can be cached at CDN nodes and recommended to users in time.

Description

Video content heat prediction method and device
Technical Field
The present disclosure relates to the field of data communication, and in particular, to a method and an apparatus for predicting popularity of video content.
Background
A video service is a new public-facing service built on the broadband internet and the mobile internet; it is a multimedia interactive service technology integrating images, data and the like.
A CDN (Content Delivery Network) is the carrier network for video services; it is built on broadband or mobile networks to provide large-scale streaming service for video. A CDN is generally deployed hierarchically: the central node stores the full content, while regional cache nodes and edge nodes store partial content, with the edge cache nodes storing the least.
Because an edge CDN node has limited cache space and stores only a small amount of content, the edge cache node can keep only high-heat content in its cache, thereby reducing back-to-source traffic and improving quality of service.
Since video content files are very large, updating or replacing video content takes much longer than for a CDN serving web pages or small files. In addition, because the CDN server's cache space is limited, only high-heat content can be kept in the cache, which reduces back-to-source traffic and improves quality of service. Likewise, if the recommendation mechanism of a video client correctly guesses the content a user likes (the hot content the user is willing to access) and recommends it, the user's quality of service improves, user traffic increases, and the client's brand and economic benefit are enhanced.
Existing CDN heat algorithms are based on statistics of past service data, and it is very difficult for them to accurately predict heat changes of content, so the quality of service for some hot film sources is low.
Disclosure of Invention
The technical problem to be solved by the present disclosure is to provide a method and an apparatus for predicting video content heat, which can accurately predict the change trend of video content.
According to an aspect of the present disclosure, a method for predicting video content heat is provided, including: determining a bullet screen emotion quantization value, a comment emotion quantization value and a content mode correlation coefficient value of video content; determining a heat loss function of the video content; and predicting the heat value of the video content based on the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function.
Optionally, predicting the heat value of the video content based on the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value, and the heat loss function comprises: respectively determining adjustment coefficients of the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value; calculating a weighted sum of the three values; and predicting the heat value of the video content based on the product of the weighted sum, the existing heat value of the video content, and the heat loss function.
Optionally, determining the bullet screen emotion quantization value and the comment emotion quantization value of the video content includes: acquiring the bullet screen content and comment content of the video content; determining the bullet screen emotion quantization value using a deep-learning natural language processing engine based on the bullet screen content; and determining the comment emotion quantization value using the deep-learning natural language processing engine based on the comment content.
Optionally, determining the content mode correlation coefficient value comprises: acquiring the user's viewing behavior information for video content modes; and determining the content mode correlation coefficient value using a deep-learning correlation measurement engine based on the viewing behavior information.
Optionally, determining the heat loss function of the video content comprises: determining the time at which the video content was first requested; and determining the heat loss function based on the time of the first request, the current time, and a cooling factor.
Optionally, the heat loss function is

f(t) = e^{-b(t - t_0)}

where t_0 is the time at which the video content was first requested, t is the current time, and b is the cooling factor.
According to another aspect of the present disclosure, there is also provided a video content popularity prediction apparatus, including: a prediction parameter determination unit configured to determine a bullet screen emotion quantization value, a comment emotion quantization value and a content mode correlation coefficient value of the video content; a heat loss function determination unit configured to determine a heat loss function of the video content; and a video heat prediction unit configured to predict the heat value of the video content based on the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function.
Optionally, the video heat prediction unit is configured to determine adjustment coefficients of the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value respectively; calculate a weighted sum of the three values; and predict the heat value of the video content based on the product of the weighted sum, the existing heat value of the video content, and the heat loss function.
Optionally, the prediction parameter determination unit includes: a prediction parameter acquisition module configured to acquire the bullet screen content and comment content of the video content; and a deep learning quantization module configured to determine the bullet screen emotion quantization value using a deep-learning natural language processing engine based on the bullet screen content, and to determine the comment emotion quantization value using the deep-learning natural language processing engine based on the comment content.
Optionally, the prediction parameter acquisition module is further configured to acquire the user's viewing behavior information for video content modes; the deep learning quantization module is further configured to determine the content mode correlation coefficient value using a deep-learning correlation measurement engine based on the viewing behavior information.
Optionally, the heat loss function determining unit is configured to determine a time at which the video content is requested for the first time; a heat loss function is determined based on the time of the first request for video content, the current time, and the cooling factor.
Optionally, the heat loss function is

f(t) = e^{-b(t - t_0)}

where t_0 is the time at which the video content was first requested, t is the current time, and b is the cooling factor.
According to another aspect of the present disclosure, there is also provided a video content popularity prediction apparatus, including: a memory; and a processor coupled to the memory, the processor configured to perform the video content heat prediction method as described above based on instructions stored in the memory.
According to another aspect of the present disclosure, a computer-readable storage medium is also proposed, on which computer program instructions are stored, which instructions, when executed by a processor, implement the steps of the video content heat prediction method described above.
Compared with the prior art, by using the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function, the present disclosure can accurately predict the change trend of video content, so that video content with a high heat value can be cached at CDN nodes and recommended to users in time.
Other features of the present disclosure and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flowchart illustrating a video content heat prediction method according to an embodiment of the disclosure.
Fig. 2 is a flowchart illustrating a video content heat prediction method according to another embodiment of the disclosure.
FIG. 3 is a schematic diagram of an emotion analysis framework of a convolutional neural network according to the present disclosure.
Fig. 4 is a schematic structural diagram of an embodiment of a video content heat prediction apparatus according to the present disclosure.
Fig. 5 is a schematic structural diagram of another embodiment of the video content heat prediction apparatus according to the present disclosure.
Fig. 6 is a schematic structural diagram of a video content heat prediction apparatus according to still another embodiment of the disclosure.
Fig. 7 is a schematic structural diagram of a video content heat prediction apparatus according to still another embodiment of the disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a flowchart illustrating a video content heat prediction method according to an embodiment of the disclosure.
In step 110, the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value of the video content are determined. The higher the bullet screen and comment emotion quantization values, the more the user likes the video content and the higher its popularity; the content mode correlation coefficient value reflects the correlation between items of content. In this embodiment, the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value are used as prediction parameters for predicting the heat value.
At step 120, a heat loss function for the video content is determined. Some videos or files cool down over time, so a heat loss function needs to be introduced.
In step 130, the heat value of the video content is predicted based on the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function. For example, different adjustment coefficients are set for the three values, their weighted sum is computed, and the heat value of the video content is determined as the product of the weighted sum, the existing heat value of the video content, and the heat loss function.
In this embodiment, using the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function, the change trend of video content can be accurately predicted, so that video content with a high heat value can be cached at CDN nodes and recommended to users in time.
Fig. 2 is a flowchart illustrating a video content heat prediction method according to another embodiment of the disclosure.
In step 210, the bullet screen content and comment content of the video content, as well as the user's viewing behavior information for video content modes, are obtained. For example, the bullet screen content and comment content may be collected by a crawler.
In step 220, the bullet screen emotion quantization value and the comment emotion quantization value are determined by a deep-learning natural language processing engine, and the content mode correlation coefficient value is determined by a deep-learning correlation measurement engine.
When a video is played, the bullet screen messages scrolling across the picture and the replies in the video's comment area reflect users' preference for the content: the more positive the evaluations, the higher the heat. For example, the bullet screen emotion quantization value and the comment emotion quantization value may be represented as real numbers in [0, 1], where 0 indicates that users highly dislike viewing the video content and 1 indicates that they highly like it. A user's viewing behavior toward content modes — for example, mostly watching a certain actor's content, war-themed content, or landscape scenery — indicates that videos in the same mode are strongly correlated. The content mode correlation coefficient value likewise ranges over real numbers in [0, 1], where 0 indicates no correlation between video contents and 1 indicates the strongest correlation.
In one embodiment, the bullet screen emotion quantization value and the comment emotion quantization value are determined by a deep-learning natural language processing engine, which may be an RNN (Recurrent Neural Network), a CNN (Convolutional Neural Network), or the like. For example, a CNN is used for quantitative analysis modeling of user comment content or bullet screen content. As shown in fig. 3, quantitative analysis modeling of user comment content is taken as an example. The first layer is a word vector embedding layer, which maps words to vectors with word2vec; the second layer is a convolutional layer using multiple filters; the third layer is a pooling layer, for example max-over-time pooling; finally, the outputs of all pooling layers are concatenated into one long feature vector, dropout regularization is added, and the result is output through softmax.
For the word vector embedding layer, when the CNN model processes text, the text must first be converted into input features that the CNN can recognize. After segmenting a comment sentence into words, each word is mapped to a d-dimensional real-valued vector using a word vector table trained in advance with word2vec. Let x_i ∈ R^d denote the d-dimensional word vector of the i-th word in the sentence; all words together form a sentence matrix M_j ∈ R^{l×d}, where j indexes the j-th comment sentence in the comment set, l is the number of words in the sentence, and d is the dimension of each word vector. Each row of the matrix is the word vector of one word in the sentence, and the matrix M_j is the input to the CNN.
For the convolutional layer, the input sentence matrix M_j is convolved with a filter (sliding window) w ∈ R^{h×d} of size h×d. Each convolution step covers h words, and the window width d equals the word-vector dimension:

c_i = f(w × x_{i:i+h-1} + b)   (1)

where b is a bias term, f(·) is a non-linear activation function, x_{i:i+h-1} denotes rows i through i+h-1 of the matrix, and c_i is the local feature produced by the convolution. Sliding over the sentence matrix M_j, the window acts on the local feature regions {x_{1:h}, x_{2:h+1}, …, x_{l-h+1:l}}. Therefore:

C = [c_1, c_2, …, c_{l-h+1}]   (2)

where C ∈ R^{l-h+1}.
For the pooling layer, the max-over-time pooling method is used: it simply extracts the maximum of the preceding feature vector, which represents the strongest signal, i.e., the most important of the local features obtained by the sliding window. This pooling approach handles variable-length sentence input and differing filter sizes. Thus, for the output C produced by one filter, max-over-time pooling maps the features to a single pooled feature s:

s = max(C)   (3)
for the entire convolutional neural network model, multiple filters w will be used j h×d (h is a different value) to the input matrix M j l×d Performing convolution operation to generate a plurality of features, and combining the features as an input vector V of the full connection layer:
V=[s (1,1)… s (1,h)… s (m,h) ] (4)
wherein s is (m,h) Representing the characteristic produced by the mth filter of size h.
The pooled one-dimensional output vector is fully connected to a softmax layer, which can be configured according to the task (it generally outputs a probability distribution over the final categories). The classification result is produced by passing the fully connected output through the softmax function, and the model optimizes its parameters by back-propagation using the actual classification labels:

P(y | V, W, b) = softmax(W × V + b)   (5)

where y is the category label of the emotion analysis, W holds the fully connected layer parameters, and b is a bias term.
Finally, for a given film, the emotion-degree labels output by the CNN model for different users' comments are averaged to obtain a comprehensive emotion quantization value Y.
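As a concrete illustration of equations (1)–(5), the following is a minimal pure-Python sketch of one forward pass through the text CNN: embedding matrix in, convolution, max-over-time pooling, softmax out, then averaging per-comment scores into Y. The weights are random stand-ins for trained parameters, ReLU stands in for the unspecified non-linearity f, and all dimensions are hypothetical:

```python
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def conv_max(M, w, b):
    """One filter: Eq. (1) c_i = relu(sum(w * x_{i:i+h-1}) + b) at every
    window position, collected as C (Eq. (2)), then Eq. (3) max-over-time."""
    h, l = len(w), len(M)
    C = []
    for i in range(l - h + 1):
        s = b
        for r in range(h):
            s += sum(wv * xv for wv, xv in zip(w[r], M[i + r]))
        C.append(max(0.0, s))  # ReLU as the non-linearity f
    return max(C)

def text_cnn_forward(M, filters, W_fc, b_fc):
    V = [conv_max(M, w, b) for w, b in filters]  # Eq. (4): pooled features
    logits = [sum(wr * v for wr, v in zip(row, V)) + bb
              for row, bb in zip(W_fc, b_fc)]
    return softmax(logits)                        # Eq. (5)

def rand_mat(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# Toy sizes: l = 6 words, d = 8-dim word vectors, 4 filters, 2 classes
l, d = 6, 8
M = rand_mat(l, d)                               # sentence matrix M_j (word2vec stand-in)
filters = [(rand_mat(h, d), 0.0) for h in (2, 2, 3, 3)]
W_fc, b_fc = rand_mat(2, 4), [0.0, 0.0]

p = text_cnn_forward(M, filters, W_fc, b_fc)     # [P(positive), P(negative)]

# Average per-comment positive scores into the composite value Y
Y = sum(text_cnn_forward(rand_mat(l, d), filters, W_fc, b_fc)[0]
        for _ in range(5)) / 5
```

Because softmax outputs a probability distribution, each per-comment score lies in [0, 1], so their mean Y is also in [0, 1], matching the quantization range described above.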
In one embodiment, a CNN is used to determine the content mode correlation coefficient values. For example, the CNN extracts the features of the picture to be identified and the features of labeled pictures in a template library — e.g., pictures labeled as a particular actor, landscape scenery, war scenes, and the like — and the similarity between the features of the sample to be detected and those of the labeled pictures is then measured.
Cosine similarity can represent the similarity between the features extracted from the sample picture to be detected and those of a labeled picture, each picture feature being represented as a vector. Given two vectors x and y, the cosine similarity cos(x, y) is computed from their dot product and lengths:

cos(x, y) = (x · y) / (‖x‖ ‖y‖)

Cosine similarity measures the similarity of two vectors by the cosine of the angle between them. The cosine value lies in [-1, 1]: the closer the value is to 1, the closer the directions of the two vectors; the closer to -1, the more opposite their directions; a value near 0 means the vectors are nearly orthogonal, i.e., largely dissimilar.
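The cosine similarity computation can be sketched directly (the sample vectors are hypothetical feature vectors, not real CNN outputs):

```python
import math

def cosine_similarity(x, y):
    """cos(x, y) = (x . y) / (|x| * |y|); result lies in [-1, 1]."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))   # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   # orthogonal -> 0.0
print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # opposite -> -1.0
```

In practice the inputs would be the CNN feature vectors of the sample picture and a labeled template picture.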
At step 230, the time of the first request for the video content is determined, and a heat loss function is determined based on that time, the current time, and a cooling factor. The natural cooling of heat follows the heat loss function; in one embodiment, the formula

f(t) = e^{-b(t - t_0)}

is used, where e is the base of the natural exponential, t_0 is the time of the first request for the video content, t is the current time, and b is the cooling coefficient used to adjust the cooling rate.
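Taking the exponential-decay form e^{-b(t - t_0)} implied by the listed parameters (the original formula appears only as an image placeholder, so this form is a reconstruction), a minimal sketch of the heat loss function, with hypothetical times and cooling coefficients:

```python
import math

def heat_loss(t, t0, b):
    """f(t) = e^{-b * (t - t0)}: equals 1.0 at the first request and
    decays toward 0 as time passes; b adjusts the cooling rate."""
    return math.exp(-b * (t - t0))

t0 = 0.0
print(heat_loss(t0, t0, b=0.1))    # at the first request: 1.0
print(heat_loss(24.0, t0, b=0.1))  # 24 time units later
print(heat_loss(24.0, t0, b=0.5))  # larger b -> faster cooling
```

A larger cooling coefficient b makes the same elapsed time discount the heat more heavily.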
In step 240, the adjustment coefficients of the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value are determined respectively. The adjustment coefficient of each heat prediction parameter can be tuned according to actual content operation conditions.
In step 250, the weighted sum of the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value is calculated. For example, if the adjustment coefficient of the bullet screen emotion quantization value x is w1, that of the comment emotion quantization value y is w2, and that of the content mode correlation coefficient value z is w3, the weighted sum is w1×x + w2×y + w3×z.

In step 260, the heat value of the video content is predicted based on the product of the weighted sum, the existing heat value of the video content, and the heat loss function. For example, if the existing heat value of the video content is h, the predicted heat value is

h' = h × (w1×x + w2×y + w3×z) × e^{-b(t - t_0)}
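Putting steps 240–260 together, a minimal sketch of the prediction — all numeric values here (parameters, coefficients, times) are hypothetical, and the exponential heat-loss form is the reconstruction discussed above:

```python
import math

def predict_heat(h, x, y, z, w1, w2, w3, t, t0, b):
    """Predicted heat = existing heat h, times the weighted sum of the
    three prediction parameters, times the heat loss function."""
    weighted = w1 * x + w2 * y + w3 * z       # step 250
    return h * weighted * math.exp(-b * (t - t0))  # step 260

# Well-liked content (x, y near 1), fairly correlated mode z,
# equal adjustment coefficients, 12 time units after first request
h_next = predict_heat(h=100.0, x=0.9, y=0.8, z=0.7,
                      w1=1.0 / 3, w2=1.0 / 3, w3=1.0 / 3,
                      t=12.0, t0=0.0, b=0.05)
print(h_next)
```

With these numbers the weighted sum is 0.8 and the decay factor is e^{-0.6}, so the predicted heat is below the existing value of 100 — negative sentiment or elapsed time both pull the prediction down.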
In the current internet video environment, the heat of a video cannot be estimated from historical heat alone, nor does it strictly follow the natural cooling rule. For example, a movie starring a popular star may have a high access rate at first, but if its quality is poor and word of mouth among users is bad (as obtained from comments and bullet screens), it will become cold content in a short time, deviating from the natural cooling rule implied by the access rate. Therefore, in this embodiment, based on deep learning, the change trend of video content can be accurately predicted using the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function. For example, when video content is not yet online, or online content becomes a hotspot for some reason, the content can be accurately identified and the hotspot content distributed to CDN edge nodes in advance, improving the recommendation accuracy of the recommendation system, the hit rate of the CDN, and the quality of service. The method is applicable to CDNs with whole-file or fragment storage and to video client recommendation systems.
Fig. 4 is a schematic structural diagram of an embodiment of a video content heat prediction apparatus according to the present disclosure. The apparatus includes a prediction parameter determination unit 410, a heat churn function determination unit 420, and a video heat prediction unit 430.
The prediction parameter determination unit 410 is configured to determine the bullet screen emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value of the video content. The higher the bullet screen and comment emotion quantization values, the more the user likes the video content and the higher its popularity; the content mode correlation coefficient value reflects the correlation between items of content. In this embodiment, these three values are used as prediction parameters for predicting the heat value.
The heat loss function determining unit 420 is used for determining a heat loss function of the video content. Some videos or files will cool down over time, and therefore, a heat loss function needs to be introduced.
The video heat prediction unit 430 is configured to predict the heat value of the video content based on the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function. For example, different adjustment coefficients are set for the three values, their weighted sum is computed, and the heat value of the video content is determined as the product of the weighted sum, the existing heat value of the video content, and the heat loss function.
In this embodiment, using the bullet screen emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function, the change trend of video content can be accurately predicted, so that video content with a high heat value can be cached at CDN nodes: when video content is not yet online, or online content becomes a hotspot for some reason, the content can be accurately identified and the hotspot content distributed to CDN edge nodes in advance.
Fig. 5 is a schematic structural diagram of another embodiment of the video content heat prediction apparatus according to the present disclosure. The prediction parameter determination unit 410 includes a prediction parameter acquisition module 411 and a deep learning quantization module 412.
The prediction parameter collecting module 411 is configured to obtain the barrage content and comment content of the video content, as well as the user's viewing behavior information for the video content mode. For example, the barrage content and the comment content may be acquired by a crawler.
The deep learning quantization module 412 is configured to determine the barrage emotion quantization value and the comment emotion quantization value by using a deep learning natural language processing engine, and to determine the content mode correlation coefficient value by using a deep learning correlation measurement engine. The barrage emotion quantization value and the comment emotion quantization value may each be represented as a real number in [0,1], where 0 indicates that the user strongly dislikes viewing the video content and 1 indicates that the user strongly likes viewing it. The content mode correlation coefficient value may likewise take a real value in [0,1], where 0 indicates no correlation between video contents and 1 indicates complete correlation between them.
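The patent does not specify how the deep learning engines map their raw outputs into this range; as one illustrative convention (the function name and the sigmoid mapping are our assumption, not part of the patent), an unbounded sentiment score could be squashed into [0,1] like so:

```python
import math

def quantize_sentiment(raw_score: float) -> float:
    """Map an unbounded sentiment score to [0, 1].

    0 means the user strongly dislikes the content, 1 means the user
    strongly likes it; a neutral score of 0.0 maps to 0.5.
    """
    return 1.0 / (1.0 + math.exp(-raw_score))
```

The same convention could serve for the content mode correlation coefficient value, with 0 meaning no correlation and 1 meaning complete correlation.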
The heat loss function determination unit 420 is configured to determine the time at which the video content was first requested, and to determine the heat loss function based on that time, the current time, and a cooling factor. In one embodiment, the heat loss function may be e^(-b(t-t₀)), where t₀ is the time at which the video content was first requested, t is the current time, and b is the cooling factor used to adjust the cooling rate.
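A minimal sketch of this heat loss function (function and argument names are ours), assuming t₀ and t are expressed in the same time unit and b is the cooling factor per unit of that time:

```python
import math

def heat_loss(t0: float, t: float, b: float) -> float:
    """Heat loss factor e^(-b*(t - t0)).

    Equals 1.0 at the moment of the first request (t == t0) and decays
    toward 0 as time passes; a larger cooling factor b means faster decay.
    """
    return math.exp(-b * (t - t0))
```

With b = 0.1 per hour, for example, content first requested 10 hours ago retains a factor of e⁻¹ ≈ 0.37.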
The video heat prediction unit 430 is configured to determine adjustment coefficients for the barrage emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value respectively; to calculate a weighted sum of the three values; and to predict the heat value of the video content based on the product of the weighted sum, the existing heat value of the video content, and the heat loss function. For example, if the adjustment coefficient of the barrage emotion quantization value x is w1, that of the comment emotion quantization value y is w2, and that of the content mode association coefficient value z is w3, then the weighted sum is w1·x + w2·y + w3·z; with the current heat value of the video content denoted h and the heat loss function e^(-b(t-t₀)), the predicted heat value of the video content is (w1·x + w2·y + w3·z) · h · e^(-b(t-t₀)).
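The calculation above can be sketched as follows (function and variable names are ours, not the patent's):

```python
import math

def predict_heat(x: float, y: float, z: float,
                 w1: float, w2: float, w3: float,
                 h: float, t0: float, t: float, b: float) -> float:
    """Predicted heat value (w1*x + w2*y + w3*z) * h * e^(-b*(t - t0)).

    x, y, z are the barrage emotion, comment emotion and content mode
    correlation values; w1, w2, w3 their adjustment coefficients; h the
    existing heat value; t0 the first-request time; t the current time;
    b the cooling factor.
    """
    weighted_sum = w1 * x + w2 * y + w3 * z
    return weighted_sum * h * math.exp(-b * (t - t0))
```

At t == t0 the loss factor is 1 and the prediction reduces to the weighted sum times the existing heat value; as time passes, the prediction decays exponentially.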
In this embodiment, based on deep learning, the barrage emotion quantization value, the comment emotion quantization value, the content mode association coefficient value and the heat loss function allow the heat trend of the video content to be predicted accurately, so that video content with a high heat value can be cached in a CDN node and recommended to users in time.
The embodiments of the disclosure can be applied to website and mobile phone video client recommendation systems, improving quality of service for users and audience ratings. For example, after the heat value of each item of video content is obtained, the items are sorted by heat value, so that video content with a high heat value is recommended to the user.
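The sort-and-recommend step described above can be sketched as follows (function name and data shape are our assumption):

```python
def top_k_by_heat(heat_by_video: dict, k: int) -> list:
    """Return the ids of the k videos with the highest predicted heat values."""
    return sorted(heat_by_video, key=heat_by_video.get, reverse=True)[:k]
```

The same ranking could also decide which items to pre-cache at CDN edge nodes.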
Fig. 6 is a schematic structural diagram of a video content heat prediction apparatus according to still another embodiment of the disclosure. The apparatus includes a memory 610 and a processor 620. Wherein: memory 610 may be a magnetic disk, flash memory, or any other non-volatile storage medium. The memory 610 is used for storing instructions in the embodiments corresponding to fig. 1 and 2. Processor 620 is coupled to memory 610 and may be implemented as one or more integrated circuits, such as a microprocessor or microcontroller. The processor 620 is configured to execute instructions stored in the memory.
In one embodiment, the apparatus 700 may also include a memory 710 and a processor 720, as shown in FIG. 7. Processor 720 is coupled to memory 710 by BUS 730. The apparatus 700 may be further connected to an external storage device 750 through a storage interface 740 for retrieving external data, and may be further connected to a network or another computer system (not shown) through a network interface 760, which will not be described in detail herein.
In this embodiment, data instructions stored in the memory are processed by the processor, and the barrage emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function are used so that the heat trend of the video content can be predicted accurately.
In another embodiment, a computer-readable storage medium has stored thereon computer program instructions which, when executed by a processor, implement the steps of the method in the corresponding embodiments of fig. 1, 2. As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, apparatus, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Thus far, the present disclosure has been described in detail. Some details well known in the art have not been described in order to avoid obscuring the concepts of the present disclosure. It will be fully apparent to those skilled in the art from the foregoing description how to practice the presently disclosed embodiments.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the foregoing examples are for purposes of illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (12)

1. A video content heat prediction method comprises the following steps:
determining a barrage emotion quantization value, a comment emotion quantization value and a content mode correlation coefficient value of the video content, wherein the content mode correlation coefficient value reflects the correlation between items of video content and is determined by a deep learning correlation measurement engine based on the user's viewing behavior information for the video content mode;
determining a heat loss function of the video content;
and predicting the heat value of the video content based on the barrage emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function.
2. The video content heat prediction method of claim 1, wherein predicting the heat value of the video content based on the barrage emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value, and the heat loss function comprises:
respectively determining adjustment coefficients for the barrage emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value;
calculating a weighted sum of the barrage emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value;
predicting the heat value of the video content based on a product of the weighted sum, an existing heat value of the video content and the heat loss function.
3. The video content heat prediction method of claim 1 or 2, wherein determining the barrage emotion quantization value and the comment emotion quantization value of the video content comprises:
acquiring barrage content and comment content of the video content;
determining the barrage emotion quantization value by using a deep learning natural language processing engine based on the barrage content;
and determining the comment emotion quantization value by using a deep learning natural language processing engine based on the comment content.
4. The video content heat prediction method of claim 1 or 2, wherein determining the heat loss function of the video content comprises:
determining a time at which the video content was first requested;
determining the heat loss function based on the time of the first request for the video content, the current time, and a cooling factor.
5. The method of claim 4, wherein the heat loss function is e^(-b(t-t₀));
wherein t₀ is the time of the first request for the video content, t is the current time, and b is the cooling factor.
6. An apparatus for predicting popularity of video content, comprising:
the prediction parameter determining unit is used for determining a barrage emotion quantization value, a comment emotion quantization value and a content mode correlation coefficient value of the video content, wherein the content mode correlation coefficient value reflects the correlation among the video content, and is determined by a deep learning correlation measurement engine based on the viewing behavior information of the video content mode by the user;
a heat loss function determining unit, configured to determine a heat loss function of the video content;
and the video heat prediction unit is used for predicting the heat value of the video content based on the barrage emotion quantization value, the comment emotion quantization value, the content mode correlation coefficient value and the heat loss function.
7. The video content heat prediction apparatus according to claim 6,
the video heat prediction unit is configured to respectively determine adjustment coefficients for the barrage emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value; calculate a weighted sum of the barrage emotion quantization value, the comment emotion quantization value and the content mode correlation coefficient value; and predict the heat value of the video content based on a product of the weighted sum, an existing heat value of the video content and the heat loss function.
8. The video content hotness prediction device according to claim 6 or 7, wherein the prediction parameter determining unit includes:
the prediction parameter acquisition module is used for acquiring barrage content and comment content of the video content;
and the deep learning quantization module is configured to determine the barrage emotion quantization value by using a deep learning natural language processing engine based on the barrage content, and to determine the comment emotion quantization value by using the deep learning natural language processing engine based on the comment content.
9. The video content hotness prediction apparatus according to claim 6 or 7, wherein,
the heat loss function determining unit is configured to determine the time of the first request for the video content, and to determine the heat loss function based on the time of the first request for the video content, the current time, and a cooling factor.
10. The video content heat prediction apparatus of claim 9, wherein the heat loss function is e^(-b(t-t₀));
wherein t₀ is the time of the first request for the video content, t is the current time, and b is the cooling factor.
11. An apparatus for predicting popularity of video content, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the video content heat prediction method of any of claims 1 to 5 based on instructions stored in the memory.
12. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the video content heat prediction method of any one of claims 1 to 5.
CN201811214009.7A 2018-10-18 2018-10-18 Video content heat prediction method and device Active CN111078944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811214009.7A CN111078944B (en) 2018-10-18 2018-10-18 Video content heat prediction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811214009.7A CN111078944B (en) 2018-10-18 2018-10-18 Video content heat prediction method and device

Publications (2)

Publication Number Publication Date
CN111078944A CN111078944A (en) 2020-04-28
CN111078944B true CN111078944B (en) 2023-04-07

Family

ID=70308578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811214009.7A Active CN111078944B (en) 2018-10-18 2018-10-18 Video content heat prediction method and device

Country Status (1)

Country Link
CN (1) CN111078944B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112566184B (en) * 2020-11-30 2023-10-03 中国联合网络通信集团有限公司 Communication method
CN113297934B (en) * 2021-05-11 2024-03-29 国家计算机网络与信息安全管理中心 Multi-mode video behavior analysis method for detecting Internet violence harmful scene
CN114970955B (en) * 2022-04-15 2023-12-15 黑龙江省网络空间研究中心 Short video heat prediction method and device based on multi-mode pre-training model

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101477556A (en) * 2009-01-22 2009-07-08 苏州智讯科技有限公司 Method for discovering hot sport in internet mass information
CN106604066A (en) * 2016-12-13 2017-04-26 宁夏凯速德科技有限公司 Improved personalized recommendation method and system applied to video application
CN106776528A (en) * 2015-11-19 2017-05-31 ***通信集团公司 A kind of information processing method and device
CN107105320A (en) * 2017-03-07 2017-08-29 上海交通大学 A kind of Online Video temperature Forecasting Methodology and system based on user emotion
CN108304399A (en) * 2017-01-12 2018-07-20 武汉斗鱼网络科技有限公司 The recommendation method and device of Web content

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US20020120501A1 (en) * 2000-07-19 2002-08-29 Bell Christopher Nathan Systems and processes for measuring, evaluating and reporting audience response to audio, video, and other content
US20110060649A1 (en) * 2008-04-11 2011-03-10 Dunk Craig A Systems, methods and apparatus for providing media content
US20160267377A1 (en) * 2015-03-12 2016-09-15 Staples, Inc. Review Sentiment Analysis

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN101477556A (en) * 2009-01-22 2009-07-08 苏州智讯科技有限公司 Method for discovering hot sport in internet mass information
CN106776528A (en) * 2015-11-19 2017-05-31 ***通信集团公司 A kind of information processing method and device
CN106604066A (en) * 2016-12-13 2017-04-26 宁夏凯速德科技有限公司 Improved personalized recommendation method and system applied to video application
CN108304399A (en) * 2017-01-12 2018-07-20 武汉斗鱼网络科技有限公司 The recommendation method and device of Web content
CN107105320A (en) * 2017-03-07 2017-08-29 上海交通大学 A kind of Online Video temperature Forecasting Methodology and system based on user emotion

Non-Patent Citations (1)

Title
Chen Liang et al., "Online video popularity prediction based on deep belief networks," Computer Engineering and Applications, 2016, full text. *

Also Published As

Publication number Publication date
CN111078944A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
WO2020207196A1 (en) Method and apparatus for generating user tag, storage medium and computer device
US20220222920A1 (en) Content processing method and apparatus, computer device, and storage medium
CN109902708B (en) Recommendation model training method and related device
CN110737801B (en) Content classification method, apparatus, computer device, and storage medium
US11409791B2 (en) Joint heterogeneous language-vision embeddings for video tagging and search
CN106547908B (en) Information pushing method and system
US11019017B2 (en) Social media influence of geographic locations
US8473981B1 (en) Augmenting metadata of digital media objects using per object classifiers
US10685236B2 (en) Multi-model techniques to generate video metadata
WO2022199504A1 (en) Content identification method and apparatus, computer device and storage medium
CN111078944B (en) Video content heat prediction method and device
EP2568429A1 (en) Method and system for pushing individual advertisement based on user interest learning
EP3367676A1 (en) Video content analysis for automatic demographics recognition of users and videos
CN112364204B (en) Video searching method, device, computer equipment and storage medium
CN107545301B (en) Page display method and device
CN110489574B (en) Multimedia information recommendation method and device and related equipment
US20230004608A1 (en) Method for content recommendation and device
CN111858969B (en) Multimedia data recommendation method, device, computer equipment and storage medium
KR101725510B1 (en) Method and apparatus for recommendation of social event based on users preference
CN111984824A (en) Multi-mode-based video recommendation method
CN111639230B (en) Similar video screening method, device, equipment and storage medium
CN112699667A (en) Entity similarity determination method, device, equipment and storage medium
CN116034401A (en) System and method for retrieving video using natural language descriptions
CN113220974B (en) Click rate prediction model training and search recall method, device, equipment and medium
CN116541592A (en) Vector generation method, information recommendation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant