CN115878891A - Live content generation method, device, equipment and computer storage medium


Info

Publication number
CN115878891A
Authority
CN
China
Prior art keywords
content
user
behavior data
determining
live broadcast
Prior art date
Legal status
Pending
Application number
CN202211527211.1A
Other languages
Chinese (zh)
Inventor
王玲
周静
Current Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202211527211.1A
Publication of CN115878891A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention relates to the technical field of video playing and discloses a live content generation method, which includes the following steps: acquiring user behavior data of a user watching a current live broadcast, the user behavior data corresponding to a plurality of preset behavior feature dimensions; determining the correlation among the plurality of preset behavior feature dimensions according to the user behavior data; screening the user behavior data according to the correlation to obtain screened behavior data; respectively determining the user's preference for each selectable content tag according to the screened behavior data; determining a target content tag corresponding to the user from the selectable content tags according to the preference; and updating the live content currently shown to the user in real time according to the target content tag. In this way, the embodiment of the invention generates live content in real time according to the user's interests and preferences, thereby improving the viewing experience of users watching the live broadcast.

Description

Live content generation method, device, equipment and computer storage medium
Technical Field
The embodiment of the present invention relates to the technical field of computer data processing, and in particular to a live content generation method, device, equipment, and computer storage medium.
Background
In the existing live broadcast process, the anchor generally shows viewers fixed live content that has been prepared in advance.
In the process of implementing the embodiment of the present invention, the inventors found that existing live content generation produces content that is single and fixed, resulting in a poor viewing experience for users.
Disclosure of Invention
In view of the above problems, the embodiment of the present invention provides a live content generation method, which is used to solve the problem in the prior art that the viewing experience of users watching a live broadcast is poor.
According to an aspect of an embodiment of the present invention, there is provided a live content generation method, including:
acquiring user behavior data of a user watching a current live broadcast; the user behavior data correspond to a plurality of preset behavior characteristic dimensions;
determining the correlation among the multiple preset behavior feature dimensions according to the user behavior data;
screening the user behavior data according to the correlation degree to obtain screened behavior data;
respectively determining the preference degree of the user for each selectable content label according to the screened behavior data;
determining a target content label corresponding to the user from the selectable content labels according to the preference degree;
and updating the current live broadcast content corresponding to the user in real time according to the target content tag.
In an optional manner, the method further comprises:
and determining the user behavior data under the behavior feature dimension with the correlation degree smaller than a preset threshold as the screened behavior data.
In an optional manner, the method further comprises:
and performing weighted summation processing on the screened behavior data under all the behavior feature dimensions corresponding to the user to obtain the preference degree of the user for the selectable content tag.
In an optional manner, the method further comprises:
when the adjustment granularity of the current live content is a single user, determining the selectable content tags in preset positions before the preference degree is arranged in a descending order as the target content tags;
when the adjustment granularity of the current live content is multi-user, carrying out weighted summation on each selectable content label according to the preference degree corresponding to a target user group to obtain a selection weight corresponding to the selectable content label; the target user group is obtained by screening a plurality of users according to user portrait information;
determining the target content tag from the selectable content tags according to the selection weight.
In an optional manner, the method further comprises:
and sorting the selection weights according to a heap sorting algorithm, and determining the selectable content labels of preset bits before descending sorting of the selection weights as the target content labels.
In an optional manner, the method further comprises:
when the selection weight of the target content label and the selection weight of the selectable content label corresponding to the current live broadcast are determined to meet a preset relation, updating the live broadcast content of the current live broadcast in real time according to the target content label and a preset content prediction model; and the content prediction model is used for determining updated live content related to the target content label according to the target content label and the current live content.
In an optional manner, the method further comprises:
determining the dimension of the content to be updated according to the live content of the current live broadcast;
determining content characteristic information of the target content label under the dimension of the content to be updated according to the content prediction model;
and updating the live broadcast content of the current live broadcast in real time according to the content characteristic information.
According to another aspect of the embodiments of the present invention, there is provided a live content generating apparatus, including:
the acquisition module is used for acquiring user behavior data of a user watching the current live broadcast; the user behavior data correspond to a plurality of preset behavior characteristic dimensions;
the first determining module is used for determining the correlation among the plurality of preset behavior feature dimensions according to the user behavior data;
the screening module is used for screening the user behavior data according to the correlation degree to obtain screened behavior data;
the second determining module is used for respectively determining the preference degree of the user for each selectable content tag according to the screened behavior data;
a third determining module, configured to determine, according to the preference, a target content tag corresponding to the user from the selectable content tags;
and the updating module is used for updating the current live broadcast content corresponding to the user in real time according to the target content label.
According to another aspect of the embodiments of the present invention, there is provided a live content generating device, including:
the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the live content generation method embodiment as described in any one of the preceding claims.
According to a further aspect of an embodiment of the present invention, a computer-readable storage medium is provided, where at least one executable instruction is stored, and the executable instruction causes a live content generation device to perform the operation of the embodiment of the live content generation method described in any one of the foregoing items.
According to the embodiment of the present invention, user behavior data of a user watching a current live broadcast is acquired under a plurality of preset behavior feature dimensions, where different behavior feature dimensions represent different types of user behavior with respect to the live broadcast, such as liking, rewarding, and commenting on the live content currently being watched. Considering that the number of users watching a live broadcast can be very large (for example, millions), when the behavior data of each user is analyzed in real time and live content corresponding to the user is generated in real time from the analysis result, the data need to be reduced in dimension in order to improve the efficiency of live content generation. Further, since each user acts as a single behavioral subject, there is correlation between behaviors in different dimensions; for example, a user who gives more likes is also more likely to give more rewards. Therefore, the correlation between the plurality of preset behavior feature dimensions can be determined from the user behavior data, and the user behavior data is screened according to the correlation to obtain screened behavior data, that is, more representative feature dimensions are selected from the plurality of behavior feature dimensions. The user's preference for each selectable content tag is then determined from the screened behavior data, and the target content tag corresponding to the user is determined from the selectable content tags according to the preference. Finally, the live content currently shown to the user is updated in real time according to the target content tag. By using the correlation between the behavior feature dimensions corresponding to the user behavior data, the embodiment of the present invention can reduce the dimensionality of the user behavior data and select representative behavior feature dimensions, thereby reducing the amount of data to be analyzed and improving the efficiency of preference analysis based on the user's behavior data; the live picture can then be changed in real time according to the target content tag, so that each user watching the live broadcast obtains, in real time, the live content that best matches his or her preferences, which improves the user's live viewing experience.
The foregoing is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments of the present invention may be understood more clearly and implemented according to the content of the specification, and in order to make the above and other objects, features, and advantages of the embodiments of the present invention more readily apparent, specific embodiments of the present invention are described in detail below.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flow diagram of a live content generation method provided by an embodiment of the present invention;
fig. 2 is a schematic flowchart illustrating a method for generating live content according to another embodiment of the present invention to determine a preference of an optional content tag;
fig. 3 is a schematic flowchart illustrating weighted summation of screened data according to MapReduce in a live content generation method provided by another embodiment of the present invention;
fig. 4 is a schematic structural diagram illustrating a live content generating apparatus provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram illustrating a live content generating device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein.
Fig. 1 shows a flow diagram of a live content generation method provided by an embodiment of the present invention, which is executed by a computer processing device. The computer processing device may include a cell phone, a notebook computer, etc. As shown in fig. 1, the method comprises the steps of:
step 10: acquiring user behavior data of a user watching a current live broadcast; wherein the user behavior data corresponds to a plurality of preset behavior feature dimensions.
In one embodiment of the present invention, the user behavior data may include historical behavior data of each user within a preset historical time interval (for example, the last half year or the last month) and real-time behavior data within a short time interval immediately preceding the current moment (for example, the last half hour or the last 10 minutes). The behavior feature dimensions are used to represent behavior features that can reflect the user's preferences, such as viewing duration, number of likes, reward amount, degree of attention while watching, and degree of emotional pleasure while watching.
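For illustration only, the behavior data described above might be organized as in the following Python sketch; the field names (historical versus real-time dimensions) are assumptions made for the example, not a required data model.

```python
# Illustrative sketch only: one viewer's behavior data grouped into historical and
# real-time behavior feature dimensions (dimension names are assumptions).
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class UserBehaviorData:
    user_id: str
    # Historical dimensions over a preset interval (e.g. the last month).
    historical: Dict[str, float] = field(default_factory=dict)
    # Real-time dimensions over a short recent interval (e.g. the last 10 minutes).
    realtime: Dict[str, float] = field(default_factory=dict)

sample = UserBehaviorData(
    user_id="viewer_001",
    historical={"viewing_duration_s": 5400.0, "like_count": 12.0,
                "follow_count": 3.0, "reward_amount": 6.0},
    realtime={"attention": 0.8, "emotion_score": 0.7,
              "action_score": 0.3, "environment_score": 0.9},
)
print(sample)
```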
Step 20: and determining the correlation among the preset behavior feature dimensions according to the user behavior data.
Behavior feature values under behavior feature dimensions that can reflect the user's preference for the content being watched are extracted from the user behavior data; the behavior feature dimensions may include real-time behavior feature dimensions and historical behavior feature dimensions. The real-time behavior feature dimensions may include a degree of attention, an emotion score, an action score, and an environment score, where the emotion score represents how pleasant the user's emotion is, the action score represents the intensity and variation of the user's actions, and the environment score represents how quiet the user's current environment is. The historical behavior feature dimensions may include viewing duration, number of likes, number of follows, reward amount, and other behaviors that can represent the user's historical degree of attention to the selectable content tags.
The correlation between every two of the behavior feature dimensions can be calculated. The correlation represents how strongly the feature values of two behavior feature dimensions are related: the stronger the correlation, the more the feature values under the two dimensions influence each other. For example, the correlations among the number of likes, the reward amount, and the viewing duration are generally large.
Specifically, for each user, the mean and standard deviation of the feature values of all selectable content tags under each behavior feature dimension are calculated; then, for each pair of behavior feature dimensions, the deviations of each selectable content tag's feature values from the respective means are multiplied and summed, and the correlation is determined from the ratio of this sum to the product of the standard deviations.

Specifically, the correlation r_{X,Y} between any two behavior feature dimensions X and Y is calculated as follows:

r_{X,Y} = \frac{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{X}\right)\left(y_i - \bar{Y}\right)}{\sigma_X \, \sigma_Y}

where n is the total number of selectable content tags, and x_i and y_i are the feature values of the i-th selectable content tag under the two behavior feature dimensions; for example, x_i may be the user's viewing duration for the i-th selectable content tag and y_i the number of likes given to that tag. \bar{X} and \bar{Y} are the means of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y, respectively. 0 \le r_{X,Y} \le 1, and the larger the value of r_{X,Y}, the stronger the correlation between X and Y, indicating that X or Y can be deleted as redundant.
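As a non-limiting illustration, the following Python sketch computes the correlation r_{X,Y} above between two behavior feature dimensions for one user, where each dimension holds one feature value per selectable content tag; the dimension names and values are assumptions for the example only.

```python
# Minimal sketch (not the patent's implementation) of the per-user correlation
# between two behavior feature dimensions over the selectable content tags.
import numpy as np

def dimension_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson-style correlation between two behavior feature dimensions X and Y."""
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    denom = x.std() * y.std()
    return float(cov / denom) if denom else 0.0

# Hypothetical feature values over n = 5 selectable content tags.
viewing_duration = np.array([320.0, 45.0, 600.0, 10.0, 210.0])   # x_i
like_count       = np.array([  8.0,  1.0,  15.0,  0.0,   5.0])   # y_i
print(dimension_correlation(viewing_duration, like_count))
```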
Step 30: and screening the user behavior data according to the correlation degree to obtain screened behavior data.
Considering that the number of users watching a live broadcast may be large (for example, millions), and that, to personalize live content more accurately, the number of selectable content tags is preferably also as large and as fine-grained as possible (for example, millions or more), a relatively important subset of behavior feature dimensions can be screened out from the plurality of behavior feature dimensions for preference calculation, in order to achieve efficient real-time personalization of live content. Specifically, the behavior feature dimensions can be screened according to the correlation of the feature values under those dimensions. For a specific user, the behavior features follow certain patterns; for example, for a user with a large number of likes, the reward amount and viewing duration are generally also large, that is, the correlations among the number of likes, the reward amount, and the viewing duration are large. Therefore, one or two of these three behavior feature dimensions can be selected as the basis for calculating the preference.
Thus, in a further embodiment of the present invention, step 30 further comprises:
step 301: and determining the user behavior data under the behavior feature dimension with the correlation degree smaller than a preset threshold as the screened behavior data.
In an embodiment of the present invention, the user behavior data corresponding to the behavior feature dimensions whose correlation is smaller than the preset threshold is determined as the screened behavior data. For example, suppose there are three behavior feature dimensions A, B, and C, and the pairwise correlations are: re(A, B) = 0.1, re(A, C) = 0.8, re(B, C) = 0.5. Assuming the preset threshold is set to 0.2, the user behavior data under behavior feature dimensions A and B is screened out and used as the basis for calculating the preference, so the data under behavior feature dimension C does not need to be considered, which improves the efficiency of calculating the preference.
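A minimal Python sketch of this screening step is given below; the helper function and the correlation values simply reproduce the A/B/C example above and are not part of the claimed method.

```python
# Sketch of screening behavior feature dimensions: a dimension is dropped as
# redundant when its correlation with an already-kept dimension reaches the threshold.
from typing import Dict, List, Tuple

def screen_dimensions(corr: Dict[Tuple[str, str], float], dims: List[str],
                      threshold: float = 0.2) -> List[str]:
    kept: List[str] = []
    for d in dims:
        redundant = any(corr.get((k, d), corr.get((d, k), 0.0)) >= threshold
                        for k in kept)
        if not redundant:
            kept.append(d)
    return kept

corr = {("A", "B"): 0.1, ("A", "C"): 0.8, ("B", "C"): 0.5}
print(screen_dimensions(corr, ["A", "B", "C"]))  # -> ['A', 'B']; C is dropped as redundant
```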
Step 40: and respectively determining the preference degree of the user for each selectable content label according to the screened behavior data.
Specifically, the selectable content tags can be obtained by classifying and labeling historical live content. The classification may be done by manual labeling, or by cluster analysis of the live pictures and voice content. For example, for education-related live broadcasts, the selectable content tags derived from content analysis may include "language", "english", "math", and the like.
Optionally, the selectable content tags may also form a multi-level tree of selectable content tags. For example, "language", "english", and "math" may be used as the first-level selectable content tags, and each first-level selectable content tag may be further subdivided into second-level classifications; for example, the second-level selectable content tags under the "language" tag may include "take a word", "write", "read", "ancient writing", and the like. Further, the second-level selectable content tags may be subdivided into third-level classifications; for example, the third-level selectable content tags under the second-level tag "ancient writing" may include "poetry of the Tang dynasty", "Song ci", "literary language", and the like.
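The multi-level selectable-content-tag tree described above could, for illustration only, be represented as a nested structure such as the following Python sketch; the tag names are illustrative assumptions, not the patent's taxonomy.

```python
# Illustrative nested representation of a multi-level selectable-content-tag tree.
selectable_tags = {
    "chinese": {
        "ancient": ["tang poem", "song ci", "literary language"],
        "writing": [],
        "reading": [],
    },
    "english": [],
    "math": {
        "geometry": ["solid geometry"],
    },
}

def leaf_tags(tree, prefix=()):
    """Flatten the tree into full tag paths such as ('chinese', 'ancient', 'tang poem')."""
    if isinstance(tree, dict):
        for key, sub in tree.items():
            yield from leaf_tags(sub, prefix + (key,))
    elif tree:                       # non-empty list of finer-grained tags
        for leaf in tree:
            yield prefix + (leaf,)
    else:                            # no finer subdivision: the current path is itself a tag
        yield prefix

print(list(leaf_tags(selectable_tags)))
```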
In an embodiment of the present invention, the preference degree of each user for each selectable content tag may be obtained by performing weighted summation according to the behavior feature values of each user under multiple behavior feature dimensions under each selectable content tag.
Thus, step 40 further comprises: step 401: and performing weighted summation processing on the screened behavior data under all the behavior feature dimensions corresponding to the user to obtain the preference degree of the user for the selectable content tag.
Specifically, for a single user, performing weighted summation according to the feature values under the behavior feature dimensions corresponding to the screened behavior data to obtain the preference value of each user for each selectable content tag.
Correspondingly, when the number of the users is multiple, for each selectable content label, the preference value of all the users for the label is calculated, and the preference degree corresponding to the selectable content label is obtained.
Step 50: and determining a target content label corresponding to the user from the selectable content labels according to the preference degree.
In one embodiment of the present invention, considering the large number of users watching a live broadcast, the adjustment of live content may use various granularities, such as a single user or multiple users, in order to improve the timeliness of live content adjustment. When the adjustment granularity is a single user, the target content tag corresponding to the user is determined from the user behavior data of that single user, and the live content corresponding to the user is adjusted according to that target content tag; that is, each viewing user is an independent live content adjustment unit, and the behaviors and preferred content of different users do not influence each other.
Correspondingly, in order to improve the efficiency of live content generation, the live content corresponding to multiple users can be adjusted uniformly. For example, the overall preference of all users (or of a representative user group) currently watching the live broadcast is determined from their user behavior data, the live content is adjusted according to this preference, and the adjusted live content is presented uniformly to all of these users. When the target content tags are selected uniformly for multiple users, the preferences of all users for each selectable content tag can be weighted and summed to obtain the selection weight of that selectable content tag.
Specifically, step 50 further comprises:
step 501: and when the adjustment granularity of the current live content is a single user, determining the selectable content tags of preset positions before the preference degree descending order arrangement as the target content tags.
When the adjustment granularity of the current live content is a single user, that is, when the live content changes with each single user's interest preferences, the selectable content tags ranked in the top preset number of positions when that user's preferences are sorted in descending order can be determined as the target content tags of the user.
Step 502: when the adjustment granularity of the current live content is multi-user, carrying out weighted summation on each selectable content label according to the preference degree corresponding to a target user group to obtain a selection weight corresponding to the selectable content label; and the target user group is obtained by screening a plurality of users according to the user portrait information.
Specifically, the multiple users may be all users currently watching the live broadcast. For each selectable content tag, each user's preference for the tag is obtained by weighted summation of the user's screened behavior feature values with the preset weights corresponding to the behavior feature dimensions; then, the preferences of all users in the target user group for the specific selectable content tag are weighted and summed to obtain the selection weight corresponding to that specific selectable content tag.
Regarding the calculation of each user's preference for a specific selectable content tag: the k behavior feature dimensions with the smallest correlation can be selected as the screened behavior data, and the preference Score(i) of the user for the i-th selectable content tag is calculated as follows:

Score(i) = \sum_{j=1}^{k} a_j \, f_{i,j}

where f_{i,j} is the user's screened behavior feature value for the i-th selectable content tag under the j-th screened behavior feature dimension, and a_j is the corresponding weight parameter. The weight parameters a_j may use a fixed parameter combination, or a customized parameter combination based on the user's portrait characteristics. For example, if analysis shows that the viewer has more idle time, the weight of the viewing-duration parameter is larger; if the viewer's social attributes are rich, the weights of the like count and follow count are larger; if the viewer's funds are abundant, the weight of the reward amount is larger; if the viewer's education level is high, the weight of the attention-degree parameter is larger; and so on.

Optionally, the user's preference for the i-th selectable content tag may be further normalized according to the mean and standard deviation of the preferences over all selectable content tags, as follows:

W'_i = \frac{W_i - \overline{Score}}{\sigma_{Score}}

where W'_i is the normalized preference corresponding to the i-th selectable content tag, W_i is the Score(i) described above, and \overline{Score} and \sigma_{Score} are the mean and standard deviation of Score(i) taken over all selectable content tags. Fig. 2 may be referred to for an example of the specific preference calculation.
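The following is a minimal, non-limiting Python sketch of the Score(i) weighted sum and the W'_i normalization above; the array shapes, dimension names, and weight values are illustrative assumptions rather than part of the disclosed method.

```python
# Sketch: weighted preference scores per selectable content tag and their normalization.
import numpy as np

def preference_scores(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """features: shape (num_tags, k) screened feature values; weights: shape (k,)."""
    return features @ weights          # Score(i) = sum_j a_j * f_{i,j}

def normalize(scores: np.ndarray) -> np.ndarray:
    """W'_i = (Score(i) - mean(Score)) / std(Score) over all selectable content tags."""
    std = scores.std()
    return (scores - scores.mean()) / std if std else np.zeros_like(scores)

# Hypothetical: 4 tags, k = 3 screened dimensions (viewing duration, likes, rewards).
features = np.array([[0.8, 0.6, 0.2],
                     [0.1, 0.0, 0.0],
                     [0.5, 0.9, 0.7],
                     [0.3, 0.2, 0.1]])
weights = np.array([0.5, 0.3, 0.2])    # a_j, e.g. customized from the user portrait
print(normalize(preference_scores(features, weights)))
```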
In another embodiment of the present invention, in order to improve the processing efficiency of the weighted summation used to obtain the preferences in the foregoing embodiment, and thereby achieve real-time live-picture transformation with lower delay, a MapReduce divide-and-conquer approach may be used: the large-scale calculation is distributed to a plurality of computing nodes, and the final result is obtained by integrating the intermediate results of the respective nodes. The process of performing weighted summation on the screened data with MapReduce may refer to fig. 3 and, as shown in fig. 3, includes the following stages (a minimal code sketch follows the list below):
Map (mapping): the preprocessing stage, which maps the original data into K-V (key-value) pairs and sends them to reduce.
Shuffle (shuffling): the shuffle stage classifies the pairs and sends those with the same key to the same reduce node.
Reduce (reduction): the aggregation stage, which aggregates the pairs with the same key and then outputs the result.
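The following single-process Python sketch mimics the three MapReduce stages above for aggregating per-user tag preferences into selection weights; it is an illustrative stand-in, not an actual distributed MapReduce implementation.

```python
# Sketch: map per-user (tag, preference) pairs, shuffle by tag, reduce into per-tag weights.
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def map_stage(user_prefs: Dict[str, Dict[str, float]]) -> Iterable[Tuple[str, float]]:
    for _user, prefs in user_prefs.items():          # original data -> K-V pairs
        for tag, score in prefs.items():
            yield tag, score

def shuffle_stage(pairs: Iterable[Tuple[str, float]]) -> Dict[str, List[float]]:
    grouped: Dict[str, List[float]] = defaultdict(list)
    for tag, score in pairs:                         # same key goes to the same reducer
        grouped[tag].append(score)
    return grouped

def reduce_stage(grouped: Dict[str, List[float]]) -> Dict[str, float]:
    return {tag: sum(scores) for tag, scores in grouped.items()}   # aggregate per key

user_prefs = {
    "u1": {"tang poem": 0.6, "solid geometry": 0.1},
    "u2": {"tang poem": 0.3, "literary language": 0.5},
}
print(reduce_stage(shuffle_stage(map_stage(user_prefs))))
```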
Step 503: determining the target content tag from the selectable content tags according to the selection weight.
In one embodiment of the present invention, the selectable content tags ranked in the top preset number of positions when the selection weights are sorted in descending order are determined as the target content tags.
Considering that the number of users watching a live broadcast is very large, the amount of generated preference data is also large, so the sorting workload would be heavy and the users' live content could not be updated in real time. Therefore, the users can be screened according to user portrait information to obtain a user group with large weight, and the selectable content tags are sorted and screened according to the preferences corresponding to this user group. Specifically, user activity can be determined from the user portrait information, and users whose activity satisfies a preset condition are added to the target user group. The frequency and duration of a user's viewing behavior, and the frequency and scale of the user's interactive behavior (such as the number of comments and the reward amount), can be determined from the user portrait information, and the user's activity is determined accordingly.
Furthermore, in order to improve the efficiency of selecting the target content tags and to update the live picture content watched by users in real time according to their preferences, an optimized sorting algorithm can be used to sort the tags by selection weight. Since only the top preset number of selectable content tags with the largest selection weights need to be selected, and the selection weights of all selectable content tags do not need to be fully sorted, such an optimized sorting algorithm improves the efficiency of screening target content tags by selection weight.
Specifically, a top preset number of selectable content tags with the largest selection weight may be selected by using a heap sorting method, and in a further embodiment of the present invention, step 503 further includes:
step 5031: and sorting the selection weights according to a heap sorting algorithm, and determining the selectable content labels of preset bits before descending sorting of the selection weights as the target content labels.
Specifically, the top preset number of selectable content tags with the largest selection weights can be obtained using a min-heap sorting method. Taking the selection weights of 10,000 selectable content tags (this could be all of them) as an example, the process of picking out the one hundred tags with the largest selection weights according to the min-heap algorithm includes the following steps:
The first 100 of the 10,000 values are first placed into an array to form a min-heap. The remaining values, from the 100th to the 10,000th, are then iterated over in a loop. In each iteration, it is judged whether the current value is larger than the heap top; if so, the heap top is removed first, the current value is placed into the heap, and min-heap ordering is performed among the 100 values. When the loop over the values from 100 to 10,000 is complete, the 100 values with the largest selection weights are obtained. The process of picking the top a tags with the largest selection weights is the same: pick the top a content tags. For example, for a = 3, the top-3 content tags and selection weights are (chinese-ancient-tang poem, 0.4), (chinese-ancient-song ci, 0.2), (math-geometry-solid geometry, 0.1).
Step 60: and updating the current live broadcast content corresponding to the user in real time according to the target content label.
In an embodiment of the present invention, the content dimensions to be updated and the target content features under those dimensions are determined according to the target content tag, and the live content of the current live broadcast is updated according to the target content features. The target content features may be obtained by looking up the target content tag in a preset content-tag database, or predicted from the target content tag by a preset content prediction model.
Thus, in a further embodiment of the present invention, step 60 further comprises:
step 601: when the selection weight of the target content label and the selection weight of the selectable content label corresponding to the current live broadcast are determined to meet a preset relation, updating the live broadcast content of the current live broadcast in real time according to the target content label and a preset content prediction model; and the content prediction model is used for determining updated live content related to the target content label according to the target content label and the current live content.
In an embodiment of the present invention, the preset relationship may be that the selection weight of the target content tag is greater than the selection weight of the selectable content tag corresponding to the current live broadcast. The target content tag may change with each calculation cycle and may be switched according to the selection weights. If the selection weight of a newly added target content tag exceeds the selection weight of the currently applied selectable content tag by more than a preset multiple threshold, the broadcast can switch to the new selectable content tag quickly; otherwise, the currently applied selectable content tag is used first and the new one is switched to afterwards. For example, for a = 3, the first calculation gives the top-3 selectable content tags and selection weights (chinese-ancient-tang poem, 0.4), (chinese-ancient-song ci, 0.2), (math-geometry-solid geometry, 0.1), and the chinese-ancient-tang poem tag is currently being applied. The second calculation gives the top-3 tags and selection weights (chinese-ancient-literary language, 0.6), (chinese-ancient-tang poem, 0.2), (math-geometry-solid geometry, 0.1). With a multiple threshold of 2, the new weight 0.6 is at least twice the current tag's weight of 0.2, so it is possible to switch quickly to the new content tag chinese-ancient-literary language.
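For illustration, the switching rule above can be sketched as follows in Python; the multiple threshold of 2 and the tag names follow the example, while the function name and structure are assumptions.

```python
# Sketch: switch to the new target tag immediately only when its selection weight
# exceeds the currently applied tag's weight by the preset multiple threshold.
def should_switch_now(new_weight: float, current_weight: float, multiple: float = 2.0) -> bool:
    return new_weight >= multiple * current_weight

current_tag, current_weight = "chinese-ancient-tang poem", 0.2   # weight at the latest calculation
new_tag, new_weight = "chinese-ancient-literary language", 0.6

if should_switch_now(new_weight, current_weight):
    print(f"switch to {new_tag} immediately")
else:
    print(f"finish {current_tag} first, then switch to {new_tag}")
```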
The content prediction model may be obtained by pre-training and may be one of a linear regression model, a neural network model, a decision tree model, or the like. The content prediction model is trained by taking content tag samples as input and the live content corresponding to those content tag samples as output.
In one embodiment of the present invention, step 601 further comprises:
step 6011: and determining the dimension of the content to be updated according to the live content of the current live broadcast.
In an embodiment of the present invention, the content dimensions to be updated are determined according to the content type of the live content currently being broadcast. For example, for a real-anchor live broadcast, the content dimensions to be updated may include text, score, related recommendations, and the like; for a virtual-anchor live broadcast, the content dimensions to be updated may include the character's image, appearance, clothing, background, props, and the like.
Step 6012: and determining content characteristic information of the target content label under the dimension of the content to be updated according to the content prediction model.
In an embodiment of the present invention, the content prediction model includes one of a linear regression model, a neural network model, a decision tree model, and the like, and the content prediction model outputs content feature information of the target content tag in multiple preset content dimensions, for example, when the target content tag is "basketball", the content feature information may include relevant feature information of NBA, ball star, and basket.
Step 6013: and updating the live broadcast content of the current live broadcast in real time according to the content characteristic information.
In an embodiment of the present invention, live content is transformed in real time according to content characteristic information, for example, content characteristic information is added on the basis of the current live content, or the current live content is replaced with live content corresponding to the content characteristic information.
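As a non-limiting sketch, updating the live content from the predicted content feature information might look like the following Python code; the content-dimension names and the predict() interface are assumptions standing in for the trained content prediction model, not the patent's actual API.

```python
# Sketch: choose the dimensions to update from the live type, query a (hypothetical)
# content prediction model for the target tag's feature info, and merge it into the
# current live content.
from typing import Callable, Dict

DIMENSIONS_BY_LIVE_TYPE = {
    "real_anchor":    ["text", "score", "related_recommendations"],
    "virtual_anchor": ["character_image", "appearance", "clothing", "background", "props"],
}

def update_live_content(live_type: str,
                        current_content: Dict[str, str],
                        target_tag: str,
                        predict: Callable[[str, str], str]) -> Dict[str, str]:
    updated = dict(current_content)
    for dim in DIMENSIONS_BY_LIVE_TYPE.get(live_type, []):
        # predict() stands in for the trained content prediction model.
        updated[dim] = predict(target_tag, dim)
    return updated

# Hypothetical stub model for illustration only.
stub_model = lambda tag, dim: f"{dim} themed around '{tag}'"
print(update_live_content("virtual_anchor", {"background": "plain"}, "basketball", stub_model))
```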
According to the embodiment of the present invention, user behavior data of a user watching a current live broadcast is acquired under a plurality of preset behavior feature dimensions, where different behavior feature dimensions represent different types of user behavior with respect to the live broadcast, such as liking, rewarding, and commenting on the live content currently being watched. Considering that the number of users watching a live broadcast can be very large (for example, millions), when the behavior data of each user is analyzed in real time and live content corresponding to the user is generated in real time from the analysis result, the data need to be reduced in dimension in order to improve the efficiency of live content generation. Further, since each user acts as a single behavioral subject, there is correlation between behaviors in different dimensions; for example, a user who gives more likes is also more likely to give more rewards. Therefore, the correlation between the plurality of preset behavior feature dimensions can be determined from the user behavior data, and the user behavior data is screened according to the correlation to obtain screened behavior data, that is, more representative feature dimensions are selected from the plurality of behavior feature dimensions. The user's preference for each selectable content tag is then determined from the screened behavior data, and the target content tag corresponding to the user is determined from the selectable content tags according to the preference. Finally, the live content currently shown to the user is updated in real time according to the target content tag. By using the correlation between the behavior feature dimensions corresponding to the user behavior data, the embodiment of the present invention can reduce the dimensionality of the user behavior data and select representative behavior feature dimensions, thereby reducing the amount of data to be analyzed and improving the efficiency of preference analysis based on the user's behavior data; the live picture can then be changed in real time according to the target content tag, so that each user watching the live broadcast obtains, in real time, the live content that best matches his or her preferences, which improves the user's live viewing experience.
Fig. 4 is a schematic structural diagram illustrating a live content generating apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus 70 includes: an obtaining module 701, a first determining module 702, a screening module 703, a second determining module 704, a third determining module 705, and an updating module 706.
An obtaining module 701, configured to obtain user behavior data of a user watching a current live broadcast; the user behavior data correspond to a plurality of preset behavior characteristic dimensions;
a first determining module 702, configured to determine, according to the user behavior data, a degree of correlation between the multiple preset behavior feature dimensions;
a screening module 703, configured to screen the user behavior data according to the relevance to obtain screened behavior data;
a second determining module 704, configured to determine, according to the screened behavior data, preference degrees of the user for each selectable content tag respectively;
a third determining module 705, configured to determine, according to the preference, a target content tag corresponding to the user from the selectable content tags;
an updating module 706, configured to update, in real time, the current live content corresponding to the user according to the target content tag.
The operation process of the live content generating device provided by the embodiment of the invention is substantially the same as that of the method embodiment, and is not described again.
The embodiment of the present invention provides a live content generation apparatus by which user behavior data of a user watching a current live broadcast is acquired under a plurality of preset behavior feature dimensions, where different behavior feature dimensions represent different types of user behavior with respect to the live broadcast, such as liking, rewarding, and commenting on the live content currently being watched. Considering that the number of users watching a live broadcast can be very large (for example, millions), when the behavior data of each user is analyzed in real time and live content corresponding to the user is generated in real time from the analysis result, the data need to be reduced in dimension in order to improve the efficiency of live content generation. Further, since each user acts as a single behavioral subject, there is correlation between behaviors in different dimensions; for example, a user who gives more likes is also more likely to give more rewards. Therefore, the correlation between the plurality of preset behavior feature dimensions can be determined from the user behavior data, and the user behavior data is screened according to the correlation to obtain screened behavior data, that is, more representative feature dimensions are selected from the plurality of behavior feature dimensions. The user's preference for each selectable content tag is then determined from the screened behavior data, and the target content tag corresponding to the user is determined from the selectable content tags according to the preference. Finally, the live content currently shown to the user is updated in real time according to the target content tag. By using the correlation between the behavior feature dimensions corresponding to the user behavior data, the embodiment of the present invention can reduce the dimensionality of the user behavior data and select representative behavior feature dimensions, thereby reducing the amount of data to be analyzed and improving the efficiency of preference analysis based on the user's behavior data; the live picture can then be changed in real time according to the target content tag, so that each user watching the live broadcast obtains, in real time, the live content that best matches his or her preferences, which improves the user's live viewing experience.
Fig. 5 is a schematic structural diagram of a live content generating device according to an embodiment of the present invention; the specific embodiment of the present invention does not limit the specific implementation of the live content generating device.
As shown in fig. 5, the live content generating apparatus may include: a processor (processor) 802, a Communications Interface 804, a memory 806, and a communication bus 808.
Wherein: the processor 802, the communication interface 804, and the memory 806 communicate with one another via the communication bus 808. The communication interface 804 is used for communicating with network elements of other devices, such as clients or other servers. The processor 802 is configured to execute the program 810, and may specifically execute the relevant steps in the embodiment of the live content generation method described above.
In particular, program 810 may include program code comprising computer-executable instructions.
The processor 802 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The live content generation device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 806 stores a program 810. The memory 806 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 810 may be specifically invoked by the processor 802 to cause a live content generating device to perform the following operations:
acquiring user behavior data of a user watching a current live broadcast; the user behavior data correspond to a plurality of preset behavior characteristic dimensions;
determining the correlation among the multiple preset behavior feature dimensions according to the user behavior data;
screening the user behavior data according to the correlation degree to obtain screened behavior data;
respectively determining the preference degree of the user for each selectable content label according to the screened behavior data;
determining a target content label corresponding to the user from the selectable content labels according to the preference degree;
and updating the current live broadcast content corresponding to the user in real time according to the target content label.
The operation process of the live content generating device provided by the embodiment of the invention is substantially the same as that of the method embodiment, and is not described again.
With the live content generation device provided by the embodiment of the present invention, user behavior data of a user watching a current live broadcast is acquired under a plurality of preset behavior feature dimensions, where different behavior feature dimensions represent different types of user behavior with respect to the live broadcast, such as liking, rewarding, and commenting on the live content currently being watched. Considering that the number of users watching a live broadcast can be very large (for example, millions), when the behavior data of each user is analyzed in real time and live content corresponding to the user is generated in real time from the analysis result, the data need to be reduced in dimension in order to improve the efficiency of live content generation. Further, since each user acts as a single behavioral subject, there is correlation between behaviors in different dimensions; for example, a user who gives more likes is also more likely to give more rewards. Therefore, the correlation between the plurality of preset behavior feature dimensions can be determined from the user behavior data, and the user behavior data is screened according to the correlation to obtain screened behavior data, that is, more representative feature dimensions are selected from the plurality of behavior feature dimensions. The user's preference for each selectable content tag is then determined from the screened behavior data, and the target content tag corresponding to the user is determined from the selectable content tags according to the preference. Finally, the live content currently shown to the user is updated in real time according to the target content tag. By using the correlation between the behavior feature dimensions corresponding to the user behavior data, the embodiment of the present invention can reduce the dimensionality of the user behavior data and select representative behavior feature dimensions, thereby reducing the amount of data to be analyzed and improving the efficiency of preference analysis based on the user's behavior data; the live picture can then be changed in real time according to the target content tag, so that each user watching the live broadcast obtains, in real time, the live content that best matches his or her preferences, which improves the user's live viewing experience.
The embodiment of the invention provides a computer-readable storage medium, wherein at least one executable instruction is stored in the storage medium, and when the executable instruction runs on live content generation equipment, the live content generation equipment is enabled to execute a live content generation method in any method embodiment.
The executable instructions may be specifically configured to cause a live content generation device to perform the following operations:
acquiring user behavior data of a user watching a current live broadcast; the user behavior data correspond to a plurality of preset behavior characteristic dimensions;
determining the correlation among the multiple preset behavior feature dimensions according to the user behavior data;
screening the user behavior data according to the correlation degree to obtain screened behavior data;
respectively determining the preference degree of the user for each selectable content label according to the screened behavior data;
determining a target content label corresponding to the user from the selectable content labels according to the preference degree;
and updating the current live broadcast content corresponding to the user in real time according to the target content label.
The operation process of the executable instructions stored in the computer storage medium provided by the embodiment of the invention is substantially the same as that of the method embodiment, and is not described again.
With the executable instructions stored in the computer storage medium provided by the embodiment of the present invention, user behavior data of a user watching a current live broadcast is acquired under a plurality of preset behavior feature dimensions, where different behavior feature dimensions represent different types of user behavior with respect to the live broadcast, such as liking, rewarding, and commenting on the live content currently being watched. Considering that the number of users watching a live broadcast can be very large (for example, millions), when the behavior data of each user is analyzed in real time and live content corresponding to the user is generated in real time from the analysis result, the data need to be reduced in dimension in order to improve the efficiency of live content generation. Further, since each user acts as a single behavioral subject, there is correlation between behaviors in different dimensions; for example, a user who gives more likes is also more likely to give more rewards. Therefore, the correlation between the plurality of preset behavior feature dimensions can be determined from the user behavior data, and the user behavior data is screened according to the correlation to obtain screened behavior data, that is, more representative feature dimensions are selected from the plurality of behavior feature dimensions. The user's preference for each selectable content tag is then determined from the screened behavior data, and the target content tag corresponding to the user is determined from the selectable content tags according to the preference. Finally, the live content currently shown to the user is updated in real time according to the target content tag. By using the correlation between the behavior feature dimensions corresponding to the user behavior data, the embodiment of the present invention can reduce the dimensionality of the user behavior data and select representative behavior feature dimensions, thereby reducing the amount of data to be analyzed and improving the efficiency of preference analysis based on the user's behavior data; the live picture can then be changed in real time according to the target content tag, so that each user watching the live broadcast obtains, in real time, the live content that best matches his or her preferences, which improves the user's live viewing experience.
The embodiment of the invention provides a live content generation device, which is used for executing the live content generation method.
Embodiments of the present invention provide a computer program, where the computer program can be called by a processor to enable a live content generating device to execute a live content generating method in any of the above method embodiments.
Embodiments of the present invention provide a computer program product, where the computer program product includes a computer program stored on a computer-readable storage medium, and the computer program includes program instructions, when the program instructions are run on a computer, cause the computer to execute the live content generation method in any of the above method embodiments.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components in the embodiments may be combined into one module or unit or component, and may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (10)

1. A live content generation method, comprising:
acquiring user behavior data of a user watching a current live broadcast; the user behavior data correspond to a plurality of preset behavior feature dimensions;
determining a correlation degree among the plurality of preset behavior feature dimensions according to the user behavior data;
screening the user behavior data according to the correlation degree to obtain screened behavior data;
respectively determining a preference degree of the user for each selectable content tag according to the screened behavior data;
determining a target content tag corresponding to the user from the selectable content tags according to the preference degree;
and updating the current live broadcast content corresponding to the user in real time according to the target content tag.
2. The method of claim 1, wherein the screening the user behavior data according to the correlation degree to obtain screened behavior data further comprises:
determining the user behavior data under behavior feature dimensions whose correlation degree is smaller than a preset threshold as the screened behavior data.
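One way to read the screening step of claims 1 and 2 is as a redundancy filter over the preset behavior feature dimensions: compute pairwise correlations and keep only the data from dimensions whose correlation with every other dimension stays below the preset threshold. The sketch below assumes Pearson correlation, a threshold of 0.8 and the function and dimension names shown; none of these details are specified in the patent.

```python
import numpy as np

def screen_behavior_data(behavior, threshold=0.8):
    """behavior: dict mapping a behavior feature dimension to a list of numeric
    observations. Keeps the dimensions whose maximum absolute correlation with
    any other dimension is below `threshold`, i.e. drops redundant dimensions."""
    names = list(behavior)
    corr = np.corrcoef(np.array([behavior[n] for n in names]))  # pairwise Pearson correlations
    kept = {}
    for i, name in enumerate(names):
        others = np.delete(np.abs(corr[i]), i)  # ignore the self-correlation of 1.0
        if others.max() < threshold:
            kept[name] = behavior[name]
    return kept

# Hypothetical dimensions: "watch_time" and "replay_count" are perfectly correlated,
# so both are dropped and only the weakly correlated "comment_count" survives.
data = {
    "watch_time":    [10, 20, 30, 40],
    "replay_count":  [1, 2, 3, 4],
    "comment_count": [5, 1, 4, 2],
}
print(screen_behavior_data(data))  # {'comment_count': [5, 1, 4, 2]}
```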
3. The method of claim 1, wherein the respectively determining the preference degree of the user for each selectable content tag according to the screened behavior data further comprises:
performing weighted summation processing on the screened behavior data under all the behavior feature dimensions corresponding to the user to obtain the preference degree of the user for the selectable content tag.
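Claim 3 obtains the preference degree for one selectable content tag as a weighted sum over the screened behavior data. A minimal sketch of that computation, assuming each tag carries a per-dimension weight vector; the weight values, tag names and dimension names are illustrative only.

```python
def tag_preference(screened, tag_weights):
    """screened: dict behavior feature dimension -> aggregated value for the user.
    tag_weights: dict dimension -> weight for one selectable content tag.
    Returns the weighted sum used as the user's preference degree for that tag."""
    return sum(tag_weights.get(dim, 0.0) * value for dim, value in screened.items())

screened = {"comment_count": 12.0, "gift_count": 3.0}
weights_per_tag = {
    "sports":  {"comment_count": 0.2, "gift_count": 0.8},
    "variety": {"comment_count": 0.7, "gift_count": 0.3},
}
preferences = {tag: tag_preference(screened, w) for tag, w in weights_per_tag.items()}
print(preferences)  # {'sports': 4.8, 'variety': 9.3}
```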
4. The method according to claim 1, wherein the determining the target content tag corresponding to the user from the selectable content tags according to the preference degree comprises:
when the adjustment granularity of the current live broadcast content is a single user, sorting the preference degrees in descending order and determining the selectable content tags within a preset number of top positions as the target content tags;
when the adjustment granularity of the current live broadcast content is multiple users, performing weighted summation on the preference degrees corresponding to a target user group for each selectable content tag to obtain a selection weight corresponding to the selectable content tag, the target user group being obtained by screening a plurality of users according to user portrait information; and
determining the target content tag from the selectable content tags according to the selection weight.
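In the multi-user branch of claim 4, the selection weight of a tag can be formed by weighting and summing the preference degrees of the users in the target user group. The sketch below uses uniform per-user weights; the weighting scheme, the group composition and all names are assumptions, since the claim leaves them open.

```python
def selection_weights(group_preferences, user_weights=None):
    """group_preferences: dict user_id -> {tag: preference degree} for the target user group.
    user_weights: optional dict user_id -> weight; defaults to uniform weights.
    Returns dict tag -> selection weight obtained by weighted summation over the group."""
    if user_weights is None:
        user_weights = {uid: 1.0 / len(group_preferences) for uid in group_preferences}
    weights = {}
    for uid, prefs in group_preferences.items():
        for tag, p in prefs.items():
            weights[tag] = weights.get(tag, 0.0) + user_weights[uid] * p
    return weights

group = {
    "user_a": {"sports": 4.8, "variety": 9.3},
    "user_b": {"sports": 7.0, "variety": 2.0},
}
print(selection_weights(group))  # {'sports': 5.9, 'variety': 5.65}
```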
5. The method of claim 4, wherein the determining the target content tag from the selectable content tags according to the selection weight comprises:
sorting the selection weights in descending order according to a heap sorting algorithm, and determining the selectable content tags within a preset number of top positions as the target content tags.
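Claim 5 selects the top tags by selection weight with a heap sorting algorithm. In Python this maps naturally onto the heap-based partial sort in `heapq.nlargest`; the preset number of positions (2 here) is an assumed value.

```python
import heapq

def top_tags(weights, preset_positions=2):
    """Return the selectable content tags with the highest selection weights,
    using a heap-based partial sort instead of fully sorting all tags."""
    ranked = heapq.nlargest(preset_positions, weights.items(), key=lambda item: item[1])
    return [tag for tag, _ in ranked]

weights = {"sports": 5.9, "variety": 5.65, "music": 1.2}
print(top_tags(weights))  # ['sports', 'variety']
```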
6. The method of claim 4, wherein the updating the current live broadcast content in real time according to the target content tag comprises:
when it is determined that the selection weight of the target content tag and the selection weight of the selectable content tag corresponding to the current live broadcast meet a preset relation, updating the live broadcast content of the current live broadcast in real time according to the target content tag and a preset content prediction model; wherein the content prediction model is used for determining updated live broadcast content related to the target content tag according to the target content tag and the current live broadcast content.
7. The method of claim 6, wherein the updating the live broadcast content of the current live broadcast in real time according to the target content tag and a preset content prediction model comprises:
determining a dimension of content to be updated according to the live broadcast content of the current live broadcast;
determining content characteristic information of the target content tag under the dimension of the content to be updated according to the content prediction model;
and updating the live broadcast content of the current live broadcast in real time according to the content characteristic information.
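Claims 6 and 7 first check a preset relation between the selection weight of the target content tag and that of the tag behind the current live broadcast, and only then query a content prediction model for content characteristic information along the dimension to be updated. The patent defines neither the relation nor the model, so this sketch injects both as callables; every name and the content structure are hypothetical.

```python
def maybe_update_content(current_content, current_tag, target_tag, weights,
                         relation, predict_features):
    """relation(w_target, w_current) -> bool stands in for the preset relation of claim 6.
    predict_features(tag, content, dimension) stands in for the content prediction
    model of claim 7, returning content characteristic information for `dimension`."""
    if not relation(weights[target_tag], weights[current_tag]):
        return current_content  # preset relation not met: keep the current live content
    dimension = current_content["dimension_to_update"]  # dimension derived from the current content
    features = predict_features(target_tag, current_content, dimension)
    updated = dict(current_content)
    updated[dimension] = features  # apply the predicted characteristic information
    return updated

# Toy usage: update whenever the target tag's selection weight exceeds the current one.
content = {"dimension_to_update": "topic", "topic": "warm-up chat"}
weights = {"sports": 5.9, "variety": 5.65}
print(maybe_update_content(content, "variety", "sports", weights,
                           relation=lambda wt, wc: wt > wc,
                           predict_features=lambda tag, c, dim: f"{tag} highlights"))
# {'dimension_to_update': 'topic', 'topic': 'sports highlights'}
```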
8. An apparatus for generating live content, the apparatus comprising:
the acquisition module is used for acquiring user behavior data of a user watching the current live broadcast; the user behavior data correspond to a plurality of preset behavior feature dimensions;
the first determining module is used for determining a correlation degree among the plurality of preset behavior feature dimensions according to the user behavior data;
the screening module is used for screening the user behavior data according to the correlation degree to obtain screened behavior data;
the second determining module is used for respectively determining the preference degree of the user for each selectable content tag according to the screened behavior data;
the third determining module is used for determining, according to the preference degree, a target content tag corresponding to the user from the selectable content tags;
and the updating module is used for updating the current live broadcast content corresponding to the user in real time according to the target content tag.
9. A live content generation device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations of the live content generation method of any one of claims 1-7.
10. A computer-readable storage medium having stored therein at least one executable instruction that, when executed on a live content generation device, causes the live content generation device to perform operations of the live content generation method of any one of claims 1-7.
CN202211527211.1A 2022-12-01 2022-12-01 Live content generation method, device, equipment and computer storage medium Pending CN115878891A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211527211.1A CN115878891A (en) 2022-12-01 2022-12-01 Live content generation method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211527211.1A CN115878891A (en) 2022-12-01 2022-12-01 Live content generation method, device, equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN115878891A true CN115878891A (en) 2023-03-31

Family

ID=85765185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211527211.1A Pending CN115878891A (en) 2022-12-01 2022-12-01 Live content generation method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN115878891A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117150124A (en) * 2023-08-16 2023-12-01 湖北太昇科技有限公司 User characteristic analysis method and system based on smart home

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination