CN108471544B - Method and device for constructing video user portrait - Google Patents

Method and device for constructing video user portrait

Info

Publication number
CN108471544B
Authority
CN
China
Prior art keywords
target
video
actor
user
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810262253.4A
Other languages
Chinese (zh)
Other versions
CN108471544A (en)
Inventor
王程明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201810262253.4A
Publication of CN108471544A
Application granted
Publication of CN108471544B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44222Analytics of user selections, e.g. selection of programs or purchase activity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method and a device for constructing a video user portrait, wherein the method comprises the following steps: acquiring a target video of a target user, and dividing the target video into at least one video segment; determining actor information in each video segment of the target video, counting the number of occurrences of each target actor in the actor information, and calculating a weight value of each target actor in the target video based on the number of occurrences of each target actor; and calculating preference characterization values of the target user for each target actor according to the weight values of the target actors in the target video, and generating the target user portrait based on those preference characterization values. By constructing the user portrait at the actor level within the video, the method and the device improve the accuracy of the user portrait.

Description

Method and device for constructing video user portrait
Technical Field
The invention relates to the technical field of video recommendation, in particular to a method and a device for constructing a video user portrait.
Background
With the rapid development of the internet, network video has become one of the main sources from which people obtain video and entertainment information. As the number of videos grows rapidly, large video websites and clients often recommend videos to a user according to the user's degree of preference, in order to improve the user experience.
One of the key technologies used when recommending video information to a user is building a user portrait. A user portrait is a basic model obtained by analyzing a user's behavior attributes (such as browsing or viewing records) and basic attributes (such as the user's basic information) to abstract an overall view of the user, and it supports big-data applications such as personalized recommendation and automated marketing. User portrait data in the current video industry is mainly built at the granularity of the videos a user watches. For example, suppose a video features actor A, actor B, and actor C, and a user watches it because they like actor C. If the user portrait is constructed at video-level granularity, the resulting portrait indicates that the user likes actor A, actor B, and actor C, which does not accurately reflect the user's real preference and may make the video recommendation results inaccurate.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for constructing a video user portrait, which improve the accuracy of the user portrait by constructing it at the actor level within the video.
To achieve this purpose, the invention provides the following technical solutions:
a method of constructing a video user representation, comprising:
acquiring a target video of a target user, and dividing the target video into at least one video segment;
determining actor information in each video clip of the target video, counting the occurrence times of each target actor in the actor information, and calculating the weight value of each target actor in the target video based on the occurrence times of each target actor;
and calculating preference characterization values of the target user for each target actor according to the weight values of each target actor in the target video, and generating the target user portrait based on the preference characterization values of the target user for each target actor.
Preferably, the obtaining a target video of a target user, and dividing the target video into at least one video segment includes:
analyzing the obtained target video of the target user to obtain actor information in the target video, wherein the actor information comprises at least one target actor;
acquiring the appearance time of each target actor in the target video, and marking the target video according to the appearance time of each target actor;
and dividing the marked target video to obtain at least one video segment.
Preferably, the determining actor information in each video segment of the target video, counting the occurrence times of each target actor in the actor information, and calculating a weight value of each target actor in the target video based on the occurrence times of each target actor includes:
determining actor information in each video segment of the target video;
counting the occurrence times of the corresponding target actor in each video clip according to the actor information in each video clip;
calculating the total occurrence times of all target actors in the target video according to the occurrence times of the target actors in each video segment;
and calculating the ratio of the total times of the appearance of all the target actors to the times of the appearance of the target actors, and recording the ratio as the weighted value of the target actors.
Preferably, the calculating, according to the weight value of each target actor in the target video, a preference characterization value of the target user for each target actor, and generating the target user portrait based on the preference characterization value of the target user for each target actor includes:
detecting and obtaining a watching behavior record of the target user for each video clip, and determining the watching times of the target user for each video clip according to the watching behavior record;
calculating the product between the watching times of each video clip and the weight value of the target actor corresponding to the video clip, and recording the product as the preference characterization value of the target user for the target actor;
and generating the target user portrait according to the preference characterization values of the target user for each actor.
Preferably, when the viewing behavior record includes fast-forward play record and playback record of video segments by the target user, the detecting obtains the viewing behavior record of each video segment by the target user, and determines the number of times of viewing of each video segment by the target user according to the viewing behavior record, including:
if the target user fast-forward plays the first video clip, recording the playing times of the first video clip as zero; if the target user plays back the second video clip, adding one to the playing times of the second video clip; and counting in sequence to obtain the watching times of the target user for each video clip.
An apparatus for constructing a video user portrait, comprising:
an acquisition module, used for acquiring a target video of a target user and dividing the target video into at least one video segment;
the determining module is used for determining actor information in each video segment of the target video, counting the occurrence times of each target actor in the actor information, and calculating the weight value of each target actor in the target video based on the occurrence times of each target actor;
and the generating module is used for calculating a preference characterization value of the target user for each target actor according to the weight value of each target actor in the target video, and generating the target user portrait based on the preference characterization value of the target user for each target actor.
Preferably, the obtaining module includes:
the analysis unit is used for analyzing the acquired target video of the target user to obtain actor information in the target video, wherein the actor information comprises at least one target actor;
the marking unit is used for acquiring the appearance time of each target actor in the target video and marking the target video according to the appearance time of each target actor;
and the dividing unit is used for dividing the marked target video to obtain at least one video segment.
Preferably, the determining module comprises:
a determining unit for determining actor information in each video segment of the target video;
the counting unit is used for counting the occurrence frequency of the corresponding target actor in each video segment according to the actor information in each video segment;
the first calculating unit is used for calculating the total occurrence times of all target actors in the target video according to the occurrence times of the target actors in each video segment;
and the second calculating unit is used for calculating the ratio of the total times of appearance of all the target actors to the times of appearance of the target actors, and recording the ratio as the weight value of the target actors.
Preferably, the generating module comprises:
the detection unit is used for detecting and obtaining the watching behavior record of the target user to each video clip and determining the watching times of the target user to each video clip according to the watching behavior record;
the third calculating unit is used for calculating the product between the watching times of each video segment and the weight value of the target actor corresponding to the video segment, and recording the product as the preference characterization value of the target user for the target actor;
and the generating unit is used for generating the target user portrait according to the preference characterization values of the target user for each actor.
Preferably, the detection unit is specifically configured to:
when the watching behavior record comprises fast-forward playing record and playback record of the target user on the video clips, if the target user carries out fast-forward playing on a first video clip, the playing times of the first video clip are recorded as zero, if the target user carries out playback on a second video clip, the playing times of the second video clip are added by one, and the watching times of the target user on each video clip are obtained through counting in sequence.
Compared with the prior art, the method and the device divide the target video watched by the target user into video segments according to the actor information, and then calculate actor weight values to obtain the preference characterization value of the target user for each actor. Because the preference characterization value, which represents how much the target user likes an actor, is computed precisely from actor information, it captures the degree of preference for a given actor at the actor level, forming an accurate positioning of the video user at the actor level. Compared with the whole-video statistics of the prior art, this improves the accuracy of the user portrait.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a method for constructing a video user portrait according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a target video partitioning method according to a second embodiment of the present invention;
FIG. 3 is a schematic flowchart of a user portrait generation method according to a second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an apparatus for constructing a video user portrait according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiment One
Referring to FIG. 1, which is a schematic flowchart of a method for constructing a video user portrait according to an embodiment of the present invention, the method may include the following steps:
s11, acquiring a target video of a target user, and dividing the target video into at least one video segment;
the target video refers to a video watched by a user, and the embodiment of the invention performs user imaging based on actor levels, so that the target video with statistical analysis significance is a video containing characters, and if the target user watches a video without characters, which is similar to animal world, the video does not have statistical analysis significance and cannot be identified as the target video. Before generating a user portrait for a target user, a certain amount of target videos need to be statistically analyzed, so that the target user can be accurately portrait, and more accurate video information is recommended for the target user. In each embodiment of the present invention, each target video meeting the statistical number is analyzed, only the analysis process of one of the target videos is described, and the method has universality and is also applicable to other target videos.
The target user refers to the video user for whom a video user portrait is to be constructed; such a user may frequently watch or access corresponding videos.
When dividing the target video, the division is driven mainly by the actor information and the times at which each actor appears: the time periods in which an actor appears in the target video are identified by face recognition, and the whole video is divided into several video segments according to the actors' appearance time points, with each video segment corresponding to one or more actors. Because the divided video segments correspond to individual actors, portraying the user from the actors' perspective gives a finer granularity than the film as a whole, which makes the recommendation results more accurate. This part is described in detail in another embodiment of the present invention.
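For illustration only, the following Python sketch shows one possible way to perform such segmentation once per-actor appearance intervals are available; the face-recognition step that produces the intervals is assumed, and all names and data are hypothetical.

```python
from typing import Dict, List, Tuple

def split_into_segments(intervals: Dict[str, List[Tuple[float, float]]]):
    """intervals maps actor name -> list of (start, end) appearance times."""
    # Cut at every time point where some actor enters or leaves the frame.
    boundaries = sorted({t for spans in intervals.values() for s in spans for t in s})
    segments = []
    for start, end in zip(boundaries, boundaries[1:]):
        cast = [actor for actor, spans in intervals.items()
                if any(s <= start and end <= e for s, e in spans)]
        if cast:  # keep only spans in which at least one target actor is on screen
            segments.append({"start": start, "end": end, "actors": cast})
    return segments

# Hypothetical appearance intervals (in seconds) for a two-actor video:
print(split_into_segments({"A": [(0, 10), (20, 30)], "B": [(5, 15)]}))
# -> segments (0,5): A; (5,10): A and B; (10,15): B; (20,30): A
```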
S12, determining actor information in each video segment of the target video, counting the occurrence times of each target actor in the actor information, and calculating the weight value of each target actor in the target video based on the occurrence times of each target actor;
since the content presented by each video segment is different from the corresponding actor, and to construct the user's portrait at the actor level pair, it is necessary to specify actor information in each video segment, where the actor information represents the main actor of the target video, i.e. the starring in the general sense, because the match or other people except the starring in different videos are not fixed, the appearance in each video is also dispersed, the statistical significance is poor, and more actor information that is also the starring actor is concerned by the user and the attention of other actors is poor, so in the embodiment of the present invention, at least one target actor, i.e. the main actor, is included in the actor information in each video segment.
In order to generate the user portrait accurately, a weight value needs to be assigned to each target actor in the actor information, and the weights are assigned based on the number of times each actor appears.
The method specifically comprises the following steps:
determining actor information in each video segment of the target video;
counting the occurrence times of a target actor corresponding to each video clip according to actor information in each video clip, wherein the actor information comprises the target actor;
calculating the total occurrence times of all target actors in the target video according to the occurrence times of the target actors in each video segment;
and calculating the ratio of the total times of the appearance of all the target actors to the times of the appearance of the target actors, and recording the ratio as the weighted value of the target actors.
For example, when the target video is processed, it is first split into several segments according to the actors' appearances, and each actor appearing once in one segment is recorded as one occurrence. Suppose there are two video segments, where segment 1 is a dialogue between actor A and actor B, and segment 2 is a performance by actor A and actor C. The total number of occurrences of all actors is then 4; actor A appears 2 times and actor C appears 1 time, so the weight of actor A is 4/2 = 2 and the weight of actor C is 4/1 = 4.
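A minimal sketch of this weighting rule, assuming each segment is represented simply by the list of target actors appearing in it (names and data are illustrative):

```python
from collections import Counter

def actor_weights(segments):
    """segments: one list of actor names per video segment."""
    # Each actor is counted once per segment in which they appear.
    counts = Counter(actor for cast in segments for actor in cast)
    total = sum(counts.values())  # total occurrences of all target actors
    return {actor: total / n for actor, n in counts.items()}

# Segment 1: dialogue between A and B; segment 2: A performing with C.
print(actor_weights([["A", "B"], ["A", "C"]]))
# -> {'A': 2.0, 'B': 4.0, 'C': 4.0}, matching 4/2 = 2 and 4/1 = 4 above
```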
And S13, calculating preference characterization values of the target user for the target actors according to the weight values of the target actors in the target video, and generating the target user portrait based on the preference characterization values of the target user for the target actors.
Because the target user likes different target actors to different degrees, the number of times each video segment is watched may differ when the target video is viewed. For example, if a video segment features an actor the user likes, the video user may watch that segment again through playback after viewing it; conversely, if the user dislikes a segment, the user may skip it with a fast-forward operation. Counting the watching times of each video segment therefore reflects the user's degree of preference for the actors better, and the statistics are more precise than those of the traditional whole-film approach.
A preference characterization value, based on the watching times and the actors' weight values, is then used to express the target user's degree of preference for each target actor, which describes that preference more objectively. A user portrait of the target user at the actor-preference level can thus be generated.
For example, continuing the example in S12, suppose the user watches segment 2 twice and segment 1 once; the preference characterization value for actor C is then 4 × 2 = 8. The user portrait is obtained by analyzing the user's behavior attributes (such as browsing or viewing records) and basic attributes (such as the user's basic information) to abstract an overall view of the user. Because the portrait here is built from the user's preference values for the actors, it reflects the user's degree of preference for each actor; generating the portrait means analyzing each preference value as the user's behavior information, from which the overall view of the user is abstracted.
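Continuing the same illustrative sketch, the preference characterization value can be accumulated as watch count × actor weight over the segments in which an actor appears, reusing actor_weights from above (the data layout is again an assumption):

```python
def preference_values(segments, views, weights):
    """views[i] is the watch count of segment i; segments[i] lists its cast."""
    prefs = {}
    for cast, n_views in zip(segments, views):
        for actor in cast:
            # watch count of the segment times the actor's weight value
            prefs[actor] = prefs.get(actor, 0) + n_views * weights[actor]
    return prefs

segs = [["A", "B"], ["A", "C"]]
# Segment 1 watched once, segment 2 watched twice:
print(preference_values(segs, [1, 2], actor_weights(segs)))
# -> actor C: 2 * 4.0 = 8.0, as in the example above
```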
According to the technical scheme disclosed in this embodiment of the invention, the target video watched by the target user is divided into video segments according to the actor information it contains, and the preference characterization value of the target user for each actor is then obtained by calculating the actors' weight values. Because the preference characterization value representing the target user's degree of preference for an actor is computed precisely from actor information, it captures the preference for a given actor at the actor level, forming an accurate positioning of the video user at the actor level; compared with the whole-video statistics of the prior art, this improves the accuracy of the user portrait.
Embodiment Two
Following the method for constructing a video user portrait provided in the first embodiment of the present invention, the method is further described in a second embodiment with reference to a specific application scenario. Referring to FIG. 2, which is a schematic flowchart of target video partitioning provided in the second embodiment of the present invention, the process includes:
s111, analyzing the obtained target video of the target user to obtain actor information in the target video, wherein the actor information comprises at least one target actor;
s112, acquiring the occurrence time of each target actor in the target video, and marking the target video according to the occurrence time of each target actor;
s113, dividing the marked target video to obtain at least one video clip.
Specifically, a film X watched by the video user is first obtained, the time periods in which the main actors of film X appear are identified by face recognition, and the whole film X is divided into a plurality of video segments according to the time points at which the actors appear, each video segment corresponding to one or more target actors.
For example, the target actors in the actor information of the movie X are actor A, actor B, and actor C; the time periods in which each actor appears are marked, and the marked target video is divided into the following video segments:

Segment 1: actor A
Segment 2: actor B
Segment 3: actor A and actor B
Segment 4: actor C
An embodiment of the present invention further provides a method for generating a user portrait; referring to FIG. 3, the method includes:
s131, detecting and obtaining the watching behavior record of the target user to each video clip, and determining the watching times of the target user to each video clip according to the watching behavior record;
s132, calculating the product between the watching times of each video clip and the weight value of the target actor corresponding to the video clip, and recording the product as a favorite representation value of the target user to the target actor;
and S133, generating the target user portrait according to the favorite representation values of the target user to each actor.
When the watching times are counted in S131, they must be recorded according to the user's viewing behavior, such as fast-forward or playback. Fast-forward and playback are only the two specific reference statistical criteria provided in this embodiment; other criteria, such as adding one to the watching count on a rewatch or measuring from the perspective of slow playback, also belong to the inventive idea of the present invention. When the watching behavior record includes the target user's fast-forward play records and playback records for video segments, the counting proceeds as follows: if the target user fast-forwards through a first video segment, the playing times of the first video segment are recorded as zero; if the target user plays back a second video segment, the playing times of the second video segment are increased by one; and the watching times of the target user for each video segment are counted in turn.
For example, if the user watches movie X without any fast-forward or playback operation, this is equivalent to watching the entire movie once, and the watching times of segments 1, 2, 3 and 4 are all equal. If the user skips a video segment by fast-forwarding, the watching count of that segment is recorded as 0 or decreased by one; if the video user repeatedly watches a segment, its watching count increases accordingly. Still taking movie X as an example, assume the watching counts of segments 1, 2, 3 and 4 are as follows:
Segment 1: watched 1 time
Segment 2: watched 3 times
Segment 3: watched 1 time
Segment 4: watched 1 time
These counts are obtained from statistics on the user's fast-forward and playback records, and a preference characterization value is then calculated from the watching times and the actor's weight value.
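The counting rule can be sketched as follows; the event format is an assumption, since only the rule itself is specified above:

```python
def count_views(n_segments, events):
    """events: (segment_index, action) pairs, where action is
    'play', 'fast_forward', or 'playback'."""
    views = [0] * n_segments
    for idx, action in events:
        if action in ('play', 'playback'):
            views[idx] += 1   # a normal pass or a replay adds one view
        elif action == 'fast_forward':
            views[idx] = 0    # a fast-forwarded segment is recorded as zero
    return views

# Film X: one full viewing, then segment 2 (index 1) played back twice.
events = [(i, 'play') for i in range(4)] + [(1, 'playback'), (1, 'playback')]
print(count_views(4, events))  # -> [1, 3, 1, 1], matching the counts above
```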
For example, taking film X again: actor A makes 1 appearance in segment 1, actor B makes 1 appearance in segment 2, actors A and B together make 2 appearances in segment 3, and actor C makes 1 appearance in segment 4. The total number of appearances of all actors is therefore 1 + 1 + 2 + 1 = 5, with actor A appearing 2 times, actor B appearing 2 times, and actor C appearing 1 time.
The weight of actor A is 5/2 = 2.5;
the weight of actor B is 5/2 = 2.5;
the weight of actor C is 5/1 = 5.
The user's preference value for each actor is then calculated: the preference characterization value of the user for an actor is the product of the watching times of the video segments in which the actor appears and the actor's weight value.
Then, the preference characterization value of the video user for actor A is (1 + 1) × 2.5 = 5;
the preference characterization value for actor B is (3 + 1) × 2.5 = 10;
the preference characterization value for actor C is 1 × 5 = 5;
thus indicating that the video user prefers actor B.
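These numbers can be reproduced with the illustrative helpers sketched in the first embodiment (actor_weights and preference_values above):

```python
segments_x = [["A"], ["B"], ["A", "B"], ["C"]]  # casts of segments 1-4 of film X
views_x = [1, 3, 1, 1]                          # watching counts from above

weights_x = actor_weights(segments_x)           # A: 2.5, B: 2.5, C: 5.0
print(preference_values(segments_x, views_x, weights_x))
# -> A: (1+1)*2.5 = 5.0, B: (3+1)*2.5 = 10.0, C: 1*5.0 = 5.0
```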
Finally, the target user portrait is generated from the obtained preference characterization values; that is, the portrait describes the user's degree of preference for each actor. In the above example, the user portrait shows that the user prefers actor B.
Correspondingly, the embodiment of the invention can also comprise:
and pushing video information meeting the target user image to the target user according to the target user image.
That is, an actor with a higher preference characterization value is more attractive to the target user, and video programs starring actors the user likes can be preferentially recommended based on the preference values embodied in the user portrait.
Specifically, take the movie Rush Hour as an example: the leading actors are Jackie Chan and Chris Tucker, while the actress Zhang Jingchu appears only at the beginning of the film. A fan of hers prefers the video segment at the beginning of the movie and watches it repeatedly, while fast-forwarding past the segments she does not appear in whenever that does not affect following the plot. For this viewing behavior, the user portrait should show a preference for Zhang Jingchu rather than for Jackie Chan and Chris Tucker, and accordingly the recommendation result for this user should be video programs featuring Zhang Jingchu, not Jackie Chan.
Similarly, user management can be performed based on the user portrait. For example, if a video platform learns from a client's user portrait that the actor favored by target user A is actor B, and the platform's spokespersons or contracted artists include actor B, member-privilege information for the relevant videos featuring actor B can be pushed to target user A, encouraging target user A to become a member user of the platform; this greatly improves the success rate of screening potential member users by means of user portraits.
According to the technical scheme disclosed in the second embodiment of the invention, the target video watched by the user is divided according to the actors' appearance time points to obtain several video segments, the watching times of each video segment are then counted based on the user's fast-forward and playback operations, and finally the target user's preference characterization value for each actor is calculated; the user portrait can be drawn from these preference values, and video recommendation can then be performed for the video user. A user repeatedly watches segments featuring favorite actors, rarely rewatches segments of actors they do not care for, and may even fast-forward straight past them when doing so does not affect the plot. The user's viewing behavior is a direct reflection of preference, so analyzing the user's fast-forward and playback behavior makes it possible to calculate the degree of preference for a given actor, improving the accuracy of the user portrait.
Embodiment Three
Corresponding to the methods for constructing a video user portrait disclosed in the first and second embodiments of the present invention, a third embodiment of the present invention further provides a device for constructing a video user portrait. Referring to FIG. 4, the device may include:
the system comprises an acquisition module 10, a video processing module and a video processing module, wherein the acquisition module is used for acquiring a target video of a target user and dividing the target video into at least one video segment;
a determining module 20, configured to determine actor information in each video segment of the target video, count occurrence times of each target actor in the actor information, and calculate a weight value of each target actor in the target video based on the occurrence times of each target actor;
a generating module 30, configured to calculate, according to the weight value of each target actor in the target video, a preference characterization value of the target user for each target actor, and generate the target user portrait based on the preference characterization value of the target user for each target actor.
Optionally, the obtaining module includes:
the analysis unit is used for analyzing the acquired target video of the target user to obtain actor information in the target video, wherein the actor information comprises at least one target actor;
the marking unit is used for acquiring the appearance time of each target actor in the target video and marking the target video according to the appearance time of each target actor;
and the dividing unit is used for dividing the marked target video to obtain at least one video segment.
Optionally, the determining module includes:
a determining unit for determining actor information in each video segment of the target video;
the counting unit is used for counting the occurrence frequency of the corresponding target actor in each video segment according to the actor information in each video segment;
the first calculating unit is used for calculating the total occurrence times of all target actors in the target video according to the occurrence times of the target actors in each video segment;
and the second calculating unit is used for calculating the ratio of the total times of appearance of all the target actors to the times of appearance of the target actors, and recording the ratio as the weight value of the target actors.
Optionally, the generating module includes:
the detection unit is used for detecting and obtaining the watching behavior record of the target user to each video clip and determining the watching times of the target user to each video clip according to the watching behavior record;
the third calculating unit is used for calculating the product between the watching times of each video segment and the weight value of the target actor corresponding to the video segment, and recording the product as the preference characterization value of the target user for the target actor;
and the generating unit is used for generating the target user portrait according to the preference characterization values of the target user for each actor.
Optionally, the detection unit is specifically configured to:
when the watching behavior record comprises fast-forward playing record and playback record of the target user on the video clips, if the target user carries out fast-forward playing on a first video clip, the playing times of the first video clip are recorded as zero, if the target user carries out playback on a second video clip, the playing times of the second video clip are added by one, and the watching times of the target user on each video clip are obtained through counting in sequence.
In the third embodiment of the present invention, the target video watched by the target user is divided into segments according to the actor information, and the actor weight values are then calculated to obtain the target user's preference characterization value for each actor. Because the preference characterization value representing the target user's degree of preference for an actor is computed precisely from actor information, it captures the preference for a given actor at the actor level, forming an accurate positioning of the video user at the actor level; compared with the whole-video statistics of the prior art, this improves the accuracy of the user portrait.
The terms "first" and "second," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not set forth for a listed step or element but may include steps or elements not listed.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method of constructing a video user portrait, comprising:
acquiring a target video of a target user, and dividing the target video into at least one video segment;
determining actor information in each video segment of the target video, counting the occurrence times of target actors in the actor information, and calculating a weight value of each target actor in the target video based on the occurrence times of the target actors, wherein the calculating the weight value of each target actor in the target video based on the occurrence times of the target actors comprises: calculating the total occurrence times of all target actors in the target video according to the occurrence times of the target actors in each video segment; calculating the ratio of the total times of appearance of all the target actors to the times of appearance of the target actors, and recording the ratio as the weight value of the target actors;
and calculating preference characterization values of the target user for each target actor according to the weight values of each target actor in the target video, and generating the target user portrait based on the preference characterization values of the target user for each target actor.
2. The method of claim 1, wherein the obtaining a target video of a target user, and dividing the target video into at least one video segment comprises:
analyzing the obtained target video of the target user to obtain actor information in the target video, wherein the actor information comprises at least one target actor;
acquiring the appearance time of each target actor in the target video, and marking the target video according to the appearance time of each target actor;
and dividing the marked target video to obtain at least one video segment.
3. The method of claim 1, wherein calculating a preference characterization value of the target user for each target actor according to the weight value of each target actor in the target video, and generating the target user portrait based on the preference characterization value of the target user for each target actor comprises:
detecting and obtaining a watching behavior record of the target user for each video clip, and determining the watching times of the target user for each video clip according to the watching behavior record;
calculating the product between the watching times of each video clip and the weight value of the target actor corresponding to the video clip, and recording the product as the preference characterization value of the target user for the target actor;
and generating the target user portrait according to the preference characterization values of the target user for each actor.
4. The method of claim 3, wherein when the viewing behavior record comprises a fast-forward play record and a playback record of video segments by the target user, the detecting obtains a viewing behavior record of each video segment by the target user, and determines the number of views of each video segment by the target user according to the viewing behavior record, comprising:
if the target user fast-forward plays the first video clip, recording the playing times of the first video clip as zero; if the target user plays back the second video clip, adding one to the playing times of the second video clip; and counting in sequence to obtain the watching times of the target user for each video clip.
5. An apparatus for constructing a video user portrait, comprising:
an acquisition module, configured to acquire a target video of a target user and divide the target video into at least one video segment;
a determining module, configured to determine actor information in each video segment of the target video, count occurrences of each target actor in the actor information, and calculate a weight value of each target actor in the target video based on the occurrences of each target actor, where the calculating a weight value of each target actor in the target video based on the occurrences of each target actor includes: calculating the total occurrence times of all target actors in the target video according to the occurrence times of the target actors in each video segment; calculating the ratio of the total times of appearance of all the target actors to the times of appearance of the target actors, and recording the ratio as the weight value of the target actors;
and the generating module is used for calculating a preference characterization value of the target user for each target actor according to the weight value of each target actor in the target video, and generating the target user portrait based on the preference characterization value of the target user for each target actor.
6. The apparatus of claim 5, wherein the obtaining module comprises:
the analysis unit is used for analyzing the acquired target video of the target user to obtain actor information in the target video, wherein the actor information comprises at least one target actor;
the marking unit is used for acquiring the appearance time of each target actor in the target video and marking the target video according to the appearance time of each target actor;
and the dividing unit is used for dividing the marked target video to obtain at least one video segment.
7. The apparatus of claim 5, wherein the generating module comprises:
the detection unit is used for detecting and obtaining the watching behavior record of the target user to each video clip and determining the watching times of the target user to each video clip according to the watching behavior record;
the third calculating unit is used for calculating the product between the watching times of each video segment and the weight value of the target actor corresponding to the video segment, and recording the product as the preference characterization value of the target user for the target actor;
and the generating unit is used for generating the target user portrait according to the preference characterization values of the target user for each actor.
8. The apparatus according to claim 7, wherein the detection unit is specifically configured to:
when the watching behavior record comprises fast-forward playing record and playback record of the target user on the video clips, if the target user carries out fast-forward playing on a first video clip, the playing times of the first video clip are recorded as zero, if the target user carries out playback on a second video clip, the playing times of the second video clip are added by one, and the watching times of the target user on each video clip are obtained through counting in sequence.
CN201810262253.4A 2018-03-28 2018-03-28 Method and device for constructing video user portrait Active CN108471544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810262253.4A CN108471544B (en) 2018-03-28 2018-03-28 Method and device for constructing video user portrait

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810262253.4A CN108471544B (en) 2018-03-28 2018-03-28 Method and device for constructing video user portrait

Publications (2)

Publication Number Publication Date
CN108471544A CN108471544A (en) 2018-08-31
CN108471544B (en) 2020-09-15

Family

ID=63265915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810262253.4A Active CN108471544B (en) 2018-03-28 2018-03-28 Method and device for constructing video user portrait

Country Status (1)

Country Link
CN (1) CN108471544B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109788311B (en) * 2019-01-28 2021-06-04 北京易捷胜科技有限公司 Character replacement method, electronic device, and storage medium
CN110008376A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 User's portrait vector generation method and device
CN110598618A (en) * 2019-09-05 2019-12-20 腾讯科技(深圳)有限公司 Content recommendation method and device, computer equipment and computer-readable storage medium
CN110769286B (en) * 2019-11-06 2021-04-27 山东科技大学 Channel-based recommendation method and device and storage medium
CN111666908B (en) * 2020-06-09 2023-05-16 广州市百果园信息技术有限公司 Method, device, equipment and storage medium for generating interest portraits of video users
CN112569596B (en) * 2020-12-11 2022-11-22 腾讯科技(深圳)有限公司 Video picture display method and device, computer equipment and storage medium
CN113938712B (en) * 2021-10-13 2023-10-10 北京奇艺世纪科技有限公司 Video playing method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521340A (en) * 2011-12-08 2012-06-27 中国科学院自动化研究所 Method for analyzing TV video based on role
CN103702117A (en) * 2012-09-27 2014-04-02 索尼公司 Image processing apparatus, image processing method, and program
CN105072495A (en) * 2015-08-13 2015-11-18 天脉聚源(北京)传媒科技有限公司 Statistics method and device for person popularity and program pushing method and device
CN105095431A (en) * 2015-07-22 2015-11-25 百度在线网络技术(北京)有限公司 Method and device for pushing videos based on behavior information of user
CN105701169A (en) * 2015-12-31 2016-06-22 北京奇艺世纪科技有限公司 Film and television program retrieving method and terminal
CN105843857A (en) * 2016-03-16 2016-08-10 合网络技术(北京)有限公司 Video recommendation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201415872A (en) * 2012-10-01 2014-04-16 Chunghwa Wideband Best Network Co Ltd Electronic programming guide display method and system


Also Published As

Publication number Publication date
CN108471544A (en) 2018-08-31

Similar Documents

Publication Publication Date Title
CN108471544B (en) Method and device for constructing video user portrait
CN109189951B (en) Multimedia resource recommendation method, equipment and storage medium
CN106331778B (en) Video recommendation method and device
JP4636147B2 (en) Information processing apparatus and method, program, and recording medium
KR102112973B1 (en) Estimating and displaying social interest in time-based media
KR100493902B1 (en) Method And System For Recommending Contents
US20140289241A1 (en) Systems and methods for generating a media value metric
JP6235556B2 (en) Content presentation method, content presentation apparatus, and program
JP5546632B2 (en) Method and mechanism for analyzing multimedia content
CN106326391B (en) Multimedia resource recommendation method and device
US20150365725A1 (en) Extract partition segments of personalized video channel
CN109753601B (en) Method and device for determining click rate of recommended information and electronic equipment
CN109511015B (en) Multimedia resource recommendation method, device, storage medium and equipment
CN110941738B (en) Recommendation method and device, electronic equipment and computer-readable storage medium
CN107454442B (en) Method and device for recommending video
US20090158307A1 (en) Content processing apparatus, content processing method, program, and recording medium
JP2008542870A (en) Method and apparatus for estimating the overall interest of a group of users for content
CN107562848B (en) Video recommendation method and device
US20220107978A1 (en) Method for recommending video content
KR20130090344A (en) Apparatus, system, method and computer readable recording media storing the program for related recommendation of tv program contents and web contents
CN111435371A (en) Video recommendation method and system, computer program product and readable storage medium
CN111405363A (en) Method and device for identifying current user of set top box in home network
CN109063080B (en) Video recommendation method and device
CN105956061B (en) Method and device for determining similarity between users
WO2018001223A1 (en) Playlist recommending method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant