CN116264625A - Video scenario visualization method, device, equipment and storage medium - Google Patents

Video scenario visualization method, device, equipment and storage medium

Info

Publication number
CN116264625A
CN116264625A
Authority
CN
China
Prior art keywords
scenario
video
storyline
original
authored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111521920.4A
Other languages
Chinese (zh)
Inventor
钟尚儒
黎琪
朱斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111521920.4A priority Critical patent/CN116264625A/en
Publication of CN116264625A publication Critical patent/CN116264625A/en
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N21/4623Processing of entitlement messages, e.g. ECM [Entitlement Control Message] or EMM [Entitlement Management Message]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video scenario visualization method, device, equipment and storage medium, relating to the field of video playing. The method comprises the following steps: acquiring an original episode video and a plurality of secondary creation videos, where a secondary creation video is a video obtained by clipping the original episode video; screening, from the plurality of secondary creation videos, target secondary creation videos containing scenario key segments of the original episode video; and generating a scenario storyline of the original episode video based on the target secondary creation videos, where the scenario storyline comprises at least two target secondary creation videos. The method can generate a scenario storyline for an original episode video based on a massive number of secondary creation videos.

Description

Video scenario visualization method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the field of video playing, in particular to a video scenario visualization method, device and equipment and a storage medium.
Background
IP (Intellectual Property) episodes such as movies and television shows are currently among the most popular multimedia content, and every large online video platform offers exclusive IP episodes to attract binge-watching fans.
Because many fans have limited viewing time, online video platforms provide a scenario introduction interface for an IP episode. Taking a 20-episode drama as an example, the scenario introduction interface includes a staged scenario introduction for every single episode or for every 3-5 episodes. The staged scenario introductions are compiled by staff or enthusiastic netizens.
However, such scenario introductions take a single form and present only limited scenario information, and the accuracy of the scenario information is largely limited by the personal skill of the staff or netizens who write them.
Disclosure of Invention
The application provides a video scenario visualization method, device, equipment and storage medium, which can generate a scenario storyline for an original episode video based on a massive number of secondary creation videos. The technical solutions are as follows:
according to one aspect of the present application, there is provided a method of visualizing a video scenario, the method comprising:
acquiring an original episode video and a plurality of secondary creation videos, where a secondary creation video is a video obtained by clipping the original episode video;
screening, from the plurality of secondary creation videos, target secondary creation videos containing scenario key segments of the original episode video;
and generating a scenario storyline of the original episode video based on the target secondary creation videos, where the scenario storyline comprises at least two target secondary creation videos.
According to one aspect of the present application, there is provided a method of visualizing a video scenario, the method comprising:
displaying a storyline page of a first scenario storyline of an original episode video, the storyline page including introduction information of at least two scenario key segments belonging to the first scenario storyline;
receiving a trigger operation on the introduction information of a first scenario key segment among the at least two scenario key segments;
and in response to the trigger operation, playing a secondary creation video corresponding to the first scenario key segment, the secondary creation video being obtained by clipping the original episode video.
According to one aspect of the present application, there is provided a video scenario visualization apparatus, the apparatus comprising:
a video acquisition module, configured to acquire an original episode video and a plurality of secondary creation videos, where a secondary creation video is a video obtained by clipping the original episode video;
a scenario mining module, configured to screen, from the plurality of secondary creation videos, target secondary creation videos containing scenario key segments of the original episode video;
and a scenario context module, configured to generate a scenario storyline of the original episode video based on the target secondary creation videos, where the scenario storyline comprises at least two target secondary creation videos.
According to one aspect of the present application, there is provided a video scenario visualization apparatus, the apparatus comprising:
a display module, configured to display a storyline page of a first scenario storyline of an original episode video, where the storyline page includes introduction information of at least two scenario key segments belonging to the first scenario storyline;
a man-machine interaction module, configured to receive a trigger operation on the introduction information of a first scenario key segment among the at least two scenario key segments;
the display module being further configured to play, in response to the trigger operation, a secondary creation video corresponding to the first scenario key segment, where the secondary creation video is obtained by clipping the original episode video.
According to one aspect of the present application, there is provided a computer device comprising: a processor and a memory storing a computer program to be executed by the processor to cause the computer device to implement a method of visualizing a video scenario as described above.
According to another aspect of the present application, there is provided a computer readable storage medium storing a computer program for execution by a processor to implement a method of visualizing a video scenario as described above.
According to another aspect of the present application, there is provided a computer program product storing a computer program for execution by a processor to implement a method of visualizing a video scenario as described above.
The technical solutions provided by the embodiments of the application have at least the following beneficial effects:
in the case that there are a plurality of secondary creation videos (short videos) obtained by clipping an original episode video, target secondary creation videos belonging to scenario key segments are screened out from the plurality of secondary creation videos, and a scenario storyline of the original episode video is generated based on the target secondary creation videos.
Moreover, because different secondary creation videos may come from different authors clipping the highlight scenarios, the storyline more accurately expresses the combing of the scenario according to public taste, and avoids being limited by the thinking of individual staff or enthusiastic netizens.
Drawings
FIG. 1 illustrates a block diagram of a computer system provided in one embodiment of the present application;
FIG. 2 illustrates a flow chart of a method for visualizing video scenarios provided by one embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a process for constructing a storyline according to one embodiment of the present application;
FIG. 4 illustrates a flow chart of a method for visualizing video scenarios provided by one embodiment of the present application;
FIG. 5 illustrates a flow chart of a method for visualizing video scenarios provided by one embodiment of the present application;
FIG. 6 illustrates a flow chart of a method for visualizing video scenarios provided by one embodiment of the present application;
FIG. 7 is a statistical schematic diagram of scenario key indexes calculated based on "consumption signals" and "production signals" according to one embodiment of the present application;
FIG. 8 illustrates a flow chart of a method for visualizing video scenarios provided by one embodiment of the present application;
FIG. 9 illustrates a related interface diagram of a storyline provided in one embodiment of the present application;
FIG. 10 illustrates a flow chart of a method for visualizing video scenarios provided by one embodiment of the present application;
FIG. 11 illustrates a related interface diagram of a storyline provided in one embodiment of the present application;
FIG. 12 illustrates a related interface diagram of a storyline provided in one embodiment of the present application;
FIG. 13 illustrates a related interface diagram of a storyline provided in one embodiment of the present application;
FIG. 14 illustrates a related interface diagram of a storyline provided in one embodiment of the present application;
FIG. 15 illustrates a related interface diagram of a storyline provided in one embodiment of the present application;
FIG. 16 is a schematic diagram illustrating a scenario map construction process according to an embodiment of the present application;
FIG. 17 illustrates a block diagram of a video scenario visualization device provided by one embodiment of the present application;
FIG. 18 illustrates a block diagram of a video scenario visualization device provided by one embodiment of the present application;
FIG. 19 illustrates a block diagram of a computer device provided in one embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Short video creation is currently the main form of creation by self-media authors. After a hot IP episode airs, numerous self-media authors perform secondary creation based on the IP episode, and massive numbers of short videos clipped from the IP episode appear. The application provides a technical solution for visualizing a video scenario: one or more scenario storylines of the original episode video are generated based on a plurality of short videos, each scenario storyline including one or more short videos, so that the scenario context is exhibited through a series of short videos (high-energy scenario segments).
FIG. 1 illustrates a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 100 includes: a terminal 120 and a server 140.
The terminal 120 has an application (also called a client) installed and running. The application program may be any one of an online video program, a short video program, a microblog program, a browser program, an instant messaging program, an e-commerce program, a social program, and the like. Illustratively, the terminal 120 is a terminal used by the first user. Optionally, the terminal 120 logs in with a user account. The terminal 120 uses services (e.g., online video service, short video service, search service, encyclopedia service, social service) provided by the server 140 through a user account. The terminal 120 includes, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, an intelligent home appliance, a vehicle-mounted terminal, etc.
The terminal 120 is connected to the server 140 through a wireless network or a wired network.
Server 140 includes at least one of a single server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 140 includes a processor 144 and a memory 142. The memory 142 includes a receiving module 1421, a data processing module 1422, and a sending module 1423. The receiving module 1421 is configured to receive requests sent by a client, such as a play request for an IP episode, a search request for an IP episode, or a character search request for an IP episode; the data processing module 1422 is configured to provide features such as analysis, processing, and output of the scenario storylines of IP episodes. The server 140 provides background services for the client and provides scenario visualization information of IP episodes in response to various requests from the terminal 120. Optionally, the server 140 takes on the primary computing work and the terminal 120 the secondary computing work; or the server 140 takes on the secondary computing work and the terminal 120 the primary computing work; or a distributed computing architecture is adopted between the server 140 and the terminal 120 for collaborative computing.
The embodiments of the present application are illustrated with the terminal 120 being a smart phone. Those skilled in the art will recognize that the number of terminals may be greater or smaller; for example, there may be only one terminal, or several tens or hundreds of terminals, or more. The number of terminals and the device types are not limited in the embodiments of the present application.
Fig. 2 shows a flowchart of a method for visualizing a video scenario provided in an exemplary embodiment of the present application. This embodiment is illustrated with the method applied to the server 140. The method comprises the following steps:
step 202: acquiring an original episode video and a plurality of secondary authored videos, wherein the secondary authored videos are videos obtained by editing the original episode video;
the original episode video may be in the form of a movie, a television show, documentaries, a cartoon, etc. The video length of the original episode video is longer, such as the video length of the original episode video is greater than a first threshold. The first threshold is 10 minutes, 30 minutes, etc. The original episode video may be referred to as a "long video". The producer of the original episode video may be a movie producer.
The secondary authored video is a video obtained by editing the original episode video. For example, one or more video clips in the original episode video are clipped to a short video. The producer of the secondary authored video may be a media author, a pre-episode propaganda unit, a hot net friend, etc. The producer of the secondary authored video may be different from the producer of the original episode video.
In some embodiments, the original episode video and the secondary authored video are on the same network platform, and a server of the network platform obtains the original episode video and the plurality of secondary authored videos in a database of the network platform. In other embodiments, the original episode video, all or part of the secondary authored video are on different web platforms, and the server obtains the original episode video and the plurality of secondary authored videos through a web crawler tool.
Referring to fig. 3 for an exemplary comparison of popular original episode video 1, a massive amount of secondary authored video based on the original episode video 1 clip may appear. The secondary authored video may be clipped from media author a, may be clipped from video number author B, may be clipped from net friend C. Each secondary authored video includes one or more scenario segments.
Step 204: screening target secondary creation videos belonging to scenario key fragments of the original episode video from a plurality of secondary creation videos;
because the secondary creation video is prone to editing in the aspects of highlight clips, high-energy clips, sweet shots, special effect lenses and the like when being manually clipped, the secondary creation video carries more critical scenario clips. The server screens out target secondary creation videos belonging to the scenario key fragments of the original episode video from a plurality of secondary creation videos, wherein the target secondary creation videos can be one or more.
The original episode video has first video data, which includes data related to the consumption process of the original episode video. Illustratively, the consumption-process data includes at least one of a barrage (bullet-comment) count, a comment count, a like count, a favorite count, and a skip-play count corresponding to a play time point. For example, at time point t there are 26 barrages, 12 comments, 3 likes, and so on; at another time point there are 123 barrages, 26 comments, and 4 likes.
The secondary creation video has second video data, which includes data related to the production process and/or the consumption process of the secondary creation video. Illustratively, the production-process data includes at least one of the author's level, fan count, number of produced videos, and number of network platforms the author participates in. The consumption-process data includes at least one of a barrage count, a comment count, a like count, a favorite count, and a skip-play count corresponding to a play time point.
In some embodiments, the server calculates a scenario key index for each play time point based on the first video data and/or the second video data, and screens out the target secondary creation videos belonging to scenario key segments of the original episode video based on the scenario key indexes of a plurality of play time points. A play time point is a time point on the play progress bar of the original episode video. Because a secondary creation video is clipped from the original episode video, after the secondary creation video is aligned frame-by-frame with the original episode video, the play time point of each video frame of the secondary creation video can be obtained.
In some embodiments, the server calculates the scenario key index of each play time point based on the first video data, such as the barrage count at each play time point. In some embodiments, the server calculates it based on the second video data, such as the number of secondary creation videos covering each play time point. In some embodiments, the server calculates it based on both, for example by a weighted sum of the barrage count of the original episode video at a play time point and the number of secondary creation videos covering that play time point.
Referring to fig. 3, the server calculates the scenario key indexes at a plurality of play time points to obtain a scenario key index curve 2, and determines the play time segments corresponding to one or more peaks of the scenario key index curve 2 as scenario key segments. For example, time point t is a local peak, where scenario key index = barrage count × a + short video count × b. The barrage count is the number of barrages corresponding to (or played at) time point t in the original episode video. The short video count is the number of short videos whose play interval includes time point t, i.e., whose content includes the video frame corresponding to time point t. Here a and b are preset weights.
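As an illustration of the formula above, the following sketch computes a scenario key index curve from per-time-point barrage counts and short-video counts and picks out its local peaks. The weights, data, and function names are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch (not the patented implementation): build the
# scenario key index curve and find its local peaks.

def key_index_curve(barrage_counts, short_video_counts, a=0.6, b=0.4):
    """Scenario key index per play time point: a * barrages + b * short videos."""
    return [a * x + b * y for x, y in zip(barrage_counts, short_video_counts)]

def local_peaks(curve):
    """Time points whose index exceeds both neighbours (simple local maxima)."""
    return [t for t in range(1, len(curve) - 1)
            if curve[t] > curve[t - 1] and curve[t] > curve[t + 1]]

curve = key_index_curve([5, 26, 123, 40, 8], [1, 3, 12, 4, 0])
peaks = local_peaks(curve)  # time point 2 stands out as a scenario key segment
```

A production system would smooth the curve and apply minimum-prominence rules before trusting a peak; the bare neighbour comparison here only conveys the idea.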
Since there may be multiple short videos whose play interval includes a given scenario key segment, one target short video may be selected from them as the target secondary creation video corresponding to that scenario key segment.
Step 206: and generating a storyline of the original episode video based on the target secondary authored video, wherein the storyline comprises at least two target secondary authored videos.
The server generates one or more storylines of the original episode video based on the screened multiple target secondarily authored videos, each storyline comprising at least two target secondarily authored videos.
And under the condition that a plurality of target secondary authored videos corresponding to the same scenario key fragment are available, the server screens out the target secondary authored videos corresponding to the scenario key fragment. Grouping different scenario key fragments according to scenario venation, and then secondarily creating videos corresponding to targets belonging to the same group of scenario key fragments to generate a scenario story line of the original scenario video.
For an original episode video, there may be one or multiple storylines. In some embodiments, according to the division of different characters in the original episode video, a story line corresponding to the different characters, such as a story line corresponding to the character a and a story line corresponding to the character B, may be obtained; in some embodiments, the storylines corresponding to different events, such as the storylines corresponding to event a and the storylines corresponding to event B, may be derived by dividing the different events in the original episode video.
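A minimal sketch of this grouping step, assuming each scenario key segment has already been tagged with a character or event label; the field names (`tag`, `start`, `video_id`) are hypothetical:

```python
# Illustrative sketch: group target secondary creation videos into
# storylines by a character/event tag, ordered by play time point.
from collections import defaultdict

def build_storylines(segments):
    """segments: dicts with hypothetical fields 'tag', 'start', 'video_id'."""
    lines = defaultdict(list)
    for seg in sorted(segments, key=lambda s: s["start"]):
        lines[seg["tag"]].append(seg["video_id"])
    # a scenario storyline comprises at least two target videos, as stated above
    return {tag: vids for tag, vids in lines.items() if len(vids) >= 2}

segments = [
    {"tag": "character A", "start": 90, "video_id": "v3"},
    {"tag": "character A", "start": 30, "video_id": "v1"},
    {"tag": "character B", "start": 60, "video_id": "v2"},
]
storylines = build_storylines(segments)  # only character A forms a storyline
```

Sorting by play time point keeps each storyline in narrative order, matching the sequential arrangement of scenario segments described for the storyline interface.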
Referring to fig. 3, the server generates a scenario storyline 3 of the original episode video 1 based on the screened-out short videos. The terminal displays a storyline interface for the scenario storyline 3, on which a scenario segment playing area and a scenario overview area are displayed. The scenario overview area includes a plurality of scenario segments arranged in order, each corresponding to a target secondary creation video. The user can click one of the scenario segments to play it in the scenario segment playing area.
In summary, in the method provided in this embodiment, when there are multiple secondary creation videos (short videos) clipped from an original episode video, target secondary creation videos belonging to scenario key segments are screened out from them, and a scenario storyline of the original episode video is generated based on the target secondary creation videos.
Moreover, because different secondary creation videos may come from different authors clipping the highlight scenarios, the storyline more accurately expresses the combing of the scenario according to public taste, and avoids being limited by the thinking of individual staff or enthusiastic netizens.
Fig. 4 shows a flowchart of a method for visualizing a video scenario provided in an exemplary embodiment of the present application. This embodiment is illustrated with the method applied to the server 140. The method comprises the following steps:
Step 302: acquiring an original episode video and a plurality of secondary creation videos, where a secondary creation video is a video obtained by clipping the original episode video;
The original episode video may be in the form of a movie, a television show, a documentary, a cartoon, etc. The original episode video is relatively long; for example, its length is greater than a first threshold such as 10 minutes or 30 minutes. The original episode video may be referred to as a "long video". The producer of the original episode video may be a film or television producer.
A secondary creation video is a video obtained by clipping the original episode video; for example, one or more video segments of the original episode video are clipped into a short video. The producer of a secondary creation video may be a self-media author, a pre-release publicity unit of the episode, an enthusiastic netizen, etc., and may be different from the producer of the original episode video.
In some embodiments, the original episode video and the secondary creation videos are on the same network platform, and a server of the network platform obtains the original episode video and the plurality of secondary creation videos from a database of the network platform. In other embodiments, the original episode video and all or part of the secondary creation videos are on different network platforms, and the server obtains them through a web crawler tool.
Optionally, the server also filters the plurality of secondary creation videos. The filtering process includes, but is not limited to: filtering out secondary creation videos whose video content does not satisfy a first condition, filtering out secondary creation videos whose definition (clarity) is worse than a second condition, filtering out secondary creation videos of the multi-episode mixed-cut type, and so on.
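The filtering step might be sketched as below; the concrete thresholds (minimum duration, minimum resolution) and field names are illustrative assumptions standing in for the unspecified first and second conditions:

```python
# Illustrative sketch: filter out secondary creation videos that fail a
# content condition, a clarity condition, or are multi-episode mash-ups.

def filter_secondary_videos(videos, min_seconds=15, min_height=480):
    kept = []
    for v in videos:
        if v["duration"] < min_seconds:      # stands in for the first condition
            continue
        if v["height"] < min_height:         # clarity worse than the second condition
            continue
        if v["episodes_covered"] > 1:        # multi-episode mixed-cut type
            continue
        kept.append(v)
    return kept

videos = [
    {"id": "a", "duration": 120, "height": 720,  "episodes_covered": 1},
    {"id": "b", "duration": 5,   "height": 720,  "episodes_covered": 1},
    {"id": "c", "duration": 300, "height": 240,  "episodes_covered": 1},
    {"id": "d", "duration": 600, "height": 1080, "episodes_covered": 8},
]
kept = filter_secondary_videos(videos)  # only "a" passes all three filters
```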
Step 304: acquiring first video data of an original episode video and second video data of a secondary authored video;
The original episode video has first video data, which includes data related to the consumption process of the original episode video. Illustratively, the consumption-process data includes at least one of a barrage (bullet-comment) count, a comment count, a like count, a favorite count, and a skip-play count corresponding to a play time point.
The secondary creation video has second video data, which includes data related to the production process and/or the consumption process of the secondary creation video. Illustratively, the production-process data includes at least one of the author's level, fan count, number of produced videos, and number of network platforms the author participates in. The consumption-process data includes at least one of a barrage count, a comment count, a like count, a favorite count, and a skip-play count corresponding to a play time point.
Step 306: calculating scenario key indexes of a plurality of playing time points in the original episode video based on the first video data and the second video data;
the server may calculate scenario key indexes of a plurality of play time points in the original episode video based on at least one of the first video data and the second video data.
The present embodiment is exemplified by a server performing calculation based on both the first video data and the second video data. As shown in fig. 5, this step may include three steps as follows:
306a, determining a plurality of playing time points on a playing progress bar of the original episode video based on a starting time point and an ending time point of the secondary authored video;
Because a secondary creation video is clipped from the original episode video, the server matches each secondary creation video with the original episode video through video content understanding technology, such as pattern recognition on video frames, to obtain the start time point and end time point of each secondary creation video on the play progress bar of the original episode video.
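As a toy illustration of this alignment, the sketch below locates a clip inside the original video by exact matching of frame signatures; a real system would use pattern recognition or perceptual hashing of frames rather than exact equality, so this is only a stand-in for the idea.

```python
# Illustrative sketch: find the start and end time points of a clip on
# the original video's progress bar by matching frame signatures.

def align_clip(original_frames, clip_frames):
    """Return (start, end) frame indices of the clip inside the original,
    or None if the clip cannot be located."""
    n, m = len(original_frames), len(clip_frames)
    for start in range(n - m + 1):
        if original_frames[start:start + m] == clip_frames:
            return start, start + m - 1
    return None

original = ["f0", "f1", "f2", "f3", "f4", "f5"]
clip = ["f2", "f3", "f4"]
span = align_clip(original, clip)  # the clip covers frames 2..4 of the original
```

With a known frame rate, the returned indices convert directly into start and end time points on the play progress bar.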
For example, taking the episode video "Suspense Episode A" as an example, the obtained analysis data are shown in Table 1:
list one
Figure BDA0003407829230000071
Figure BDA0003407829230000081
The server takes the starting time point and the ending time point as a plurality of playing time points which need to be calculated on the playing progress bar of the original episode video.
Alternatively, the server may also select a plurality of play time points according to other policies, such as selecting a plurality of play time points according to a fixed step size, or selecting a plurality of play time points according to a random policy, which is not limited in this embodiment.
306b, for each play time point, acquiring first video data and second video data corresponding to the play time point;
for each play time point, the server screens out the first video data corresponding to the play time point and the second video data corresponding to the play time point.
The first video data includes a number of shots of the original video episode and the second video data includes a number of secondarily authored videos as an example.
For the first video data, the original episode video has a number of shots, each having a respective timestamp. Based on the playing time stamp of each bullet screen, bullet screens corresponding to the playing time point A are selected, and the bullet screen quantity corresponding to the playing time point A is calculated.
For the second video data, the server screens out the secondary authored videos whose playing interval contains playing time point A and counts them, obtaining the number of secondary authored videos corresponding to playing time point A.
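A minimal sketch of this screening, under assumed data shapes (bullet-comment timestamps in seconds and clip playing intervals as (start, end) pairs):

```python
def counts_at(time_point, danmaku_timestamps, clip_intervals):
    # First video data: bullet comments whose play timestamp falls in the
    # same one-second bucket as the playing time point.
    danmaku_count = sum(1 for t in danmaku_timestamps if int(t) == int(time_point))
    # Second video data: secondary authored videos whose playing interval
    # [start, end] contains the playing time point.
    video_count = sum(1 for start, end in clip_intervals if start <= time_point <= end)
    return danmaku_count, video_count

danmaku = [12.1, 12.9, 30.0, 31.5]       # bullet-comment timestamps (seconds)
clips = [(10, 20), (25, 40), (12, 13)]   # clip playing intervals (seconds)
print(counts_at(12, danmaku, clips))     # (2, 2)
```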
It should be noted that the first video data may also cover other types, such as comment counts and like counts, and the second video data may also cover other types, such as the follower count of a secondary authored video's author; these are not described in detail in this embodiment.
306c, carrying out weighted summation on the first video data and the second video data corresponding to the playing time point, and calculating to obtain the scenario key index of the playing time point.
And carrying out weighted summation on the first video data and the second video data corresponding to the playing time points aiming at each playing time point, and calculating to obtain the scenario key index of the playing time points.
Schematically: scenario key index = bullet-comment count × a + short-video count × b at the playing time point, where a and b are preset weights. Schematically, a + b = 1.
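The schematic formula can be sketched as follows; the weight values a = 0.6 and b = 0.4 are illustrative assumptions, not values from this application:

```python
def scenario_key_index(danmaku_count, short_video_count, a=0.6, b=0.4):
    # Weighted summation of the first and second video data at one time point.
    return danmaku_count * a + short_video_count * b

print(scenario_key_index(200, 50))  # 200*0.6 + 50*0.4 = 140.0
```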
Step 308: screening out target secondary authored videos containing scenario key fragments of the original scenario video based on scenario key indexes of a plurality of play time points;
Target secondary authored videos containing scenario key segments of the original episode video are screened out based on the scenario key indexes, among the plurality of playing time points, that meet a local peak condition. The local peak condition tests, along the time dimension, whether the scenario key index at a playing time point is the peak scenario key index within a local time period. For example, the local peak condition may require that the current peak scenario key index be greater than all other scenario key indexes within a predetermined time period before it, greater than all other scenario key indexes within a predetermined time period after it, and greater in magnitude than a threshold.
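The local peak condition can be sketched as follows; the window size and threshold are assumed parameters:

```python
def is_local_peak(indexes, i, w=2, threshold=0.0):
    """True if indexes[i] exceeds every index within w points before and
    after it, and its magnitude exceeds the threshold."""
    v = indexes[i]
    left = indexes[max(0, i - w):i]
    right = indexes[i + 1:i + 1 + w]
    return all(v > x for x in left) and all(v > x for x in right) and v > threshold

series = [1, 3, 7, 2, 1, 5, 6, 4]  # scenario key indexes per playing time point
peaks = [i for i in range(len(series)) if is_local_peak(series, i, w=2, threshold=2)]
print(peaks)  # [2, 6]
```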
Illustratively, this step may include the following sub-steps, as shown in FIG. 6:
308a, taking a time axis corresponding to a playing time point as a first coordinate axis, and taking an index axis corresponding to a scenario key index as a second coordinate axis, and constructing contrast information of the scenario key index, wherein the contrast information comprises any one of a function curve, a histogram and a discrete point sequence;
As shown in fig. 5, a discrete point sequence of scenario key indexes is constructed with a time axis corresponding to a play time point as an x axis and an index axis corresponding to scenario key indexes as a y axis. Taking the video length of the original episode video as 42:54 as an example, a plurality of peak scenario key indexes meeting local peak conditions exist in the discrete point sequence.
308b, screening out peak scenario key indexes meeting local peak conditions based on the comparison information of scenario key indexes;
308c, determining a scenario key fragment based on a play time point corresponding to the peak scenario key index;
illustratively, for a play time point corresponding to a peak scenario key index, determining a start time point and an end time point closest to the play time point, and determining a scenario segment between the start time point and the end time point as a scenario key segment.
Alternatively, the start time point and end time point closest to the playing time point are determined from the 2 playing time points adjacent to the local peak, and the scenario segment between the start time point and the end time point is determined as the scenario key segment.
Or, determining a play time point corresponding to the peak scenario key index as a start time point, determining a play time point of the next (or i-th after) scenario key index adjacent to the peak scenario key index as an end time point, and determining a scenario segment between the start time point and the end time point as a scenario key segment.
Or, determining a play time point corresponding to the peak scenario key index as an end time point, determining a play time point of a previous (or an i-th previous) scenario key index adjacent to the peak scenario key index as a start time point, and determining a scenario segment between the start time point and the end time point as a scenario key segment. Wherein i is an integer greater than 1.
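One of the boundary strategies above (the peak's play time point as start, the i-th following index point as end) can be sketched as:

```python
def segment_from_peak(time_points, peak_pos, i=1):
    """Start = play time point of the peak index; end = play time point of
    the i-th following index point (clamped to the last point)."""
    start = time_points[peak_pos]
    end = time_points[min(peak_pos + i, len(time_points) - 1)]
    return start, end

points = [0, 60, 135, 190, 260]  # playing time points in seconds
print(segment_from_peak(points, peak_pos=2))  # (135, 190)
```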
For example, still taking the episode video "Suspense Episode A" as an example, the obtained scenario key segments are shown in the following Table 2:
Table 2 (table content is provided as an image in the original publication)
308d, screening out target secondary authored videos containing scenario key fragments in the playing interval.
Schematically, there are multiple scenario key segments, and each scenario key segment corresponds to its own target secondary authored video. The same scenario key segment may correspond to multiple target secondary authored videos.
Step 310: mining scenario fragment names of each scenario key fragment based on the target secondary creation video;
The scenario segment name is text content introducing a plot summary of one scenario key segment. Optionally, each scenario key segment has its own scenario segment name.
And for each scenario key fragment, mining scenario fragment names of the scenario key fragment according to one or more target secondary authored videos corresponding to the scenario key fragment.
Schematically, under the condition that at least two target secondarily authored videos correspond to the same scenario key fragment, clustering video titles of the at least two target secondarily authored videos to obtain at least two candidate scenario fragment names. Scoring at least two candidate scenario segment names based on scenario keywords in the video title. And determining the scenario fragment names corresponding to the scenario key fragments from at least two candidate scenario fragment names based on the scoring result.
For example, suppose a certain scenario key segment corresponds to 120 short videos. The video titles of the 120 short videos are clustered to obtain 3 candidate scenario segment names. Then, for the word segmentation results of the 3 candidate names, the frequency of occurrence of each word across the video titles of the 120 short videos is counted. Scores of the 3 candidate names are calculated as a weighted sum of each word's occurrence frequency and importance weight, and in descending order of score, the highest-scoring candidate is determined as the scenario segment name corresponding to the current scenario key segment.
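The word-frequency scoring can be sketched as follows; the uniform importance weights and whitespace tokenization are simplifying assumptions (the example uses English tokens for readability, whereas the application's titles would be Chinese and require word segmentation):

```python
from collections import Counter

def pick_segment_name(candidates, titles, weights=None):
    """Score each candidate name by the frequency of its words across all
    video titles (times an optional importance weight) and pick the top."""
    freq = Counter(word for title in titles for word in title.split())
    weights = weights or {}

    def score(name):
        return sum(freq[w] * weights.get(w, 1.0) for w in name.split())

    return max(candidates, key=score)

titles = ["hero saves city", "hero saves the day", "villain attacks city"]
candidates = ["hero saves city", "villain attacks"]
print(pick_segment_name(candidates, titles))  # hero saves city
```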
Schematically, when one scenario key segment corresponds to one target secondary authored video, the video title of that target secondary authored video is taken as the scenario segment name of the scenario key segment; or, when the same scenario key segment corresponds to at least two target secondary authored videos, a random one of their video titles is taken as the scenario segment name of the scenario key segment.
For another example, if a scenario key segment corresponds to only 1 short video, determining the video titles of the 1 short videos as scenario segment names corresponding to the current scenario key segment; for another example, if a scenario key segment corresponds to only 2 short videos, one of the video titles of the 2 short videos is randomly determined as a scenario segment name corresponding to the current scenario key segment.
In some embodiments, the scenario segment names may be generated based on a text generation model obtained by training in advance, for example, video titles of a plurality of short videos corresponding to a certain scenario key segment are input into the text generation model, and the scenario segment names corresponding to the scenario key segment are output by the text generation model.
Step 312: grouping the scenario key segments by scenario context according to their scenario segment names, and generating a scenario storyline of the original episode video from the target secondary authored videos corresponding to the scenario key segments belonging to the same group.
Illustratively, scenario contexts may be divided according to scenario characters, according to scenario types, or according to both scenario characters and scenario types; this embodiment does not limit the division manner of scenario contexts. For different episodes, scenario contexts may also be divided according to a timeline, a theme, a certain genre, or a certain location. In this embodiment, the scenario context is illustrated as including scenario characters and/or scenario types.
The scenario characters are characters appearing in the scenario key section. Different scenario characters have character relations, such as friends, father and son, couples, university classmates and the like.
The scenario type is classification information of the plot. For example, scenario types may include romance, suspense, music, singing, talk show, martial-arts action, treasure hunting, tomb raiding, crime solving, and the like. This embodiment does not limit the specific form of the scenario type.
The step comprises at least one of the following steps:
Grouping the scenario key segments by scenario character according to their scenario segment names, and generating a scenario storyline of the original episode video from the target secondary authored videos corresponding to the scenario key segments belonging to the same scenario character.
For example, a target secondary authored video corresponding to a plurality of scenario key segments belonging to the same scenario character is generated into a scenario story line of the original scenario video according to the sequence from front to back of the playing time point, or is generated into a scenario story line of the original scenario video according to the sequence from front to back of the time line in the scenario.
And grouping the scenario segment names of the scenario key segments according to scenario types, and generating a scenario story line of the original scenario video from the scenario key segments belonging to the same scenario type.
For example, a target secondary authored video corresponding to a plurality of scenario key segments belonging to the same scenario type is generated into a scenario story line of the original scenario video according to the sequence from front to back of the playing time point, or is generated into a scenario story line of the original scenario video according to the sequence from front to back of the time line in the scenario.
Grouping the scenario segment names of the scenario key segments according to the scenario characters and the scenario types, and generating a scenario story line of the original scenario video from the scenario key segments belonging to the same scenario characters and scenario types.
For example, a target secondary creation video corresponding to a plurality of scenario key segments belonging to the same "scenario character and scenario type" is generated into one scenario story line of the original scenario video in the order from front to back according to the playing time point, or into one scenario story line of the original scenario video in the order from front to back according to the time line in the scenario.
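The grouping and ordering in step 312 can be sketched as follows, assuming each scenario key segment has already been annotated with a scenario character, scenario type, start time point, and target secondary authored video:

```python
from collections import defaultdict

def build_storylines(segments):
    """segments: dicts with 'character', 'type', 'start' (play time point,
    seconds) and 'video' (target secondary authored video id)."""
    groups = defaultdict(list)
    for seg in segments:
        groups[(seg["character"], seg["type"])].append(seg)
    # Order each group's videos by play time point, front to back.
    return {key: [s["video"] for s in sorted(group, key=lambda s: s["start"])]
            for key, group in groups.items()}

segments = [
    {"character": "Li xx", "type": "suspense", "start": 300, "video": "clip-B"},
    {"character": "Li xx", "type": "suspense", "start": 100, "video": "clip-A"},
    {"character": "He xx", "type": "romance", "start": 200, "video": "clip-C"},
]
print(build_storylines(segments))
```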
For the case where the scenario context includes scenario characters:
the server also needs to determine candidate scenario characters in the original episode video.
In some embodiments, the server performs named entity recognition on the video titles of the target secondary authored videos or on the scenario segment names of the scenario key segments to obtain a character entity recognition result, and determines (mines, identifies) the scenario characters based on that result. For example, if 10 person names are recognized, the 3 person names with the highest occurrence counts are selected as the main scenario characters.
In some embodiments, the server stores a pre-built scenario character entity dictionary, and performs similar-entity recognition on the scenario segment names of the scenario key segments based on this dictionary to obtain a similar-entity recognition result; the scenario characters are then determined (mined, identified) based on the similar-entity recognition result. For example, if the pre-built scenario character entity dictionary contains the protagonist "Master Breeze", mentions such as "Old Man Breeze" and "Ancestor Breeze" that are similar to "Master Breeze" are recognized as the same scenario character.
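A minimal sketch of similar-entity recognition against a pre-built character entity dictionary, substituting simple string similarity for a trained similar-entity model:

```python
from difflib import SequenceMatcher

def canonical_character(mention, dictionary, threshold=0.5):
    """Map a mention to the most similar canonical character name, or None
    if no dictionary entry is similar enough."""
    best, best_sim = None, 0.0
    for canonical in dictionary:
        sim = SequenceMatcher(None, mention, canonical).ratio()
        if sim > best_sim:
            best, best_sim = canonical, sim
    return best if best_sim >= threshold else None

dictionary = ["Master Breeze", "Li xx"]
print(canonical_character("Old Man Breeze", dictionary))  # Master Breeze
```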
It should be noted that the named entity recognition and similar entity mining techniques described above may be used in combination.
For the case that the scenario context includes scenario type:
In some embodiments, the scenario types described above are configured manually by an operator.
In some embodiments, based on an unsupervised method, text statistics are computed over the scenario segment names of the scenario key segments; the text statistics may be word frequency information, TextRank information, and the like. The scenario type is then mined based on these text statistics. For example, when "emotion" frequently occurs in video titles, the scenario type is determined (mined, identified) as romance; for another example, when "ancient tomb" frequently appears in video titles, the scenario type is determined as tomb raiding. The scenario type keywords may be obtained from a preset word stock.
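The unsupervised keyword-based mining can be sketched as follows; the keyword-to-type word stock here is a hypothetical example, whereas the application would load a preset word stock:

```python
from collections import Counter

# Hypothetical keyword-to-type word stock (an assumption for illustration).
TYPE_KEYWORDS = {"emotion": "romance", "ancient tomb": "tomb raiding",
                 "case": "crime solving"}

def mine_scenario_type(segment_names):
    votes = Counter()
    for name in segment_names:
        for keyword, scenario_type in TYPE_KEYWORDS.items():
            if keyword in name:
                votes[scenario_type] += 1
    return votes.most_common(1)[0][0] if votes else None

names = ["emotion runs high", "an emotion-laden reunion", "the case file"]
print(mine_scenario_type(names))  # romance
```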
In some embodiments, based on a supervised method, combining scenario type keywords in scenario segment names of scenario key segments with n candidate scenario types to obtain n combined text pairs; inputting n combined text pairs into a matching model for scoring; mining scenario types based on scoring results of n combined text pairs;
where n is a positive integer, and the matching model is a machine learning model for identifying the degree of matching between the input text and the candidate scenario types.
Schematically, Table 3 shows the correspondence among the mined scenario segment names, scenario characters, and scenario types.
Table 3 (table content is provided as an image in the original publication)
Schematically, for the same storyline, the server may use a summary text generation model to generate a summary over the scenario segment names of the multiple scenario key segments belonging to that storyline, obtaining the storyline name; or, the server may cluster the keywords in those scenario segment names and combine the frequently occurring keywords into the storyline name; or, the storyline name may be generated manually by an operator.
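The keyword-clustering option for storyline naming can be sketched as follows; the stop-word list, English tokens, and join format are assumptions for illustration:

```python
from collections import Counter

STOP_WORDS = {"the", "a", "of", "and"}  # assumed stop-word list

def storyline_name(segment_names, top_k=3):
    """Join the top_k most frequent non-stop words across a storyline's
    scenario segment names into a storyline name."""
    words = [w for name in segment_names for w in name.lower().split()
             if w not in STOP_WORDS]
    return " ".join(w for w, _ in Counter(words).most_common(top_k))

names = ["Li investigates the death", "Li finds the clue", "Li confronts the killer"]
print(storyline_name(names))  # li investigates death
```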
Schematically, when at least two target secondary authored videos correspond to the same scenario key segment, one target secondary authored video is selected from them based on a scoring dimension as the target secondary authored video for that scenario key segment in the scenario storyline, i.e., the secondary authored video that is ultimately played for the scenario key segment.
In summary, in the method provided in this embodiment, in the case that there are multiple secondary creation videos (short videos) obtained based on the original episode video clip, the target secondary creation videos belonging to the key segments of the episode are screened out from the multiple secondary creation videos, and the episode story line of the original episode video is generated based on the target secondary creation videos.
Moreover, because different secondary authored videos may originate from different authors editing the highlight plots, the resulting storyline organization more accurately reflects public taste and avoids being constrained by the perspective of an individual staff member or popular internet user.
Fig. 8 illustrates a flowchart of a method for visualizing a video scenario provided in an exemplary embodiment of the present application. The method may be performed by a terminal, the method comprising:
step 602: displaying a story line page of a first storyline of an original episode video, the story line page including introduction information of at least two storyline key segments belonging to the first storyline;
the storyline page is a user interface for displaying a storyline of the original episode video. Illustratively, a storyline is presented using a storyline page. Alternatively, multiple storylines are presented using the same storyline page. The present embodiment is illustrated with one storyline using one storyline page for presentation.
The same original episode video may have one or more storylines. In the case where there are multiple storylines, the first storyline is one of the multiple storylines. For example, the first storyline is the most dominant one of the plurality of storylines; for another example, the first storyline is one of the plurality of storylines that best matches the search keyword of the user; for another example, the first storyline is any one or random one of a plurality of storylines.
Referring schematically to fig. 9, the storyline page 10 includes a scenario clip playing area 11 and a scenario introduction area 12. A first storyline "Storyline 1: Li xx investigates the cause of death of Master Lin xx" is displayed in the scenario introduction area 12, together with introduction information 13 of a plurality of scenario key segments belonging to the first storyline. Taking the scenario key segment ranked second as an example, its introduction information 13 includes the following:
the scenario segment name of the scenario key segment: "Reunion with old classmate He xx";
identification "segment 02" of scenario key segment;
duration "05:36" of the secondarily authored video corresponding to the scenario key fragment;
the scenario character appearing in the scenario key segment: "He xx";
segment tags "high energy" for scenario key segments.
It should be noted that, besides the introduction information of at least two plot key segments, the first plot line may further include other plot introduction information, such as text, picture, audio, etc., and the specific form of the plot introduction information further included in the first plot line is not limited in this embodiment.
Step 604: receiving triggering operation of introduction information of a first scenario key fragment in at least two scenario key fragments;
The first scenario key section is any one of the at least two scenario key sections, or the first scenario key section is one of the at least two scenario key sections that is of interest to the user.
The triggering operation may be at least one of a single click operation, a double click operation, a long press operation, a sliding operation, a pressure touch operation, a hover touch operation, a binocular gaze operation, a gesture operation, and a somatosensory operation.
Illustratively, if the user is interested in a first scenario key section of the at least two scenario key sections, the first scenario key section may be clicked. Correspondingly, the terminal receives the triggering operation of the user on the introduction information of the first scenario key fragment in the at least two scenario key fragments.
Step 606: and responding to the triggering operation, and playing a secondary authored video corresponding to the first scenario key fragment, wherein the secondary authored video is obtained by editing the original scenario video.
In some embodiments, in response to the triggering operation, jumping from the storyline page to another video playing page, and playing the secondarily authored video corresponding to the first scenario key segment in the other video playing page.
The secondary authored video corresponding to the first scenario key segment is the target secondary authored video corresponding to the first scenario key segment mentioned in the above embodiments. When the first scenario key segment corresponds to multiple target secondary authored videos, the secondary authored video played here is typically one screened out of those multiple target secondary authored videos for the first scenario key segment.
In some embodiments, the storyline page includes a storyline clip play area, and in response to the triggering operation, the secondarily authored video corresponding to the first storyline key clip is played in the storyline clip play area of the storyline page.
Referring to fig. 9 schematically, after the user clicks the first scenario key section "section 02", a secondary authored video corresponding to the first scenario key section "section 02" is played in the scenario section playing area 11 of the story line page.
In summary, in the method provided in this embodiment, in the case that there are multiple secondary creation videos (short videos) obtained based on the original episode video clip, the target secondary creation videos belonging to the critical episode are screened out from the multiple secondary creation videos, and the episode story line of the original episode video is generated based on the target secondary creation videos.
Fig. 10 shows a flowchart of a method for visualizing a video scenario provided in an exemplary embodiment of the present application. The method may be performed by a terminal, the method comprising:
Step 602a: a viewing portal for displaying a first storyline of an original episode video;
the terminal has a client program running thereon, which may be an online video program, a browser program, a short video program, a microblog program, etc.
A viewing entry of the first storyline of the original episode video is displayed in the client program. The viewing entry is a graphical user interface (UI) element in the form of a button, menu item, search entry, or the like.
Illustratively, this step may employ at least one of four possible designs:
possibly design one
Assume that the first storyline is a storyline corresponding to a first scenario character; the method comprises the following steps:
and displaying a character introduction interface of the first scenario character, wherein the character introduction interface comprises a viewing entrance of a scenario story line corresponding to the first scenario character. Alternatively, where the first storyline character has a plurality of storylines, the character introduction interface may include the plurality of storylines.
Referring to fig. 11, the terminal displays a character introduction interface 20 of the first scenario character "Li xx". The character introduction interface 20 includes 2 storylines associated with "Li xx":
Storyline 1: "Li xx investigates the cause of death of Master Lin xx".
Storyline 2: "The Mai xx case is reopened for investigation fourteen years later".
If there are more storylines associated with the first storyline character "Li xx", then multiple storylines may be displayed using multiple cards, one card for each storyline. The user views different storyline cards by sliding the storyline cards up and down. When a user clicks on a certain storyline card (viewing entry of the storyline), a storyline page corresponding to the storyline is entered.
Illustratively, the viewing portal of the storyline has displayed thereon at least one of the following information:
names of storylines;
the number of scenario key segments belonging to the current scenario story line;
a selection or summary of the high-energy or highlight scenario key segments belonging to the current storyline.
Illustratively, when the character introduction interface of the first scenario character is displayed, the following manner may be adopted:
displaying a character relationship diagram of the original episode video, the character relationship diagram including the first scenario character and the in-plot character relationships between the first scenario character and the other characters;
and responding to the selected operation of the first scenario character, and displaying a character introduction interface of the first scenario character.
Referring to fig. 11, the terminal displays a character relationship diagram 24 of the original episode video. The character relationship diagram 24 includes, for example: Li xx, He xx, and Ma xx. Li xx and He xx are college classmates, and Li xx and Ma xx are friends. If the user selects Li xx, the terminal displays the character introduction interface of Li xx; if the user selects He xx, the terminal displays the character introduction interface of He xx; if the user selects Ma xx, the terminal displays the character introduction interface of Ma xx.
Possible design two
Assume that the first storyline is a storyline corresponding to a first scenario type; the method comprises the following steps:
and displaying a storyline aggregation interface of the original episode video, wherein the storyline aggregation interface comprises at least two storylines of different storylines, and the at least two storylines of different storylines comprise viewing inlets of the storylines corresponding to the first storyline.
Referring to fig. 12, assume that the original episode video includes at least four storylines:
Storyline 1: "Li xx investigates the cause of death of Master Lin xx";
Storyline 2: "The Mai xx case is reopened for investigation fourteen years later";
Storyline 3: "Ma xx suddenly dies in jail";
Storyline 4: "Who is the mole?".
The four storylines can be displayed together in the same storyline aggregation interface, which serves as the viewing entry for each storyline. If there are many storylines, some of them can be collapsed; after the "view more storylines" button is clicked, the collapsed remaining storylines can be viewed.
Possible design three
Assume that the first storyline is a storyline matching a search keyword; the method comprises the following steps:
in the event that a search keyword is received, a search results interface is displayed that matches the search keyword, the search results interface including a viewing entry for a storyline that matches the search keyword.
Referring to fig. 13, the user inputs the search keyword "Suspense Episode A" in a search box 31, and the terminal displays a search result interface matching the search keyword "Suspense Episode A", the search result interface including viewing entries of storylines matching the search keyword. For example, the search result interface includes Storyline 1 of the character "Li xx". If there are multiple matched storylines, they can be displayed on separate storyline cards. In fig. 13, if the user slides the storyline card 32 of Storyline 1 to the left, the storyline card of Storyline 2 can be viewed.
Possible design four
The first storyline is a storyline to which the target secondary authored video belongs; the method comprises the following steps:
and displaying a video playing interface of the target secondary authored video, wherein the video playing interface comprises a viewing inlet of a storyline to which the target secondary authored video belongs.
Referring to fig. 14, the terminal displays the video playing interface of a short video. Below the "Today's Recommendation" section of the video playing interface, a play entry for the short video "[xx Storm] High-Energy Clip" is displayed, along with a view button "view more video recommendations of this storyline" for the storyline to which that short video belongs. After the user clicks the view button, other short videos of the storyline to which "[xx Storm] High-Energy Clip" belongs can be viewed.
It should be noted that the different designs described above may also be implemented in combination. As shown in fig. 15, the user inputs the search keyword "Suspense Episode A" in the search entry of the browser, and the terminal displays a search result page 30 of the original episode video "Suspense Episode A". The search result page 30 includes a brief introduction of "Suspense Episode A", a play entry for each episode, and storyline aggregation cards aggregated by different scenario characters.
After the user selects the storyline aggregation card of the scenario character "Li xx", the terminal displays the character introduction page 20 of the scenario character "Li xx". The character introduction page 20 includes the character relationship diagram of "Li xx", Storyline 1, and Storyline 2. After the user clicks the viewing entry of Storyline 1, the storyline page 10 of Storyline 1 corresponding to the scenario character "Li xx" is displayed, with introduction information of a plurality of scenario key segments included on the storyline page 10.
Since the scenario character "Li xx" has 2 storylines and the currently displayed storyline page is that of Storyline 1, the terminal, in response to a sliding operation (such as a left swipe), switches from the storyline page of the first storyline "Storyline 1" to the storyline page of the second storyline "Storyline 2".
Step 602b: in response to a triggering operation on the viewing entry, displaying a storyline page, the storyline page including introduction information of at least two scenario key segments belonging to the first storyline;
the introduction information of at least two scenario key segments belonging to the first scenario story line may be displayed in a time-axis order from front to back. If the current page cannot display the introduction information of all the scenario key fragments, the up-down sliding operation can be used for triggering and viewing the introduction information of other scenario key fragments.
Step 604: receiving triggering operation of introduction information of a first scenario key fragment in at least two scenario key fragments;
and responding to the triggering operation, and playing the secondary authored video corresponding to the first scenario key fragment in the scenario fragment playing area.
The first scenario key section is any one of the at least two scenario key sections, or the first scenario key section is one of the at least two scenario key sections that is of interest to the user.
The triggering operation may be at least one of a single click operation, a double click operation, a long press operation, a sliding operation, a pressure touch operation, a hover touch operation, a binocular gaze operation, a gesture operation, and a somatosensory operation.
Illustratively, if the user is interested in a first scenario key section of the at least two scenario key sections, the first scenario key section may be clicked. Correspondingly, the terminal receives the triggering operation of the user on the introduction information of the first scenario key fragment in the at least two scenario key fragments.
Step 606: and responding to the triggering operation, and playing a secondary authored video corresponding to the first scenario key fragment, wherein the secondary authored video is obtained by editing the original scenario video.
In some embodiments, in response to the triggering operation, jumping from the storyline page to another video playing page, and playing the secondarily authored video corresponding to the first scenario key segment in the other video playing page.
In some embodiments, the storyline page includes a storyline clip play area, and in response to the triggering operation, the secondarily authored video corresponding to the first storyline key clip is played in the storyline clip play area of the storyline page.
Referring to fig. 9 schematically, a user may select different scenario key sections in the scenario introduction area below and then view the selected scenario key sections in the scenario section play area 11 above.
In summary, in the method provided in this embodiment, when there are multiple secondarily authored videos (short videos) obtained by clipping the original episode video, the target secondarily authored videos belonging to key scenarios are screened out from the multiple secondarily authored videos, and the storyline of the original episode video is generated based on the target secondarily authored videos.
In a specific example, taking the original episode video as a long video and the secondarily authored videos as short videos, and referring to fig. 16, the processing flow of the server includes two stages: a scenario mining stage 92 and a scenario context stage 94.
Scenario mining stage 92
Scenario mining relies on the long video assets of the original episode video and the short video assets obtained by secondary authoring (clipping, splicing) based on those long video assets. The long video assets comprise the video data of the original episode video and corresponding consumption data (such as user barrages and comments); the short video assets comprise the secondarily authored short video data and related production/consumption data (author level, author posts, number of authors, short video likes, short video comments, etc.).
The specific mining steps comprise scenario detection (key scenario fragment extraction) and scenario extraction:
scenario detection
Short-to-long matching: using video content understanding capabilities (mainly video frame pattern recognition), multiple short video resources that partially match clips of the long video are recalled, together with preliminary matching information (the approximate start and end time points at which each short video matches the long video, i.e., the original episode time interval). Signals carried by the short video resources are used to preliminarily filter out low-quality resources such as multi-episode mash-up clips. Short-to-long matching data can be obtained, for example, for the IP episode "Sa Hei xx".
For each episode of long video content, the production signals and consumption signals at each time point are counted, and peak time segments, i.e., the key scenario time segments of each episode, are obtained through a peak detection strategy. Counting the "production signal" at each time point means accumulating the original episode time intervals corresponding to the multiple candidate short videos to obtain the distribution of short video counts over the whole timeline. Similarly, histogram distribution statistics over the timeline can be performed for the "consumption signal" (e.g., the users' barrage signal).
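The accumulation of the production signal described above can be sketched as follows; the function name, interval list, and timeline length are illustrative assumptions, not values from this application:

```python
def production_signal(short_video_intervals, timeline_seconds):
    """Accumulate the original-episode time intervals of candidate short
    videos into a per-second count of matching short videos."""
    signal = [0] * timeline_seconds
    for start, end in short_video_intervals:
        # Each short video contributes +1 over its matched time interval.
        for t in range(max(0, start), min(timeline_seconds, end)):
            signal[t] += 1
    return signal

# Three clips overlapping around t = 12..14 produce a peak in that region.
sig = production_signal([(5, 15), (10, 20), (12, 18)], 30)
```

The consumption signal (e.g., per-second barrage counts) can be accumulated into a histogram of the same length in the same way, so the two distributions are directly comparable on the timeline.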
The two signals are accumulated with certain weights, and the time intervals corresponding to local peaks of the combined signal are screened out by a strategy (for example, requiring that a signal value be larger than the signal values at the preceding and following time points and exceed a certain threshold), thereby obtaining the time segments of multiple key scenarios. Accumulating the short video resources associated with each time point within a time segment then yields the short video resources associated with that segment.
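A minimal sketch of the weighted fusion and local-peak screening, assuming example weights and a threshold that the application does not specify:

```python
def key_scenario_peaks(production, consumption,
                       w_prod=0.6, w_cons=0.4, threshold=1.0):
    """Fuse the two per-second signals with assumed weights, then keep time
    points whose fused value exceeds both neighbours and the threshold."""
    fused = [w_prod * p + w_cons * c for p, c in zip(production, consumption)]
    peaks = []
    for t in range(1, len(fused) - 1):
        # Local-peak condition from the text: larger than both neighbours
        # and above a certain threshold.
        if fused[t] > fused[t - 1] and fused[t] > fused[t + 1] and fused[t] > threshold:
            peaks.append(t)
    return peaks

# Two local maxima in the fused signal -> two candidate key scenario points.
peaks = key_scenario_peaks([0, 1, 3, 1, 0, 2, 5, 2, 0],
                           [0, 0, 2, 0, 0, 1, 4, 1, 0])
```

In practice each peak time point would be widened into a time interval and the short videos overlapping that interval collected as its associated resources.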
Scenario extraction
Structured information such as scenario names, scenario characters, and scenario types is mined and extracted from the time segments of the multiple key scenarios obtained by scenario detection and their corresponding short video resources.
Mining scenario characters: scenario characters are mined from information such as the titles and descriptions of the related short video resources, based on named entity recognition technology and a pre-constructed episode character entity dictionary (names).
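The dictionary-lookup half of this step can be sketched as below; the alias-to-character mapping and the example segment name are hypothetical, and a real system would combine such lookups with a trained NER model:

```python
def match_characters(segment_name, character_dict):
    """Find which entries of a pre-built episode character entity dictionary
    (alias -> canonical character name) occur in a scenario segment name."""
    found = []
    for alias, character in character_dict.items():
        if alias in segment_name and character not in found:
            found.append(character)
    return found

# Hypothetical dictionary and segment name for illustration only.
chars = match_characters(
    "Li Chengyang investigates Lin Han's cause of death",
    {"Li Chengyang": "Li Chengyang", "Lin Han": "Lin Han"},
)
```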
Mining scenario names: the short video titles associated with a scenario key segment may be clustered. For the same scenario key segment, the candidate short video titles are scored in combination with scenario character and scenario keyword information, and the highest-scoring short video title is selected as the scenario name of that scenario key segment; a trained text generation model may also be considered for generating scenario names.
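The title-scoring step can be sketched with a toy scoring function that simply counts mentions of known characters and keywords; the candidate titles, character list, and keyword list below are assumed examples:

```python
def pick_scenario_name(candidate_titles, characters, keywords):
    """Score each candidate short-video title by how many known scenario
    characters and scenario keywords it mentions; highest score wins."""
    def score(title):
        return (sum(c in title for c in characters)
                + sum(k in title for k in keywords))
    return max(candidate_titles, key=score)

name = pick_scenario_name(
    ["funny moments compilation",
     "Li Chengyang investigates the cause of death",
     "Lin Han flashback"],
    characters=["Li Chengyang", "Lin Han"],
    keywords=["cause of death"],
)
```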
Mining scenario types: the unsupervised method mines type keywords (such as "romance" or "gunfight") from the titles and descriptions of related short videos by word-frequency statistics or the TextRank method, and uses them as scenario type information; the supervised method manually specifies several scenario types, such as character story lines (e.g., "Li Chengyang investigates master Lin Han's cause of death"), and selects the best scenario type by scoring, with a matching model, the text pairs composed of the scenario name and each candidate scenario type.
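The word-frequency variant of the unsupervised method can be sketched as follows; the type vocabulary and example texts are assumptions for illustration:

```python
from collections import Counter

def mine_type_keywords(texts, type_vocab, top_k=2):
    """Count occurrences of each word from an assumed scenario-type
    vocabulary across short-video titles/descriptions and return the
    most frequent ones as scenario type keywords."""
    counts = Counter()
    for text in texts:
        for word in type_vocab:
            counts[word] += text.lower().count(word)
    # Keep only keywords that actually occurred.
    return [word for word, n in counts.most_common(top_k) if n > 0]

types = mine_type_keywords(
    ["a gunfight breaks out", "romance scene", "another gunfight"],
    ["gunfight", "romance", "comedy"],
)
```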
Scenario context stage 94
The scenario detection step yields one or more short video resources corresponding to each scenario key segment, and the best-matching short video can be further screened out from the candidate short video resources as the descriptive video of that scenario. The screening method performs comprehensive scoring and ranking based on the matching degree between the short video title and the scenario name, the matching degree between the short video's mapped time and the scenario detection time, and the quality of the short video (whether the content is clear, the author level, etc.), and takes the highest-scoring short video.
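The comprehensive scoring can be sketched as a weighted sum over the three sub-scores named above; the weights and candidate records below are illustrative assumptions:

```python
def best_short_video(candidates, weights=(0.5, 0.3, 0.2)):
    """Rank candidate short videos for one key scenario segment by a
    weighted sum of title-match, time-interval-match, and quality scores
    (each assumed to be normalized to [0, 1]) and return the best one."""
    w_title, w_time, w_quality = weights
    def total(c):
        return (w_title * c["title_match"]
                + w_time * c["time_match"]
                + w_quality * c["quality"])
    return max(candidates, key=total)

best = best_short_video([
    {"id": "a", "title_match": 0.9, "time_match": 0.5, "quality": 0.5},
    {"id": "b", "title_match": 0.4, "time_match": 0.9, "quality": 0.9},
])
```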
Through the above three steps, the key scenario time segments of each episode, the corresponding structured scenario content information (scenario names, scenario characters, scenario types, etc.), and the best-matching short videos can be mined, supporting subsequent scenario context construction and the satisfaction of search results.
Fig. 17 shows a block diagram of a video scenario visualization apparatus provided in an exemplary embodiment of the present application. The video scenario visualization device comprises:
a video acquisition module 1720, configured to acquire an original episode video and a plurality of secondary authored videos, where the secondary authored videos are videos obtained by editing the original episode video;
The scenario mining module 1740 is configured to screen out a target secondary authored video containing scenario key segments of the original scenario video from the plurality of secondary authored videos;
and a storyline context module 1760 configured to generate a storyline of the original storyline video based on the target secondarily authored video, where the storyline includes at least two of the target secondarily authored videos.
In some embodiments, the scenario mining module 1740 is configured to calculate scenario key indexes of a plurality of play time points in the original scenario video based on at least one of the first video data and the second video data; screening secondary creation videos containing scenario key fragments of the original scenario video based on scenario key indexes of the multiple play time points;
wherein the first video data comprises data related to a consumption process of the original episode video, and the second video data comprises data related to a production process and/or a consumption process of the secondary authored video.
In some embodiments, the scenario mining module 1740 is configured to obtain, for each of the play time points, the first video data and the second video data corresponding to the play time point; and carrying out weighted summation on the first video data and the second video data corresponding to the playing time point, and calculating to obtain the scenario key index of the playing time point.
In some embodiments, the scenario mining module 1740 is configured to construct contrast information of the scenario key index by using a time axis corresponding to the playing time point as a first coordinate axis and an index axis corresponding to the scenario key index as a second coordinate axis, where the contrast information includes any one of a function curve, a histogram, and a discrete point sequence; screening out peak scenario key indexes meeting local peak conditions based on the comparison information of the scenario key indexes; determining the scenario key fragment based on a play time point corresponding to the peak scenario key index; and screening out target secondary authored videos of which the playing intervals contain the scenario key fragments.
In some embodiments, a scenario context module 1760 for mining scenario segment names for each of the scenario key segments based on the target secondarily authored video;
grouping the scenario segment names of the scenario key segments according to the scenario context, and generating the storyline of the original episode video from the target secondarily authored videos corresponding to the scenario key segments belonging to the same group.
In some embodiments, the scenario context module 1760 is configured to cluster video titles of at least two target secondarily authored videos to obtain at least two candidate scenario fragment names when at least two target secondarily authored videos correspond to the same scenario key fragment; scoring the at least two candidate scenario segment names based on scenario keywords in the video title; and determining the scenario fragment names corresponding to the scenario key fragments from the at least two candidate scenario fragment names based on the scoring result.
In some embodiments, the context comprises a context character;
the scenario context module 1760 is configured to group scenario segment names of the scenario key segments according to the scenario characters, and generate a scenario story line of the original scenario video by creating a target secondary creation video corresponding to the scenario key segments of the same scenario character.
The scenario context module 1760 is configured to perform named entity recognition on the scenario segment names of the scenario key segments to obtain a character entity recognition result, and identify the scenario characters based on the character entity recognition result; and/or to perform similar-entity recognition on the scenario segment names of the scenario key segments based on a pre-constructed episode character entity dictionary to obtain a similar-entity recognition result, and identify the scenario characters based on the similar-entity recognition result.
In some embodiments, the context comprises a context type;
the scenario context module 1760 is configured to group scenario segment names of the scenario key segments according to the scenario types, and generate a scenario story line of the original scenario video from the scenario key segments belonging to the same scenario type.
In some embodiments, the scenario context module 1760 is configured to count text statistics of scenario segment names of the scenario key segments; mining the scenario type based on the text statistics; and/or combining the scenario fragment names of the scenario key fragments with n candidate scenario types to obtain n combined text pairs; inputting the n combined text pairs into a matching model for scoring; mining the scenario type based on scoring results of the n combined text pairs;
wherein n is a positive integer, and the matching model is a machine learning model for identifying the degree of matching between the input text and the candidate scenario types.
In some embodiments, the context module 1760 is configured to:
and under the condition that at least two target secondary creation videos correspond to the same plot key segment, selecting one target secondary creation video from the at least two target secondary creation videos based on a scoring dimension, and taking the selected target secondary creation video as the target secondary creation video corresponding to the plot key segment in the plot story line.
Fig. 18 shows a block diagram of a video scenario visualization apparatus provided in an exemplary embodiment of the present application. The video scenario visualization device comprises:
A display module 1820, configured to display a storyline page of a first storyline of an original episode video, where the storyline page includes introduction information of at least two storyline key segments belonging to the first storyline;
the man-machine interaction module 1840 is configured to receive a triggering operation on the introduction information of a first scenario key segment among the at least two scenario key segments, as well as other triggering, selecting, or man-machine interaction operations;
the display module 1820 is configured to respond to the triggering operation, and play a secondary authored video corresponding to the first scenario key segment, where the secondary authored video is a video obtained by editing the original scenario video.
In some embodiments, the display module 1820 is configured to display a viewing portal of the first storyline of the original episode video;
and responding to the triggering operation on the viewing portal, and displaying a story line page of the first storyline of the original episode video.
In some embodiments, the first storyline is a storyline corresponding to a first storyline character;
the display module 1820 is configured to display a character introduction interface of the first scenario character, where the character introduction interface includes a viewing entry of a scenario story line corresponding to the first scenario character.
In some embodiments, the display module 1820 is configured to display a character relationship diagram of the original episode video, where the character relationship diagram includes the in-episode character relationships between the first scenario character and characters other than the first scenario character; and to display the character introduction interface of the first scenario character in response to a selection operation on the first scenario character.
In some embodiments, the first storyline is a storyline corresponding to a first storyline type;
the display module 1820 is configured to display a storyline aggregation interface of the original episode video, where the storyline aggregation interface includes storylines of at least two different storyline types, and the storylines of the at least two different storyline types include a viewing entry of the storyline corresponding to the first storyline type.
In some embodiments, the first storyline is a storyline matching a search keyword;
the display module 1820 is configured to display a search result interface matched with the search keyword, where the search result interface includes a viewing entry of a storyline matched with the search keyword, when the search keyword is received.
In some embodiments, the first storyline is a storyline to which the target secondarily authored video belongs;
the display module 1820 is configured to display a video playing interface of the target secondarily authored video, where the video playing interface includes a viewing entry of a storyline to which the target secondarily authored video belongs.
In some embodiments, the storyline page includes a scenario introduction area and a scenario segment play area, the scenario introduction area displaying introduction information of at least two scenario key segments belonging to the first scenario storyline;
the display module 1820 is configured to play, in response to the triggering operation, a secondary authored video corresponding to the first scenario key segment in the scenario segment play area.
In some embodiments, the introduction information of the first scenario key section includes at least one of the following information:
a scenario segment name of the first scenario key segment;
identification of the first scenario key segment;
duration of the secondarily created video corresponding to the first scenario key segment;
the scenario characters appearing in the first scenario key fragment;
and the segment labels of the first scenario key segments.
Fig. 19 is a schematic structural diagram of a computer device according to an embodiment of the present application. Generally, the computer device 1900 includes: a processor 1920 and a memory 1940.
Processor 1920 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1920 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1920 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1920 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering the content that needs to be displayed on the display screen. In some embodiments, the processor 1920 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1940 may include one or more computer-readable storage media, which may be non-transitory. Memory 1940 may also include high-speed random access memory, as well as nonvolatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1940 is used to store at least one instruction for execution by processor 1920 to implement the methods provided by the method embodiments herein.
In an exemplary embodiment, there is also provided a computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which are loaded and executed by a processor to implement the method for visualizing a video scenario provided by the above-described respective method embodiments.
Optionally, the application further provides a computer program product containing instructions which, when run on a computer device, cause the computer device to perform the method of visualizing a video scenario described in the above aspects.
The sequence numbers of the foregoing embodiments of the present application are for description only and do not imply any preference among the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within its protection scope.

Claims (20)

1. A method of visualizing a video scenario, the method comprising:
acquiring an original episode video and a plurality of secondary authored videos, wherein the secondary authored videos are videos obtained by editing the original episode video;
Screening target secondary creation videos containing scenario key fragments of the original episode video from the plurality of secondary creation videos;
and generating a storyline of the original episode video based on the target secondarily authored video, wherein the storyline comprises at least two target secondarily authored videos.
2. The method of claim 1, wherein the screening out the target secondary authored video containing the scenario-critical sections of the original episode video from the plurality of secondary authored videos comprises:
calculating scenario key indexes of a plurality of play time points in the original episode video based on at least one of the first video data and the second video data;
screening secondary creation videos containing scenario key fragments of the original scenario video based on scenario key indexes of the multiple play time points;
wherein the first video data comprises data related to a consumption process of the original episode video, and the second video data comprises data related to a production process and/or a consumption process of the secondary authored video.
3. The method of claim 2, wherein the calculating scenario key indexes for a plurality of play time points in the original episode video based on at least one of the first video data and the second video data comprises:
For each playing time point, acquiring the first video data and the second video data corresponding to the playing time point;
and carrying out weighted summation on the first video data and the second video data corresponding to the playing time point, and calculating to obtain the scenario key index of the playing time point.
4. The method of claim 2, wherein screening out the target secondarily authored video containing the scenario key segments of the original scenario video based on the scenario key indexes of the plurality of play time points comprises:
taking a time axis corresponding to the playing time point as a first coordinate axis, and taking an index axis corresponding to the scenario key index as a second coordinate axis, and constructing comparison information of the scenario key index, wherein the comparison information comprises any one of a function curve, a histogram and a discrete point sequence;
screening out peak scenario key indexes meeting local peak conditions based on the comparison information of the scenario key indexes;
determining the scenario key fragment based on a play time point corresponding to the peak scenario key index;
and screening out target secondary authored videos of which the playing intervals contain the scenario key fragments.
5. The method of any of claims 1-4, wherein the generating a storyline of the original episode video based on the target secondary authored video comprises:
mining scenario segment names of each scenario key segment based on the target secondary creation video;
grouping the scenario segment names of the scenario key segments according to the scenario context, and generating a storyline of the original episode video from the target secondarily authored videos corresponding to the scenario key segments belonging to the same group.
6. The method of claim 5, wherein mining scenario segment names for each of the scenario key segments based on the target secondary authoring video comprises:
clustering video titles of at least two target secondarily authored videos under the condition that the same scenario key fragment corresponds to at least two target secondarily authored videos, so as to obtain at least two candidate scenario fragment names;
scoring the at least two candidate scenario segment names based on scenario keywords in the video title;
and determining the scenario fragment names corresponding to the scenario key fragments from the at least two candidate scenario fragment names based on the scoring result.
7. The method of claim 5, wherein the scenario context comprises scenario characters and/or scenario types;
grouping the scenario segment names of the scenario key segments according to the scenario context, and generating a storyline of the original episode video from the scenario key segments belonging to the same group, comprises:
grouping the scenario segment names of the scenario key segments according to the scenario characters, and generating a storyline of the original episode video from the target secondarily authored videos corresponding to the scenario key segments belonging to the same scenario character;
or,
grouping the scenario segment names of the scenario key segments according to the scenario types, and generating a storyline of the original episode video from the scenario key segments belonging to the same scenario type;
or,
grouping the scenario segment names of the scenario key segments according to the scenario characters and the scenario types, and generating a storyline of the original episode video from the scenario key segments belonging to the same scenario character and scenario type.
8. A method of visualizing a video scenario, the method comprising:
displaying a story line page of a first storyline of an original episode video, the story line page including introduction information of at least two storyline key segments belonging to the first storyline;
receiving triggering operation of introduction information of a first scenario key fragment in the at least two scenario key fragments;
and responding to the triggering operation, and playing the secondary authored video corresponding to the first scenario key fragment, wherein the secondary authored video is obtained by clipping the original scenario video.
9. The method of claim 8, wherein the displaying the storyline page of the first storyline of the original episode video comprises:
displaying a viewing portal of the first storyline of the original episode video;
and responding to the triggering operation on the viewing portal, and displaying a story line page of the first storyline of the original episode video.
10. The method of claim 8, wherein the first storyline is a storyline corresponding to a first storyline character;
The viewing portal of the first storyline displaying the original episode video includes:
and displaying a character introduction interface of the first scenario character, wherein the character introduction interface comprises a viewing entrance of a scenario story line corresponding to the first scenario character.
11. The method of claim 10, wherein the displaying the character presentation interface of the first scenario character comprises:
displaying a character relation diagram of the original episode video, wherein the character relation diagram comprises the first episode character and the in-episode character relation of other characters except the first episode character;
and responding to the selection operation of the first scenario character, and displaying the character introduction interface of the first scenario character.
12. The method of claim 8, wherein the first storyline is a storyline corresponding to a first storyline type;
the viewing portal of the first storyline displaying the original episode video includes:
and displaying a storyline aggregation interface of the original episode video, wherein the storyline aggregation interface comprises storylines of at least two different storyline types, and the storylines of the at least two different storyline types comprise a viewing entry of the storyline corresponding to the first storyline type.
13. The method of claim 8, wherein the first storyline is a storyline matching a search keyword;
the viewing portal of the first storyline displaying the original episode video includes:
and displaying a search result interface matched with the search keyword in a case where the search keyword is received, wherein the search result interface comprises a viewing entry of the storyline matched with the search keyword.
14. The method of claim 8, wherein the first storyline is a storyline to which the target secondarily authored video belongs;
the viewing portal of the first storyline displaying the original episode video includes:
and displaying a video playing interface of the target secondarily authored video, wherein the video playing interface comprises a viewing entry of the storyline to which the target secondarily authored video belongs.
15. A method according to any one of claims 8 to 14, wherein the storyline page comprises a storyline presentation area and a storyline clip play area, the storyline presentation area displaying presentation information of at least two storyline key clips belonging to the first storyline;
And responding to the triggering operation, and playing the secondary authored video corresponding to the first scenario key fragment, wherein the secondary authored video comprises the following components:
and responding to the triggering operation, and playing the secondary authored video corresponding to the first scenario key fragment in the scenario fragment playing area.
16. A video scenario visualization apparatus, the apparatus comprising:
the video acquisition module is used for acquiring an original episode video and a plurality of secondary authored videos, wherein the secondary authored videos are videos obtained by editing the original episode video;
the scenario mining module is used for screening target secondary creation videos containing scenario key fragments of the original scenario videos from the plurality of secondary creation videos;
and the plot context module is used for generating plot story lines of the original plot video based on the target secondary authored video, wherein the plot story lines comprise at least two target secondary authored videos.
17. A video scenario visualization apparatus, the apparatus comprising:
the system comprises a display module, a first video processing module and a second video processing module, wherein the display module is used for displaying a story line page of a first storyline story line of an original episode video, and the story line page comprises introduction information of at least two storyline key fragments belonging to the first storyline story line;
The man-machine interaction module is used for receiving triggering operation of introduction information of a first scenario key fragment in the at least two scenario key fragments;
and the display module is used for responding to the triggering operation and playing the secondary creation video corresponding to the first scenario key fragment, wherein the secondary creation video is obtained by clipping the original scenario video.
18. A computer device, the computer device comprising: a processor and a memory storing a computer program to be run by the processor to cause the computer device to implement a method of visualizing a video scenario as claimed in any one of claims 1 to 15.
19. A computer readable storage medium storing a computer program for execution by a processor to cause a device having the processor to implement a method of visualizing a video scenario according to any one of claims 1 to 15.
20. A computer program product, characterized in that the computer program product stores a computer program that is executed by a processor to cause a device having the processor to implement a method of visualizing a video scenario according to any one of claims 1 to 15.
CN202111521920.4A 2021-12-13 2021-12-13 Video scenario visualization method, device, equipment and storage medium Pending CN116264625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111521920.4A CN116264625A (en) 2021-12-13 2021-12-13 Video scenario visualization method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116264625A true CN116264625A (en) 2023-06-16

Family

ID=86722108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111521920.4A Pending CN116264625A (en) 2021-12-13 2021-12-13 Video scenario visualization method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116264625A (en)

Similar Documents

Publication Publication Date Title
CN111143610B (en) Content recommendation method and device, electronic equipment and storage medium
US11048752B2 (en) Estimating social interest in time-based media
EP3855753B1 (en) Method and apparatus for locating video playing node, device and storage medium
US9253511B2 (en) Systems and methods for performing multi-modal video datastream segmentation
CN108776676B (en) Information recommendation method and device, computer readable medium and electronic device
CN112533051B (en) Barrage information display method, barrage information display device, computer equipment and storage medium
US20160014482A1 (en) Systems and Methods for Generating Video Summary Sequences From One or More Video Segments
US11166076B2 (en) Intelligent viewer sentiment predictor for digital media content streams
CN111984689A (en) Information retrieval method, device, equipment and storage medium
CN108028962A (en) Video service condition information is handled to launch advertisement
CN109511015B (en) Multimedia resource recommendation method, device, storage medium and equipment
CN113779381B (en) Resource recommendation method, device, electronic equipment and storage medium
US10482142B2 (en) Information processing device, information processing method, and program
US20110179003A1 (en) System for Sharing Emotion Data and Method of Sharing Emotion Data Using the Same
EP2874102A2 (en) Generating models for identifying thumbnail images
Chen et al. Livesense: Contextual advertising in live streaming videos
CN111597446B (en) Content pushing method and device based on artificial intelligence, server and storage medium
CN111581435A (en) Video cover image generation method and device, electronic equipment and storage medium
CN116264625A (en) Video scenario visualization method, device, equipment and storage medium
Ikeda et al. Predicting online video advertising effects with multimodal deep learning
CN110309415B (en) News information generation method and device and readable storage medium of electronic equipment
CN113709529B (en) Video synthesis method, device, electronic equipment and computer readable medium
CN116980693A (en) Image processing method, device, electronic equipment and storage medium
CN117221623A (en) Resource determination method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40087305
Country of ref document: HK

SE01 Entry into force of request for substantive examination