CN116668789A - Video positioning playing method and device - Google Patents

Video positioning playing method and device

Info

Publication number
CN116668789A
CN116668789A (application CN202310505844.0A)
Authority
CN
China
Prior art keywords
video
playing
preset
video clip
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310505844.0A
Other languages
Chinese (zh)
Inventor
耿皦阳
杨光
丁斯也
来高强
李婷
刘博
李猛
马帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youku Technology Co Ltd
Original Assignee
Beijing Youku Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youku Technology Co Ltd
Priority to CN202310505844.0A
Publication of CN116668789A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

One or more embodiments of the present disclosure relate to the technical field of terminals, and in particular provide a video positioning playing method and device. The video positioning playing method includes detecting a triggering operation on an interactive identifier in a text display area of a video playing page and, in response to detecting the triggering operation, jumping the video playing page to a target video clip for playing. In the embodiments of this specification, the preset keywords are determined based on the video content of the video clips, so a user can intuitively judge from the text of a preset keyword whether the corresponding video clip is of interest, thereby accurately locating and jumping to clips of interest. Jumps can be made both within the same video and across videos, which offers greater flexibility. In addition, because the preset keywords are configured and managed, the interactive identifiers are well controlled, and users are not disturbed by irrelevant interactive identifiers.

Description

Video positioning playing method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of terminal technologies, and in particular, to a video positioning playing method and device.
Background
While watching a video, users have an objective need to jump quickly to content of interest. In the related art, a user generally has to fast-forward or drag the progress bar to reach the content of interest; the operation is cumbersome, and it is difficult to accurately locate the video clip of interest.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a video positioning playing method, apparatus, electronic device, and storage medium, which aim to accurately locate the video clip a user is interested in, enable fast jump playing, and improve the viewing experience.
In a first aspect, one or more embodiments of the present disclosure provide a video positioning playing method, applied to a client, where the method includes:
detecting a triggering operation on an interactive identifier in a text display area of a video playing page; the interactive identifier is an identifier generated based on a preset keyword in the text information posted in the text display area, where the preset keyword is text describing video content;
in response to detecting the triggering operation on the interactive identifier, jumping the video playing page to a target video clip for playing; the target video clip is the video clip associated with the preset keyword corresponding to the interactive identifier.
In one or more embodiments of the present disclosure, jumping the video playing page to the target video clip for playing in response to detecting the triggering operation on the interactive identifier includes:
in response to detecting the triggering operation on the interactive identifier, determining the target video clip associated with the preset keyword corresponding to the interactive identifier based on an association relationship between preset keywords and video clips;
and determining a playing time point based on the starting time point of the target video clip, and jumping the video playing page to the playing time point for playing.
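This lookup-and-jump step can be sketched as follows. The association table, clip data, and function names are hypothetical illustrations; the embodiments do not prescribe any concrete data structure:

```python
from dataclasses import dataclass

@dataclass
class VideoClip:
    video_id: str   # which video the clip belongs to
    start_s: float  # starting time point of the clip, in seconds
    end_s: float

# Association relationship between preset keywords and video clips,
# as received from the server and stored on the client (contents illustrative).
ASSOCIATIONS = {
    "palace scene": VideoClip(video_id="ep01", start_s=75.0, end_s=210.0),
    "fight scene": VideoClip(video_id="ep03", start_s=1320.0, end_s=1500.0),
}

def resolve_jump(keyword: str):
    """For a tapped interactive identifier, look up the associated clip and
    derive the playing time point from the clip's starting time point."""
    clip = ASSOCIATIONS[keyword]
    return clip.video_id, clip.start_s
```

A real client would then seek the player (or load the other video) to the returned time point.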
In one or more embodiments of the present disclosure, the video positioning playing method further includes:
receiving the association relationship between the preset keywords and the video clips sent by the server;
and storing the association relationship.
In one or more embodiments of the present disclosure, before detecting the triggering operation on the interactive identifier in the text display area of the video playing page, the method further includes:
acquiring the text information posted by a user in the text display area of the video playing page;
in response to detecting a preset keyword in the text information, rendering the preset keyword in a preset rendering mode to obtain the interactive identifier corresponding to the preset keyword;
and displaying the interactive identifier in the text display area.
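Detecting preset keywords in posted text and rendering them as interactive identifiers can be sketched like this; the `[[...]]` wrapper merely stands in for the highlighted, tappable rendering, and the keyword list is illustrative:

```python
# Illustrative preset keywords; in the embodiments they come from the server.
PRESET_KEYWORDS = ["palace scene", "fight scene"]

def render_identifiers(text: str) -> str:
    """Wrap each detected preset keyword so the UI layer can render it
    as a tappable interactive identifier."""
    for kw in PRESET_KEYWORDS:
        text = text.replace(kw, f"[[{kw}]]")
    return text
```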
In one or more embodiments of the present disclosure, jumping the video playing page to the target video clip for playing in response to detecting the triggering operation on the interactive identifier includes:
in response to detecting the triggering operation on the interactive identifier, jumping the video playing page to the target video clip of a first video for playing,
or jumping the video playing page from the first video to the target video clip of a second video, where the second video is different from the first video.
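The same association lookup serves both cases; only the playback action differs, as this hypothetical sketch shows (the returned command strings are purely illustrative):

```python
def plan_jump(current_video_id: str, target_video_id: str, start_s: float) -> str:
    """Decide whether the jump stays within the first video or crosses
    to a second video."""
    if target_video_id == current_video_id:
        return f"seek:{start_s}"                # same-video jump
    return f"load:{target_video_id}@{start_s}"  # cross-video jump
```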
In a second aspect, one or more embodiments of the present disclosure provide a video positioning playing method, applied to a server, where the method includes:
generating preset keywords corresponding to each video clip based on the video content of each video clip;
establishing an association relationship between the preset keywords and the video clips based on the preset keywords of each video clip;
and sending the association relationship to a client, so that the client determines, based on the association relationship, the target video clip associated with the preset keyword corresponding to the triggered interactive identifier.
In one or more embodiments of the present disclosure, generating the preset keywords corresponding to each video clip based on the video content of each video clip includes:
acquiring each video clip;
and determining the preset keyword corresponding to each video clip according to the bullet-screen information and/or highlight description information corresponding to each video clip.
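As a toy stand-in for this server-side step, a keyword could be derived from each clip's bullet-screen comments by simple term frequency and then mapped back to the clip; real keyword generation would be far richer, and every name below is hypothetical:

```python
from collections import Counter

def keyword_for_clip(danmaku_lines):
    """Pick the most frequent term in a clip's bullet-screen comments
    as its preset keyword (a deliberately crude stand-in)."""
    terms = Counter(word for line in danmaku_lines for word in line.split())
    return terms.most_common(1)[0][0]

def build_associations(clips):
    """clips maps clip_id -> bullet-screen lines; the result maps each
    generated preset keyword -> clip_id, i.e. the association relationship
    that would be sent down to clients."""
    return {keyword_for_clip(lines): clip_id for clip_id, lines in clips.items()}
```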
In a third aspect, one or more embodiments of the present disclosure provide a video positioning playing device, applied to a client, where the device includes:
an interaction detection module configured to detect a triggering operation on an interactive identifier in a text display area of a video playing page; the interactive identifier is an identifier generated based on a preset keyword in the text information posted in the text display area, where the preset keyword is text describing video content;
and a video jump module configured to jump the video playing page to a target video clip for playing in response to detecting the triggering operation on the interactive identifier; the target video clip is the video clip associated with the preset keyword corresponding to the interactive identifier.
In one or more embodiments of the present specification, the video jump module is configured to:
in response to detecting the triggering operation on the interactive identifier, determine the target video clip associated with the preset keyword corresponding to the interactive identifier based on an association relationship between preset keywords and video clips;
and determine a playing time point based on the starting time point of the target video clip, and jump the video playing page to the playing time point for playing.
In one or more embodiments of the present disclosure, the video positioning playing device further includes:
a receiving module configured to receive the association relationship between the preset keywords and the video clips sent by the server;
and a storage module configured to store the association relationship.
In one or more embodiments of the present disclosure, the video positioning playing device further includes:
an information acquisition module configured to acquire the text information posted by a user in the text display area of the video playing page;
a rendering module configured to, in response to detecting a preset keyword in the text information, render the preset keyword in a preset rendering mode to obtain the interactive identifier corresponding to the preset keyword;
and a display module configured to display the interactive identifier in the text display area.
In one or more embodiments of the present description, the video jump module is configured to:
in response to detecting the triggering operation on the interactive identifier, jump the video playing page to the target video clip of a first video for playing,
or jump the video playing page from the first video to the target video clip of a second video, where the second video is different from the first video.
In a fourth aspect, one or more embodiments of the present disclosure provide a video positioning playing device, applied to a server, where the device includes:
a keyword generation module configured to generate preset keywords corresponding to each video clip based on the video content of each video clip;
a relationship establishing module configured to establish an association relationship between the preset keywords and the video clips based on the preset keywords of each video clip;
and a sending module configured to send the association relationship to the client, so that the client determines, based on the association relationship, the target video clip associated with the preset keyword corresponding to the triggered interactive identifier.
In one or more embodiments of the present specification, the keyword generation module is configured to:
acquire each video clip;
and determine the preset keyword corresponding to each video clip according to the bullet-screen information and/or highlight description information corresponding to each video clip.
In a fifth aspect, one or more embodiments of the present specification provide an electronic device, including:
a processor; and
a memory storing computer instructions for causing the processor to perform the method of any implementation of the first or second aspects.
In a sixth aspect, one or more embodiments of the present specification provide a storage medium storing computer instructions for causing a computer to perform the method according to any embodiment of the first or second aspects.
The video positioning playing method of one or more embodiments of the present disclosure includes detecting a triggering operation on an interactive identifier in a text display area of a video playing page and, in response to detecting the triggering operation, jumping the video playing page to a target video clip for playing. In the embodiments of this specification, the preset keywords are determined based on the video content of the video clips, so a user can intuitively judge from the text of a preset keyword whether the corresponding video clip is of interest, thereby accurately locating and jumping to clips of interest. Jumps can be made both within the same video and across videos, which offers greater flexibility. In addition, because the preset keywords are configured and managed, the interactive identifiers are well controlled, and users are not disturbed by irrelevant interactive identifiers.
Drawings
Fig. 1 is a schematic view of a scenario of the related art.
Fig. 2 is a schematic structural diagram of a video playback system according to an exemplary embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a video playback page in an example embodiment of the present description.
Fig. 4 is a flowchart of a video positioning playing method in an exemplary embodiment of the present specification.
Fig. 5 is a schematic view of a video positioning playing method according to an exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart of a video positioning playing method in an exemplary embodiment of the present specification.
Fig. 7 is a flowchart of a video positioning playing method in an exemplary embodiment of the present specification.
Fig. 8 is a schematic view of a video positioning playing method according to an exemplary embodiment of the present disclosure.
Fig. 9 is a schematic view of a video positioning playing method according to an exemplary embodiment of the present disclosure.
Fig. 10 is a flowchart of a video positioning playback method in an exemplary embodiment of the present specification.
Fig. 11 is a flowchart of a video positioning playback method in an exemplary embodiment of the present specification.
Fig. 12 is a flowchart of a video positioning playback method in an exemplary embodiment of the present specification.
Fig. 13 is a block diagram of a video positioning playback apparatus according to an exemplary embodiment of the present specification.
Fig. 14 is a block diagram of a video positioning playback apparatus according to an exemplary embodiment of the present specification.
Fig. 15 is a block diagram of an electronic device in an example embodiment of the present specification.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
In other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, the methods may include more or fewer steps than described in this specification. Furthermore, a single step described in this specification may be broken down into multiple steps in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
In addition, it should be noted that, user information (including but not limited to user equipment information, text information issued by a user, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in this specification are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of related data are required to comply with related laws and regulations and standards of related countries and regions, and are provided with corresponding operation entries for the user to select authorization or rejection.
Nowadays, mobile terminals have become one of the most important channels for video playing. Users can watch all kinds of online video anytime and anywhere through a mobile terminal, which greatly improves the convenience of video watching and enriches people's lives.
While watching a video, and especially when rewatching the same program multiple times, users have an objective need to jump quickly to the content they are interested in. For example, a user watching a three-hour movie may wish to skip a dull part of the plot and watch the scenes of interest directly. A user watching a multi-episode television series may wish to jump directly to a particular plot point and start watching there. Likewise, a user watching a variety show may wish to watch only the segments featuring a particular performer.
Under such demands, the user generally has to fast-forward or drag the progress bar to reach the content of interest; the operation is cumbersome, locating the video clip of interest is inefficient, and accurate positioning is difficult. Taking the television series "XX" as an example, a user who wants to skip ahead to the plot point where the female lead enters the palace, and start watching from there, has to drag the progress bar episode by episode: first to determine which episode contains that plot point, and then to find the specific time point within the episode. The whole process is tedious and inefficient.
In some related technologies, to achieve accurate positioning of and jumping to a video playing time point, users are allowed to post comments containing a time point in the video comment area, and other users can jump the playing video to the corresponding playing time by clicking the time point.
For example, as shown in fig. 1, any user may post a comment containing a time point in the video comment area. When the terminal detects a time point in a comment, it may render the time-point text in a highlighted, bolded, underlined, or otherwise prominent manner. In the example of fig. 1, a user posts the comment "1:15" in the comment area; the terminal detects that the comment content is a time point and renders it prominently.
When that user, or any other user, clicks the time point in the comment area, the currently playing video jumps to the playing time corresponding to the time point. In the example of fig. 1, the original video playing page is shown in fig. 1 (a), where the video has played to 00:30; when the user clicks the "1:15" shown in the comment area, the video playing page jumps to the time 01:15 shown in fig. 1 (b), achieving accurate video positioning and jumping.
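The related-art mechanism amounts to recognising a time-point string in a comment and converting it into a seek position; a minimal sketch (the helper name is hypothetical):

```python
import re
from typing import Optional

def parse_time_point(comment: str) -> Optional[int]:
    """Return the seek position in seconds if the comment is an "m:ss"
    time point, or None so the comment is rendered as plain text."""
    m = re.fullmatch(r"(\d+):([0-5]\d)", comment.strip())
    if m is None:
        return None
    return int(m.group(1)) * 60 + int(m.group(2))
```

Clicking the rendered "1:15" would then seek the player to second 75.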
Although the above related technical solutions can achieve accurate positioning and jumping of video playing, they can only jump among different time points of the same video, based on the time points posted by users.
On the one hand, a bare time point is not intuitive for the user, who cannot tell from it alone whether it corresponds to a video clip of interest. For example, when a user watching a movie wants to jump directly to a fight scene, the time points posted by other users in the comment area do not tell the user which one corresponds to the fight scene, so accurate positioning and jumping cannot be achieved.
On the other hand, such a solution can only jump among time points within the same video and cannot jump across videos. In the television series scenario of the preceding example, the user wants to jump directly to the plot point where the female lead enters the palace; however, the related solution can only jump within the episode currently being watched. While watching the first episode, the user cannot locate the episode and time point corresponding to that plot point, so cross-video jump playing cannot be achieved.
In view of these defects in the related technical solutions, the embodiments of this specification provide a video positioning playing method, apparatus, electronic device, and storage medium, which aim to accurately locate the video clips a user is interested in, enable fast jump playing, meet users' viewing demands, and improve the viewing experience.
Fig. 2 shows an architecture diagram of a video playing system in some embodiments of the present disclosure, and an application scenario of the method and apparatus of the present disclosure is described below with reference to fig. 2.
As shown in fig. 2, in some embodiments, the video playing system of this specification includes a client 100 and a server 200, where the client 100 and the server 200 establish a wireless communication connection through a network 300.
The client 100 has a video playing function, which may be any device type capable of playing video, such as a smart phone, a tablet computer, a notebook computer, an electronic display screen, etc., which is not limited in this specification. The server 200 is a provider of video data, and may be any electronic device capable of implementing video data distribution, such as a server, a cloud server, a server cluster, and the like, which is not limited in this specification.
When a user views a video by using the client 100, the client 100 may send a data request to the server 200 through the network 300, and after receiving the data request, the server 200 may send corresponding video data to the client 100, and the client 100 may play the video according to the received video data. For the principle of video playing of the client 100, those skilled in the art can understand and fully implement the principles with reference to the related art, and will not be described in detail in this specification.
When the client 100 plays the video, the user may operate the video playing page to implement corresponding functions, such as video playing, pausing, comment making, barrage making, video definition adjusting, video playing speed adjusting, and the like. A video playback page of the client 100 in some embodiments of the present description is shown in fig. 3, and is described below in conjunction with fig. 3.
In the example of fig. 3, the client 100 is exemplified by a smart phone, and the video playing page of the client 100 includes a video playing area 110 and a comment area 120. The video playing area 110 refers to an area for playing video pictures, the comment area 120 refers to an area provided for users to communicate and evaluate, and each user can post related comments for the video in the comment area 120.
In some embodiments, the video playing area 110 further includes a bullet-screen area. Bullet screens are comment subtitles that pop up while a video is being watched; compared with conventional comments, they are overlaid on the video and can be posted in step with the real-time plot, making them more interactive and entertaining. For example, in the example of fig. 3, a bullet-screen area is arranged on the upper layer of the video playing area 110, and users may post real-time bullet screens to communicate and interact about the plot.
On the basis of the above embodiment example, a video positioning playing method of the present specification will be described below with reference to the embodiment of fig. 4, and the video positioning playing method may be applied to the client 100 and executed by the client 100.
As shown in fig. 4, in some embodiments, the video positioning playing method illustrated in the present specification includes:
s410, detecting triggering operation of interaction identification in a text display area of the video playing page.
S420, in response to detection of triggering operation for the interactive identification, the video playing page is jumped to the target video clip to be played.
In some embodiments of the present disclosure, the text presentation area of the video playing page may include the comment area 120 and/or the bullet screen area illustrated in fig. 3. For instance, in one example, the text presentation area of the present description includes a comment area 120 of the video play page. For another example, the text presentation area of the present specification includes a bullet screen area of a video playback page. For yet another example, the text presentation area of the present description includes a comment area 120 and a bullet screen area of a video play page.
In the embodiment of the present disclosure, interactive identifiers are presented in the text display area of the video playing page, where an interactive identifier is an identifier generated based on a preset keyword contained in text information posted by a user in the text display area. For example, in the scenario shown in fig. 3, each user may post text information in the comment area; the text information may be posted after being freely edited by the user, so that other terminal users can see its content in the comment area. Of course, it will be understood by those skilled in the art that the text display area may also be the bullet screen area: each user may freely edit text information and post it in the bullet screen area, so that other users can see its content there.
For convenience of explanation, the text display area is hereinafter described by taking the comment area of the video playing page as an example; it is understood, however, that the text display area is not limited to the comment area, and this is not repeated below.
When the user freely issues text information in the text display area, the text information may contain preset keywords. The preset keywords refer to keywords defined in advance according to video contents of video clips, that is, the preset keywords may describe video contents of a certain video clip.
For example, for a film or television drama, viewers and producers often identify interesting or widely shared "famous scene" segments within the complete video according to factors such as plot development, special shooting effects, and actor interactions. These "famous scene" segments are not tied to specific time points but are strongly related to the plot content, such as comedic segments, plot-advancing nodes, and classic-line segments.
In this embodiment, preset keywords are set corresponding to the video content of video clips, and a preset keyword may serve as a descriptive phrase for that content. For example, taking the aforementioned television drama "XX" as an example, many users are not interested in the early plot in which the female lead is still outside the palace and prefer to start watching from the point where she returns. Therefore, for the video clip "after the female lead returns to the palace", the corresponding preset keyword is defined as a text phrase that can describe the video content, such as "female main palace".
For example, in another example, taking a loop-type film or drama as an example, the plot may advance through continuous cyclic repetition, so the plot of each cycle may be defined as a video segment, and the preset keyword corresponding to each video segment is defined as text that can describe the video content, such as "the Nth cycle".
The preset keywords may be determined in advance by collecting user comments, bullet screens, and viewpoint description information from producers or critics. For example, for some "famous scene" segments, users are enthusiastic about discussing and sharing the plot, so comment information or bullet screen information about these segments can be collected from various channels. A producer or critic may also publish viewpoint description information about the "famous scene" segments in order to attract users to watch the video, so this viewpoint description information can likewise be acquired. Based on the comments, bullet screens, and viewpoint descriptions for a given video segment, the preset keyword corresponding to that segment can be determined.
The following embodiments of the present specification describe a process of determining preset keywords corresponding to video clips, which are not described in detail herein.
Referring to fig. 3, when a user posts comments in the comment area 120 based on predetermined preset keywords, the text information of the posting may include the preset keywords. Under the condition that the text information issued by the user contains the preset keywords, the client can render and display the preset keywords in the text information so as to distinguish the preset keywords from other text information, and therefore interaction identifiers corresponding to the preset keywords are obtained.
For example, in some embodiments, when a user posts text information in the text display area, the client 100 may perform text detection on the posted text information to determine whether it contains a preset keyword. If the posted text information does not contain a preset keyword, it can be displayed normally in the text display area. If the posted text information does contain a preset keyword, the preset keyword is rendered and displayed according to a preset rendering mode so that it has a display effect different from that of the other text, thereby producing the interactive identifier corresponding to the preset keyword.
For example, in the example scenario of fig. 5, the video played on the video playing page takes the aforementioned drama "XX" as an example; "XX" includes a video clip of "after the female lead returns to the palace", and the preset keyword corresponding to this video clip is "female main palace".
As shown in scene (a) of fig. 5, the user inputs the text "I've watched it so many times, and each time I start from the female main palace!" in the comment area 120 of the video playing page and posts it. The client 100 can perform text detection on the posted text information and detect the preset keyword "female main palace" in it. After the preset keyword "female main palace" is detected, it is rendered and displayed to obtain the corresponding interactive identifier; the rendering mode may be, for example, underlining, bolding, changing color, or changing font. For example, as shown in fig. 5 (b), the font and format of the preset keyword "female main palace" differ from those of the other text, and the rendered keyword serves as the interactive identifier 400 corresponding to the preset keyword.
Of course, those skilled in the art will appreciate that fig. 5 is only an exemplary embodiment of the present disclosure; in other implementations, the interactive identifier may be presented in other manners, such as a different font color or an added jump flag, which is not limited in this disclosure. The text detection and interactive-identifier rendering processes performed on the text information are described in the following embodiments.
After the interactive identifier corresponding to the preset keyword is displayed in the text display area, all users can see the text information and the interactive identifier of the preset keyword included in the text information through the text display area, so that each user can trigger the interactive identifier.
In the embodiment of the present specification, the triggering operation refers to the user's interaction with the interactive identifier, and it may be any operation mode suitable for implementation. For example, in one example, the triggering operation is a single click on the interactive identifier; in another example, it is a double click on the interactive identifier; in yet another example, it is a long press on the interactive identifier; this description is not limiting.
When a triggering operation on the interactive identifier is detected, it indicates that the user expects to jump directly to the video segment corresponding to the interactive identifier. Therefore, the video currently playing on the video playing page is jumped to the video segment associated with the preset keyword corresponding to the interactive identifier, that is, the target video segment.
In some embodiments, an association relationship between each preset keyword and the video clip may be preset, so that, in a case that a triggering operation of the interaction identifier is detected, a target video clip associated with the preset keyword corresponding to the interaction identifier may be determined based on the association relationship. For example, in the example scenario of fig. 5, the user clicks the interactive identifier of "female main palace", and the client detects the triggering operation of the user for the interactive identifier, so that the video played in course can be directly jumped to the video segment corresponding to the preset keyword "female main palace", and the video segment is the target video segment.
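This lookup can be sketched as follows. The mapping contents, keyword strings, and function name below are illustrative assumptions, not the specification's actual data or API:

```python
# Illustrative association table: preset keyword -> associated video clip.
# The keywords and clip records are hypothetical examples.
ASSOCIATION = {
    "female main palace": {"clip_id": "clip-2", "start_s": 1355},
    "fifth cycle": {"clip_id": "clip-7", "start_s": 4465},
}

def resolve_target_clip(triggered_keyword):
    """Return the target video clip for a triggered interactive identifier,
    or None if the keyword has no configured association."""
    return ASSOCIATION.get(triggered_keyword)
```

On a click event, the client would read the identifier's keyword and, if `resolve_target_clip` returns a clip, jump playback to it; an unknown keyword simply yields no jump.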
In the embodiment of the specification, the preset keywords are determined based on the video content of the video clips, so that the content of the video clips can be intuitively described by the preset keywords, a user can quickly determine whether the video clips are interested based on the text content of the preset keywords, and accurate positioning and video skipping of the interested clips are realized based on the interactive identification of the preset keywords.
In addition, in the embodiment of the present specification, the method is applicable to fast-skip of not only different video clips of the same video but also different video clips of different videos. It can be appreciated that, since the preset keywords describe the video content of the video clip, the video clip need not be limited to the video clip in the same video, but may be a video clip of a different video.
For example, still taking the television drama "XX" as an example, it includes a plurality of episodes, each of which is a separate video. For the series as a whole, the "famous scene" segments may be located in videos of different episodes. In this embodiment of the present disclosure, a user may locate and jump to a plot across videos based on the video content described by a preset keyword; for example, by clicking the interactive identifier "female main palace", the user may jump from the currently played first-episode video to the playing time point of the corresponding plot in another episode's video.
In addition, the video clips, the preset keywords, and the association relationship between them can be configured by an authority (such as the video platform side), so that users can be guided to jump to "famous scene" clips in an accurate and controlled manner. For example, in the related technical solution illustrated in fig. 1, any user may post time points in the comment area; the video clips corresponding to those time points may not interest other users and are likely to be time points posted at random by users trying out the new function, causing trouble to other users. In some embodiments of the present disclosure, although users may freely post comment or bullet screen text, the preset keywords are curated by the authority, so corresponding interactive identifiers are generated only for preset keywords in the text. Other users can then see intuitively whether the video clip associated with a preset keyword interests them, allowing accurate positioning and jumping to clips of interest with high controllability.
As can be seen from the foregoing, in the embodiment of the present disclosure, the preset keyword is determined based on the video content of the video segment, and the user can intuitively determine whether the video segment is of interest based on the text content of the preset keyword, so as to realize accurate positioning and skip play of the segment of interest. And the same video and cross video positioning and skip can be realized, and the flexibility is higher. In addition, through configuration and management of preset keywords, the controllability of the interactive identification is better, and interference of irrelevant interactive identifications to users is avoided.
Referring to fig. 2, in some embodiments, the server 200 may configure corresponding preset keywords for different video clips in advance, and establish an association relationship between the video clips and the preset keywords. In the association relationship, each video clip has respective attribute information, such as a play request address, a start time point, an end time point, and the like, and meanwhile, each video clip is preconfigured with a corresponding preset keyword, where the preset keyword is used to describe video content of the video clip.
The process of configuring the video clip, the preset keyword, and the association relationship between the two in the server 200 will be described in the following embodiments related to the server 200, which will not be described in detail herein.
For the client 100, it may receive, through the network 300, the association relationship between preset keywords and video clips sent by the server 200; that is, after the server 200 configures the association relationship, it may send the relationship to the client 100. After receiving the association relationship, the client 100 may store it. Of course, the client 100 may also update, modify, or delete the stored association relationship based on instructions issued by the server 200, which will not be described in detail in this specification.
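A sketch of such a client-side store, assuming the server pushes a full table initially and incremental update/delete instructions later (the class and method names are hypothetical, not the specification's API):

```python
class AssociationStore:
    """Client-side cache of the keyword-to-clip association relationship."""

    def __init__(self):
        self._table = {}

    def replace_all(self, table):
        # Full table pushed by the server after (re)configuration.
        self._table = dict(table)

    def upsert(self, keyword, clip):
        # Server instruction: add or modify one association.
        self._table[keyword] = clip

    def delete(self, keyword):
        # Server instruction: remove one association.
        self._table.pop(keyword, None)

    def get(self, keyword):
        return self._table.get(keyword)
```

Text detection and identifier rendering would then consult this store rather than contacting the server on every posted comment.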
When a user issues text information in the text display area by using the client 100, the client 100 can perform text detection on the text information based on preset keywords in the association relationship, and render and display interactive identifications corresponding to the preset keywords, which is described below with reference to the embodiment of fig. 6.
As shown in fig. 6, in some embodiments, the video positioning playing method illustrated in the present specification further includes:
S610, acquiring text information posted by a user in a text display area of a video playing page.
In some embodiments, the text display area illustrated in the present specification takes the foregoing comment area 120 as an example, and text information that is posted in the text display area, that is, comment information.
Taking the example scenario of fig. 5 as an example, when the user watches the video using the client 100, comment information can be freely input in the comment area of the video playing page. For example, as shown in (a) of fig. 5, the user inputs "I've watched it so many times, and each time I start from the female main palace!" in the comment area and posts it; at this time, the client 100 can detect the comment information posted by the user in the comment area.
And S620, responding to the detection of the preset keywords in the text information, and rendering the preset keywords based on a preset rendering mode to obtain interaction identifiers corresponding to the preset keywords.
In combination with the foregoing, the client 100 stores the association relationship issued by the server 200, and the association relationship includes each preset keyword. Therefore, after obtaining text information posted by the user, the client 100 may check the text against the preset keywords included in the association relationship to determine whether the text contains any of them. The text detection algorithm may be, for example, a general string matching algorithm or the KMP (Knuth-Morris-Pratt) matching algorithm, which will not be described in detail in this specification.
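For short keyword lists, plain substring search already suffices; a minimal sketch (the function name and sample strings are illustrative assumptions):

```python
def find_preset_keywords(text, keywords):
    """Return (keyword, position) pairs for every preset keyword found in
    user-posted text; an empty list means no rendering is needed."""
    hits = []
    for kw in keywords:
        idx = text.find(kw)  # naive substring match; KMP etc. would also work
        if idx != -1:
            hits.append((kw, idx))
    return hits
```

The returned positions tell the renderer where each interactive identifier should replace plain text.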
Under the condition that the text information is detected to not comprise any preset keyword, the text information can be directly issued in a text display area without rendering display.
And under the condition that the text information comprises at least one preset keyword is detected, the preset keyword can be rendered based on a preset rendering mode, and an interaction identifier corresponding to the preset keyword is obtained. In this embodiment of the present disclosure, the preset rendering manner may include one or more of the following:
1) Color rendering
And performing color rendering on the preset keywords to enable the font colors of the preset keywords to be different from the font colors of other text information. For example, in one example, preset keywords in text information may be in a red font, while other text may be in a black font.
2) Font rendering
And rendering the fonts of the preset keywords so that the fonts of the preset keywords are different from the fonts of other text information. For example, in one example, the preset keywords in the text information may be in "bold" while other text is in "Song Ti".
3) Format rendering
And performing format rendering on the preset keywords so that the font format of the preset keywords is different from that of other text information. For example, in one example, the preset keywords in the text information are rendered in font formats such as "bold", "underline", and the other text is not required to be rendered in font formats.
Of course, it will be understood by those skilled in the art that the preset rendering method is not limited to the above example, and may be any rendering method suitable for implementation, so long as the preset keywords in the text information can be distinguished from other texts, which will not be repeated in the present specification.
And after the preset keywords in the text information are rendered in a preset rendering mode, the interactive identifications corresponding to the preset keywords can be obtained.
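The rendering step can be sketched as wrapping the detected keyword in markup that the page styles and makes clickable. The tag and class name below are assumptions for illustration; any of the color, font, or format modes above could be applied through the style:

```python
def render_interactive_identifier(text, keyword,
                                  open_tag='<span class="interactive-id">',
                                  close_tag='</span>'):
    """Wrap every occurrence of a detected preset keyword in markup so it is
    displayed differently from the other text and can receive click events."""
    return text.replace(keyword, open_tag + keyword + close_tag)
```

Text without any preset keyword passes through unchanged, matching the "display normally" branch above.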
And S630, displaying the interaction identification in the text display area.
In the embodiment of the present disclosure, after rendering a preset keyword to obtain an interaction identifier, the interaction identifier may be displayed in a text display area.
For example, in the example scenario of fig. 5, as shown in (b) of fig. 5, when text detection on the text information identifies the preset keyword "female main palace", the preset keyword may be rendered by, for example, bolding, underlining, or changing font or color, so as to obtain the corresponding interactive identifier. The interactive identifier is then displayed in the text display area together with the other text information.
As shown in fig. 7, in some embodiments, the video positioning playing method illustrated in the present specification, in response to detecting a triggering operation for the interactive identifier, jumps a video playing page to a process of playing a target video clip, including:
S421, in response to detection of triggering operation for the interaction identifier, determining a target video segment associated with the preset keyword corresponding to the interaction identifier based on the association relationship between the preset keyword and the video segment.
S422, determining a playing time point based on the starting time point of the target video clip, and jumping the video playing page to the playing time point for playing.
In combination with the foregoing, the association relationship sent by the server 200 refers to an association relationship between a preset keyword and a video clip, and the client 100 may store the association relationship after receiving the association relationship.
For example, in one example, the association relationship between the preset keyword and the video clip may be as shown in the following table one:
list one
Video clip Preset keywords Attribute information
Fragment 1 Keyword A {address 1,ST 1,ET 1}
Fragment 2 Keyword B {address 2,ST 2,ET 2}
Fragment 3 Keyword C {address 3,ST 3,ET 3}
Fragment 4 Keyword D {address 4,ST 4,ET 4}
…… …… ……
As shown in table one, each video clip has respective attribute information, in which address represents a play request address, ST represents a start time point, and ET represents an end time point. Meanwhile, each video clip is preconfigured with an associated preset keyword.
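The attribute tuples {address, ST, ET} of Table one can be modeled as simple records; a sketch with made-up values (the field names mirror the table, not a real schema):

```python
from dataclasses import dataclass

@dataclass
class ClipAttributes:
    address: str  # play request address
    st: int       # ST: start time point, in seconds
    et: int       # ET: end time point, in seconds

# Illustrative instance of Table one (all values are hypothetical).
ASSOCIATION = {
    "Keyword A": ClipAttributes("address 1", 120, 300),
    "Keyword B": ClipAttributes("address 2", 610, 745),
}
```

Each record carries everything the client needs for a jump: where to request the video and where the clip starts and ends.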
In this embodiment of the present disclosure, after detecting a triggering operation for an interaction identifier, the client 100 may determine, based on the triggering operation, a preset keyword corresponding to the interaction identifier, and then determine, based on the association relationship of the above example, a video segment associated with the preset keyword, that is, a target video segment described in this disclosure.
Taking the foregoing example scenario of fig. 5 as an example, as shown in (b) of fig. 5, the user clicks the interactive identifier "female main palace" in the text display area, and the client 100 detects this triggering operation and determines that the preset keyword corresponding to the interactive identifier is "female main palace". Then, in combination with the association relationship illustrated in Table one, for example, the target video clip associated with the preset keyword "female main palace" can be determined.
After the target video clip is determined, the current video playing page needs to be jumped to the video playing page corresponding to the target video clip. First, the client 100 may determine the play time point according to the start time point included in the target video clip, and the start time point ST of the target video clip may be obtained according to attribute information of the target video clip, for example, the start time point ST is included in the attribute information shown in table one.
In some embodiments, when the target video clip is positioned for playback, the playing time point may be identical to the start time point ST; that is, the start time point ST of the target video clip is determined as the playing time point. In other embodiments, the playing time point may be moved a short period ahead of the start time point ST to give the user some transition time and improve the viewing experience. For example, in one example, a time point 1-5 seconds before the start time point may be determined as the playing time point: if the start time point of the target video clip is 22:35, the determined playing time point may be 22:33.
After determining the playing time point, the client 100 may generate a corresponding video play request from the play request address of the target video clip and the playing time point, and then send the video play request to the server 200, so that the server 200 delivers the video data of the target video clip to the client 100, and the client 100 thereby implements skip playing of the target video clip.
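The two steps above can be sketched together: offsetting the start time point and assembling the play request. The request shape and function names are assumptions; only the "a few seconds before ST, never before the video's beginning" rule comes from the text:

```python
def play_time_point(start_s, lead_in_s=2):
    """Seek slightly before the clip's start time point ST to give the
    viewer a short transition; never seek before the video's beginning."""
    return max(0, start_s - lead_in_s)

def build_play_request(address, start_s):
    # Hypothetical request shape combining the clip's play request
    # address with the computed playing time point.
    return {"address": address, "seek_to": play_time_point(start_s)}

# Clip starting at 22:35 (1355 s): with a 2-second lead-in, the request
# seeks to 22:33 (1353 s), matching the example above.
request = build_play_request("address 1", 22 * 60 + 35)
```

The client would serialize this request and send it to the server, which responds with the video data at the requested position.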
As can be seen from the foregoing, in the embodiment of the present disclosure, the preset keyword is determined based on the video content of the video segment, and the user can intuitively determine whether the video segment is of interest based on the text content of the preset keyword, so as to realize accurate positioning and skip play of the segment of interest. And the same video and cross video positioning and skip can be realized, and the flexibility is higher. In addition, through configuration and management of preset keywords, the controllability of the interactive identification is better, and interference of irrelevant interactive identifications to users is avoided.
It is worth noting that the video clips in the present specification may be multiple video clips of the same video or video clips of different videos, so jumping to a target video clip based on a preset keyword can realize both same-video jumps and cross-video jumps.
For example, in some embodiments, referring to fig. 8, in the video playing page shown in fig. 8 (a), the user is watching a loop-type film and clicks the interactive identifier "fifth cycle" in the text display area. The client detects the triggering operation on the "fifth cycle" interactive identifier and, based on the foregoing method, determines the target video clip corresponding to the preset keyword "fifth cycle", so the current playing page jumps directly to the playing time point corresponding to the target video clip. For example, as shown in fig. 8 (b), the playing page jumps from the original 10:30 to 74:25, and the user is directly positioned at the plot of the fifth cycle for viewing.
For example, in other embodiments, referring to fig. 9, in the video playing page shown in fig. 9 (a), the user is watching the first episode of a television series, and this first-episode video is the first video described in this specification. At this time, the user clicks the interactive identifier "female main palace" in the text display area, and the client detects the triggering operation on this interactive identifier, so the target video clip corresponding to the preset keyword "female main palace" can be determined based on the foregoing method.
It should be noted that the target video clip corresponding to the preset keyword "female main palace" is not a video clip in the first video but a video clip in a second video, the second video being different from the first video. Therefore, after the target video clip is determined, the current playing page can be jumped to the playing time point corresponding to the target video clip in the second video. For example, as shown in fig. 9 (b), the playing page jumps from 6:30 of the first-episode video to 34:25 of the sixteenth-episode video, i.e. the second video described in this specification, and the user is directly positioned at the plot of the female lead's return to the palace for viewing.
According to the above example, in the embodiment of the present disclosure, video positioning and skip playing are implemented based on the preset keywords, and a user can intuitively see whether the video is an interesting segment according to the preset keywords, so as to determine whether to skip playing, and compared with the conventional time point skip, the video watching experience of the user is improved. Moreover, not only can the video clip skip in the same video be realized, but also the video clip skip crossing the video can be realized, and the flexibility is improved.
In some embodiments, the video positioning playing method illustrated in the present disclosure may be applied to the server 200, and executed by the server 200, and is described below with reference to fig. 10.
As shown in fig. 10, in some embodiments, the video positioning playing method illustrated in the present specification includes:
S1010, generating preset keywords corresponding to each video clip based on the video content of each video clip.
S1020, establishing association relation between the preset keywords and the video clips based on the preset keywords of each video clip.
And S1030, the association relation is sent to the client so that the client can determine the target video clip associated with the preset keyword corresponding to the triggered interaction identifier based on the association relation.
In the embodiment of the present disclosure, each video clip may be cut by the producer or the platform according to the plot, or may be selected in combination with online discussions and user comments. Video clips can be selected along several dimensions:
1) Determining video clips from scenario advances
For example, for a loop-type film or drama, the plot includes a plurality of cycles, so the complete video can be divided into a plurality of video clips according to the time node of each cycle; that is, each video clip corresponds to one cycle of the plot.
For example, for a suspense film or drama, the plot includes early setup, mid-term clue discovery, and late-stage revelation of the mystery, so the complete video can be divided into a plurality of video clips based on the advancing rhythm of the plot.
2) Determining video clips based on starring actor's participation
For example, a starring actor may not appear in the early stage of a film but appears in its middle stage, so the film can be divided into a plurality of video clips based on whether the actor appears.
For example, the male and female leads of a film or drama may have no interaction in the early stage of the plot and only begin to interact in its middle stage, so the video can be divided into a plurality of video clips based on the interaction scenes of the male and female leads.
3) Determining video clips from classic lines
For example, some films and dramas contain many well-known classic-line scenes, so these scenes can be extracted from the video as video clips.
While the foregoing has been presented with respect to determining video clips, it will be appreciated by those skilled in the art that the discovery and selection of video clips is not limited to the examples described above, and that any other suitable implementation may be used, and this is not a limitation of this disclosure. In addition, it can be understood that even for the same movie, video clips can be selected from the above dimensions respectively, and different video clips can have overlapping video contents, which will not be described in detail in this disclosure.
In this embodiment of the present disclosure, after the video clips are divided, attribute information corresponding to the video clips may be determined at the same time, where the attribute information may include, for example, a play request address, a start time point, an end time point, etc., and those skilled in the art may understand that the foregoing table is an example, and the description is omitted herein.
After each video clip is determined through the above-described process, a corresponding preset keyword needs to be generated based on the video content of the video clip. In some embodiments, the preset keywords may be directly given based on human experience. In other embodiments, the preset keywords may be determined by collecting bullet screen information and/or point of view description information, as described below in connection with fig. 11.
As shown in fig. 11, in some embodiments, the video positioning playing method illustrated in the present specification includes a process of generating preset keywords based on the video content of the video clips, as follows:
S1011, obtaining each video clip.
S1012, determining the preset keywords corresponding to each video clip according to the bullet screen information and/or viewpoint description information corresponding to each video clip.
It can be appreciated that for a certain video clip, especially a "famous scene" clip with higher user attention, users may prefer to send bullet screen information while watching the video clip, and may continue to participate in comments, analysis, etc. on various network platforms after watching, so that corresponding bullet screen information and comment information can be obtained for the video clip.
Meanwhile, in order to attract users to watch videos, the video producer also provides corresponding viewpoint description information for the "famous scene" segment, for example, text information describing the video content such as "the male and female leads meet again after 20 years apart", so that corresponding viewpoint description information can be obtained for the video segment.
In this embodiment of the present disclosure, the server 200 may obtain the bullet screen information, comment information, and viewpoint description information corresponding to each video clip through corresponding data acquisition means. Based on the obtained data, the server 200 can generate the preset keywords corresponding to each video clip through techniques such as data cleaning, filtering, and keyword extraction. For the process of generating the preset keywords by the server 200, those skilled in the art can understand and fully implement it with reference to the related art, which is not repeated in this disclosure.
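The disclosure leaves the extraction pipeline to the related art. As one hedged sketch only, a naive frequency-based extraction over cleaned bullet-screen strings might look like the following; the stop-word list, tokenization, and `top_n` threshold are illustrative assumptions, not the patent's method.

```python
from collections import Counter

STOP_WORDS = {"the", "a", "is", "so", "and"}  # illustrative, not exhaustive

def extract_keywords(danmaku: list[str], top_n: int = 2) -> list[str]:
    """Clean and tokenize bullet-screen lines, drop stop words,
    and keep the most frequent tokens as candidate preset keywords."""
    counts: Counter = Counter()
    for line in danmaku:
        for token in line.lower().split():
            token = token.strip(".,!?")
            if token and token not in STOP_WORDS:
                counts[token] += 1
    return [word for word, _ in counts.most_common(top_n)]

danmaku = ["Reunion scene!", "the reunion again", "classic reunion", "so classic"]
print(extract_keywords(danmaku))  # ['reunion', 'classic']
```

A production pipeline would of course need language-appropriate segmentation and deduplication; the point here is only the clean → filter → count shape of the process described above.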
It should be noted that, the number of preset keywords corresponding to one video clip is not limited to one, and two or more preset keywords may be set for one video clip, and the principles thereof are identical, which is not described in detail in this disclosure.
After determining the preset keywords corresponding to each video clip, the server 200 may establish an association relationship between the preset keywords and the video clips. For example, in one example, the association relationship may be as shown in the foregoing table one, where each video clip corresponds to a preset keyword and attribute information associated therewith.
With reference to fig. 2, after obtaining the association relationship between the preset keyword and the video clip, the server 200 may send the association relationship to the client 100 through the network 300, and the client 100 may store the association relationship after receiving the association relationship, so as to implement the video positioning playing method described above based on the association relationship.
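As a hedged sketch of how the client might cache and query such an association relationship after receiving it, a simple keyword-to-attributes mapping suffices; the dictionary layout below is an assumption for illustration, not the patent's Table One.

```python
# Keyword -> clip attribute mapping, as the client might store it after
# receiving the association relationship from the server.
associations = {
    "reunion": {"play_url": "https://example.com/play/ep01",
                "start_s": 754.0, "end_s": 812.5},
    "finale":  {"play_url": "https://example.com/play/ep12",
                "start_s": 30.0, "end_s": 95.0},
}

def lookup_clip(keyword: str):
    """Resolve a preset keyword to its target clip, if one is configured."""
    return associations.get(keyword)

print(lookup_clip("reunion")["play_url"])
```

Storing the mapping client-side in this way is what lets the triggered interactive identifier be resolved locally, without a round trip to the server on each click.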
It should be noted that, in some embodiments, the server 200 may periodically update, modify, and delete the previously configured association relationship in combination with current trending topics, so as to obtain the latest association relationship and send it to the client 100. After receiving the latest association relationship, the client 100 may update, modify, or delete the stored association relationship accordingly.
In some embodiments, in order to increase the interest and interactivity for users watching the video, the server 200 may also match a corresponding Easter-egg animation effect to the preset keywords when establishing the above-mentioned association relationship. In this way, the client 100 can realize positioning and jump play of video clips based on the association relationship, while displaying the corresponding Easter-egg animation effect on the video playing page, thereby increasing the interest and interactivity for the user when watching the video. This will be understood and fully implemented by those skilled in the art, and the description is not repeated.
Fig. 12 is an interactive flowchart of a video positioning playing method according to some embodiments of the present disclosure, and is described below with reference to fig. 12.
S01, the server generates corresponding preset keywords based on the video content of each video clip.
S02, the server establishes an association relationship between the preset keywords and the video clips based on the preset keywords of each video clip.
S03, the server sends the association relation to the client.
S04, the client stores the association relation.
For the method procedures of S01 to S04, those skilled in the art may refer to the foregoing embodiment of fig. 10, and this will not be repeated in the present specification.
S05, the user publishes text information in the text display area of the client.
The user can freely publish text information in the text display area of the video playing page through the client. The video playing page may be as shown in fig. 3, and the user may freely post comments and bullet-screen messages in the comment area 120 or the bullet screen area.
S06, the client detects the text of the text information.
And S07, under the condition that the preset keywords are detected in the text information, rendering the preset keywords based on a preset rendering mode to obtain interactive identifications, and displaying the interactive identifications in a text display area.
For the method procedures of S06 to S07, those skilled in the art may refer to the foregoing embodiment of fig. 6, and this will not be repeated in the present specification.
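The "preset rendering mode" of S07 is not further specified here. One minimal sketch wraps each detected preset keyword in a clickable markup span; the `<kw>` tag format is a hypothetical choice for illustration, not the disclosure's rendering mode.

```python
def render_interactive(text: str, preset_keywords: list[str]) -> str:
    """S06/S07 sketch: detect preset keywords in published text and replace
    each occurrence with an interactive identifier, represented here as a
    hypothetical <kw>...</kw> markup span the UI could make clickable."""
    for kw in preset_keywords:
        text = text.replace(kw, f"<kw>{kw}</kw>")
    return text

comment = "That reunion scene made me cry"
print(render_interactive(comment, ["reunion"]))
# That <kw>reunion</kw> scene made me cry
```

A real client would render the span with the configured style (color, underline, etc.) and attach the trigger handler of S08 to it; only the detect-and-mark step is sketched here.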
And S08, triggering the interactive identification in the text display area by the user.
Any client user can see the interactive identifications in the text presentation area, so that the user can trigger the corresponding interactive identifications by, for example, a click operation.
S09, determining a target video clip associated with the preset keyword corresponding to the interaction identifier based on the association relation, and determining a playing time point based on the starting time point of the target video clip.
S10, jumping the video playing page to a playing time point of the target video clip for playing.
As for the method procedures of S09 to S10, those skilled in the art may refer to the foregoing embodiments of fig. 4 and 7, and this will not be repeated in the present specification.
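Putting S09 and S10 together, a hedged end-to-end sketch of the client-side jump might look as follows; the names and the use of the clip's start time as the play time point follow the description above, but the data layout is an assumption.

```python
# Association relationship as cached on the client (illustrative layout).
associations = {
    "reunion": {"video_id": "ep01", "start_s": 754.0},
}

def handle_trigger(keyword: str):
    """S09: resolve the target clip for the triggered interactive identifier.
    S10: return (target video, play time point), where the play time point
    is determined from the clip's start time point."""
    clip = associations.get(keyword)
    if clip is None:
        return None
    return clip["video_id"], clip["start_s"]

print(handle_trigger("reunion"))  # ('ep01', 754.0)
```

An unrecognized keyword simply yields no jump, which matches the controllability point below: only configured preset keywords ever become interactive.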
As can be seen from the foregoing, in the embodiment of the present disclosure, the preset keyword is determined based on the video content of the video clip, and the user can intuitively judge from the text content of the preset keyword whether the video clip is of interest, thereby realizing accurate positioning and jump play of the clip of interest. Moreover, both same-video and cross-video positioning and jumping can be realized, with higher flexibility. In addition, through configuration and management of the preset keywords, the interactive identifications are better controllable, avoiding interference of irrelevant interactive identifications with users.
In some embodiments, the present description provides a video positioning playback device that is applicable to the client 100.
As shown in fig. 13, in some embodiments, the video positioning playing device illustrated in the present specification includes:
the interaction detection module 10 is configured to detect a triggering operation of interaction identification in a text display area of a video playing page; the interactive identification is a corresponding identification generated based on preset keywords in text information issued in a text display area, wherein the preset keywords are texts used for describing video content;
a video skip module 20 configured to skip the video playing page to the target video clip for playing in response to detecting the trigger operation for the interactive identifier; the target video clip is a video clip associated with a preset keyword corresponding to the interaction identifier.
In one or more embodiments of the present description, video skip module 20 is configured to:
in response to detection of triggering operation for the interaction identifier, determining a target video segment associated with a preset keyword corresponding to the interaction identifier based on an association relationship between the preset keyword and the video segment;
And determining a playing time point based on the starting time point of the target video clip, and jumping the video playing page to the playing time point for playing.
In one or more embodiments of the present disclosure, the video positioning playback device further includes:
the receiving module is configured to receive the association relation between the preset keywords and the video clips, which are sent by the server side;
and the storage module is configured to store the association relation.
In one or more embodiments of the present disclosure, the video positioning playback device further includes:
the information acquisition module is configured to acquire text information released by a user in a text display area of the video playing page;
the rendering module is configured to respond to the detection of the preset keywords in the text information, and render the preset keywords based on a preset rendering mode to obtain interaction identifiers corresponding to the preset keywords;
and the display module is configured to display the interactive identification in the text display area.
In one or more embodiments of the present description, video skip module 20 is configured to:
in response to detecting the triggering operation for the interactive identification, jumping the video playing page to a target video segment of the first video for playing,
Or the video playing page is played by jumping the first video to a target video segment of a second video, wherein the second video is different from the first video.
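The same-video versus cross-video distinction can be sketched as a small branch in the jump handler; the player operations below are illustrative stand-ins for a real player API, not an API defined by this disclosure.

```python
def jump(current_video: str, target_video: str, start_s: float) -> str:
    """Seek within the current (first) video, or load the second video first
    when the target clip lives in a different video (cross-video jump)."""
    if target_video == current_video:
        return f"seek {current_video} to {start_s}s"
    return f"load {target_video}, then seek to {start_s}s"

print(jump("ep01", "ep01", 754.0))  # seek ep01 to 754.0s
print(jump("ep01", "ep12", 30.0))   # load ep12, then seek to 30.0s
```

The branch is the only difference between the two cases described above: the play time point is derived from the target clip's start time either way.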
In some embodiments, the present disclosure provides a video positioning playback device, which may be applied to the server 200.
As shown in fig. 14, in some embodiments, the video positioning playing device illustrated in the present specification includes:
the keyword generation module 30 is configured to generate preset keywords corresponding to each video clip based on the video content of each video clip;
the relationship establishing module 40 is configured to establish an association relationship between the preset keywords and the video clips based on the preset keywords of each video clip;
the sending module 50 is configured to send the association relationship to the client, so that the client determines the target video segment associated with the preset keyword corresponding to the triggered interaction identifier based on the association relationship.
In one or more embodiments of the present description, the keyword generation module 30 is configured to:
acquiring each video clip;
and determining preset keywords corresponding to each video clip according to the bullet screen information and/or viewpoint description information corresponding to each video clip.
In some embodiments, the present disclosure provides an electronic device, where the electronic device may include the client or the server, and the electronic device includes:
a processor; and
and a memory storing computer instructions for causing the processor to perform the method of any of the embodiments described above.
In some embodiments, the present description provides a storage medium having stored thereon computer instructions for causing a computer to perform the method of any of the embodiments described above.
Fig. 15 is a schematic structural diagram of an electronic device provided in an exemplary embodiment of the present specification. Referring to fig. 15, at the hardware level, the device includes a processor 702, an internal bus 704, a network interface 706, a memory 708, and a non-volatile storage 710, and may of course also include hardware required by other scenarios. One or more embodiments of the present description may be implemented in a software manner, for example by the processor 702 reading a corresponding computer program from the non-volatile storage 710 into the memory 708 and then running it. Of course, in addition to software implementations, one or more embodiments of the present disclosure do not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units, but may also be hardware or a logic device.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, Phase-Change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing describes certain embodiments of the present description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present specification to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present description. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
The foregoing description of preferred embodiments is merely intended to illustrate the embodiments of the present specification, and is not intended to limit them to the particular embodiments described.

Claims (10)

1. A video positioning playing method, which is applied to a client, the method comprising:
detecting triggering operation of interaction identification in a text display area aiming at a video playing page; the interactive identification is a corresponding identification generated based on a preset keyword in text information issued in the text display area, wherein the preset keyword is a text for describing video content;
in response to detecting triggering operation for the interactive identification, jumping the video playing page to a target video clip for playing; the target video clip is a video clip associated with a preset keyword corresponding to the interaction identifier.
2. The method of claim 1, wherein the jumping the video playback page to the target video clip for playback in response to detecting the trigger for the interactive identification comprises:
In response to detection of triggering operation for the interaction identifier, determining a target video segment associated with a preset keyword corresponding to the interaction identifier based on an association relationship between the preset keyword and the video segment;
and determining a playing time point based on the starting time point of the target video clip, and jumping the video playing page to the playing time point for playing.
3. The method according to claim 1 or 2, wherein prior to said detecting a triggering operation for interactive identification in a text presentation area of a video play page, the method further comprises:
acquiring text information released by a user in the text display area of the video playing page;
responding to the detection of the preset keywords in the text information, and rendering the preset keywords based on a preset rendering mode to obtain the interactive identifications corresponding to the preset keywords;
and displaying the interaction identification in the text display area.
4. The method according to claim 1 or 2, wherein the step of jumping the video playing page to a target video clip for playing in response to detecting the trigger operation for the interactive identification comprises:
In response to detecting a triggering operation for the interactive identification, jumping the video playing page to the target video segment of the first video for playing,
or, skipping the video playing page from a first video to the target video segment of a second video, wherein the second video is different from the first video.
5. The video positioning playing method is characterized by being applied to a server, and comprises the following steps:
generating preset keywords corresponding to each video clip based on the video content of each video clip;
establishing an association relationship between the preset keywords and the video clips based on the preset keywords of each video clip;
and sending the association relation to a client so that the client determines a target video fragment associated with a preset keyword corresponding to the triggered interaction identifier based on the association relation.
6. The method according to claim 5, wherein generating the preset keyword corresponding to each video clip based on the video content of each video clip comprises:
acquiring each video clip;
and determining preset keywords corresponding to each video clip according to the bullet screen information and/or the viewpoint description information corresponding to each video clip.
7. A video positioning playback device, for application to a client, the device comprising:
the interaction detection module is configured to detect triggering operation of interaction identification in a text display area of the video playing page; the interactive identification is a corresponding identification generated based on a preset keyword in text information issued in the text display area, wherein the preset keyword is a text for describing video content;
the video skipping module is configured to skip the video playing page to a target video clip for playing in response to detecting the triggering operation for the interaction identifier; the target video clip is a video clip associated with a preset keyword corresponding to the interaction identifier.
8. A video positioning and playing device, which is applied to a server, the device comprising:
the keyword generation module is configured to generate preset keywords corresponding to each video clip based on the video content of each video clip;
the relation establishing module is configured to establish an association relation between the preset keywords and the video clips based on the preset keywords of each video clip;
and the sending module is configured to send the association relation to the client so that the client determines the target video clip associated with the preset keyword corresponding to the triggered interaction identifier based on the association relation.
9. An electronic device, comprising:
a processor; and
memory storing computer instructions for causing the processor to perform the method according to any one of claims 1 to 4 or to perform the method according to any one of claims 5 to 6.
10. A storage medium having stored thereon computer instructions for causing a computer to perform the method according to any one of claims 1 to 4 or to perform the method according to any one of claims 5 to 6.
CN202310505844.0A 2023-05-06 2023-05-06 Video positioning playing method and device Pending CN116668789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310505844.0A CN116668789A (en) 2023-05-06 2023-05-06 Video positioning playing method and device


Publications (1)

Publication Number Publication Date
CN116668789A true CN116668789A (en) 2023-08-29

Family

ID=87718129

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310505844.0A Pending CN116668789A (en) 2023-05-06 2023-05-06 Video positioning playing method and device

Country Status (1)

Country Link
CN (1) CN116668789A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111294660A (en) * 2020-03-12 2020-06-16 咪咕文化科技有限公司 Video clip positioning method, server, client and electronic equipment
CN113691853A (en) * 2021-07-16 2021-11-23 北京达佳互联信息技术有限公司 Page display method and device and storage medium
CN114640868A (en) * 2022-03-11 2022-06-17 湖南快乐阳光互动娱乐传媒有限公司 Video drainage method and related equipment
CN115396738A (en) * 2021-05-25 2022-11-25 腾讯科技(深圳)有限公司 Video playing method, device, equipment and storage medium
WO2023020325A1 (en) * 2021-08-20 2023-02-23 北京字跳网络技术有限公司 Video page display method and apparatus, and electronic device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination