CN113395605A - Video note generation method and device

Video note generation method and device

Info

Publication number
CN113395605A
Authority
CN
China
Prior art keywords
video
note
target
identifier
target video
Prior art date
Legal status
Granted
Application number
CN202110821263.9A
Other languages
Chinese (zh)
Other versions
CN113395605B (en)
Inventor
钟冰清
莫凯茜
Current Assignee
Shanghai Bilibili Technology Co Ltd
Original Assignee
Shanghai Bilibili Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Bilibili Technology Co Ltd filed Critical Shanghai Bilibili Technology Co Ltd
Priority to CN202110821263.9A priority Critical patent/CN113395605B/en
Publication of CN113395605A publication Critical patent/CN113395605A/en
Application granted granted Critical
Publication of CN113395605B publication Critical patent/CN113395605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a video note generation method and a video note generation device. The video note generation method comprises the following steps: displaying at least one video note on a playing interface of a target video; in the case that a first target video note is a video note which is not associated with the target video, responding to an association operation aiming at the first target video note, and adding the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note. Therefore, the first target video note can correspond to more than one video identifier, that is, the first target video note can comprise note contents of different videos, so that note contents of different videos can be recorded in the same video note, which is convenient for subsequent users to view and operate and greatly improves flexibility.

Description

Video note generation method and device
Technical Field
The application relates to the technical field of computers, in particular to a video note generation method. The application also relates to a video note generating device, a computing device and a computer readable storage medium.
Background
With the rapid development of computer technology, videos have attracted more and more attention and favor from users because, compared with text and pictures, they carry richer and more expressive information; users can watch videos anytime and anywhere through video playing software installed on a terminal or by visiting a video playing website.
In the prior art, a user usually watches a video and, separately from the video playing software or the video playing website, records the related content of the video in a notebook or in office software, and subsequently browses the content recorded in the notebook or office software to review the record made for that video. However, for a video viewer, this approach is cumbersome: content related to the video cannot be conveniently recorded or viewed, and the efficiency of recording or viewing content during video playback is low.
Disclosure of Invention
In view of this, the present application provides a video note generating method. The application also relates to a video note generating device, a computing device and a computer readable storage medium, which are used for solving the problem that the efficiency of recording or viewing the content in the process of playing the video is low in the prior art.
According to a first aspect of embodiments of the present application, a video note generation method is provided, including:
displaying at least one video note on a playing interface of the target video;
in the case that a first target video note is a video note which is not associated with the target video, responding to an association operation aiming at the first target video note, and adding the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note.
According to a second aspect of embodiments of the present application, there is provided a video note generating apparatus, including:
the display module is configured to display at least one video note on a playing interface of the target video;
the adding module is configured to respond to an association operation aiming at a first target video note when the first target video note is a video note which is not associated with the target video, and add the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note.
According to a third aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions to implement the operational steps of any of the video note generation methods.
According to a fourth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the operational steps of any of the video note generation methods.
According to the video note generation method, at least one video note is displayed on a playing interface of a target video; in the case that a first target video note is a video note which is not associated with the target video, in response to an association operation aiming at the first target video note, the video identifier of the target video is added to the association relationship between the note identifier and the video identifier of the first target video note.
In this way, the corresponding note content can be directly viewed and/or recorded in the playing interface of the target video through the video note, without recording video-related notes in a separate notebook or office software, so that note-taking efficiency during video playback is greatly improved. In addition, one video note can be selected from the video notes displayed in the playing interface of the target video, and the selected video note may be unassociated with the currently played target video; the video identifier of the target video is then added to the original association relationship between the note identifier and the video identifier of the selected video note, so that the selected video note can correspond to more than one video identifier. That is, the selected video note can comprise note contents of different videos, so that note contents of different videos can be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
Drawings
FIG. 1 is a flow chart of a video note generation method provided by an embodiment of the present application;
fig. 2 is a schematic view of a first playback interface provided in an embodiment of the present application;
fig. 3 is a schematic diagram of a second playing interface provided in an embodiment of the present application;
FIG. 4 is a data structure diagram of a video note according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a process for generating a video note according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a single video note mapping multiple videos according to an embodiment of the present application;
FIG. 7 is a process flow diagram of a video note generation method applied to a cooking teaching video according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a video note generating apparatus according to an embodiment of the present application;
fig. 9 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the present application can be implemented in many other ways than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; therefore, the application is not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present application, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
It should be noted that, for some learning videos/online lessons, a user currently has no convenient way to create notes that are recorded and generated in synchronization with the video; the user can only use third-party recording software while learning. When the user wants to play back and review, the user can only drag the video progress bar to repeatedly watch the corresponding learning segments, and must record notes by himself in the third-party recording software, so the playing time of the video cannot be recorded conveniently, quick jumping within the video is not possible, and a linkage function between the notes and the video is lacking.
The embodiment of the application provides a video note generation method, which can provide a note recording method combined with a video, so that a user watching the video can watch the video and record key time marks, related note contents and the like based on a video playing interface directly. In addition, based on the video note with the time mark created by the user, a backtracking function can be provided through content prompt and the time mark of the video note, so that the video can be quickly jumped to the corresponding moment of the video, and the video watching experience is improved.
According to the embodiment of the application, the creation of many-to-many association relation between played videos and video notes is realized, a user is supported to create multiple notes under one video, note contents and time marks of multiple videos can be recorded in the same video note, and great flexibility is provided.
The embodiment of the application also provides a data structure for representing the association relationship, the data structure can support a note creator to add a time mark in a self-defined mode, and the time mark can support a note viewer to use a time mark function to quickly position the marked time. Therefore, the method and the device can support a note creator to custom generate, create, modify and delete the related content of the note, so that the related content can be stored and shared more conveniently without secondary modification and editing of the video.
In the present application, a video note generating method is provided, and the present application relates to a video note generating apparatus, a computing device, and a computer readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 shows a flowchart of a video note generation method according to an embodiment of the present application, which specifically includes the following steps:
step 102: and displaying at least one video note on a playing interface of the target video.
Specifically, the target video may be the currently played video and may be any type of video, such as a learning video or an entertainment video. A video note may be a text record of content related to a video; during viewing of the target video, the video note may record multimedia information such as text and images (an image may be obtained from a screenshot of the target video or uploaded by the user, and a screenshot may carry the video time point at which it was captured). In addition, the video note may include at least one of a video note created by the currently logged-in user and video notes created by other users. In practical application, part of the area of the playing interface of the target video can be set aside, and the acquired at least one video note is displayed in that area.
In an optional implementation manner of this embodiment, the corresponding video note may be obtained and displayed according to the user identifier of the login user, so that at least one video note is displayed on the play interface of the target video, and the specific implementation process may be as follows:
acquiring at least one first video note created by a login user according to a user identifier of the login user, wherein the at least one first video note comprises video notes associated and/or not associated with the target video;
and displaying the at least one first video note on a playing interface of the target video.
Specifically, the user identifier may refer to a symbol, a number, or a word that can uniquely identify a user, for example, the user identifier may be an account ID (identity) of the user; the first video note is a video note created by the logged-in user.
In addition, a video note associated with the target video may mean that notes related to the target video have been recorded in it, that is, among the note entries recorded in that video note, at least one note entry was created based on the target video; a video note not associated with the target video may mean that no note related to the target video has been recorded in it, that is, every note entry recorded in that video note is unrelated to the target video and was created based on other videos.
In practical application, each video note can carry the user identifier of the note creation user when being created, so that in the playing process of the target video, after a certain user logs in, each video note corresponding to the user identifier of the logged-in user can be obtained from a video note library of a video playing platform, and the obtained video notes are displayed in a playing interface of the target video. The video note library is a database for storing all video notes corresponding to the video playing platform.
In addition, since the playing interface of the target video may display a plurality of video notes, the detailed content of each video note may not be displayed when the plurality of video notes are displayed; only the note identifier of each video note is displayed instead. Therefore, each of the obtained video notes may include a note identifier, and when the at least one first video note is displayed on the playing interface of the target video, the note identifier of the at least one first video note may be displayed. The note identifier may be a preset identifier for indicating the corresponding video note; for example, note identifiers include, but are not limited to, a note title, a note number, and the like.
For example, assume that user A has created 5 video notes in total, video note 1 was created for video A, video note 2 was created for video B, video note 3 was created for video C, video note 4 was created for video D, and video note 5 was created for video E. Assuming that the user a logs in during the process of watching the video C, at this time, the 5 video notes created by the user a may be acquired from the video note library according to the user identifier of the user a, and displayed in the playing interface of the video C, as shown in fig. 2.
In the playing process of any video, all video notes created by the login user can be acquired, and the acquired video notes can comprise video notes associated with the played video as well as video notes not associated with the played video; that is, every video note created by the login user can be displayed during the playback of a given video, so that the user can subsequently and conveniently add to, modify, or delete any video note, and notes related to the currently played video can be recorded flexibly and efficiently.
In an optional implementation manner of this embodiment, in addition to obtaining each video note created by the login user and displaying the video note on the video playing interface of the target video, the method may also obtain video notes created by other users and also display the video notes in the video playing interface of the target video, that is, after obtaining at least one first video note created by the login user according to the user identifier of the login user, further include:
acquiring a corresponding second video note according to the video identifier and the video type of the target video, wherein the second video note is created by a user other than the login user, and the second video note comprises video notes associated with and/or not associated with the target video;
correspondingly, displaying a note title of the at least one first video note on a playing interface of the target video, including:
displaying note titles of the at least one first video note and the second video note on a playing interface of the target video.
Specifically, the video identifier may refer to a symbol, a number, or text that can uniquely identify a video; for example, the video identifier may be a video ID. The video type may refer to the type to which the video belongs, such as education, business, entertainment, and the like, and in practical applications may be a more fine-grained type, such as a math exam education video or an English exam education video, which is not limited in this application. In addition, the second video note refers to a video note created by a user other than the logged-in user, and thus the second video note may also include video notes associated and/or unassociated with the target video.
In practical application, when the second video notes created by other users except the login user are obtained, a large number of video notes may exist and cannot be displayed completely, so that the second video notes associated with the target video can be obtained based on the video identifier of the target video, that is, the second video notes associated with the target video are obtained from a large number of video notes. In addition, in the process of watching the video, the user can jump to other videos of the same type based on the video notes created by other users, and each video note can also carry the video type, so that the video note corresponding to the video of the same type as the target video can be screened out from each second video note created by other users according to the video type of the target video.
It should be noted that the first video notes created by the login user are the login user's own records of the corresponding videos, so the login user can perform editing operations such as adding, deleting, and modifying on the first video notes created by himself; whereas the second video notes created by other users are those users' records of the corresponding videos, so the login user can only view and click them and, without obtaining editing authorization, cannot perform editing operations such as adding, deleting, or modifying; that is, the second video notes are only used for displaying the note entries they include and for adjusting the playing progress.
In specific implementation, the note titles of the first video note and the second video note can be displayed on the playing interface of the target video, and the note titles of the first video note and the second video note can be displayed in different areas.
In addition, because the number of the acquired first video notes and the number of the acquired second video notes may be huge, a user cannot quickly find a video note required by the user from the displayed first video notes or the displayed second video notes, the display area of the video notes may further include a note search box, and the user may input keywords or identifiers of the required video notes in the note search box to search, so that the video note required by the user can be quickly found, and subsequent viewing or editing operations are performed.
Along the above example, the video notes 1-5 are video notes created by the user a, and in addition, according to the video identifier of the video C, the video note 6 and the video note 7 associated with the video C created by the user B are obtained, and according to the video type "education class" of the video C, the video note 8 created by the user B for the video F which is the video of the same education class and the video note 9 created by the user C for the video G which is the video of the same education class are obtained. At this time, the video notes 1-5 are displayed in the self-created note area and the video notes 6-9 are displayed in the non-self-created note area, as shown in FIG. 3.
According to the embodiment of the application, not only can the first video note created by the login user, but also the second video note created by other users can be obtained, and the first video note and the second video note are displayed in the playing interface of the target video at the same time, so that the user can conveniently check and edit the video note created by the user, the user can conveniently check the note content created by other users, and the user can conveniently and flexibly jump to the corresponding playing time in the corresponding video.
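As an illustration of the note retrieval described above, the following TypeScript sketch filters a note library into first video notes (created by the login user) and second video notes (created by other users and related to the target video by video identifier or video type). The interface and function names are assumptions for illustration only and are not part of the patent.

    interface VideoNote {
      noteId: string;
      creatorId: string;            // user identifier of the note creator
      associatedVideoIds: string[]; // associated video list
      videoType: string;            // type of the video the note was created for
      title: string;
    }

    // A minimal sketch, assuming the note library can be filtered in memory.
    function collectNotesForPlayback(
      library: VideoNote[],
      loginUserId: string,
      targetVideoId: string,
      targetVideoType: string
    ): { firstNotes: VideoNote[]; secondNotes: VideoNote[] } {
      // First video notes: every note created by the login user,
      // whether or not it is associated with the target video.
      const firstNotes = library.filter(n => n.creatorId === loginUserId);

      // Second video notes: notes created by other users that are either
      // associated with the target video or belong to the same video type.
      const secondNotes = library.filter(
        n =>
          n.creatorId !== loginUserId &&
          (n.associatedVideoIds.includes(targetVideoId) || n.videoType === targetVideoType)
      );

      return { firstNotes, secondNotes };
    }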
Step 104: in the case that the first target video pen is marked as a video note which is not associated with the target video, in response to the association operation for the first target video note, the video identifier of the target video is added in the association relationship between the note identifier and the video identifier of the first target video note.
Specifically, the first target video note may be any one of video notes created by the login user, that is, the first target video note may be any one of the first video notes. It should be noted that, after the user selects the first target video note, if the selected first target video note is not associated with the target video, the user may associate the first target video note with the target video through a preset association operation.
The association operation may be an operation of clicking an association control, or an operation of adding a note content related to the target video in the first target video note, that is, the first target video note may be associated with the target video by adding a note content related to the target video in the first target video note, or the first target video note may be associated with the target video directly through a preset association control without adding a note content related to the target video in the first target video note, which is not limited in the present application.
In practical application, when the association operation for the first target video note is detected, it is described that the user wants to associate the first target video note with the target video, and at this time, the video identifier of the target video may be added to the association relationship between the note identifier and the video identifier of the first target video note, so as to associate the first target video note with the target video.
In an optional implementation manner of this embodiment, associating the first target video note with the target video by adding the note content related to the target video to the first target video note, that is, in response to the association operation for the first target video note, before adding the video identifier of the target video in the association relationship between the note identifier and the video identifier of the first target video note, the method further includes:
in the case that an adding operation aiming at a first target video note is detected, generating a note entry corresponding to the adding operation in the first target video note, or adding note content of the target video in the note entry corresponding to the adding operation;
determining that the association operation is detected.
When the first target video note is associated with the target video by adding the note content related to the target video to the first target video note, the association operation for the first target video note is the addition operation. The adding operation refers to an operation of adding a time node and note content related to a target video in a first target video note after a login user selects the first target video note from at least one displayed video note, for example, the adding operation may be an operation triggered by inserting a time stamp control in an editing area after entering the first target video note.
In practical application, a user can randomly select one video note from at least one displayed video note, namely a first target video note, in the process of watching a target video; after the first target video note is selected, the play interface of the target video may display detailed information of the first target video note, where the detailed information may include an edit control, each note entry, and the like, each note entry may include a timestamp and note data, and the edit control may include an add control, a modify control, a delete control, and the like. Then, a login user can generate a new note entry in the first target video note by triggering the adding control, and the related note content of the target video is added into the newly generated note entry; or, the login user can select a note entry in the first target video note, and the note entry is modified through the modification control, so that the related note content of the target video is added into the note entry.
One video note in the present application can comprise one or more note entries, and the plurality of note entries can correspond to different videos. The user can generate a new note entry in the selected first target video note and add the related note content of the target video to the new note entry, or can edit an existing note entry in the first target video note and add the related note content of the target video to that note entry. By adding the related note content of the target video to the selected video note in either way, the operation modes for associating the first target video note with the target video are flexible and diverse, can meet the requirements of different application scenarios, and have high adaptability and flexibility.
In an optional implementation manner of this embodiment, a special format of the video note may be set, so that the video note may record content such as video information and note information, that is, a note entry corresponding to the adding operation is generated in the first target video note, and a specific implementation process may be as follows:
determining a first video marking progress corresponding to the adding operation in the target video, acquiring a video title and a video identifier of the target video, and taking the first video marking progress, the video title and the video identifier as time marks;
acquiring note content input by a login user, determining note format parameters of the note content, and taking the note content and the note format parameters as note data;
and adding the time mark and the note data into the first target video note, and generating a note entry corresponding to the adding operation.
Specifically, the first video marking progress may refer to a playing progress moment in the target video, and the first video marking progress corresponding to the adding operation in the target video may refer to the playing progress moment of the target video when the adding operation is performed, such as 90 seconds, 120 seconds, and the like. A time mark is data that describes an arbitrary playing moment during video playback, usually shown in the format "hour:minute:second + video title" for ease of reading, for example "01:30 A01" (the 1st minute and 30 seconds of video A01). Additionally, the note content may include text and/or pictures.
In practical applications, the data structure of the video note may be an array structure, each element in the array represents a time stamp recorded by a user or note data with style information, and the time stamp may include a first video stamp progress, a video title, and a video identifier, where the first video stamp progress is used to describe a playing progress time when adding the record, the video title is used to show a video to which a current time stamp belongs in the multi-video note content, and the video identifier is used to obtain the video to which the time stamp belongs. The note data may include note specific content entered by the user, as well as note format parameters describing font size, color, and background color.
In addition, the login user can input note content in text form and, through the control for adding a time mark, add the corresponding playing progress moment (the first video marking progress) for that text; or the login user can upload a picture as note content and add the corresponding playing progress moment for the picture through the control for adding a time mark; or the login user can directly take a screenshot of the target video, use the screenshot as the note content, and obtain the playing progress moment corresponding to the screenshot as the first video marking progress.
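As an illustration of the screenshot option above, the following browser-side TypeScript sketch captures the current frame of the player as note content together with the playing progress moment; it is a hypothetical illustration and assumes an HTMLVideoElement player and same-origin video data.

    // A minimal sketch: capture a screenshot of the target video as note content
    // and record the current playing moment as the first video marking progress.
    function captureScreenshotNote(player: HTMLVideoElement): { imageDataUrl: string; second: number } {
      const canvas = document.createElement("canvas");
      canvas.width = player.videoWidth;
      canvas.height = player.videoHeight;
      const ctx = canvas.getContext("2d");
      if (ctx) {
        ctx.drawImage(player, 0, 0, canvas.width, canvas.height);
      }
      return {
        imageDataUrl: canvas.toDataURL("image/png"), // picture used as note content
        second: Math.floor(player.currentTime),      // first video marking progress
      };
    }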
In an optional implementation manner of this embodiment, the time stamp may further include a state identifier of the time stamp, in addition to the first video stamp progress, the video title and the video identifier, where the state identifier is used to indicate whether the video to which the time stamp belongs is valid, that is, the time stamp further includes the state identifier; after the video title and the video identifier of the target video are obtained, the method further comprises the following steps:
and setting the state flag of the time mark as valid.
It should be noted that, during the playing process of the target video, when a new note entry is added to the first target video note based on the target video, the target video can be normally played, and thus the status flag of the added timestamp should be valid. In the subsequent process, if the target video is deleted, that is, the target video cannot be normally played through the video identifier of the target video, the state identifier of the time mark of the note entry related to the target video may be modified to be invalid. Therefore, whether the video to which the time mark belongs is still valid, namely whether the video can be played normally can be determined simply and quickly through the state identification of the time mark.
In an optional implementation manner of this embodiment, a data structure of the video note may be a JSON (lightweight data exchange format) structure, where the JSON structure may include an associated video list, a note title, a note text, and the like, that is, each video note includes a list of video notes associated with itself, and it may be determined through the list whether the first target video note is associated with the target video, that is, the first target video note includes an associated video list; after the playing interface of the target video displays at least one video note, the method further comprises the following steps:
determining whether the target video is a video in the associated video list or not according to the video identifier of the target video;
if so, determining that the first target video note is a video note associated with the target video;
if not, determining that the first target video note is a video note not associated with the target video.
It should be noted that, if the video identifier of the target video is a video identifier in the associated video list, it indicates that the first target video note is a video note associated with the target video; and if the video identifier of the target video is not a video identifier in the associated video list, it indicates that the first target video note is a video note not associated with the target video.
In practical applications, the video note may include an associated video identifier list, a note title, and a note body content, the note body content may include note data and a time stamp, the note data may include the note content input by the user and note format parameters describing font size, color, and background, and the time stamp may include a corresponding video identifier, a first video stamp progress, a corresponding video title, and a status identifier.
For example, fig. 4 is a schematic diagram of a data structure of a video note provided in an embodiment of the present application. As shown in fig. 4, a video identifier (e.g., video ID-01) may be associated with the note identifiers (Note IDs) of a plurality of video notes. Each note identified by a note identifier includes an associated video identifier list (e.g., video ID-01, video ID-02, video ID-03, ...), a note title (Note Title), and note body content (Content). The note body content includes note data and time marks; the note data includes the note content input by the user (e.g., "insert": "content XX") and note format parameters used to describe font size, color, and background (e.g., "attributes": {"size": "16px", "bold": true}); the time mark includes the corresponding video identifier (e.g., video ID-01), the first video marking progress (the playing progress moment, e.g., 60), the corresponding video title (e.g., title XX), and the state identifier (whether the video it belongs to is valid, e.g., true).
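For readers, the data structure of fig. 4 might be expressed in TypeScript roughly as below; the field names are assumptions chosen to mirror the description and are not prescribed by the patent.

    // A minimal sketch of the video note data structure described above.
    interface TimeMark {
      videoId: string;    // identifier of the video the mark belongs to
      second: number;     // first video marking progress, e.g. 60
      videoTitle: string; // video title shown in multi-video note content
      valid: boolean;     // state identifier: whether the video is still playable
    }

    interface NoteEntry {
      insert: string;                                                 // note content input by the user
      attributes?: { size?: string; bold?: boolean; color?: string }; // note format parameters
      timeMark?: TimeMark;
    }

    interface VideoNoteDocument {
      noteId: string;
      associatedVideoIds: string[]; // associated video list, e.g. ["videoID-01", "videoID-02"]
      noteTitle: string;
      content: NoteEntry[];         // note body content: note data and time marks
    }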
In practical applications, the video note editing function can be provided through a rich text editor, which is a text editor that can be embedded in a browser and can support setting of various text formats, such as font size, color, and the like.
In specific implementation, a button for adding a time mark can be provided on the toolbar at the top of the rich text editor, and the login user (namely the creator) can click it at any point during playback of the target video. When the login user clicks, the number of seconds of the current playing moment of the player (namely the first video marking progress) is obtained and stored in the "second" field of the time mark's data structure; then, the unique source ID of the currently played target video is obtained and stored in the unique video source ID field of the time mark's data structure, and the state identifier of the time mark is set to valid; the video title of the currently played target video is also obtained and stored in the video title field of the time mark's data structure. Next, the obtained number of seconds may be formatted as "hour:minute:second", a <div> tag is created, each item of content contained in the time mark's data structure is added to the attributes of the <div> tag, and the formatted time is inserted into the display content of the <div> tag; finally, the cursor position of the login user in the rich text editor is obtained, the created <div> node carrying the time mark information is inserted at the cursor position, and the <div> node is displayed in the rich text editor.
For example, fig. 5 is a schematic diagram of a generation process of a video note provided in an embodiment of the present application, and as shown in fig. 5, a time stamp obtaining button is clicked, and then a playing video title, a playing video source ID, and a current playing second (for example, 90 seconds, which may be 01:30 after being formatted) may be obtained by a player. Thereafter, a time stamp div is generated and inserted into the rich text marker.
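The toolbar flow described above could be sketched in browser-side TypeScript as follows; the helper names and the contenteditable editor setup are assumptions for illustration, not the patent's prescribed implementation.

    // A minimal sketch, assuming an HTMLVideoElement player and a contenteditable rich text editor.
    function formatSeconds(totalSeconds: number): string {
      const m = Math.floor(totalSeconds / 60);
      const s = Math.floor(totalSeconds % 60);
      const pad = (n: number) => String(n).padStart(2, "0");
      return `${pad(m)}:${pad(s)}`; // e.g. 90 -> "01:30"
    }

    function insertTimeMark(player: HTMLVideoElement, videoId: string, videoTitle: string): void {
      const second = Math.floor(player.currentTime); // first video marking progress

      // Create a <div> tag carrying the time mark information in its attributes.
      const div = document.createElement("div");
      div.dataset.second = String(second);
      div.dataset.videoId = videoId;
      div.dataset.videoTitle = videoTitle;
      div.dataset.valid = "true"; // state identifier
      div.textContent = `${formatSeconds(second)} ${videoTitle}`;

      // Insert the created node at the current cursor position in the editor.
      const selection = window.getSelection();
      if (selection && selection.rangeCount > 0) {
        selection.getRangeAt(0).insertNode(div);
      }
    }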
In an optional implementation manner of this embodiment, after the playing interface of the target video displays at least one video note, the user may also create a new video note for the target video instead of selecting an existing video note, that is, after the playing interface of the target video displays at least one video note, the method further includes:
under the condition that a note creating operation is detected, creating a second target video note and generating a note identifier of the second target video note;
under the condition that a time mark adding operation is detected, determining a corresponding second video mark progress of the time mark adding operation in the target video, and adding the second video mark progress to the second target video note;
and storing the incidence relation between the note identification of the second target video note and the video identification of the target video.
Specifically, the note creating operation is an operation triggered by a preset note creating control, and when the note creating operation is detected, it indicates that the login user wants to create a new video note, and records note content related to the target video, so that a second target video note can be created at this time, and a note identifier of the second target video note is generated, where the second target video note is a blank and new video note.
In addition, the time stamp adding operation may refer to an operation of inserting content into the newly created second target video note, and in the case that the time stamp adding operation is detected, it indicates that the user wants to insert a corresponding playing time into the newly created second target video note for recording, so that a corresponding second video stamp progress of the time stamp adding operation in the target video may be determined at this time, and the second video stamp progress may be added to the second target video note.
In practical applications, after the second target video note is created for the target video, the association relationship between the second target video note and the target video needs to be correspondingly stored, so as to associate the second target video note with the target video.
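A sketch of this create-and-associate flow is given below; the in-memory store and identifier scheme are assumptions for illustration only.

    // A minimal sketch of creating a second target video note and storing its association.
    interface NewVideoNote {
      noteId: string;
      associatedVideoIds: string[];
      timeMarks: { videoId: string; second: number; valid: boolean }[];
    }

    function createNoteForVideo(store: Map<string, NewVideoNote>, targetVideoId: string): NewVideoNote {
      // Note creating operation: create a blank note and generate its note identifier.
      const note: NewVideoNote = { noteId: `note-${Date.now()}`, associatedVideoIds: [], timeMarks: [] };
      // Store the association between the note identifier and the video identifier.
      note.associatedVideoIds.push(targetVideoId);
      store.set(note.noteId, note);
      return note;
    }

    function addTimeMark(note: NewVideoNote, targetVideoId: string, currentSecond: number): void {
      // Time mark adding operation: record the second video marking progress in the new note.
      note.timeMarks.push({ videoId: targetVideoId, second: currentSecond, valid: true });
    }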
In an optional implementation manner of this embodiment, after the playing interface of the target video displays at least one video note, in addition to creating a new video note for the target video or adding a new note entry to an existing video note, the user may also delete a note entry included in the displayed at least one video note; that is, after the playing interface of the target video displays at least one video note, the method further comprises:
receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note;
judging whether a first video identifier is included in a second video identifier, wherein the first video identifier is a video identifier corresponding to the deleted note entry, and the second video identifier is a video identifier corresponding to the remaining note entries of the third target video note;
and if not, deleting the first video identifier in the association relationship between the note identifier and the video identifier of the third target video note.
Specifically, the third target video note is any one of the video notes created by the login user, that is, the third target video note is any one of the first video notes. The deletion operation refers to an operation of deleting a note entry in the third target video note after the login user selects the third target video note from the displayed at least one video note; for example, the deletion operation may be an operation triggered by the deletion control of a note entry after entering the third target video note.
It should be noted that, after deleting the note entry indicated by the deletion operation, it is necessary to determine the first video identifier corresponding to the deleted note entry, and determine whether the third target video note further includes the note entry corresponding to the first video identifier, so as to determine whether the third target video note is further associated with the video corresponding to the first video identifier after deleting the note entry. If the second video identifications corresponding to the remaining note entries of the third target video note do not include the first video identification corresponding to the deleted note entry, it is indicated that the third target video note is unrelated to the video corresponding to the first video identification, and at this time, the first video identification can be deleted in the original association relationship between the note identification and the video identification of the third target video note, so that the third target video note is unrelated to the video corresponding to the first video identification.
For example, the third target video note is a video note 1, which includes 3 note entries, where the note entry 1 corresponds to a video a, the note entry 2 corresponds to a video B, and the note entry 3 corresponds to a video C, and at this time, the association relationship between the note identifier and the video identifier of the video note 1 is shown in table 1 below. Assuming that the user deletes the note entry 2 after selecting the video note 1, since the remaining note entries 1 and 3 are unrelated to the video B after deleting the note entry 2, the video B in the following table 1 is deleted, and at this time, the association relationship between the note identifier and the video identifier of the video note 1 is shown in the following table 2.
Table 1 table of association between note identifier and video identifier of video note 1
Note identifier: video note 1; associated video identifiers: video A, video B, video C
Table 2 updated table of association relationship between note identifier and video identifier of video note 1
Note identifier: video note 1; associated video identifiers: video A, video C
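The deletion and cleanup logic above can be illustrated with the following TypeScript sketch, using assumed types similar to the earlier sketches.

    // A minimal sketch: delete the indicated note entry and drop the video identifier
    // from the association relationship if no remaining entry still references that video.
    interface EntryWithVideo { videoId?: string; text: string }
    interface NoteWithEntries { noteId: string; associatedVideoIds: string[]; entries: EntryWithVideo[] }

    function deleteNoteEntry(note: NoteWithEntries, entryIndex: number): void {
      const [removed] = note.entries.splice(entryIndex, 1); // delete the note entry indicated by the deletion operation
      const firstVideoId = removed?.videoId;                // first video identifier
      if (!firstVideoId) return;

      // Second video identifiers: videos referenced by the remaining note entries.
      const stillReferenced = note.entries.some(e => e.videoId === firstVideoId);
      if (!stillReferenced) {
        // Remove the first video identifier from the note's associated video list.
        note.associatedVideoIds = note.associatedVideoIds.filter(id => id !== firstVideoId);
      }
    }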
The embodiment of the application provides a data structure of a video note, the data structure can support a note creator to add a time mark and note data in a self-defined mode, and the time mark can support a note viewer to use a time mark function to quickly position a marked time. Therefore, the method and the device can support a note creator to custom generate, create, modify and delete the related content of the note, so that the related content can be stored and shared more conveniently without secondary modification and editing of the video.
In practical application, adding the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note may be adding the correspondence between the note identifier of the first target video note and the video identifier of the target video to the original association relationship between note identifiers and video identifiers.
In an optional implementation manner of this embodiment, if the first target video note stores each video associated with the first target video note in the form of an associated video list, the associated video list may be updated, that is, the video identifier of the target video is added to the association relationship between the note identifier and the video identifier of the first target video note, and a specific implementation process may be as follows:
adding the video identification of the target video to the associated video list of the first target video note.
For example, assuming that the first target video note is a video note 3 in which note entries related to video a, video B and video C are stored, that is, the associated video list of the video note 3 is as shown in table 3 below, and assuming that the target video is a video D, after generating a note entry corresponding to the video D in the video note 3, the video D may be added in the association relationship as described in table 3 below, so as to obtain an updated associated video list of the video note 3 as shown in table 4 below.
TABLE 3 associated video List for video Note 3
Associated videos of video note 3: video A, video B, video C
TABLE 4 updated associated video List for video Note 3
Associated videos of video note 3: video A, video B, video C, video D
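For completeness, the list update itself can be as small as the hypothetical helper below.

    // A minimal sketch: associate the target video with the selected note by
    // appending its identifier to the note's associated video list (if not already present).
    function associateVideo(associatedVideoIds: string[], targetVideoId: string): string[] {
      return associatedVideoIds.includes(targetVideoId)
        ? associatedVideoIds
        : [...associatedVideoIds, targetVideoId];
    }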
In the embodiment of the present application, any one of the video notes displayed in the playing interface of the target video can be selected, and the selected video note may be unassociated with the currently played target video; the note content to be recorded is then added to the selected video note, and the video identifier of the target video is added to the original association relationship between the note identifier and the video identifier of the first target video note, so that the first target video note can correspond to more than one video identifier. That is, the first target video note can comprise note contents of different videos, so that note contents of different videos can be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
In an optional implementation manner of this embodiment, in the process of viewing the target video, a user may view each video note displayed on the play interface of the target video, and the user may jump to the corresponding video and play at the time mark by clicking a certain time mark in a certain video note, that is, after at least one video note is displayed on the play interface of the target video in this embodiment of the present application, the method may further include:
under the condition that a selection operation for a target note entry of a fourth target video note is detected, parsing the target note entry and obtaining a target time mark included in the target note entry;
determining whether the video corresponding to the target note entry is the target video according to the target time mark;
under the condition that the video corresponding to the target note entry is not the target video, switching the currently played target video to the video corresponding to the target note entry;
and adjusting the playing progress of the video corresponding to the target note entry according to the target time mark.
In practical applications, the fourth target video note may be a video note created by the login user itself or any one of video notes created by other users, that is, the fourth target video note may be any one of the first video note or the second video note. The target note entry is a note entry selected from the displayed detailed information after the user clicks into the fourth target video note.
It should be noted that, after a user selects a certain note entry, the note entry may be analyzed to obtain a corresponding target time stamp, and since the note entry selected by the user is not necessarily a note corresponding to a currently played video, it may also be determined whether the note entry selected by the user is a note corresponding to the currently played video based on the obtained target time stamp, and if not, the video corresponding to the note entry selected by the user is skipped to, and then the playing progress of the skipped video is adjusted.
In an optional implementation manner of this embodiment, when a note entry is generated, a corresponding video identifier is added to a time stamp of a video note, that is, the obtained target time stamp should include a corresponding video identifier, and at this time, according to the target time stamp, it is determined whether a video corresponding to the target note entry is the target video, where a specific implementation process may be as follows:
determining a video identification included by the target timestamp;
and under the condition that the video identification included by the target time mark is different from the video identification of the target video, determining that the video corresponding to the target note entry is not the target video.
It should be noted that, if the video identifier included in the target time stamp is different from the video identifier of the target video, it is indicated that the video corresponding to the target note entry is not the target video, and at this time, the video corresponding to the video identifier included in the target time stamp should be skipped to first, and then the playing progress is adjusted.
In an optional implementation manner of this embodiment, when a note entry is generated, a corresponding video tagging progress is recorded in a time tag of a video note, that is, a playing progress time is recorded, that is, an obtained target time tag should include a target video tagging progress, and at this time, according to the target time tag, a playing progress of a video corresponding to the target note entry is adjusted, where a specific implementation process may be as follows:
determining a target video marking progress included by the target time mark;
and adjusting the playing progress of the video corresponding to the target note entry to the target video marking progress.
For example, assuming that the currently played video is video C, the user selects note entry 1 in video note 4 from the video notes displayed on the playing interface of video C; assuming that note entry 1 is a note corresponding to video A, and the time mark of note entry 1 includes a target video marking progress of "01:30", the currently played video C may be switched to video A, and the playing progress of video A may then be adjusted to the position of 1 minute and 30 seconds.
For example, fig. 6 is a schematic diagram of a single video note mapping multiple videos provided by an embodiment of the present application, and as shown in fig. 6, a note creator views a video a, a video B, and a video C, and adds text content (i.e., note content) and a time stamp in a rich text editor. Assume that the added timestamps include timestamp A (02:30), which corresponds to video A's 02: 30; timestamp B1(00:30), timestamp B1 corresponding to video B at 00: 30; timestamp B2(01:40), timestamp B2 corresponds to video B at 01: 40. The note reader can read the text content and click on the time stamp, pointing to video A at 02:30 when clicking on time stamp A, to video B at 00:30 when clicking on time stamp B1, and to video B at 01:40 when clicking on time stamp B2.
In an optional implementation manner of this embodiment, a state identifier is set in the time mark when the note entry is generated, that is, the obtained target time mark should include the state identifier. Some videos may be deleted over time, for content reasons, and the like; after a video is deleted, the corresponding video cannot be found and played based on its video identifier. Therefore, if the corresponding video cannot be found according to the video identifier, the state identifier in the time mark can be modified, thereby avoiding a useless search the next time. That is, switching the currently played target video to the video corresponding to the target note entry includes:
searching a corresponding video to be jumped according to the video identification included by the target time mark;
and under the condition that the video to be skipped is found, switching the currently played target video into the video to be skipped.
It should be noted that, when the video to be skipped is found, the video to be skipped can be played normally without modifying the state identifier in the time stamp, at this time, the currently played target video can be directly switched to the video to be skipped, and the playing progress is subsequently adjusted.
In addition, in the case that the video to be jumped to cannot be found, the state identifier in the target time mark can be updated to invalid, and an error prompt is displayed. That is to say, if the video to be jumped to is not found, it indicates that the video to be jumped to is abnormal and cannot be played; at this time, the state identifier in the target time mark can be updated to invalid and an exception prompt displayed. The target note entry can subsequently be set to a display-only state that no longer supports jumping, thereby avoiding useless clicks and useless parsing and processing operations.
In an optional implementation manner of this embodiment, the time mark may include a state identifier, through which it may be determined whether the video to which the time mark belongs is still valid, and whether to perform the subsequent video search and jump operations may be decided according to the state identifier. That is, before determining, according to the target time mark, whether the video corresponding to the target note entry is the target video, the method may further include:
acquiring a state identifier included in the target time mark;
and if the state identifier included in the target time mark is valid, executing the operation step of determining whether the video corresponding to the target note entry is the target video.
It should be noted that the operation step of determining whether the video corresponding to the target note entry is the target video is executed only when the state identifier included in the target time mark is valid, which prevents useless redundant operations from being executed and saves processing resources.
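Taken together, the state identifier acts as a guard around the jump: the flag is checked first, the video is then looked up by its identifier, and the mark is invalidated when the lookup fails. The following minimal Python sketch only illustrates that flow; the VIDEO_LIBRARY store, the find_video helper, the prompt text, and the "valid"/"invalid" flag values are assumptions rather than details from the specification.

    # Illustrative sketch of the state-identifier guard; all names are hypothetical.
    VIDEO_LIBRARY = {"video_A": "Video A", "video_B": "Video B"}   # stand-in video store

    def find_video(video_id):
        # Returns the video record, or None when the video has been deleted or taken down.
        return VIDEO_LIBRARY.get(video_id)

    def jump_via_time_mark(current_video_id, mark):
        # 'mark' is a dict with "video_id", "progress" (seconds) and "state" fields.
        # 1. An invalid state identifier means the entry is display-only: do nothing.
        if mark["state"] != "valid":
            return None
        # 2. A mark pointing at the current video only needs a progress adjustment.
        if mark["video_id"] == current_video_id:
            return (current_video_id, mark["progress"])
        # 3. Otherwise look the video up; invalidate the mark when it cannot be found.
        if find_video(mark["video_id"]) is None:
            mark["state"] = "invalid"
            print("The video this note entry points to is no longer available.")
            return None
        return (mark["video_id"], mark["progress"])

    # Example: the mark's video has been deleted, so the state flag flips to invalid.
    mark = {"video_id": "video_X", "progress": 90, "state": "valid"}
    print(jump_via_time_mark("video_C", mark), mark["state"])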
With the video note generation method described above, the corresponding note content can be viewed and/or recorded directly in the playing interface of the target video through the video note, without recording video-related notes in a separate notebook or office software, which greatly improves note-taking efficiency during video playing. In addition, any one of the video notes displayed in the playing interface of the target video can be selected, and the selected video note need not be associated with the currently played target video; the video identifier of the target video is then added to the existing association relationship between the note identifier and the video identifiers of the selected video note, so that the selected video note can correspond to more than one video identifier. In other words, the selected video note can contain note content for different videos, so the note content of different videos can be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
Moreover, in the embodiment of the present application, based on a video note with time marks created by the user, a backtracking function can be provided through the content prompts and time marks of the video note, so that playback can quickly jump to the corresponding moment of the corresponding video, improving the video watching experience.
In the following, with reference to fig. 7, the video note generation method provided by the present application is further described by taking its application to a home cooking teaching video as an example. Fig. 7 shows a processing flowchart of the video note generation method applied to a home cooking teaching video according to an embodiment of the present application, which specifically includes the following steps:
Step 702: displaying at least one video note on a playing interface of the home cooking teaching video.
The displayed video notes may include at least one of video notes created by the currently logged-in user and video notes created by other users, and may include video notes that are associated and/or not associated with the home cooking teaching video.
Step 704: in the case that an adding operation for a first target video note is detected, a note entry corresponding to the adding operation is generated in the first target video note.
In practical application, the first video marking progress corresponding to the adding operation in the home cooking teaching video can be determined, the video title and the video identifier of the home cooking teaching video can be obtained, and the first video marking progress, video title, and video identifier can be used as the time mark. Then, the note content input by the logged-in user is acquired, the note format parameters of the note content are determined, and the note content and the note format parameters are used as the note data, where the note content includes text and/or pictures. Finally, the time mark and the note data are added to the first target video note, generating a note entry corresponding to the adding operation.
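One way to picture the note entry assembled in this step is as a small record that combines the time mark with the note data; the Python sketch below is illustrative only, and the field names are assumptions rather than terms defined by the specification.

    # Illustrative layout of a note entry; field names are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class NoteEntry:
        # Time mark recorded at the moment of the adding operation.
        video_id: str
        video_title: str
        mark_progress: int                      # first video marking progress, in seconds
        state: str = "valid"                    # state identifier set when the entry is created
        # Note data entered by the logged-in user.
        content: str = ""                       # text and/or picture references
        format_params: dict = field(default_factory=dict)   # e.g. font size, bold, colour

    entry = NoteEntry(video_id="v42", video_title="Braised pork tutorial",
                      mark_progress=150, content="Blanch the pork before braising",
                      format_params={"bold": True})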
Step 706: in the case that the first target video note is a video note that is not associated with the home cooking teaching video, adding the video identifier of the home cooking teaching video to the association relationship between the note identifier and the video identifier of the first target video note.
Step 708: in the event a note creation operation is detected, a second target video note is created and a note identification for the second target video note is generated.
Step 710: in the case that a time mark adding operation is detected, determining the second video marking progress corresponding to the time mark adding operation in the home cooking teaching video, and adding the second video marking progress to the second target video note.
Step 712: storing the association relationship between the note identifier of the second target video note and the video identifier of the home cooking teaching video.
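Steps 708 to 712 amount to creating an empty note, generating an identifier for it, optionally recording the marked progress, and persisting the note-to-video association. A minimal Python sketch of that sequence follows; the NOTES and ASSOCIATIONS stores and the create_video_note helper are hypothetical stand-ins for the real storage layer.

    # Illustrative sketch of note creation and association storage; names are hypothetical.
    import uuid
    from typing import Optional

    NOTES = {}          # note identifier -> {"entries": [...], "mark_progress": [...]}
    ASSOCIATIONS = {}   # note identifier -> set of associated video identifiers

    def create_video_note(video_id: str, mark_progress: Optional[int] = None) -> str:
        note_id = uuid.uuid4().hex                       # generated note identifier
        NOTES[note_id] = {"entries": [], "mark_progress": []}
        if mark_progress is not None:                    # a time mark adding operation occurred
            NOTES[note_id]["mark_progress"].append(mark_progress)
        ASSOCIATIONS[note_id] = {video_id}               # stored association relationship
        return note_id

    second_note_id = create_video_note(video_id="cooking_v1", mark_progress=200)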
Step 714: receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note.
Step 716: judging whether the first video identifier is included among the second video identifiers, where the first video identifier is the video identifier corresponding to the deleted note entry and the second video identifiers are the video identifiers corresponding to the remaining note entries of the third target video note;
Step 718: if not, deleting the first video identifier from the association relationship between the note identifier and the video identifiers of the third target video note.
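The check in steps 716 and 718 reduces to a membership test over the video identifiers of the remaining note entries. The Python sketch below illustrates the idea under assumed data structures; the dictionary layout and the delete_note_entry helper are not taken from the specification.

    # Illustrative sketch of deleting a note entry and pruning the association; names are hypothetical.
    def delete_note_entry(note, associated_video_ids, entry_index):
        # Remove the note entry indicated by the deletion operation.
        removed = note["entries"].pop(entry_index)
        first_video_id = removed["video_id"]
        # Collect the video identifiers of the remaining note entries.
        remaining_ids = {e["video_id"] for e in note["entries"]}
        if first_video_id not in remaining_ids:
            # No remaining entry refers to this video, so drop it from the
            # note identifier / video identifier association relationship.
            associated_video_ids.discard(first_video_id)

    note = {"entries": [{"video_id": "v1", "content": "step one"},
                        {"video_id": "v2", "content": "step two"}]}
    associated_video_ids = {"v1", "v2"}
    delete_note_entry(note, associated_video_ids, entry_index=1)
    print(associated_video_ids)   # {'v1'}, since "v2" was only used by the deleted entry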
Step 720: in the case that a selection operation on a target note entry of a fourth target video note is detected, parsing the target note entry and obtaining the target time mark included in the target note entry.
Step 722: determining, according to the target time mark, whether the video corresponding to the target note entry is the home cooking teaching video, and, in the case that it is not, switching the currently played home cooking teaching video to the video corresponding to the target note entry.
Step 724: adjusting the playing progress of the video corresponding to the target note entry according to the target time mark.
With the video note generation method provided by the present application, the corresponding note content can be recorded directly in the playing interface of the home cooking teaching video, without recording video-related notes in a separate notebook or office software, which greatly improves note-taking efficiency during video playing. In addition, any one of the video notes displayed in the playing interface of the home cooking teaching video can be selected, and the selected video note need not be associated with the currently played home cooking teaching video; the note content to be recorded is then added to the selected video note, and the video identifier of the home cooking teaching video is added to the existing association relationship between the note identifier and the video identifiers of the selected video note, so that the selected video note can correspond to more than one video identifier. In other words, the selected video note can contain note content for different videos, so the note content of different videos can be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
Moreover, in the embodiment of the present application, based on a video note with time marks created by the user, a backtracking function can be provided through the content prompts and time marks of the video note, so that playback can quickly jump to the corresponding moment of the corresponding video, improving the video watching experience.
Corresponding to the above method embodiment, the present application further provides a video note generating apparatus embodiment, and fig. 8 shows a schematic structural diagram of a video note generating apparatus provided in an embodiment of the present application. As shown in fig. 8, the apparatus includes:
a display module 802 configured to display at least one video note on a play interface of a target video;
an adding module 804, configured to, in a case that a first target video note is a video note not associated with the target video, add the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note in response to an association operation for the first target video note.
Optionally, the display module 802 is further configured to:
acquiring at least one first video note created by a login user according to a user identifier of the login user, wherein the at least one first video note comprises video notes associated and/or not associated with the target video;
and displaying the at least one first video note on a playing interface of the target video.
Optionally, the display module 802 is further configured to:
acquiring a corresponding second video note according to the video identifier and the video type of the target video, where the second video note is created by a user other than the logged-in user and includes video notes associated and/or not associated with the target video;
displaying the at least one first video note and the second video note on a playing interface of the target video.
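Functionally, the display module described above fetches the logged-in user's own notes by user identifier, fetches other users' notes by the video identifier and video type of the target video, and shows both sets together. The sketch below is only an illustration; query_notes_by_user and query_notes_by_video are made-up stand-ins for the real storage queries.

    # Illustrative sketch of assembling the note list for the playing interface; names are hypothetical.
    def query_notes_by_user(user_id):
        # Stand-in for the real storage query: the logged-in user's own notes.
        return [{"note_id": "n1", "creator": user_id, "title": "my study notes"}]

    def query_notes_by_video(video_id, video_type):
        # Stand-in: notes other users chose to share for this video and video type.
        return [{"note_id": "n2", "creator": "other_user", "title": "shared notes"}]

    def notes_for_play_interface(user_id, video_id, video_type):
        first_notes = query_notes_by_user(user_id)              # first video notes
        second_notes = [n for n in query_notes_by_video(video_id, video_type)
                        if n["creator"] != user_id]             # second video notes
        return first_notes + second_notes

    print(notes_for_play_interface("u1", "v42", "teaching"))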
Optionally, the apparatus further comprises a generating module configured to:
in the case that an adding operation aiming at a first target video note is detected, generating a note entry corresponding to the adding operation in the first target video note, or adding note content of the target video in the note entry corresponding to the adding operation;
determining that the association operation is detected.
Optionally, the generation module is further configured to:
determining a first video marking progress corresponding to the adding operation in the target video, acquiring a video title and a video identifier of the target video, and taking the first video marking progress, the video title and the video identifier as time marks;
acquiring note content input by a login user, determining note format parameters of the note content, and taking the note content and the note format parameters as note data;
and adding the time mark and the note data into the first target video note, and generating a note entry corresponding to the adding operation.
Optionally, the time mark further comprises a state identifier; the generation module is further configured to:
setting the state identifier of the time mark to valid.
Optionally, the apparatus further comprises a creating module configured to:
under the condition that a note creating operation is detected, creating a second target video note and generating a note identifier of the second target video note;
under the condition that a time mark adding operation is detected, determining a corresponding second video mark progress of the time mark adding operation in the target video, and adding the second video mark progress to the second target video note;
and storing the incidence relation between the note identification of the second target video note and the video identification of the target video.
Optionally, the first target video note includes an associated video list; the add module 804 is further configured to:
determining whether the target video is a video in the associated video list or not according to the video identifier of the target video;
if so, determining that the first target video note is a video note associated with the target video;
if not, determining that the first target video note is a video note not associated with the target video.
Optionally, the adding module 804 is further configured to:
adding the video identification of the target video to the associated video list of the first target video note.
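The associated-video-list handling described for the adding module reduces to a membership test followed by an append. The short Python sketch below is illustrative only; the associate_if_needed helper and the dictionary-based note record are assumptions, not part of the specification.

    # Illustrative sketch of the associated-video-list check; names are hypothetical.
    def associate_if_needed(note, target_video_id):
        # Returns True when the target video was newly added to the associated video list.
        associated = note.setdefault("associated_videos", [])
        if target_video_id in associated:
            return False      # already a video note associated with the target video
        associated.append(target_video_id)
        return True

    note = {"note_id": "n1", "associated_videos": ["v1"]}
    associate_if_needed(note, "v2")
    print(note["associated_videos"])   # ['v1', 'v2']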
Optionally, the apparatus further comprises a deletion module configured to:
receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note;
judging whether a first video identifier is included in a second video identifier, wherein the first video identifier is a video identifier corresponding to the deleted note entry, and the second video identifier is a video identifier corresponding to the remaining note entries of the third target video note;
and if not, deleting the first video identifier in the association relationship between the note identifier and the video identifier of the third target video note.
Optionally, the apparatus further comprises a switching module configured to:
under the condition that the selection operation of a target note item aiming at a fourth target video note is detected, analyzing the target note item, and obtaining a target time mark included by the target note item;
determining whether the video corresponding to the target note entry is the target video or not according to the target time mark;
under the condition that the video corresponding to the target note entry is not the target video, switching the currently played target video to the video corresponding to the target note entry;
and adjusting the playing progress of the video corresponding to the target note entry according to the target time mark.
Optionally, the switching module is further configured to:
determining the video identifier included in the target time mark;
and under the condition that the video identifier included in the target time mark is different from the video identifier of the target video, determining that the video corresponding to the target note entry is not the target video.
Optionally, the switching module is further configured to:
determining a target video marking progress included by the target time mark;
and adjusting the playing progress of the video corresponding to the target note entry to the target video marking progress.
Optionally, the time mark further comprises a state identifier; the switching module is further configured to:
acquiring the state identifier included in the target time mark;
and if the state identifier included in the target time mark is valid, executing the operation step of determining whether the video corresponding to the target note entry is the target video.
Optionally, the switching module is further configured to:
searching for the corresponding video to be jumped to according to the video identifier included in the target time mark;
and under the condition that the video to be jumped to is found, switching the currently played target video to the video to be jumped to.
With the video note generating apparatus described above, the corresponding note content can be viewed and/or recorded directly in the playing interface of the target video through the video note, without recording video-related notes in a separate notebook or office software, which greatly improves note-taking efficiency during video playing. In addition, any one of the video notes displayed in the playing interface of the target video can be selected, and the selected video note need not be associated with the currently played target video; the video identifier of the target video is then added to the existing association relationship between the note identifier and the video identifiers of the selected video note, so that the selected video note can correspond to more than one video identifier. In other words, the selected video note can contain note content for different videos, so the note content of different videos can be recorded in the same video note, which is convenient for subsequent viewing and operation by the user and greatly improves flexibility.
Moreover, in the embodiment of the present application, based on a video note with time marks created by the user, a backtracking function can be provided through the content prompts and time marks of the video note, so that playback can quickly jump to the corresponding moment of the corresponding video, improving the video watching experience.
The above is an illustrative scheme of the video note generating apparatus of this embodiment. It should be noted that the technical solution of the video note generating apparatus and the technical solution of the video note generation method belong to the same concept, and for details that are not described in detail in the technical solution of the video note generating apparatus, reference may be made to the description of the technical solution of the video note generation method.
Fig. 9 illustrates a block diagram of a computing device 900 provided in accordance with an embodiment of the present application. Components of the computing device 900 include, but are not limited to, a memory 910 and a processor 920. The processor 920 is coupled to the memory 910 via a bus 930, and a database 950 is used to store data.
Computing device 900 also includes an access device 940 that enables computing device 900 to communicate via one or more networks 960. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 940 may include one or more of any type of wired or wireless network interface (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so on.
In one embodiment of the present application, the above-described components of computing device 900 and other components not shown in FIG. 9 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 9 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 900 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 900 may also be a mobile or stationary server.
The processor 920 is configured to execute the following computer-executable instructions to implement the operation steps of the video note generating method.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video note generation method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the video note generation method.
An embodiment of the present application also provides a computer-readable storage medium, which stores computer-executable instructions, which are executed by a processor to implement the operation steps of the video note generating method.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the video note generation method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the video note generation method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (18)

1. A video note generation method, comprising:
displaying at least one video note on a playing interface of the target video;
in the case that a first target video note is a video note which is not associated with the target video, responding to an association operation for the first target video note, and adding the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note.
2. The video note generation method of claim 1, wherein displaying at least one video note on a playback interface of the target video comprises:
acquiring at least one first video note created by a login user according to a user identifier of the login user, wherein the at least one first video note comprises video notes associated and/or not associated with the target video;
and displaying the at least one first video note on a playing interface of the target video.
3. The video note generation method according to claim 2, wherein after acquiring the at least one first video note created by the logged-in user according to the user identifier of the logged-in user, the method further comprises:
acquiring a corresponding second video note according to the video identifier and the video type of the target video, wherein the second video note is created by a user other than the logged-in user and comprises video notes associated and/or not associated with the target video;
correspondingly, the at least one first video note is displayed on the playing interface of the target video, and the method comprises the following steps:
displaying the at least one first video note and the second video note on a playing interface of the target video.
4. The video note generation method of any one of claims 1 to 3, wherein in response to the association operation for the first target video note, before adding the video identifier of the target video in the association relationship between the note identifier and the video identifier of the first target video note, further comprising:
in the case that an adding operation aiming at a first target video note is detected, generating a note entry corresponding to the adding operation in the first target video note, or adding note content of the target video in the note entry corresponding to the adding operation;
determining that the association operation is detected.
5. The video note generation method of claim 4, wherein generating a note entry corresponding to the add operation in the first target video note comprises:
determining a first video marking progress corresponding to the adding operation in the target video, acquiring a video title and a video identifier of the target video, and taking the first video marking progress, the video title and the video identifier as time marks;
acquiring note content input by a login user, determining note format parameters of the note content, and taking the note content and the note format parameters as note data;
and adding the time mark and the note data into the first target video note, and generating a note entry corresponding to the adding operation.
6. The video note generation method of claim 5, wherein the time stamp further comprises a state identification; after the video title and the video identifier of the target video are obtained, the method further comprises the following steps:
and setting the state flag of the time mark as valid.
7. The video note generation method of any one of claims 1 to 3, further comprising, after the displaying of the at least one video note at the play interface of the target video:
under the condition that a note creating operation is detected, creating a second target video note and generating a note identifier of the second target video note;
under the condition that a time mark adding operation is detected, determining a corresponding second video mark progress of the time mark adding operation in the target video, and adding the second video mark progress to the second target video note;
and storing the incidence relation between the note identification of the second target video note and the video identification of the target video.
8. The video note generation method of any of claims 1-3, wherein the first target video note includes an associated video list; after the playing interface of the target video displays at least one video note, the method further comprises the following steps:
determining whether the target video is a video in the associated video list or not according to the video identifier of the target video;
if so, determining that the first target video note is a video note associated with the target video;
if not, determining that the first target video note is a video note not associated with the target video.
9. The video note generation method of claim 8, wherein adding the video identifier of the target video in the association relationship between the note identifier and the video identifier of the first target video note comprises:
adding the video identification of the target video to the associated video list of the first target video note.
10. The video note generation method of any one of claims 1 to 3, further comprising, after the displaying of the at least one video note at the play interface of the target video:
receiving a deletion operation for a third target video note, and deleting a note entry indicated by the deletion operation in the third target video note;
judging whether a first video identifier is included in a second video identifier, wherein the first video identifier is a video identifier corresponding to the deleted note entry, and the second video identifier is a video identifier corresponding to the remaining note entries of the third target video note;
and if not, deleting the first video identifier in the association relationship between the note identifier and the video identifier of the third target video note.
11. The video note generation method of any one of claims 1-3, wherein the method further comprises:
under the condition that the selection operation of a target note item aiming at a fourth target video note is detected, analyzing the target note item, and obtaining a target time mark included by the target note item;
determining whether the video corresponding to the target note entry is the target video or not according to the target time mark;
under the condition that the video corresponding to the target note item is not the target video, switching the currently played target video into the video corresponding to the target note item;
and adjusting the playing progress of the video corresponding to the target note item according to the target time mark.
12. The method of claim 11, wherein determining whether the video corresponding to the target note entry is the target video according to the target timestamp comprises:
determining a video identification included by the target timestamp;
and under the condition that the video identification included by the target time mark is different from the video identification of the target video, determining that the video corresponding to the target note entry is not the target video.
13. The video note generation method of claim 11, wherein adjusting the playing progress of the video corresponding to the target note entry according to the target timestamp comprises:
determining a target video marking progress included by the target time mark;
and adjusting the playing progress of the video corresponding to the target note entry to the target video marking progress.
14. The video note generation method of claim 11, wherein the time mark further comprises a state identifier; before determining, according to the target time mark, whether the video corresponding to the target note entry is the target video, the method further comprises:
acquiring a state identifier included in the target time mark;
and if the state identification included in the target time stamp is valid, executing the operation step of determining whether the video corresponding to the target note entry is the target video.
15. The video note generating method of claim 11, wherein switching the currently played target video to a video corresponding to the target note entry comprises:
searching a corresponding video to be jumped according to the video identification included by the target time mark;
and under the condition that the video to be skipped is found, switching the currently played target video into the video to be skipped.
16. A video note generation apparatus, comprising:
the display module is configured to display at least one video note on a playing interface of the target video;
the adding module is configured to, when a first target video note is a video note which is not associated with the target video, respond to an association operation for the first target video note and add the video identifier of the target video to the association relationship between the note identifier and the video identifier of the first target video note.
17. A computing device, comprising:
a memory and a processor;
the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to implement the operational steps of the video note generation method of any of the preceding claims 1-15.
18. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, perform the operational steps of the video note generation method of any one of claims 1-15.
CN202110821263.9A 2021-07-20 2021-07-20 Video note generation method and device Active CN113395605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110821263.9A CN113395605B (en) 2021-07-20 2021-07-20 Video note generation method and device


Publications (2)

Publication Number Publication Date
CN113395605A true CN113395605A (en) 2021-09-14
CN113395605B CN113395605B (en) 2022-12-13

Family

ID=77626572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110821263.9A Active CN113395605B (en) 2021-07-20 2021-07-20 Video note generation method and device

Country Status (1)

Country Link
CN (1) CN113395605B (en)



Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020145742A1 (en) * 2001-04-10 2002-10-10 Donna Koenig Multimedia laboratory notebook
US20110125784A1 (en) * 2009-11-25 2011-05-26 Altus Learning Systems, Inc. Playback of synchronized media archives augmented with user notes
EP3278337A1 (en) * 2015-04-03 2018-02-07 Microsoft Technology Licensing, LLC Capturing notes from passive recordings with visual content
CN109672940A (en) * 2018-12-11 2019-04-23 北京新鼎峰软件科技有限公司 Video playback method and video playback system based on note contents
CN110381382A (en) * 2019-07-23 2019-10-25 腾讯科技(深圳)有限公司 Video takes down notes generation method, device, storage medium and computer equipment
CN111523293A (en) * 2020-04-08 2020-08-11 广东小天才科技有限公司 Method and device for assisting user in information input in live broadcast teaching
CN111556371A (en) * 2020-05-20 2020-08-18 维沃移动通信有限公司 Note recording method and electronic equipment
CN112115301A (en) * 2020-08-31 2020-12-22 湖北美和易思教育科技有限公司 Video annotation method and system based on classroom notes
CN112087656A (en) * 2020-09-08 2020-12-15 远光软件股份有限公司 Online note generation method and device and electronic equipment
CN112116836A (en) * 2020-09-23 2020-12-22 绍兴市寅川软件开发有限公司 Online learning note and teaching multimedia linkage acquisition method and system
CN112839258A (en) * 2021-04-22 2021-05-25 北京世纪好未来教育科技有限公司 Video note generation method, video note playing method, video note generation device, video note playing device and related equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510600A (en) * 2022-04-18 2022-05-17 光合新知(北京)科技有限公司 Learning system and method based on human-computer interaction
CN115134650A (en) * 2022-06-27 2022-09-30 上海哔哩哔哩科技有限公司 Video note display method and device
CN116225298A (en) * 2023-03-13 2023-06-06 广州文石信息科技有限公司 Note processing method, device, terminal equipment and storage medium
CN116225298B (en) * 2023-03-13 2024-01-23 广州文石信息科技有限公司 Note processing method, device, terminal equipment and storage medium

Also Published As

Publication number Publication date
CN113395605B (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN113395605B (en) Video note generation method and device
US7908556B2 (en) Method and system for media landmark identification
US8347231B2 (en) Methods, systems, and computer program products for displaying tag words for selection by users engaged in social tagging of content
US20100180218A1 (en) Editing metadata in a social network
US8930308B1 (en) Methods and systems of associating metadata with media
CN113079417B (en) Method, device and equipment for generating bullet screen and storage medium
CN111654749B (en) Video data production method and device, electronic equipment and computer readable medium
EP3322192A1 (en) Method for intuitive video content reproduction through data structuring and user interface device therefor
US10186300B2 (en) Method for intuitively reproducing video contents through data structuring and the apparatus thereof
JP2009140452A (en) Information processor and method, and program
CN111263186A (en) Video generation, playing, searching and processing method, device and storage medium
CN112287168A (en) Method and apparatus for generating video
US10732796B2 (en) Control of displayed activity information using navigational mnemonics
CN111368141A (en) Video tag expansion method and device, computer equipment and storage medium
US20170201777A1 (en) Generating video content items using object assets
CN112329403A (en) Live broadcast document processing method and device
CN112040339A (en) Method and device for making video data, computer equipment and storage medium
WO2019146466A1 (en) Information processing device, moving-image retrieval method, generation method, and program
CN102054019A (en) Information processing apparatus, scene search method, and program
CN109116718B (en) Method and device for setting alarm clock
CN113407775B (en) Video searching method and device and electronic equipment
WO2008087742A1 (en) Moving picture reproducing system, information terminal device and information display method
JP2012068982A (en) Retrieval result output device, retrieval result output method and retrieval result output program
CA3078190A1 (en) Apparatus and method for automatic generation of croudsourced news media from captured contents
US20180047428A1 (en) Information processing apparatus, information processing method, and non-transitory computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant