CN111263186A - Video generation, playing, searching and processing method, device and storage medium

Video generation, playing, searching and processing method, device and storage medium

Info

Publication number: CN111263186A
Authority: CN (China)
Prior art keywords: video, label, information, tag, video file
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202010099142.3A
Other languages: Chinese (zh)
Inventors: 刘杉 (Liu Shan), 邵枫轩 (Shao Fengxuan), 柴剑平 (Chai Jianping)
Current Assignee: Communication University of China (the listed assignee may be inaccurate)
Original Assignee: Communication University of China
Application filed by Communication University of China
Priority to: CN202010099142.3A
Publication of: CN111263186A

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
              • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
                • H04N21/232 Content retrieval operation locally within server, e.g. reading video streams from disk arrays
                • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
                  • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
                • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
            • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
              • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
                • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
                • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
                  • H04N21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
              • H04N21/47 End-user applications
                • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
            • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
              • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
                • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
                  • H04N21/8405 Generation or processing of descriptive data represented by keywords

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a video generation, playing, searching and processing method, device and storage medium. The video generation method includes: obtaining first feature information corresponding to each video frame of a video to be processed; determining second feature information corresponding to each piece of first feature information according to a preset feature library; extracting text labels from the second feature information, and determining tag information for each video frame of the video to be processed according to the text labels, a tag library updated at a preset period, and user-defined tags; and generating a first target video file according to the second feature information and the tag information, and/or adding the corresponding second feature information and tag information to each video frame of the video to be processed to generate a second target video file. Video files generated by the disclosed method allow the feature information of each video frame to be viewed directly and carry multi-dimensional tag information.

Description

Video generation, playing, searching and processing method, device and storage medium
Technical Field
The present disclosure relates to the field of video technologies, and in particular, to a method, an apparatus, and a storage medium for video generation, playing, searching, and processing.
Background
With the explosive growth of online video creation, classifying the vast body of video resources on the network has become essential to meeting users' information acquisition needs.
Existing methods classify videos with tags and describe video content only briefly through tag descriptions and titles. They suffer from a limited set of preset tags, little room for user choice, low-dimensional tag matching, an inability to track current-affairs hot spots, and no way to directly view the feature information of video frames.
Disclosure of Invention
In view of the above, the present disclosure provides a video generation method that allows the feature information of video frames to be viewed directly and that carries multi-dimensional tag information.
According to an aspect of the present disclosure, there is provided a video generation method, the method including:
acquiring first feature information corresponding to each video frame of a video to be processed, wherein the first feature information comprises at least one of audio feature information, picture feature information and shot feature information;
determining second feature information corresponding to each piece of first feature information according to a preset feature library, wherein the second feature information comprises the features in the feature library whose matching degree with the first feature information is greater than or equal to a first preset threshold;
extracting text labels from the second feature information, and determining tag information of each video frame of the video to be processed according to the text labels, a tag library updated at a preset period, and user-defined tags;
and generating a first target video file according to the second feature information and the tag information, and/or adding the corresponding second feature information and tag information to each video frame of the video to be processed to generate a second target video file.
In one possible implementation, the tag information includes at least one of a first tag, a second tag, a third tag and a fourth tag, wherein:
the first tag comprises the user-defined tag;
the second tag comprises the text label;
the third tag comprises a first matching tag, the first matching tag comprising a tag in the tag library whose matching degree with the text label is greater than or equal to a second preset threshold;
the fourth tag comprises a second matching tag, the second matching tag comprising a tag in the tag library whose matching degree with the user-defined tag is greater than or equal to a third preset threshold.
In a possible implementation manner, the first target video file and the second target video file are used to support retrieving the second feature information and tag information corresponding to each video frame, and also to support displaying the second feature information and tag information corresponding to each video frame.
According to another aspect of the present disclosure, there is provided a video playing method, the method including:
when a first video file is played, based on acquired display operation information, sequentially displaying second feature information of the first video file corresponding to the display operation information according to a preset hierarchical relationship;
the first video file is generated according to the video generation method, and the display operation information includes a click operation of a user.
According to another aspect of the present disclosure, there is provided a video search method, the method including:
searching a target video file comprising the search keyword and/or the search tag from a plurality of second video files of a video library based on the obtained search keyword and/or search tag;
displaying a target video file comprising the search keyword and/or the search tag, and displaying second characteristic information matched with the search keyword and/or the search tag in the target video file,
wherein the plurality of second video files are generated according to the aforementioned video generation method.
In one possible implementation, the method further includes:
and displaying label information matched with the search keyword and/or the search label in the target video file, wherein the label information comprises at least one of the first label, the second label, the third label and the fourth label.
In a possible implementation manner, displaying the tag information in the target video file that matches the search keyword and/or the search tag includes:
if the tag information items have different priorities, displaying them in order of priority;
if the tag information items have the same priority, displaying them in the chronological order of the video frames they correspond to;
wherein the priorities of the first tag, the second tag, the third tag and the fourth tag decrease in that order.
According to another aspect of the present disclosure, there is provided a video processing method, the method including:
updating a third video file according to the modification content corresponding to the modification operation information based on the acquired modification operation information, and saving the user name and the modification time corresponding to the modification operation information;
the third video file is generated according to the video generation method, and the modification operation information includes a modification operation of the third video file by a user.
According to another aspect of the present disclosure, there is provided a video processing apparatus including:
the video generation module is used for executing the video generation method;
the video playing module is used for executing the video playing method;
the video searching module is used for executing the video searching method;
and the video processing module is used for executing the video processing method.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, on which computer program instructions are stored, which when executed by a processor implement the aforementioned video generation method, and/or the aforementioned video playing method, and/or the aforementioned video searching method, and/or the aforementioned video processing method.
According to the embodiments of the disclosure, first feature information corresponding to each video frame of a video to be processed is acquired; second feature information corresponding to each piece of first feature information is determined according to a preset feature library; text labels are extracted from the second feature information, and tag information of each video frame is determined according to the text labels, a tag library updated at a preset period, and user-defined tags; a first target video file is then generated according to the second feature information and the tag information, and/or the corresponding second feature information and tag information are added to each video frame to generate a second target video file.
The tag information obtained by the embodiments of the disclosure makes full use of the first feature information of the video to be processed to obtain high-dimensional tag information. Because the tag information is determined from a tag library updated at a preset period together with user-defined tags, it can track current-affairs hot spots and meet user needs. Moreover, the target video files generated from the second feature information and the tag information support directly obtaining the feature information and tag information of each of their video frames.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow diagram of a video generation method according to an embodiment of the present disclosure.
Fig. 2 shows a schematic flow chart of a video playing method according to an embodiment of the present disclosure.
Fig. 3 shows a flow diagram of a video search method according to an embodiment of the present disclosure.
Fig. 4 shows a flow diagram of a video processing method according to an embodiment of the present disclosure.
Fig. 5 shows a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure.
Fig. 6 shows a process flow diagram of a video processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow diagram of a video generation method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes:
step S101, acquiring first feature information corresponding to each video frame of a video to be processed;
step S102, determining second feature information corresponding to each piece of first feature information according to a preset feature library;
step S103, extracting text labels from the second feature information, and determining tag information of each video frame of the video to be processed according to the text labels, a tag library updated at a preset period, and user-defined tags;
step S104, generating a first target video file according to the second feature information and the tag information, and/or adding the corresponding second feature information and tag information to each video frame of the video to be processed to generate a second target video file.
According to the embodiments of the disclosure, first feature information corresponding to each video frame of a video to be processed is acquired; second feature information corresponding to each piece of first feature information is determined according to a preset feature library; text labels are extracted from the second feature information, and tag information of each video frame is determined according to the text labels, a tag library updated at a preset period, and user-defined tags; a first target video file is then generated according to the second feature information and the tag information, and/or the corresponding second feature information and tag information are added to each video frame to generate a second target video file.
The tag information obtained by the embodiments of the disclosure makes full use of the first feature information of the video to be processed to obtain high-dimensional tag information. Because the tag information is determined from a tag library updated at a preset period together with user-defined tags, it can track current-affairs hot spots and meet user needs. Moreover, the target video files generated from the second feature information and the tag information support directly obtaining the feature information and tag information of each of their video frames.
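Purely as an illustration (the disclosure defines no code), the four steps could be organized as in the following minimal Python sketch. All function names, field names and the data layout are hypothetical, and the matching-degree computation is reduced to set intersection rather than the disclosed threshold matching.

    # Minimal sketch of steps S101-S104; names and data layout are hypothetical.
    def generate(video_frames, feature_library, tag_library, user_tags):
        first_target, second_target = [], []
        for frame in video_frames:
            first = set(frame["features"])                      # S101: first feature information
            second = first & feature_library                    # S102: stand-in for threshold matching
            text_labels = {f.split(":", 1)[1] for f in second}  # S103: text labels from second info
            tags = text_labels | (text_labels & tag_library) | set(user_tags)
            first_target.append({"frame": frame["id"], "features": second, "tags": tags})
            second_target.append({**frame, "second_features": second, "tags": tags})  # S104
        return first_target, second_target

    frames = [{"id": 0, "features": {"audio:XX station", "picture:volcano"}}]
    library = {"audio:XX station", "picture:volcano", "shot:action"}
    print(generate(frames, library, tag_library={"volcano"}, user_tags=["racing"]))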
In one possible implementation, the first feature information includes at least one of audio feature information, picture feature information and shot feature information.
It is understood that the video to be processed may be, for example, a movie or a video shot by a mobile terminal. For convenience of explanation, a movie is taken as the example of the video to be processed. Exemplarily:
the audio feature information may, for example, include the content in Table 1-1;
the picture feature information may, for example, include the content in Table 1-2;
the shot feature information may, for example, include the content in Table 1-3.
Table 1-1: audio feature information (table provided as an image in the original document; contents not reproduced here)
Table 1-2: picture feature information (table provided as an image in the original document; contents not reproduced here)
Table 1-3: shot feature information (table provided as an image in the original document; contents not reproduced here)
In a possible implementation manner, the second feature information includes the features in the preset feature library whose matching degree with the first feature information is greater than or equal to a first preset threshold.
The first preset threshold can be set by the user as needed; its value is not limited in the embodiments of the disclosure. The preset feature library may include first feature information corresponding to each video frame of a plurality of videos to be processed. Illustratively, the preset feature library may include the first feature information corresponding to each video frame of three videos to be processed, A, B and C. These three videos may each include all of the audio feature information, picture feature information and shot feature information, or only parts of them. Taking the case where A, B and C all include audio feature information, the three videos may include the same kind of information; for example, each may include a vehicle sound, yet the vehicle sound in A may be a car sound, the vehicle sound in B may also be a car sound, and the vehicle sound in C may be an airplane sound. It should be noted that the number of videos here is only illustrative; the embodiments of the disclosure do not limit the number of videos to be processed contained in the preset feature library.
Determining, according to the preset feature library, the features whose matching degree with the first feature information is greater than or equal to the first preset threshold as the second feature information corresponding to the first feature information makes full use of the feature information of each video frame of the video to be processed, so that high-dimensional tag information can be acquired in the subsequent process.
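As a hedged illustration of this selection step, the sketch below treats each library feature as a named vector and uses cosine similarity as the matching degree; the metric, the vectors and the threshold value are assumptions, since the disclosure fixes none of them.

    import math

    def cosine(u, v):
        # Cosine similarity of two equal-length vectors.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def second_features(first_vec, feature_library, threshold=0.9):
        # feature_library: iterable of (feature name, feature vector) pairs.
        return [name for name, vec in feature_library
                if cosine(first_vec, vec) >= threshold]

    library = [("car sound", [0.9, 0.1]), ("airplane sound", [0.1, 0.9])]
    print(second_features([0.88, 0.15], library))  # -> ['car sound']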
In a possible implementation manner, text labels are extracted from the second feature information, where the text labels may include text labels corresponding to the audio feature information, the picture feature information and the shot feature information. Exemplarily, if the type of the audio feature information is music and the music is titled "XX station", then "XX station" is taken as the text label corresponding to the audio feature information; if the type of the picture feature information is environment and the environment is a volcanic eruption, then "volcano" is taken as the text label corresponding to the picture feature information; if the type of the shot feature information is behavior and the behavior is fighting, then "action" is taken as the text label corresponding to the shot feature information.
By extracting the text label in the second characteristic information, the source of the label information is enriched, and the content of each video frame of the video to be processed can be accurately described according to the label information of each video frame of the video to be processed, which is determined by the text label.
In a possible implementation manner, determining the tag information of each video frame of the video to be processed according to the text labels, the tag library updated at a preset period, and the user-defined tags includes:
determining tag information of each video frame of the video to be processed according to the text labels, where a text label is directly used as tag information of the corresponding video frame. For example, if the background music of a video frame is "XX station", the environment in its picture feature information is a volcanic eruption, and the behavior in its shot feature information is fighting, the frame's tag information may include "XX station", "volcano" and "action".
In a possible implementation manner, the tag content of the tag library updated at a preset period may be obtained by crawling, via a web crawler, text information published on various portal websites and community forums, or may be added manually by the user. The tag library may be updated at the preset period or automatically according to user settings; the user-definable parameters include the target websites (multiple selection and self-addition supported), the format of the crawled data (pictures, text, video, audio, etc.), the crawl execution time, and the crawl execution period. In each execution period, content of the target websites not captured in the previous period is grabbed automatically. In addition, the tag library updated at a preset period in the embodiments of the disclosure may include tag content, tag source, and tag word-vector parameters.
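A possible shape for the user-configurable crawl job is sketched below; the parameter names mirror the text above, while the schema, the values and the URLs are purely illustrative assumptions.

    # Hypothetical configuration for the periodic tag-library update.
    tag_crawl_config = {
        "target_sites": ["https://portal.example.cn", "https://forum.example.cn"],  # multi-select, user-extendable
        "data_formats": ["text", "picture", "video", "audio"],  # crawled data formats
        "run_at": "03:00",     # crawl execution time
        "period_days": 1,      # crawl execution period
        "incremental": True,   # grab only content not captured in the previous period
    }

    # Each library entry keeps the three fields named in the text:
    tag_entry = {
        "content": "volcano",
        "source": "https://portal.example.cn",
        "word_vector": [0.12, 0.53, 0.08],
    }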
The tag information of each video frame of the video to be processed is determined through the tag library updated according to the preset period, so that the tag information can be ensured to track real-time hot spots.
Tag information of each video frame of the video to be processed is likewise determined according to the user-defined tags, where a user-defined tag is directly taken as tag information of the corresponding video frame. For example, if the picture in the picture feature information of a video frame is a racing car, the user may set a custom tag "racing car" for that frame and use it as the frame's tag information.
Tag information determined from user-defined tags lets users expand the number and content of tags as needed, improving the user experience.
In one possible implementation, the tag information includes at least one of a first tag, a second tag, a third tag and a fourth tag (a minimal data-structure sketch follows this list), wherein:
the first tag comprises the user-defined tag;
the second tag comprises the text label;
the third tag comprises a first matching tag, the first matching tag comprising a tag in the tag library whose matching degree with the text label is greater than or equal to a second preset threshold;
the fourth tag comprises a second matching tag, the second matching tag comprising a tag in the tag library whose matching degree with the user-defined tag is greater than or equal to a third preset threshold.
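A minimal sketch of one frame's tag information grouped into these four categories might look as follows; the class and field names are assumptions for illustration only.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FrameTagInfo:
        first: List[str] = field(default_factory=list)   # user-defined tags
        second: List[str] = field(default_factory=list)  # text labels
        third: List[str] = field(default_factory=list)   # library tags matched to text labels
        fourth: List[str] = field(default_factory=list)  # library tags matched to user-defined tags

    info = FrameTagInfo(first=["racing"], second=["XX station", "volcano"],
                        third=["eruption"], fourth=["motorsport"])
    print(info)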
In one possible implementation, the text labels may include text labels corresponding to the audio feature information, the picture feature information and the shot feature information. Taking OCR (Optical Character Recognition) text in the picture feature information and dialogue lines in the audio feature information as examples, the OCR language text and the dialogue content may contain continuous descriptive passages (for example, long unbroken runs of text), and these continuous passages can be discretized.
Exemplarily, word segmentation can be applied to the dialogue lines to obtain segmented text that retains the main content of the conversation, and to the OCR subtitles and merged rolling news to obtain segmented text that retains the main content of the pictures. In this way, all the discrete text of the video to be processed can be obtained and represented as a feature word set {T}, and text classification is then performed to obtain a classification result for the discrete text of the video to be processed. Finally, based on a preset corpus, n1 keywords are extracted from the feature word set by the TF-IDF algorithm and n2 keywords are extracted from the feature word set by word frequency, for a total of n1 + n2 extracted keywords. Each keyword is a piece of text information and can be converted into a vector parameter by processing. The keywords may be represented as:
Key = {w1, w2, w3, … | wi ∈ T, i = 1, 2, 3, …, n}
where Key represents the word-vector parameter corresponding to the keywords, wi represents the i-th keyword in the feature word set, and n = n1 + n2.
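The extraction of n1 + n2 keywords can be pictured with the toy sketch below; the tokenized feature word set, the tiny preset corpus and the final deduplication step are illustrative assumptions.

    from collections import Counter
    import math

    def tfidf_keywords(words, corpus_docs, n1):
        # Score each word by term frequency times inverse document frequency.
        tf = Counter(words)
        def idf(w):
            return math.log(len(corpus_docs) / (1 + sum(w in doc for doc in corpus_docs)))
        return sorted(tf, key=lambda w: tf[w] * idf(w), reverse=True)[:n1]

    def frequency_keywords(words, n2):
        # Top-n2 words by raw frequency.
        return [w for w, _ in Counter(words).most_common(n2)]

    feature_words = ["volcano", "fire", "escape", "fire", "news", "volcano", "fire"]
    corpus = [{"news", "weather"}, {"volcano", "travel"}, {"sports"}]
    keywords = list(dict.fromkeys(
        tfidf_keywords(feature_words, corpus, n1=2) + frequency_keywords(feature_words, n2=1)))
    print(keywords)  # at most n1 + n2 keywords, here ['fire', 'escape']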
In one possible implementation manner, the third tag may include a first matching tag, and the first matching tag may include a tag in the tag library whose matching degree with a text label is greater than or equal to the second preset threshold.
The value of the second preset threshold may be set according to the user's needs; its value is not limited in the embodiments of the disclosure. Exemplarily, the matching degree between the text labels and the tags in the tag library can be determined by calculating the cosine similarity of their corresponding word vectors, as shown in formula (1):
formula (1): w* = similarity(Key, D)
where w* represents the tag in the tag library with the highest cosine similarity to the text label, i.e., the word-vector parameter corresponding to the first matching tag; Key represents the word-vector parameter corresponding to the keywords; and D represents the set of tags in the tag library.
To obtain highly matching tags, the keywords and the tags determined from the tag library can be retained as the tag output, while the determined tags are fed back into the tag library as a new round of input and the cosine similarity is computed again, so as to find tags with a still higher matching degree. The user can set the number of rounds for which determined tags are re-imported into the tag library.
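Formula (1) can be sketched as below, with toy two-dimensional word vectors and an assumed threshold; the same routine serves formula (2) when the user-defined tag's vector is used as the query.

    import math

    def cosine(u, v):
        # Cosine similarity of two equal-length vectors.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    def best_matching_tag(query_vec, tag_library, threshold=0.8):
        # tag_library: mapping of tag content -> tag word-vector parameter.
        best = max(tag_library, key=lambda t: cosine(query_vec, tag_library[t]))
        return best if cosine(query_vec, tag_library[best]) >= threshold else None

    D = {"eruption": [0.9, 0.2], "flood": [0.1, 0.95]}
    print(best_matching_tag([0.85, 0.3], D))  # -> 'eruption'

Re-running best_matching_tag on the vector of its own output for a user-chosen number of rounds would correspond to the recursive refinement described above.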
In a possible implementation manner, the fourth tag may include a second matching tag, and the second matching tag may include a tag in the tag library whose matching degree with the user-defined tag is greater than or equal to a third preset threshold.
The value of the third preset threshold may be set according to the user's needs; its value is not limited in the embodiments of the disclosure. Exemplarily, the matching degree between the user-defined tag and the tags in the tag library can be determined by calculating the cosine similarity of their corresponding word-vector parameters, as shown in formula (2):
formula (2): Label_sup = similarity(Input, D)
where Label_sup represents the word-vector parameter corresponding to the second matching tag, and Input represents the word-vector parameter corresponding to the user-defined tag.
The tag information obtained by the embodiments of the disclosure makes full use of the feature information of the video to be processed and includes multiple types of tags, ensuring that the tag information has rich dimensionality.
In a possible implementation manner, the first target video file and the second target video file according to the embodiments of the present disclosure are configured to support retrieving second feature information and tag information corresponding to each video frame, and further support displaying the second feature information and tag information corresponding to each video frame.
In a possible implementation manner, the first target video file may be an AVFS (Audio Video Feature Storage) file. A first target video file may store only the feature information and tag information of the video frames, without supporting direct playback of media information, which reduces the storage cost of the video file. Each first target video file corresponds to a unique second target video file. Illustratively, the first target video file may be imported into a video playing system (e.g., video playing software) to realize the media-playback function.
The second target video file may be an NSAV (New Standard Audio Video) file; it stores the feature information and tag information of the video frames and also supports direct playback of the media information. Illustratively, the current frame picture of the second target video file, together with its audio feature information, picture feature information, shot feature information and tag information, can be viewed through a video playing system (e.g., video playing software).
Illustratively, taking movie "XX phase" as an example, a user may retrieve, through the video playing system, second feature information and tag information corresponding to respective video frames of the first target video file and the second target video file, and display the second feature information and the tag information.
If the user positions the video at 51 minutes 41 seconds through the video playing system and the current picture shows the male lead and the female lead shaking hands at a billiard table, the picture information of the current frame can display the feature "billiards"; by clicking or searching for that feature, all time points at which billiard pictures appear in the movie can be displayed. In addition, the scene information of the same shot can be clicked to view all the time points and periods in the movie at which other characters appear in that same shot.
The target video files generated based on the second feature information and the tag information support directly obtaining the feature information and tag information of each of their video frames, helping the user quickly acquire the relevant information of the target video file and improving the user experience.
Fig. 2 shows a schematic flow chart of a video playing method according to an embodiment of the present disclosure. As shown in fig. 2, the method includes:
step S201, when a first video file is played, based on the acquired display operation information, sequentially displaying the second feature information of the first video file corresponding to the display operation information according to a preset hierarchical relationship.
In a possible implementation manner, the first video file is generated according to the video generation method of the corresponding embodiment of the foregoing fig. 1, and the display operation information includes a click operation of a user.
The second feature information includes feature information of multiple types and quantities; when the first video file is played, the second feature information of the first video file corresponding to the display operation information can be displayed sequentially according to a preset hierarchical relationship.
Exemplarily, take the audio feature information as the second feature information. If the current background music is "XX station", the second feature information may be displayed in a tree structure based on the acquired display operation information. If the user clicks the music title, the attributes of the background music can be displayed, such as the album it belongs to ("XX West"), the singer ("Zhou X"), the author, and the music style. The user can continue by clicking the singer "Zhou X": if information related to the singer exists, it is displayed; otherwise a whole-network search is performed with "Zhou X" as the keyword and the search results are displayed. The user can also customize which items of the search results for "Zhou X" are shown.
For example, if the search results for the singer "Zhou X" include items such as "personal information", "early experience" and "works", the user may select "personal information" as the node, so that the next time another user clicks the singer "Zhou X", the display jumps to the "personal information" node. Taking "personal information" as including items such as English name, nickname, nationality and constellation, when the user clicks "personal information" its contents are displayed automatically. In addition, the hierarchical relationships of the second feature information can be retained in the corresponding video frames.
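The tree-structured display can be sketched as follows; the nesting mirrors the background-music example above, and the field layout and printing logic are assumptions.

    # Hypothetical hierarchy for the background-music example; leaves are strings.
    feature_tree = {
        "background music: XX station": {
            "album": {"name": "XX West"},
            "singer": {"name": "Zhou X", "user-selected node": "personal information"},
            "author": {},
            "music style": {},
        }
    }

    def show(node, depth=0):
        # Print each level indented one step deeper than its parent.
        for key, child in node.items():
            print("  " * depth + str(key))
            if isinstance(child, dict):
                show(child, depth + 1)
            else:
                print("  " * (depth + 1) + str(child))

    show(feature_tree)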
According to the video playing method, the characteristic information can be sequentially displayed according to the preset hierarchical relation when the video file is played, the requirement of a user for obtaining information is met, and the user experience is improved.
Fig. 3 shows a flow diagram of a video search method according to an embodiment of the present disclosure. As shown in fig. 3, the method includes:
step S301, searching a target video file comprising the search keyword and/or the search tag from a plurality of second video files of a video library based on the obtained search keyword and/or search tag;
step S302, displaying a target video file comprising the search keyword and/or the search tag, and displaying second characteristic information matched with the search keyword and/or the search tag in the target video file.
In a possible implementation manner, the plurality of second video files are generated according to the video generation method of the foregoing corresponding embodiment of fig. 1.
A plurality of second video files may be pre-stored in the video library; these may be generated by the video generation method of the embodiment corresponding to fig. 1, or obtained from websites by a web crawler. Illustratively, taking "road movie" as the search tag and/or search keyword, target video files including "road movie" are searched among the second video files of the video library. If road movies such as "XX phase" and "XX word" are pre-stored in the video library, those road movies are displayed, together with the second feature information matching "road movie" (e.g., road pictures and the time points at which they appear).
In one possible implementation, the method further includes:
and displaying the label information matched with the search keyword and/or the search label in the target video file.
Wherein the tag information includes at least one of the first, second, third and fourth tags described in the corresponding embodiment of fig. 1.
The video search method of the embodiments of the disclosure can also display the tag information in the target video file that matches the search keyword and/or the search tag, so that the user can quickly find the parts of the current target video file that may interest them.
In a possible implementation manner, displaying the tag information in the target video file that matches the search keyword and/or the search tag includes the following (an ordering sketch in code follows this list):
if the tag information items have different priorities, displaying them in order of priority;
if the tag information items have the same priority, displaying them in the chronological order of the video frames they correspond to;
wherein the priorities of the first tag, the second tag, the third tag and the fourth tag decrease in that order.
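A minimal ordering sketch, assuming the four categories map to descending priorities and each matched tag records the timestamp of its video frame (both field names are hypothetical):

    PRIORITY = {"first": 0, "second": 1, "third": 2, "fourth": 3}

    matches = [
        {"tag": "volcano", "category": "second", "frame_time": 3101},
        {"tag": "racing", "category": "first", "frame_time": 4520},
        {"tag": "eruption", "category": "third", "frame_time": 3101},
        {"tag": "action", "category": "second", "frame_time": 120},
    ]

    # Sort by priority first, then by the time of the corresponding video frame.
    for m in sorted(matches, key=lambda m: (PRIORITY[m["category"]], m["frame_time"])):
        print(m["tag"], m["category"], m["frame_time"])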
Illustratively, the first tag may include the user-defined tag; since user-defined tags are content supplemented directly by the user, they often best reflect the user's description of the target video file, so the first tag may be given the first priority. The second tag may include the text label, obtained by determining the second feature information corresponding to each piece of first feature information according to the preset feature library and extracting labels from it; it directly reflects the content of the video file and makes full use of the feature information of each video frame, so it may be given the second priority. The third tag may include the first matching tag, i.e., a tag in the tag library whose matching degree with the text label is greater than or equal to the second preset threshold, and may be given the third priority. The fourth tag may include the second matching tag, i.e., a tag in the tag library whose matching degree with the user-defined tag is greater than or equal to the third preset threshold, and may be given the fourth priority.
Tag information items with the same priority can be displayed in the chronological order of the video frames they correspond to. For each tag, its source and priority can be noted and displayed. In addition, starting from a displayed tag, the user can choose to recursively feed the tag back into the tag library and output a plurality of further tags.
For example, the displayed tag "fire" may be taken as the initial tag. The user may choose to feed it into the tag library again and obtain several associated first-round tags, for example three tags such as "accident", "mountain fire" and "fire". Each first-round tag can in turn be fed into the tag library to output associated second-round tags: "accident" may yield "tragedy" and "dangerous situation", "mountain fire" may yield "flood" and "big fire", and "fire" may yield "open fire" and "fire", i.e., the three first-round tags generate 6 second-round tags. By analogy, the 6 second-round tags generate 12 third-round tags, so after three rounds of recursion the displayed tag "fire" corresponds to 21 new tags in total. Thus, for an already displayed tag, many new tags can be generated, depending on the number of recursion rounds the user selects.
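The arithmetic of this example follows from assuming two associated tags per input in every round; a small sketch reproducing the count:

    def expanded_tag_count(first_round, rounds, per_tag=2):
        # first_round tags, then per_tag new tags for each tag of the previous round.
        total, current = first_round, first_round
        for _ in range(rounds - 1):
            current *= per_tag
            total += current
        return total

    print(expanded_tag_count(first_round=3, rounds=3))  # 3 + 6 + 12 = 21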
A target video file found by the video search method may carry tag information of several types and quantities. By setting priorities for the tag information, the tag information with the highest matching degree to the search keyword and/or search tag can be displayed first, letting the user accurately identify the parts of the target video file that may interest them and improving the user experience.
Fig. 4 shows a flow diagram of a video processing method according to an embodiment of the present disclosure. As shown in fig. 4, the method includes:
step S401, based on the obtained modification operation information, updating a third video file according to the modification content corresponding to the modification operation information, and saving the user name and the modification time corresponding to the modification operation information.
In a possible implementation manner, the third video file is generated according to the video generation method in the corresponding embodiment of the foregoing fig. 1, and the modification operation information includes a modification operation performed on the third video file by a user.
The user can modify the parts of the third video file whose tag information and/or feature information are described inaccurately, and the user name and modification time corresponding to the modification operation are saved. Taking a third video file that comprises a first target video file and a second target video file as an example, if the user's modification times for the first target video file and the second target video file are not synchronized, the second target video file has the higher-priority overwrite right.
Illustratively, if the user modified the first target video file at 11:21 on June 19, 2019 and modified the second target video file at 13:21 on June 19, 2019, the user can be prompted whether to overwrite the first target video file with the content of the second target video file, and the user chooses whether to overwrite. If the user chooses to overwrite, the content of the modified second target video file overwrites the first target video file; if the user declines or makes no choice, the modified contents of the first and second target video files are saved separately.
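The overwrite rule can be sketched as below; the function and its return strings are illustrative, with the second target file assumed to hold the higher-priority overwrite right as described.

    from datetime import datetime

    def resolve(first_modified, second_modified, user_accepts_overwrite):
        if first_modified == second_modified:
            return "files in sync; nothing to do"
        # The second target video file holds the higher-priority overwrite right.
        if user_accepts_overwrite:
            return "overwrite first target file with second target file's content"
        return "keep both modified versions separately"

    print(resolve(datetime(2019, 6, 19, 11, 21),
                  datetime(2019, 6, 19, 13, 21),
                  user_accepts_overwrite=True))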
Additionally, the user may also annotate the third video file. Illustratively, the user may annotate the tag information and the second feature information in the third video file, the user may view the annotation content by clicking on the tag information and the second feature information, may view the annotation content by retrieving keywords, or may directly view all annotation sets in the third video file, and each annotation may retain a user name and a creation or modification time.
The video processing method of the embodiment of the disclosure allows a user to modify and annotate the third video file, so that the content of the third video file is more accurate, the content of the third video file better conforms to the description habit of the user, and the user experience is improved.
Fig. 5 shows a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. As shown in fig. 5, the video processing apparatus includes:
a video generating module 501, configured to execute the video generating method according to the foregoing embodiment shown in fig. 1;
a video playing module 502, configured to execute the video playing method according to the embodiment of fig. 2;
a video search module 503, configured to execute the video search method according to the embodiment of fig. 3;
the video processing module 504 is configured to execute the video processing method according to the embodiment shown in fig. 4.
Fig. 6 shows a process flow diagram of a video processing apparatus according to an embodiment of the present disclosure. As shown in fig. 6, the processing flow of the video processing apparatus may cover the content of the foregoing embodiments corresponding to fig. 1 to fig. 4.
In one possible implementation, the video processing apparatus may perform the corresponding steps according to the file type and the user's operation. For example, if the file is a second target video file, the user may play it directly and perform basic playback operations such as pause, fast forward and fast rewind; if the file is a first target video file, it cannot be played directly; if the file is an ordinary video file, the user may play it directly and perform basic playback operations, and may additionally generate the feature information and tags corresponding to it. The user may also choose to display the feature information and/or tags of a video file, for which several display modes are available: in timeline order, based on search keywords, or for the specific video frames the user selects.
In one possible implementation, the video processing device may also search for feature information and/or tag information. Illustratively, a user may enter search text, wherein the search text may include search tags and/or search keywords. All video files in the video library including the search text can be displayed based on the search text, and the position of the video frame where the search text is located in the video files can also be displayed.
In one possible implementation, the video processing device may also update the tag library. The tag library may be updated according to a preset period, and the tag content of the tag library updated according to the preset period may be obtained by crawling text information published in various web portals and community forums through a web crawler or manually adding the tag content by a user. The tag library can also be automatically updated according to the setting of a user, wherein the user-definable parameters comprise a target website (supporting multi-selection and self-addition), a crawling data format (pictures, characters, videos, audios and the like), a crawling execution time and a crawling execution period. Illustratively, the contents of the updated tag library may also be saved.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA) can execute the computer-readable program instructions by utilizing their state information to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A video generation method, the method comprising:
acquiring first feature information corresponding to each video frame of a video to be processed, wherein the first feature information comprises at least one of audio feature information, picture feature information, and shot feature information;
determining, according to a preset feature library, second feature information corresponding to each piece of first feature information, wherein the second feature information comprises the features in the feature library whose degree of match with the first feature information is greater than or equal to a first preset threshold;
extracting text tags from the second feature information, and determining tag information for each video frame of the video to be processed according to the text tags, a tag library updated at a preset period, and user-defined tags; and
generating a first target video file according to the second feature information and the tag information, and/or adding the corresponding second feature information and tag information to each video frame of the video to be processed to generate a second target video file.
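By way of illustration only, the following Python sketch shows one way the matching step of this claim could be realized. The cosine-similarity measure, the 0.80 threshold value, and every helper and field name below are assumptions of the sketch, not features of the claimed method.

```python
# Illustrative sketch of the claim-1 pipeline; not the claimed implementation.
from dataclasses import dataclass, field

import numpy as np

FIRST_PRESET_THRESHOLD = 0.80  # assumed value for the "first preset threshold"

@dataclass
class FrameRecord:
    index: int
    second_features: list = field(default_factory=list)  # matched library features
    tags: list = field(default_factory=list)             # per-frame tag information

def match_degree(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, standing in for the unspecified matching degree."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def generate_records(frame_vectors, feature_library, user_defined_tags):
    """frame_vectors: one first-feature vector per frame;
    feature_library: list of (vector, text_tag) pairs."""
    records = []
    for i, vec in enumerate(frame_vectors):
        rec = FrameRecord(index=i)
        for lib_vec, text_tag in feature_library:
            # keep library features whose match degree meets the first threshold
            if match_degree(vec, lib_vec) >= FIRST_PRESET_THRESHOLD:
                rec.second_features.append(text_tag)
        # tag information: extracted text tags plus the user-defined tags
        # (matching against the tag library is sketched under claim 2 below)
        rec.tags = rec.second_features + list(user_defined_tags)
        records.append(rec)
    return records
```

Under this reading, serializing the records into a sidecar index would correspond to the first target video file, while embedding each record into its frame would correspond to the second target video file.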
2. The method of claim 1, wherein:
the tag information comprises at least one of a first tag, a second tag, a third tag, and a fourth tag;
the first tag comprises the user-defined tag;
the second tag comprises the text tag;
the third tag comprises a first matching tag, the first matching tag comprising a tag in the tag library whose degree of match with the text tag is greater than or equal to a second preset threshold; and
the fourth tag comprises a second matching tag, the second matching tag comprising a tag in the tag library whose degree of match with the user-defined tag is greater than or equal to a third preset threshold.
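A minimal sketch of how the four tag kinds might be represented and derived follows, assuming string similarity (difflib.SequenceMatcher) as the matching degree and 0.75 for both the second and third preset thresholds; none of these choices come from the claim itself.

```python
# Hypothetical encoding of the four tag kinds of claim 2. SequenceMatcher and
# the two 0.75 thresholds stand in for the unspecified matching degree.
from difflib import SequenceMatcher
from enum import IntEnum

SECOND_PRESET_THRESHOLD = 0.75  # assumed "second preset threshold"
THIRD_PRESET_THRESHOLD = 0.75   # assumed "third preset threshold"

class TagKind(IntEnum):      # lower value = higher display priority (claim 7)
    FIRST = 1   # user-defined tag
    SECOND = 2  # text tag extracted from the second feature information
    THIRD = 3   # library tag matching a text tag (first matching tag)
    FOURTH = 4  # library tag matching a user-defined tag (second matching tag)

def match_degree(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def build_tag_information(tag_library, text_tags, user_defined_tags):
    tags = [(TagKind.FIRST, t) for t in user_defined_tags]
    tags += [(TagKind.SECOND, t) for t in text_tags]
    for lib_tag in tag_library:
        if any(match_degree(lib_tag, t) >= SECOND_PRESET_THRESHOLD for t in text_tags):
            tags.append((TagKind.THIRD, lib_tag))
        elif any(match_degree(lib_tag, t) >= THIRD_PRESET_THRESHOLD for t in user_defined_tags):
            tags.append((TagKind.FOURTH, lib_tag))
    return tags
```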
3. The method of claim 2, wherein the first target video file and the second target video file support both retrieval and display of the second feature information and the tag information corresponding to each video frame.
4. A video playback method, the method comprising:
when a first video file is played, sequentially displaying, based on acquired display operation information and according to a preset hierarchical relationship, the second feature information of the first video file that corresponds to the display operation information;
wherein the first video file is generated according to the video generation method of any one of claims 1 to 3, and the display operation information includes a click operation by a user.
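The hierarchical display could look like the toy model below, where each click reveals the next level of second feature information for the current frame; the three-level shot/picture/audio hierarchy is an assumed example of the "preset hierarchical relationship", not part of the claim.

```python
# Toy model of claim 4: each click reveals one more level of second feature
# information. The hierarchy and field names are assumptions of this sketch.
PRESET_HIERARCHY = ["shot", "picture", "audio"]

class PlayerOverlay:
    def __init__(self, frame_features: dict):
        self.frame_features = frame_features  # level name -> feature text
        self.level = 0

    def on_click(self):
        """Acquired display operation information: one user click."""
        if self.level < len(PRESET_HIERARCHY):
            name = PRESET_HIERARCHY[self.level]
            print(f"{name}: {self.frame_features.get(name, '(none)')}")
            self.level += 1

overlay = PlayerOverlay({"shot": "close-up", "picture": "anchor at desk"})
overlay.on_click()  # shot: close-up
overlay.on_click()  # picture: anchor at desk
overlay.on_click()  # audio: (none)
```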
5. A video search method, the method comprising:
based on an obtained search keyword and/or search tag, searching a plurality of second video files in a video library for a target video file that includes the search keyword and/or the search tag; and
displaying the target video file that includes the search keyword and/or the search tag, together with the second feature information in the target video file that matches the search keyword and/or the search tag;
wherein the plurality of second video files are generated according to the video generation method of any one of claims 1 to 3.
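One plausible reading of this search, sketched with assumed dictionary field names: scan each second video file's per-frame records and return the files whose second feature information or tags contain the query.

```python
# Sketch of the claim-5 search over a video library; every field name here
# ("frames", "features", "tags") is an assumed shape, not the claimed format.
def search_library(video_library, keyword=None, tag=None):
    hits = []
    for video in video_library:  # video: {"name": str, "frames": [frame, ...]}
        matched_frames = []
        for frame in video["frames"]:  # frame: {"features": [...], "tags": [...]}
            keyword_hit = keyword and any(keyword in f for f in frame["features"])
            tag_hit = tag and tag in frame["tags"]
            if keyword_hit or tag_hit:
                matched_frames.append(frame)
        if matched_frames:
            # display the file together with the matching second feature info
            hits.append({"file": video["name"], "matches": matched_frames})
    return hits
```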
6. The method of claim 5, further comprising:
displaying the tag information in the target video file that matches the search keyword and/or the search tag, wherein the tag information comprises at least one of the first tag, the second tag, the third tag, and the fourth tag of claim 2.
7. The method of claim 6, wherein displaying the tag information in the target video file that matches the search keyword and/or the search tag comprises:
if items of the tag information have different priorities, displaying the tag information in order of priority;
if items of the tag information have the same priority, displaying the tag information in the temporal order of the video frames to which they correspond;
wherein the priorities of the first tag, the second tag, the third tag, and the fourth tag decrease in that order.
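This ordering rule collapses into a single sort key: priority first (first tag highest, fourth lowest), frame time as the tie-break. A sketch, with tag kinds encoded as the integers 1 to 4 from the claim-2 sketch above:

```python
# Sketch of the claim-7 display order; lower kind value = higher priority.
def display_order(tag_infos):
    """tag_infos: iterable of (kind, frame_time_seconds, text) tuples."""
    return sorted(tag_infos, key=lambda t: (t[0], t[1]))

tags = [(3, 12.0, "studio"), (1, 40.5, "my-clip"), (1, 2.5, "intro")]
print(display_order(tags))
# [(1, 2.5, 'intro'), (1, 40.5, 'my-clip'), (3, 12.0, 'studio')]
```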
8. A video processing method, the method comprising:
based on acquired modification operation information, updating a third video file according to the modification content corresponding to the modification operation information, and saving the user name and the modification time corresponding to the modification operation information;
wherein the third video file is generated according to the video generation method of any one of claims 1 to 3, and the modification operation information includes a modification operation performed on the third video file by a user.
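This claim amounts to an edit plus an audit record. A minimal sketch, assuming a dictionary-backed video file and a hypothetical ModificationRecord shape:

```python
# Minimal sketch of claim 8: apply the modification and save who made it and
# when. The record shape and dict-backed file are assumptions of this sketch.
import datetime
from dataclasses import dataclass

@dataclass
class ModificationRecord:
    user_name: str
    modified_at: datetime.datetime
    content: str

def apply_modification(third_video_file: dict, user_name: str, content: str):
    third_video_file["content"] = content            # update the video file
    third_video_file.setdefault("history", []).append(
        ModificationRecord(user_name, datetime.datetime.now(), content))
```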
9. A video processing apparatus, comprising:
a video generation module configured to perform the video generation method of any one of claims 1 to 3;
a video playback module configured to perform the video playback method of claim 4;
a video search module configured to perform the video search method of any one of claims 5 to 7; and
a video processing module configured to perform the video processing method of claim 8.
10. A non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the video generation method of any one of claims 1 to 3, and/or the video playback method of claim 4, and/or the video search method of any one of claims 5 to 7, and/or the video processing method of claim 8.
Application CN202010099142.3A, filed 2020-02-18 (priority 2020-02-18): Video generation, playing, searching and processing method, device and storage medium. Status: Pending. Published as CN111263186A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010099142.3A CN111263186A (en) 2020-02-18 2020-02-18 Video generation, playing, searching and processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN111263186A 2020-06-09

Family

ID=70949355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010099142.3A Pending CN111263186A (en) 2020-02-18 2020-02-18 Video generation, playing, searching and processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111263186A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090144772A1 (en) * 2007-11-30 2009-06-04 Google Inc. Video object tag creation and processing
CN105138670A (en) * 2015-09-06 2015-12-09 天翼爱音乐文化科技有限公司 Audio file label generation method and system
CN105912547A (en) * 2015-12-15 2016-08-31 乐视网信息技术(北京)股份有限公司 Method and device for realizing data rapid processing based on web spider
CN105677735A (en) * 2015-12-30 2016-06-15 腾讯科技(深圳)有限公司 Video search method and apparatus
WO2017114388A1 (en) * 2015-12-30 2017-07-06 腾讯科技(深圳)有限公司 Video search method and device
CN106096050A (en) * 2016-06-29 2016-11-09 乐视控股(北京)有限公司 A kind of method and apparatus of video contents search
CN110502664A (en) * 2019-08-27 2019-11-26 腾讯科技(深圳)有限公司 Video tab indexes base establishing method, video tab generation method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000841A (en) * 2020-07-29 2020-11-27 北京达佳互联信息技术有限公司 Electronic tag data processing method and device, electronic equipment and storage medium
CN112000841B (en) * 2020-07-29 2023-09-26 北京达佳互联信息技术有限公司 Electronic tag data processing method and device, electronic equipment and storage medium
CN111901668A (en) * 2020-09-07 2020-11-06 三星电子(中国)研发中心 Video playing method and device
CN112203139A (en) * 2020-10-12 2021-01-08 广州欢网科技有限责任公司 Program content identification method and intelligent system of intelligent television
CN112399262A (en) * 2020-10-30 2021-02-23 深圳Tcl新技术有限公司 Video searching method, television and storage medium
CN112399262B (en) * 2020-10-30 2024-02-06 深圳Tcl新技术有限公司 Video searching method, television and storage medium
CN112487248A (en) * 2020-12-01 2021-03-12 深圳市易平方网络科技有限公司 Video file label generation method and device, intelligent terminal and storage medium
CN113038175A (en) * 2021-02-26 2021-06-25 北京百度网讯科技有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN113099299A (en) * 2021-03-10 2021-07-09 北京蜂巢世纪科技有限公司 Video editing method and device
CN116150428A (en) * 2021-11-16 2023-05-23 腾讯科技(深圳)有限公司 Video tag acquisition method and device, electronic equipment and storage medium
CN116150428B (en) * 2021-11-16 2024-06-07 腾讯科技(深圳)有限公司 Video tag acquisition method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111263186A (en) Video generation, playing, searching and processing method, device and storage medium
US10070170B2 (en) Content annotation tool
US9396763B2 (en) Computer-assisted collaborative tagging of video content for indexing and table of contents generation
US20180213289A1 (en) Method of authorizing video scene and metadata
KR102299379B1 (en) Determining search queries to obtain information during the user experience of an event
CN107527619B (en) Method and device for positioning voice control service
US20190130185A1 (en) Visualization of Tagging Relevance to Video
KR101916874B1 (en) Apparatus, method for auto generating a title of video contents, and computer readable recording medium
US20170109339A1 (en) Application program activation method, user terminal, and server
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN107515870B (en) Searching method and device and searching device
WO2023016349A1 (en) Text input method and apparatus, and electronic device and storage medium
CN110019948B (en) Method and apparatus for outputting information
CN112287168A (en) Method and apparatus for generating video
US11968428B2 (en) Navigating content by relevance
CN114329223A (en) Media content searching method, device, equipment and medium
CN109116718B (en) Method and device for setting alarm clock
CN113407775B (en) Video searching method and device and electronic equipment
WO2019146466A1 (en) Information processing device, moving-image retrieval method, generation method, and program
CN110020106B (en) Recommendation method, recommendation device and device for recommendation
CN112559913B (en) Data processing method, device, computing equipment and readable storage medium
US10296533B2 (en) Method and system for generation of a table of content by processing multimedia content
US20230297618A1 (en) Information display method and electronic apparatus
WO2015094311A1 (en) Quote and media search method and apparatus
EP4099711A1 (en) Method and apparatus and storage medium for processing video and timing of subtitles

Legal Events

Code: Title
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-06-09)