CN110781345A - Video description generation model acquisition method, video description generation method and device

Video description generation model acquisition method, video description generation method and device

Info

Publication number
CN110781345A
Authority
CN
China
Prior art keywords
video
description
frame
frames
video frame
Prior art date
Legal status
Granted
Application number
CN201911051111.4A
Other languages
Chinese (zh)
Other versions
CN110781345B (en)
Inventor
张水发
李岩
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201911051111.4A
Publication of CN110781345A
Application granted
Publication of CN110781345B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/71 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a method for acquiring a video description generation model, a method and an apparatus for generating a video description, an electronic device, and a computer-readable storage medium. The method for acquiring a video description generation model includes: acquiring a plurality of videos from a preset video library; for each video, identifying each video frame in the video to extract characters in the video frame; combining the characters corresponding to the video frames of each video to serve as the video description of that video; and training with the video frames and video descriptions corresponding to the videos as training samples to obtain a video description generation model. The embodiments of the disclosure can effectively reduce the manual labeling cost.

Description

Video description generation model acquisition method, video description generation method and device
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method for obtaining a video description generation model, a method and an apparatus for generating a video description, an electronic device, and a computer-readable storage medium.
Background
With the steady development of the internet and big data, the demand for multimedia information has grown explosively, and traditional information processing technology cannot keep up with tasks such as labeling and describing multimedia data. For example, as the number of internet videos keeps surging, the demand for video description grows by the day. Video description (video captioning) is a technique for generating content description information for a video. In the field of artificial intelligence, a video description generation model is generally adopted to automatically generate a video description for a video.
In the course of making the present disclosure, the inventors found that, in the training stage of a video description generation model, training samples are very difficult to obtain and require a large amount of manual labeling; moreover, because the labeling is done by a small number of annotators, the labeling style tends to become homogeneous, so the generated descriptions do not match the language habits of the general public.
Disclosure of Invention
In view of this, the present disclosure provides a method for acquiring a video description generation model, a method for generating a video description, an apparatus for generating a video description, an electronic device, and a computer-readable storage medium.
A first aspect of the present disclosure provides a method for acquiring a video description generative model, where the method specifically includes:
acquiring a plurality of videos from a preset video library;
for each video, identifying each video frame in the video to extract characters in the video frame;
combining characters corresponding to the video frames of each video to serve as video description of the videos;
and training the video frames and the video descriptions corresponding to the videos respectively as training samples to obtain a video description generation model.
Optionally, after the identifying each video frame in the video to extract the text in the video frame, the method further includes:
matching the characters corresponding to each video frame against pre-stored slogan text, and deleting the characters that match the slogan text.
Optionally, after the identifying each video frame in the video to extract the text in the video frame, the method further includes:
performing word segmentation on characters corresponding to all video frames in the video to obtain a plurality of word sequences;
and deleting the word sequences with the occurrence frequency not less than the set value.
Optionally, after the identifying each video frame in the video to extract the text in the video frame, the method further includes:
for each video frame in each video, comparing the video frame with other video frames in the video one by one to determine whether the video frame is similar to any one of the other video frames;
and if so, deleting one of the two video frames, and combining the characters corresponding to the two video frames to be used as the characters corresponding to the video frame that is not deleted.
Optionally, the method further comprises:
performing word segmentation on characters corresponding to the undeleted video frames to obtain a plurality of word sequences;
deleting word sequences whose frequency of occurrence is not less than the first specified value or not more than the second specified value.
Optionally, determining whether the video frame is similar to any other video frame through a pre-established classification network;
the classification network comprises an input layer, a difference layer, a splicing layer, a convolution layer and an output layer;
the input layer is used for acquiring two input video frames;
the difference layer is used for carrying out subtraction operation on the two video frames to obtain a difference image;
the splicing layer is used for splicing the difference image and the two video frames to obtain a spliced image;
the convolution layer is used for carrying out feature extraction on the spliced image to generate a feature vector;
and the output layer is used for outputting a similar result according to the feature vector.
Optionally, the video description generation model comprises an encoder network and a decoder network;
the encoder network is used for extracting the characteristics of a plurality of input video frames and generating the visual characteristics of the video;
the decoder network is used for sequentially generating decoding words according to the visual characteristics and combining the generated decoding words into a video description.
Optionally, the encoder network comprises an input layer, a plurality of convolutional layers, and a stitching layer;
the input layer is used for acquiring a plurality of input video frames;
the plurality of convolutional layers are respectively used for extracting the characteristics of a plurality of video frames;
the splicing layer is used for splicing the characteristics of the video frames to generate visual characteristics.
Optionally, the decoder network is a long-short term memory network.
Optionally, the training with the video frames and the video descriptions corresponding to the multiple videos as training samples to obtain a video description generation model includes:
inputting the video frame into a specified video description generation model to obtain a prediction description;
and adjusting parameters of the video description generation model according to the difference between the prediction description and the video description corresponding to the video frame to obtain the trained model.
Optionally, the adjusting parameters of the video description generation model according to the difference between the prediction description and the video description corresponding to the video frame includes:
respectively obtaining the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame;
and adjusting parameters of the video description generation model according to the difference between the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame.
Optionally, the adjusting parameters of the video description generation model according to a difference between the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame includes:
determining whether the prediction description is similar to the video description corresponding to the video frame according to the distance between the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame;
and adjusting parameters of the video description generation model according to the similar result.
Optionally, the feature vector is a word vector.
Optionally, the distance is a cosine distance.
According to a second aspect of the embodiments of the present disclosure, there is provided a video description generation method, including:
acquiring a target video;
taking the video frame of the target video as the input of a pre-established video description generation model so as to obtain the video description corresponding to the target video from the video description generation model; the video description generation model is obtained based on video frames and video description training corresponding to a plurality of videos respectively, and the generation of the video description of each video comprises the following steps: and identifying each video frame in the video to extract characters in the video frame, and combining the characters corresponding to the video frame of the video to be used as the video description of the video.
Optionally, after the acquiring the target video, the method further includes:
for each video frame in the target video, comparing the video frame with other video frames in the target video one by one to determine whether the video frame is similar to any one of the other video frames;
and if so, deleting one of the video frames.
Optionally, determining whether the video frame is similar to any other video frame through a pre-established classification network;
the classification network comprises an input layer, a difference layer, a splicing layer, a convolution layer and an output layer;
the input layer is used for acquiring two input video frames;
the difference layer is used for carrying out subtraction operation on the two video frames to obtain a difference image;
the splicing layer is used for splicing the difference image and the two video frames to obtain a spliced image;
the convolution layer is used for carrying out feature extraction on the spliced image to generate a feature vector;
and the output layer is used for outputting a similar result according to the feature vector.
Optionally, the video description generation model comprises an encoder network and a decoder network;
the encoder network is used for extracting the characteristics of a plurality of input video frames and generating the visual characteristics of a target video;
the decoder network is used for sequentially generating decoding words according to the visual characteristics and combining the generated decoding words into a video description.
Optionally, the encoder network comprises an input layer, a plurality of convolutional layers, and a stitching layer;
the input layer is used for acquiring a plurality of input video frames;
the plurality of convolutional layers are respectively used for extracting the characteristics of a plurality of video frames;
the splicing layer is used for splicing the characteristics of the video frames to generate visual characteristics.
Optionally, the decoder network is a long-short term memory network.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for obtaining a video description generative model, the apparatus including:
the video acquisition module is used for acquiring a plurality of videos from a preset video library;
the character extraction module is used for identifying each video frame in the videos so as to extract characters in the video frame;
the video description acquisition module is used for combining characters corresponding to video frames of each video to serve as video description of the video;
and the model training module is used for training the video frames and the video descriptions corresponding to the videos respectively as training samples to obtain a video description generation model.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a video description generation apparatus, the apparatus including:
the target video acquisition module is used for acquiring a target video;
the video description generation module is used for taking a video frame of the target video as the input of a video description generation model so as to obtain a video description corresponding to the target video from the video description generation model; the video description generation model is obtained based on video frames and video description training corresponding to a plurality of videos respectively, and the generation of the video description of each video comprises the following steps: and identifying each video frame in the video to extract characters in the video frame, and combining the characters corresponding to the video frame of the video to be used as the video description of the video.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of the first and second aspects.
According to a sixth aspect of embodiments of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of the first and second aspects.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method includes the steps that a plurality of videos are obtained from a preset video library, for each video, each video frame in the videos is identified to extract characters in the video frame, then characters corresponding to the video frame of each video are combined to serve as video descriptions of the videos, finally the video frames and the video descriptions corresponding to the videos are used as training samples to be trained, and a video description generation model is obtained.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
FIG. 1 is a flow chart illustrating a method for obtaining a video description generative model according to an exemplary embodiment of the present disclosure;
FIG. 2A is an architecture diagram illustrating a video description generation model according to an exemplary embodiment of the present disclosure;
FIG. 2B is an architecture diagram of another video description generative model illustrating the present disclosure in accordance with an exemplary embodiment;
FIG. 3 is a flowchart illustrating a second method for obtaining a video description generative model according to an exemplary embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating a third method for obtaining a video description generative model according to an exemplary embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a fourth method for obtaining a video description generative model according to an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram of a classification network shown in accordance with an exemplary embodiment of the present disclosure;
FIG. 7 is a flow diagram illustrating a method for generating a video description according to an exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram illustrating an embodiment of an apparatus for obtaining a video description generative model according to an exemplary embodiment of the present disclosure;
fig. 9 is a block diagram of an embodiment of an acquisition device of a video description generation device according to an exemplary embodiment of the present disclosure;
FIG. 10 is a block diagram of an electronic device provided in accordance with an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
To address the problems that, in the training phase of a video description generation model, training samples are difficult to acquire and the labeling style becomes homogeneous because labeling is done by a small number of annotators, the embodiments of the disclosure provide a method for acquiring a video description generation model. The method can be executed by an electronic device, which may be a computing device such as a computer, a smart phone, a tablet, a personal digital assistant or a server.
Referring to fig. 1, a flowchart of a method for acquiring a video description generative model according to an exemplary embodiment of the present disclosure is shown, the method including:
in step S101, a plurality of videos are acquired from a preset video library.
In step S102, for each video, each video frame in the video is identified to extract the text in the video frame.
In step S103, the text corresponding to the video frame of each video is merged as the video description of the video.
In step S104, the video frames and the video descriptions corresponding to the multiple videos are used as training samples to be trained, and a video description generation model is obtained.
It can be understood that, the source of the video library in the embodiment of the present disclosure is not limited at all, and may be specifically selected according to an actual application scenario, for example, the video library may be disposed on the electronic device, or the video library may also be disposed on a server, and the electronic device obtains a video from the server.
In an embodiment, the video library stores a plurality of videos from which the electronic device can obtain a plurality of videos, where the number of videos obtained by the electronic device from the database may be specifically set according to actual situations, and this is not limited in this disclosure, for example, all videos may be obtained, or a specified number of videos (such as 50%, 60%, etc. of all videos) may be obtained.
In one embodiment, for each video, the electronic device identifies each video frame in the video to extract the characters in the video frame, and determines the corresponding relationship between each video frame and the characters thereof; as an example, the electronic device may recognize each video frame by OCR (Optical Character Recognition) technology to extract the text in the video frame.
In a possible implementation manner, for each video frame, the electronic device performs image processing and character recognition on the video frame to obtain the characters corresponding to the video frame. The image processing includes, but is not limited to, operations such as graying, binarization and image noise reduction: the electronic device may convert a color image to grayscale using a component method, a maximum method, an averaging method or a weighted averaging method, binarize it using a bimodal method, a P-parameter method or an iterative method, and reduce noise using a mean filter, an adaptive Wiener filter, a median filter, a morphological noise filter or wavelet denoising. The character recognition can be performed by a pre-established character recognition model: after the processed image is further pre-processed (for example, by tilt correction and character segmentation), it is used as the input of the character recognition model, and a recognition result is obtained from the model. The character recognition model can be trained based on a machine learning algorithm or a deep learning algorithm.
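To make the frame-level character extraction concrete, the following is a minimal sketch of the graying, noise reduction, binarization and recognition pipeline described above. It assumes OpenCV for the image processing and the Tesseract engine (via pytesseract) in place of the patent's own character recognition model; the language pack and threshold choices are illustrative only.

```python
import cv2
import pytesseract  # assumed OCR engine, standing in for the patent's character recognition model

def extract_frame_text(frame_bgr):
    """Grayscale, denoise and binarize one video frame, then run character recognition."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)        # weighted-average graying
    denoised = cv2.medianBlur(gray, 3)                        # median-filter noise reduction
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # automatic binarization
    # the language pack is an assumption; choose whatever matches the on-screen text
    return pytesseract.image_to_string(binary, lang="chi_sim").strip()
```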
In an embodiment, after obtaining the texts corresponding to all the video frames of each video, the electronic device may combine the texts corresponding to all the video frames of the video to serve as the video description of the video, and then the electronic device trains the video frames and the video descriptions corresponding to the multiple videos respectively as training samples to obtain a video description generation model; referring to fig. 2A, the video description generation model includes an encoder network 11 and a decoder network 12; the encoder network 11 is configured to extract features of a plurality of input video frames and generate visual features of a video; the decoder network 12 is configured to sequentially generate decoded words according to the visual features, and combine the generated decoded words into a video description.
In one embodiment, referring to fig. 2B, the encoder network 11 includes an input layer 111, a plurality of convolutional layers 112, and a splicing layer 113; the input layer 111 is used for acquiring a plurality of input video frames; the plurality of convolutional layers 112 are respectively used for extracting the features of a plurality of video frames; the splicing layer 113 is used for splicing the features of a plurality of video frames to generate visual features; the decoder network 12 is a Long Short-Term Memory network 121 (LSTM).
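As an illustration of the encoder-decoder structure described above (per-frame feature extraction by convolutional layers, a splicing layer that concatenates the frame features into one visual feature, and an LSTM decoder that emits one decoded word per step), here is a minimal PyTorch sketch. The layer sizes, the weight sharing across the per-frame branches and the way the visual feature initializes the LSTM state are assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn

class VideoDescriptionModel(nn.Module):
    """Encoder: per-frame convolutional features spliced into one visual feature.
    Decoder: an LSTM that emits one decoded word per step."""

    def __init__(self, vocab_size, num_frames=8, feat_dim=256, hidden_dim=512):
        super().__init__()
        # One small CNN applied to every frame; sharing its weights across the
        # per-frame branches is an assumption made here for brevity.
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.splice = nn.Linear(num_frames * feat_dim, hidden_dim)  # "splicing layer"
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)  # LSTM decoder
        self.word_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frames, caption_ids):
        # frames: (batch, num_frames, 3, H, W); caption_ids: (batch, seq_len) token ids
        b, t = frames.shape[:2]
        feats = self.frame_cnn(frames.flatten(0, 1)).view(b, t, -1)  # per-frame features
        visual = self.splice(feats.flatten(1))                       # spliced visual feature
        h0, c0 = visual.unsqueeze(0), torch.zeros_like(visual).unsqueeze(0)
        out, _ = self.decoder(self.embed(caption_ids), (h0, c0))     # decode word by word
        return self.word_head(out)                                   # logits for each step
```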
In an embodiment, in a training process, the electronic device inputs the video frame into a specified video description generation model to obtain a prediction description, and then adjusts parameters of the video description generation model according to a difference between the prediction description and a video description corresponding to the video frame to obtain a trained model.
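A minimal sketch of one such training step, using a model shaped like the encoder-decoder sketch above. Token-level cross entropy against the OCR-derived description is used here as a concrete stand-in for "the difference between the prediction description and the video description"; the patent leaves the difference measure open and also describes a feature-vector comparison in the following paragraphs.

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, frames, caption_ids, pad_id=0):
    """One parameter adjustment: predict a description from the video frames and
    compare it with the OCR-derived video description."""
    model.train()
    logits = model(frames, caption_ids[:, :-1])        # predicted description (teacher forcing)
    loss = F.cross_entropy(                            # assumed 'difference' measure
        logits.reshape(-1, logits.size(-1)),
        caption_ids[:, 1:].reshape(-1),
        ignore_index=pad_id,
    )
    optimizer.zero_grad()
    loss.backward()                                    # the difference drives the adjustment
    optimizer.step()
    return loss.item()
```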
In an implementation manner, the electronic device may respectively obtain the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame, and then adjust the parameter of the video description generation model according to the difference between the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame, so as to obtain the trained model.
It can be understood that the feature vector is not limited in the embodiments of the present disclosure and may be selected according to the actual situation. For example, the feature vector may be a word vector. The electronic device may obtain the word vector of the prediction description and the word vector of the video description corresponding to the video frame through a preset word vector generation model: the prediction description or the video description corresponding to the video frame is input into the preset word vector generation model, and the corresponding word vector is obtained from the model. The word vector generation model is used to generate a word vector for any input description and may be, for example, a Word2vec model, a GloVe model or an ELMo model; for the construction of the Word2vec, GloVe or ELMo model, reference may be made to specific implementations in the related art, which are not described herein again.
Specifically, the electronic device may determine whether the prediction description is similar to the video description corresponding to the video frame according to the distance between the feature vector of the prediction description and the feature vector of that video description, and then adjust the parameters of the video description generation model according to the similarity result to obtain the trained model. The distance may be a cosine distance: if the distance is smaller than a specified value, the prediction description is considered similar to the video description corresponding to the video frame; otherwise, they are considered dissimilar. The specified value can be set according to actual conditions, and the embodiments of the disclosure place no limit on it. In this embodiment, a plurality of videos are acquired from a preset video library; for each video, each video frame is identified to extract the characters in the video frame; the characters corresponding to the video frames of each video are then combined to serve as the video description of that video; finally, the video frames and the video descriptions corresponding to the videos are used as training samples for training, and a video description generation model is acquired.
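The description-level comparison above can be sketched as follows, assuming a trained Word2vec model loaded through gensim and representing each description by the average of its word vectors; both choices, as well as the threshold value, are assumptions rather than the patent's prescribed implementation.

```python
import numpy as np
from gensim.models import KeyedVectors  # assumed Word2vec implementation

def description_vector(tokens, wv: KeyedVectors):
    """Average the word vectors of a description's tokens into one feature vector
    (averaging is an assumption; the patent only requires a word-vector representation)."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def is_similar_description(pred_tokens, ref_tokens, wv, threshold=0.5):
    """Similar when the cosine distance between the two description vectors is
    below the specified value; the threshold here is a placeholder."""
    a = description_vector(pred_tokens, wv)
    b = description_vector(ref_tokens, wv)
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    if denom == 0.0:
        return False
    cosine_distance = 1.0 - float(np.dot(a, b)) / denom
    return cosine_distance < threshold
```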
Referring to fig. 3, a flowchart of a second method for obtaining a video description generative model according to an exemplary embodiment of the present disclosure is shown, where the method includes:
in step S201, a plurality of videos are acquired from a preset video library. Similar to step S101, the description is omitted here.
In step S202, for each video, each video frame in the video is identified to extract the text in the video frame. Similar to step S102, the description is omitted here.
In step S203, the characters corresponding to each video frame are matched against the pre-stored slogan text, and the characters that match the slogan text are deleted.
In step S204, the text corresponding to the video frame of each video is merged as the video description of the video. Similar to step S103, the description is omitted here.
In step S205, the video frames and the video descriptions corresponding to the videos are used as training samples to be trained, so as to obtain a video description generation model. Similar to step S104, the description is omitted here.
The slogan text is content that has no strong correlation with the video frame. As an example, the slogan text may be widely used wording such as a television station logo (e.g., CCTV), a platform or application logo, or a promotional catchphrase (e.g., an advertising slogan or other publicity slogan). It is understood that the specific setting of the slogan text is not limited in any way in the present disclosure and can be configured according to the actual scene.
In addition, the slogan text only needs to be stored in the electronic device before the matching step is performed; the exact time at which the slogan text is stored is not limited and can be set according to actual conditions.
In this embodiment, after extracting the characters of all the video frames, the electronic device matches the characters corresponding to each video frame against the pre-stored slogan text. A successful match indicates that the matched characters have no strong correlation with the video frame, so the electronic device deletes them as noise, avoiding their influence on the model training result and improving the accuracy of model prediction.
Then, after the characters matched with the slogan text are deleted, combining the characters corresponding to the video frames of each video by the electronic equipment to serve as the video description of the video, and then training the video frames and the video description corresponding to the videos by the electronic equipment to serve as training samples so as to obtain a video description generation model.
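A minimal sketch of the slogan filtering step, assuming the extracted text is held per frame in a dictionary and that matching is plain substring removal; the actual matching rule (exact match, fuzzy match, and so on) is left open by the patent.

```python
def remove_slogan_text(frame_texts, slogan_texts):
    """Drop extracted characters that match a pre-stored slogan (station logo,
    platform logo, promotional catchphrase, ...)."""
    cleaned = {}
    for frame_id, text in frame_texts.items():
        for slogan in slogan_texts:
            text = text.replace(slogan, "")   # substring removal as the matching rule
        cleaned[frame_id] = text.strip()
    return cleaned

# e.g. remove_slogan_text({0: "CCTV tonight's match begins"}, ["CCTV"])
#      -> {0: "tonight's match begins"}
```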
Referring to fig. 4, a flowchart of a method for acquiring a third video description generative model according to an exemplary embodiment of the present disclosure is shown, where the method includes:
in step S301, a plurality of videos are acquired from a preset video library. Similar to step S101, the description is omitted here.
In step S302, for each video, each video frame in the video is identified to extract the text in the video frame. Similar to step S102, the description is omitted here.
In step S303, word segmentation is performed on the characters corresponding to all the video frames in the video, a plurality of word sequences are obtained, and the word sequences with the occurrence frequency not less than a set value are deleted, so as to obtain the characters corresponding to each video frame after the word sequences are deleted.
In step S304, the text corresponding to the video frame of each video is merged as the video description of the video. Similar to step S103, the description is omitted here.
In step S305, the video frames and the video descriptions corresponding to the plurality of videos are used as training samples to be trained, and a video description generation model is obtained. Similar to step S104, the description is omitted here.
In this embodiment, it is considered that vocabulary with an excessively high frequency of occurrence may affect the result of model training, because such vocabulary may have no strong correlation with the video content; for example, it may be a logo, a publicity slogan, or several similar or identical phrases that appear repeatedly in a video. Therefore, after extracting the characters of the video frames, the electronic device segments the characters corresponding to all the video frames in each video to obtain a plurality of word sequences, counts the frequency of occurrence of each word sequence, and deletes a word sequence if its frequency of occurrence is not less than the set value.
It can be understood that, in the embodiment of the present disclosure, specific values of the setting values are not limited at all, and may be specifically set according to actual situations.
In another embodiment, the electronic device may further compare the word sequences with a pre-stored useless-vocabulary list and delete the word sequences that match it. It is understood that the present disclosure does not limit how the useless vocabulary is selected, and it may be set according to the actual situation; for example, the useless vocabulary may be designated by a user (such as function words or filler words), or selected based on a preset semantic rule (such as treating prepositions or interjections as useless vocabulary). In this embodiment, the word sequences are further denoised, which improves the prediction accuracy of the model. A sketch of the segmentation and filtering appears below.
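The following sketch covers the word segmentation and frequency filtering described above, assuming the jieba segmenter for Chinese text; the set value max_count and the useless-vocabulary set are placeholders to be tuned per deployment.

```python
from collections import Counter
import jieba  # assumed Chinese word-segmentation library

def filter_word_sequences(frame_texts, max_count=10, useless_words=frozenset()):
    """Segment the characters of every frame in one video, then drop word sequences
    whose frequency reaches the set value or that appear in the useless-vocabulary list."""
    segmented = {fid: list(jieba.cut(text)) for fid, text in frame_texts.items()}
    counts = Counter(w for words in segmented.values() for w in words)
    return {
        fid: "".join(w for w in words
                     if counts[w] < max_count and w not in useless_words)
        for fid, words in segmented.items()
    }
```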
Then, for each video, after acquiring the characters corresponding to all the video frames from which the word sequence is deleted, the electronic device merges the characters corresponding to the video frames of the video to serve as the video description of the video, and then the electronic device trains the video frames and the video descriptions corresponding to the multiple videos respectively to serve as training samples, so as to acquire a video description generation model.
Referring to fig. 5, a flowchart of a fourth method for acquiring a video description generative model according to an exemplary embodiment of the present disclosure is shown, where the method includes:
in step S401, a plurality of videos are acquired from a preset video library. Similar to step S101, the description is omitted here.
In step S402, for each video, each video frame in the video is identified to extract the text in the video frame. Similar to step S102, the description is omitted here.
In step S403, for each video frame in each video, comparing the video frame with other video frames in the video one by one to determine whether the video frame is similar to any of the other video frames, if so, deleting one of the video frames, and merging the characters corresponding to the two video frames respectively to obtain the characters corresponding to the video frame that is not deleted.
In step S404, the text corresponding to the video frame of each video is merged as the video description of the video. Similar to step S103, the description is omitted here.
In step S405, the video frames and the video descriptions corresponding to the multiple videos are used as training samples to be trained, and a video description generation model is obtained. Similar to step S104, the description is omitted here.
In this embodiment, after extracting the characters of the video frames, the electronic device performs a similar-picture judgment: for each video frame in each video, it compares the video frame with the other video frames in the video one by one, and in each comparison judges whether the two frames are similar. If so, the electronic device deletes one of the two video frames and combines the characters corresponding to both frames as the characters corresponding to the frame that is not deleted. Training the video description model only on dissimilar video frames helps improve the accuracy of the model's predictions: the complete content of the video can still be described accurately, while description errors caused by repeated similar content are avoided.
For example, if picture A and picture B are similar, the characters corresponding to picture A are "hello" and the characters corresponding to picture B are "I play ball", then after one of the pictures is deleted (say picture A), the characters of picture A and picture B are combined, and picture B with the corresponding characters "hello I play ball" is obtained.
It can be understood that, for two video frames determined to be similar, which one of the two video frames is specifically selected by the electronic device for deletion is not limited in any way, and may be specifically set according to actual situations.
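A sketch of the similar-frame deduplication and text merging. For brevity it compares each frame against the frames already kept rather than against every other frame, which is a simplifying assumption; is_similar_pair can be any pairwise judgment, for example the classification network sketched further below.

```python
def deduplicate_frames(frames, frame_texts, is_similar_pair):
    """Keep only mutually dissimilar frames; when a pair is judged similar, delete one
    frame and merge its characters onto the frame that is kept."""
    kept_frames, kept_texts = [], []
    for frame, text in zip(frames, frame_texts):
        for i, kept in enumerate(kept_frames):
            if is_similar_pair(kept, frame):
                kept_texts[i] = kept_texts[i] + text   # merge text onto the undeleted frame
                break
        else:
            kept_frames.append(frame)
            kept_texts.append(text)
    return kept_frames, kept_texts
```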
In a possible implementation manner, the electronic device may determine whether the video frame is similar to any other video frame through a pre-established classification network, please refer to fig. 6, which is a structural diagram of a classification network shown in the present disclosure according to an exemplary embodiment, where the classification network includes an input layer 21, a difference layer 22, a splicing layer 23, a convolution layer 24, and an output layer 25; the input layer 21 is used for acquiring two input video frames; the difference layer 22 is used for performing subtraction operation on the two video frames to obtain a difference image; the splicing layer 23 is configured to splice the difference image and the two video frames to obtain a spliced image; the convolutional layer 24 is used for performing feature extraction on the spliced image to generate a feature vector; the output layer 25 is configured to output a similar result according to the feature vector.
Specifically, the electronic device inputs the video frame and any one of the other video frames into the classification network, performs subtraction operation on the two video frames through the classification network to obtain a difference image, then splices the difference image with the two video frames to obtain a spliced image, then performs feature extraction on the spliced image to generate a feature vector, and obtains a similar result according to the feature vector.
The classification network can be obtained by training on a plurality of training samples, which include positive samples and negative samples: a positive sample consists of two similar pictures together with a "similar" label, and a negative sample consists of two dissimilar pictures together with a "dissimilar" label.
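A minimal PyTorch sketch of such a classification network, with the difference layer, splicing layer, convolution layers and output layer described above; the channel counts and the two-class output head are assumptions.

```python
import torch
import torch.nn as nn

class FrameSimilarityNet(nn.Module):
    """Two frames in, a similar/dissimilar score out."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(                     # convolution layers
            nn.Conv2d(9, 32, 3, stride=2, padding=1), nn.ReLU(),  # 9 = two RGB frames + difference image
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.out = nn.Linear(64, 2)                    # output layer: [dissimilar, similar] logits

    def forward(self, frame_a, frame_b):
        diff = frame_a - frame_b                       # difference layer
        spliced = torch.cat([diff, frame_a, frame_b], dim=1)  # splicing layer (channel concat)
        return self.out(self.conv(spliced))
```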
In another embodiment, the electronic device may further perform word segmentation on the characters corresponding to the undeleted video frames to obtain a plurality of word sequences, count the frequency of occurrence of each word sequence, and delete a word sequence if its frequency is not less than a first specified value or not greater than a second specified value, where the first specified value is greater than the second specified value. By counting word frequencies along the image dimension and removing word sequences whose frequency is too high or too low, the model is effectively prevented from falling into a local optimum during training.
It is understood that, the specific values of the first specified value and the second specified value in the embodiments of the present disclosure are not limited in any way, and may be specifically set according to actual situations.
For example, suppose that after the similar pictures are deleted and the characters of the two similar pictures are merged, picture C corresponds to a piece of text that is segmented into the word sequences "what is what", "good children", "whose family" and "sleep", and the frequency (number of occurrences) of each word sequence is counted as {"what is what": 4, "good children": 1, "whose family": 2, "sleep": 2}. If the first specified value is 4 and the second specified value is 1, the word sequences "what is what" and "good children" are deleted, and the characters "whose family goes to sleep" corresponding to picture C are obtained.
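The two-threshold pruning in this example can be sketched as follows; the thresholds mirror the picture C example, and counting over a flat list of word sequences is a simplification.

```python
from collections import Counter

def prune_by_frequency(word_sequences, first_value=4, second_value=1):
    """Keep only word sequences whose count is strictly between the two specified values."""
    counts = Counter(word_sequences)
    return [w for w in word_sequences if second_value < counts[w] < first_value]

# With counts {"what is what": 4, "good children": 1, "whose family": 2, "sleep": 2},
# only "whose family" and "sleep" survive, matching the picture C example above.
```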
Then, for each video, after acquiring the characters corresponding to the remaining video frames from which the similar video frames are deleted, the electronic device merges the characters corresponding to the remaining video frames of the video as the video description of the video, and then the electronic device trains the video frames and the video descriptions corresponding to the plurality of videos as training samples, thereby acquiring a video description generation model.
Referring to fig. 7, a flowchart of a video description generation method according to an exemplary embodiment of the present disclosure is shown, where the method may be performed by an electronic device, and the electronic device may be a computing device such as a computer, a smart phone, a tablet, a personal digital assistant, or a server, and the method includes:
in step S501, a target video is acquired.
In step S502, a video frame of the target video is used as an input of a video description generation model to obtain a video description corresponding to the target video from the video description generation model; the video description generation model is obtained based on video frames and video description training corresponding to a plurality of videos respectively, and the generation of the video description of each video comprises the following steps: and identifying each video frame in the video to extract characters in the video frame, and combining the characters corresponding to the video frame of the video to be used as the video description of the video.
In this embodiment, after acquiring a target video, the electronic device takes a video frame of the target video as an input of a video description generation model to acquire a video description corresponding to the target video from the video description generation model; it is to be understood that, as for the source of the target video, the embodiment of the present disclosure does not limit this, and may be specifically configured according to the actual situation, for example, the target video may be uploaded by the user to the electronic device, or downloaded by the electronic device from a specified server.
Wherein the video description generation model comprises a decoder network and an encoder network; the encoder network is used for extracting the characteristics of a plurality of input video frames and generating the visual characteristics of the video; the decoder network is used for sequentially generating decoding words according to the visual characteristics and combining the generated decoding words into a video description.
In one embodiment, the encoder network includes an input layer, a plurality of convolutional layers, and a stitching layer; the input layer is used for acquiring a plurality of input video frames; the plurality of convolutional layers are respectively used for extracting the characteristics of a plurality of video frames; the splicing layer is used for splicing the characteristics of a plurality of video frames to generate visual characteristics; the decoder network is a Long Short-Term Memory network (LSTM).
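For illustration, here is a greedy decoding sketch showing how such an encoder-decoder could turn a target video's frames into a description at inference time; greedy search, the begin/end tokens and the vocabulary mapping are assumptions layered on top of the model sketch given earlier.

```python
import torch

@torch.no_grad()
def generate_description(model, frames, bos_id, eos_id, id_to_word, max_len=20):
    """Let the LSTM decoder emit one decoded word at a time, conditioned on the
    visual feature of the target video, until the end token or the length limit."""
    model.eval()
    tokens = [bos_id]
    for _ in range(max_len):
        logits = model(frames, torch.tensor([tokens]))  # reuse the training-time forward pass
        next_id = int(logits[0, -1].argmax())
        if next_id == eos_id:
            break
        tokens.append(next_id)
    return "".join(id_to_word[t] for t in tokens[1:])   # drop the begin token
```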
In an embodiment, after acquiring a target video, the electronic device compares each video frame in the target video with other video frames in the target video one by one, determines whether the video frame is similar to any one of the other video frames in each comparison process, and deletes one of the video frames if yes; in the embodiment, the process of generating the video description is realized by using the dissimilar video frames in the target video, which is beneficial to realizing accurate description of the complete content of the video and avoiding the error of the video description caused by the similar content.
It can be understood that, for two video frames determined to be similar, which one of the two video frames is specifically selected by the electronic device for deletion is not limited in any way, and may be specifically set according to actual situations.
In a possible implementation manner, the electronic device may determine whether the video frame is similar to any other video frame through a pre-established classification network, where the classification network includes an input layer, a difference layer, a concatenation layer, a convolution layer, and an output layer; the input layer is used for acquiring two input video frames; the difference layer is used for carrying out subtraction operation on the two video frames to obtain a difference image; the splicing layer is used for splicing the difference image and the two video frames to obtain a spliced image; the convolution layer is used for carrying out feature extraction on the spliced image to generate a feature vector; and the output layer is used for outputting a similar result according to the feature vector.
Optionally, the generating of the video description of each video specifically includes: and identifying each video frame in the video to extract characters in the video frame, matching the characters corresponding to each video frame with a pre-stored slogan text, deleting the characters which are consistent in matching, and finally combining the characters corresponding to the video frames of the video to serve as the video description of the video.
Optionally, the generating of the video description of each video specifically includes: identifying each video frame in the video to extract the characters in the video frame, performing word segmentation on the characters corresponding to all the video frames in the video to obtain a plurality of word sequences, deleting the word sequences with the occurrence frequency not less than a set value to obtain the characters corresponding to each video frame after the word sequences are deleted, and finally merging the characters corresponding to the video frames of the video to serve as the video description of the video.
Optionally, the generating of the video description of each video specifically includes: identifying each video frame in the video to extract characters in the video frame, comparing the video frame with other video frames in the video one by one for each video frame in the video to determine whether the video frame is similar to any one of the other video frames, if so, deleting one of the video frames, merging characters corresponding to the two video frames respectively to be used as characters corresponding to the video frames which are not deleted, and finally merging the characters corresponding to the video frames of the video to be used as video description of the video.
Optionally, the generating of the video description of each video specifically includes: identifying each video frame in the video to extract characters in the video frame, comparing the video frame with other video frames in the video one by one for each video frame in the video to determine whether the video frame is similar to any one of the other video frames, if so, deleting one of the video frames, merging characters corresponding to the two video frames respectively to be used as characters corresponding to undeleted video frames, segmenting the characters corresponding to the undeleted video frames to obtain a plurality of word sequences, deleting the word sequences with the occurrence frequency not less than a first specified value or not more than a second specified value to obtain the characters corresponding to the undeleted video frames after deleting the word sequences, and finally merging the characters corresponding to the video frames of the video to be used as the video description of the video.
Accordingly, referring to fig. 8, a block diagram of an embodiment of an apparatus for obtaining a video description generative model according to an embodiment of the present disclosure is shown, where the apparatus includes:
the video obtaining module 601 is configured to obtain a plurality of videos from a preset video library.
A text extraction module 602, configured to, for each video, identify each video frame in the video to extract text in the video frame.
The video description obtaining module 603 is configured to combine texts corresponding to video frames of each video as video descriptions of the video.
The model training module 604 is configured to train video frames and video descriptions corresponding to the multiple videos as training samples to obtain a video description generation model.
Optionally, after the text extraction module 602, the method further includes:
and the character deleting module is used for matching the characters corresponding to each video frame with the pre-stored slogan text and deleting the characters which are matched with each other.
Optionally, after the text extraction module 602, the method further includes:
and the first word sequence acquisition module is used for segmenting words corresponding to all video frames in the video to acquire a plurality of word sequences.
And the first word sequence deleting module is used for deleting the word sequences with the frequency of occurrence not less than a set value.
Optionally, after the text extraction module 602, the method further includes:
and the video frame comparison module is used for comparing each video frame in each video with other video frames in the video one by one so as to determine whether the video frame is similar to any one of the other video frames.
And the video frame deleting module is used for, if so, deleting one of the two video frames, and combining the characters corresponding to the two video frames to be used as the characters corresponding to the video frame that is not deleted.
Optionally, the method further comprises:
and the second word sequence acquisition module is used for segmenting words corresponding to the undeleted video frame to acquire a plurality of word sequences.
And the second word sequence deleting module is used for deleting the word sequences with the frequency of occurrence not less than the first specified value or not more than the second specified value.
Optionally, it is determined whether the video frame is similar to any other video frame through a pre-established classification network.
The classification network includes an input layer, a differential layer, a splice layer, a convolutional layer, and an output layer.
The input layer is used for acquiring two input video frames.
And the difference layer is used for carrying out subtraction operation on the two video frames to obtain a difference image.
And the splicing layer is used for splicing the difference image and the two video frames to obtain a spliced image.
And the convolution layer is used for carrying out feature extraction on the spliced image to generate a feature vector.
And the output layer is used for outputting a similar result according to the feature vector.
Optionally, the video description generation model comprises a decoder network and an encoder network.
The encoder network is used for extracting the characteristics of a plurality of input video frames and generating the visual characteristics of the video.
The decoder network is used for sequentially generating decoding words according to the visual characteristics and combining the generated decoding words into a video description.
Optionally, the encoder network comprises an input layer, a plurality of convolutional layers, and a stitching layer.
The input layer is used for acquiring a plurality of input video frames.
The plurality of convolutional layers are respectively used for extracting the characteristics of a plurality of video frames.
The splicing layer is used for splicing the characteristics of the video frames to generate visual characteristics.
Optionally, the decoder network is a long-short term memory network.
Optionally, the model training module 604 includes:
the prediction description acquisition unit is used for inputting the video frame into a specified video description generation model to obtain prediction description;
and the parameter adjusting unit is used for adjusting parameters of the video description generation model according to the difference between the prediction description and the video description corresponding to the video frame to obtain the trained model.
Optionally, the parameter adjusting unit includes:
a feature vector obtaining subunit, configured to obtain a feature vector of the prediction description and a feature vector of a video description corresponding to the video frame respectively;
and the parameter adjusting subunit is used for adjusting the parameters of the video description generation model according to the difference between the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame.
Optionally, the parameter adjusting subunit is used for determining whether the prediction description is similar to the video description corresponding to the video frame according to the distance between the feature vector of the prediction description and the feature vector of the video description corresponding to the video frame, and adjusting the parameters of the video description generation model according to the similarity result.
Optionally, the feature vector is a word vector.
Optionally, the distance is a cosine distance.
Accordingly, referring to fig. 9, a block diagram of an embodiment of a video description generating apparatus according to an embodiment of the present disclosure is shown, where the apparatus includes:
a target video obtaining module 701, configured to obtain a target video.
A video description generation module 702, configured to use a video frame of the target video as an input of a video description generation model, so as to obtain a video description corresponding to the target video from the video description generation model; the video description generation model is obtained based on video frames and video description training corresponding to a plurality of videos respectively, and the generation of the video description of each video comprises the following steps: and identifying each video frame in the video to extract characters in the video frame, and combining the characters corresponding to the video frame of the video to be used as the video description of the video.
Optionally, after the target video is acquired, the apparatus further includes:
and the similarity judgment module is used for comparing each video frame in the target video with other video frames in the target video one by one so as to determine whether the video frame is similar to any one of the other video frames.
And the video frame deleting module is used for deleting one of the two video frames if they are determined to be similar.
Optionally, it is determined whether the video frame is similar to any other video frame through a pre-established classification network.
The classification network includes an input layer, a differential layer, a splice layer, a convolutional layer, and an output layer.
The input layer is used for acquiring two input video frames.
And the difference layer is used for carrying out subtraction operation on the two video frames to obtain a difference image.
And the splicing layer is used for splicing the difference image and the two video frames to obtain a spliced image.
And the convolution layer is used for carrying out feature extraction on the spliced image to generate a feature vector.
And the output layer is used for outputting a similarity result according to the feature vector.
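As an illustrative aid only, the sketch below shows one way such a classification network could be assembled in PyTorch, assuming three-channel frames of equal size; the layer widths, kernel sizes, and sigmoid output are assumptions and not details fixed by the disclosure.

```python
# Minimal PyTorch-style sketch of the frame-similarity classification network
# described above: difference layer, splicing (concatenation) layer,
# convolution layers, and an output layer. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FrameSimilarityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolution over the spliced 9-channel image (frame A, frame B, A - B).
        self.features = nn.Sequential(
            nn.Conv2d(9, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # output layer: similarity score

    def forward(self, frame_a, frame_b):
        diff = frame_a - frame_b                              # difference layer
        spliced = torch.cat([frame_a, frame_b, diff], dim=1)  # splicing layer
        feat = self.features(spliced).flatten(1)              # feature vector
        return torch.sigmoid(self.classifier(feat))           # similarity result
```

In use, two frames shaped (N, 3, H, W) are fed in, and a score above a chosen threshold marks the pair as similar, so that one of the two frames can be deleted.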
Optionally, the video description generation model comprises a decoder network and an encoder network.
The encoder network is used for extracting the characteristics of a plurality of input video frames and generating the visual characteristics of the target video.
The decoder network is used for sequentially generating decoded words according to the visual characteristics and combining the generated decoded words into a video description.
Optionally, the encoder network comprises an input layer, a plurality of convolutional layers, and a splicing layer.
The input layer is used for acquiring a plurality of input video frames.
The plurality of convolutional layers are respectively used for extracting the characteristics of a plurality of video frames.
The splicing layer is used for splicing the characteristics of the video frames to generate visual characteristics.
Optionally, the decoder network is a long-short term memory network.
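A rough PyTorch-style sketch of this encoder/decoder arrangement follows, assuming a fixed number of sampled frames and simple greedy decoding; the feature dimensions, the small convolutional stack (a pretrained CNN could equally supply the per-frame features), and the vocabulary handling are illustrative assumptions only.

```python
# Hedged sketch of the encoder/decoder structure described above: per-frame
# convolutional features are spliced into a visual feature, and an LSTM
# decoder emits words one by one. Dimensions and decoding are assumptions.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, frames):                           # frames: (N, T, 3, H, W)
        n, t = frames.shape[:2]
        x = self.conv(frames.flatten(0, 1)).flatten(1)   # (N*T, feat_dim)
        return x.view(n, t, -1).flatten(1)               # spliced visual feature: (N, T*feat_dim)

class LSTMDecoder(nn.Module):
    def __init__(self, visual_dim, vocab_size, hidden=512):
        super().__init__()
        self.init_h = nn.Linear(visual_dim, hidden)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, visual, max_len=20, bos_id=1):
        h = torch.tanh(self.init_h(visual))              # initialise state from the visual feature
        c = torch.zeros_like(h)
        token = torch.full((visual.size(0),), bos_id, dtype=torch.long)
        words = []
        for _ in range(max_len):                         # sequentially generate decoded words
            h, c = self.lstm(self.embed(token), (h, c))
            token = self.out(h).argmax(dim=-1)
            words.append(token)
        return torch.stack(words, dim=1)                 # word ids combined into a description
```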
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Accordingly, as shown in fig. 10, the present disclosure further provides an electronic device 80, which includes a processor 81; a memory 82 for storing executable instructions, the memory 82 comprising a computer program 83; wherein the processor 81 is configured to perform any of the methods described above.
The Processor 81 executes a computer program 83 included in the memory 82, and the Processor 81 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 82 stores the computer program of the above method, and the memory 82 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the apparatus may cooperate with a network storage device that performs a storage function of the memory through a network connection. The memory 82 may be an internal storage unit of the device 80, such as a hard disk or a memory of the device 80. The memory 82 may also be an external storage device of the device 80, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) Card, Flash memory Card (Flash Card), etc. provided on the device 80. Further, the memory 82 may include both an internal storage unit and an external storage device of the device 80. The memory 82 is used for storing the computer program 83 as well as other programs and data required by the device. The memory 82 may also be used to temporarily store data that has been output or is to be output.
The various embodiments described herein may be implemented using computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, and an electronic unit designed to perform the functions described herein. For a software implementation, an implementation such as a process or a function may be realized by a separate software module that performs at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory and executed by the controller.
The electronic device 80 includes, but is not limited to, the following forms: (1) a mobile terminal: such devices have mobile communication capabilities and are primarily aimed at providing voice and data communications; such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, low-end phones, and the like; (2) an ultra-mobile personal computer device: such equipment belongs to the category of personal computers, has computing and processing functions, and generally supports mobile internet access; such terminals include PDA, MID, and UMPC devices, such as the iPad; (3) a server: a device that provides computing services; a server comprises a processor, a hard disk, a memory, a system bus, and the like, is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, security, scalability, manageability, and the like, because highly reliable services need to be provided; (4) other electronic devices with computing capabilities. The device may include, but is not limited to, a processor 81 and a memory 82. Those skilled in the art will appreciate that fig. 10 is merely an example of the electronic device 80 and does not constitute a limitation of the electronic device 80, which may include more or fewer components than shown, combine certain components, or have different components; for example, the device may also include an input-output device, a network access device, a bus, a camera device, and the like.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of an apparatus to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the above-described method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The foregoing are merely preferred embodiments of the present disclosure and are not intended to limit it to the specific embodiments described herein; all changes, equivalents, and modifications that come within the spirit and scope of the disclosure are intended to be protected.

Claims (10)

1. A method for acquiring a video description generation model is characterized by comprising the following steps:
acquiring a plurality of videos from a preset video library;
for each video, identifying each video frame in the video to extract characters in the video frame;
combining the characters corresponding to the video frames of each video to serve as the video description of the video;
and training the video frames and the video descriptions corresponding to the videos respectively as training samples to obtain a video description generation model.
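For readers who want a concrete picture of claim 1, the non-normative Python sketch below mirrors its steps: frames are taken from each video, the characters in every frame are recognized, and the per-frame text is combined into the video description used as the training label. `sample_frames` and `recognize_text` are hypothetical stand-ins for any frame-extraction utility and any OCR engine; neither name is defined by the disclosure.

```python
# Non-normative sketch of the training-sample construction in claim 1.
# `sample_frames` and `recognize_text` are hypothetical placeholders.
def build_training_samples(videos, sample_frames, recognize_text):
    samples = []
    for video in videos:
        frames = sample_frames(video)                  # video frames of this video
        texts = [recognize_text(f) for f in frames]    # characters extracted per frame
        description = " ".join(t for t in texts if t)  # combined as the video description
        samples.append((frames, description))          # one training sample per video
    return samples
```

Each (frames, description) pair then serves as a training sample for the video description generation model.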
2. The method of claim 1, wherein after identifying each video frame in the video to extract text in the video frame, the method further comprises:
and matching the characters corresponding to each video frame with a pre-stored slogan text, and deleting the characters that match the slogan text.
3. The method of claim 1, wherein after identifying each video frame in the video to extract text in the video frame, the method further comprises:
performing word segmentation on characters corresponding to all video frames in the video to obtain a plurality of word sequences;
and deleting the word sequences with an occurrence frequency not less than a set value.
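The filtering in claim 3 can be pictured with the short, non-normative sketch below; the segmenter and the set value are assumptions (for Chinese text, a tokenizer such as jieba could play the role of `segment`).

```python
# Non-normative sketch of claim 3: segment the characters gathered from all
# frames of a video, then drop word sequences whose frequency reaches a set
# value (typically repeated overlays such as watermarks or captions).
from collections import Counter

def filter_frequent_sequences(frame_texts, segment, threshold=5):
    sequences = [w for text in frame_texts for w in segment(text)]
    counts = Counter(sequences)
    return [w for w in sequences if counts[w] < threshold]  # keep the rarer sequences
```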
4. The method of claim 1, wherein after identifying each video frame in the video to extract text in the video frame, the method further comprises:
for each video frame in each video, comparing the video frame with other video frames in the video one by one to determine whether the video frame is similar to any one of the other video frames;
and if so, deleting one of the video frames, and combining the characters corresponding to the two video frames to serve as the characters corresponding to the video frame that is not deleted.
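Claim 4 can likewise be illustrated by a non-normative sketch in which `is_similar` stands in for any frame-similarity check (for example, the classification network sketched earlier); one of two similar frames is dropped and its characters are merged into the frame that is kept.

```python
# Non-normative sketch of claim 4: drop duplicate-looking frames and merge
# their recognized characters into the surviving frame's text.
def dedup_frames(frames, texts, is_similar):
    kept_frames, kept_texts = [], []
    for frame, text in zip(frames, texts):
        for i, kept in enumerate(kept_frames):
            if is_similar(frame, kept):
                kept_texts[i] = (kept_texts[i] + " " + text).strip()  # merge characters
                break
        else:                                   # no similar frame found: keep this one
            kept_frames.append(frame)
            kept_texts.append(text)
    return kept_frames, kept_texts
```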
5. A method for generating a video description, comprising:
acquiring a target video;
taking the video frame of the target video as the input of a video description generation model, so as to obtain the video description corresponding to the target video from the video description generation model; wherein the video description generation model is trained based on video frames and video descriptions corresponding to a plurality of videos, and the video description of each video is generated by: identifying each video frame in the video to extract characters in the video frame, and combining the characters corresponding to the video frames of the video to serve as the video description of the video.
6. The method of claim 5, further comprising, after said obtaining the target video:
for each video frame in the target video, comparing the video frame with other video frames in the target video one by one to determine whether the video frame is similar to any one of the other video frames;
and if so, deleting one of the video frames.
7. An apparatus for obtaining a video description generative model, comprising:
the video acquisition module is used for acquiring a plurality of videos from a preset video library;
the character extraction module is used for identifying each video frame in the videos so as to extract characters in the video frame;
the video description acquisition module is used for combining characters corresponding to video frames of each video to serve as video description of the video;
and the model training module is used for training the video frames and the video descriptions corresponding to the videos respectively as training samples to obtain a video description generation model.
8. A video description generation apparatus, comprising:
the target video acquisition module is used for acquiring a target video;
the video description generation module is used for taking a video frame of the target video as the input of a video description generation model, so as to obtain a video description corresponding to the target video from the video description generation model; wherein the video description generation model is trained based on video frames and video descriptions corresponding to a plurality of videos, and the video description of each video is generated by: identifying each video frame in the video to extract characters in the video frame, and combining the characters corresponding to the video frames of the video to serve as the video description of the video.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 6.
10. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 1 to 6.
CN201911051111.4A 2019-10-31 2019-10-31 Video description generation model obtaining method, video description generation method and device Active CN110781345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911051111.4A CN110781345B (en) 2019-10-31 2019-10-31 Video description generation model obtaining method, video description generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911051111.4A CN110781345B (en) 2019-10-31 2019-10-31 Video description generation model obtaining method, video description generation method and device

Publications (2)

Publication Number Publication Date
CN110781345A true CN110781345A (en) 2020-02-11
CN110781345B CN110781345B (en) 2022-12-27

Family

ID=69387914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911051111.4A Active CN110781345B (en) 2019-10-31 2019-10-31 Video description generation model obtaining method, video description generation method and device

Country Status (1)

Country Link
CN (1) CN110781345B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004072504A (en) * 2002-08-07 2004-03-04 Sony Corp Device, method and system for displaying image, program and recording medium
JP2004282402A (en) * 2003-03-14 2004-10-07 Toshiba Corp Content processing device and program
CA2528506A1 (en) * 2004-11-30 2006-05-30 Oculus Info Inc. System and method for interactive multi-dimensional visual representation of information content and properties
US20140112527A1 (en) * 2012-10-18 2014-04-24 Microsoft Corporation Simultaneous tracking and text recognition in video frames
US20160275642A1 (en) * 2015-03-18 2016-09-22 Hitachi, Ltd. Video analysis and post processing of multiple video streams
CN105279495A (en) * 2015-10-23 2016-01-27 天津大学 Video description method based on deep learning and text summarization
CN106202349A (en) * 2016-06-29 2016-12-07 杭州华三通信技术有限公司 Web page classifying dictionary creation method and device
CN109964490A (en) * 2016-11-22 2019-07-02 脸谱公司 Enhance live video
US20190130580A1 (en) * 2017-10-26 2019-05-02 Qualcomm Incorporated Methods and systems for applying complex object detection in a video analytics system
CN110119735A (en) * 2018-02-06 2019-08-13 上海全土豆文化传播有限公司 The character detecting method and device of video
CN108259991A (en) * 2018-03-14 2018-07-06 优酷网络技术(北京)有限公司 Method for processing video frequency and device
CN108683924A (en) * 2018-05-30 2018-10-19 北京奇艺世纪科技有限公司 A kind of method and apparatus of video processing
CN110163051A (en) * 2018-07-31 2019-08-23 腾讯科技(深圳)有限公司 Text Extraction, device and storage medium
CN109409221A (en) * 2018-09-20 2019-03-01 中国科学院计算技术研究所 Video content description method and system based on frame selection
CN110019817A (en) * 2018-12-04 2019-07-16 阿里巴巴集团控股有限公司 A kind of detection method, device and the electronic equipment of text in video information
CN109740152A (en) * 2018-12-25 2019-05-10 腾讯科技(深圳)有限公司 Determination method, apparatus, storage medium and the computer equipment of text classification
CN109918509A (en) * 2019-03-12 2019-06-21 黑龙江世纪精彩科技有限公司 Scene generating method and scene based on information extraction generate the storage medium of system
CN110097096A (en) * 2019-04-16 2019-08-06 天津大学 A kind of file classification method based on TF-IDF matrix and capsule network
CN110147745A (en) * 2019-05-09 2019-08-20 深圳市腾讯计算机***有限公司 A kind of key frame of video detection method and device
CN110377787A (en) * 2019-06-21 2019-10-25 北京奇艺世纪科技有限公司 A kind of video classification methods, device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
任永功等: "基于信息增益特征关联树的文本特征选择算法", 《计算机科学》 *
梁栋等: "一种基于相关性分析与对数搜索聚类的跨媒体检索方法", 《中国科技论文》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792166A (en) * 2021-08-18 2021-12-14 北京达佳互联信息技术有限公司 Information acquisition method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110781345B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
CN109117777B (en) Method and device for generating information
CN111368893B (en) Image recognition method, device, electronic equipment and storage medium
CN109146892B (en) Image clipping method and device based on aesthetics
CN108694217B (en) Video label determination method and device
CN112559800B (en) Method, apparatus, electronic device, medium and product for processing video
CN112954450B (en) Video processing method and device, electronic equipment and storage medium
CN110688524B (en) Video retrieval method and device, electronic equipment and storage medium
CN106611015B (en) Label processing method and device
CN113850162B (en) Video auditing method and device and electronic equipment
US8396303B2 (en) Method, apparatus and computer program product for providing pattern detection with unknown noise levels
CN110807472B (en) Image recognition method and device, electronic equipment and storage medium
CN111177470A (en) Video processing method, video searching method and terminal equipment
CN111314732A (en) Method for determining video label, server and storage medium
CN111836118B (en) Video processing method, device, server and storage medium
CN111144370A (en) Document element extraction method, device, equipment and storage medium
CN113705300A (en) Method, device and equipment for acquiring phonetic-to-text training corpus and storage medium
CN113221918A (en) Target detection method, and training method and device of target detection model
CN113205047A (en) Drug name identification method and device, computer equipment and storage medium
CN116977774A (en) Image generation method, device, equipment and medium
CN110781345B (en) Video description generation model obtaining method, video description generation method and device
CN113705666B (en) Split network training method, use method, device, equipment and storage medium
CN115098729A (en) Video processing method, sample generation method, model training method and device
CN115080770A (en) Multimedia data processing method and device, electronic equipment and readable storage medium
CN111046232B (en) Video classification method, device and system
CN115004245A (en) Target detection method, target detection device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant