CN109800326B - Video processing method, device, equipment and storage medium - Google Patents

Info

Publication number
CN109800326B
CN109800326B (application CN201910069610.XA)
Authority
CN
China
Prior art keywords
video
sentence
determining
description
word segmentation
Prior art date
Legal status
Active
Application number
CN201910069610.XA
Other languages
Chinese (zh)
Other versions
CN109800326A
Inventor
杨芷
仇贲
Current Assignee
Guangzhou Huya Information Technology Co Ltd
Original Assignee
Guangzhou Huya Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Information Technology Co Ltd
Priority to CN201910069610.XA
Publication of CN109800326A
Application granted
Publication of CN109800326B
Legal status: Active
Anticipated expiration

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a video processing method, a video processing device, video processing equipment and a storage medium. The method comprises: determining a reference video, wherein the reference video has a category and a video description sentence; determining candidate videos belonging to the category, the candidate videos having video description sentences; and selecting, from the candidate videos, a target video related to the reference video, wherein the video description sentence of the target video is semantically related to that of the reference video. This solves the problem that the degree of correlation between videos is poorly judged when isolated words such as keywords and/or tags, which cannot reflect the complete video content, are used, increases the accuracy of judging video correlation, and thereby increases the user's watching duration and click rate.

Description

Video processing method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of video playing platforms, in particular to a video processing method, a video processing device, video processing equipment and a storage medium.
Background
The number of videos on a video sharing platform is huge and grows daily, so a user must spend a great deal of time finding videos of interest.
To solve this problem, video sharing platforms generally adopt personalized video recommendation technology to recommend videos that may interest the user. Typically, keywords in a video title and/or tags of videos are used to compute the correlation between videos, on the basis of which videos are recommended. However, keywords and/or video tags carry little of a video's related information and cannot reflect its real content, so the recommendation effect is poor.
Disclosure of Invention
The invention provides a video processing method, a video processing device, video processing equipment and a storage medium, which are used for prolonging the watching time of a user and increasing the click rate.
In a first aspect, an embodiment of the present invention provides a video processing method, including:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
Further, selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video, comprises:
converting the video description sentence into a sentence vector of the reference video;
determining sentence vectors into which the video description sentences of the candidate videos are converted;
determining similarity of sentence vectors of the reference video and sentence vectors of the candidate videos;
and determining a target video related to the reference video from the candidate videos according to the similarity.
Further, converting the video description sentence into a sentence vector of the reference video comprises:
performing word segmentation processing on the video description sentence according to the word segmentation mode corresponding to the category to obtain reference participles;
and converting the reference participles into a sentence vector of the reference video according to the vector conversion mode corresponding to the category.
Further, performing word segmentation processing on the video description sentence according to the word segmentation mode corresponding to the category to obtain the reference participles comprises:
performing word segmentation processing on the video description sentence to obtain the reference participles of the reference video;
determining stop words corresponding to the category;
and removing, from the reference participles of the reference video, the participles identical to the stop words.
Further, converting the reference participles into a sentence vector of the reference video according to the vector conversion mode corresponding to the category comprises:
determining a sentence vector conversion model corresponding to the category;
and converting the video description sentence by using the sentence vector conversion model to obtain a sentence vector of the reference video.
Further, determining similarity between the sentence vector of the reference video and the sentence vector of the candidate video includes:
calculating a distance between the sentence vector of the candidate video and the sentence vector of the reference video;
and taking the distance as the similarity of the sentence vector of the reference video and the sentence vector of the candidate video.
Further, the method further comprises:
if the number of the target videos is lower than a preset number threshold, selecting target videos from preset candidate videos until the number of the target videos equals the number threshold;
wherein the preset candidate videos comprise at least one of:
videos belonging to the same user as the reference video, videos meeting a preset first popularity condition within the category, and videos meeting a preset second popularity condition across all categories.
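The padding step above can be sketched as follows; the function name, the plain string identifiers, and the ordering of the fallback pool are illustrative assumptions rather than details given by the patent.

```python
def pad_targets(targets, fallback_candidates, threshold):
    """If fewer related target videos than the preset number threshold
    were found, pad the result from a preset fallback pool (e.g. videos
    by the same uploader, popular videos within the category, or popular
    videos across all categories), skipping duplicates."""
    result = list(targets)
    for video in fallback_candidates:
        if len(result) >= threshold:
            break
        if video not in result:
            result.append(video)
    return result

# Hypothetical example: two related videos found, threshold of four.
print(pad_targets(["v1", "v2"], ["v3", "v1", "v4", "v5"], 4))
# ['v1', 'v2', 'v3', 'v4']
```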
In a second aspect, an embodiment of the present invention provides a video processing method, including:
sending the reference video to a client for playing;
sending the video information of the target video to the client for displaying;
the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In a third aspect, an embodiment of the present invention provides a video processing method, including:
playing a reference video sent by a server;
receiving video information of a target video sent by a server;
displaying the video information;
the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In a fourth aspect, an embodiment of the present invention provides a video processing apparatus, including:
a reference video determination module for determining a reference video, the reference video having a category and a video description sentence;
a candidate video determination module for determining candidate videos belonging to the category, the candidate videos having video description sentences;
and the target video determining module is used for selecting a target video related to the reference video from the candidate videos, and the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In a fifth aspect, an embodiment of the present invention provides a video processing apparatus, including:
the reference video sending module is used for sending the reference video to the client for playing;
the video information sending module is used for sending the video information of the target video to the client for displaying;
the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In a sixth aspect, an embodiment of the present invention provides a video processing apparatus, including:
the reference video playing module is used for playing a reference video sent by the server;
the video information receiving module is used for receiving the video information of the target video sent by the server;
the video information display module is used for displaying the video information;
the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In a seventh aspect, an embodiment of the present invention provides a video processing apparatus, including: a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video processing method according to any one of the first, second, and third aspects.
In an eighth aspect, embodiments of the present invention provide a storage medium containing computer-executable instructions for performing a video processing method as described in any one of the first, second and third aspects when executed by a computer processor.
By determining a reference video that has a category and a video description sentence, determining candidate videos that belong to the category and have video description sentences, and selecting from the candidate videos a target video whose video description sentence is semantically related to that of the reference video, the technical scheme solves the problem that the degree of correlation between videos is poorly judged when words such as keywords and/or tags, which cannot reflect the complete video content, are used, increases the accuracy of judging video correlation, and thereby increases the user's watching duration and click rate.
Drawings
Fig. 1A is a flowchart of a video processing method according to an embodiment of the present invention;
fig. 1B is a schematic structural diagram of a video processing system according to an embodiment of the present invention;
fig. 1C is a schematic interface diagram of a client according to an embodiment of the present invention;
fig. 2 is a flowchart of a video processing method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video processing apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video processing apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video processing apparatus according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video processing apparatus according to a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1A is a flowchart of a video processing method according to the first embodiment of the present invention. This embodiment is applicable to analyzing the correlation between videos; the application scene is not limited, and any scene in which the correlation between videos is determined through semantic analysis of video content is applicable. The method may be executed by video processing equipment, which this embodiment does not limit: it may be a computer, a mobile phone, a tablet, a server, and the like. In this embodiment, a server is taken as the example of the video processing equipment.
Referring to fig. 1A, the video processing method specifically includes the following steps:
s110, determining a reference video, wherein the reference video has a category and a video description sentence.
The video processing method provided in this embodiment is mainly used for determining, from candidate videos, a target video related to a reference video. Further, according to the correlation between the videos, the videos may be tagged or recommended. Specifically, this embodiment takes a video sharing website as the example for detailed description.
In one embodiment, tagging may be adding a tag of the target video to the reference video according to the degree of correlation between the reference video and the target video. The reference video may be a video uploaded by a user, and the target video is a video that has already been uploaded to the video sharing website and tagged. A tag is a phrase that represents video content.
In another embodiment, the reference video may be a video the user is watching, has watched, or has collected and liked, among others. The candidate videos are videos that have been uploaded to the video sharing website, and the target video is a video related to the reference video selected from the candidate videos. Video recommendation may be recommending to the user the target video related to the reference video according to the degree of correlation between the two.
Further, in this embodiment, the category may be a classification according to video content. The category of the reference video may be determined by a preset category of the video sharing website selected by the user, or determined by the video sharing website according to the channel to which the video was uploaded or an analysis of the video content.
Further, in this embodiment, the video description sentence is in text form. This embodiment does not limit the video description sentence: it may be one sentence or several sentences describing the video content. Illustratively, the video description sentence may be the title, synopsis, or a comment of the video. For clarity, this embodiment takes the title of the video as the example of the video description sentence.
And S120, determining candidate videos belonging to the category, wherein the candidate videos have video description sentences.
In this embodiment, the candidate videos are videos belonging to the same category as the reference video, although the candidate videos need not be limited to the same category as the reference video. By restricting the category of the candidate videos, on one hand the search space for the target video is reduced, which speeds up determining the target video; on the other hand, when the candidate videos and the reference video belong to the same category, the accuracy of determining the target video is increased.
S130, selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In this embodiment, the correlation between the reference video and the target video may be determined by determining the semantics of the video content of the reference video. Further, in the embodiment, semantic analysis is mainly performed on the reference video through the video description sentence of the reference video.
Semantics is the meaning carried by a language. In brief, language is a carrier of symbols: the symbols themselves have no meaning, and only a symbol endowed with meaning can be used; at that point language is turned into information, and the meaning of the language (of its symbols) is its semantics. In computer science, semantics generally refers to the user's interpretation of the computer representations (i.e., symbols) used to describe the real world, that is, the way the user relates a computer representation to the real world. For example, the symbols "stool" and "chair" are encoded differently in a computer, but semantically both represent the same kind of item in the real world. Thus, for a computer to recognize the correlation between the symbols "stool" and "chair", the semantics of the two symbols must be analyzed. Generally, the semantics of symbols in a computer can be determined with natural language processing technology, which this embodiment does not limit.
It should be noted that semantics has domain characteristics: there is no semantics that belongs to no domain. Semantic heterogeneity refers to a difference in the interpretation of the same thing, that is, the same thing is understood differently in different fields. To ensure the correctness of the semantic analysis of the video description sentence, this embodiment reduces the interpretation differences caused by semantic heterogeneity by restricting the candidate videos to videos belonging to the same category as the reference video, thereby increasing the accuracy of determining the degree of correlation between the reference video and the target video.
Further, unlike the technical scheme in which keywords and/or tags are used to determine the degree of correlation between the reference video and the target video, using the whole video description sentence preserves the syntactic structure of the sentence, so the video description sentence of the determined target video is partially or completely consistent with that of the reference video in syntactic structure. Therefore, by requiring that the video description sentence of the target video be semantically related to that of the reference video, the degree of correlation between the reference video and the target video can be determined accurately and specifically.
According to the technical scheme of this embodiment, a reference video having a category and a video description sentence is determined; candidate videos belonging to the category and having video description sentences are determined; and a target video whose video description sentence is semantically related to that of the reference video is selected from the candidate videos. Unlike determining the degree of correlation with isolated words such as keywords and/or tags, this technical scheme determines the correlation between the reference video and the target video with the complete video description sentence. This solves the problem that the degree of correlation between videos is poorly judged when keywords and/or tags, which cannot reflect the complete video content, are used, increases the accuracy of judging video correlation, and, when the video processing method is applied to the scene of video recommendation, further increases the user's watching duration and click rate.
Further, this embodiment describes the application scene in which the video processing method is applied to video recommendation. Based on the above embodiments, fig. 1B is a schematic structural diagram of a video processing system according to an embodiment of the present invention. The video processing system comprises a client and a server communicatively connected through a network. In the video processing system, the target video is determined by: determining a reference video, wherein the reference video has a category and a video description sentence; determining candidate videos belonging to the category, the candidate videos having video description sentences; and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
In one embodiment, the video processing method is applied to a server: the server sends the reference video to a client for playing, and sends the video information of the target video to the client for display.
Specifically, after receiving a play request for the reference video from the client 10, the server 20 of the video sharing website responds by sending the reference video to the client 10 for playing. Further, the target video related to the reference video may be determined by the determination method described above. The video information may include the title of the target video, the nickname of the uploading user, the view count, the comment content, the number of comments, a preview, and the like. The uploading user is the user who uploaded the target video; the view count is the total number of viewers of the target video recorded by the video sharing website; the comment content consists of comments posted by viewers of the target video; the number of comments is the count of those comments; and the preview may be a screenshot or an animated excerpt of the target video. The video information can be obtained from the database of the video sharing website.
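As a sketch of the kind of video information payload described above, the record below uses hypothetical field names; the patent lists the information items but not their representation, so everything here is an illustrative assumption.

```python
from dataclasses import dataclass, asdict

@dataclass
class VideoInfo:
    """Illustrative shape of the video information the server sends to
    the client for display; all field names are assumptions."""
    title: str
    uploader_nickname: str
    view_count: int
    comment_count: int
    comments: list
    preview_url: str

info = VideoInfo(
    title="Game A highlights",
    uploader_nickname="player01",
    view_count=1024,
    comment_count=2,
    comments=["nice play", "gg"],
    preview_url="https://example.com/preview.png",  # hypothetical URL
)
payload = asdict(info)  # dict form, e.g. for JSON serialization before sending
```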
In an embodiment, the video processing method is applied to the client 10. Fig. 1C is a schematic interface diagram of a client according to an embodiment of the present invention. Referring to fig. 1C, the client 10 may play the reference video sent by the server 20, receive the video information of the target video sent by the server 20, and display the video information.
Specifically, the client 10 includes: a first display area 11, a second display area 12 and a third display area 13, wherein the first display area 11 is used for playing a reference video sent by the server 20; the second display area 12 is used for displaying video description sentences of the reference video, such as titles, brief descriptions, and the like; the third display area 13 is used to display video information received from the server 20.
It should be noted that the interface schematic diagram of the client 10 shown in fig. 1C is only one interface display manner of the client 10, and further, the third display area may also be displayed in a pop-up window form.
Example two
Fig. 2 is a flowchart of a video processing method according to the second embodiment of the present invention, which further details the first embodiment. The method specifically includes the following steps:
s210, determining a reference video, wherein the reference video has a category and a video description sentence.
The video processing method provided in this embodiment is mainly used for determining, from candidate videos, a target video related to a reference video. Further, according to the correlation between the videos, the videos may be tagged or recommended. Specifically, this embodiment takes a video sharing website as the example for detailed description.
S220, determining candidate videos belonging to the category, wherein the candidate videos have video description sentences.
In this embodiment, the candidate videos are videos belonging to the same category as the reference video, although the candidate videos need not be limited to the same category as the reference video. By restricting the category of the candidate videos, on one hand the search space for the target video is reduced, which speeds up determining the target video; on the other hand, when the candidate videos and the reference video belong to the same category, the accuracy of determining the target video is increased.
In this embodiment, the correlation between the reference video and the target video may be determined by determining the semantics of the video content of the reference video; the semantic analysis is mainly performed on the video description sentence of the reference video. For example, the semantic analysis may use natural language processing technology; this embodiment is explained by determining a sentence vector from the video description sentence.
And S230, converting the video description sentence into a sentence vector of the reference video.
In this embodiment, the video description sentence is in text form. In a computer, the sentence consists of characters, a character being one unit of information: for natural languages using alphabetic or syllabic writing, it corresponds approximately to a phoneme, a phoneme-like unit, or a symbol; simply put, a Chinese character, a kana, a hangul, or a letter of English or another Western language. Characters are represented with character encodings, but the encodings themselves carry no semantics; for example, it cannot be determined from character encodings alone whether two different words denote the same item. Therefore, the video description sentence can be converted into a sentence vector representation with natural language processing technology; the degree of correlation between the reference video and the target video can then be determined by calculating the distance between sentence vectors, and whether the video description sentences are semantically related is determined according to that degree of correlation.
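The patent leaves the distance measure between sentence vectors open; cosine similarity is one common choice, sketched below with toy vectors standing in for the model's output (the vector values are illustrative assumptions):

```python
import math

def cosine_similarity(u, v):
    """Similarity of two sentence vectors; a larger value means the two
    video description sentences are closer in meaning."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy sentence vectors standing in for the output of a sentence vector
# conversion model; a real system would compare the reference video's
# vector against each candidate's and keep the most similar candidates.
ref = [0.9, 0.1, 0.3]
related = [0.8, 0.2, 0.35]
unrelated = [-0.7, 0.9, -0.1]
assert cosine_similarity(ref, related) > cosine_similarity(ref, unrelated)
```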
It should be noted that, since semantics has domain characteristics, there is no semantics that belongs to no domain; semantic heterogeneity refers to a difference in the interpretation of the same thing, that is, the same thing is understood differently in different fields. Further, to ensure the correctness of the semantic analysis of the video description sentence, this embodiment converts the video description sentence into a sentence vector suited to the category; that is, when converting to a sentence vector, the categories of the reference video and the candidate videos are taken as the domain factor, so that domains can be distinguished by category.
This embodiment does not limit the language of the video description sentence and takes Chinese as the example for detailed description. Unlike text based on an alphabetic system, Chinese text is not divided into words, so when natural language processing is performed on a Chinese video description sentence, word segmentation must be performed on the sentence first. Specifically, word segmentation is performed on the video description sentence according to the word segmentation mode corresponding to the category to obtain the reference participles; the reference participles are then converted into a sentence vector of the reference video according to the vector conversion mode corresponding to the category.
It should be noted that, because Chinese characters carry semantic information, different characters have different meanings and uses in different fields, and the word-formation rules of the characters differ accordingly. Therefore, the domain must be considered when performing word segmentation on the video description sentence. In this embodiment, the domain is determined by the category of the reference video, and the video description sentence is segmented according to the word segmentation mode corresponding to that category.
Further, the segmentation proceeds as follows: word segmentation is performed on the video description sentence to obtain the reference participles of the reference video; the stop words corresponding to the category are determined; and the participles identical to the stop words are removed from the reference participles of the reference video.
Specifically, a stop word is a word or character that needs to be filtered out when the video description sentence is processed. Stop words may be stored in a stop word table, and the stop words corresponding to a category may be determined from the stop word table corresponding to that category. In this embodiment, the stop words include general stop words and category-specific stop words. General stop words may be the function words of human language, which have no actual meaning compared with other words, such as prepositions and articles. Category-specific stop words are words widely used within the category but contributing little to the accuracy of the semantic analysis of the video description sentence; for example, if the category of the content in the reference video is live broadcasts of game A, the name "A" of game A is a stop word within that category.
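A minimal sketch of the stop word filtering described above, assuming the sentence has already been segmented (a real system would use a Chinese word segmenter, which is outside this sketch); the stop word tables and the category key are illustrative assumptions:

```python
GENERAL_STOP_WORDS = {"的", "了", "在"}        # function words (illustrative)
CATEGORY_STOP_WORDS = {"game_a": {"A"}}        # per-category table (illustrative)

def filter_participles(tokens, category):
    """Remove both general stop words and the stop words of the given
    category from the segmented video description sentence."""
    stop = GENERAL_STOP_WORDS | CATEGORY_STOP_WORDS.get(category, set())
    return [t for t in tokens if t not in stop]

# Participles as a segmenter might produce them for a game A video title.
tokens = ["A", "精彩", "的", "操作", "集锦"]
print(filter_participles(tokens, "game_a"))  # ['精彩', '操作', '集锦']
```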
Optionally, before the word segmentation processing that yields the reference word segments of the reference video, preset symbols in the video description sentence are removed; the preset symbols may be punctuation marks such as "?" and "。".
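A one-step sketch of this preprocessing: the exact symbol set is an assumption, since the embodiment only names marks such as "?" and "。".

```python
import re

# Sketch: strip preset punctuation before segmentation. The symbol set
# below is an assumed example, not prescribed by the embodiment.
PRESET_SYMBOLS = re.compile(r"[?？。.!！,，]")

def strip_symbols(sentence: str) -> str:
    """Remove all preset punctuation marks from the sentence."""
    return PRESET_SYMBOLS.sub("", sentence)

print(strip_symbols("今天吃什么？红烧肉。"))  # → '今天吃什么红烧肉'
```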
In addition, performing word segmentation in the word segmentation mode corresponding to the category mitigates the problem of semantic heterogeneity across domains. A sentence vector conversion model corresponding to the category may then be determined, and the video description sentence converted with that model to obtain the sentence vector of the reference video. The sentence vector conversion model may use an unsupervised algorithm such as the document-to-vector model (Doc2Vec). Doc2Vec produces a fixed-length vector representation of a document; one of its advantages is that it does not restrict the length of the input and accepts sentences of different lengths as training samples. In this embodiment, Doc2Vec is used to convert the video description sentence into the sentence vector of the reference video.
In this embodiment, the Doc2Vec models corresponding to different categories are trained on video description sentence sets of the respective categories. The Distributed Bag of Words (DBOW) algorithm may be selected for training the Doc2Vec model. Specifically, during training, a window value may be set to indicate the maximum distance between the current word and the predicted word within a sentence, for example 5. A negative sampling value may be set to specify the number of noise words; a value of 5 means 5 noise words are drawn. A minimum count value may be set so that words whose frequency is below it are discarded; this value can be adjusted according to the training result of the Doc2Vec model. For example, if training fails with the minimum count set to 5, it is lowered to 3, and if training still fails, to 1. The dimension of the sentence vector, a threshold for randomly down-sampling high-frequency words, and the number of parallel training threads may also be set. In addition, only sentence vectors may be trained, without training word vectors, which speeds up training.
S240, determining the sentence vectors into which the video description sentences of the candidate videos are converted.
In this embodiment, the video description sentences of the candidate videos may be converted into sentence vectors by the Doc2Vec model obtained by the training described above. After conversion, the sentence vectors of the candidate videos may be stored in a database of the video sharing website and read when needed.
S250, determining the similarity between the sentence vector of the reference video and the sentence vectors of the candidate videos.
In this embodiment, the distance between the sentence vector of the candidate video and the sentence vector of the reference video is calculated, and the distance is taken as the similarity between the two sentence vectors. The distance may be a Euclidean distance; the greater the distance, the lower the similarity.
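The Euclidean distance option can be written out directly; variable names here are illustrative.

```python
import math

def euclidean_distance(u, v):
    """Euclidean distance between two sentence vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# The embodiment uses the distance itself as the similarity score, with a
# larger distance meaning a lower similarity, so candidates would be
# ranked by ascending distance.
ref_vec = [1.0, 0.0, 2.0]
cand_vec = [1.0, 2.0, 0.0]
print(euclidean_distance(ref_vec, cand_vec))  # → sqrt(8) ≈ 2.828
```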
And S260, determining a target video related to the reference video from the candidate videos according to the similarity.
In this embodiment, a preset similarity threshold may be set; when the similarity between a candidate video and the reference video exceeds the preset similarity threshold, the candidate video is determined to be a target video related to the reference video.
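Step S260 can be sketched as a threshold filter. Since the previous step uses distance as the similarity measure (larger distance meaning lower similarity), "exceeding the similarity threshold" is modeled here as the distance falling below a preset cutoff; the names and the cutoff value are illustrative assumptions.

```python
# Sketch of S260: keep candidates whose distance to the reference video's
# sentence vector is within a preset cutoff, sorted most-similar first.
def select_targets(ref_vec, candidates, distance_fn, max_distance=1.0):
    """Return (video_id, distance) pairs for candidates close enough to ref."""
    scored = [(vid, distance_fn(ref_vec, vec)) for vid, vec in candidates.items()]
    kept = [(vid, d) for vid, d in scored if d <= max_distance]
    return sorted(kept, key=lambda pair: pair[1])

dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
candidates = {"v1": [0.1, 0.0], "v2": [3.0, 4.0]}
print(select_targets([0.0, 0.0], candidates, dist))  # → [('v1', 0.1)]
```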
According to the technical scheme of this embodiment, a reference video is determined, the reference video having a category and a video description sentence; candidate videos belonging to the category are determined, the candidate videos having video description sentences; the video description sentence is converted into a sentence vector of the reference video; the sentence vectors into which the video description sentences of the candidate videos are converted are determined; the similarity between the sentence vector of the reference video and the sentence vectors of the candidate videos is determined; and the target videos related to the reference video are determined from the candidate videos according to the similarity. Unlike determining relevance from isolated words such as keywords and/or labels, this scheme determines the relevance between the reference video and the target video from the complete video description sentence. This solves the problem that such words cannot capture the complete video content and therefore judge inter-video relevance poorly, improves the accuracy of the relevance judgment, and, when the video processing method is applied to a video recommendation scenario, further increases the user's watching duration and click-through rate.
Further, word segmentation processing is performed on the video description sentence according to the word segmentation mode corresponding to the category to obtain the reference word segments, and the reference word segments are converted into the sentence vector of the reference video according to the vector conversion mode corresponding to the category. Determining the domain from the category of the reference video in this way mitigates semantic heterogeneity, ensures the correctness of the semantic analysis of the video description sentence, and thereby improves the accuracy of determining the target videos of the reference video.
On the basis of the above embodiments, this embodiment explains applying the video processing method to a video recommendation scenario. The reference video may be a video the user is watching, has watched, has favorited, has liked, and the like. The candidate videos are videos that have been uploaded to a video sharing website, and the target videos are the videos related to the reference video selected from the candidate videos. Video recommendation may recommend to the user the target videos related to the reference video according to their relevance. If the number of target videos is lower than a preset count threshold, further target videos are selected from preset candidate videos until the number of target videos equals the count threshold, where those candidate videos include at least one of: videos belonging to the same user as the reference video, videos in the category meeting a preset first popularity condition, and videos across all categories meeting a preset second popularity condition. The first and second popularity conditions may be determined from the video's number of likes, number of shares, number of comments, number of viewers, and watching duration.
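The top-up fallback described above can be sketched as follows; the pool names, their ordering, and the deduplication are illustrative assumptions, since the embodiment only lists the pool types.

```python
# Sketch: if fewer related targets than the preset count are found, pad
# the list from preset candidate pools (e.g. same-uploader videos, videos
# hot within the category, videos hot across all categories).
def pad_targets(targets, fallback_pools, count_threshold):
    """Top up `targets` from the fallback pools, skipping duplicates."""
    result = list(targets)
    seen = set(result)
    for pool in fallback_pools:
        for video in pool:
            if len(result) >= count_threshold:
                return result
            if video not in seen:
                result.append(video)
                seen.add(video)
    return result

print(pad_targets(["t1"], [["t1", "u1"], ["h1", "h2"]], count_threshold=3))
# → ['t1', 'u1', 'h1']
```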
EXAMPLE III
Fig. 3 is a schematic structural diagram of a video processing apparatus according to a third embodiment of the present invention.
This embodiment can be applied to analyzing the relevance between videos, and video recommendation can further be performed according to that relevance. The embodiment does not limit the application scenario; any scenario that performs semantic analysis on video description sentences to determine the relevance between videos is applicable. A video description sentence is text describing the video content; this embodiment does not limit it, and it may be one sentence or several sentences. Illustratively, the video description sentence may be the title, summary, or a comment of the video. For clarity, this embodiment takes the title of a video as the example video description sentence in the detailed description.
Further, the apparatus may be integrated in a video processing device; this embodiment does not limit the device, which may be a computer, a mobile phone, a tablet, a server, and the like. The description in this embodiment takes a server as the example video processing device.
Referring to fig. 3, the video processing apparatus specifically includes the following structure: a reference video determination module 310, a candidate video determination module 320, and a target video determination module 330.
A reference video determining module 310, configured to determine a reference video, where the reference video has a category and a video description sentence.
A candidate video determination module 320, configured to determine candidate videos belonging to the category, where the candidate videos have video description sentences.
A target video determining module 330, configured to select a target video related to the reference video from the candidate videos, where a video description sentence of the target video is semantically related to a video description sentence of the reference video.
According to the technical scheme of this embodiment, a reference video is determined, the reference video having a category and a video description sentence; candidate videos belonging to the category are determined, the candidate videos having video description sentences; and target videos related to the reference video are selected from the candidate videos, the video description sentences of the target videos being semantically related to the video description sentence of the reference video. This solves the problem that words such as keywords and/or labels cannot capture the complete video content and therefore judge inter-video relevance poorly, improves the accuracy of judging video relevance, and further increases the user's watching duration and click-through rate.
On the basis of the above technical solution, the target video determining module 330 includes:
a first sentence vector determination submodule for converting the video description sentence into a sentence vector of the reference video.
And the second sentence vector determination submodule is used for determining sentence vectors converted from the video description sentences of the candidate videos.
And the similarity determining submodule is used for determining the similarity of the sentence vector of the reference video and the sentence vector of the candidate video.
And the target video determining sub-module is used for determining the target video related to the reference video from the candidate videos according to the similarity.
On the basis of the above technical solution, the first sentence vector determination submodule includes:
and the reference word segmentation obtaining unit is used for carrying out word segmentation processing on the video description sentence according to the word segmentation mode corresponding to the category to obtain a reference word segmentation.
And the sentence vector determining unit is used for converting the reference word segmentation into the sentence vector of the reference video according to the vector conversion mode corresponding to the category.
On the basis of the above technical scheme, the reference word segmentation obtaining unit includes:
and the reference word segmentation determining subunit is used for performing word segmentation processing on the video description sentence to obtain a reference word segmentation of the reference video.
And the stop word determining subunit is used for determining the stop words corresponding to the categories.
A stop word removing subunit, configured to remove a reference word that is the same as the stop word from the reference word of the reference video.
On the basis of the above technical solution, the sentence vector determination unit includes:
and the model determining subunit is used for determining the sentence vector conversion model corresponding to the category.
And the sentence vector conversion subunit is used for converting the video description sentence by using the sentence vector conversion model to obtain a sentence vector of the reference video.
On the basis of the technical scheme, the similarity determining submodule comprises:
a distance calculation unit for calculating a distance between the sentence vector of the candidate video and the sentence vector of the reference video.
And a similarity determining unit for taking the distance as the similarity of the sentence vector of the reference video and the sentence vector of the candidate video.
On the basis of the technical scheme, the device further comprises:
the target video selection module is used for selecting a target video from preset candidate videos if the number of the target videos is lower than a preset number threshold value until the number of the target videos is equal to the number threshold value; wherein the candidate video comprises at least one of: the videos which belong to the same user as the reference video, the videos which meet a preset first heat condition in the category and the videos which meet a preset second heat condition in all the categories.
This product can execute the method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
EXAMPLE four
Fig. 4 is a schematic structural diagram of a video processing apparatus according to a fourth embodiment of the present invention.
The video processing device specifically comprises the following structure: a reference video sending module 410 and a video information sending module 420.
A reference video sending module 410, configured to send a reference video to a client for playing;
the video information sending module 420 is configured to send video information of a target video to the client for display; wherein the target video is determined by: determining a reference video, wherein the reference video has a category and a video description sentence; determining candidate videos belonging to the category, the candidate videos having video description statements; and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
This product can execute the method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a video processing apparatus according to a fifth embodiment of the present invention.
The video processing device specifically comprises the following structure: a reference video playing module 510, a video information receiving module 520, and a video information display module 530.
A reference video playing module 510, configured to play a reference video sent by a server;
a video information receiving module 520, configured to receive video information of a target video sent by a server;
a video information display module 530, configured to display the video information; wherein the target video is determined by: determining a reference video, wherein the reference video has a category and a video description sentence; determining candidate videos belonging to the category, the candidate videos having video description statements; and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
This product can execute the method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
EXAMPLE six
Fig. 6 is a schematic structural diagram of a video processing device according to a sixth embodiment of the present invention. As shown in fig. 6, the video processing device includes a processor 60, a memory 61, an input device 62, and an output device 63. The number of processors 60 in the video processing device may be one or more, and one processor 60 is taken as an example in fig. 6. The number of memories 61 in the video processing device may likewise be one or more, and one memory 61 is taken as an example in fig. 6. The processor 60, the memory 61, the input device 62, and the output device 63 of the video processing device may be connected by a bus or other means; connection by a bus is taken as an example in fig. 6. In an embodiment, the video processing device may be a computer, a mobile phone, a tablet, a server, or the like. The description in this embodiment takes a server as the example video processing device.
The memory 61 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the video processing method according to any embodiment of the present invention (for example, the reference video determining module 310, the candidate video determining module 320, and the target video determining module 330 in the video processing apparatus). The memory 61 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory 61 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 61 may further include memory located remotely from the processor 60, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 62 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the video processing device; it may also include a camera for acquiring images and a sound pickup device for acquiring audio data. The output device 63 may include an audio device such as a speaker. It should be noted that the specific composition of the input device 62 and the output device 63 can be set according to actual requirements.
The processor 60 executes various functional applications of the device and data processing, i.e., implements the above-described video processing method, by executing software programs, instructions, and modules stored in the memory 61.
EXAMPLE seven
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a video processing method, including:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description statements;
and selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the video processing method described above, and may also perform related operations in the video processing method provided by any embodiment of the present invention, and have corresponding functions and advantages.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software plus the necessary general-purpose hardware, and certainly also by hardware alone, though in many cases the former is the preferred embodiment. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk, or an optical disk of a computer, and which includes several instructions that enable a computer device (which may be a robot, a personal computer, a server, or a network device) to execute the video processing method according to any embodiment of the present invention.
It should be noted that, in the above video processing apparatus, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. A video processing method, comprising:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description statements;
selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video;
wherein the selecting a target video related to the reference video from the candidate videos, the video description sentence of the target video being semantically related to the video description sentence of the reference video, comprises:
converting the video description sentence into a sentence vector of the reference video;
determining sentence vectors into which the video description sentences of the candidate videos are converted;
determining similarity of sentence vectors of the reference video and sentence vectors of the candidate videos;
determining a target video related to the reference video from the candidate videos according to the similarity;
wherein the converting the video description sentence into a sentence vector of the reference video comprises:
performing word segmentation processing on the video description sentences according to the word segmentation modes corresponding to the categories to obtain reference words;
converting the reference word segmentation into sentence vectors of the reference video according to the vector conversion mode corresponding to the category;
wherein the performing word segmentation processing on the video description sentences according to the word segmentation modes corresponding to the categories to obtain reference words comprises:
performing word segmentation processing on the video description sentence to obtain a reference word segmentation of the reference video;
determining stop words corresponding to the categories;
removing reference participles identical to the stop word from the reference participles of the reference video.
2. The method of claim 1, wherein converting the reference participle into a sentence vector of the reference video according to a vector conversion manner corresponding to the category comprises:
determining a sentence vector conversion model corresponding to the category;
and converting the video description sentence by using the sentence vector conversion model to obtain a sentence vector of the reference video.
3. The method of claim 1, wherein determining similarity between sentence vectors of the reference video and sentence vectors of the candidate video comprises:
calculating a distance between the sentence vector of the candidate video and the sentence vector of the reference video;
and taking the distance as the similarity of the sentence vector of the reference video and the sentence vector of the candidate video.
4. The method of claim 1, further comprising:
if the number of the target videos is lower than a preset number threshold, selecting the target videos from preset candidate videos until the number of the target videos is equal to the number threshold;
wherein the candidate video comprises at least one of:
the videos which belong to the same user as the reference video, the videos which meet a preset first heat condition in the category and the videos which meet a preset second heat condition in all the categories.
5. A video processing method, comprising:
sending the reference video to a client for playing;
sending the video information of the target video to the client for displaying;
the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description statements;
selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video;
wherein the selecting a target video related to the reference video from the candidate videos, the video description sentence of the target video being semantically related to the video description sentence of the reference video, comprises:
converting the video description sentence into a sentence vector of the reference video;
determining sentence vectors into which the video description sentences of the candidate videos are converted;
determining similarity of sentence vectors of the reference video and sentence vectors of the candidate videos;
determining a target video related to the reference video from the candidate videos according to the similarity;
wherein the converting the video description sentence into a sentence vector of the reference video comprises:
performing word segmentation processing on the video description sentences according to the word segmentation modes corresponding to the categories to obtain reference words;
converting the reference word segmentation into sentence vectors of the reference video according to the vector conversion mode corresponding to the category;
wherein the performing word segmentation processing on the video description sentences according to the word segmentation modes corresponding to the categories to obtain reference words comprises:
performing word segmentation processing on the video description sentence to obtain a reference word segmentation of the reference video;
determining stop words corresponding to the categories;
removing reference participles identical to the stop word from the reference participles of the reference video.
6. A video processing method, comprising:
playing a reference video sent by a server;
receiving video information of a target video sent by a server;
displaying the video information;
the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description statements;
selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video;
wherein the selecting a target video related to the reference video from the candidate videos, the video description sentence of the target video being semantically related to the video description sentence of the reference video, comprises:
converting the video description sentence into a sentence vector of the reference video;
determining sentence vectors into which the video description sentences of the candidate videos are converted;
determining similarity of sentence vectors of the reference video and sentence vectors of the candidate videos;
determining a target video related to the reference video from the candidate videos according to the similarity;
wherein the converting the video description sentence into a sentence vector of the reference video comprises:
performing word segmentation processing on the video description sentences according to the word segmentation modes corresponding to the categories to obtain reference words;
converting the reference word segmentation into sentence vectors of the reference video according to the vector conversion mode corresponding to the category;
wherein the performing word segmentation processing on the video description sentences according to the word segmentation modes corresponding to the categories to obtain reference words comprises:
performing word segmentation processing on the video description sentence to obtain a reference word segmentation of the reference video;
determining stop words corresponding to the categories;
removing reference participles identical to the stop word from the reference participles of the reference video.
7. A video processing apparatus, comprising:
a reference video determination module for determining a reference video, the reference video having a category and a video description sentence;
a candidate video determination module for determining candidate videos belonging to the category, the candidate videos having video description sentences;
a target video determining module, configured to select a target video related to the reference video from the candidate videos, where a video description sentence of the target video is semantically related to a video description sentence of the reference video;
the target video determination module comprises:
a first sentence vector determination submodule for converting the video description sentence into a sentence vector of the reference video;
a second sentence vector determination submodule for determining a sentence vector into which the video description sentence of the candidate video is converted;
a similarity determining submodule for determining similarity between the sentence vector of the reference video and the sentence vector of the candidate video;
a target video determining sub-module, configured to determine a target video related to the reference video from the candidate videos according to the similarity;
the first sentence vector determination submodule includes:
a reference segmented words obtaining unit, configured to perform word segmentation processing on the video description sentence according to the word segmentation mode corresponding to the category to obtain reference segmented words;
a sentence vector determining unit, configured to convert the reference segmented words into the sentence vector of the reference video according to the vector conversion mode corresponding to the category;
wherein the reference segmented words obtaining unit comprises:
a reference segmented words determining subunit, configured to perform word segmentation processing on the video description sentence to obtain the reference segmented words of the reference video;
a stop word determining subunit, configured to determine the stop words corresponding to the category;
a stop word removing subunit, configured to remove, from the reference segmented words of the reference video, reference segmented words identical to the stop words.
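The module/submodule decomposition of claim 7 can be mirrored directly as small single-responsibility classes. The sketch below is an illustrative structural reading of the claim, not the patented apparatus: the class names follow the claim language, while the dot-product metric and the 0.5 threshold are invented placeholders.

```python
# Illustrative sketch (not the patented implementation) of the claimed
# apparatus decomposition: each submodule becomes one small class.

class SimilarityDeterminingSubmodule:
    def similarity(self, vec_a, vec_b):
        # Placeholder metric: dot product of equal-length vectors.
        return sum(x * y for x, y in zip(vec_a, vec_b))

class TargetVideoDeterminingSubmodule:
    def select(self, scored_candidates, threshold):
        # scored_candidates: iterable of (video_id, similarity) pairs.
        return [vid for vid, score in scored_candidates if score >= threshold]

class TargetVideoDeterminationModule:
    def __init__(self):
        self.similarity_sub = SimilarityDeterminingSubmodule()
        self.target_sub = TargetVideoDeterminingSubmodule()

    def determine(self, ref_vec, candidate_vecs, threshold=0.5):
        # Score each candidate's sentence vector against the reference
        # vector, then delegate selection to the target submodule.
        scored = [(vid, self.similarity_sub.similarity(ref_vec, v))
                  for vid, v in candidate_vecs]
        return self.target_sub.select(scored, threshold)
```

Splitting scoring and selection into separate submodules, as the claim does, lets either the similarity measure or the selection rule be swapped independently.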
8. A video processing apparatus, comprising:
a reference video sending module, configured to send a reference video to a client for playing;
a video information sending module, configured to send video information of a target video to the client for display;
wherein the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video;
wherein the selecting a target video related to the reference video from the candidate videos, the video description sentence of the target video being semantically related to the video description sentence of the reference video, comprises:
converting the video description sentence into a sentence vector of the reference video;
determining sentence vectors into which the video description sentences of the candidate videos are converted;
determining similarity of sentence vectors of the reference video and sentence vectors of the candidate videos;
determining a target video related to the reference video from the candidate videos according to the similarity;
wherein the converting the video description sentence into a sentence vector of the reference video comprises:
performing word segmentation processing on the video description sentence according to a word segmentation mode corresponding to the category to obtain reference segmented words;
converting the reference segmented words into the sentence vector of the reference video according to a vector conversion mode corresponding to the category;
wherein the performing word segmentation processing on the video description sentence according to the word segmentation mode corresponding to the category to obtain the reference segmented words comprises:
performing word segmentation processing on the video description sentence to obtain the reference segmented words of the reference video;
determining stop words corresponding to the category;
removing, from the reference segmented words of the reference video, reference segmented words identical to the stop words.
9. A video processing apparatus, comprising:
a reference video playing module, configured to play a reference video sent by a server;
a video information receiving module, configured to receive video information of a target video sent by the server;
a video information display module, configured to display the video information;
wherein the target video is determined by:
determining a reference video, wherein the reference video has a category and a video description sentence;
determining candidate videos belonging to the category, the candidate videos having video description sentences;
selecting a target video related to the reference video from the candidate videos, wherein the video description sentence of the target video is semantically related to the video description sentence of the reference video;
wherein the selecting a target video related to the reference video from the candidate videos, the video description sentence of the target video being semantically related to the video description sentence of the reference video, comprises:
converting the video description sentence into a sentence vector of the reference video;
determining sentence vectors into which the video description sentences of the candidate videos are converted;
determining similarity of sentence vectors of the reference video and sentence vectors of the candidate videos;
determining a target video related to the reference video from the candidate videos according to the similarity;
wherein the converting the video description sentence into a sentence vector of the reference video comprises:
performing word segmentation processing on the video description sentence according to a word segmentation mode corresponding to the category to obtain reference segmented words;
converting the reference segmented words into the sentence vector of the reference video according to a vector conversion mode corresponding to the category;
wherein the performing word segmentation processing on the video description sentence according to the word segmentation mode corresponding to the category to obtain the reference segmented words comprises:
performing word segmentation processing on the video description sentence to obtain the reference segmented words of the reference video;
determining stop words corresponding to the category;
removing, from the reference segmented words of the reference video, reference segmented words identical to the stop words.
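One common way to realize the sentence-vector conversion and similarity selection the claims describe is to average per-word embedding vectors and compare them by cosine similarity. The sketch below assumes that approach as an illustration only; the averaging scheme, the toy word vectors, and the 0.5 threshold are invented assumptions, since the claims leave the exact vector conversion mode and similarity measure open.

```python
import math

def sentence_vector(tokens, word_vectors, dim=3):
    # Convert segmented words to one sentence vector by averaging their
    # word vectors (one simple conversion; the claimed per-category
    # conversion mode could substitute any other model here).
    vec = [0.0] * dim
    count = 0
    for t in tokens:
        if t in word_vectors:
            for i, x in enumerate(word_vectors[t]):
                vec[i] += x
            count += 1
    return [x / count for x in vec] if count else vec

def cosine(a, b):
    # Cosine similarity between two sentence vectors; 0.0 for zero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_target(ref_vec, candidates, threshold=0.5):
    # candidates: list of (video_id, sentence_vector) pairs; return the ids
    # whose description sentence is semantically close to the reference.
    scored = [(vid, cosine(ref_vec, v)) for vid, v in candidates]
    return [vid for vid, score in scored if score >= threshold]
```

With the reference vector `[1, 0, 0]` and candidates `[("a", [1, 0, 0]), ("b", [0, 1, 0])]`, only `"a"` clears the assumed threshold and would be returned as the target video.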
10. A video processing apparatus, comprising: a memory and one or more processors;
the memory, configured to store one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video processing method of any one of claims 1-6.
11. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the video processing method of any one of claims 1-6.
CN201910069610.XA 2019-01-24 2019-01-24 Video processing method, device, equipment and storage medium Active CN109800326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910069610.XA CN109800326B (en) 2019-01-24 2019-01-24 Video processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109800326A CN109800326A (en) 2019-05-24
CN109800326B true CN109800326B (en) 2021-07-02

Family

ID=66560385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910069610.XA Active CN109800326B (en) 2019-01-24 2019-01-24 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109800326B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754980A (en) * 2020-05-21 2020-10-09 华南理工大学 Intelligent scoring method and device based on semantic recognition and storage medium
CN115205725B (en) * 2022-02-22 2023-10-27 广州云智达创科技有限公司 Video scene analysis method, device and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104008169A (en) * 2014-05-30 2014-08-27 中国测绘科学研究院 Semanteme based geographical label content safe checking method and device
CN104516986A (en) * 2015-01-16 2015-04-15 青岛理工大学 Method and device for recognizing sentence
CN105893444A (en) * 2015-12-15 2016-08-24 乐视网信息技术(北京)股份有限公司 Sentiment classification method and apparatus
CN105912631A (en) * 2016-04-07 2016-08-31 北京百度网讯科技有限公司 Search processing method and device

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN102890690B (en) * 2011-07-22 2017-04-12 中兴通讯股份有限公司 Target information search method and device
CN104219575B (en) * 2013-05-29 2020-05-12 上海连尚网络科技有限公司 Method and system for recommending related videos
CN104834686B (en) * 2015-04-17 2018-12-28 中国科学院信息工程研究所 A kind of video recommendation method based on mixing semantic matrix
CN105760544A (en) * 2016-03-16 2016-07-13 合网络技术(北京)有限公司 Video recommendation method and device
KR102012676B1 (en) * 2016-10-19 2019-08-21 삼성에스디에스 주식회사 Method, Apparatus and System for Recommending Contents
CN106686460B (en) * 2016-12-22 2020-03-13 优地网络有限公司 Video program recommendation method and video program recommendation device
CN107105349A (en) * 2017-05-17 2017-08-29 东莞市华睿电子科技有限公司 A kind of video recommendation method
CN108334640A (en) * 2018-03-21 2018-07-27 北京奇艺世纪科技有限公司 A kind of video recommendation method and device

Non-Patent Citations (2)

Title
Feature selection method based on category concepts; Wang Lin et al.; Journal of Beijing Electronic Science and Technology Institute; 2006-09-25 (No. 2); pp. 10-14 *
Sentiment analysis of movie reviews: stemming and stop-word removal (Part 2); Xiulian zhi Lu; https://xiulian.blog.csdn.net/article/details/79873382; 2018-04-09; pp. 1-3 *

Similar Documents

Publication Publication Date Title
KR102455616B1 (en) Theme classification method based on multimodality, device, apparatus, and storage medium
US11197036B2 (en) Multimedia stream analysis and retrieval
CN109862397B (en) Video analysis method, device, equipment and storage medium
CN109657054B (en) Abstract generation method, device, server and storage medium
WO2018177139A1 (en) Method and apparatus for generating video abstract, server and storage medium
CN111708915B (en) Content recommendation method and device, computer equipment and storage medium
CN109275047B (en) Video information processing method and device, electronic equipment and storage medium
CN113590850A (en) Multimedia data searching method, device, equipment and storage medium
JP5894149B2 (en) Enhancement of meaning using TOP-K processing
CN112733654B (en) Method and device for splitting video
CN103069414A (en) Information processing device, information processing method, and program
CN112511854A (en) Live video highlight generation method, device, medium and equipment
CN110287375B (en) Method and device for determining video tag and server
CN107948730B (en) Method, device and equipment for generating video based on picture and storage medium
CN111400513A (en) Data processing method, data processing device, computer equipment and storage medium
CN109800326B (en) Video processing method, device, equipment and storage medium
CN114095749A (en) Recommendation and live interface display method, computer storage medium and program product
CN111708909A (en) Video tag adding method and device, electronic equipment and computer-readable storage medium
CN107122393B (en) electronic album generating method and device
CN114707502A (en) Virtual space processing method and device, electronic equipment and computer storage medium
CN114298007A (en) Text similarity determination method, device, equipment and medium
Metze et al. Beyond audio and video retrieval: topic-oriented multimedia summarization
CN112804580B (en) Video dotting method and device
CN114780757A (en) Short media label extraction method and device, computer equipment and storage medium
CN113704549A (en) Method and device for determining video tag

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant