CN108133058B - Video retrieval method - Google Patents

Info

Publication number
CN108133058B
Authority
CN
China
Prior art keywords
video
attribute
user
sub
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810095506.3A
Other languages
Chinese (zh)
Other versions
CN108133058A (en)
Inventor
杨香斌
王勇进
王峰
Current Assignee
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Co Ltd
Priority to CN201810095506.3A
Publication of CN108133058A
Application granted
Publication of CN108133058B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval of video data
    • G06F 16/73 Querying
    • G06F 16/738 Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention disclose a video retrieval method and a video apparatus, relating to the field of electronic information technology, which enable fast video retrieval and improve video retrieval efficiency. The specific scheme is as follows: obtain a first video set; calculate the information entropy of at least two attribute classifications in the first video set, where each attribute classification comprises at least two sub-classifications; and prompt the user to select from the sub-classifications of the attribute classification with the largest information entropy. The method is used in the video retrieval process.

Description

Video retrieval method
The present application is a divisional application of Chinese patent application 201410180892.8, entitled "A video search method and a video apparatus", filed on 30/04/2014.
Technical Field
The invention relates to the technical field of electronic information, in particular to a video retrieval method and a video device.
Background
With the development of multimedia technology, many multimedia web pages, multimedia applications, and clients capable of providing video retrieval functions for users have emerged.
In the prior art, the video apparatus in a multimedia web page, multimedia application, or client generally displays the attribute classifications of videos to the user in a fixed order on a retrieval interface for selection. It then receives the user's selection of one of the listed attribute classifications, displays the sub-classifications within that attribute classification, receives the user's selection of a sub-classification, and retrieves all videos contained in the selected sub-classification. An attribute classification groups videos by an attribute: for example, videos may be classified into action, comedy, and science fiction by the genre attribute, or into Mainland, Hong Kong/Taiwan, and Japan/Korea by the regional attribute. Each attribute classification comprises multiple sub-classifications; for example, the type attribute at least comprises action, comedy, and science fiction.
In this tree-structured video retrieval process, when the user's retrieval target is ambiguous (i.e., the user is uncertain what to watch), the user may select an attribute classification at random, then select a plausible sub-classification within it, and retrieve all videos contained in that sub-classification. When none of the videos contained in the selected sub-classification is the one the user wants to watch, the user returns to the attribute-classification selection interface, repeating until the desired video is retrieved.
However, when the video apparatus displays the attribute classifications to the user in a fixed order and retrieves videos according to an attribute classification the user selected at random, it may need to receive the user's selection of an attribute classification many times, and the video the user wants to view cannot be found quickly, which results in low video retrieval efficiency.
Disclosure of Invention
The embodiment of the invention provides a video retrieval method and a video device, which can realize the rapid retrieval of videos and improve the video retrieval efficiency.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect of the embodiments of the present invention, a video retrieval method is provided, including:
obtaining a first video set;
calculating information entropy of at least two attribute classifications in the first video set, wherein each attribute classification comprises at least two sub-classifications;
and prompting the user to select from the sub-categories of the attribute category with the largest information entropy.
With reference to the first aspect, in a possible implementation manner, the calculating information entropy of at least two attribute classifications in the first video set includes:
and calculating the information entropy of the attribute classification according to the number of videos contained in each sub-classification in the attribute classification.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the calculating an information entropy of the attribute classification according to the number of videos included in each sub-classification in the attribute classification includes:
determining the video distribution rate in each sub-classification in the attribute classification according to the number of videos contained in each sub-classification in the attribute classification;
and calculating the information entropy of the attribute classification according to the distribution rate.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the calculating information entropy of at least two attribute classifications in the first video set includes:
and calculating the information entropy of at least two attribute classifications in the first video set by combining the current scene information and/or the user behavior parameters.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the method further includes:
obtaining a second video set according to the selection of the user;
calculating information entropy of at least two attribute classifications of the second video set, wherein each attribute classification comprises at least two sub-classifications;
and prompting the user to select from the sub-categories of the attribute category with the largest information entropy.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the method further includes:
and updating the user behavior parameters according to the selection of the user.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the obtaining a first video set includes:
searching according to the search terms input by the user to obtain the first video set;
or, performing relevancy retrieval according to the video currently selected by the user to obtain the first video set;
or, retrieving according to the voice input information of the user to obtain the first video set.
With reference to the first aspect and the foregoing possible implementation manners, in another possible implementation manner, the prompting the user to select from the sub-categories of the attribute category with the largest information entropy includes:
displaying the sub-classification labels of the attribute classification with the largest information entropy to prompt the user to select;
or, prompting the user through voice to select from the sub-classifications of the attribute classification with the largest information entropy.
In a second aspect of the embodiments of the present invention, there is also provided a video apparatus, including:
a first obtaining unit, configured to obtain a first video set;
a first calculating unit, configured to calculate information entropy of at least two attribute classifications in the first video set obtained by the first obtaining unit, where each attribute classification includes at least two sub-classifications;
and the first prompting unit is used for prompting the user to select from the sub-classifications of the attribute classification with the maximum information entropy calculated by the first calculating unit.
With reference to the second aspect, in a possible implementation manner, the first calculating unit is further configured to calculate an information entropy of the attribute classification according to the number of videos included in each sub-classification in the attribute classification.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the first computing unit includes:
the determining module is used for determining the video distribution rate in each sub-classification in the attribute classification according to the number of videos contained in each sub-classification in the attribute classification;
and the calculating module is used for calculating the information entropy of the attribute classification according to the distribution rate.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the first calculating unit is further configured to calculate information entropies of at least two attribute classifications in the first video set in combination with current scene information and/or user behavior parameters.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the video apparatus further includes:
the second acquisition unit is used for acquiring a second video set according to the selection of the user;
a second calculating unit, configured to calculate information entropies of at least two attribute classifications of the second video set obtained by the second obtaining unit, where each attribute classification includes at least two sub-classifications;
and the second prompting unit is used for prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy calculated by the second calculating unit.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the video apparatus further includes:
and the updating unit is used for updating the user behavior parameters according to the selection of the user.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the first obtaining unit is further configured to perform a search according to a search term input by the user to obtain the first video set; or, performing relevancy retrieval according to the video currently selected by the user to obtain the first video set; or, retrieving according to the voice input information of the user to obtain the first video set.
With reference to the second aspect and the foregoing possible implementation manners, in another possible implementation manner, the first prompting unit is further configured to display the sub-classification labels of the attribute classification with the largest information entropy to prompt the user to select, or to prompt the user through voice to select from the sub-classifications of the attribute classification with the largest information entropy;
the second prompting unit is likewise configured to display the sub-classification labels of the attribute classification with the largest information entropy to prompt the user to select, or to prompt the user through voice to select from those sub-classifications.
According to the video retrieval method and the video device provided by the embodiment of the invention, a first video set is obtained; calculating the information entropy of at least two attribute classifications in a first video set, wherein each attribute classification comprises at least two sub-classifications; and prompting the user to select from the sub-categories of the attribute category with the largest information entropy.
The information entropy of a system reflects the probability distribution of the information in the system and how concentrated that information is; during information retrieval, combining this probability distribution and concentration can effectively narrow the retrieval range and improve retrieval efficiency. In the present scheme, the calculated information entropy of an attribute classification reflects the probability distribution and concentration of the videos in the first video set when they are classified according to the different attribute classifications; combining these can effectively narrow the retrieval range of the videos and thus improve retrieval efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a video retrieval method according to embodiment 1 of the present invention;
fig. 2 is a flowchart of a video retrieval method according to embodiment 2 of the present invention;
fig. 3 is a schematic structural diagram of a video apparatus according to embodiment 3 of the present invention;
fig. 4 is a schematic structural diagram of another video apparatus according to embodiment 3 of the present invention;
fig. 5 is a schematic structural diagram of another video apparatus according to embodiment 3 of the present invention;
fig. 6 is a schematic structural diagram of another video apparatus according to embodiment 3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Additionally, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Example 1
An embodiment of the present invention provides a video retrieval method, as shown in fig. 1, including:
s101, the video device obtains a first video set.
The first video set comprises at least two videos.
Specifically, the video device may perform a search according to a search term input by a user to obtain a first video set; or, performing relevancy retrieval according to a video currently selected by a user to obtain a first video set; alternatively, the retrieval may be performed according to the voice input information of the user to obtain the first video set.
For example, the video apparatus may determine an input keyword of the user according to the search information of the user (including the search term input by the user, the search term corresponding to the video currently selected by the user, or the voice input information of the user), and perform a search according to the input keyword to determine the first video set. The first video set comprises at least two videos, and the videos in the first video set are matched with the input keywords.
For example, the video apparatus may receive the user's retrieval information (the search term input by the user, the search term corresponding to the video currently selected by the user, or the user's voice input), and then apply natural language understanding to it to obtain input keywords. The video apparatus may determine the matching keyword corresponding to each input keyword through entity-naming labelling; the videos in the video information base have been pre-sorted using the different matching keywords, so the apparatus can then determine the first video set, according to the input keywords, among the videos contained in the video classifications corresponding to the determined matching keywords.
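The pipeline just described (retrieval information, natural language understanding, input keywords, entity-naming labels, matching keywords) can be sketched as follows. The NLU and entity-labelling steps are stood in for by trivial lookups; `ENTITY_LABELS`, `extract_keywords`, and `match_keywords` are hypothetical names for illustration only, not components named in this document:

```python
# Stand-ins for the NLU and entity-naming steps: the document does not specify
# concrete components, so these dictionary lookups are illustrative only.
ENTITY_LABELS = {"comedy": "video type", "film": "movie"}  # hypothetical label table

def extract_keywords(retrieval_info: str) -> list[str]:
    # Placeholder for natural language understanding: a naive whitespace split.
    return retrieval_info.split()

def match_keywords(input_keywords: list[str]) -> dict[str, str]:
    # Placeholder for entity-naming labelling: map each input keyword to the
    # preset matching keyword used to pre-sort the video information base.
    return {kw: ENTITY_LABELS[kw] for kw in input_keywords if kw in ENTITY_LABELS}

print(match_keywords(extract_keywords("comedy film")))
# → {'comedy': 'video type', 'film': 'movie'}
```

A real implementation would replace both placeholders with genuine NLU and named-entity components; the dictionary only fixes the interface between the two stages.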
It should be noted that the specific methods by which the video apparatus applies natural language understanding to the retrieval information, and determines the matching keywords corresponding to the input keywords through entity-naming labelling, may refer to other method embodiments of the present invention or to related descriptions in the prior art, and are not repeated here.
The video apparatus in the embodiment of the present invention may be a search engine having a function of searching for a video according to the search information of the user, or may also be a search device having a function of searching for a video according to the search information of the user, or a search module in such a search device.
S102, the video device calculates the information entropy of at least two attribute classifications in the first video set, wherein each attribute classification comprises at least two sub-classifications.
Specifically, the video apparatus may determine the number of videos included in each sub-classification in each attribute classification when videos in the first video set are classified according to each attribute classification; and calculating the information entropy of the attribute classification according to the number of videos contained in each sub-classification in the attribute classification.
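Written out, the quantity computed in S102 is the Shannon entropy of the distribution of the first video set over an attribute classification's sub-classifications. The formula below is a reconstruction from the description (the document states no equation and does not fix the base of the logarithm; base 2 is assumed here):

```latex
p_j = \frac{n_j}{N}, \qquad H(A) = -\sum_{j=1}^{k} p_j \log_2 p_j
```

where attribute classification $A$ has $k$ sub-classifications, $n_j$ is the number of videos in sub-classification $j$, and $N = \sum_{j=1}^{k} n_j$ is the size of the first video set; $p_j$ is the distribution rate of sub-classification $j$.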
S103, the video device prompts the user to select from the sub-categories of the attribute category with the largest information entropy.
Specifically, the video apparatus may present all sub-classifications of the attribute classification with the largest information entropy for the user to select from (each attribute classification comprises at least two sub-classifications), receive the user's selection of one of the presented sub-classifications, and select a second video set according to that selection. The second video set is the set of videos, queried from the first video set, that belong to the sub-classification the user selected within the attribute classification with the largest information entropy.
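Steps S101 to S103 can be sketched in a few lines of Python, assuming videos are plain dictionaries keyed by attribute-classification name; all function and field names here are illustrative, not taken from this document:

```python
import math
from collections import Counter

def attribute_entropy(videos, attribute):
    """Shannon entropy of one attribute classification, computed from the
    number of videos falling into each of its sub-classifications."""
    counts = Counter(v[attribute] for v in videos if attribute in v)
    total = sum(counts.values())
    # Distribution rate of each sub-classification, then H = -sum(p * log2 p).
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def prompt_attribute(videos, attributes):
    """Pick the attribute classification with the largest entropy: the one
    whose sub-classifications the user should be asked to choose from."""
    return max(attributes, key=lambda a: attribute_entropy(videos, a))

def second_video_set(videos, attribute, chosen_sub):
    """Filter the first video set down to the sub-classification the user chose."""
    return [v for v in videos if v.get(attribute) == chosen_sub]

# Hypothetical first video set
first_set = [
    {"genre": "action", "region": "Mainland"},
    {"genre": "comedy", "region": "Mainland"},
    {"genre": "comedy", "region": "Mainland"},
    {"genre": "romance", "region": "Mainland"},
]
print(prompt_attribute(first_set, ["genre", "region"]))
# → genre  (three genres vs. a single region, so genre has higher entropy)
```

Ties between attribute classifications, and the handling of videos missing an attribute value, are left unspecified by the document; `max` here simply keeps the first maximum.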
The video retrieval method provided by the embodiment of the invention obtains a first video set; calculating the information entropy of at least two attribute classifications in a first video set, wherein each attribute classification comprises at least two sub-classifications; and prompting the user to select from the sub-categories of the attribute category with the largest information entropy.
The information entropy of a system reflects the probability distribution of the information in the system and how concentrated that information is; during information retrieval, combining this probability distribution and concentration can effectively narrow the retrieval range and improve retrieval efficiency. In the present scheme, the calculated information entropy of an attribute classification reflects the probability distribution and concentration of the videos in the first video set when they are classified according to the different attribute classifications; combining these can effectively narrow the retrieval range of the videos and thus improve retrieval efficiency.
Example 2
An embodiment of the present invention provides a video retrieval method, as shown in fig. 2, including:
s201, the video device obtains a first video set.
Specifically, S201 may be any one of S201a, S201b, or S201c.
S201a, the video device carries out searching according to the search words input by the user to obtain a first video set.
S201b, the video device carries out relevancy retrieval according to the video currently selected by the user to obtain a first video set.
S201c, the video device performs retrieval according to the voice input information of the user to obtain a first video set.
Illustratively, in the embodiment of the invention, a search box is provided in the video apparatus; the video apparatus may receive the user's retrieval information through the search box and then perform retrieval according to that information to determine the first video set.
It should be noted that the user's retrieval information may be Chinese characters, Pinyin, English letters, and the like; the language and format of the retrieval information are not limited in the embodiments of the present invention.
The video apparatus may apply natural language understanding to the retrieval information to obtain input keywords, then determine the matching keywords corresponding to the input keywords through entity-naming labelling, and finally retrieve, from all videos in the video information base maintained by the video apparatus, the videos contained in the video classifications corresponding to the matching keywords, so as to determine the first video set.
Natural Language Understanding (NLU) is an emerging technology that enables effective communication between people and computers in natural language, commonly called human-machine dialogue; it refers to mechanisms that let a computer respond according to the meaning expressed in human natural language. Its main aim is to use computers to simulate the human process of language communication, so that computers can understand and use natural languages of human society such as Chinese and English, realize natural-language communication between humans and machines, and take over part of human mental labour, including querying data, answering questions, excerpting documents, compiling material, and all other processing of natural-language information.
It should be noted that, in the embodiment of the present invention, the video apparatus may adopt a "natural language understanding" technology to understand and analyze the search information to obtain an input keyword that can be used for performing video search.
For example, if the retrieval information of the user is "films by Liu X", the video apparatus, after applying natural language understanding to "films by Liu X", can obtain the input keywords "Liu X" and "film".
It should be noted that the foregoing merely illustrates the basic principle and process of natural language understanding by way of example, and reference may be made to the related description in the prior art for detailed description of the "natural language understanding" technology in the embodiment of the present invention, which is not described herein again.
Entity-naming labelling is an important basic tool in application fields such as information extraction, question-answering systems, syntactic analysis, machine translation, and Semantic-Web-oriented metadata labelling, and plays an important role in putting natural language processing technology to practical use. Generally, its task is to identify the named entities in the text to be processed in three major categories (entity, time, and number) and seven minor categories (person name, organization name, place name, time, date, currency, and percentage).
Specifically, after the video apparatus obtains the input keyword, the entity naming label may be used to determine the matching keyword corresponding to the input keyword.
For example, if the input keywords are "Liu X" and "film", the video apparatus may identify, through entity-naming labelling, the matching keyword corresponding to "Liu X" as "actor" and the matching keyword corresponding to "film" as "movie".
It should be noted that, the foregoing merely illustrates the basic principle and process of entity naming and labeling by way of example, and the detailed description of the "entity naming and labeling" technology in the embodiment of the present invention may refer to the related description in the prior art, which is not described herein again.
In the embodiment of the present invention, the video apparatus maintains a video information base that stores all existing videos, that is, all videos that have been released and for which playback rights have been obtained, or links to those videos.
A matching keyword may be a preset keyword, and the videos in the video information base are pre-sorted using the different matching keywords.
For example, the matching keywords preset in the video information base may include: actor, video type (comedy, romance, action, etc.), region (Europe/America, Mainland, Japan/Korea, Hong Kong/Taiwan, etc.), director, and the like.
Specifically, the method for the video apparatus to determine the first video set in the videos included in the video category corresponding to the determined matching keyword according to the input keyword may include: the video device determines a video classification mode corresponding to the determined matching keyword; determining sub-classifications in the video classification mode according to the input keywords; and determining a resource set formed by all videos contained in the sub-classification in the video information base as the first video set.
Illustratively, if the input keyword is "Liu X", the video apparatus identifies through entity-naming labelling that the matching keyword corresponding to "Liu X" is "actor". The video apparatus determines that the video classification mode corresponding to "actor" classifies the videos in the video information base by actor; with the videos so classified, it determines all videos contained in the sub-classification corresponding to "Liu X", that is, the resource set consisting of all films (videos) starring Liu X, as the first video set.
It should be noted that, in the embodiment of the present invention, the video apparatus may obtain more than one input keyword according to the user search information, and correspondingly, the video apparatus may identify more than one matching keyword corresponding to the input keyword through the entity naming label.
When the video apparatus obtains at least two input keywords and identifies at least two matching keywords, it may determine the video classification mode corresponding to each of the determined matching keywords; determine, for each of the input keywords, the sub-classification within the video classification mode corresponding to that keyword's matching keyword; and determine, as the first video set, the resource set formed by the videos that, among all videos contained in all the determined sub-classifications in the video information base, correspond to every one of the input keywords.
Illustratively, if the input keywords are "Liu X" and "film", the video apparatus identifies through entity-naming labelling that the matching keyword corresponding to "Liu X" is "actor" and the matching keyword corresponding to "film" is "movie". The video apparatus may determine that the video classification mode corresponding to "actor" classifies the videos in the video information base by actor, and that the video classification corresponding to "movie" comprises all movies in the video information base. With the videos in the video information base classified by actor, the video apparatus determines all movies (videos) contained in the sub-classification corresponding to "Liu X", that is, the resource set consisting of all movies (videos) starring Liu X, as the first video set.
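With at least two matching keywords, the first video set is effectively the intersection of the sub-classifications determined for the individual input keywords. A sketch, again over hypothetical dictionary-shaped video records (the field names `actor`, `category`, and `title` are assumptions for illustration):

```python
def first_video_set(video_base, selections):
    """Keep only the videos that fall in the determined sub-classification of
    every classification mode, i.e. the intersection over all input keywords.
    `selections` maps a classification mode (e.g. "actor") to the sub-
    classification determined for it (e.g. a particular actor's name)."""
    return [
        video for video in video_base
        if all(video.get(mode) == sub for mode, sub in selections.items())
    ]

# Hypothetical video information base
base = [
    {"actor": "Liu X", "category": "movie", "title": "A"},
    {"actor": "Liu X", "category": "series", "title": "B"},
    {"actor": "Wang Y", "category": "movie", "title": "C"},
]
result = first_video_set(base, {"actor": "Liu X", "category": "movie"})
print([v["title"] for v in result])  # → ['A']
```

A single-keyword query is the degenerate case with one entry in `selections`, matching the single-keyword example above.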
It should be further noted that, in the embodiments of the present invention, the methods by which the video apparatus determines the first video set among the videos contained in the video classifications corresponding to the determined matching keywords include, but are not limited to, the implementations listed above; other methods by which the video apparatus may acquire the first video set are not described here again.
Specifically, the method by which the video device calculates the information entropy of at least two attribute classifications when the videos in the first video set are divided according to different attribute classifications, where each attribute classification includes at least two sub-classifications, may include S202 to S204:
S202, the video device determines the number of videos contained in each sub-classification of each of the at least two attribute classifications in the first video set.
The videos can be classified according to different attributes; for example, by genre attribute the videos can be divided into action films, comedies, romance films, thrillers, and the like. An attribute classification is a classification set for dividing videos according to one such attribute.
For example, the attribute classifications in the embodiment of the present invention may include at least: a type (genre) attribute, a chronological attribute, a regional attribute, a rating attribute, and the like. Each attribute classification contains at least two sub-classifications.
For example, the sub-classifications in the type attribute may include at least: action films, comedies, romance films, thrillers, and the like. The type attribute divides videos into different genres according to the genre of the film.
For example, the video device may determine the number of videos included in each sub-classification of the type attribute when the videos in the first video set are divided according to the type attribute. For example, if the first video set includes 200 videos, then when the 200 videos are divided according to the type attribute, the 200 videos may include 30 action films, 80 comedies, 50 romance films, and 40 thrillers.
For example, the sub-classifications in the chronological attribute may include at least: the 1960s, the 1970s, the 1980s, the 1990s, and the like. The chronological attribute divides videos into different eras according to the shooting time or first release time of the film.
The video device may determine the number of videos included in each sub-classification of the chronological attribute when the videos in the first video set are divided according to the chronological attribute. For example, if the first video set includes 200 videos, then when the 200 videos are divided according to the chronological attribute, the 200 videos may include 10 videos from the 1960s, 120 videos from the 1970s, 60 videos from the 1980s, and 10 videos from the 1990s.
For example, the sub-classifications in the regional attribute may include at least: European/American films, Hong Kong/Taiwan films, mainland films, Japanese/Korean films, and the like. The regional attribute divides videos into different regions according to the production region of the film.
The video device may determine the number of videos included in each sub-classification of the regional attribute when the videos in the first video set are divided according to the regional attribute. For example, if the first video set includes 200 videos, then when the 200 videos are divided according to the regional attribute, the 200 videos may include 6 European/American films, 70 Hong Kong/Taiwan films, 120 mainland films, and 4 Japanese/Korean films.
For example, if video scores range from 0 to 10 points, with 10 being the highest, the sub-classifications in the rating attribute may include at least: a first preset number of videos scored 8-10, a second preset number of videos scored 6-7, a third preset number of videos scored 0-5, and the like. The first, second, and third preset numbers in this embodiment are number thresholds preset by the system or set by the user.
The video device may determine the number of videos included in each sub-classification of the rating attribute when the videos in the first video set are divided according to the rating attribute. For example, if the first video set includes 200 videos, then when the 200 videos are divided according to the rating attribute, there may be 100 videos scored 8-10, 80 videos scored 6-7, and 20 videos scored 0-5.
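Step S202 — counting the videos in each sub-classification — can be sketched as follows. The dictionary-based video records and the function name are illustrative assumptions, mirroring the 200-video genre example above:

```python
from collections import Counter

def sub_classification_counts(videos, attribute):
    """Number of videos in each sub-classification of the given attribute (step S202)."""
    return Counter(v[attribute] for v in videos)

# Toy first video set mirroring the genre example above (200 videos in total).
videos = ([{"genre": "action"}] * 30 + [{"genre": "comedy"}] * 80
          + [{"genre": "romance"}] * 50 + [{"genre": "thriller"}] * 40)
counts = sub_classification_counts(videos, "genre")
print(dict(counts))  # → {'action': 30, 'comedy': 80, 'romance': 50, 'thriller': 40}
```

The same call with `attribute="decade"` or `attribute="region"` would tally the chronological or regional sub-classifications.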
S203, the video device determines the video distribution rate in each sub-classification in the attribute classification according to the number of videos contained in each sub-classification in the attribute classification.
The video device may determine the distribution rate of the videos in the first video set in each sub-classification according to the number of videos included in each sub-classification of the attribute classification.
Illustratively, assume that the sub-classifications in the type attribute include: action films, comedies, romance films, and thrillers, and that the first video set comprises 200 videos: 30 action films, 80 comedies, 50 romance films, and 40 thrillers.
The video device may calculate the distribution rates of the videos in the first video set in the sub-classifications of the type attribute as: action films 15%, comedies 40%, romance films 25%, and thrillers 20%.
Illustratively, assume that the sub-classifications in the chronological attribute include: the 1960s, the 1970s, the 1980s, and the 1990s, and that the first video set comprises 200 videos: 10 videos from the 1960s, 120 from the 1970s, 60 from the 1980s, and 10 from the 1990s.
The video device may calculate the distribution rates of the videos in the first video set in the sub-classifications of the chronological attribute as: 1960s videos 5%, 1970s videos 60%, 1980s videos 30%, and 1990s videos 5%.
It should be noted that, to determine the distribution rate of the videos in the first video set in each sub-classification of the other attribute classifications from the number of videos those sub-classifications contain, the video device may refer to the method illustrated above for the chronological or type attribute; the details are not repeated here.
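Step S203 — converting the per-sub-classification counts into distribution rates — is a one-line normalization. This is a minimal sketch with an assumed dictionary of counts taken from the genre example above:

```python
def distribution_rates(counts):
    """Distribution rate of each sub-classification: its count over the total (step S203)."""
    total = sum(counts.values())
    return {sub: n / total for sub, n in counts.items()}

rates = distribution_rates({"action": 30, "comedy": 80, "romance": 50, "thriller": 40})
print(rates)  # → {'action': 0.15, 'comedy': 0.4, 'romance': 0.25, 'thriller': 0.2}
```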
S204, the video device calculates the information entropy of the attribute classification according to the distribution rates.
For example, assume that the type attribute S contains n sub-classifications x, i.e., S = {x1, x2, …, xi, …, xn}, and that the probability distribution (distribution rate) of the sub-classifications x in the type attribute S is P = {P(x1), P(x2), …, P(xi), …, P(xn)}; the video device can then calculate the information entropy of the attribute classification using Equation 1.
Equation 1:
H(X) = -∑_{i=1}^{n} P(x_i) log2 P(x_i)
wherein H(X) is the information entropy of the attribute classification, and P(x_i) is the distribution rate of the i-th sub-classification in the attribute classification.
Illustratively, the distribution rates of the videos in the first video set in the sub-classifications of the type attribute are: action films 15% = 0.15, comedies 40% = 0.4, romance films 25% = 0.25, and thrillers 20% = 0.2, i.e., P = {P(x1), P(x2), P(x3), P(x4)} = {0.15, 0.4, 0.25, 0.2}; the video device may calculate the information entropy of the type attribute using Equation 1.
Equation 2:
H(X) = -(0.15 × log2 0.15 + 0.4 × log2 0.4 + 0.25 × log2 0.25 + 0.2 × log2 0.2) ≈ 1.904
wherein, in the above formula, x1 denotes action films and P(x1) is the distribution rate of the action-film sub-classification in the type attribute; x2 denotes comedies and P(x2) is the distribution rate of the comedy sub-classification; x3 denotes romance films and P(x3) is the distribution rate of the romance-film sub-classification; x4 denotes thrillers and P(x4) is the distribution rate of the thriller sub-classification.
Illustratively, the distribution rates of the videos in the first video set in the sub-classifications of the chronological attribute are: 1960s videos 5% = 0.05, 1970s videos 60% = 0.6, 1980s videos 30% = 0.3, and 1990s videos 5% = 0.05, i.e., P = {P(x1), P(x2), P(x3), P(x4)} = {0.05, 0.6, 0.3, 0.05}; the video device may calculate the information entropy of the chronological attribute using Equation 1.
Equation 3:
H(X) = -(0.05 × log2 0.05 + 0.6 × log2 0.6 + 0.3 × log2 0.3 + 0.05 × log2 0.05) ≈ 1.395
wherein, in the above formula, x1 denotes videos of the 1960s and P(x1) is the distribution rate of the 1960s sub-classification in the chronological attribute; x2 denotes videos of the 1970s and P(x2) is the distribution rate of the 1970s sub-classification; x3 denotes videos of the 1980s and P(x3) is the distribution rate of the 1980s sub-classification; x4 denotes videos of the 1990s and P(x4) is the distribution rate of the 1990s sub-classification.
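Equation 1 and the two worked examples (Equations 2 and 3) can be checked with a short sketch; the function name is an illustrative assumption:

```python
import math

def entropy(rates):
    """Shannon entropy in bits, per Equation 1: H(X) = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in rates if p > 0)

genre = entropy([0.15, 0.4, 0.25, 0.2])    # type attribute, Equation 2
decade = entropy([0.05, 0.6, 0.3, 0.05])   # chronological attribute, Equation 3
print(round(genre, 3), round(decade, 3))   # → 1.904 1.395
```

The higher genre entropy means the videos are spread more evenly across genres than across decades, which is what makes the genre sub-classifications the more discriminating prompt.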
It should be noted that the video device may calculate the information entropy of the other attribute classifications from the distribution rates of their sub-classifications by referring to the calculation method in the above examples; the details are not repeated here.
Further optionally, the method by which the video device calculates the information entropy of an attribute classification may further include: the video device calculates the information entropy of at least two attribute classifications when the first video set is divided according to different attribute classifications, in combination with current scene information and/or user behavior parameters.
For example, the current scene information may be time information (e.g., may be divided into morning, afternoon, evening, night, etc.) for the user to retrieve the video.
The video device may set the weighting of the current scene information according to the time at which the user retrieves the video. For example, if the user retrieves the video at night and the type attribute includes thrillers, the video device sets the weight of the current scene information to a first weight threshold A, where A is less than 1; when calculating the information entropy of the type attribute, the information amount of the thriller sub-classification, -P(x4) log2 P(x4), can be multiplied by the first weight threshold A, making the thriller information amount -P(x4) log2 P(x4) × A.
Illustratively, the distribution rates of the videos in the first video set in the sub-classifications of the type attribute are: action films 15% = 0.15, comedies 40% = 0.4, romance films 25% = 0.25, and thrillers 20% = 0.2, i.e., P = {P(x1), P(x2), P(x3), P(x4)} = {0.15, 0.4, 0.25, 0.2}; given that A = 0.8, the video device may calculate the information entropy of the type attribute using Equation 1.
Equation 4:
H(X) = -(P(x1) log2 P(x1) + P(x2) log2 P(x2) + P(x3) log2 P(x3) + P(x4) log2 P(x4) × 0.8)
= -(0.15 × log2 0.15 + 0.4 × log2 0.4 + 0.25 × log2 0.25 + 0.2 × log2 0.2 × 0.8)
≈ 1.811
As can be seen from Equation 2 and Equation 4, when the user retrieves videos at night, the information entropy of the type attribute calculated by the video device differs from that calculated when the user retrieves videos in the daytime, and the magnitude of this entropy may determine whether the type attribute is the attribute classification with the largest information entropy. In Equation 4, the entropy of the type attribute when the user retrieves videos at night is lower than in the daytime, so the probability that the video device preferentially displays the sub-classifications of the type attribute to the user is reduced; videos better matching the current scene can thus be offered for the user to select, which can improve the user experience.
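The scene-weighted calculation in Equation 4 can be sketched as follows. The `weighted_entropy` helper and the per-sub-classification weight vector are illustrative assumptions; only the thriller term carries the first weight threshold A = 0.8:

```python
import math

def weighted_entropy(rates, weights):
    """Entropy where each information amount -p*log2(p) is scaled by a weight."""
    return -sum(w * p * math.log2(p) for p, w in zip(rates, weights))

genre_rates = [0.15, 0.4, 0.25, 0.2]          # action, comedy, romance, thriller
night = weighted_entropy(genre_rates, [1, 1, 1, 0.8])  # thriller damped (Equation 4)
day = weighted_entropy(genre_rates, [1, 1, 1, 1])      # unweighted (Equation 2)
print(round(night, 3), round(day, 3))  # → 1.811 1.904
```

Damping the thriller term lowers the type attribute's entropy at night, making it less likely to be the classification the user is prompted with.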
The user behavior parameters may be the user's video retrieval records collected by the video device, together with the user's degree of preference for the videos in each sub-classification of each attribute classification, obtained from the playback records of the video device or of the video terminal on which the video device is located.
The video device may set the weighting of the user behavior parameters according to the user's degree of preference for the videos in each sub-classification. For example, if statistics show that, for the type attribute, the user's preference degree is 70% for action films, 15% for comedies, 10% for romance films, and 5% for thrillers, the video device multiplies the user's preference degree for each sub-classification by a second weight threshold B, which is generally greater than 1, and takes the product of the preference degree and B as the weighting of the user behavior parameter corresponding to that sub-classification.
When calculating the information entropy of the type attribute, the video device may multiply the information amount of each sub-classification by the product of that sub-classification's preference degree and the second weight threshold B. For example, the information amount of action films, -P(x1) log2 P(x1), multiplied by the product of its preference degree and B, becomes -P(x1) log2 P(x1) × (70% × B); the information amount of comedies becomes -P(x2) log2 P(x2) × (15% × B); the information amount of romance films becomes -P(x3) log2 P(x3) × (10% × B); and the information amount of thrillers becomes -P(x4) log2 P(x4) × (5% × B).
For example, assuming the second weight threshold B = 2, the video device may calculate the information entropy of the type attribute using Equation 1.
Equation 5:
H(X) = -(P(x1) log2 P(x1) × (70% × 2) + P(x2) log2 P(x2) × (15% × 2) + P(x3) log2 P(x3) × (10% × 2) + P(x4) log2 P(x4) × (5% × 2))
= -(0.15 × log2 0.15 × 1.4 + 0.4 × log2 0.4 × 0.3 + 0.25 × log2 0.25 × 0.2 + 0.2 × log2 0.2 × 0.1)
≈ 0.880
As can be seen from Equation 2 and Equation 5, the information entropy of the type attribute calculated by the video device from both the distribution rates and the user behavior parameters differs from that calculated from the distribution rates alone, and this difference may affect the priority of the attribute classification in the first attribute classification set. Because the entropy of the type attribute calculated with the user behavior parameters in Equation 5 is lower than that calculated from the distribution rates alone, the probability that the video device preferentially displays the sub-classifications of the type attribute to the user is reduced; videos better matching the user's preferences can thus be offered for selection, which can improve the user experience.
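The preference-weighted calculation in Equation 5 can be sketched in the same way; the `weighted_entropy` helper and the preference vector are illustrative assumptions taken from the example above (B = 2):

```python
import math

def weighted_entropy(rates, weights):
    """Entropy where each information amount -p*log2(p) is scaled by a weight."""
    return -sum(w * p * math.log2(p) for p, w in zip(rates, weights))

prefs = [0.70, 0.15, 0.10, 0.05]  # user preference per sub-classification
B = 2                             # second weight threshold
weights = [d * B for d in prefs]  # → [1.4, 0.3, 0.2, 0.1]
h = weighted_entropy([0.15, 0.4, 0.25, 0.2], weights)
print(round(h, 3))  # → 0.88
```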
S205, the video device prompts the user to select from the sub-categories of the attribute category with the largest information entropy.
The information entropy of the attribute classification can reflect the probability distribution and convergence condition of videos classified according to the attribute classification, and if a user performs video retrieval by using each sub-classification in the attribute classification with the maximum information entropy as a retrieval condition, the retrieval range of the videos can be effectively reduced, and the retrieval efficiency is improved; therefore, after calculating the information entropies of at least two attribute classifications, the video device can prompt the user to select from the sub-classifications of the attribute classification with the largest information entropy.
For example, the video device may display the sub-category label of the attribute category with the largest entropy to prompt the user to select; or prompting the user to select from the sub-categories of the attribute category with the maximum information entropy through voice.
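Step S205 — choosing which attribute classification's sub-classifications to prompt with — reduces to taking the argmax over the computed entropies. The function name and the entropy values (the first two from Equations 2 and 3, the regional value purely hypothetical) are illustrative:

```python
def attribute_to_prompt(entropies):
    """Pick the attribute classification with the largest information entropy (step S205)."""
    return max(entropies, key=entropies.get)

entropies = {"genre": 1.904, "decade": 1.395, "region": 0.746}
print(attribute_to_prompt(entropies))  # → genre
```

The device would then display the genre sub-classification labels (or read them out by voice) for the user to choose from.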
It should be noted that, in the embodiment of the present invention, after prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy, the video device may directly display the identification information of all videos in the sub-classification selected by the user, so that the user can determine the video to be retrieved by selecting from the displayed identification information.
Further optionally, in order to further narrow the search range and improve retrieval efficiency through the calculation of information entropy, after prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy, the video device may obtain, according to the user's selection, all videos in the selected sub-classification (a second video set); calculate the information entropy of at least two attribute classifications when the videos in the second video set are divided according to different attribute classifications; and continue to prompt the user to select from the sub-classifications of the attribute classification with the largest information entropy. Specifically, the method of the embodiment of the present invention may further include S206 to S208:
S206, the video device obtains a second video set according to the user's selection.
Specifically, the videos in the first video set may be divided according to any attribute classification, with the videos falling into that classification's different sub-classifications.
The video device may receive the user's selection of a sub-classification in the attribute classification with the largest information entropy and, when the videos in the first video set are divided according to that attribute classification, determine the video set formed by the videos in the user-selected sub-classification as the second video set.
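Step S206 — forming the second video set from the user's choice — is a simple filter. The dictionary-based video records and the `refine` name are illustrative assumptions:

```python
def refine(videos, attribute, chosen_sub):
    """Second video set: the videos in the sub-classification the user selected (S206)."""
    return [v for v in videos if v[attribute] == chosen_sub]

videos = [{"title": "A", "genre": "action"},
          {"title": "B", "genre": "comedy"},
          {"title": "C", "genre": "comedy"}]
second = refine(videos, "genre", "comedy")
print([v["title"] for v in second])  # → ['B', 'C']
```

Repeating the entropy calculation and prompt on this smaller set (steps S207-S208, and again for a third video set) progressively narrows the search range.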
S207, the video device calculates the information entropy of at least two attribute classifications when the videos in the second video set are divided according to different attribute classifications, and each attribute classification comprises at least two sub-classifications.
It should be noted that the method by which the video device calculates the information entropy of at least two attribute classifications when the videos in the second video set are divided according to different attribute classifications may follow the method described above for the first video set; the details are not repeated here.
And S208, the video device prompts the user to select from the sub-categories of the attribute category with the largest information entropy.
The information entropy of the attribute classification can reflect the probability distribution and convergence condition of videos classified according to the attribute classification, and if a user performs video retrieval by using each sub-classification in the attribute classification with the maximum information entropy as a retrieval condition, the retrieval range of the videos can be effectively reduced, and the retrieval efficiency is improved; therefore, after calculating the information entropies of at least two attribute classifications, the video device can prompt the user to select from the sub-classifications of the attribute classification with the largest information entropy.
For example, the video device may display the sub-category label of the attribute category with the largest entropy to prompt the user to select; or prompting the user to select from the sub-categories of the attribute category with the maximum information entropy through voice.
It should be noted that, in the embodiment of the present invention, the video apparatus may calculate an information entropy of each attribute classification when videos in the first video set, the second video set, or the nth video set are classified according to different attribute classifications; or only calculating the information entropy of the attribute classification used by the user when the videos in the first video set, the second video set or the Nth video set are classified according to different attribute classifications.
Further optionally, the method of the embodiment of the present invention may further include: the video device updates the user behavior parameters according to the user's selection. The video device may update the user behavior parameters each time the user selects from the sub-classifications of the attribute classification with the largest information entropy prompted by the video device.
It should be noted that, in the embodiment of the present invention, after prompting the user to select from the sub-categories of the attribute category with the largest entropy, the video apparatus may directly display the identification information of all videos in the sub-category selected by the user for the user to determine the video to be retrieved by selecting the identification information of all videos in the displayed sub-categories.
Further optionally, in order to further narrow the search range and improve retrieval efficiency through the calculation of information entropy, after prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy, the video device may obtain, according to the user's selection, all videos in the selected sub-classification (a third video set); calculate the information entropy of at least two attribute classifications when the videos in the third video set are divided according to different attribute classifications; and continue to prompt the user to select from the sub-classifications of the attribute classification with the largest information entropy.
It should be noted that, the method for the video device to obtain the third video set, calculate the information entropy of at least two attribute classifications when the videos in the third video set are divided according to different attribute classifications, and prompt the user to select from the sub-classification of the attribute classification with the largest information entropy may refer to the related description in the embodiment of the present invention, which is not described herein again.
The video retrieval method provided by the embodiment of the invention obtains a first video set; calculating the information entropy of at least two attribute classifications in a first video set, wherein each attribute classification comprises at least two sub-classifications; and prompting the user to select from the sub-categories of the attribute category with the largest information entropy.
The information entropy of one system can reflect the probability distribution of information in the system and the convergence condition of the information in the system, and when information retrieval is carried out, the retrieval range can be effectively reduced by combining the probability distribution and the convergence condition of the information in the system, and the retrieval efficiency is improved. In the scheme, the calculated information entropy of the attribute classification can reflect the probability distribution and convergence condition of the video when the video in the first video set is classified according to different attribute classifications, and the retrieval range of the video can be effectively reduced by combining the probability distribution and convergence condition of the video in different attribute classifications, so that the retrieval efficiency is improved.
Example 3
An embodiment of the present invention provides a video apparatus, as shown in fig. 3, including: a first acquisition unit 31, a first calculation unit 32, and a first presentation unit 33.
A first obtaining unit 31, configured to obtain a first video set.
A first calculating unit 32, configured to calculate entropy of information of at least two attribute classifications in the first video set obtained by the first obtaining unit 31, where each attribute classification includes at least two sub-classifications.
A first prompting unit 33, configured to prompt the user to select from the sub-categories of the attribute category with the largest information entropy calculated by the first calculating unit 32.
Further, the first calculating unit 32 is further configured to calculate an information entropy of the attribute classification according to the number of videos included in each sub-classification in the attribute classification.
Further, as shown in fig. 4, the first calculating unit 32 includes: a determination module 321 and a calculation module 322.
A determining module 321, configured to determine, according to the number of videos included in each sub-category in the attribute category, a video distribution rate in each sub-category in the attribute category.
A calculating module 322, configured to calculate an information entropy of the attribute classification according to the distribution rate.
Further, the first calculating unit 32 is further configured to calculate information entropy of at least two attribute classifications in the first video set in combination with current scene information and/or user behavior parameters.
Further, as shown in fig. 5, the video apparatus further includes: a second acquisition unit 34, a second calculation unit 35, and a second presentation unit 36.
A second obtaining unit 34, configured to obtain a second video set according to the selection of the user.
A second calculating unit 35, configured to calculate entropy of information of at least two attribute classifications in the second video set obtained by the second obtaining unit 34, where each attribute classification includes at least two sub-classifications.
A second prompting unit 36, configured to prompt the user to select from the sub-categories of the attribute category with the largest information entropy calculated by the second calculating unit 35.
Further, as shown in fig. 6, the video apparatus further includes: and an updating unit 37.
And an updating unit 37, configured to update the user behavior parameter according to the selection of the user.
Further, the first obtaining unit 31 is further configured to perform a search according to a search term input by the user to obtain the first video set; or, performing relevancy retrieval according to the video currently selected by the user to obtain the first video set; or, retrieving according to the voice input information of the user to obtain the first video set.
Further, the first prompting unit 33 is further configured to display a sub-classification label of the attribute classification with the largest information entropy to prompt the user to select; or prompting the user to select from the sub-categories of the attribute category with the maximum information entropy through voice.
The second prompting unit 36 is further configured to display a sub-classification label of the attribute classification with the largest information entropy to prompt the user to select; or prompting the user to select from the sub-categories of the attribute category with the maximum information entropy through voice.
It should be noted that, for specific descriptions of some functional modules in the video apparatus provided in the embodiment of the present invention, reference may be made to corresponding contents in the method embodiment, and details are not described here again.
The video device provided by the embodiment of the invention obtains a first video set; calculating the information entropy of at least two attribute classifications in a first video set, wherein each attribute classification comprises at least two sub-classifications; and prompting the user to select from the sub-categories of the attribute category with the largest information entropy.
The information entropy of one system can reflect the probability distribution of information in the system and the convergence condition of the information in the system, and when information retrieval is carried out, the retrieval range can be effectively reduced by combining the probability distribution and the convergence condition of the information in the system, and the retrieval efficiency is improved. In the scheme, the calculated information entropy of the attribute classification can reflect the probability distribution and convergence condition of the video when the video in the first video set is classified according to different attribute classifications, and the retrieval range of the video can be effectively reduced by combining the probability distribution and convergence condition of the video in different attribute classifications, so that the retrieval efficiency is improved.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product that is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (7)

1. A method for video retrieval, comprising:
searching according to a search term input by a user to obtain a first video set;
wherein searching according to the search term input by the user to obtain the first video set specifically comprises: when a video device acquires at least two input keywords and identifies at least two matching keywords, determining the video classification modes respectively corresponding to the determined at least two matching keywords; for each of the at least two input keywords, determining a sub-classification in the video classification mode corresponding to the matching keyword of that input keyword; and determining, as the first video set, a resource set consisting of the videos that correspond to all of the at least two input keywords among all the videos contained in all the sub-classifications determined in a video information base;
calculating information entropies of at least two attribute classifications in the first video set by combining current scene information and user behavior parameters according to a weighting weight of the current scene information and a weighting weight of the user behavior parameters, wherein each attribute classification comprises at least two sub-classifications, the current scene information is time information of the user's video retrieval, and the user behavior parameters are the user's video retrieval records and the user's degree of preference for the videos of the sub-classifications in each attribute classification; the weighting weight of the current scene information and the weighting weight of the user behavior parameters are used for calculating the information amount of the sub-classifications in the attribute classifications;
and prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy.
2. The video retrieval method of claim 1, wherein calculating the information entropies of the at least two attribute classifications in the first video set comprises:
calculating the information entropy of an attribute classification according to the number of videos contained in each sub-classification of the attribute classification.
3. The video retrieval method according to claim 2, wherein calculating the information entropy of the attribute classification according to the number of videos contained in each sub-classification of the attribute classification comprises:
determining the video distribution rate of each sub-classification of the attribute classification according to the number of videos contained in that sub-classification;
and calculating the information entropy of the attribute classification according to the distribution rates.
4. The video retrieval method of claim 1, further comprising:
obtaining a second video set according to the selection of the user;
calculating information entropy of at least two attribute classifications in the second video set, wherein each attribute classification comprises at least two sub-classifications;
and prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy.
5. The video retrieval method of claim 4, further comprising:
and updating the user behavior parameters according to the selection of the user.
6. The video retrieval method of any of claims 1-5, wherein obtaining the first video set comprises:
searching according to the search terms input by the user to obtain the first video set;
or, performing relevancy retrieval according to the video currently selected by the user to obtain the first video set;
or, retrieving according to the voice input information of the user to obtain the first video set.
7. The video retrieval method according to claim 1 or 4, wherein prompting the user to select from the sub-classifications of the attribute classification with the largest information entropy comprises:
displaying the sub-classification marks of the attribute classification with the largest information entropy to prompt the user to select;
or prompting the user by voice to select from the sub-classifications of the attribute classification with the largest information entropy.
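Claim 1 weights the entropy calculation by current scene information and user behavior parameters but does not disclose a concrete formula. As a rough, non-authoritative sketch, the per-sub-classification probabilities could be biased by scene and behavior scores before the entropy is taken; the linear blend, the score dictionaries, and the default weights below are illustrative assumptions, not the patented method:

```python
import math

def weighted_entropy(sub_counts, scene_scores, behavior_scores,
                     scene_weight=0.3, behavior_weight=0.7):
    """Entropy of one attribute classification where each sub-classification's
    probability is biased by current scene information (e.g. time of retrieval)
    and user behavior parameters (retrieval records / preference degree).

    sub_counts:      dict sub-classification -> number of videos
    scene_scores:    dict sub-classification -> scene relevance score (assumed)
    behavior_scores: dict sub-classification -> user preference score (assumed)
    Missing scores default to a neutral 1.0; the linear blend is an assumption.
    """
    total = sum(sub_counts.values())
    weighted = {}
    for sub, count in sub_counts.items():
        bias = (scene_weight * scene_scores.get(sub, 1.0)
                + behavior_weight * behavior_scores.get(sub, 1.0))
        weighted[sub] = (count / total) * bias
    norm = sum(weighted.values())
    entropy = 0.0
    for w in weighted.values():
        if w > 0:
            p = w / norm               # re-normalized biased probability
            entropy -= p * math.log2(p)
    return entropy
```

With neutral scores this reduces to the plain entropy of the video distribution; biasing one sub-classification down (e.g. a sub-classification the user rarely watches) concentrates the distribution and lowers the entropy, so such attributes are less likely to be chosen for prompting.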
CN201810095506.3A 2014-04-30 2014-04-30 Video retrieval method Active CN108133058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810095506.3A CN108133058B (en) 2014-04-30 2014-04-30 Video retrieval method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810095506.3A CN108133058B (en) 2014-04-30 2014-04-30 Video retrieval method
CN201410180892.8A CN103942328B (en) 2014-04-30 2014-04-30 A kind of video retrieval method and video-unit

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201410180892.8A Division CN103942328B (en) 2014-04-30 2014-04-30 A kind of video retrieval method and video-unit

Publications (2)

Publication Number Publication Date
CN108133058A CN108133058A (en) 2018-06-08
CN108133058B true CN108133058B (en) 2022-02-18

Family

ID=51189996

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201410180892.8A Active CN103942328B (en) 2014-04-30 2014-04-30 A kind of video retrieval method and video-unit
CN201810095506.3A Active CN108133058B (en) 2014-04-30 2014-04-30 Video retrieval method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201410180892.8A Active CN103942328B (en) 2014-04-30 2014-04-30 A kind of video retrieval method and video-unit

Country Status (1)

Country Link
CN (2) CN103942328B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107333149A (en) * 2017-06-30 2017-11-07 环球智达科技(北京)有限公司 The aggregation processing method of programme information
CN109286833A (en) * 2018-09-30 2019-01-29 湖南机电职业技术学院 A kind of information processing method and system applied in network direct broadcasting
CN109614517B (en) * 2018-12-04 2023-08-01 广州市百果园信息技术有限公司 Video classification method, device, equipment and storage medium
CN110543862B (en) * 2019-09-05 2022-04-22 北京达佳互联信息技术有限公司 Data acquisition method, device and storage medium
CN111079015B (en) * 2019-12-17 2021-08-31 腾讯科技(深圳)有限公司 Recommendation method and device, computer equipment and storage medium
CN114697748B (en) * 2020-12-25 2024-05-03 深圳Tcl新技术有限公司 Video recommendation method and computer equipment based on voice recognition
CN114120180B (en) * 2021-11-12 2023-07-21 北京百度网讯科技有限公司 Time sequence nomination generation method, device, equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059814A (en) * 2006-04-17 2007-10-24 株式会社理光 Image processing device and image processing method
JP2007293602A (en) * 2006-04-25 2007-11-08 Nec Corp System and method for retrieving image and program
CN102521321A (en) * 2011-12-02 2012-06-27 华中科技大学 Video search method based on search term ambiguity and user preferences
CN102682132A (en) * 2012-05-18 2012-09-19 合一网络技术(北京)有限公司 Method and system for searching information based on word frequency, play amount and creation time
CN102982153A (en) * 2012-11-29 2013-03-20 北京亿赞普网络技术有限公司 Information retrieval method and device
CN103686236A (en) * 2013-11-19 2014-03-26 乐视致新电子科技(天津)有限公司 Method and system for recommending video resource

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080120328A1 (en) * 2006-11-20 2008-05-22 Rexee, Inc. Method of Performing a Weight-Based Search
JP2010055431A (en) * 2008-08-28 2010-03-11 Toshiba Corp Display processing apparatus and display processing method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于内容的视频检索技术分析与研究 (Analysis and Research of Content-Based Video Retrieval Technology); Zhang Huan (张环); China Master's Theses Full-text Database (Information Science and Technology Series); 2013-07-15; pp. I138-1021 *

Also Published As

Publication number Publication date
CN103942328A (en) 2014-07-23
CN103942328B (en) 2018-05-04
CN108133058A (en) 2018-06-08

Similar Documents

Publication Publication Date Title
CN108133058B (en) Video retrieval method
US20220044139A1 (en) Search system and corresponding method
US10380249B2 (en) Predicting future trending topics
US11971925B2 (en) Predicting topics of potential relevance based on retrieved/created digital media files
CN106383887B (en) Method and system for collecting, recommending and displaying environment-friendly news data
Firmino Alves et al. A Comparison of SVM versus naive-bayes techniques for sentiment analysis in tweets: A case study with the 2013 FIFA confederations cup
US9805022B2 (en) Generation of topic-based language models for an app search engine
CN104239373B (en) Add tagged method and device for document
US20140379719A1 (en) System and method for tagging and searching documents
US8380727B2 (en) Information processing device and method, program, and recording medium
US20190205743A1 (en) System and method for detangling of interleaved conversations in communication platforms
MX2013005056A (en) Multi-modal approach to search query input.
US20180046721A1 (en) Systems and Methods for Automatic Customization of Content Filtering
CN103136228A (en) Image search method and image search device
US20170046440A1 (en) Information processing device, information processing method, and program
JP2012164242A (en) Related word extraction device, related word extraction method, related word extraction program
JP2010146171A (en) Representation complementing device and computer program
CN104881447A (en) Searching method and device
CN111160699A (en) Expert recommendation method and system
JP6446987B2 (en) Video selection device, video selection method, video selection program, feature amount generation device, feature amount generation method, and feature amount generation program
CN111597469B (en) Display position determining method and device, electronic equipment and storage medium
CN110245357B (en) Main entity identification method and device
CN112182414A (en) Article recommendation method and device and electronic equipment
WO2019231635A1 (en) Method and apparatus for generating digest for broadcasting
CN113821669A (en) Searching method, searching device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant