CN1253822C - K-nearest-neighbour method for fast similarity queries on video clips - Google Patents

K-nearest-neighbour method for fast similarity queries on video clips

Info

Publication number
CN1253822C
CN1253822C · CN200310108129A · CN 200310108129
Authority
CN
China
Prior art keywords
inquiry
video
similarity
temp
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 200310108129
Other languages
Chinese (zh)
Other versions
CN1538326A (en)
Inventor
Liu Fangjie (刘芳洁)
Dong Daoguo (董道国)
Xue Xiangyang (薛向阳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN 200310108129 priority Critical patent/CN1253822C/en
Publication of CN1538326A publication Critical patent/CN1538326A/en
Application granted granted Critical
Publication of CN1253822C publication Critical patent/CN1253822C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to a fast k-nearest-neighbour query method for similarity retrieval of video clips. The basic steps are: using an Ordered VA-File, find in the video database the T × k nearest neighbours — i.e. the T × k most similar video frames — of each frame of the query clip; sort all of these results by the order of their positions in the video database, and whenever a database frame belongs to the T × k neighbours of several query frames at once, record the numbers of those query frames; finally, scan the resulting sequence with a sliding window and return the k video clips of highest similarity. The invention greatly reduces both the disk-access cost and the CPU cost of video-clip similarity queries, and offers high query efficiency and high query precision.

Description

A k-nearest-neighbour method for fast similarity queries on video clips
Technical field
The invention belongs to data-processing fields such as multimedia information retrieval, data mining and cluster analysis, and specifically relates to a k-nearest-neighbour method that uses a high-dimensional index structure to realize fast similarity queries on video clips.
Background technology
Over the last decade, computers and networks have developed rapidly and digital media information has appeared in huge quantities. To provide efficient access to this mass of multimedia information, the development of multimedia processing and retrieval tools has become a pressing task.
Video is a set of image frames that are continuous in time. It is an unstructured stream that integrates image sequences, images, text and other media, and it is a widely used form of composite media information. If a video file is viewed as a book without a table of contents or an index, then one image frame corresponds to one page of that "book". Because this "book" lacks a table of contents and index information, people cannot browse and retrieve it efficiently, nor read it quickly. To find an interesting video clip, they can only resort to the time-consuming reading methods of "fast forward" and "rewind".
With the rapid growth of digital video data, the time-consuming traditional browsing mode is far from satisfying people's needs for accessing and querying video content. People increasingly wish to find the video clips that interest them quickly in massive video libraries, so effective directory structures must be built for video. In general, video can be divided into several levels by content granularity, from high to low: program, scene, shot and key frame.
A shot is the continuous sequence of image frames recorded by a camera between being switched on and being switched off. Shot boundaries exist objectively and can be detected automatically by suitable methods. In practice, browsing every frame of a shot is very time-consuming for the user, so key-frame techniques are commonly used to enable fast browsing. A key frame is one or more images that are the most important and most representative in a shot; depending on the complexity of the shot's content, one or several key frames may be extracted from it. To build a structural model of video at the semantic level, the video must also be divided into scenes. A scene is defined as a set of semantically related, temporally adjacent shots that together express a high-level concept or story element of the video. A shot is the basic physical unit of a video, whereas a scene (also called a story unit) is the unit of video at the semantic level; usually only a scene conveys relatively complete semantics to the viewer. A program consists of scenes ordered in time — for example a news program, an entertainment program, a sports broadcast or a weather forecast.
Video information retrieval is one of the most difficult research topics in multimedia information retrieval and is currently a research focus of the academic community. Realizing video clip retrieval with the low-level physical features of images and video clips is a very important research direction. Its basic steps are: first, partition each video stream in the video database into shots and extract one or more key frames from each shot; then extract a feature vector from each key frame and use it to characterize the corresponding shot. The query video submitted by the user is processed in the same way at retrieval time. Similarity queries between video clips are then realized by similarity computations on the feature vectors. According to the type of query the user submits, retrieval can be divided into two classes: video shot retrieval and video clip retrieval.
Shot retrieval means that the query clip submitted by the user contains only one shot, so fast similarity retrieval can be performed with the feature vector of that shot's key frame. For this retrieval mode a large number of high-dimensional index structures and similarity search algorithms have been proposed, such as R-Tree [3], X-Tree [4] and VA-File [5]. Video clip retrieval means that the query video submitted by the user may consist of several consecutive shots describing the same semantics. For this class of queries, the query video must first be segmented into shots, and the temporally ordered sequence of key-frame feature vectors is used to characterize the user's query. The similarity between two video clips is usually measured by the similarity between the feature vectors of their key frames [1][2]; without an efficient index structure and a fast search algorithm, retrieving directly on the raw database is very expensive.
The simplest and most direct way to realize video clip retrieval is a sequential scan (SScan) of the original video database: compute, according to the similarity model, the similarity between the query clip and every clip in the database in turn, and return the k clips of highest similarity as the query result. When the volume of video data is large, the whole database must be stored on disk, so SScan incurs a large amount of disk I/O and CPU computation. To accelerate queries and improve search efficiency, the most common approach is to reduce the disk-I/O and CPU costs with an index structure.
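For reference, the SScan baseline just described can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent; `clip_sim` is a placeholder for whatever clip-similarity model is used:

```python
def sscan_knn(query, clips, k, clip_sim):
    """Sequential-scan (SScan) baseline: score every database clip
    against the query and return the k highest-scoring (id, score)
    pairs. Cost grows linearly with the database size, which is why
    the invention replaces it with an index-based method."""
    scored = [(cid, clip_sim(query, clip)) for cid, clip in clips.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:k]
```

With any toy similarity function (e.g. Jaccard overlap of frame ids) this returns the k best clips, but it must touch every clip on every query.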
Traditional high-dimensional index structures (R-Tree, X-Tree, VA-File, etc.) and their similarity retrieval algorithms consider only queries over a single high-dimensional vector, i.e. they support only the shot-retrieval type, so they cannot support queries whose object is a video clip represented by a feature-vector sequence. As far as the published literature goes, the S²-Tree [7] is the only high-dimensional index structure that supports feature-vector sequences. Its main idea is to encode all the high-dimensional vector data and then transform the retrieval of an ordered sequence of high-dimensional vectors into string matching. For video clip retrieval this index structure has severe limitations. First, since it is built on top of the X-Tree, its search efficiency falls below that of sequential scan (SScan) once the dimensionality of the feature vectors exceeds 20, so the S²-Tree is only suited to applications of at most about 20 dimensions; in video retrieval applications, the feature vector extracted from a key frame usually has far more than 20 dimensions. Second, the S²-Tree search algorithm requires the order of the result data to conform strictly to the order of the query data, but in video clip retrieval two clips that are similar in content may have inconsistent shot orders, and for such cases the S²-Tree search algorithm is powerless.
Ordered VA-File [9] is an effective high-dimensional index structure that we proposed recently (a Chinese invention patent has been applied for, application number 03129687.4). It reorganizes the approximation vectors of a VA-File [5] by sorting them and partitions the resulting approximation file into segments, so that only part of the vectors need be examined during a query, thereby realizing fast approximate k-nearest-neighbour queries. Experimental results show that Ordered VA-File achieves a speed-up of up to 100 times over the VA-LOW algorithm [6] of VA-File while obtaining query results of very good quality.
The main contribution of the present invention is a fast approximate k-nearest-neighbour query method for video clip retrieval based on Ordered VA-File [9].
List of references
1. Tan, Y.P., Kulkarni, S.R., Ramadge, P.J. "A framework for measuring video similarity and its application to video query by example", Proceedings of the IEEE International Conference on Image Processing, 1999, 2:106-110.
2. Dimitrova, N., Abdel-Mottaleb, M. "Content-based video retrieval by example video clip", Proceedings of IS&T and SPIE Storage and Retrieval of Image and Video Databases VI, 1998:184-196.
3. Guttman, A. "R-Trees: a dynamic index structure for spatial searching", Proc. ACM SIGMOD Int. Conf. on Management of Data, Boston, MA, 1984:47-57.
4. Berchtold, S., Keim, D.A., Kriegel, H.-P. "The X-Tree: an index structure for high-dimensional data", Proc. of the 22nd VLDB Conference, 1996:28-39.
5. Weber, R., Schek, H.-J., Blott, S. "A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces", Proc. of the 24th VLDB Conference, New York, USA, 1998.
6. Weber, R., Böhm, K. "Trading quality for time with nearest neighbor search", Proc. of the 7th Conf. on Extending Database Technology, Konstanz, Germany, March 2000.
7. Wang, H., Perng, C.-S. "The S2-Tree: an index structure for subsequence matching of spatial objects", 5th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), Hong Kong, 2000.
8. Cheung, S.-C.S., Zakhor, A. "Efficient video similarity measurement with video signature", IEEE Trans. on CAS for Video Technology, Vol. 13, No. 1, Jan. 2003.
9. "A fast similarity search method for high-dimensional vector data", Chinese patent application number 03129687.4. Applicants: Dong Daoguo, Xue Xiangyang. (Its main content is a high-dimensional index structure called Ordered VA-File.)
10. Liu, F., Dong, D., Xue, X. "A fast video clip retrieval algorithm based on VA-File", SPIE Electronic Imaging 2004: Storage and Retrieval for Media Databases 2004, to be published.
Symbol table (meaning of the symbols used throughout this document)
X, Y — two arbitrary video clips, described by feature-vector sequences
x, y, a, b — key-frame images, described by feature vectors
Q — the query video clip, described by a feature-vector sequence
q — a query key-frame image, described by a feature vector
DB — the feature-vector database
T — query control parameter; the larger T is, the more approximation vectors must be examined
k — the number of query results to return
d(x, y) — distance function; computes the distance between high-dimensional vectors x and y
sim(x, y) — similarity function; computes the similarity between high-dimensional vectors x and y
sim(X, Y) — similarity function; computes the similarity between video clips X and Y
d_i — the natural-number id assigned to a key frame according to its position in the database
s_i — the set of all query frames similar to frame d_i of the database
W_min, W_max — the user-defined minimum and maximum possible lengths of a result clip, in frames
P_begin, P_end — the start and end positions of the database clip selected for each similarity computation
R_temp — the set of currently possible query results maintained during the approximate query
V_temp — the database clip selected for the current similarity computation
sim_tempk — the similarity threshold used to judge whether a clip is a possible query result
Summary of the invention
The object of the invention is to propose a k-nearest-neighbour method that performs fast similarity queries on video clips, shortening retrieval time while hardly affecting the quality of the query results.
The k-nearest-neighbour method for fast similarity queries on video clips proposed by the present invention is an algorithm built on Ordered VA-File. It first uses Ordered VA-File to find, for each key frame of the query clip, its T × k most similar neighbours in the video database; it then sorts and scans these neighbour sets to find the k video clips most similar to the query clip and returns them as the result. Since the whole video database need not be scanned, retrieval speed is greatly improved.
The basic steps of the invention are as follows. The similarity between any two video clips is defined as the ratio of "the total number of similar frames in the two clips" to "the sum of the lengths of the two clips". (1) First build the index structure, i.e. the Ordered VA-File, for the high-dimensional feature vector corresponding to each key-frame image in the video database. Concretely, assign each feature vector in the video database a consecutive natural-number id (unique over the entire database, starting from 1) according to its position in the database, build the Ordered VA-File index over the feature vectors, and store the resulting index file on disk. (2) For each frame of the query clip submitted by the user, use the Ordered VA-File to find its T × k nearest neighbours, where T is the query control parameter; keep the T × k neighbours of all query frames in main memory. (3) Sort the T × k neighbours of all query frames by their positions in the video database; if a database frame belongs simultaneously to the T × k neighbours of n query frames, record the numbers of those query frames. Sorting yields an ordered sequence of pairs ⟨d_0, s_0⟩, ⟨d_1, s_1⟩, …, ⟨d_n, s_n⟩, where d_i is the position of the frame in the database and s_i is the set of numbers of all query frames similar to d_i. All sorting is done in main memory, with no disk access. (4) Scan this ordered sequence with a scanning algorithm; using the user-defined maximum length W_max and minimum length W_min of the query result, compute the similarity between the query clip and every fragment that might be a k-nearest neighbour, and return the k fragments of highest similarity.
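The merge in steps (2)-(3) — turning the per-frame neighbour lists into the sorted pair sequence — can be sketched as follows. This is a minimal illustration that assumes the T × k neighbours of each query frame are already available as lists of database positions; the Ordered VA-File lookup itself is not reproduced here:

```python
def build_ordered_sequence(neighbour_lists):
    """Merge per-query-frame neighbour lists into the ordered pair
    sequence (d, s): d is a database position, s the set of query-frame
    numbers that have database frame d among their T*k neighbours."""
    by_position = {}
    for query_frame, positions in enumerate(neighbour_lists):
        for d in positions:
            by_position.setdefault(d, set()).add(query_frame)
    # sort by database position, as the subsequent window scan requires
    return sorted(by_position.items())
```

Because the sequence is built and sorted entirely in main memory, this step involves no disk access, as the text notes.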
In the present invention, the parameters — the control parameter T and the maximum length W_max and minimum length W_min of the result clips — can be adapted to the concrete application; by tuning them, the retrieval system trades off query efficiency against query quality. Typically T ranges over 3-10.
In the present invention, two similar video clips are not required to have the same number of frames or the same number of shots, nor are the similar shots required to appear in the same or a similar order within the clips.
In the above method, since the sequence length is far smaller than the database size and the scan is done entirely in main memory, the disk cost is reduced significantly and retrieval speed is improved.
Embodiment
In the present invention, the similarity between video frames is defined as follows. Let DB be the feature-vector database, q a query vector, y a feature vector in DB, and T the query control parameter. Then

    sim(q, y) = 1,  if Σ_{x ∈ DB} 1[d(q, x) < d(q, y)] < T·k
                0,  otherwise

i.e. sim(q, y) = 1 exactly when y is among the T·k nearest neighbours of q. If the similarity value between two video frame images equals 1, they are said to be similar frames of each other. With this definition, whether two frames are judged similar depends only on the number k of results the user is interested in. Compared with "defining an absolute distance threshold to judge similarity", this gives the user greater flexibility and feasibility, because in many situations no absolute threshold can be defined.
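A brute-force rendering of this rank-based frame similarity follows, for illustration only — in the invention the rank test is answered approximately by the Ordered VA-File rather than by scanning DB:

```python
import math

def frame_similarity(q, y, db, T, k):
    """sim(q, y) = 1 iff fewer than T*k database vectors lie strictly
    closer to q than y does, i.e. y is among q's T*k nearest
    neighbours; otherwise 0."""
    dist_qy = math.dist(q, y)            # Euclidean distance d(q, y)
    closer = sum(1 for x in db if math.dist(q, x) < dist_qy)
    return 1 if closer < T * k else 0
```

Note that no distance threshold appears anywhere: only the rank of y among q's neighbours matters, which is exactly the flexibility the text argues for.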
In the present invention, the similarity between video clips is defined as follows. Let X and Y denote two clips. For a frame x in X, if at least one frame of Y is similar to it, x is called a Y-similar frame of X; the number of Y-similar frames in X can be written Σ_{x∈X} 1{∃ y∈Y : sim(x, y) = 1}, and likewise the number of X-similar frames in Y is Σ_{y∈Y} 1{∃ x∈X : sim(x, y) = 1}. The similarity between X and Y is then

    sim(X, Y) = ( Σ_{x∈X} 1{∃ y∈Y : sim(x, y) = 1} + Σ_{y∈Y} 1{∃ x∈X : sim(x, y) = 1} ) / ( |X| + |Y| )
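This clip-level formula can be evaluated directly once a frame-level predicate is fixed. A sketch, where `frame_sim` is any 0/1 frame-similarity predicate implementing the frame-level definition (passed in as a placeholder argument):

```python
def clip_similarity(X, Y, frame_sim):
    """sim(X, Y): frames of X having at least one similar frame in Y,
    plus frames of Y having at least one similar frame in X,
    divided by the total length |X| + |Y|."""
    x_hits = sum(1 for x in X if any(frame_sim(x, y) == 1 for y in Y))
    y_hits = sum(1 for y in Y if any(frame_sim(x, y) == 1 for x in X))
    return (x_hits + y_hits) / (len(X) + len(Y))
```

The value lies in [0, 1]: it is 1 when every frame of each clip has a match in the other, and 0 when no frame does.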
In the present invention, the algorithm for building the Ordered VA-File index over the video database is described in our earlier patent application [9].
In the present invention, the approximate k-nearest-neighbour algorithm based on Ordered VA-File is further described as follows.
Let Q be the query clip submitted by the user, and let the length (number of frames) of the clips to be returned lie between W_min and W_max (W_min < W_max). The detailed query steps are:
1) Use the Ordered VA-File to quickly obtain the T × k nearest neighbours of each key frame of Q, and determine their similarity relations according to the similarity rule above.
2) Sort the T × k neighbours of all query frames by their positions in the video database; if a database frame belongs to the T × k neighbours of several query frames at once, record the numbers of those query frames. Write the resulting ordered sequence as ⟨+∞, 0⟩, ⟨d_0, s_0⟩, ⟨d_1, s_1⟩, …, ⟨d_n, s_n⟩, with d_i < d_j whenever i < j, where d_i is the position of the frame in the database and s_i is the set of numbers of all query frames similar to d_i. From d, s and the similarity model, the similarity between any database clip and the query can be computed. Concretely, let the database clip be [p, q] with p < q and let the query length be L; if the longest ordered subsequence contained in [p, q] is ⟨d_i, s_i⟩, ⟨d_{i+1}, s_{i+1}⟩, …, ⟨d_{i+j}, s_{i+j}⟩, then the similarity between [p, q] and the query clip is

    ( (j + 1) + |s_i ∪ s_{i+1} ∪ … ∪ s_{i+j}| ) / ( (q − p + 1) + L )
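Under these definitions, the similarity of a database window [p, q] against a query of length L can be computed from the pair sequence alone. A sketch:

```python
def window_similarity(pairs, p, q, L):
    """Score database window [p, q] (inclusive positions) against a
    query of length L: (number of matched positions inside the window
    + size of the union of their query-frame sets) / (window length
    + query length), as in the formula above."""
    inside = [s for d, s in pairs if p <= d <= q]
    union = set().union(*inside)   # union of the s_i sets in the window
    return (len(inside) + len(union)) / ((q - p + 1) + L)
```

`pairs` is the sorted list of (d_i, s_i) pairs; a window containing no matched position scores 0.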
3) Initialize P_end = d_0, the candidate result set R_temp = ∅, and sim_tempk = 0.
4) Scan the sequence and check whether there exists d_i such that W_min < P_end − d_i ≤ W_max and P_end − d_{i−1} > W_max. If such a d_i exists, go to 5); otherwise go to 7).
5) Set P_begin = d_i and V_temp = [P_begin, P_end]; compute the similarity sim(Q, V_temp) with the similarity model. If this similarity exceeds the current k-th-neighbour similarity sim_tempk, go to 6); otherwise go to 8).
6) If no sequence in R_temp overlaps the current one, set R_temp = R_temp + {V_temp}; otherwise compare the current sequence with the highest-similarity overlapping sequence in R_temp and keep in R_temp only the one of higher similarity. If R_temp already contains the current sequence, sim(Q, V_temp) exceeds the current sim_tempk, and R_temp holds more than k elements, set sim_tempk = sim(Q, V_temp); if R_temp holds exactly k elements, set sim_tempk to the minimum similarity over all its elements. Go to 8).
7) If P_end − W_min > 0, set P_begin = P_end − W_min; otherwise P_begin = 1. Set V_temp = [P_begin, P_begin + W_min] and compute sim(Q, V_temp) with the similarity model. If this similarity exceeds the current sim_tempk, go to 6); otherwise go to 8).
8) If P_end ≠ d_n, set P_end = d_{i+1} and go to 4); otherwise terminate and return R_temp.
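Steps 3)-8) amount to a single left-to-right sweep over the match positions. The following is a simplified sketch of that sweep — it condenses the candidate-set bookkeeping of step 6) (among overlapping windows, only the best-scoring one is kept) and omits the sim_tempk pruning threshold, which affects speed but not the result:

```python
def window_scan(pairs, L, w_min, w_max, k):
    """Sweep over match positions; for each end position pick a start
    within the allowed window length, score the window with the
    similarity model, and keep the k best non-overlapping windows."""
    positions = sorted(d for d, _ in pairs)

    def score(p, q):
        # similarity of window [p, q] against a query of length L
        inside = [s for d, s in pairs if p <= d <= q]
        union = set().union(*inside)
        return (len(inside) + len(union)) / ((q - p + 1) + L)

    results = []  # list of (score, start, end)
    for p_end in positions:
        # steps 4/5: prefer a start at an actual match position
        starts = [d for d in positions if w_min < p_end - d <= w_max]
        # step 7 fallback: a minimum-length window ending at p_end
        p_begin = min(starts) if starts else max(1, p_end - w_min)
        sc = score(p_begin, p_end)
        # step 6 (condensed): among overlapping candidates keep the best
        overlapping = [r for r in results
                       if not (r[2] < p_begin or r[1] > p_end)]
        if not overlapping:
            results.append((sc, p_begin, p_end))
        elif sc > max(r[0] for r in overlapping):
            results = [r for r in results if r not in overlapping]
            results.append((sc, p_begin, p_end))
    return sorted(results, reverse=True)[:k]
```

Because the sweep touches only the matched positions (at most n + 1 pairs) rather than every database frame, its cost is tied to the query result size, not to the database size.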
In the present invention, the relevant parameters are determined as follows:
1) The criteria for setting the parameters of the Ordered VA-File index structure are described in detail in [9].
2) Determining T: the larger T is, the more neighbours are retrieved per query frame, and the more data must be sorted and scanned, so query time increases; at the same time, more information is obtained for each frame, so the accuracy of the query result is also higher.
In summary, the present invention proposes an algorithm that uses Ordered VA-File to perform fast, approximate, k-nearest-neighbour retrieval of video clips, can adapt query speed and query precision to the user's demands, and achieves high search efficiency.
The method of the invention has been verified experimentally on many examples; the following is one such result.
The experimental data come from BBC television and include various types of programs such as news and sports. Each video was first segmented into shots and one key frame was extracted from each shot; the whole database contains 50,000 key-frame images. A colour histogram was extracted from each key frame as its feature vector, with dimensionality 192. The experimental machine was a PC with a PIII 1 GHz CPU and 256 MB of memory, running Windows 2000 Server; the compiler environment was Borland C++ Builder 6.0.
In the experiments, the 50,000 feature vectors were divided into 1,000 segments, of which 50 vector segments were examined uniformly during each k-nearest-neighbour query; the minimum and maximum lengths of the result clips were set to the query clip length and 1.5 times the query clip length, respectively.
In the query-speed test, the algorithm was compared with the sequential-scan algorithm and with the clip search algorithm based on VA-File. Compared with sequential scan, the query speed of this algorithm is improved by more than 30 times; compared with the VA-File-based clip search algorithm, query speed is improved by more than 10 times. The experimental results show that, in terms of speed, the algorithm fully meets the requirements of real-time video clip queries.
In the query-precision test, the algorithm was compared with the sequential-scan algorithm and with the algorithm of [10]. Because of the good result quality of Ordered VA-File for k-nearest-neighbour queries, this algorithm attains excellent query precision against sequential scan — above 90%. Compared with the algorithm of [10]: although that algorithm adopts a very complex similarity model to guarantee result quality while the similarity model adopted here is comparatively simple to compute, the experiments show that the quality of the result sets obtained by the two is almost identical, while their query times differ by an order of magnitude.
The experiments show that, in both query speed and query precision, the algorithm obtains excellent results and can readily be applied to realize real-time video clip retrieval in mature multimedia information retrieval systems.

Claims (1)

1. A k-nearest-neighbour search method for fast similarity queries on video clips, characterized by the following basic steps: (1) first build the index structure, i.e. the Ordered VA-File, for the high-dimensional feature vector corresponding to each key-frame image in the video database; (2) for each frame of the query clip submitted by the user, use the Ordered VA-File to find its T × k nearest neighbours, where T is the query control parameter; (3) sort the T × k neighbours of all query frames by the order of their positions in the video database; if a frame of the video database belongs simultaneously to the T × k neighbours of n query frames, record the numbers of those query frames; sorting yields an ordered sequence of pairs ⟨d_0, s_0⟩, ⟨d_1, s_1⟩, …, ⟨d_n, s_n⟩, where d_i is the position of the frame in the database and s_i is the set of numbers of all query frames similar to d_i; (4) scan this ordered sequence of pairs; using the user-defined maximum length W_max and minimum length W_min of the query result, compute the similarity between the query clip and every fragment that might be a k-nearest neighbour, and return the k fragments of highest similarity, wherein:
The rule for judging whether two video frame images are similar is as follows: let DB be the feature-vector database, q a query vector, y a feature vector in DB, and T the query control parameter; then

    sim(q, y) = 1,  if Σ_{x ∈ DB} 1[d(q, x) < d(q, y)] < T·k
                0,  otherwise

If the similarity value between two video frame images equals 1, they are said to be similar frames of each other;
The algorithm for obtaining approximate k nearest neighbours with the Ordered VA-File is as follows: let Q be the query clip submitted by the user, and let the length of the clips to be returned lie between W_min and W_max;
1) use the Ordered VA-File to quickly obtain the T × k nearest neighbours of each key frame of Q, and determine their similarity relations according to the similarity rule;
2) sort the T × k neighbours of all query frames by their positions in the video database; if a database frame belongs to the T × k neighbours of several query frames at once, record the numbers of those query frames; write the resulting ordered sequence as ⟨+∞, 0⟩, ⟨d_0, s_0⟩, ⟨d_1, s_1⟩, …, ⟨d_n, s_n⟩, with d_i < d_j whenever i < j, where d_i is the position of the frame in the database and s_i is the set of numbers of all query frames similar to d_i; the similarity between any database clip and the query video is obtained from d, s and the similarity model as follows: let the database clip be [p, q] with p < q and let the query length be L; if the longest ordered subsequence contained in [p, q] is ⟨d_i, s_i⟩, ⟨d_{i+1}, s_{i+1}⟩, …, ⟨d_{i+j}, s_{i+j}⟩, then the similarity between [p, q] and the query clip is ( (j + 1) + |s_i ∪ … ∪ s_{i+j}| ) / ( (q − p + 1) + L );
3) initialize P_end = d_0, the candidate result set R_temp = ∅, and sim_tempk = 0;
4) scan the sequence and check whether there exists d_i such that W_min < P_end − d_i ≤ W_max and P_end − d_{i−1} > W_max; if such a d_i exists, go to 5), otherwise go to 7);
5) set P_begin = d_i and V_temp = [P_begin, P_end]; compute the similarity sim(Q, V_temp) with the similarity model; if this similarity exceeds the current k-th-neighbour similarity sim_tempk, go to 6), otherwise go to 8);
6) if no sequence in R_temp overlaps the current one, set R_temp = R_temp + {V_temp}; otherwise compare the current sequence with the highest-similarity overlapping sequence in R_temp and keep in R_temp only the one of higher similarity; if R_temp already contains the current sequence, sim(Q, V_temp) exceeds the current sim_tempk, and R_temp holds more than k elements, set sim_tempk = sim(Q, V_temp); if R_temp holds exactly k elements, set sim_tempk to the minimum similarity over all its elements; go to 8);
7) if P_end − W_min > 0, set P_begin = P_end − W_min, otherwise P_begin = 1; set V_temp = [P_begin, P_begin + W_min] and compute sim(Q, V_temp) with the similarity model; if this similarity exceeds the current sim_tempk, go to 6), otherwise go to 8);
8) if P_end ≠ d_n, set P_end = d_{i+1} and go to 4); otherwise terminate and return R_temp.
CN 200310108129 2003-10-23 2003-10-23 K-nearest-neighbour method for fast similarity queries on video clips Expired - Fee Related CN1253822C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200310108129 CN1253822C (en) 2003-10-23 2003-10-23 K-nearest-neighbour method for fast similarity queries on video clips

Publications (2)

Publication Number Publication Date
CN1538326A CN1538326A (en) 2004-10-20
CN1253822C true CN1253822C (en) 2006-04-26

Family

ID=34334520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200310108129 Expired - Fee Related CN1253822C (en) 2003-10-23 2003-10-23 K near neighbour method used for video frequency fragment fast liklihood inquiry

Country Status (1)

Country Link
CN (1) CN1253822C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8941741B1 (en) * 2014-03-25 2015-01-27 Fmr Llc Authentication using a video signature
CN105959687A (en) * 2016-06-23 2016-09-21 北京天文馆 Video coding method and device
CN107153670B (en) * 2017-01-23 2020-08-14 合肥麟图信息科技有限公司 Video retrieval method and system based on multi-image fusion


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20060426

Termination date: 20141023

EXPY Termination of patent right or utility model