CN102222103B - Method and device for processing matching relationship of video content


Info

Publication number: CN102222103B (application published as CN102222103A)
Application number: CN201110169978
Authority: CN (China)
Prior art keywords: video, video content, content, features, match
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 苗广艺, 张名举
Assignee (original and current): CCTV INTERNATIONAL NETWORKS Co Ltd

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a method and a device for processing the matching relationship of video content. The method comprises the following steps: acquiring video content, and determining the video type of the video content according to parameters of the video content; extracting video features of the video content according to the video type; and querying a video feature library for the matching video content corresponding to the video content according to the video features, and generating a video association relationship library, wherein, when matching video content is successfully found, the association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library. By automatically matching video content, the method improves the efficiency of video search and saves labor cost.

Description

Method and device for processing the matching relationship of video content
Technical field
The present invention relates to the video field, and in particular to a method and device for processing the matching relationship of video content.
Background art
With the rapid development of the media industry and the Internet video industry, video websites put a large number of new videos online every day. Besides videos uploaded by users and videos produced by the websites themselves, many of these videos come from media producers. The source material of a media producer is generally the live television signal; because there are a great many television channels, the number of videos a media producer makes every day is also very large, covering all kinds of TV programs.
In general, the videos provided by media producers take two common forms: complete program videos and program fragment videos. A complete program video is relatively long, such as a complete ball game or a complete news broadcast. A program fragment video is generally a clip edited out of a program and is relatively short, such as a single soccer goal shot or one news item. Both kinds of video come from the same video signal source; although they are different video files, in terms of content a program fragment video is part of a complete program video and can be located at a corresponding time segment of the complete program video.
Therefore, an association relationship exists between program fragment videos and complete program videos: each program fragment video corresponds to a complete program video and appears within a time segment of it. This association is extremely important. With this information, for a program fragment video one can find the complete program video that contains it and the time point at which it appears; for a complete program video, one can find which program fragments it contains and the time point at which each fragment appears. Such association information is genuinely based on video content; it mines the content-level relationship between videos and is a kind of high-level association information. We call this novel association relationship a "content repetition association".
Existing video websites put a large number of complete program videos and program fragment videos online every day, but no content repetition association exists between them, because there has been no effective method for discovering this association conveniently.
For the above problem, associations between Internet videos can be established through manually entered catalog information, i.e., text entered by hand such as the video name, TV column name, and actor list. This traditional association therefore depends heavily on manual input and does not mine any content-level information. In a subsequent manual workflow for matching complete program videos with program fragment videos, one must not only search through a massive number of videos but also locate the corresponding time point in each video; since media producers produce a huge number of videos every day, the manual search workload is enormous and can hardly be completed.
For the problem in the related art that manually establishing associations between Internet videos makes the search work in the video content matching process heavy and inefficient, no effective solution has yet been proposed.
Summary of the invention
The present invention is proposed in view of the problem in the related art that manually establishing associations between Internet videos makes the search work in the video content matching process heavy and inefficient. The main purpose of the present invention is therefore to provide a method and device for processing the matching relationship of video content, so as to solve the above problem.
To achieve this goal, according to one aspect of the present invention, a method for processing the matching relationship of video content is provided. The method comprises: acquiring video content and determining the video type of the video content according to parameters of the video content; extracting video features of the video content according to the video type; and querying a video feature library for the matching video content corresponding to the video content according to the video features, and generating a video association relationship library, wherein, when matching video content is successfully found, the association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library.
Further, the video type comprises complete program video and program fragment video, wherein acquiring the video content and determining its video type according to parameters of the video content comprises: acquiring the video content; and determining the video type by checking an attribute flag of the video content, or determining the video type according to the video length of the video content, wherein, when the video length is greater than or equal to a first threshold, the video content is a complete program video; when the video length is less than or equal to a second threshold, the video content is a program fragment video; and the first threshold is greater than the second threshold.
Further, extracting the video features of the video content and querying the video feature library for the matching video content according to the video features comprises: extracting a time window of predetermined length from the video content; uniformly sampling a predetermined number of video features within the time window; combining the sampled video features to obtain the window video feature of the time window; and querying the video feature library for the matching video content that matches the window video feature.
Further, querying the video feature library for the matching video content that matches the window video feature comprises: computing the distance between the window video feature and every window video feature in the video feature library, wherein, when the distance is less than or equal to a validation value, the window video feature is matched successfully.
Further, the video features comprise image features and audio features. Extracting the image features of the video content comprises: dividing an image of the video content into blocks; extracting the image feature of each image block; and combining the image features corresponding to the blocks to obtain the image feature. Extracting the audio features of the video content comprises: dividing the video content into uniform time slices of a predetermined time slice length, with adjacent time slices overlapping; and extracting the audio feature of each time slice interval.
Further, before acquiring the video content, the method also comprises: producing the video content to obtain a video file, and establishing the video feature library, the video feature library comprising a complete program feature library and a program fragment feature library.
Further, querying the video feature library for the matching video content according to the video features and generating the video association relationship library comprises: when the video content is a complete program video, matching the video features in each time window of the complete program video against the program fragment feature library to obtain one or more first program fragment videos corresponding to the complete program video, and saving the association relationship between the complete program video and each first program fragment video to the video association relationship library; or, when the video content is a program fragment video, matching the video features in the first time window of the program fragment video against the complete program feature library to obtain the first complete program video corresponding to the program fragment video, and saving the association relationship between the program fragment video and the first complete program video to the video association relationship library.
Further, after querying the video feature library for the matching video content according to the video features and generating the video association relationship library, the method also comprises: reading video content; querying the video association relationship library for the matching video content corresponding to the video content; and, when the matching video content is found successfully, displaying the matching video corresponding to the video content.
Further, when the matching video content is found successfully, displaying the matching video corresponding to the video content comprises: when the video content is a program fragment video, directly playing the first complete program video that was found; or, when the video content is a complete program video, directly playing the one or more first program fragment videos that were found, and marking the one or more first program fragment videos as labels on the progress bar of the complete program video.
To achieve this goal, according to another aspect of the present invention, a device for processing the matching relationship of video content is provided. The device comprises: a video type processing unit for determining the video type of the acquired video content according to its parameters; an extraction unit for extracting the video features of the video content according to the video type; and a matching processing unit for querying the video feature library for the matching video content corresponding to the video content according to the video features and generating a video association relationship library, wherein, when matching video content is successfully found, the association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library.
Further, the video type processing unit comprises: a receiving module for acquiring the video content; and a verification module for determining the video type of the video content by checking its attribute flag, or determining the video type according to the video length of the video content, the video type comprising complete program video and program fragment video, wherein, when the video length is greater than or equal to a first threshold, the video content is a complete program video; when the video length is less than or equal to a second threshold, the video content is a program fragment video; and the first threshold is greater than the second threshold.
Further, the extraction unit comprises: an acquisition module for extracting one or more time windows of predetermined length from the video content; and a sampling module for uniformly sampling a predetermined number of video features within the time window.
Further, the matching processing unit comprises: a combination module for combining the sampled video features to obtain the window video feature of the time window; and a query module for querying the video feature library for the matching video content that matches the window video feature.
Further, the query module comprises: a comparison module for computing the distance between the window video feature and every window video feature in the video feature library, wherein, when the distance is less than or equal to a validation value, the window video feature is matched successfully.
Further, the extraction unit is a first extraction unit or a second extraction unit. The first extraction unit is used to extract the image features of the video content and comprises: a first division module for dividing an image of the video content into blocks; a first acquisition module for extracting the image feature of each image block; and a second combination module for combining the image features corresponding to the blocks to obtain the image feature. The second extraction unit is used to extract the audio features of the video content and comprises: a second division module for dividing the video content into uniform time slices of a predetermined time slice length, with adjacent time slices overlapping; and a second acquisition module for extracting the audio feature of each time slice interval.
Further, the device also comprises: a feature library creation unit for establishing the video feature library when the video content is produced, the video feature library comprising a complete program feature library and a program fragment feature library.
Further, the matching processing unit comprises a first matching processing unit or a second matching processing unit. The first matching processing unit is used, when the video content is a complete program video, to match the video features in each time window of the complete program video against the program fragment feature library to obtain one or more first program fragment videos corresponding to the complete program video, and to save the association relationship between the complete program video and each first program fragment video to the video association relationship library. The second matching processing unit is used, when the video content is a program fragment video, to match the video features in the first time window of the program fragment video against the complete program feature library to obtain the first complete program video corresponding to the program fragment video, and to save the association relationship between the program fragment video and the first complete program video to the video association relationship library.
Further, the device also comprises: a reading unit for reading video content; a query processing unit for querying the video association relationship library for the matching video content corresponding to the video content; and a playback unit for displaying the matching video corresponding to the video content when the matching video content is found successfully.
Further, the playback unit comprises: a first playback unit for directly playing the first complete program video that was found when the video content is a program fragment video; or a second playback unit for directly playing the one or more first program fragment videos that were found when the video content is a complete program video, and marking the one or more first program fragment videos as labels on the progress bar of the complete program video.
Through the present invention, video content is acquired and its video type is determined according to parameters of the video content, the video type comprising complete program video and program fragment video; video features of the video content are extracted according to the video type; and the video feature library is queried for the matching video content according to the video features, generating a video association relationship library in which the successfully matched video content, its corresponding matching video content, and the association relationship between them are saved. This solves the problem in the related art that manually establishing associations between Internet videos makes the search work in the video content matching process heavy and inefficient, and, by automatically matching video content, improves the efficiency of video search and saves labor cost.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the present invention and form a part of this application. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a structural diagram of a device for processing the matching relationship of video content according to an embodiment of the invention;
Fig. 2 is a schematic diagram of image blocking according to an embodiment of the invention;
Fig. 3 is a flowchart of a method for processing the matching relationship of video content according to an embodiment of the invention;
Fig. 4 is a flowchart of a video feature extraction method for video content according to an embodiment of the invention;
Fig. 5 is a flowchart of a method of creating the video association relationship library according to an embodiment of the invention;
Fig. 6 is a flowchart of playing the queried video content according to an embodiment of the invention.
Detailed description of the embodiments
It should be noted that, where no conflict arises, the embodiments in this application and the features within the embodiments may be combined with one another. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 is a structural diagram of a device for processing the matching relationship of video content according to an embodiment of the invention. As shown in Fig. 1, the device comprises: a video type processing unit 10 for determining the video type of the acquired video content according to its parameters; an extraction unit 30 for extracting the video features of the video content according to the video type; and a matching processing unit 50 for querying the video feature library for the matching video content corresponding to the video content according to the video features and generating a video association relationship library. When matching video content is successfully found, the association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library; that is, the ID of the successfully matched video content and its corresponding matching video content can be saved to the video association relationship library, together with the association relationship between them.
The above embodiment establishes the content repetition association through the matching processing unit 50 using an algorithm that matches video content based on video features. It can completely replace manual work: it automatically analyzes the matching videos of all video content and establishes the content repetition associations between them, with very high search efficiency and very accurate time points. It thus replaces manually established associations between Internet videos, solves the problem that manual video content matching makes the search work heavy and inefficient, improves the efficiency of video search through automatic matching, and saves labor cost. On top of the content repetition associations, content-based matching can further be used to search for the source of a video fragment and to generate highlight clips automatically.
As shown in Fig. 1, the video type processing unit 10 in the above embodiment can comprise: a receiving module 101 for acquiring the video content; and a verification module 102 for determining the video type by checking an attribute flag of the video content, or determining the video type according to the video length of the video content, the video type comprising complete program video and program fragment video. When the video length is greater than or equal to a first threshold, the video content is a complete program video; when the video length is less than or equal to a second threshold, it is a program fragment video; when the video length lies between the second threshold and the first threshold, it is treated as both a complete program video and a program fragment video; and the first threshold is greater than the second threshold.
The video type processing unit 10 can thus distinguish the type of the received video content according to an attribute of the video itself, or divide videos into types by length thresholds.
Specifically, in some application scenarios a video carries an attribute flag, set at production time, marking it as a complete program or a program fragment; in that case the video can be classified directly by this attribute. In other scenarios the video has no such flag and can only be classified by its length, for example with a dual-threshold overlapping classification. Two thresholds are set, Threshold1 and Threshold2, with Threshold1 less than Threshold2. If the video length is less than Threshold1, the video is considered a program fragment video; if the length is greater than Threshold2, it is considered a complete program video; and if the length lies between Threshold1 and Threshold2, it is considered both a program fragment video and a complete program video.
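A minimal sketch of this dual-threshold overlapping classification follows; the concrete threshold values are illustrative assumptions, since the patent does not specify them.

```python
# Dual-threshold overlapping classification: a video whose length falls
# between the two thresholds belongs to both classes. The values below
# are assumed, not taken from the patent.
THRESHOLD1 = 10 * 60   # seconds; assumed lower threshold
THRESHOLD2 = 30 * 60   # seconds; assumed upper threshold

def classify_video(length_seconds, attribute_flag=None):
    """Return the set of types the video belongs to."""
    if attribute_flag in ("fragment", "complete"):
        return {attribute_flag}              # trust the production-time flag
    if length_seconds < THRESHOLD1:
        return {"fragment"}                  # shorter than Threshold1
    if length_seconds > THRESHOLD2:
        return {"complete"}                  # longer than Threshold2
    return {"fragment", "complete"}          # between thresholds: both types
```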
The extraction unit 30 in the above embodiment can comprise the following functional modules: an acquisition module 301 for extracting a time window of predetermined length from the video content; and a sampling module 302 for uniformly sampling a predetermined number of video features within the time window. These modules realize the selection and extraction of the video features of the video content.
In particular, the extraction unit 30 can perform feature matching over a time window of s seconds: f features are uniformly sampled within the s-second window and compared, and the comparison result serves as the matching result of the video in this time window. For example, with s = 10 seconds as a time window, f = 10 features are chosen uniformly within the window and concatenated as the feature of the time window. Compared with existing techniques that match two videos on a single frame image or a single time slice, this method reduces the error of the matching result and gives a better matching effect.
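A minimal sketch of this window-feature construction follows, assuming s = 10 and f = 10 as in the example; extract_feature is a hypothetical per-time-point feature extractor, not an API from the patent.

```python
import numpy as np

def window_feature(video, start, extract_feature, s=10.0, f=10):
    """Concatenate f features uniformly sampled from the s-second window at `start`."""
    times = [start + i * s / f for i in range(f)]        # uniform sample points
    feats = [extract_feature(video, t) for t in times]   # one vector per point
    return np.concatenate(feats)                         # the window video feature
```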
For different video features, the extraction unit 30 in the above embodiment can be a first extraction unit or a second extraction unit. The video features involved can be image features or audio features: the image feature is any one of a global image feature, a color histogram feature, or a YUV color feature, and the audio feature is either a Mel-frequency cepstral coefficient (MFCC) feature or a Fourier coefficient feature. The first and second extraction units describe the video with these two kinds of features together; during video matching, the image features and the audio features are matched independently, and only when both kinds of features match successfully is the video match considered successful. This guarantees a higher matching accuracy.
Preferably, the first extraction unit is used to extract the image features of the video content and comprises: a first division module for dividing an image of the video content into blocks; a first acquisition module for extracting the image feature of each image block, which can be a histogram image feature; and a second combination module for combining the image features corresponding to the blocks to obtain the image feature.
Preferably, the second extraction unit is used to extract the audio features of the video content and comprises: a second division module for dividing the video content into uniform time slices of a predetermined time slice length, with adjacent time slices overlapping; and a second acquisition module for extracting the audio feature of each time slice interval. Specifically, the second extraction unit can be used to extract the MFCC features of the video content: the audio is divided into uniform time slices of the predetermined length, the MFCC feature of each time slice is extracted with adjacent slices overlapping, and differential parameters describing the dynamic characteristics of the audio are added to the MFCC features.
Because there are many features that can describe image content, and the purpose here is to describe the overall condition of the video, the first extraction unit extracts global image features rather than local features. To guarantee the speed of feature extraction, color histogram features can be selected, since they both describe the overall condition of an image and can be computed quickly. Preferably, the YUV color space is chosen: compared with the RGB color space, it better matches the visual characteristics of the human eye.
In addition, because the histogram feature of a whole image contains no positional information, the image can be divided into blocks, a histogram feature extracted from each block, and these features combined into the overall feature of the image, so that the image feature carries some positional information.
The image blocking is shown in Fig. 2. First, the image is cut into a 3x3 grid of nine cells, with ratios of 0.25 : 0.5 : 0.25 (i.e., 1 : 2 : 1) in both the horizontal and vertical directions. Cut this way, the center cell occupies one quarter of the image area, the four corner cells together occupy another quarter, and the four edge cells occupy the remaining half. Each cell is given a different weight: the center cell is the most important and gets the largest weight of 4; the four corner cells are the least important and get weight 0; and the other four cells get weight 1. A histogram feature is then extracted from each cell, the feature of each cell is multiplied by its weight, and the weighted features are concatenated in order as the overall feature of the image.
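A minimal sketch of this weighted block histogram follows. The 16-bin per-channel histogram size is an assumption; the patent does not specify the histogram resolution.

```python
import numpy as np

BLOCK_WEIGHTS = np.array([[0, 1, 0],
                          [1, 4, 1],
                          [0, 1, 0]])   # center most important, corners ignored

def block_histogram_feature(yuv_image, bins=16):
    """Weighted, concatenated per-block YUV histograms (bin count assumed)."""
    h, w = yuv_image.shape[:2]
    ys = [0, h // 4, (3 * h) // 4, h]    # 1 : 2 : 1 split vertically
    xs = [0, w // 4, (3 * w) // 4, w]    # 1 : 2 : 1 split horizontally
    parts = []
    for i in range(3):
        for j in range(3):
            block = yuv_image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            hist = np.concatenate(
                [np.histogram(block[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]).astype(float)
            hist /= max(block.size // 3, 1)              # normalize by pixel count
            parts.append(BLOCK_WEIGHTS[i, j] * hist)     # weight each block
    return np.concatenate(parts)                         # overall image feature
```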
The second extraction unit adopts Mel-frequency cepstral coefficient (MFCC) features, which not only describe the characteristics of the audio well in the frequency domain but also, compared with audio features such as Fourier coefficients, are closer to the auditory characteristics of the human ear. In speech recognition algorithms, differential parameters characterizing the dynamic properties of speech are often added to the extracted speech features to improve recognition performance. In this system, the first-order and second-order difference parameters of the MFCC parameters are preferably extracted as well, so that the generated audio features improve the accuracy of the system.
In addition, to preserve the continuity of the audio features, the second extraction unit can use time slices 0.08 seconds long when extracting audio features, with adjacent time slices overlapping; the overlap length can be half a time slice, i.e., 0.04 seconds. Adjacent audio features then have a certain continuity over the audio data, which reduces the drop in matching accuracy caused by overly long time slices. In this manner, 25 audio features are extracted per second of audio on average.
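A minimal sketch of this slicing and MFCC-plus-differences extraction follows, using librosa as an illustrative stand-in; the patent names no library, and the 13-coefficient MFCC size is an assumption.

```python
import librosa
import numpy as np

def audio_features(wav_path, slice_len=0.08, hop=0.04, n_mfcc=13):
    """MFCCs with first- and second-order differences over overlapping slices."""
    y, sr = librosa.load(wav_path, sr=None)
    n_fft = int(slice_len * sr)                 # 0.08 s analysis slice
    hop_length = int(hop * sr)                  # 0.04 s hop -> half-slice overlap
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    d1 = librosa.feature.delta(mfcc)            # first-order difference parameters
    d2 = librosa.feature.delta(mfcc, order=2)   # second-order difference parameters
    return np.vstack([mfcc, d1, d2]).T          # one row per slice, 25 rows/second
```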
In summary, the choice of video features is very important: it directly affects the speed of feature extraction and the accuracy and speed of video matching. If the computation of a feature is complex and time-consuming, feature extraction will cost a great deal of time. The feature extraction described above not only describes the video content well but also achieves high accuracy. At the same time, because the length of the feature vector and the method of computing feature distance affect the speed of video matching, the short feature vectors extracted in this embodiment allow a simple distance computation and therefore improve the speed of feature matching.
As shown in Fig. 1, the matching processing unit 50 in the above embodiments can comprise: a combination module 501 for combining the sampled video features to obtain the window video feature of the time window; and a query module 502 for querying the video feature library for the matching video content that matches the window video feature. Preferably, the query module 502 can comprise a comparison module for computing the distance between the window video feature and every window video feature in the video feature library; when the distance is less than or equal to a validation value, the window video feature is matched successfully, and the successfully matched video content and matching video content are saved to the video association relationship library.
For different video types, the matching processing unit 50 in the above embodiments can be a first matching processing unit or a second matching processing unit. The first matching processing unit is used, when the video content is a complete program video, to obtain one or more first program fragment videos corresponding to the complete program video from the program fragment feature library according to the video features of the complete program video, and to save the association relationship between the complete program video and each first program fragment video to the video association relationship library. That is, for a complete program video, every time window feature is matched against all window features in the program fragment feature library; many matching results may finally be found, and they are saved to the association relationship database.
The second matching processing unit is used, when the video content is a program fragment video, to match the video features in the first time window of the program fragment video against the complete program feature library to obtain the first complete program video corresponding to the program fragment video, and to save the association relationship between the program fragment video and the first complete program video to the video association relationship library. That is, for a program fragment video, the first time window feature is matched against all window features in the complete program feature library; one matching result may finally be found, and it is saved to the association relationship database.
In a specific implementation, each newly added video is first classified and its features are then extracted. If the video is a program fragment video, its features are matched in the complete program feature library; if it is a complete program video, its features are matched in the program fragment feature library. If a corresponding video is matched, a new association relationship is generated and stored in the association relationship library.
The matching processing unit 50 in the above embodiment realizes feature matching and the establishment of the association relationship library. The image features and audio features extracted by the extraction unit 30 are feature vectors composed of floating-point numbers: every dimension of a feature is a floating-point number, so an N-dimensional histogram feature is N floating-point numbers. Computing the distance between two N-dimensional feature vectors directly with the Euclidean distance requires N floating-point multiplications and a square root, a relatively large amount of computation. To speed up feature comparison, the distance can instead be taken as the sum of the per-dimension distances (referred to here as the chessboard distance), which needs only N + 1 additions and subtractions and greatly reduces the amount of computation.
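The following minimal sketch contrasts the two distance computations; note that the summed per-dimension distance as described is the L1 (city-block) metric.

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))   # N multiplications plus a square root

def summed_distance(a, b):
    return np.sum(np.abs(a - b))           # additions and subtractions only

def window_match(feat_a, feat_b, validation_value):
    """Two window features match when their distance is within the validation value."""
    return summed_distance(feat_a, feat_b) <= validation_value
```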
As described above, after uniformly sampling f features within an s-second time window, the matching processing unit 50 matches two videos, for example with program fragment video VideoA as the matching query video and complete program video VideoB as the matched video. A time window is taken at the start of VideoA, and the feature FeatureAt (t=0) of this window is distance-compared with all time window features FeatureBt (t=0..end) of VideoB. If the distance at some time point t0 is less than a threshold (this threshold being a validation value), FeatureAt (t=0) and FeatureBt (t=t0) are tentatively matched, and the match is further verified: M-1 time window features FeatureAt (t = Dt*m, m = 1, 2, ..., M-1) are chosen uniformly at intervals of Dt after the start of VideoA, and likewise M-1 time window features FeatureBt (t = t0 + Dt*m, m = 1, 2, ..., M-1) are chosen at the same intervals after time point t0 of VideoB. These time window features are matched pairwise, and only if all M-1 pairs match successfully are VideoA and VideoB considered to match at time point t0.
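A minimal sketch of this scan-and-verify matching follows. features_a and features_b are assumed to be lists of per-second window features of VideoA and VideoB, and the values of Dt and M are illustrative; the patent leaves them unspecified.

```python
import numpy as np

def summed_distance(a, b):
    return np.sum(np.abs(a - b))             # fast distance from the previous sketch

def match_videos(features_a, features_b, threshold, Dt=30, M=5):
    """Return the time point t0 where VideoA matches inside VideoB, or None."""
    feature_a0 = features_a[0]               # window at the start of VideoA
    for t0, feature_b in enumerate(features_b):  # scan every window of VideoB
        if summed_distance(feature_a0, feature_b) > threshold:
            continue
        # Tentative match at t0: verify with M-1 windows spaced Dt apart.
        verified = True
        for m in range(1, M):
            ta, tb = Dt * m, t0 + Dt * m
            if (ta >= len(features_a) or tb >= len(features_b) or
                    summed_distance(features_a[ta], features_b[tb]) > threshold):
                verified = False
                break
        if verified:
            return t0                        # all verification windows matched
    return None
```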
Preferably, for a database with a very large number of videos, feature matching takes a long time, and several methods can be used to reduce the matching time, such as building an index, time restriction, and column restriction. Building an index allows fast feature retrieval but makes the system more complex, and the update frequency of the index can also strongly affect the results. Time restriction means matching only videos within a time range, with videos whose production time is too far in the past automatically deleted from the feature library, which reduces the scope of feature matching. Column restriction means that, according to the column attribute tag of a video, each matching video is matched only against videos with the same column attribute, which likewise greatly reduces the scope of video matching.
If VideoA and VideoB match successfully at time point t0, an association relationship (VideoA, VideoB, t0) is generated, meaning that VideoA appears in VideoB at time point t0 with a successful video match, and this association relationship is stored in the association relationship library, which is the video association relationship library.
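A minimal sketch of storing this triple follows; the SQLite schema is an illustrative choice, not specified by the patent.

```python
import sqlite3

conn = sqlite3.connect("associations.db")
conn.execute("""CREATE TABLE IF NOT EXISTS association
                (fragment_id TEXT, program_id TEXT, t0 REAL)""")

def save_association(fragment_id, program_id, t0):
    """Record that fragment VideoA appears in program VideoB at time t0 (seconds)."""
    conn.execute("INSERT INTO association VALUES (?, ?, ?)",
                 (fragment_id, program_id, t0))
    conn.commit()
```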
The device in each of the above embodiments can also comprise: a feature library creation unit 70 for establishing the video feature library when the video content is produced, the video feature library comprising a complete program feature library and a program fragment feature library.
Before entering the video content extraction and matching process, this application builds the video feature library. Because videos are divided into two classes, complete program videos and program fragment videos, the video feature library can comprise a complete program feature library and a program fragment feature library. During video classification, the type can be distinguished according to an attribute of the video itself or divided by video length thresholds.
The device in the above embodiments can also comprise: a reading unit 601 for reading video content; a query processing unit for querying the video association relationship library for the matching video content corresponding to the video content; and a playback unit 602 for displaying the matching video corresponding to the video content when the matching video content is found successfully. Preferably, the playback unit 602 can comprise: a first playback unit for directly playing the first complete program video that was found when the video content is a program fragment video; or a second playback unit for directly playing the one or more first program fragment videos that were found when the video content is a complete program video, and marking the one or more first program fragment videos as labels on the progress bar of the complete program video. In this way the source of the current video fragment, i.e., the complete video containing it, can be found quickly, bringing the user great convenience and a novel experience.
The above embodiment works as follows: when a user selects a video to watch, the device automatically queries the association relationship library for associations of this video. If the query result shows that the video is a program fragment video with a corresponding complete program video, the complete program video is displayed to the user, letting the user know which complete program the current fragment comes from and choose to watch it. If the query result shows that the video is a complete program video with several corresponding program fragment videos, the information of these fragments is displayed on the complete program, letting the user know that the program can be divided into several highlight clips, each of which can be located and watched directly.
Methods already exist that improve the user experience by marking labels on a video, but those labels are all generated manually: an editor must choose the time points in advance and enter the label content, which costs labor and time and cannot be applied to large-scale video. In the above embodiment, after the device automatically generates the video association relationships, it also automatically generates the labels of the highlight videos, i.e., a label for each program fragment video, and inserts the labels into the complete program video. Each video label comprises the time point and the content of the label, achieving accurate time points and wide applicability to large-scale video.
Fig. 3 is a flowchart of a method for processing the matching relationship of video content according to an embodiment of the invention. As shown in Fig. 3, the method comprises the following steps:
Step S10: acquire video content through the video type processing unit 10 in Fig. 1, and determine the video type of the video content according to parameters of the video content.
Step S30: extract the video features of the video content according to the video type through the extraction unit 30 in Fig. 1.
Step S50: query the video feature library for the matching video content corresponding to the video content according to the video features through the matching processing unit 50 in Fig. 1, and generate the video association relationship library. When matching video content is successfully found, the association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library; that is, the ID of the successfully matched video content and its corresponding matching video content can be saved to the video association relationship library, together with the association relationship between them.
As in the device embodiment above, the matching processing unit 50 establishes the content repetition associations automatically with a feature-based matching algorithm, completely replacing manual work: it analyzes the matching videos of all video content with high search efficiency and accurate time points, solves the problem that manual matching makes the search work heavy and inefficient, and saves labor cost. On top of these associations, content-based matching can further search for the source of a video fragment and generate highlight clips automatically.
Fig. 4 is a flowchart of a video feature extraction method for video content according to an embodiment of the invention. As shown in Fig. 4, in step S10 of the above embodiment the video type comprises complete program video and program fragment video, and acquiring the video content and determining its video type according to its parameters can comprise the following steps:
Step S101: acquire the video content, i.e., input a video to the device.
Step S102: determine the video type by checking an attribute flag of the video content, or determine the video type according to the video length of the video content; when the video length is greater than or equal to a first threshold, the video content is a complete program video; when the video length is less than or equal to a second threshold, it is a program fragment video; when the video length lies between the two thresholds, it is both a complete program video and a program fragment video; and the first threshold is greater than the second threshold. This step distinguishes the type of the received video content according to an attribute of the video itself, or divides the type by video length thresholds.
Preferably, before step S10 acquires the video content, the method can also comprise: producing the video content, and establishing the video feature library by extracting video features, the video feature library comprising a complete program feature library and a program fragment feature library.
Fig. 5 is a flowchart of a method of creating the video association relationship library according to an embodiment of the invention. As shown in Fig. 5, extracting the video features of the video content in step S30 above and querying the video feature library for the matching video content according to the video features in step S50 can comprise the following steps:
Step S301: extract a time window of predetermined length from the video content, uniformly sample a predetermined number of video features within the time window, and combine the sampled video features to obtain the window video feature of the time window. This feature extraction can be applied both when creating the video feature library and when creating the video association relationship library.
Step S501: query the video feature library for the matching video content that matches the window video feature.
Preferably, querying the video feature library for the matching video content that matches the window video feature can comprise: computing the distance between the window video feature and every window video feature in the video feature library, wherein, when the distance is less than or equal to a validation value, the window video feature is matched successfully.
Specifically, step S301 can perform feature matching over an s-second time window, uniformly sampling f features within the window and comparing them, the comparison result serving as the matching result of the video in the window, as described above for the extraction unit 30. Compared with existing techniques that match two videos on a single frame image or a single time slice, this reduces the error of the matching result and gives a better matching effect.
Meanwhile, in the above video feature sampling process, because the video content types in this application comprise complete program videos and program fragment videos, the program fragment features in step S301 are drawn from the program fragment feature library and the complete program features from the complete program feature library. The video features comprise image features and audio features. Extracting the image features of the video content comprises: dividing an image of the video content into blocks, extracting the image feature of each block, and combining the image features corresponding to the blocks to obtain the image feature. Extracting the audio features comprises: dividing the video content into uniform time slices of the predetermined time slice length, with adjacent time slices overlapping, and extracting the audio feature of each time slice interval.
As described above for the first extraction unit, global image features rather than local features are extracted; color histogram features can be chosen because they both describe the overall condition of an image and can be computed quickly, and the YUV color space is preferred over RGB for better agreement with the visual characteristics of the human eye. Likewise, because the histogram of a whole image carries no positional information, the image is divided into blocks, a histogram feature is extracted from each block, and the features are combined into the overall feature of the image.
Specifically, the two-video matching realized in step S501 proceeds as described above for the matching processing unit 50. Taking program fragment video VideoA as the matching query video and complete program video VideoB as the matched video, a time window is taken at the start of VideoA and its feature FeatureAt (t=0) is distance-compared with all time window features FeatureBt (t=0..end) of VideoB; if the distance at some time point t0 is below the threshold, FeatureAt (t=0) and FeatureBt (t=t0) are tentatively matched and then verified: M-1 time window features spaced Dt apart are chosen after the start of VideoA and after time point t0 of VideoB and matched pairwise, and only if all M-1 pairs match is the match of VideoA and VideoB at t0 accepted.
For a database with a very large number of videos, the matching time can likewise be reduced by building an index, time restriction, and column restriction, as described above.
If VideoA and VideoB match successfully at time point t0, an association relationship (VideoA, VideoB, t0) is generated, meaning that VideoA appears in VideoB at t0, and the association relationship is stored in the association relationship library.
As shown in Fig. 5, querying the video feature library for the matching video content according to the video features and generating the video association relationship library can be realized by step S502 or step S503:
Step S502: when the video content is a complete program video, the first program fragment videos corresponding to the complete program video can be obtained in the program fragment feature library according to the video features of the complete program video, and the ID of the complete program video, the first program fragment videos, and the association relationships between them are saved to the video association relationship library. In this step, the video features in each time window of the complete program video can be matched in the program fragment feature library to obtain one or more first program fragment videos; several matching results may finally be found for the complete program video.
Step S503: when the video content is a program fragment video, the first complete program video corresponding to the program fragment video is obtained in the complete program feature library according to the video features of the program fragment video, and the program fragment video, the first complete program video, and the association relationship between them are saved to the video association relationship library. In this step, the video features in the first time window of the program fragment video can be matched in the complete program feature library to obtain the first complete program video; only one matching result may finally be found for the program fragment video.
In the above embodiment, after step S50 queries the video feature library for the matching video content according to the video features and generates the video association relationship library, the method can also comprise: reading video content; querying the video association relationship library for the matching video content corresponding to the video content; and, when the matching video content is found successfully, displaying the matching video corresponding to the video content. Preferably, displaying the matching video can comprise: when the video content is a program fragment video, directly playing the first complete program video that was found; or, when the video content is a complete program video, directly playing the one or more first program fragment videos that were found and marking them as labels on the progress bar of the complete program video.
Fig. 6 is a flow chart of playing the queried video content according to an embodiment of the invention. As shown in Fig. 6, the above embodiment of the present application implements video fragment source search and automatic highlight generation. When the user opens a video, the device looks up its association relationships in the association relationship library by the video's unique ID. If the corresponding complete program video is found, the device displays the complete program; if corresponding program fragment videos are found, all associated fragments are displayed in the form of highlight labels; if nothing is found, nothing extra is displayed and the current video is played normally.
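The branching in Fig. 6 reduces to a lookup keyed by the video's unique ID in the association relationship library. A minimal sketch reusing the association triples above; the print calls are placeholders for whatever the player interface actually displays.

```python
def on_video_opened(video, relation_library):
    """Decide what to display when the user opens a video (cf. Fig. 6)."""
    if video["type"] == "fragment":
        # a fragment has at most one source program
        for frag_id, prog_id, offset in relation_library:
            if frag_id == video["id"]:
                print(f"Source program {prog_id}; fragment starts at {offset:.0f}s")
                return
    else:
        hits = [r for r in relation_library if r[1] == video["id"]]
        if hits:
            for frag_id, _, offset in sorted(hits, key=lambda r: r[2]):
                print(f"Highlight label at {offset:.0f}s -> fragment {frag_id}")
            return
    # nothing found: no extra display, play the current video normally
    print("No associations; normal playback.")
```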
If the current video is a program fragment and an association relationship is found in the association relationship library, the complete program video in that association relationship is the source video of the current fragment. The device can then show this complete video to the user, indicate that the current fragment comes from this complete video, and provide a link entry through which the user can choose to watch the complete video.
If the current video is a complete program and association relationships are found in the association relationship library, several association relationships will generally be found, each corresponding to one program fragment video; these fragments all come from the current complete program and appear at different time points within it. When showing these fragments to the user, the form of highlight labels can be used, i.e. several labels are marked on the progress bar of the current complete program video, each label corresponding to one program fragment video and indicating the start of a highlight. A prompt can be added to each label, containing information such as the video name of the highlight. Each label can provide an operation entry that lets the user jump directly to the label's position and watch the highlight.
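Mapping each associated fragment onto the progress bar is then a matter of normalizing its start offset by the program length. A short sketch; positioning labels as a fraction of the bar width is one possible interface choice, not something the patent prescribes.

```python
def highlight_labels(program_id, program_duration, relation_library):
    """Build progress-bar labels for all fragments of one complete program."""
    labels = [
        {
            "fragment_id": frag_id,
            "offset_seconds": offset,
            # horizontal position on the progress bar, 0.0 (start) .. 1.0 (end)
            "bar_position": offset / program_duration,
        }
        for frag_id, prog_id, offset in relation_library
        if prog_id == program_id
    ]
    # sort by time so labels appear left to right along the bar
    return sorted(labels, key=lambda l: l["offset_seconds"])
```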
In summary, the application uses a content-based video matching and search algorithm to quickly and accurately establish content-repetition association relationships between massive numbers of videos by computer, and designs two novel applications on top of these relationships: video fragment source search and automatic highlight generation. Both applications bring users a new and convenient experience.
Fragment source search (as shown in Fig. 6): when the user plays a program fragment video, the device automatically searches for the source of the current video, i.e. the complete program that contains the current fragment, shows the search result to the user, and prompts the user that the complete program can be played directly.
Automatic highlight generation (as shown in Fig. 6): when the user plays a complete program video, the device automatically searches for all program fragments that have a content-repetition association relationship with the current video and, after sorting and filtering, marks the fragments in the form of labels on the progress bar of the complete program. Each fragment generates one label representing a highlight, and the user is prompted that clicking a label jumps directly to that position to watch the fragment. In the figure, the blue dots below the screen mark the starting points of highlights; this is one way of presenting highlights.
It should be noted that the steps shown in the flow charts of the drawings may be executed in a computer device, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flow charts, in some cases the steps shown or described may be executed in an order different from the one given here.
As can be seen from the above description, the present invention achieves the following technical effects: it is fully automatic, completely replacing manual operation and saving considerable labor cost; it is fast, the time needed for video matching being small enough to allow massive-scale video processing; it is accurate, being able to precisely locate the time point of a program fragment video within its corresponding complete program video; and it offers a novel user experience, presenting the association relationships between videos in a brand-new way so that users can readily enjoy the convenience this patent brings.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they can be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; alternatively, they can each be made into individual integrated circuit modules, or several of the modules or steps can be made into a single integrated circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.
The above are only preferred embodiments of the present invention and do not limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (15)

1. A method for processing a matching relationship of video content, characterized by comprising:
acquiring video content, and determining a video type of the video content according to a parameter of the video content;
extracting video features of the video content according to the video type; and
querying a video feature library for matching video content corresponding to the video content according to the video features, and generating a video association relationship library, wherein,
when the matching video content is successfully queried, an association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library;
wherein the video type comprises a complete program video and a program fragment video, and acquiring the video content and determining the video type of the video content according to the parameter of the video content comprises: acquiring the video content; and determining the video type of the video content by verifying an attribute flag of the video content, or determining the video type of the video content according to a video length of the video content; wherein, when the video length of the video content is greater than or equal to a first threshold, the video content is the complete program video; when the video length of the video content is less than or equal to a second threshold, the video content is the program fragment video; and the first threshold is greater than the second threshold;
wherein querying the video feature library for the matching video content corresponding to the video content according to the video features and generating the video association relationship library comprises: when the video content is the complete program video, matching the video features in each time window of the complete program video in a program fragment feature library to obtain one or more first program fragment videos corresponding to the complete program video, and saving the association relationships between the complete program video and each of the first program fragment videos to the video association relationship library; or, when the video content is the program fragment video, matching the video features in the first time window of the program fragment video in a complete program feature library to obtain a first complete program video corresponding to the program fragment video, and saving the association relationship between the program fragment video and the first complete program video to the video association relationship library.
2. The method according to claim 1, characterized in that extracting the video features of the video content and querying the video feature library for the matching video content corresponding to the video content according to the video features comprises:
extracting one or more time windows of a predetermined length from the video content;
uniformly sampling a predetermined number of video features within each time window;
combining the sampled predetermined number of video features to obtain a window video feature for the time window; and
querying the video feature library for the matching video content that matches the window video feature.
3. The method according to claim 2, characterized in that querying the video feature library for the matching video content that matches the window video feature comprises:
performing a distance comparison between the window video feature and all window video features in the video feature library, wherein, when the distance is less than or equal to a validation value, the window video feature is matched successfully.
4. The method according to claim 2, characterized in that the video features comprise image features and audio features, wherein
the step of extracting the image features of the video content comprises: dividing an image of the video content into blocks; extracting an image feature of each image block; and combining the image features corresponding to the image blocks to obtain the image features; and
the step of extracting the audio features of the video content comprises: dividing the video content into uniform time slices according to a predetermined time slice length, adjacent time slices overlapping each other; and extracting the audio features within each time slice interval.
5. The method according to any one of claims 1-4, characterized in that, before acquiring the video content, the method further comprises:
producing the video content to obtain a video file, and establishing the video feature library, the video feature library comprising a complete program feature library and a program fragment feature library.
6. The method according to claim 5, characterized in that, after querying the video feature library for the matching video content corresponding to the video content according to the video features and generating the video association relationship library, the method further comprises:
reading the video content;
querying the video association relationship library for the matching video content corresponding to the video content; and
when the matching video content is found, playing the matching video corresponding to the video content.
7. The method according to claim 6, characterized in that, when the matching video content is found, playing the matching video corresponding to the video content comprises:
when the video content is the program fragment video, directly playing the found first complete program video; or,
when the video content is the complete program video, directly playing the found one or more first program fragment videos, and marking the one or more first program fragment videos in the form of labels on the progress bar of the complete program video.
8. A device for processing a matching relationship of video content, characterized by comprising:
a video type processing unit, configured to determine a video type of acquired video content according to a parameter of the video content;
an extraction unit, configured to extract video features of the video content according to the video type; and
a matching processing unit, configured to query a video feature library for matching video content corresponding to the video content according to the video features and to generate a video association relationship library, wherein,
when the matching video content is successfully queried, an association relationship between the successfully matched video content and the matching video content is saved to the video association relationship library;
wherein the video type processing unit comprises: a receiving module, configured to acquire the video content; and a verification module, configured to determine the video type of the video content by verifying an attribute flag of the video content, or to determine the video type of the video content according to a video length of the video content, the video type comprising a complete program video and a program fragment video; wherein, when the video length of the video content is greater than or equal to a first threshold, the video content is the complete program video; when the video length of the video content is less than or equal to a second threshold, the video content is the program fragment video; and the first threshold is greater than the second threshold;
wherein the matching processing unit comprises a first matching processing unit or a second matching processing unit, wherein the first matching processing unit is configured to, when the video content is the complete program video, match the video features in each time window of the complete program video in a program fragment feature library to obtain one or more first program fragment videos corresponding to the complete program video, and to save the association relationships between the complete program video and each of the first program fragment videos to the video association relationship library; and the second matching processing unit is configured to, when the video content is the program fragment video, match the video features in the first time window of the program fragment video in a complete program feature library to obtain a first complete program video corresponding to the program fragment video, and to save the association relationship between the program fragment video and the first complete program video to the video association relationship library.
9. The device according to claim 8, characterized in that the extraction unit comprises:
an acquisition module, configured to extract one or more time windows of a predetermined length from the video content; and
a sampling module, configured to uniformly sample a predetermined number of video features within each time window.
10. The device according to claim 9, characterized in that the matching processing unit comprises:
a combination module, configured to combine the sampled predetermined number of video features to obtain a window video feature for the time window; and
a query module, configured to query the video feature library for the matching video content that matches the window video feature.
11. The device according to claim 10, characterized in that the query module comprises:
a comparison module, configured to perform a distance comparison between the window video feature and all window video features in the video feature library, wherein, when the distance is less than or equal to a validation value, the window video feature is matched successfully.
12. The device according to claim 8, characterized in that the extraction unit is a first extraction unit or a second extraction unit, wherein
the first extraction unit is configured to extract the image features of the video content, and the first extraction unit comprises:
a first division module, configured to divide an image of the video content into blocks;
a first acquisition module, configured to extract the image feature of each image block; and
a second combination module, configured to combine the image features corresponding to the image blocks to obtain the image features; and
the second extraction unit is configured to extract the audio features of the video content, and the second extraction unit comprises:
a second division module, configured to divide the video content into uniform time slices according to a predetermined time slice length, adjacent time slices overlapping each other; and
a second acquisition module, configured to extract the audio features within each time slice interval.
13. The device according to any one of claims 8-12, characterized in that the device further comprises:
a feature library creation unit, configured to establish the video feature library when the video content is produced, the video feature library comprising a complete program feature library and a program fragment feature library.
14. The device according to claim 8, characterized in that the device further comprises:
a reading unit, configured to read the video content;
a query processing unit, configured to query the video association relationship library for the matching video content corresponding to the video content; and
a playing unit, configured to play the matching video corresponding to the video content when the matching video content is found.
15. The device according to claim 14, characterized in that the playing unit comprises:
a first playing unit, configured to directly play the found first complete program video when the video content is the program fragment video; or,
a second playing unit, configured to, when the video content is the complete program video, directly play the found one or more first program fragment videos, and to mark the one or more first program fragment videos in the form of labels on the progress bar of the complete program video.
CN 201110169978 2011-06-22 2011-06-22 Method and device for processing matching relationship of video content Active CN102222103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110169978 CN102222103B (en) 2011-06-22 2011-06-22 Method and device for processing matching relationship of video content

Publications (2)

Publication Number Publication Date
CN102222103A CN102222103A (en) 2011-10-19
CN102222103B true CN102222103B (en) 2013-03-27

Family

ID=44778655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110169978 Active CN102222103B (en) 2011-06-22 2011-06-22 Method and device for processing matching relationship of video content

Country Status (1)

Country Link
CN (1) CN102222103B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1461142A (en) * 2003-06-30 2003-12-10 北京大学计算机科学技术研究所 Video segment searching method based on contents
CN101064846A (en) * 2007-05-24 2007-10-31 上海交通大学 Time-shifted television video matching method combining program content metadata and content analysis
CN101159834A (en) * 2007-10-25 2008-04-09 中国科学院计算技术研究所 Method and system for detecting repeatable video and audio program fragment

Also Published As

Publication number Publication date
CN102222103A (en) 2011-10-19

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20111019
Assignee: CCTV INTERNATIONAL NETWORKS WUXI CO., LTD.
Assignor: CCTV International Networks Co., Ltd.
Contract record no.: 2014990000103
Denomination of invention: Method and device for processing matching relationship of video content
Granted publication date: 20130327
License type: Exclusive License
Record date: 20140303

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model