CN109101558A - Video retrieval method and device - Google Patents

Video retrieval method and device

Info

Publication number
CN109101558A
CN109101558A
Authority
CN
China
Prior art keywords
video
keyword
video clip
clip
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810766347.5A
Other languages
Chinese (zh)
Other versions
CN109101558B (en)
Inventor
张蒙
徐荣阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Maoyan Cultural Media Co Ltd
Original Assignee
Beijing Maoyan Cultural Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Maoyan Cultural Media Co Ltd filed Critical Beijing Maoyan Cultural Media Co Ltd
Priority to CN201810766347.5A priority Critical patent/CN109101558B/en
Publication of CN109101558A publication Critical patent/CN109101558A/en
Application granted granted Critical
Publication of CN109101558B publication Critical patent/CN109101558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a video retrieval method and device. The method includes: receiving retrieval text input by a user; matching the retrieval text against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; and, when a keyword matching the retrieval text is found, obtaining and displaying at least one video clip corresponding to the keyword. By matching the retrieval text input by the user against the video database to obtain the video clips corresponding to a keyword, the present invention removes the need to collect and screen video material manually, reducing the waste of human resources and saving cost.

Description

Video retrieval method and device
Technical field
The present invention relates to the technical field of video search, and in particular to a video retrieval method and device.
Background art
With the evolution of how people consume entertainment content, there has long been substantial demand for video clipping. Video clipping is the process of cutting video clips out of a video and splicing the resulting clips together to obtain the video the user wants. Film and television editing is often re-creative work, and the material used in production is usually collected and screened manually, which requires a great deal of human experience, wastes human resources and in turn causes economic loss.
Summary of the invention
The present invention provides a video retrieval method and device, to solve the prior-art problem that collecting and screening video material manually consumes human resources and causes economic loss.
To solve the above problem, the invention discloses a video retrieval method, comprising: receiving retrieval text input by a user; matching the retrieval text against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; and, when a keyword matching the retrieval text is found, obtaining and displaying at least one video clip corresponding to the keyword.
Preferably, before the step of receiving the retrieval text input by the user, the method further includes: segmenting a target video to obtain the video clips resulting from the segmentation; for each video clip, extracting the video subtitle text of the video clip; performing word segmentation on the video subtitle text to obtain at least one keyword corresponding to the video clip; and establishing and saving the association relationship between the video clip and the at least one corresponding keyword.
Preferably, the step of segmenting the target video to obtain the video clips resulting from the segmentation includes: dividing each video frame of the target video into blocks, splitting each frame into several video blocks; comparing the corresponding video blocks of adjacent frames of the target video to obtain a change value for each pair of corresponding video blocks; removing the corresponding video block with the largest change value, summing the squared differences of the remaining corresponding video blocks and normalizing the result to obtain a setting value; when the setting value is greater than a difference threshold, taking the later of the two adjacent frames as a video segmentation point; and segmenting the target video at the video segmentation points to obtain the video clips resulting from the segmentation.
Preferably, the step of extracting, for each video clip, the video subtitle text of the video clip includes: obtaining a subtitle file corresponding to each video clip and extracting the video subtitle text of each video clip from the subtitle file; or selecting a caption region in each video clip and, for the caption region of each video clip, performing text recognition on the caption region to obtain the video subtitle text corresponding to the video clip.
Preferably, the step of obtaining and displaying the at least one video clip corresponding to the keyword includes: obtaining the weight of the keyword in the at least one video clip; and ranking and displaying the at least one video clip by weight.
To solve the above technical problem, the invention further discloses a video retrieval device, comprising: a receiving module configured to receive retrieval text input by a user; a matching module configured to match the retrieval text against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; and an obtaining and display module configured to, when a keyword matching the retrieval text is found, obtain and display at least one video clip corresponding to the keyword.
Preferably, the device further includes: a segmentation module configured to segment a target video to obtain the video clips resulting from the segmentation; an extraction module configured to extract, for each video clip, the video subtitle text of the video clip; a keyword obtaining module configured to perform word segmentation on the video subtitle text to obtain at least one keyword corresponding to the video clip; and an association establishing module configured to establish and save the association relationship between the video clip and the at least one corresponding keyword.
Preferably, the segmentation module includes: a blocking submodule configured to divide each video frame of the target video into blocks, splitting each frame into several video blocks; a change-value obtaining submodule configured to compare the corresponding video blocks of adjacent frames of the target video to obtain a change value for each pair of corresponding video blocks; a setting-value obtaining submodule configured to remove the corresponding video block with the largest change value, sum the squared differences of the remaining corresponding video blocks and normalize the result to obtain a setting value; a segmentation-point determining submodule configured to, when the setting value is greater than a difference threshold, take the later of the two adjacent frames as a video segmentation point; and a video-clip obtaining submodule configured to segment the target video at the video segmentation points to obtain the video clips resulting from the segmentation.
Preferably, the extraction module includes: a first video subtitle obtaining submodule configured to obtain a subtitle file corresponding to each video clip and extract the video subtitle text of each video clip from the subtitle file; or a caption-region selecting submodule configured to select a caption region in each video clip, and a second video subtitle obtaining submodule configured to perform, for the caption region of each video clip, text recognition on the caption region to obtain the video subtitle text corresponding to the video clip.
Preferably, the obtaining and display module includes: a weight obtaining submodule configured to obtain the weight of the keyword in the at least one video clip; and a ranking and display submodule configured to rank and display the at least one video clip by weight.
Compared with the prior art, the present invention has the following advantages:
The embodiments of the present invention provide a video retrieval method and device. Retrieval text input by a user is received and matched against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; when a keyword matching the retrieval text is found, at least one video clip corresponding to the keyword is obtained and displayed. By matching the retrieval text input by the user against the video database to obtain the video clips corresponding to a keyword, the present invention removes the need to collect and screen video material manually, reducing the waste of human resources and saving cost.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of a video retrieval method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a video retrieval device provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the above objects, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Embodiment one
Referring to Fig. 1, there is shown a flow chart of the steps of a video retrieval method provided by an embodiment of the present invention. The method may specifically include the following steps:
Step 101: Receive retrieval text input by a user.
The embodiments of the present invention can be applied to video clip retrieval scenarios.
The retrieval text input by the user may be a single word, such as "King Kong" or "science fiction", or a passage of text, such as "a short video about science fiction"; the embodiments of the present invention are not limited in this respect.
In a preferred embodiment of the present invention, before step 101, the method may further include:
Step S1: Segment the target video to obtain the video clips resulting from the segmentation.
In the embodiments of the present invention, the target video is the video that needs to be segmented in order to obtain its corresponding video clips.
The video clips corresponding to the target video can be obtained by segmenting the target video; a specific implementation may follow the steps below:
Step S1-1: Divide each video frame of the target video into blocks, splitting each frame into several video blocks.
In the embodiments of the present invention, each video frame of the target video can be divided into blocks, splitting each frame into several video blocks; for example, the first frame may be divided into a 3*3 grid of video blocks.
In practical applications, those skilled in the art can set the number of blocks per frame according to actual needs; the embodiments of the present invention are not limited in this respect.
After each frame of the target video has been divided into several video blocks, step S1-2 is performed.
Step S1-2: Compare the corresponding video blocks of adjacent frames of the target video to obtain a change value for each pair of corresponding video blocks.
After each frame of the target video has been divided into blocks, the corresponding blocks of adjacent frames can be compared to obtain a change value for each pair of corresponding blocks. For example, after the first frame and the second frame have each been divided into a 3*3 grid, the first block of the first frame (the top-left block) is compared with the first block of the second frame, the second block of the first frame (the next block in the same row) is compared with the second block of the second frame, and so on.
In the embodiments of the present invention, the change values of the corresponding video blocks can be defined in advance. For example, for a pair of corresponding blocks of two adjacent frames, the change value may be set to 0 if the scenery is exactly the same and to 1 if it is entirely different; for a partial change, the change value may be set according to the size of the differing part.
In practical applications, those skilled in the art can define the change values of the video blocks according to actual needs; the embodiments of the present invention are not limited in this respect.
After the change values of the corresponding video blocks of adjacent frames of the target video have been obtained, step S1-3 is performed.
Step S1-3: Remove the corresponding video block with the largest change value, sum the squared differences of the remaining corresponding video blocks and normalize the result to obtain a setting value.
After the change values of all pairs of corresponding blocks of two adjacent frames of the target video have been obtained, the block pair with the largest change value can be removed, and the squared differences of the remaining block pairs summed and normalized to obtain a setting value.
By removing the largest change value, the embodiments of the present invention avoid the influence that scenery appearing suddenly in a single block would otherwise have on the subsequent analysis.
Step S1-4: When the setting value is greater than the difference threshold, take the later of the two adjacent frames as a video segmentation point.
In the embodiments of the present invention, a difference threshold, that is, a threshold for the block change values of adjacent frames, can be set and the setting value compared with it. When the setting value is less than the difference threshold, the change between the two adjacent frames is small and is disregarded, and the next pair of adjacent frames is compared.
When the setting value is greater than the difference threshold, the two adjacent frames differ significantly, and the later of the two frames is taken as a video segmentation point, completing the segmentation analysis at that position of the target video.
After the video segmentation points have been determined, step S1-5 is performed.
Step S1-5: Segment the target video at the video segmentation points to obtain the video clips resulting from the segmentation.
Once the video segmentation points have been obtained, the target video can be split at those points to obtain the video clips resulting from the segmentation.
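As an illustration only, a minimal sketch of the block-based segmentation described in steps S1-1 to S1-5 might look as follows. It assumes OpenCV is used for frame access; the grid size, the normalization by block count and the threshold value are arbitrary choices made for the example, not values prescribed by this embodiment.

```python
import cv2
import numpy as np

def find_split_points(video_path, grid=3, diff_threshold=0.1):
    """Return frame indices where a new video clip should start.

    Each pair of adjacent frames is divided into a grid x grid set of
    blocks, the mean absolute difference of each corresponding block
    pair is taken as its change value, the pair with the largest change
    value is discarded, and the sum of squared differences of the
    remaining pairs is normalized to [0, 1] to give the setting value
    compared against the threshold.
    """
    cap = cv2.VideoCapture(video_path)
    splits = []
    ok, prev = cap.read()
    idx = 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        h, w = prev.shape[:2]
        bh, bw = h // grid, w // grid
        changes = []
        for r in range(grid):
            for c in range(grid):
                a = prev[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                b = frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                # change value of one corresponding block pair, scaled to [0, 1]
                changes.append(np.mean(cv2.absdiff(a, b)) / 255.0)
        changes.remove(max(changes))                         # drop the largest change value
        setting = sum(v * v for v in changes) / len(changes)  # normalized sum of squares
        if setting > diff_threshold:
            splits.append(idx)                                # the later frame becomes a split point
        prev = frame
    cap.release()
    return splits
```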
Of course, the above is only one video segmentation scheme provided by the embodiments of the present invention; in practical applications, those skilled in the art can also segment the target video in other ways, and the embodiments of the present invention are not limited in this respect.
After the target video has been segmented into video clips, step S2 is performed.
Step S2: For each video clip, extract the video subtitle text of the video clip.
After the video clips of the target video have been obtained, the video subtitle text can be extracted for each video clip.
Specifically, the video subtitle text can be extracted in the following two ways:
1. For video clips that have a subtitle file, the video subtitle text can be extracted directly from the subtitle file corresponding to each clip.
2. For videos in which the subtitles are embedded in the video frames, the caption region can be selected in each video clip, for example the lower half of the clip's frames, with a rectangular region of symmetrically arranged white pixels taken as the subtitle region; text recognition is then performed on the caption region of each clip to obtain the video subtitle text corresponding to that clip (see the sketch below).
In practical applications, those skilled in the art can also obtain the video subtitle text of each video clip in other ways; the embodiments of the present invention are not limited in this respect.
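Purely as an illustration of the second approach, a sketch of caption-region text recognition might look like the following. The fixed lower-half caption region, the frame sampling interval and the use of the pytesseract OCR wrapper are assumptions made for the example, not part of this embodiment.

```python
import cv2
import pytesseract  # assumed OCR backend; any text-recognition engine could be used

def extract_caption_text(clip_frames, sample_every=25):
    """Run text recognition on the caption region of sampled frames of a clip.

    The caption region is assumed here to be the lower half of each frame;
    in practice it would be located, e.g. by searching for a rectangular
    band of light subtitle pixels as described above.
    """
    lines = []
    for i, frame in enumerate(clip_frames):
        if i % sample_every:
            continue
        h = frame.shape[0]
        caption_region = frame[h // 2:, :]          # assumed caption region
        gray = cv2.cvtColor(caption_region, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray, lang="chi_sim").strip()
        if text and text not in lines:              # keep only new subtitle lines
            lines.append(text)
    return "\n".join(lines)
```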
After the video subtitle text of each video clip has been extracted, step S3 is performed.
Step S3: Perform word segmentation on the video subtitle text to obtain at least one keyword corresponding to the video clip.
After the video subtitle text of each video clip has been extracted, the text can be segmented into words to extract the keywords of each clip. For example, if the subtitle text of video clip A is "The scenery of the West Lake is very beautiful", the extracted keywords may be "West Lake" and "scenery", and these two words are used as the keywords of video clip A.
A video clip may correspond to one keyword or to multiple keywords; the embodiments of the present invention are not limited in this respect.
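As an illustrative sketch only, the word segmentation of the subtitle text could be done with a general-purpose Chinese segmenter such as jieba; the stop-word filtering shown here is an assumption added for the example, not something this embodiment requires.

```python
import jieba

STOP_WORDS = {"的", "了", "很", "是"}  # assumed minimal stop-word list

def extract_keywords(subtitle_text):
    """Segment the subtitle text and keep content words as clip keywords."""
    words = jieba.lcut(subtitle_text)
    return [w for w in words if w.strip() and w not in STOP_WORDS]

# e.g. extract_keywords("西湖的风景很美") might yield ["西湖", "风景", "美"]
```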
After at least one keyword has been obtained for each video clip, step S4 is performed.
Step S4: Establish and save the association relationship between the video clip and the at least one corresponding keyword.
After at least one keyword has been obtained for each video clip, the association relationships between the keywords and the video clips can be established and saved in the video database. For example, if keyword A is associated with video clip A, and keyword A and keyword B are associated with video clip B, then once these associations have been obtained, keyword A is associated with and saved for video clip A, and keyword A and keyword B are associated with and saved for video clip B.
It should be understood that the above example is given merely to aid the understanding of the technical solution of the embodiments of the present invention and is not intended as the sole limitation of the present invention.
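A minimal sketch of how such keyword-to-clip associations might be persisted is given below, using a plain SQLite table as an assumed stand-in for the video database; the table name, schema and equal-share weighting are assumptions for the example only.

```python
import sqlite3

def save_associations(db_path, clip_id, keywords):
    """Insert one (keyword, clip) row per keyword of the clip."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS keyword_clip ("
        "keyword TEXT, clip_id TEXT, weight REAL)"
    )
    weight = 1.0 / len(keywords) if keywords else 1.0  # assumed equal-share weighting
    conn.executemany(
        "INSERT INTO keyword_clip (keyword, clip_id, weight) VALUES (?, ?, ?)",
        [(kw, clip_id, weight) for kw in keywords],
    )
    conn.commit()
    conn.close()
```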
After the retrieval text input by the user has been received, step 102 is performed.
Step 102: Match the retrieval text against the keywords pre-stored in the video database.
In the embodiments of the present invention, a video database can be built in advance, storing each video clip together with the association relationships between the video clips and the keywords. An association can be established between one video clip and one or more keywords, or between one keyword and one or more video clips; the embodiments of the present invention are not limited in this respect.
Of course, the video database may be stored on the terminal side or on the server side.
When the video database is stored on the terminal side and retrieval text input by the user, such as "King Kong", is received, it can be matched against the keywords stored in the video database; if the user inputs a passage of text such as "a short video about science fiction", the retrieval text can be parsed to extract a keyword, such as "science fiction", which is then matched directly against the local video database.
When the video database is stored on the server side and the retrieval text input by the user is received, the retrieval text can be sent to the server for matching; the specific matching process is similar to that on the terminal side and is not repeated here.
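For illustration, matching the retrieval text against the stored keywords might be sketched as follows, continuing the assumed SQLite table above; the substring-containment test and the reuse of the segmenter from the indexing step are assumptions, not the only possible matching rule.

```python
import sqlite3
import jieba

def match_keywords(db_path, retrieval_text):
    """Return the stored keywords that the retrieval text matches."""
    query_terms = set(jieba.lcut(retrieval_text)) | {retrieval_text}
    conn = sqlite3.connect(db_path)
    stored = {row[0] for row in conn.execute("SELECT DISTINCT keyword FROM keyword_clip")}
    conn.close()
    # a keyword matches if it equals a query term or appears inside the retrieval text
    return [kw for kw in stored if kw in query_terms or kw in retrieval_text]
```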
After the retrieval text has been matched against the keywords pre-stored in the video database, step 103 is performed.
Step 103: When a keyword matching the retrieval text is found, obtain and display at least one video clip corresponding to the keyword.
After a keyword corresponding to the retrieval text input by the user has been matched in the video database, at least one video clip corresponding to that keyword can be obtained from the video database. For example, suppose the retrieval text A input by the user corresponds to keywords b, c and d in the video database, keyword b is associated with video clip 1 and video clip 2, keyword c is associated with video clip 1 and video clip 3, and keyword d is associated with video clip 1 and video clip 3; then the video clips obtained for retrieval text A are video clip 1, video clip 2 and video clip 3.
It should be understood that the above example is given merely to aid the understanding of the technical solution of the embodiments of the present invention and is not intended as the sole limitation of the present invention.
In a preferred embodiment of the present invention, after step 103, the method may further include:
Step N1: Obtain the weight of the keyword in the at least one video clip;
Step N2: Rank and display the at least one video clip by weight.
In the embodiments of the present invention, the weights of different keywords in a video clip can be set in advance. For example, video clip A is associated with keyword 1 and keyword 2, where keyword 1 has a weight of 0.6 in video clip A and keyword 2 has a weight of 0.4. When a video clip is associated with only one keyword, the weight of that keyword in the clip can be set to 1.
After multiple video clips have been obtained, each clip may be associated with one or more keywords. Once the video clips matching the keyword of the user's retrieval text have been found, they can be ranked and displayed according to the weight of the corresponding keyword in each clip.
Of course, in practical applications, those skilled in the art can also rank and display the retrieved video clips in other ways; the embodiments of the present invention are not limited in this respect.
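Continuing the same illustrative sketch, ranking the retrieved clips by the weight of the matched keyword could look like this; taking the maximum weight when several matched keywords point at the same clip is an assumption made for the example.

```python
import sqlite3

def retrieve_ranked_clips(db_path, matched_keywords):
    """Fetch clips for the matched keywords and rank them by keyword weight."""
    if not matched_keywords:
        return []
    conn = sqlite3.connect(db_path)
    placeholders = ",".join("?" for _ in matched_keywords)
    rows = conn.execute(
        f"SELECT clip_id, MAX(weight) FROM keyword_clip "
        f"WHERE keyword IN ({placeholders}) GROUP BY clip_id",
        matched_keywords,
    ).fetchall()
    conn.close()
    # clips whose matched keyword carries a larger weight are shown first
    return [clip_id for clip_id, _ in sorted(rows, key=lambda r: r[1], reverse=True)]
```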
In the video retrieval method provided by the embodiments of the present invention, retrieval text input by a user is received and matched against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; when a keyword matching the retrieval text is found, at least one video clip corresponding to the keyword is obtained and displayed. By matching the retrieval text input by the user against the video database to obtain the video clips corresponding to a keyword, the present invention removes the need to collect and screen video material manually, reducing the waste of human resources and saving cost.
Embodiment two
Referring to Fig. 2, there is shown a schematic structural diagram of a video retrieval device provided by an embodiment of the present invention. The device may specifically include:
a receiving module 210 configured to receive retrieval text input by a user; a matching module 220 configured to match the retrieval text against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; and an obtaining and display module 230 configured to, when a keyword matching the retrieval text is found, obtain and display at least one video clip corresponding to the keyword.
Preferably, the device further includes: a segmentation module configured to segment a target video to obtain the video clips resulting from the segmentation; an extraction module configured to extract, for each video clip, the video subtitle text of the video clip; a keyword obtaining module configured to perform word segmentation on the video subtitle text to obtain at least one keyword corresponding to the video clip; and an association establishing module configured to establish and save the association relationship between the video clip and the at least one corresponding keyword.
Preferably, the segmentation module includes: a blocking submodule configured to divide each video frame of the target video into blocks, splitting each frame into several video blocks; a change-value obtaining submodule configured to compare the corresponding video blocks of adjacent frames of the target video to obtain a change value for each pair of corresponding video blocks; a setting-value obtaining submodule configured to remove the corresponding video block with the largest change value, sum the squared differences of the remaining corresponding video blocks and normalize the result to obtain a setting value; a segmentation-point determining submodule configured to, when the setting value is greater than a difference threshold, take the later of the two adjacent frames as a video segmentation point; and a video-clip obtaining submodule configured to segment the target video at the video segmentation points to obtain the video clips resulting from the segmentation.
Preferably, the extraction module includes: a first video subtitle obtaining submodule configured to obtain a subtitle file corresponding to each video clip and extract the video subtitle text of each video clip from the subtitle file; or a caption-region selecting submodule configured to select a caption region in each video clip, and a second video subtitle obtaining submodule configured to perform, for the caption region of each video clip, text recognition on the caption region to obtain the video subtitle text corresponding to the video clip.
Preferably, the obtaining and display module 230 includes: a weight obtaining submodule configured to obtain the weight of the keyword in the at least one video clip; and a ranking and display submodule configured to rank and display the at least one video clip by weight.
In the video retrieval device provided by the embodiments of the present invention, retrieval text input by a user is received and matched against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; when a keyword matching the retrieval text is found, at least one video clip corresponding to the keyword is obtained and displayed. By matching the retrieval text input by the user against the video database to obtain the video clips corresponding to a keyword, the present invention removes the need to collect and screen video material manually, reducing the waste of human resources and saving cost.
As for the method embodiments above, for simplicity of description they are expressed as a series of combined actions, but those skilled in the art should understand that the present invention is not limited by the order of the actions described, because according to the present invention certain steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another.
Finally, it should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, commodity or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, commodity or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity or device that includes the element.
The video retrieval method and video retrieval device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementation of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A video retrieval method, characterized in that it comprises:
receiving retrieval text input by a user;
matching the retrieval text against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; and
when a keyword matching the retrieval text is found, obtaining and displaying at least one video clip corresponding to the keyword.
2. The method according to claim 1, characterized in that, before the step of receiving the retrieval text input by the user, the method further comprises:
segmenting a target video to obtain the video clips resulting from the segmentation;
for each video clip, extracting the video subtitle text of the video clip;
performing word segmentation on the video subtitle text to obtain at least one keyword corresponding to the video clip; and
establishing and saving the association relationship between the video clip and the at least one corresponding keyword.
3. The method according to claim 2, characterized in that the step of segmenting the target video to obtain the video clips resulting from the segmentation comprises:
dividing each video frame of the target video into blocks, splitting each frame into several video blocks;
comparing the corresponding video blocks of adjacent frames of the target video to obtain a change value for each pair of corresponding video blocks;
removing the corresponding video block with the largest change value, summing the squared differences of the remaining corresponding video blocks and normalizing the result to obtain a setting value;
when the setting value is greater than a difference threshold, taking the later of the two adjacent frames as a video segmentation point; and
segmenting the target video at the video segmentation points to obtain the video clips resulting from the segmentation.
4. The method according to claim 2, characterized in that the step of extracting, for each video clip, the video subtitle text of the video clip comprises:
obtaining a subtitle file corresponding to each video clip and extracting the video subtitle text of each video clip from the subtitle file; or
selecting a caption region in each video clip; and
for the caption region of each video clip, performing text recognition on the caption region to obtain the video subtitle text corresponding to the video clip.
5. The method according to claim 1, characterized in that the step of obtaining and displaying the at least one video clip corresponding to the keyword comprises:
obtaining the weight of the keyword in the at least one video clip; and
ranking and displaying the at least one video clip by weight according to the obtained weight.
6. A video retrieval device, characterized in that it comprises:
a receiving module configured to receive retrieval text input by a user;
a matching module configured to match the retrieval text against keywords pre-stored in a video database, wherein the video database stores association relationships between keywords and video clips; and
an obtaining and display module configured to, when a keyword matching the retrieval text is found, obtain and display at least one video clip corresponding to the keyword.
7. The device according to claim 6, characterized in that it further comprises:
a segmentation module configured to segment a target video to obtain the video clips resulting from the segmentation;
an extraction module configured to extract, for each video clip, the video subtitle text of the video clip;
a keyword obtaining module configured to perform word segmentation on the video subtitle text to obtain at least one keyword corresponding to the video clip; and
an association establishing module configured to establish and save the association relationship between the video clip and the at least one corresponding keyword.
8. The device according to claim 7, characterized in that the segmentation module comprises:
a blocking submodule configured to divide each video frame of the target video into blocks, splitting each frame into several video blocks;
a change-value obtaining submodule configured to compare the corresponding video blocks of adjacent frames of the target video to obtain a change value for each pair of corresponding video blocks;
a setting-value obtaining submodule configured to remove the corresponding video block with the largest change value, sum the squared differences of the remaining corresponding video blocks and normalize the result to obtain a setting value;
a segmentation-point determining submodule configured to, when the setting value is greater than a difference threshold, take the later of the two adjacent frames as a video segmentation point; and
a video-clip obtaining submodule configured to segment the target video at the video segmentation points to obtain the video clips resulting from the segmentation.
9. The device according to claim 7, characterized in that the extraction module comprises:
a first video subtitle obtaining submodule configured to obtain a subtitle file corresponding to each video clip and extract the video subtitle text of each video clip from the subtitle file; or
a caption-region selecting submodule configured to select a caption region in each video clip; and
a second video subtitle obtaining submodule configured to perform, for the caption region of each video clip, text recognition on the caption region to obtain the video subtitle text corresponding to the video clip.
10. The device according to claim 6, characterized in that the obtaining and display module comprises:
a weight obtaining submodule configured to obtain the weight of the keyword in the at least one video clip; and
a ranking and display submodule configured to rank and display the at least one video clip by weight according to the obtained weight.
CN201810766347.5A 2018-07-12 2018-07-12 Video retrieval method and device Active CN109101558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810766347.5A CN109101558B (en) 2018-07-12 2018-07-12 Video retrieval method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810766347.5A CN109101558B (en) 2018-07-12 2018-07-12 Video retrieval method and device

Publications (2)

Publication Number Publication Date
CN109101558A true CN109101558A (en) 2018-12-28
CN109101558B CN109101558B (en) 2022-07-01

Family

ID=64846252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810766347.5A Active CN109101558B (en) 2018-07-12 2018-07-12 Video retrieval method and device

Country Status (1)

Country Link
CN (1) CN109101558B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101646050A (en) * 2009-09-09 2010-02-10 中国电信股份有限公司 Text annotation method and system, playing method and system of video files
WO2011050280A2 (en) * 2009-10-22 2011-04-28 Chintamani Patwardhan Method and apparatus for video search and delivery
CN101719144A (en) * 2009-11-04 2010-06-02 中国科学院声学研究所 Method for segmenting and indexing scenes by combining captions and video image information
CN102650993A (en) * 2011-02-25 2012-08-29 北大方正集团有限公司 Index establishing and searching methods, devices and systems for audio-video file
CN103761284A (en) * 2014-01-13 2014-04-30 中国农业大学 Video retrieval method and video retrieval system
US20150293928A1 (en) * 2014-04-14 2015-10-15 David Mo Chen Systems and Methods for Generating Personalized Video Playlists
US20180046621A1 (en) * 2016-08-09 2018-02-15 Zorroa Corporation Linearized Search of Visual Media
CN107027060A (en) * 2017-04-18 2017-08-08 腾讯科技(深圳)有限公司 The determination method and apparatus of video segment
CN107704525A (en) * 2017-09-04 2018-02-16 优酷网络技术(北京)有限公司 Video searching method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张环 et al., "Research on sports video annotation and indexing based on hierarchical semantics", Computer Applications and Software *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109933691A (en) * 2019-02-11 2019-06-25 北京百度网讯科技有限公司 Method, apparatus, equipment and storage medium for content retrieval
CN110825913A (en) * 2019-09-03 2020-02-21 上海擎测机电工程技术有限公司 Professional word extraction and part-of-speech tagging method
CN112905829A (en) * 2021-03-25 2021-06-04 王芳 Cross-modal artificial intelligence information processing system and retrieval method
CN113204668A (en) * 2021-05-21 2021-08-03 广州博冠信息科技有限公司 Audio clipping method and device, storage medium and electronic equipment
CN114218438A (en) * 2021-12-23 2022-03-22 北京百度网讯科技有限公司 Video data processing method and device, electronic equipment and computer storage medium
CN115103225A (en) * 2022-06-15 2022-09-23 北京爱奇艺科技有限公司 Video clip extraction method, device, electronic equipment and storage medium
CN115103225B (en) * 2022-06-15 2023-12-26 北京爱奇艺科技有限公司 Video clip extraction method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109101558B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN109101558A (en) A kind of video retrieval method and device
CN109472260B (en) Method for removing station caption and subtitle in image based on deep neural network
US20070162873A1 (en) Apparatus, method and computer program product for generating a thumbnail representation of a video sequence
US20120027295A1 (en) Key frames extraction for video content analysis
US8938153B2 (en) Representative image or representative image group display system, representative image or representative image group display method, and program therefor
CN105744292A (en) Video data processing method and device
CN101692269B (en) Method and device for processing video programs
WO2017032245A1 (en) Method and device for generating video file index information
CN105404846A (en) Image processing method and apparatus
CN107480670A (en) A kind of method and apparatus of caption extraction
US10897658B1 (en) Techniques for annotating media content
Al-Azzeh et al. Adaptation of matlab K-means clustering function to create Color Image Features
CN110121105B (en) Clip video generation method and device
JP2009017325A (en) Telop character region extraction device and method
CN114363695A (en) Video processing method, video processing device, computer equipment and storage medium
EP2345978B1 (en) Detection of flash illuminated scenes in video clips and related ranking of video clips
CN104751107A (en) Key data determination method, device and equipment for video
CN104572996A (en) Processing method and device for video webpage
Mishra et al. Real time and non real time video shot boundary detection using dual tree complex wavelet transform
CN103514196B (en) Information processing method and electronic equipment
CN109741283A (en) A kind of method and apparatus for realizing smart filter
CN105869139A (en) Image processing method and apparatus
CN111739042B (en) Complex background power line extraction method based on digital image features
CN116132752B (en) Video comparison group construction, model training and video scoring methods, devices and equipment
CN114710474B (en) Data stream processing and classifying method based on Internet of things

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant