CN105763884A - Video processing method, device and apparatus - Google Patents

Video processing method, device and apparatus

Info

Publication number
CN105763884A
CN105763884A (application CN201410804837.1A)
Authority
CN
China
Prior art keywords
video
frame
frames
advertisement
video resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410804837.1A
Other languages
Chinese (zh)
Inventor
梁捷
陈煜瀚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou I9Game Information Technology Co Ltd
Original Assignee
Guangzhou Dongjing Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Dongjing Computer Technology Co Ltd filed Critical Guangzhou Dongjing Computer Technology Co Ltd
Priority to CN201410804837.1A priority Critical patent/CN105763884A/en
Publication of CN105763884A publication Critical patent/CN105763884A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video processing method, a video processing device and an apparatus. The method comprises the following steps: acquiring a video resource; scanning the video resource frame by frame; detecting a first type of frames and a second type of frames in the video resource according to a set rule; and merging the detected frames of one type to obtain a new video. Correspondingly, the invention also provides a video processing device and an apparatus. With the technical solutions of the invention, the video required by the user can be obtained from the whole video resource more quickly and conveniently.

Description

Video processing method, device and apparatus
Technical field
The present invention relates to the field of mobile communication technology, and more particularly to a video processing method, device and apparatus.
Background art
At present, video websites host more and more resources. With the development of the network, video has become one of the most important rich-media resources; it is increasingly popular with users and its importance keeps growing. Unlike text content, it is hard for a user to judge within a short time whether a video resource is worth watching; usually the user has to watch it to the end, or skip around, before a judgment can be made. In addition, many video resources have advertising segments inserted into them. Users therefore wish to view the essential parts of a video quickly, or wish the advertisements to be removed from the video resource. For the demand of viewing video highlights, one existing approach is manual viewing and editing: an editor watches every segment of the video resource, extracts the essential parts and recombines them into a shorter video as a highlights reel. For the demand of removing advertisements, an existing approach is likewise manual viewing and editing, in which the advertising segments are deleted so as to provide an advertisement-free video.
However, whether making highlight clips or deleting advertising segments, the existing approaches all require manual viewing and editing, including cutting, merging and deleting. Such processing is time-consuming and costly, cannot be applied on a large scale, and cannot extract the required video for the user quickly and effectively.
Accordingly, it is desirable to provide a new video processing method that can extract the video required by the user from a whole video resource more quickly and conveniently.
Summary of the invention
In view of the above, the present invention provides a video processing method, device and apparatus that can obtain the video required by the user from a whole video resource more quickly and conveniently.
According to an aspect of the present invention, there is provided a video processing method, including:
acquiring a video resource;
scanning the video resource frame by frame;
detecting a first type of frames and a second type of frames in the video resource according to a set rule; and
merging the detected frames of one type to obtain a new video.
Preferably,
detecting a first type of frames and a second type of frames in the video resource according to a set rule includes: detecting close-up frames and normal frames in the video resource according to the set rule; and
merging the detected frames of one type to obtain a new video includes: merging the detected close-up frames to obtain a highlights video.
Preferably,
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method further includes: storing the close-up frames into a buffer; and
merging the detected close-up frames to obtain a highlights video includes: after all video frames have been scanned, merging the close-up frames stored in the buffer to obtain the highlights video; or,
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method further includes: deleting the normal frames; and
merging the detected close-up frames to obtain a highlights video includes: after all video frames have been scanned, merging the close-up frames remaining after the normal frames have been deleted to obtain the highlights video.
Preferably,
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method further includes: setting a start mark at a frame judged to be a close-up frame, and setting an end mark before a frame judged to be a normal frame; and
storing the close-up frames into a buffer includes: extracting the close-up frames between the start mark and the end mark and storing them into the buffer.
Preferably,
detecting the close-up frames and normal frames in the video resource according to the set rule includes: calculating the ratio of the subject to the depth of field in a video frame; if the ratio is greater than a set threshold or falls within a certain threshold range, the video frame is judged to be a close-up frame, and otherwise a normal frame.
Preferably,
detecting a first type of frames and a second type of frames in the video resource according to a set rule includes: detecting advertisement frames and non-advertisement frames in the video resource according to the set rule; and
merging the detected frames of one type to obtain a new video includes: merging the detected non-advertisement frames to obtain an advertisement-free video.
Preferably,
after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the method further includes: storing the non-advertisement frames into a buffer; and
merging the detected non-advertisement frames to obtain an advertisement-free video includes: after all video frames have been scanned, merging the non-advertisement frames stored in the buffer to obtain the advertisement-free video; or,
after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the method further includes: deleting the advertisement frames; and
merging the detected non-advertisement frames to obtain an advertisement-free video includes: after all video frames have been scanned, merging the non-advertisement frames remaining after the advertisement frames have been deleted to obtain the advertisement-free video.
Preferably,
detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule includes: judging according to the difference in color or contrast between a frame and the other frames in the video resource; if the difference in color or contrast is greater than a set threshold, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames; or,
judging according to the station logo and episode mark in the video resource; if the station logo and episode mark are absent from a frame, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames.
Preferably,
whether judging according to the difference in color or contrast between a frame and the other frames in the video resource, or judging according to the station logo and episode mark in the video resource, the judgment is configured to be made within a set time range.
According to another aspect of the present invention, there is provided a video processing device, including:
an acquisition unit, configured to acquire a video resource;
a scanning unit, configured to scan the video resource acquired by the acquisition unit frame by frame;
a classification unit, configured to detect, according to the scanning result of the scanning unit and according to a set rule, a first type of frames and a second type of frames in the video resource; and
a processing unit, configured to merge, according to the classification result of the classification unit, the detected frames of one type to obtain a new video.
Preferably,
the classification unit includes:
a first classification unit, configured to detect close-up frames and normal frames in the video resource according to the set rule; and
the processing unit includes:
a first processing unit, configured to merge the detected close-up frames to obtain a highlights video.
Preferably,
the first classification unit specifically calculates the ratio of the subject to the depth of field in a video frame; if the ratio is greater than a set threshold or falls within a certain threshold range, the video frame is judged to be a close-up frame, and otherwise a normal frame.
Preferably,
the classification unit includes:
a second classification unit, configured to detect advertisement frames and non-advertisement frames in the video resource according to the set rule; and
the processing unit includes:
a second processing unit, configured to merge the detected non-advertisement frames to obtain an advertisement-free video.
Preferably,
the second classification unit specifically judges according to the difference in color or contrast between a frame and the other frames in the video resource; if the difference in color or contrast is greater than a set threshold, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames; or,
it judges according to the station logo and episode mark in the video resource; if the station logo and episode mark are absent from a frame, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames.
According to a further aspect of the present invention, there is provided an apparatus, including a memory and a video processing device, wherein
the memory is configured to store a video resource; and
the video processing device includes:
an acquisition unit, configured to acquire the video resource;
a scanning unit, configured to scan the video resource acquired by the acquisition unit frame by frame;
a classification unit, configured to detect, according to the scanning result of the scanning unit and according to a set rule, a first type of frames and a second type of frames in the video resource; and
a processing unit, configured to merge, according to the classification result of the classification unit, the detected frames of one type to obtain a new video.
With the above method, the video resource is scanned frame by frame, and a first type of frames and a second type of frames in the video resource are detected according to the scanning result and a set rule, so that the detected frames of one type can be merged as required to obtain a new video. For example, if close-up frames and normal frames are detected in the video resource, merging the close-up frames yields a highlights video; if advertisement frames and non-advertisement frames are detected in the video resource, merging the non-advertisement frames yields an advertisement-free video. User demands can therefore be met and the required video provided to the user.
To achieve the above and related objects, one or more aspects of the present invention include the features that will be described in detail below and particularly pointed out in the claims. The following description and the accompanying drawings set forth certain illustrative aspects of the present invention in detail. These aspects, however, are indicative of only some of the various ways in which the principles of the present invention may be employed. In addition, the present invention is intended to include all such aspects and their equivalents.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a first processing flowchart of the video processing method of the present invention;
Fig. 2 is a second processing flowchart of the video processing method of the present invention;
Fig. 3 is a third processing flowchart of the video processing method of the present invention;
Fig. 4 is a fourth processing flowchart of the video processing method of the present invention;
Fig. 5 is a fifth processing flowchart of the video processing method of the present invention;
Fig. 6 is a first structural schematic diagram of the video processing device of the present invention;
Fig. 7 is a second structural schematic diagram of the video processing device of the present invention;
Fig. 8 is a structural schematic diagram of the apparatus of the present invention.
The same reference numerals throughout the figures indicate similar or corresponding features or functions.
Detailed description of the invention
Various aspects of the present disclosure are described below. It should be understood that the teachings herein may be embodied in a wide variety of forms, and that any specific structure, function, or both disclosed herein is merely representative. Based on the teachings herein, those skilled in the art should appreciate that an aspect disclosed herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, a device may be implemented or a method practiced using any number of the aspects set forth herein. In addition, such a device may be implemented or such a method practiced using other structure or functionality in addition to, or other than, one or more of the aspects set forth herein. Furthermore, any aspect described herein may comprise at least one element of a claim.
Embodiments of the present invention are described below with reference to the accompanying drawings.
The present invention provides a video processing method that can extract the video required by the user from a whole video resource more quickly and conveniently, for example extracting a highlights video or obtaining an advertisement-free video.
Fig. 1 is a first processing flowchart of the video processing method of the present invention.
As shown in Fig. 1, the method includes the following steps:
Step 101: acquire a video resource;
Step 102: scan the video resource frame by frame;
Step 103: detect a first type of frames and a second type of frames in the video resource according to a set rule;
Step 104: merge the detected frames of one type to obtain a new video.
With the above method, the video resource is scanned frame by frame, and the first type of frames and the second type of frames in the video resource are detected according to the scanning result and the set rule, so that the detected frames of one type can be merged as required to obtain a new video: if close-up frames and normal frames are detected in the video resource, merging the close-up frames yields a highlights video; if advertisement frames and non-advertisement frames are detected, merging the non-advertisement frames yields an advertisement-free video. User demands can thus be met and the required video provided to the user. The sketch below illustrates this flow.
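To make the four steps above concrete, here is a minimal Python sketch. It is an interpretation of the flow, not code from the patent: `Frame`, `keep` and `process_video` are illustrative names, and the set rule is passed in as a predicate so that the later close-up and advertisement rules can be plugged in.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Frame:
    index: int    # position of the frame in the original video resource
    data: bytes   # decoded frame payload (placeholder for real pixel data)


def process_video(frames: List[Frame],
                  keep: Callable[[Frame], bool]) -> List[Frame]:
    """Steps 101-104: scan the decomposed video resource frame by frame,
    classify each frame with the set rule `keep`, and merge the kept
    frames of one type into a new video (returned as a frame list)."""
    new_video: List[Frame] = []
    for frame in frames:          # step 102: frame-by-frame scan
        if keep(frame):           # step 103: classify by the set rule
            new_video.append(frame)
    return new_video              # step 104: merged result, the new video


# Hypothetical usage: a close-up test yields the highlights video,
# while a "not an advertisement" test yields the advertisement-free video.
```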
Fig. 2 is a second processing flowchart of the video processing method of the present invention.
As shown in Fig. 2, the method includes the following steps:
Step 201: acquire a video resource;
Step 202: scan the video resource frame by frame;
Step 203: detect close-up frames and normal frames in the video resource according to a set rule.
This step includes: calculating the ratio of the subject to the depth of field in a video frame; if the ratio is greater than a set threshold or falls within a certain threshold range, the video frame is judged to be a close-up frame, and otherwise a normal frame.
Step 204: merge the detected close-up frames to obtain a highlights video.
Here, after detecting the close-up frames and normal frames in the video resource according to the set rule, the method may further include: storing the close-up frames into a buffer;
and merging the detected close-up frames to obtain a highlights video then includes: after all video frames have been scanned, merging the close-up frames stored in the buffer to obtain the highlights video. Alternatively,
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method may further include: deleting the normal frames;
and merging the detected close-up frames to obtain a highlights video then includes: after all video frames have been scanned, merging the close-up frames remaining after the normal frames have been deleted to obtain the highlights video. Both alternatives are sketched after this paragraph.
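The two alternatives just described, buffering the detected close-up frames versus deleting the normal frames and merging what remains, can be sketched as follows. `is_closeup` stands for the set rule (the subject-to-depth-of-field test described below); all names are illustrative assumptions, not API from the patent.

```python
def highlights_via_buffer(frames, is_closeup):
    """Store close-up frames into a buffer while scanning; after all
    frames are scanned the buffer is the merged highlights video."""
    buffer = []
    for frame in frames:
        if is_closeup(frame):
            buffer.append(frame)
    return buffer


def highlights_via_deletion(frames, is_closeup):
    """Delete normal frames as they are detected; the close-up frames
    that remain are merged into the highlights video."""
    remaining = list(frames)
    for i in range(len(remaining) - 1, -1, -1):
        if not is_closeup(remaining[i]):
            del remaining[i]      # drop the normal frame
    return remaining
```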
Fig. 3 is a third processing flowchart of the video processing method of the present invention. This flowchart describes the detailed process of generating a highlights video segment.
The present invention can decompose the video to detect the close-up frames and normal frames in the video resource, and extract and combine the close-up frames to obtain a "highlights" video; or it can decompose the video to detect advertisement frames that differ considerably from the other video frames, delete the advertisement frames or extract the non-advertisement frames, and recombine them to obtain an advertisement-free video, thereby meeting user demand and providing the required video segments.
For a video, the key, essential parts usually use close-up shots. The present invention therefore decomposes the video resource in units of frames, scans the video frame by frame, and processes the frames accordingly when a close-up frame or a normal frame is detected. For example, after the video resource is acquired, it is scanned frame by frame; when a close-up frame is detected, a start mark is set; when a normal frame is then detected, an end mark is set before the normal frame; and the frames in the interval from the start mark to the end mark are extracted into a buffer. After the whole video resource has been processed, the resources extracted into the buffer are combined and a "highlights" video segment is generated.
Specifically, as shown in Fig. 3, the process mainly includes the following steps:
Step 301: the server acquires a video resource and decomposes the acquired video resource in units of frames.
In this step, the server may acquire the video resource from a website, directly from a video resource provider, or from a repository of resources uploaded by users; the present invention is not limited in this respect.
In this step, the server decomposes the acquired video resource in units of frames. An existing video decomposition method may be used for this, and the present invention is not limited in this respect.
Step 302: the server scans the decomposed video resource frame by frame; during scanning, a start mark is set at a frame when it is detected to be a close-up frame, and an end mark is set before a frame when it is detected to be a normal frame.
The shots in a video can be roughly divided into two classes. One is the ordinary shot, whose depth of field is roughly what the naked eye would see. The other is the close-up shot, which has a very shallow depth of field and mainly emphasizes the subject being photographed. The depth of field is the range in front of and behind the focus point within which everything still appears acceptably sharp when the lens is focused on a certain point; it determines whether the subject is emphasized against a blurred background or the background is captured sharply. In other words, there is a space of a certain length in front of the lens, before and after the focus point; when the subject lies within this space, its image on the film falls between the circles of confusion on either side of the focus, and the length of this space where the subject lies is called the depth of field. A close-up is a shot in which the picture frames no more than an adult's head and shoulders, or a detail of some other subject. In a close-up shot the subject fills the picture and is much closer to the viewer than in a medium shot, while the background recedes or even disappears, so a close-up can show facial expressions very well.
The present invention scans the video frames: if the calculated ratio of the subject to the depth of field is greater than a set threshold or falls within a certain threshold range, the video frame is judged to be a close-up frame, and otherwise a normal frame. It should be noted that the ratio of the subject to the depth of field can be computed with existing methods commonly used in the industry, calculated from sharpness, and the present invention does not limit the specific computation.
It should also be noted that the above threshold is an empirical value. For example, a frame may be considered a close-up frame when the ratio of the subject to the depth-of-field part of the picture is greater than 0.8, or when the ratio is between 0.8 and 2. A sketch of this rule follows.
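As a hedged illustration of this rule: the sketch below assumes the subject-to-depth-of-field ratio has already been computed by some existing sharpness-based routine (the patent leaves that computation to methods commonly used in the industry). The helper name `ratio_of` is hypothetical; the 0.8 lower bound and the 0.8–2 range are the example values given above.

```python
def is_closeup(frame, ratio_of, threshold=0.8, upper=2.0, use_range=False):
    """Judge a video frame to be a close-up frame when the ratio of the
    subject to the depth-of-field part of the picture exceeds the set
    threshold (or, alternatively, falls inside the set threshold range);
    otherwise it is a normal frame."""
    ratio = ratio_of(frame)   # assumed existing subject/depth-of-field estimate
    if use_range:
        return threshold <= ratio <= upper
    return ratio > threshold
```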
Step 303: extract the video frames between the set start mark and end mark and save them in the buffer.
In this step, according to the start mark and end mark set previously, the video frames between the start mark and the end mark are extracted and saved in the buffer.
Step 304: after the whole video resource has been scanned, merge the video frames in the buffer to obtain the highlights video.
In this step, after the whole video resource has been scanned (that is, all video frames have been scanned), the close-up frames saved in the buffer are merged to obtain a highlights video composed of close-up frames.
With the above processing flow, if a user uploads a video, the server automatically processes it according to the above flow and generates the corresponding "highlights" video for that user or other users to browse quickly.
It should be noted that, in the above processing, the normal frames may instead be deleted directly as soon as they are detected, keeping only the close-up frames; combining the retained close-up frames likewise yields a highlights video composed of close-up frames. A sketch of the single-pass mark-and-buffer variant follows.
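Steps 302–304 can be read as a single pass that opens a segment at the first close-up frame of a run (the start mark), closes it just before the next normal frame (the end mark), saves the segment to the buffer, and finally merges the buffered segments into the highlights video. This is one minimal interpretation of the flow; the frame objects and the `is_closeup` predicate are assumed to be those sketched earlier, not definitions from the patent.

```python
def extract_highlights(frames, is_closeup):
    """Scan the decomposed video frame by frame, mark the start and end of
    every run of close-up frames, extract each marked interval into the
    buffer, and merge the buffer into the highlights video (steps 302-304)."""
    buffer = []        # segments saved to the buffer area
    segment = []       # frames between the current start mark and end mark
    for frame in frames:
        if is_closeup(frame):
            segment.append(frame)      # start mark set at the first close-up frame
        elif segment:
            buffer.append(segment)     # end mark set just before this normal frame
            segment = []
    if segment:                        # the resource ended inside a close-up run
        buffer.append(segment)
    # Step 304: after the whole resource is scanned, merge the buffered segments.
    return [frame for seg in buffer for frame in seg]
```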
Fig. 4 is a fourth processing flowchart of the video processing method of the present invention. Fig. 4 describes the process of obtaining an advertisement-free video.
As shown in Fig. 4, the method includes the following steps:
Step 401: acquire a video resource;
Step 402: scan the video resource frame by frame;
Step 403: detect advertisement frames and non-advertisement frames in the video resource according to a set rule.
This step includes: judging according to the difference in color or contrast between a frame and the other frames in the video resource; if the difference in color or contrast is greater than a set threshold, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames; or,
judging according to the station logo and episode mark in the video resource; if the station logo and episode mark are absent from a frame, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames.
Step 404: merge the detected non-advertisement frames to obtain an advertisement-free video.
Here, after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the method may further include: storing the non-advertisement frames into a buffer;
and merging the detected non-advertisement frames to obtain an advertisement-free video then includes: after all video frames have been scanned, merging the non-advertisement frames stored in the buffer to obtain the advertisement-free video. Alternatively,
after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the method may further include: deleting the advertisement frames;
and merging the detected non-advertisement frames to obtain an advertisement-free video then includes: after all video frames have been scanned, merging the non-advertisement frames remaining after the advertisement frames have been deleted to obtain the advertisement-free video.
Fig. 5 is a fifth processing flowchart of the video processing method of the present invention. This flowchart describes the detailed process of generating an advertisement-free video.
In a video, advertisement frames often differ considerably, and rather abruptly, from the other video frames. The present invention therefore decomposes the video resource in units of frames, scans the video frame by frame, identifies frames that differ markedly, judges them to be advertisement frames, and processes them accordingly. For example, if a frame is found to differ greatly from the other frames, such as a large difference in color or contrast between that frame and the others, the frame is judged to be an advertisement frame; or the judgment is made according to the station logo and episode mark, and if these key marks are absent from a frame, the frame is judged to be an advertisement frame. Once the advertisement frames have been identified, the video frames judged to be advertisement frames are deleted, or the non-advertisement frames are extracted and recombined, and an advertisement-free video segment is obtained.
Furthermore, to improve accuracy, the comparison of color or contrast may be restricted to a short time range. This is because advertisements are charged by time and are therefore short, so within a short set time the frame-to-frame changes in color or contrast can be large.
As shown in Fig. 5, the process mainly includes the following steps:
Step 501: the server acquires a video resource and decomposes the acquired video resource in units of frames.
In this step, the server may acquire the video resource from a website, directly from a video resource provider, or from a repository of resources uploaded by users; the present invention is not limited in this respect.
In this step, the server decomposes the acquired video resource in units of frames. An existing video decomposition method may be used for this, and the present invention is not limited in this respect.
Step 502: the server scans the decomposed video resource frame by frame and identifies advertisement frames and non-advertisement frames during scanning.
Because advertisement frames often differ considerably, and rather abruptly, from the other video frames, this step scans the video frame by frame and identifies the frames with large differences, distinguishing advertisement frames from non-advertisement frames according to the degree of change between frames or according to set identification marks.
For example, several consecutive frames are compared; if, among these consecutive frames, one frame is found to differ greatly from the others, such as a large difference in color or contrast between that frame and the others (for instance the contrast difference exceeds a certain threshold, the threshold being an empirical value), that frame is judged to be an advertisement frame and the others are non-advertisement frames. Or,
the judgment is made according to preset key marks. For example, for a certain episode of a certain TV series, corresponding key marks can be set for that episode's video, such as the station logo and episode mark; if these key marks are absent from a frame, that frame is judged to be an advertisement frame and the others are non-advertisement frames.
Furthermore, when comparing color or contrast, a shorter time range may additionally be set for the judgment. This is because advertisements are charged by time and are therefore short, so within a short set time the frame-to-frame changes in color or contrast can be large. Adding the time-range restriction allows advertisement frames to be identified more accurately. Both judging rules are sketched below.
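A sketch of the two judging rules above: either compare a frame's color or contrast against the other frames inside a short time window, or check for the station logo and episode mark. The window length, the `difference` metric and the `has_station_logo` / `has_episode_mark` helpers are assumptions for illustration; the patent only states that the threshold is an empirical value and that the comparison should be limited to a set time range.

```python
def is_ad_by_difference(frames, i, difference, threshold, window=25):
    """Judge frame i to be an advertisement frame if its color/contrast
    difference from every other frame in a short window around it exceeds
    the set threshold (the frame is abruptly unlike its neighbours)."""
    lo, hi = max(0, i - window), min(len(frames), i + window + 1)
    others = [frames[j] for j in range(lo, hi) if j != i]
    return bool(others) and all(difference(frames[i], o) > threshold for o in others)


def is_ad_by_key_marks(frame, has_station_logo, has_episode_mark):
    """Judge a frame to be an advertisement frame if the preset key marks
    (station logo and episode mark) are absent from it."""
    return not (has_station_logo(frame) and has_episode_mark(frame))
```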
Step 503: delete the identified advertisement frames, or extract the non-advertisement frames and save them in the buffer.
Step 504: after the whole video resource has been scanned, an advertisement-free video is obtained because all advertisement frames have been deleted, or the non-advertisement frames in the buffer are merged to obtain the advertisement-free video.
With the above processing flow, if a video resource provider supplies a video, the server automatically processes it according to the above flow, finds and deletes the advertisement frame content in the video, or extracts and recombines the non-advertisement frames, and thus obtains an advertisement-free video, meeting the user's demand and achieving the purpose of removing advertisements. The buffer variant is sketched below.
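Steps 502–504 then reduce to the same single-pass shape as the highlights flow: either delete the frames judged to be advertisements, or keep the non-advertisement frames in the buffer and merge the buffer once the whole resource has been scanned. The sketch below shows the buffer variant; `is_ad` stands for whichever of the two judging rules sketched above is in use, applied to the frame list and an index, and is an assumption rather than an interface defined by the patent.

```python
def remove_advertisements(frames, is_ad):
    """Scan frame by frame, extract the non-advertisement frames into the
    buffer (step 503), and merge the buffer into the advertisement-free
    video after the whole resource has been scanned (step 504)."""
    buffer = []
    for i, frame in enumerate(frames):
        if not is_ad(frames, i):       # step 502: judge advertisement frames
            buffer.append(frame)       # step 503: save the non-ad frame to the buffer
    return buffer                      # step 504: merged advertisement-free video
```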
It should be noted that the embodiments shown in Fig. 3 and Fig. 5 may be implemented separately or in combination; that is, in the process of extracting close-up frames to generate a highlights video, the advertisement frames in the highlights video may further be identified and deleted, so that an advertisement-free highlights video is obtained and the user's demand is met.
The video processing method of the present invention has been described in detail above. Correspondingly, the present invention provides a video processing device and an apparatus.
Fig. 6 is a first structural schematic diagram of the video processing device of the present invention.
As shown in Fig. 6, there is provided a video processing device 60, including:
an acquisition unit 601, configured to acquire a video resource;
a scanning unit 602, configured to scan the video resource acquired by the acquisition unit 601 frame by frame;
a classification unit 603, configured to detect, according to the scanning result of the scanning unit 602 and according to a set rule, a first type of frames and a second type of frames in the video resource; and
a processing unit 604, configured to merge, according to the classification result of the classification unit 603, the detected frames of one type to obtain a new video.
Fig. 7 is a second structural schematic diagram of the video processing device of the present invention.
As shown in Fig. 7, the video processing device 60 includes the acquisition unit 601, the scanning unit 602, the classification unit 603 and the processing unit 604.
The classification unit 603 includes: a first classification unit 6031, configured to detect close-up frames and normal frames in the video resource according to the set rule.
The processing unit 604 includes: a first processing unit 6041, configured to merge the detected close-up frames to obtain a highlights video.
The first classification unit 6031 specifically calculates the ratio of the subject to the depth of field in a video frame; if the ratio is greater than a set threshold or falls within a certain threshold range, the video frame is judged to be a close-up frame, and otherwise a normal frame.
The classification unit 603 further includes: a second classification unit 6032, configured to detect advertisement frames and non-advertisement frames in the video resource according to the set rule.
The processing unit 604 further includes: a second processing unit 6042, configured to merge the detected non-advertisement frames to obtain an advertisement-free video.
The second classification unit 6032 specifically judges according to the difference in color or contrast between a frame and the other frames in the video resource; if the difference in color or contrast is greater than a set threshold, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames. Or, it judges according to the station logo and episode mark in the video resource; if the station logo and episode mark are absent from a frame, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames.
Further, after detecting the close-up frames and normal frames in the video resource according to the set rule, the device may store the close-up frames into a buffer, and merging the detected close-up frames to obtain a highlights video then includes: after all video frames have been scanned, merging the close-up frames stored in the buffer to obtain the highlights video. Alternatively, the device may delete the normal frames, and merging the detected close-up frames to obtain a highlights video then includes: after all video frames have been scanned, merging the close-up frames remaining after the normal frames have been deleted to obtain the highlights video.
Further, after detecting the close-up frames and normal frames in the video resource according to the set rule, a start mark is set at a frame judged to be a close-up frame and an end mark is set before a frame judged to be a normal frame, and storing the close-up frames into the buffer includes: extracting the close-up frames between the start mark and the end mark and storing them into the buffer.
Further, after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the device may store the non-advertisement frames into a buffer, and merging the detected non-advertisement frames to obtain an advertisement-free video then includes: after all video frames have been scanned, merging the non-advertisement frames stored in the buffer to obtain the advertisement-free video. Alternatively, the device may delete the advertisement frames, and merging the detected non-advertisement frames to obtain an advertisement-free video then includes: after all video frames have been scanned, merging the non-advertisement frames remaining after the advertisement frames have been deleted to obtain the advertisement-free video.
Further, whether judging according to the difference in color or contrast between a frame and the other frames in the video resource, or judging according to the station logo and episode mark in the video resource, the judgment is configured to be made within a set time range.
Fig. 8 is a structural schematic diagram of the apparatus of the present invention.
As shown in Fig. 8, the apparatus 80 includes a memory 801 and the video processing device 60. The apparatus of the present invention may be a server device or a terminal device, and is preferably a server device.
The memory 801 is configured to store a video resource.
The video processing device 60 includes:
an acquisition unit, configured to acquire the video resource;
a scanning unit, configured to scan the video resource acquired by the acquisition unit frame by frame;
a classification unit, configured to detect, according to the scanning result of the scanning unit and according to a set rule, a first type of frames and a second type of frames in the video resource; and
a processing unit, configured to merge, according to the classification result of the classification unit, the detected frames of one type to obtain a new video.
For the specific structure of the video processing device 60, refer to the description of Fig. 6 and Fig. 7, which is not repeated here.
In addition, the mobile device of the present invention may typically be any of various handheld devices with a Bluetooth function, for example a mobile phone with a Bluetooth function or a personal digital assistant (PDA).
In addition, the method according to the present invention may also be implemented as a computer program executed by a processor (such as a CPU) in a mobile device and stored in the memory of the mobile device. When the computer program is executed by the processor, it performs the above-described functions defined in the method of the present invention.
In addition, the method according to the present invention may also be implemented as a computer program product including a computer-readable medium on which a computer program for performing the above-described functions defined in the method of the present invention is stored.
In addition, the above method steps and system units may also be implemented with a controller and a computer-readable storage device storing a computer program that causes the controller to implement the above steps or unit functions.
Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or as hardware depends on the particular application and the design constraints imposed on the overall system. Those skilled in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Although the disclosure above presents exemplary embodiments of the present invention, it should be noted that various changes and modifications may be made without departing from the scope of the present invention as defined by the claims. The functions, steps and/or actions of the method claims according to the embodiments of the invention described herein need not be performed in any particular order. In addition, although elements of the present invention may be described or claimed in the singular, the plural is also contemplated unless limitation to the singular is explicitly stated.
Although the embodiments according to the present invention have been described above with reference to the figures, those skilled in the art should understand that various improvements may be made to the embodiments set forth above without departing from the spirit of the present invention. Accordingly, the protection scope of the present invention should be determined by the content of the appended claims.

Claims (15)

1. A video processing method, comprising:
acquiring a video resource;
scanning the video resource frame by frame;
detecting a first type of frames and a second type of frames in the video resource according to a set rule; and
merging the detected frames of one type to obtain a new video.
2. The video processing method according to claim 1, wherein
detecting a first type of frames and a second type of frames in the video resource according to a set rule comprises: detecting close-up frames and normal frames in the video resource according to the set rule; and
merging the detected frames of one type to obtain a new video comprises: merging the detected close-up frames to obtain a highlights video.
3. The video processing method according to claim 2, wherein
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method further comprises: storing the close-up frames into a buffer; and
merging the detected close-up frames to obtain a highlights video comprises: after all video frames have been scanned, merging the close-up frames stored in the buffer to obtain the highlights video; or,
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method further comprises: deleting the normal frames; and
merging the detected close-up frames to obtain a highlights video comprises: after all video frames have been scanned, merging the close-up frames remaining after the normal frames have been deleted to obtain the highlights video.
4. The video processing method according to claim 3, wherein
after detecting the close-up frames and normal frames in the video resource according to the set rule, the method further comprises: setting a start mark at a frame judged to be a close-up frame, and setting an end mark before a frame judged to be a normal frame; and
storing the close-up frames into a buffer comprises: extracting the close-up frames between the start mark and the end mark and storing them into the buffer.
5. The video processing method according to any one of claims 2 to 4, wherein
detecting the close-up frames and normal frames in the video resource according to the set rule comprises: calculating the ratio of the subject to the depth of field in a video frame; if the ratio is greater than a set threshold or falls within a certain threshold range, judging the video frame to be a close-up frame, and otherwise a normal frame.
6. The video processing method according to claim 1, wherein
detecting a first type of frames and a second type of frames in the video resource according to a set rule comprises: detecting advertisement frames and non-advertisement frames in the video resource according to the set rule; and
merging the detected frames of one type to obtain a new video comprises: merging the detected non-advertisement frames to obtain an advertisement-free video.
7. The video processing method according to claim 6, wherein
after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the method further comprises: storing the non-advertisement frames into a buffer; and
merging the detected non-advertisement frames to obtain an advertisement-free video comprises: after all video frames have been scanned, merging the non-advertisement frames stored in the buffer to obtain the advertisement-free video; or,
after detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule, the method further comprises: deleting the advertisement frames; and
merging the detected non-advertisement frames to obtain an advertisement-free video comprises: after all video frames have been scanned, merging the non-advertisement frames remaining after the advertisement frames have been deleted to obtain the advertisement-free video.
8. The video processing method according to claim 6 or 7, wherein
detecting the advertisement frames and non-advertisement frames in the video resource according to the set rule comprises: judging according to the difference in color or contrast between a frame and the other frames in the video resource; if the difference in color or contrast is greater than a set threshold, judging the frame to be an advertisement frame and the other frames to be non-advertisement frames; or,
judging according to the station logo and episode mark in the video resource; if the station logo and episode mark are absent from a frame, judging the frame to be an advertisement frame and the other frames to be non-advertisement frames.
9. The video processing method according to claim 8, wherein
whether judging according to the difference in color or contrast between a frame and the other frames in the video resource, or judging according to the station logo and episode mark in the video resource, the judgment is configured to be made within a set time range.
10. A video processing device, comprising:
an acquisition unit, configured to acquire a video resource;
a scanning unit, configured to scan the video resource acquired by the acquisition unit frame by frame;
a classification unit, configured to detect, according to the scanning result of the scanning unit and according to a set rule, a first type of frames and a second type of frames in the video resource; and
a processing unit, configured to merge, according to the classification result of the classification unit, the detected frames of one type to obtain a new video.
11. The video processing device according to claim 10, wherein
the classification unit comprises: a first classification unit, configured to detect close-up frames and normal frames in the video resource according to the set rule; and
the processing unit comprises: a first processing unit, configured to merge the detected close-up frames to obtain a highlights video.
12. The video processing device according to claim 11, wherein
the first classification unit specifically calculates the ratio of the subject to the depth of field in a video frame; if the ratio is greater than a set threshold or falls within a certain threshold range, the video frame is judged to be a close-up frame, and otherwise a normal frame.
13. The video processing device according to claim 10, wherein
the classification unit comprises: a second classification unit, configured to detect advertisement frames and non-advertisement frames in the video resource according to the set rule; and
the processing unit comprises: a second processing unit, configured to merge the detected non-advertisement frames to obtain an advertisement-free video.
14. The video processing device according to claim 13, wherein
the second classification unit specifically judges according to the difference in color or contrast between a frame and the other frames in the video resource; if the difference in color or contrast is greater than a set threshold, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames; or,
it judges according to the station logo and episode mark in the video resource; if the station logo and episode mark are absent from a frame, the frame is judged to be an advertisement frame and the other frames are non-advertisement frames.
15. An apparatus, comprising a memory and a video processing device, wherein
the memory is configured to store a video resource; and
the video processing device comprises:
an acquisition unit, configured to acquire the video resource;
a scanning unit, configured to scan the video resource acquired by the acquisition unit frame by frame;
a classification unit, configured to detect, according to the scanning result of the scanning unit and according to a set rule, a first type of frames and a second type of frames in the video resource; and
a processing unit, configured to merge, according to the classification result of the classification unit, the detected frames of one type to obtain a new video.
CN201410804837.1A 2014-12-18 2014-12-18 Video processing method, device and apparatus Pending CN105763884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410804837.1A CN105763884A (en) 2014-12-18 2014-12-18 Video processing method, device and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410804837.1A CN105763884A (en) 2014-12-18 2014-12-18 Video processing method, device and apparatus

Publications (1)

Publication Number Publication Date
CN105763884A true CN105763884A (en) 2016-07-13

Family

ID=56341248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410804837.1A Pending CN105763884A (en) 2014-12-18 2014-12-18 Video processing method, device and apparatus

Country Status (1)

Country Link
CN (1) CN105763884A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1589003A (en) * 2004-07-07 2005-03-02 威盛电子股份有限公司 Method and interface system for, display interface to help user detect ad fragment
CN1589002A (en) * 2004-08-04 2005-03-02 威盛电子股份有限公司 Method and relative system for high efficiency ad detection in video signal
CN101080028A (en) * 2006-05-25 2007-11-28 北大方正集团有限公司 An advertisement video detection method
CN101175214A (en) * 2007-11-15 2008-05-07 北京大学 Method and apparatus for real-time detecting advertisement from broadcast data stream
CN101241553A (en) * 2008-01-24 2008-08-13 北京六维世纪网络技术有限公司 Method and device for recognizing customizing messages jumping-off point and terminal
CN101604325A (en) * 2009-07-17 2009-12-16 北京邮电大学 Method for classifying sports video based on key frame of main scene lens
US20110064378A1 (en) * 2009-09-14 2011-03-17 Gharaat Amir H Multifunction Multimedia Device
CN102890950A (en) * 2011-07-18 2013-01-23 大猩猩科技股份有限公司 Media automatic editing device and method, and media broadcasting method and media broadcasting system
CN102685398A (en) * 2011-09-06 2012-09-19 天脉聚源(北京)传媒科技有限公司 News video scene generating method

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10936878B2 (en) 2017-04-28 2021-03-02 Advanced New Technologies Co., Ltd. Method and device for determining inter-cut time range in media item
WO2018196811A1 (en) * 2017-04-28 2018-11-01 阿里巴巴集团控股有限公司 Method and device for determining inter-cut time bucket in audio/video
CN108810615A (en) * 2017-04-28 2018-11-13 阿里巴巴集团控股有限公司 The method and apparatus for determining spot break in audio and video
TWI685254B (en) * 2017-04-28 2020-02-11 香港商阿里巴巴集團服務有限公司 Method and device for determining insertion time period in audio and video
CN108460106A (en) * 2018-02-06 2018-08-28 北京奇虎科技有限公司 A kind of method and apparatus of identification advertisement video
WO2019196795A1 (en) * 2018-04-08 2019-10-17 中兴通讯股份有限公司 Video editing method, device and electronic device
CN108833969A (en) * 2018-06-28 2018-11-16 腾讯科技(深圳)有限公司 A kind of clipping method of live stream, device and equipment
CN110996138B (en) * 2019-12-17 2021-02-05 腾讯科技(深圳)有限公司 Video annotation method, device and storage medium
CN110996138A (en) * 2019-12-17 2020-04-10 腾讯科技(深圳)有限公司 Video annotation method, device and storage medium
WO2021120814A1 (en) * 2019-12-17 2021-06-24 腾讯科技(深圳)有限公司 Video annotation method and apparatus, device, and computer-readable storage medium
US11678029B2 (en) 2019-12-17 2023-06-13 Tencent Technology (Shenzhen) Company Limited Video labeling method and apparatus, device, and computer-readable storage medium
CN111612875A (en) * 2020-04-23 2020-09-01 北京达佳互联信息技术有限公司 Dynamic image generation method and device, electronic equipment and storage medium
CN114157881A (en) * 2021-10-29 2022-03-08 北京达佳互联信息技术有限公司 Multimedia processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105763884A (en) Video processing method, device and apparatus
KR101611440B1 (en) Method and apparatus for processing image
CN106254933B (en) Subtitle extraction method and device
CN107862315B (en) Subtitle extraction method, video searching method, subtitle sharing method and device
US10762649B2 (en) Methods and systems for providing selective disparity refinement
Dirik et al. Analysis of seam-carving-based anonymization of images against PRNU noise pattern-based source attribution
WO2016187888A1 (en) Keyword notification method and device based on character recognition, and computer program product
CN106575223B (en) Image classification method and image classification device
CN107430780B (en) Method for output creation based on video content characteristics
US10068616B2 (en) Thumbnail generation for video
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
CN110889379A (en) Expression package generation method and device and terminal equipment
CN103198311A (en) Method and apparatus for recognizing a character based on a photographed image
US10296539B2 (en) Image extraction system, image extraction method, image extraction program, and recording medium storing program
CN111401238A (en) Method and device for detecting character close-up segments in video
WO2016031573A1 (en) Image-processing device, image-processing method, program, and recording medium
CN111860346A (en) Dynamic gesture recognition method and device, electronic equipment and storage medium
CN113505707A (en) Smoking behavior detection method, electronic device and readable storage medium
EP3076674B1 (en) Video quality detection method and device
CN106570466B (en) Video classification method and system
KR101717441B1 (en) Apparatus and method for protecting privacy in character image
US20100102961A1 (en) Alert system based on camera identification
CN105229700B (en) Device and method for extracting peak figure picture from multiple continuously shot images
CN108024148B (en) Behavior feature-based multimedia file identification method, processing method and device
KR101193549B1 (en) The Method and system for an episode auto segmentation of the TV program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160928

Address after: 510627 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping radio square B tower 13 floor 02 unit self

Applicant after: GUANGZHOU I9GAME INFORMATION TECHNOLOGY CO., LTD.

Address before: 510627 Guangdong city of Guangzhou province Whampoa Tianhe District Road No. 163 Xiping Yun Lu Yun Ping B radio 14 floor tower square

Applicant before: Guangzhou Dongjing Computer Technology Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20160713

RJ01 Rejection of invention patent application after publication