CN102543136B - Method and device for clipping video - Google Patents
Abstract
The invention provides a method and a device for clipping a video. The method comprises the following steps: within a video segment in a specified time range, computing a characteristic value for each video frame according to preset features; extracting target video frames, a target video frame being a frame whose characteristic value meets preset conditions; and splicing the video at the time point corresponding to a target video frame. With the invention, the video can be clipped at an automatically selected optimal time point, complicated manual operation is avoided, and the validity of the selected time point is ensured.
Description
Technical field
The present application relates to the technical field of video processing, and in particular to a video clipping method and a video clipping device.
Background
Video clipping is the process of cutting video at different time points and splicing the pieces together. In this process the choice of a suitable time point is extremely important, yet at present it can only be made manually.
For example, if a mistake occurs while recording a course, a suitable time point must later be found in the video at which to make the edit. Done manually, this requires repeated slow playback, single-frame stepping and similar operations to locate the optimal cut point, which consumes a great deal of time and effort. Moreover, because the work is tedious and repetitive, the manually chosen time point is sometimes not the optimal one.
Therefore, a technical problem that those skilled in the art urgently need to solve is how to devise a video clipping method and device that automatically choose the optimal time point to complete the clip, avoiding complicated manual operation while ensuring the validity of the chosen time point.
Summary of the invention
The technical problem to be solved by this application is to provide a video clipping method that automatically chooses the optimal time point to complete the clip, avoiding complicated manual operation while ensuring the validity of the chosen time point.
This application also provides a video clipping device to guarantee the application and realization of the above method in practice.
To solve the above problem, this application discloses a video clipping method, which may specifically comprise:
within a video segment in a specified time range, computing a characteristic value for each video frame according to preset features;
extracting target video frames, a target video frame being a frame whose characteristic value meets a preset condition;
splicing the video at the time point corresponding to the target video frame.
Preferably, the step of computing a characteristic value for each video frame according to preset features within the video segment of the specified time range may comprise:
for each video frame in the video segment of the specified time range, computing a first, a second, a third, a fourth and a fifth characteristic value respectively;
wherein the first characteristic value is computed from the reciprocal of the frame's continuous silence length and a first weight threshold;
the second characteristic value is computed from the frame's background motion amplitude and a second weight threshold;
the third characteristic value is computed from the frame's foreground motion amplitude and a third weight threshold;
the fourth characteristic value is computed from the absolute difference between the frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth characteristic value is computed from the difference between the frame's time point and the end point of the specified time range, and a fifth weight threshold;
and
computing, as the characteristic value of each video frame in the video segment, the weighted average of its first, second, third, fourth and fifth characteristic values.
Preferably, the step of computing a characteristic value for each video frame according to preset features within the video segment of the specified time range may comprise:
computing a first characteristic value for each video frame in the video segment of the specified time range, the first characteristic value being computed from the reciprocal of the frame's continuous silence length and a first weight threshold;
extracting one or more characteristic fragments according to the first characteristic values;
traversing the video frames in the characteristic fragments from back to front, and computing the second, third, fourth and fifth characteristic values of those frames;
wherein the second characteristic value is computed from the frame's background motion amplitude and a second weight threshold;
the third characteristic value is computed from the frame's foreground motion amplitude and a third weight threshold;
the fourth characteristic value is computed from the absolute difference between the frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth characteristic value is computed from the difference between the frame's time point and the end point of the specified time range, and a fifth weight threshold;
and
computing, as the characteristic value of each video frame in the characteristic fragments, the weighted average of its first, second, third, fourth and fifth characteristic values.
Preferably, the first characteristic value may be computed by the following formula:
M1 = R1 / K1 × C1;
where R1 is the reciprocal of the frame's continuous silence length, K1 is the first characteristic threshold and C1 is the first weight threshold;
and the second characteristic value may be computed by:
M2 = R2 / K2 × C2;
where R2 is the frame's background motion amplitude, K2 is the second characteristic threshold and C2 is the second weight threshold;
and the third characteristic value may be computed by:
M3 = R3 / K3 × C3;
where R3 is the frame's foreground motion amplitude, K3 is the third characteristic threshold and C3 is the third weight threshold;
and the fourth characteristic value may be computed by:
M4 = R4 / K4 × C4;
where R4 is the absolute value of the difference between the current frame and the first frame of the video to be joined, K4 is the fourth characteristic threshold and C4 is the fourth weight threshold;
and the fifth characteristic value may be computed by:
M5 = R5 / K5 × C5;
where R5 is the absolute value of the difference between the current frame's time point and the end point of the specified time range, C5 is the fifth weight threshold, and K5 = T2 − T1, (T1, T2) being the time range of the video segment with T2 > T1;
and the characteristic value of a video frame may be computed by:
SUM = (M1 + M2 + M3 + M4 + M5) / (C1 + C2 + C3 + C4 + C5).
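Taken together, the formulas above define a normalized weighted score per frame, with lower scores marking better cut points. A minimal sketch under that reading (the function name and example numbers are illustrative, not from the patent):

```python
def characteristic_value(R, K, C):
    """Combined frame score: SUM = (M1 + ... + M5) / (C1 + ... + C5),
    where each Mi = Ri / Ki * Ci."""
    M = [r / k * c for r, k, c in zip(R, K, C)]
    return sum(M) / sum(C)

# Example with illustrative raw measurements, unit thresholds, and the
# example weights suggested later in the text (8, 5, 4, 2, 4):
score = characteristic_value(R=[0.5, 0.1, 0.2, 0.3, 0.4],
                             K=[1.0, 1.0, 1.0, 1.0, 1.0],
                             C=[8.0, 5.0, 4.0, 2.0, 4.0])
```

Because the sum of the Mi is divided by the sum of the weights, the score is a weighted average of the per-feature ratios Ri / Ki.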
Preferably, the step of extracting target video frames may comprise:
selecting, as candidate video frames, the frames whose characteristic value is less than a preset threshold;
determining the target video frames among the candidate video frames.
Preferably, the step of extracting target video frames may comprise:
sorting the characteristic fragments in ascending order of the minimum characteristic value of their video frames, and determining the top N fragments as preferred characteristic fragments, N being a preset number threshold;
extracting from each preferred characteristic fragment the frame with the minimum characteristic value as a target video frame.
This application also discloses a video clipping device, which may specifically comprise:
a characteristic value computation module, configured to compute, within a video segment in a specified time range, a characteristic value for each video frame according to preset features;
a target video frame extraction module, configured to extract target video frames, a target video frame being a frame whose characteristic value meets a preset condition;
a video splicing module, configured to splice the video at the time point corresponding to the target video frame.
Preferably, the characteristic value computation module may comprise:
a feature acquisition sub-module, configured to compute, for each video frame in the video segment of the specified time range, a first, a second, a third, a fourth and a fifth characteristic value respectively;
wherein the first characteristic value is computed from the reciprocal of the frame's continuous silence length and a first weight threshold;
the second characteristic value is computed from the frame's background motion amplitude and a second weight threshold;
the third characteristic value is computed from the frame's foreground motion amplitude and a third weight threshold;
the fourth characteristic value is computed from the absolute difference between the frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth characteristic value is computed from the difference between the frame's time point and the end point of the specified time range, and a fifth weight threshold;
and
a feature integration sub-module, configured to compute, as the characteristic value of each video frame in the video segment, the weighted average of its first, second, third, fourth and fifth characteristic values.
Preferably, the characteristic value computation module may comprise:
a first characteristic value computation sub-module, configured to compute a first characteristic value for each video frame in the video segment of the specified time range, the first characteristic value being computed from the reciprocal of the frame's continuous silence length and a first weight threshold;
a characteristic fragment extraction sub-module, configured to extract one or more characteristic fragments according to the first characteristic values;
a traversal sub-module, configured to traverse the video frames in the characteristic fragments from back to front and compute the second, third, fourth and fifth characteristic values of those frames;
wherein the second characteristic value is computed from the frame's background motion amplitude and a second weight threshold;
the third characteristic value is computed from the frame's foreground motion amplitude and a third weight threshold;
the fourth characteristic value is computed from the absolute difference between the frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth characteristic value is computed from the difference between the frame's time point and the end point of the specified time range, and a fifth weight threshold;
and
a characteristic fragment integration sub-module, configured to compute, as the characteristic value of each video frame in the characteristic fragments, the weighted average of its first, second, third, fourth and fifth characteristic values.
Preferably, the target video frame extraction module may comprise:
a candidate video frame selection sub-module, configured to select, as candidate video frames, the frames whose characteristic value is less than a preset threshold;
a target video frame determination sub-module, configured to determine the target video frames among the candidate video frames.
Preferably, the target video frame extraction module may comprise:
a characteristic fragment selection sub-module, configured to sort the characteristic fragments in ascending order of the minimum characteristic value of their video frames and determine the top N fragments as preferred characteristic fragments, N being a preset number threshold;
a target video frame selection sub-module, configured to extract from each preferred characteristic fragment the frame with the minimum characteristic value as a target video frame.
Compared with the prior art, this application has the following advantages:
By exploiting the common characteristics of the time points at which a video goes wrong, the embodiments of this application automatically extract the optimal clip time point within a specified range of the timeline by algorithm and automatically join the video content at that time point, thereby avoiding complicated manual operation and ensuring the validity of the selected time point.
Furthermore, the embodiments may first search the video segment in the specified time range for characteristic fragments whose audio tends toward silence, and then traverse those fragments, and the video frames within them, from back to front in time, applying further algorithmic analysis and inspection to the image characteristics of those frames. This approach not only saves system resources and improves efficiency, but also further ensures the validity of the selected time point.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of embodiment 1 of a video clipping method of this application;
Fig. 2 is a flow chart of the steps of embodiment 2 of a video clipping method of this application;
Fig. 3 is a structural block diagram of an embodiment of a video clipping device of this application.
Detailed description
To make the above objects, features and advantages of this application clearer and easier to understand, the application is described in further detail below with reference to the drawings and specific embodiments.
Through repeated statistical analysis, the inventors found that the time points to be chosen in the video clipping process share the following common characteristics:
1. within a period of time before the candidate time point, the audio tends toward silence;
2. within a period of time before the candidate time point, the background of the video is static;
3. within a period of time before the candidate time point, the foreground motion amplitude of the video is small;
4. the frame matches the first frame image of the video to be joined as closely as possible;
5. the candidate time point is as close as possible to the end of the current search range.
It should be noted that the above characteristics are given only as examples and are not exhaustive; there may well be further common characteristics in practice, and this application is not limited in this respect.
One of the core ideas of the embodiments of this application is to use the above characteristics to automatically extract, by algorithm, the optimal clip time point within a specified range of the timeline, and to automatically join the video content at that time point.
Referring to Fig. 1, a flow chart of the steps of embodiment 1 of a video clipping method of this application is shown; the method may specifically comprise the following steps:
Step 11: within a video segment in a specified time range, compute a characteristic value for each video frame according to preset features.
In this embodiment, step 11 may specifically comprise the following sub-steps:
Sub-step 111: for each video frame in the video segment of the specified time range, compute a first, a second, a third, a fourth and a fifth characteristic value respectively;
wherein the first characteristic value is computed from the reciprocal of the frame's continuous silence length and a first weight threshold;
the second characteristic value is computed from the frame's background motion amplitude and a second weight threshold;
the third characteristic value is computed from the frame's foreground motion amplitude and a third weight threshold;
the fourth characteristic value is computed from the absolute difference between the frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth characteristic value is computed from the difference between the frame's time point and the end point of the specified time range, and a fifth weight threshold;
and
Sub-step 112: compute, as the characteristic value of each video frame in the video segment, the weighted average of its first, second, third, fourth and fifth characteristic values.
In a preferred embodiment of this application, the first characteristic value may be computed by the following formula:
M1 = R1 / K1 × C1;
where R1 is the reciprocal of the frame's continuous silence length, K1 is the first characteristic threshold and C1 is the first weight threshold.
Preferably, the second characteristic value may be computed by:
M2 = R2 / K2 × C2;
where R2 is the frame's background motion amplitude, K2 is the second characteristic threshold and C2 is the second weight threshold.
Preferably, the third characteristic value may be computed by:
M3 = R3 / K3 × C3;
where R3 is the frame's foreground motion amplitude, K3 is the third characteristic threshold and C3 is the third weight threshold.
Preferably, the fourth characteristic value may be computed by:
M4 = R4 / K4 × C4;
where R4 is the absolute value of the difference between the current frame and the first frame of the video to be joined, K4 is the fourth characteristic threshold and C4 is the fourth weight threshold.
Preferably, the fifth characteristic value may be computed by:
M5 = R5 / K5 × C5;
where R5 is the absolute value of the difference between the current frame's time point and the end point of the specified time range, C5 is the fifth weight threshold, and K5 = T2 − T1, (T1, T2) being the time range of the video segment with T2 > T1.
Preferably, the characteristic value of a video frame may be computed by:
SUM = (M1 + M2 + M3 + M4 + M5) / (C1 + C2 + C3 + C4 + C5).
Specifically, as the inventors found, the time points chosen in the video clipping process share the following common characteristics:
1. within a period of time before the candidate time point, the audio tends toward silence;
2. within a period of time before the candidate time point, the background of the video is static;
3. within a period of time before the candidate time point, the foreground motion amplitude of the video is small;
4. the frame matches the first frame image of the video to be joined as closely as possible;
5. the candidate time point is as close as possible to the end of the current search range.
With this embodiment, processing in practice may proceed as follows:
1) Within the video segment of the specified time range, compute for each video frame the reciprocal of its continuous silence length and multiply it by the weight of feature 1 (the first weight threshold) to obtain the first characteristic value. As an example of a concrete application of this embodiment, the first weight threshold may take the value 8 and vary within a range of ±3.
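As a sketch of this step, suppose a per-frame silence flag has already been obtained from the audio track (e.g. by thresholding frame energy; that preprocessing is assumed, not specified here). The first characteristic value could then be computed as:

```python
def first_characteristic(is_silent, k1=1.0, c1=8.0):
    """M1 = (1 / continuous-silence-length) / K1 * C1 per frame.
    The silence run counts consecutive silent frames ending at the current
    frame; non-silent frames get an infinite score so they are never picked
    as cut points. C1 = 8 follows the example value in the text; K1 = 1 is
    an assumed placeholder."""
    values, run = [], 0
    for silent in is_silent:
        run = run + 1 if silent else 0
        values.append((1.0 / run) / k1 * c1 if run else float("inf"))
    return values
```

Note that the longer the silence leading up to a frame, the smaller its M1, which matches feature 1.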
2) Traverse the video frames in the video segment and separate the foreground and background images of each frame with a foreground/background segmentation algorithm (not limited to any particular algorithm, as long as it can be used to judge the background motion amplitude). After extracting the background image, multiply the background motion amplitude by the weight of feature 2 (the second weight threshold) to obtain the second characteristic value.
It should be noted that if the second weight threshold is negative infinity, this is equivalent to excluding every part with a non-static background (possibly a whole fragment, or part of an original fragment), i.e. excluding every part that does not satisfy feature 2.
As an example of a concrete application of this embodiment, the second weight threshold may take the value 5 and vary within a range of ±3.
3) Traverse the video frames in the video segment and judge the foreground motion amplitude of each frame with an algorithm capable of judging motion trends and amplitudes from images (such as a Gaussian mixture model); multiply this foreground motion amplitude by the weight of feature 3 (the third weight threshold) to obtain the third characteristic value. As an example of a concrete application of this embodiment, the third weight threshold may take the value 4 and vary within a range of ±3.
4) Traverse the video frames in the video segment and judge, by a difference method, the matching degree between each frame (the frame currently being processed) and the first frame of the video to be joined; multiply this matching degree by the weight of feature 4 (the fourth weight threshold) to obtain the fourth characteristic value. As an example of a concrete application of this embodiment, the fourth weight threshold may take the value 2 and vary within a range of ±3.
5) Traverse the video frames in the video segment and compute each frame's distance (in time) from the end of the search range; multiply this distance by the weight of feature 5 (the fifth weight threshold) to obtain the fifth characteristic value. As an example of a concrete application of this embodiment, the fifth weight threshold may take the value 4 and vary within a range of ±3.
6) Traverse the video frames in the video segment and compute, as the characteristic value of each frame, the sum of its first, second, third, fourth and fifth characteristic values.
Step 12: extract target video frames, a target video frame being a frame whose characteristic value meets a preset condition.
In this embodiment, step 12 may specifically comprise the following sub-steps:
Sub-step 121: select, as candidate video frames, the frames whose characteristic value is less than a preset threshold;
Sub-step 122: determine the target video frames among the candidate video frames.
Suppose the traversal yields a characteristic-value score for each frame, R[1..n]. To ease the final manual screening, an optimal time point and several sub-optimal time points (i.e. the time points corresponding to the target video frames) should be provided. With simple sorting alone, the optimal and sub-optimal time points would usually be concentrated within a short period of time, which is useless for the next screening step. For this reason, the embodiments of this application provide the following two methods for determining target video frames among the candidate frames:
Method one:
Set a threshold k = 5, in seconds.
The time point of the first target video frame is W1 = min{R[1], …, R[n]}. Centered on W1, delete the candidate video frames within ±k seconds; then search the remaining candidate frames for the time point of the next target video frame, W2 = min{R[i] | R[i] not within W1 ± k}, and so on.
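Method one amounts to iterative minimum selection with a suppression window; a sketch (the function and argument names are illustrative):

```python
def pick_time_points(scores, times, k=5.0, count=3):
    """Repeatedly take the frame with the lowest characteristic value,
    then discard all remaining candidates within +/- k seconds of it, so
    the optimal and sub-optimal points are spread out along the timeline."""
    remaining = list(range(len(scores)))
    picks = []
    while remaining and len(picks) < count:
        best = min(remaining, key=lambda i: scores[i])
        picks.append(best)
        remaining = [i for i in remaining if abs(times[i] - times[best]) > k]
    return picks
```

Each returned index plays the role of a W1, W2, … in the text.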
Method two:
Set a threshold k = 5, in frames.
Sort the video frames in ascending order of characteristic value to obtain R′[1..n], whose values are subscripts of R; that is, R′ is the index array of R after sorting.
Take the first target video frame W1 = R[R′[1]]. If |R′[2] − R′[1]| < k, skip that frame and test |R′[3] − R′[1]| < k, and so on, until some x satisfies |R′[x] − R′[1]| ≥ k; then take the second target video frame W2 = R[R′[x]].
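Method two can be sketched as follows, generalized slightly so that each accepted pick must lie at least k frames from every earlier pick (the text spells out only the distance check against W1):

```python
def pick_by_frame_distance(scores, k=5, count=2):
    """Sort frame indices by ascending characteristic value (this ordering
    is the R' array of the text), then walk it, accepting an index only
    when it is at least k frames away from all previously accepted ones."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # R'
    picks = []
    for idx in order:
        if all(abs(idx - p) >= k for p in picks):
            picks.append(idx)
        if len(picks) == count:
            break
    return picks
```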
Of course, the above methods are given only as examples; those skilled in the art may adopt any method of determining target video frames according to the actual situation, and this application need not be limited in this respect.
Step 13: splice the video at the time point corresponding to the target video frame.
In a specific implementation, the screening of the optimal time point and the video splicing can be performed in this step. The target video frames may first be screened further manually, after which the video is spliced at the time point (the optimal time point) of the selected target frame. The characteristic value of the frame corresponding to the optimal time point can serve as a reference condition for judging the joining quality at that point; where the joining quality is poor (e.g. a high characteristic value, which may show up as poorly matched pictures or a short audio gap), a special video effect such as a flip or shutter transition can be inserted to smooth the join.
Referring to Fig. 2, a flow chart of the steps of embodiment 2 of a video clipping method of this application is shown; the method may specifically comprise the following steps:
Step 21: within a video segment in a specified time range, compute a characteristic value for each video frame according to preset features.
In this embodiment, step 21 may specifically comprise the following sub-steps:
Sub-step 211: compute a first characteristic value for each video frame in the video segment of the specified time range, the first characteristic value being computed from the reciprocal of the frame's continuous silence length and a first weight threshold;
Sub-step 212: extract one or more characteristic fragments according to the first characteristic values;
Sub-step 213: traverse the video frames in the characteristic fragments from back to front and compute the second, third, fourth and fifth characteristic values of those frames;
wherein the second characteristic value is computed from the frame's background motion amplitude and a second weight threshold;
the third characteristic value is computed from the frame's foreground motion amplitude and a third weight threshold;
the fourth characteristic value is computed from the absolute difference between the frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth characteristic value is computed from the difference between the frame's time point and the end point of the specified time range, and a fifth weight threshold;
and
Sub-step 214: compute, as the characteristic value of each video frame in the characteristic fragments, the weighted average of its first, second, third, fourth and fifth characteristic values.
Specifically, as the inventors found, the time points chosen in the video clipping process share the following common characteristics:
1. within a period of time before the candidate time point, the audio tends toward silence;
2. within a period of time before the candidate time point, the background of the video is static;
3. within a period of time before the candidate time point, the foreground motion amplitude of the video is small;
4. the frame matches the first frame image of the video to be joined as closely as possible;
5. the candidate time point is as close as possible to the end of the current search range.
With this embodiment, processing in practice may proceed as follows:
1) Input the parameters (or run directly with default values), including:
the specified time range T1, T2 of the video segment (T2 > T1);
the characteristic thresholds K1, K2, K3, K4, K5, where K5 = T2 − T1;
the feature weights C1, C2, C3, C4, C5.
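The inputs of this step might be gathered as below; T1, T2 and the K1–K4 values are illustrative placeholders (the text gives no defaults for them), while the C weights use the example values given later:

```python
# Illustrative parameter set for the pipeline of embodiment 2.
T1, T2 = 0.0, 60.0                  # specified time range of the segment, T2 > T1
K = [1.0, 1.0, 1.0, 1.0, T2 - T1]   # characteristic thresholds; K5 = T2 - T1
C = [8.0, 5.0, 4.0, 2.0, 4.0]       # feature weights C1..C5 (each may vary by about +/-3)
```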
2) audio frequency of each frame of video in the video segment of fixed time scope is processed, find out all fragments (characteristic fragment) meeting feature 1, be specially and calculate continuous quiet length inverse for each frame of video, be multiplied by the weights (the first weight threshold) of feature 1, obtained the First Eigenvalue;
In a preferred embodiment of this application, the first characteristic value may be computed by the following formula:
M1 = R1 / K1 × C1;
where R1 is the reciprocal of the frame's continuous silence length, K1 is the first characteristic threshold and C1 is the first weight threshold.
As an example of a concrete application of this embodiment, the first weight threshold may take the value 8 and vary within a range of ±3.
3) Traverse the video frames in the characteristic fragments and separate the foreground and background images of each frame with a foreground/background segmentation algorithm (not limited to any particular algorithm, as long as it can be used to judge the background motion amplitude). After extracting the background image, multiply the background motion amplitude by the weight of feature 2 (the second weight threshold) to obtain the second characteristic value.
Preferably, the second characteristic value may be computed by:
M2 = R2 / K2 × C2;
where R2 is the frame's background motion amplitude, K2 is the second characteristic threshold and C2 is the second weight threshold.
It should be noted that if the second weight threshold is negative infinity, this is equivalent to excluding every part with a non-static background (possibly a whole fragment, or part of an original fragment), i.e. excluding every part that does not satisfy feature 2.
As an example of a concrete application of this embodiment, the second weight threshold may take the value 5 and vary within a range of ±3.
4) Traverse the video frames in the characteristic fragments and judge the foreground motion amplitude of each frame with an algorithm capable of judging motion trends and amplitudes from images (such as a Gaussian mixture model); multiply this foreground motion amplitude by the weight of feature 3 (the third weight threshold) to obtain the third characteristic value.
Preferably, the third characteristic value may be computed by:
M3 = R3 / K3 × C3;
where R3 is the frame's foreground motion amplitude, K3 is the third characteristic threshold and C3 is the third weight threshold.
As an example of a concrete application of this embodiment, the third weight threshold may take the value 4 and vary within a range of ±3.
5) Traverse the video frames in the characteristic fragments and judge, by a difference method, the matching degree between each frame (the frame currently being processed) and the first frame of the video to be joined; multiply this matching degree by the weight of feature 4 (the fourth weight threshold) to obtain the fourth characteristic value.
Preferably, the fourth characteristic value may be computed by:
M4 = R4 / K4 × C4;
where R4 is the absolute value of the difference between the current frame and the first frame of the video to be joined, K4 is the fourth characteristic threshold and C4 is the fourth weight threshold.
As an example of a concrete application of this embodiment, the fourth weight threshold may take the value 2 and vary within a range of ±3.
6) Traverse the video frames in the characteristic fragment, and calculate the distance in time between each frame and the end of the search range; multiply this distance by the weight of feature 5 (the fifth weight threshold) to obtain the fifth feature value.
Preferably, the fifth feature value can be calculated by the following formula:
M5=R5/K5*C5;
where R5 is the absolute value of the difference between the time point of the current video frame and the end point of the specified time range, and C5 is the fifth weight threshold;
K5=T2-T1, where (T1, T2) is the time range of the video fragment and T2 > T1.
As an example of a specific application of this embodiment, the fifth weight threshold may take the value 4, adjustable within a range of ±3.
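The feature-5 term follows directly from the definitions above (R5 = |t - T2|, K5 = T2 - T1); only the function name and argument layout are assumptions here:

```python
def fifth_feature(t, t1, t2, c5=4.0):
    """M5 = R5 / K5 * C5, with R5 = |t - T2| the time distance from the
    frame at time t to the end of the search range (T1, T2), K5 = T2 - T1,
    and c5 the fifth weight threshold (suggested value 4, adjustable +/-3)."""
    r5 = abs(t - t2)
    k5 = t2 - t1
    return r5 / k5 * c5
```

A frame halfway through a range (0, 10) scores 5/10 * 4 = 2.0, and a frame at the very end scores 0, so frames nearer the end of the range are favored.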
7) Traverse the video frames in the characteristic fragment, and combine the first, second, third, fourth and fifth feature values of each frame into the characteristic value of that frame.
Preferably, the characteristic value of a video frame can be calculated by the following formula:
SUM=(M1+M2+M3+M4+M5)/(C1+C2+C3+C4+C5).
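The combination step above is a plain weighted average of the five terms, which can be sketched as (function name assumed):

```python
def frame_score(m, c):
    """SUM = (M1+M2+M3+M4+M5) / (C1+C2+C3+C4+C5): the per-frame feature
    values m, already weighted by the thresholds c, are pooled into a
    single characteristic value. A lower score marks a better splice
    point, since frames are later selected by minimum characteristic value."""
    return sum(m) / sum(c)
```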
Step 22, extract target video frames, where a target video frame is a video frame whose characteristic value meets a preset condition.
In this embodiment, step 22 may specifically comprise the following sub-steps:
Sub-step 221, sort the characteristic fragments in ascending order by the minimum characteristic value of their video frames, and determine the top N characteristic fragments as preferred characteristic fragments, where N is a preset number threshold;
Sub-step 222, in each preferred characteristic fragment, extract the video frame with the minimum characteristic value as a target video frame.
For example, the minimum SUM of each characteristic fragment can be found and used as that fragment's score; the fragments are then sorted by score, and the video frames corresponding to the minimum characteristic values in the N lowest-scoring fragments are taken as the target video frames.
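Sub-steps 221 and 222 can be sketched as follows, assuming each characteristic fragment is represented as a list of per-frame SUM scores (the function name and this data layout are ours, not the disclosure's):

```python
def pick_target_frames(fragments, n):
    """Rank fragments in ascending order of their minimum frame score,
    keep the top n as preferred fragments, and return, for each, the
    (fragment_index, frame_index) of its lowest-scoring frame."""
    ranked = sorted(range(len(fragments)), key=lambda i: min(fragments[i]))
    targets = []
    for i in ranked[:n]:
        scores = fragments[i]
        best = min(range(len(scores)), key=scores.__getitem__)
        targets.append((i, best))
    return targets
```

For three fragments scored [3, 1, 2], [0.5, 4] and [2, 2] with n = 2, the preferred fragments are the second (minimum 0.5) and the first (minimum 1), yielding target frames (1, 0) and (0, 1).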
Of course, the above method is only an example; any method by which those skilled in the art extract target video frames according to actual conditions is feasible, and the application need not be limited in this respect.
Step 13, perform video splicing at the time point corresponding to the target video frame.
In a specific implementation, both the screening of the optimal time point and the video splicing can be handled in this step. Specifically, the target video frames can first be further screened manually, and video splicing is then performed at the time point of the selected target video frame (the optimal time point). The characteristic value of the video frame at the optimal time point can serve as a reference condition for judging the linking quality at that time point; if the linking quality is low (e.g. the characteristic value is high, which may show up as poorly matched video pictures or a short audio interval), a special video effect such as a flip or shutter transition can be inserted to smooth the cut.
It should be noted that, for simplicity of description, the above method embodiment is expressed as a series of combined actions. However, those skilled in the art should understand that the application is not limited by the described order of actions, since according to the application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the application.
With reference to Figure 3, a structural block diagram of an embodiment of a video clipping device of the application is shown, which may specifically comprise the following modules:
a characteristic value calculation module 301, configured to calculate characteristic values for video frames according to preset features within a video fragment of a specified time range;
a target video frame extraction module 302, configured to extract target video frames, where a target video frame is a video frame whose characteristic value meets a preset condition; and
a video splicing module 303, configured to perform video splicing at the time point corresponding to the target video frame.
In a preferred embodiment of the application, the characteristic value calculation module 301 may comprise the following submodules:
a feature acquisition submodule, configured to calculate first, second, third, fourth and fifth feature values respectively for each video frame in the video fragment of the specified time range;
wherein the first feature value is calculated from the reciprocal of the frame's continuous silence length and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range, and a fifth weight threshold;
and
a feature integration submodule, configured to calculate the weighted average of the first, second, third, fourth and fifth feature values of each video frame in the video fragment as the characteristic value of the current video frame.
In this embodiment, the target video frame extraction module 302 may comprise the following submodules:
a candidate video frame selection submodule, configured to select video frames whose characteristic value is less than a preset threshold as candidate video frames; and
a target video frame determination submodule, configured to determine target video frames among the candidate video frames.
In another preferred embodiment of the application, the characteristic value calculation module 301 may comprise the following submodules:
a first feature value calculation submodule, configured to calculate a first feature value for each video frame in the video fragment of the specified time range, the first feature value being calculated from the reciprocal of the frame's continuous silence length and a first weight threshold;
a characteristic fragment extraction submodule, configured to extract one or more characteristic fragments according to the first feature value;
a traversal submodule, configured to traverse the video frames in the characteristic fragments from back to front, and to calculate second, third, fourth and fifth feature values for the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range, and a fifth weight threshold;
and
a characteristic fragment integration submodule, configured to calculate the weighted average of the first, second, third, fourth and fifth feature values of each video frame in the characteristic fragments as the characteristic value of the current video frame.
In this embodiment, the target video frame extraction module 302 may comprise the following submodules:
a characteristic fragment selection submodule, configured to sort the characteristic fragments in ascending order by the minimum characteristic value of their video frames, and to determine the top N characteristic fragments as preferred characteristic fragments, where N is a preset number threshold; and
a target video frame selection submodule, configured to extract, in each preferred characteristic fragment, the video frame with the minimum characteristic value as a target video frame.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple; for relevant details, refer to the corresponding parts of the method embodiments.
The application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The application can be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communications network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including memory storage devices.
Above the method for a kind of video clipping that the application provides and a kind of device of video clipping are described in detail, apply specific case herein to set forth the principle of the application and execution mode, the explanation of above embodiment is just for helping method and the core concept thereof of understanding the application; Meanwhile, for one of ordinary skill in the art, according to the thought of the application, all will change in specific embodiments and applications, in sum, this description should not be construed as the restriction to the application.
Claims (7)
1. a method for video clipping, is characterized in that, comprising:
In the video segment of at the appointed time scope, for frame of video by presetting feature calculation characteristic value;
Extract target video frame, described target video frame is that characteristic value meets pre-conditioned frame of video;
At the time point place that described target video frame is corresponding, the characteristic value with reference to described target video frame carries out video-splicing;
Wherein, in the video segment of described at the appointed time scope, comprise for the step of frame of video by default feature calculation characteristic value:
Calculate the First Eigenvalue for each frame of video in the video segment of fixed time scope, described the First Eigenvalue is calculate according to Reciprocals sums first weight threshold of the continuous quiet length of frame of video to obtain;
Characteristic fragment is extracted according to described the First Eigenvalue; Described characteristic fragment is one or more;
Travel through the frame of video in described characteristic fragment from back to front, calculate the Second Eigenvalue of described frame of video, third feature value, fourth feature value and fifth feature value;
Wherein, described Second Eigenvalue is calculate according to the background motion amplitude of frame of video and the second weight threshold to obtain;
Described third feature value is calculate according to the foreground moving amplitude of frame of video and the 3rd weight threshold to obtain;
Described fourth feature value is calculate according to the absolute difference of frame of video and video first frame to be connected and the 4th weight threshold to obtain;
Described fifth feature value is calculate according to the time point of frame of video and the difference of fixed time end of extent (EOE) point and the 5th weight threshold to obtain;
And,
The weighted average calculating the First Eigenvalue of each frame of video in described characteristic fragment, Second Eigenvalue, third feature value, fourth feature value and fifth feature value is the characteristic value of current video frame.
2. the method for claim 1, is characterized in that, by following formulae discovery the First Eigenvalue:
M1=R1/K1*C1′;
Wherein, R1 is the inverse of the continuous quiet length of frame of video, the first weight threshold
And, by following formulae discovery Second Eigenvalue:
M2=R2/K2*C2′;
Wherein, R2 is the background motion amplitude of frame of video, the second weight threshold
And, by following formulae discovery third feature value:
M3=R3/K3*C3′;
Wherein, R3 is the foreground moving amplitude of frame of video, the 3rd weight threshold
And, by following formulae discovery fourth feature value:
M4=R4/K4*C4′;
Wherein, R4 is current video frame and the absolute value of the difference of video first frame to be connected, the 4th weight threshold
And, by following formulae discovery fifth feature value:
M5=R5/K5*C5′;
Wherein, R5 is the time point of current video frame and the absolute value of fixed time end of extent (EOE) point difference, the 5th weight threshold
K5=T2-T1, the time range that (T1, T2) is video segment, T2 > T1;
And, the characteristic value by following formulae discovery frame of video:
SUM=(M1+M2+M3+M4+M5)/(C1+C2+C3+C4+C5);
K1, K2, K3, K4 are each characteristic threshold value.
3. the method for claim 1, is characterized in that, the step of described extraction target video frame comprises:
The frame of video that selected characteristic value is less than predetermined threshold value is candidate video frame;
Target video frame is determined in described candidate video frame.
4. the method for claim 1, is characterized in that, the step of described extraction target video frame comprises:
Each characteristic fragment is carried out ascending order arrangement by the minimal eigenvalue of its frame of video, and determine that the characteristic fragment of top N is preferred feature fragment, described N is predetermined number threshold value;
In preferred feature fragment, extract the minimum frame of video of characteristic value is respectively target video frame.
5. A video clipping device, characterized by comprising:
a characteristic value calculation module, configured to calculate characteristic values for video frames according to preset features within a video fragment of a specified time range;
a target video frame extraction module, configured to extract target video frames, wherein a target video frame is a video frame whose characteristic value meets a preset condition; and
a video splicing module, configured to perform video splicing at the time point corresponding to the target video frame, with reference to the characteristic value of the target video frame;
wherein the characteristic value calculation module comprises:
a first feature value calculation submodule, configured to calculate a first feature value for each video frame in the video fragment of the specified time range, the first feature value being calculated from the reciprocal of the frame's continuous silence length and a first weight threshold;
a characteristic fragment extraction submodule, configured to extract one or more characteristic fragments according to the first feature value;
a traversal submodule, configured to traverse the video frames in the characteristic fragments from back to front, and to calculate second, third, fourth and fifth feature values for the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined, and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range, and a fifth weight threshold;
and
a characteristic fragment integration submodule, configured to calculate the weighted average of the first, second, third, fourth and fifth feature values of each video frame in the characteristic fragments as the characteristic value of the current video frame.
6. The device of claim 5, characterized in that the target video frame extraction module comprises:
a candidate video frame selection submodule, configured to select video frames whose characteristic value is less than a preset threshold as candidate video frames; and
a target video frame determination submodule, configured to determine target video frames among the candidate video frames.
7. The device of claim 5, characterized in that the target video frame extraction module comprises:
a characteristic fragment selection submodule, configured to sort the characteristic fragments in ascending order by the minimum characteristic value of their video frames, and to determine the top N characteristic fragments as preferred characteristic fragments, where N is a preset number threshold; and
a target video frame selection submodule, configured to extract, in each preferred characteristic fragment, the video frame with the minimum characteristic value as a target video frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210036660.6A CN102543136B (en) | 2012-02-17 | 2012-02-17 | Method and device for clipping video |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210036660.6A CN102543136B (en) | 2012-02-17 | 2012-02-17 | Method and device for clipping video |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102543136A CN102543136A (en) | 2012-07-04 |
CN102543136B true CN102543136B (en) | 2015-05-20 |
Family
ID=46349855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210036660.6A Active CN102543136B (en) | 2012-02-17 | 2012-02-17 | Method and device for clipping video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102543136B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103139663B (en) * | 2013-01-25 | 2016-05-11 | 深圳先进技术研究院 | The automatic detachment device of video and the method automatically splitting thereof |
CN105072354A (en) * | 2015-07-17 | 2015-11-18 | Tcl集团股份有限公司 | Method and system of synthesizing video stream by utilizing a plurality of photographs |
CN106375872A (en) * | 2015-07-24 | 2017-02-01 | 三亚中兴软件有限责任公司 | Method and device for video editing |
CN106559631A (en) * | 2015-09-30 | 2017-04-05 | 小米科技有限责任公司 | Method for processing video frequency and device |
CN108268486B (en) * | 2016-12-30 | 2022-04-19 | 中兴通讯股份有限公司 | Multimedia content association and playing method and device, and terminal |
CN107529091B (en) * | 2017-09-08 | 2020-08-04 | 广州华多网络科技有限公司 | Video editing method and device |
CN108391063B (en) * | 2018-02-11 | 2021-02-02 | 北京优聚视微传媒科技有限公司 | Video editing method and device |
CN109451360B (en) * | 2018-11-02 | 2021-03-05 | 北京亿幕信息技术有限公司 | Video transition special effect method and engine |
CN113079420A (en) * | 2020-01-03 | 2021-07-06 | 北京三星通信技术研究有限公司 | Video generation method and device, electronic equipment and computer readable storage medium |
CN113923390A (en) * | 2021-09-30 | 2022-01-11 | 北京字节跳动网络技术有限公司 | Video recording method, device, equipment and storage medium |
CN113949828B (en) * | 2021-10-18 | 2024-04-30 | 北京达佳互联信息技术有限公司 | Video editing method, device, electronic equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101304490A (en) * | 2008-06-20 | 2008-11-12 | 北京六维世纪网络技术有限公司 | Method and device for jointing video |
CN101308501A (en) * | 2008-06-30 | 2008-11-19 | 腾讯科技(深圳)有限公司 | Method, system and device for generating video frequency abstract |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101304490A (en) * | 2008-06-20 | 2008-11-12 | 北京六维世纪网络技术有限公司 | Method and device for jointing video |
CN101308501A (en) * | 2008-06-30 | 2008-11-19 | 腾讯科技(深圳)有限公司 | Method, system and device for generating video frequency abstract |
Also Published As
Publication number | Publication date |
---|---|
CN102543136A (en) | 2012-07-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102543136B (en) | Method and device for clipping video | |
US11507677B2 (en) | Image classification modeling while maintaining data privacy compliance | |
CN110008378B (en) | Corpus collection method, device, equipment and storage medium based on artificial intelligence | |
CN112070416B (en) | AI-based RPA flow generation method, apparatus, device and medium | |
CN111709966B (en) | Fundus image segmentation model training method and device | |
CN103870597A (en) | Method and device for searching for watermark-free picture | |
CN112926621B (en) | Data labeling method, device, electronic equipment and storage medium | |
CN104918060A (en) | Method and device for selecting position to insert point in video advertisement | |
CN110019849B (en) | Attention mechanism-based video attention moment retrieval method and device | |
CN103077254A (en) | Webpage acquiring method and device | |
CN105183856A (en) | Method and device for evaluating information content quality | |
US11062229B1 (en) | Training latent variable machine learning models using multi-sample objectives | |
CN105790967B (en) | Network log processing method and device | |
US20160188680A1 (en) | Electronic device and information searching method for the electronic device | |
CN109064464B (en) | Method and device for detecting burrs of battery pole piece | |
CN112364185B (en) | Method and device for determining characteristics of multimedia resources, electronic equipment and storage medium | |
CN109376362A (en) | A kind of the determination method and relevant device of corrected text | |
CN114037716A (en) | Image segmentation method, device, equipment and storage medium | |
CN104580109A (en) | Method and device for generating click verification code | |
US20150142709A1 (en) | Automatic learning of bayesian networks | |
US10430739B2 (en) | Automatic solution to a scheduling problem | |
US9514256B1 (en) | Method and system for modelling turbulent flows in an advection-diffusion process | |
CN112084954A (en) | Video target detection method and device, electronic equipment and storage medium | |
US20160179960A1 (en) | Optimizing web crawling through web page pruning | |
Meng et al. | Towards a partial differential equation remote sensing image method based on adaptive degradation diffusion parameter |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: Method and device for clipping video Effective date of registration: 20191014 Granted publication date: 20150520 Pledgee: China Construction Bank Corp Guangzhou economic and Technological Development Zone sub branch Pledgor: Guangzhou Ncast Electronic Science & Technology Co., Ltd. Registration number: Y2019440000121 |