CN102543136A - Method and device for clipping video - Google Patents

Method and device for clipping video

Info

Publication number
CN102543136A
Authority
CN
China
Prior art keywords
video
frame
feature value
weight threshold
calculate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100366606A
Other languages
Chinese (zh)
Other versions
CN102543136B (en)
Inventor
刘志聪
叶荣华
张冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ncast Electronic Science & Technology Co Ltd
Original Assignee
Guangzhou Ncast Electronic Science & Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Ncast Electronic Science & Technology Co Ltd filed Critical Guangzhou Ncast Electronic Science & Technology Co Ltd
Priority to CN201210036660.6A priority Critical patent/CN102543136B/en
Publication of CN102543136A publication Critical patent/CN102543136A/en
Application granted granted Critical
Publication of CN102543136B publication Critical patent/CN102543136B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a method and a device for clipping a video. The method comprises the following steps: within a video segment in a specified time range, computing a feature value for each video frame according to preset features; extracting a target video frame, namely a video frame whose feature value meets preset conditions; and splicing the video at the time point corresponding to the target video frame. With the invention, the video can be clipped at an automatically selected optimal time point, tedious manual operation is avoided, and the validity of the selected time point is ensured.

Description

Method and device for video clipping
Technical field
The present application relates to the technical field of video processing, and in particular to a method and a device for video clipping.
Background technology
Video clipping involves shearing and splicing video at different time points. In the clipping process, the choice of a suitable time point is extremely important, yet at present this choice can only be made manually.
For example, when a mistake occurs while recording a quality course, a suitable time point must be found in the video for the edit. Done manually, this requires repeated slow playback, single-frame stepping, and similar operations to locate the optimal editing time point, consuming a great deal of time and energy. Moreover, because the work is tedious and repetitive, the manually chosen time point is often not the optimal one.
Therefore, a technical problem urgently to be solved by those skilled in the art is how to creatively propose a video clipping method and device that choose the optimal time point automatically to complete video clipping, avoid tedious manual operation, and ensure the validity of the selected time point.
Summary of the invention
The technical problem to be solved by the present application is to provide a video clipping method that chooses the optimal time point automatically to complete video clipping, avoids tedious manual operation, and ensures the validity of the selected time point.
The present application also provides a video clipping device to ensure the application and realization of the method in practice.
To solve the above problem, the present application discloses a video clipping method, which may specifically comprise:
within a video segment in a specified time range, computing a feature value for each video frame according to preset features;
extracting a target video frame, the target video frame being a video frame whose feature value meets preset conditions;
splicing the video at the time point corresponding to the target video frame.
Preferably, the step of computing a feature value for each video frame according to preset features within the video segment in the specified time range may comprise:
for each video frame in the video segment in the specified time range, respectively calculating a first feature value, a second feature value, a third feature value, a fourth feature value and a fifth feature value;
wherein the first feature value is calculated from the reciprocal of the continuous silence length of the video frame and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be spliced and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and,
calculating, as the feature value of the current video frame, the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the video segment.
Preferably, the step of computing a feature value for each video frame according to preset features within the video segment in the specified time range may comprise:
calculating a first feature value for each video frame in the video segment in the specified time range, the first feature value being calculated from the reciprocal of the continuous silence length of the video frame and a first weight threshold;
extracting one or more characteristic fragments according to the first feature value;
traversing the video frames in the characteristic fragments from back to front, and calculating the second, third, fourth and fifth feature values of the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be spliced and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and,
calculating, as the feature value of the current video frame, the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the characteristic fragments.
Preferably, the first feature value may be calculated by the following formula:
M1 = R1 / K1 × C1′;
wherein R1 is the reciprocal of the continuous silence length of the video frame, and the first weight threshold C1′ = C1 when R1 ≤ K1, and C1′ = +∞ when R1 > K1;
and the second feature value may be calculated by the following formula:
M2 = R2 / K2 × C2′;
wherein R2 is the background motion amplitude of the video frame, and the second weight threshold C2′ = C2 when R2 ≤ K2, and C2′ = +∞ when R2 > K2;
and the third feature value may be calculated by the following formula:
M3 = R3 / K3 × C3′;
wherein R3 is the foreground motion amplitude of the video frame, and the third weight threshold C3′ = C3 when R3 ≤ K3, and C3′ = +∞ when R3 > K3;
and the fourth feature value may be calculated by the following formula:
M4 = R4 / K4 × C4′;
wherein R4 is the absolute value of the difference between the current video frame and the first frame of the video to be spliced, and the fourth weight threshold C4′ = C4 when R4 ≤ K4, and C4′ = +∞ when R4 > K4;
and the fifth feature value may be calculated by the following formula:
M5 = R5 / K5 × C5′;
wherein R5 is the absolute value of the difference between the time point of the current video frame and the end point of the specified time range, and the fifth weight threshold C5′ = C5 when R5 ≤ K5, and C5′ = +∞ when R5 > K5; K5 = T2 − T1, where (T1, T2) is the time range of the video segment, T2 > T1;
and the feature value of the video frame may be calculated by the following formula:
SUM = (M1 + M2 + M3 + M4 + M5) / (C1 + C2 + C3 + C4 + C5).
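For illustration, the piecewise weights and the combined score above can be sketched in Python (a minimal sketch, not part of the patent; the function names are ours):

```python
import math

def effective_weight(r, k, c):
    # C' = C when R <= K; +inf when R > K, so such a frame can never be selected
    return c if r <= k else math.inf

def feature_term(r, k, c):
    # M = R / K * C', per the formulas above
    return r / k * effective_weight(r, k, c)

def frame_score(terms, weights):
    # SUM = (M1 + ... + M5) / (C1 + ... + C5); lower scores mark better clip points
    return sum(terms) / sum(weights)
```

A frame for which any raw measure R exceeds its threshold K receives an infinite score and therefore can never fall below the selection threshold.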
Preferably, the step of extracting the target video frame may comprise:
selecting video frames whose feature values are less than a predetermined threshold as candidate video frames;
determining the target video frame among the candidate video frames.
Preferably, the step of extracting the target video frame may also comprise:
sorting the characteristic fragments in ascending order by the minimal feature value of their video frames, and taking the first N fragments as preferred characteristic fragments, N being a predetermined number threshold;
extracting from each preferred characteristic fragment the video frame with the minimal feature value as a target video frame.
The present application also discloses a video clipping device, which may specifically comprise:
a feature value calculating module, configured to compute, within a video segment in a specified time range, a feature value for each video frame according to preset features;
a target video frame extraction module, configured to extract a target video frame, the target video frame being a video frame whose feature value meets preset conditions;
a video splicing module, configured to splice the video at the time point corresponding to the target video frame.
Preferably, the feature value calculating module may comprise:
a feature acquisition submodule, configured to respectively calculate, for each video frame in the video segment in the specified time range, a first feature value, a second feature value, a third feature value, a fourth feature value and a fifth feature value;
wherein the first feature value is calculated from the reciprocal of the continuous silence length of the video frame and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be spliced and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and,
a feature integration submodule, configured to calculate, as the feature value of the current video frame, the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the video segment.
Preferably, the feature value calculating module may comprise:
a first feature value calculating submodule, configured to calculate a first feature value for each video frame in the video segment in the specified time range, the first feature value being calculated from the reciprocal of the continuous silence length of the video frame and a first weight threshold;
a characteristic fragment extraction submodule, configured to extract one or more characteristic fragments according to the first feature value;
a traversal submodule, configured to traverse the video frames in the characteristic fragments from back to front and calculate the second, third, fourth and fifth feature values of the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be spliced and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and,
a characteristic fragment integration submodule, configured to calculate, as the feature value of the current video frame, the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the characteristic fragments.
Preferably, the target video frame extraction module may comprise:
a candidate video frame selection submodule, configured to select video frames whose feature values are less than a predetermined threshold as candidate video frames;
a target video frame determination submodule, configured to determine the target video frame among the candidate video frames.
Preferably, the target video frame extraction module may also comprise:
a characteristic fragment selection submodule, configured to sort the characteristic fragments in ascending order by the minimal feature value of their video frames and take the first N fragments as preferred characteristic fragments, N being a predetermined number threshold;
a target video frame selection submodule, configured to extract from each preferred characteristic fragment the video frame with the minimal feature value as a target video frame.
Compared with the prior art, the present application has the following advantages:
The embodiments of the present application exploit the common features of the time points at which mistakes occur in a video: the optimal video clipping time point within the specified range of the time axis is extracted automatically by an algorithm, and the video content is spliced at this time point automatically, thereby avoiding tedious manual operation and ensuring the validity of the selected time point.
Moreover, the embodiments of the present application may first search the video segment in the specified time range for characteristic fragments in which the audio tends toward silence, and then traverse these fragments and the video frames within them from back to front in time, applying algorithms to further analyze and evaluate the image features of these frames. This approach saves system resources, improves efficiency, and further ensures the validity of the selected time point.
Description of drawings
Fig. 1 is a flow chart of the steps of embodiment 1 of a video clipping method of the present application;
Fig. 2 is a flow chart of the steps of embodiment 2 of a video clipping method of the present application;
Fig. 3 is a structural block diagram of an embodiment of a video clipping device of the present application.
Detailed description of the embodiments
To make the above objects, features and advantages of the present application more apparent and understandable, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
Through repeated statistical analysis, the inventors have found that the time points to be chosen in the video clipping process share the following common features:
1. within a period of time before the time point to be chosen, the audio tends toward silence;
2. within a period of time before the time point to be chosen, the background of the video is fixed;
3. within a period of time before the time point to be chosen, the foreground motion amplitude of the video is small;
4. the frame matches the first frame of the video to be spliced as closely as possible;
5. the time point to be chosen is as close as possible to the end of the current search range.
It should be noted that the above features of the time points are only examples and are not exhaustive; in practice there may be more common features, and the present application does not limit this.
One of the core ideas of the embodiments of the present application is to exploit these features of the time points: the optimal video clipping time point is extracted automatically by an algorithm within the specified range of the time axis, and the video content is spliced at this time point automatically.
Referring to Fig. 1, a flow chart of the steps of embodiment 1 of a video clipping method of the present application is shown, which may specifically comprise the following steps:
Step 11: within a video segment in a specified time range, compute a feature value for each video frame according to preset features;
In this embodiment, step 11 may specifically comprise the following substeps:
Substep 111: for each video frame in the video segment in the specified time range, respectively calculate a first feature value, a second feature value, a third feature value, a fourth feature value and a fifth feature value;
wherein the first feature value is calculated from the reciprocal of the continuous silence length of the video frame and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be spliced and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and,
Substep 112: calculate, as the feature value of the current video frame, the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the video segment.
In a preferred embodiment of the present application, the first feature value may be calculated by the following formula:
M1 = R1 / K1 × C1′;
wherein R1 is the reciprocal of the continuous silence length of the video frame, and the first weight threshold C1′ = C1 when R1 ≤ K1, and C1′ = +∞ when R1 > K1;
Preferably, the second feature value may be calculated by the following formula:
M2 = R2 / K2 × C2′;
wherein R2 is the background motion amplitude of the video frame, and the second weight threshold C2′ = C2 when R2 ≤ K2, and C2′ = +∞ when R2 > K2;
Preferably, the third feature value may be calculated by the following formula:
M3 = R3 / K3 × C3′;
wherein R3 is the foreground motion amplitude of the video frame, and the third weight threshold C3′ = C3 when R3 ≤ K3, and C3′ = +∞ when R3 > K3;
Preferably, the fourth feature value may be calculated by the following formula:
M4 = R4 / K4 × C4′;
wherein R4 is the absolute value of the difference between the current video frame and the first frame of the video to be spliced, and the fourth weight threshold C4′ = C4 when R4 ≤ K4, and C4′ = +∞ when R4 > K4;
Preferably, the fifth feature value may be calculated by the following formula:
M5 = R5 / K5 × C5′;
wherein R5 is the absolute value of the difference between the time point of the current video frame and the end point of the specified time range, and the fifth weight threshold C5′ = C5 when R5 ≤ K5, and C5′ = +∞ when R5 > K5; K5 = T2 − T1, where (T1, T2) is the time range of the video segment, T2 > T1;
Preferably, the feature value of the video frame may be calculated by the following formula:
SUM = (M1 + M2 + M3 + M4 + M5) / (C1 + C2 + C3 + C4 + C5).
Specifically, according to the aforementioned common features of the time points to be chosen in the video clipping process found by the inventors:
1. within a period of time before the time point to be chosen, the audio tends toward silence;
2. within a period of time before the time point to be chosen, the background of the video is fixed;
3. within a period of time before the time point to be chosen, the foreground motion amplitude of the video is small;
4. the frame matches the first frame of the video to be spliced as closely as possible;
5. the time point to be chosen is as close as possible to the end of the current search range.
With this embodiment, the processing in practice may be as follows:
1) Within the video segment in the specified time range, calculate for each video frame the reciprocal of its continuous silence length and multiply it by the weight of feature 1 (the first weight threshold) to obtain the first feature value. As a specific example used by the embodiments of the present application, the first weight threshold may be set to 8, with an adjustable range of ±3.
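Step 1) can be sketched as follows, assuming per-frame audio energies are already available; the `silence_threshold` parameter and all function names are illustrative, not the patent's:

```python
def silence_run_lengths(frame_energies, silence_threshold=0.01):
    # For each frame, the length of the run of consecutive "silent"
    # frames ending at that frame (0 if the frame itself is not silent).
    runs, current = [], 0
    for energy in frame_energies:
        current = current + 1 if energy < silence_threshold else 0
        runs.append(current)
    return runs

def first_feature_values(frame_energies, k1, c1, silence_threshold=0.01):
    # M1 = R1 / K1 * C1', with R1 the reciprocal of the silence run
    # length; frames that are not silent get an infinite feature value.
    inf = float('inf')
    out = []
    for run in silence_run_lengths(frame_energies, silence_threshold):
        r1 = 1.0 / run if run else inf
        c1_eff = c1 if r1 <= k1 else inf
        out.append(r1 / k1 * c1_eff)
    return out
```

Longer silence runs yield smaller M1 values, matching feature 1 (the audio tends toward silence before a good clip point).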
2) Traverse the video frames in the video segment; use foreground/background segmentation (not limited to any particular algorithm — any that can be used to judge the background motion amplitude will do) to separate the foreground and background images of the video frames. After the background image is extracted, multiply the background motion amplitude by the weight of feature 2 (the second weight threshold) to obtain the second feature value.
It should be noted that if the second weight threshold is infinite, this is equivalent to excluding all parts with a non-fixed background (possibly an entire fragment, or possibly part of an original fragment), i.e., excluding all parts that do not satisfy feature 2.
As a specific example used by the embodiments of the present application, the second weight threshold may be set to 5, with an adjustable range of ±3.
3) Traverse the video frames in the video segment; use an algorithm that can judge the motion tendency and amplitude in an image (such as a Gaussian mixture model) to determine the foreground motion amplitude of each video frame; multiply this foreground motion amplitude by the weight of feature 3 (the third weight threshold) to obtain the third feature value. As a specific example used by the embodiments of the present application, the third weight threshold may be set to 4, with an adjustable range of ±3.
4) Traverse the video frames in the video segment; use a difference method to judge the matching degree between the video frame currently being processed and the first frame of the video to be spliced; multiply this matching degree by the weight of feature 4 (the fourth weight threshold) to obtain the fourth feature value. As a specific example used by the embodiments of the present application, the fourth weight threshold may be set to 2, with an adjustable range of ±3.
5) Traverse the video frames in the video segment; calculate the distance (in time) between each frame and the end of the search range; multiply this distance by the weight of feature 5 (the fifth weight threshold) to obtain the fifth feature value. As a specific example used by the embodiments of the present application, the fifth weight threshold may be set to 4, with an adjustable range of ±3.
6) Traverse the video frames in the video segment; combine the first, second, third, fourth and fifth feature values of each frame as above to obtain the feature value of each frame.
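Steps 1) to 6) taken together can be sketched as a small scoring pipeline, assuming the raw measures R1..R5 have already been computed for each frame (a hypothetical illustration; the names are ours, not the patent's):

```python
def clip_point_scores(raw_measures, thresholds, weights):
    # raw_measures: one (R1..R5) tuple per frame; thresholds: (K1..K5);
    # weights: (C1..C5). Returns the SUM score of each frame.
    inf = float('inf')
    scores = []
    for rs in raw_measures:
        total = 0.0
        for r, k, c in zip(rs, thresholds, weights):
            c_eff = c if r <= k else inf  # exceeding a threshold excludes the frame
            total += r / k * c_eff
        scores.append(total / sum(weights))
    return scores

def best_clip_frame(raw_measures, thresholds, weights):
    # The frame with the minimal combined score is the best clip point.
    scores = clip_point_scores(raw_measures, thresholds, weights)
    return min(range(len(scores)), key=scores.__getitem__)
```

Note how a frame whose third tuple entry exceeds its threshold is pushed to an infinite score and can never win.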
Step 12: extract a target video frame, the target video frame being a video frame whose feature value meets preset conditions;
In this embodiment, step 12 may specifically comprise the following substeps:
Substep 121: select video frames whose feature values are less than a predetermined threshold as candidate video frames;
Substep 122: determine the target video frame among the candidate video frames.
Suppose the feature values of the frames obtained after the traversal are denoted R[1..n]. To ease the final manual screening, an optimal time point and several suboptimal time points (i.e., the time points corresponding to the target video frames) should be provided. With a simple sort alone, the optimal and suboptimal time points would usually be concentrated in a short period of time, which is meaningless for the next screening step. For this, the embodiments of the present application provide the following two methods of determining the target video frames among the candidate video frames:
Method one:
Set a threshold k = 5, in seconds.
The time point of the first target video frame is W1 = min{R[1] ... R[n]}. Centered on W1, delete the candidate video frames within ±k seconds; among the remaining candidate video frames, search for the next target video frame time point W2 = min{R[1] ... R[n], where R does not lie within W1 ± k}; and so on.
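Method one amounts to iterative selection with temporal suppression, and might be sketched as follows (an illustrative sketch; `fps` and the function name are our assumptions):

```python
def pick_clip_points(frame_scores, fps, k_seconds=5, num_points=3):
    # Method one: repeatedly take the remaining frame with the minimal
    # score and suppress all candidates within +/- k seconds of it.
    window = int(k_seconds * fps)
    remaining = dict(enumerate(frame_scores))
    picks = []
    while remaining and len(picks) < num_points:
        best = min(remaining, key=remaining.get)
        picks.append(best)
        for i in range(best - window, best + window + 1):
            remaining.pop(i, None)
    return picks
```

This keeps the selected time points spread out along the timeline instead of clustering around a single minimum.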
Method two:
Set a threshold k = 5, in frames.
Sort the video frames by feature value in ascending order to obtain R′[1..n]; note that the values of R′ are the subscripts of R, i.e., R′ is the index order of R after sorting.
Take the first target video frame W1 = R[R′[1]]. If abs{R′[2] − R′[1]} < k, then examine abs{R′[3] − R′[1]}, and so on, until abs{R′[x] − R′[1]} ≥ k is satisfied; then take the second target video frame W2 = R[R′[x]].
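Method two can be sketched as follows (an illustrative reading in which the search stops once the index distance from the first pick reaches k frames; the names are ours):

```python
def pick_two_clip_frames(frame_scores, k=5):
    # Method two: sort frame indices by score ascending; W1 is the best
    # frame, and W2 is the next frame in score order whose index is at
    # least k frames away from W1.
    order = sorted(range(len(frame_scores)), key=frame_scores.__getitem__)
    first = order[0]
    second = next((i for i in order[1:] if abs(i - first) >= k), None)
    return first, second
```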
Of course, the above methods are only examples; any method of determining the target video frames that those skilled in the art adopt according to actual conditions is feasible, and the present application does not limit this.
Step 13: splice the video at the time point corresponding to the target video frame.
In a concrete implementation, the screening and video splicing processing at the optimal time point can be carried out in this step. Specifically, the target video frames may first be further screened manually, and the video is then spliced at the time point of the selected target video frame (the optimal time point). The feature value of the video frame corresponding to the optimal time point can serve as a reference condition for judging the linkage quality of this time point; for a low-quality linkage (e.g., a high feature value, which may show up as poorly matching video pictures or a short audio interval), a special video effect (such as a page-turn effect or a shutter effect) can be inserted.
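The splicing decision described here might be sketched as follows (purely illustrative; the frame lists, the transition marker, and the quality threshold are our assumptions, not the patent's):

```python
def splice(frames_a, frames_b, cut_index, cut_score, quality_threshold=0.5,
           transition=('fade',)):
    # Join frames_b onto frames_a at cut_index; when the cut frame's
    # feature value is high (poor linkage quality), insert a transition
    # effect marker (e.g. a page-turn or shutter effect).
    joined = list(frames_a[:cut_index + 1])
    if cut_score > quality_threshold:
        joined.extend(transition)
    joined.extend(frames_b)
    return joined
```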
Referring to Fig. 2, a flow chart of the steps of embodiment 2 of a video clipping method of the present application is shown, which may specifically comprise the following steps:
Step 21: within a video segment in a specified time range, compute a feature value for each video frame according to preset features;
In this embodiment, step 21 may specifically comprise the following substeps:
Substep 211: calculate a first feature value for each video frame in the video segment in the specified time range, the first feature value being calculated from the reciprocal of the continuous silence length of the video frame and a first weight threshold;
Substep 212: extract one or more characteristic fragments according to the first feature value;
Substep 213: traverse the video frames in the characteristic fragments from back to front, and calculate the second, third, fourth and fifth feature values of the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be spliced and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and,
Substep 214: calculate, as the feature value of the current video frame, the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the characteristic fragments.
Specifically, according to the aforementioned common features of the time points to be chosen in the video clipping process found by the inventors:
1. within a period of time before the time point to be chosen, the audio tends toward silence;
2. within a period of time before the time point to be chosen, the background of the video is fixed;
3. within a period of time before the time point to be chosen, the foreground motion amplitude of the video is small;
4. the frame matches the first frame of the video to be spliced as closely as possible;
5. the time point to be chosen is as close as possible to the end of the current search range.
With this embodiment, the processing in practice may be as follows:
1) Input the parameters (or run directly with default values), including the following:
the specified time range of the video segment, T1 and T2 (T2 > T1);
the feature thresholds K1, K2, K3, K4 and K5, where K5 = T2 − T1;
the feature weights C1, C2, C3, C4 and C5.
2) Process the audio of each video frame in the video segment in the specified time range and find all fragments that meet feature 1 (the characteristic fragments). Specifically, calculate for each video frame the reciprocal of its continuous silence length and multiply it by the weight of feature 1 (the first weight threshold) to obtain the first feature value.
In a preferred embodiment of the present application, the first feature value may be calculated by the following formula:
M1 = R1 / K1 × C1′;
wherein R1 is the reciprocal of the continuous silence length of the video frame, and the first weight threshold C1′ = C1 when R1 ≤ K1, and C1′ = +∞ when R1 > K1.
As a specific example used by the embodiments of the present application, the first weight threshold may be set to 8, with an adjustable range of ±3.
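Extracting the characteristic fragments from the first feature values can be sketched as follows (an illustrative sketch; the grouping rule "M1 below a threshold" is our reading of feature 1):

```python
def characteristic_fragments(m1_values, threshold):
    # Group consecutive frames whose first feature value M1 is below the
    # threshold into characteristic fragments, returned as inclusive
    # (start, end) frame-index pairs.
    fragments, start = [], None
    for i, m1 in enumerate(m1_values):
        if m1 < threshold:
            if start is None:
                start = i
        elif start is not None:
            fragments.append((start, i - 1))
            start = None
    if start is not None:
        fragments.append((start, len(m1_values) - 1))
    return fragments
```

Only the frames inside these fragments need the costlier image-feature analysis of the following steps, which is where the resource saving of this embodiment comes from.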
3) Traverse the video frames in the characteristic fragments; use foreground/background segmentation (not limited to any particular algorithm — any that can be used to judge the background motion amplitude will do) to separate the foreground and background images of the video frames. After the background image is extracted, multiply the background motion amplitude by the weight of feature 2 (the second weight threshold) to obtain the second feature value.
Preferably, the second feature value may be calculated by the following formula:
M2 = R2 / K2 × C2′;
wherein R2 is the background motion amplitude of the video frame, and the second weight threshold C2′ = C2 when R2 ≤ K2, and C2′ = +∞ when R2 > K2.
It should be noted that if the second weight threshold is infinite, this is equivalent to excluding all parts with a non-fixed background (possibly an entire fragment, or possibly part of an original fragment), i.e., excluding all parts that do not satisfy feature 2.
As a specific example used by the embodiments of the present application, the second weight threshold may be set to 5, with an adjustable range of ±3.
4) frame of video in the traversal characteristic fragment; Use can process decision chart be judged the foreground moving amplitude of frame of video as the algorithm (like gauss hybrid models) of movement tendency and amplitude; This foreground moving amplitude multiply by the weights (the 3rd weight threshold) of characteristic 3, obtain the 3rd eigenwert.
Preferably, the third feature value can be calculated by the following formula:
M3 = R3/K3 * C3′;
wherein R3 is the foreground motion amplitude of the video frame, and the third weight threshold C3′ = C3 when R3 ≤ K3, and +∞ when R3 > K3.
As a concrete example of this embodiment, the third weight threshold can be set to 4, with a floating range of ±3.
5) Traverse the video frames in the feature segments, and use a difference method to judge the matching degree between a given video frame (the video frame currently being processed) and the first frame of the video to be joined; multiply this matching degree by the weight of Feature 4 (the fourth weight threshold) to obtain the fourth feature value.
Preferably, the fourth feature value can be calculated by the following formula:
M4 = R4/K4 * C4′;
wherein R4 is the absolute value of the difference between the current video frame and the first frame of the video to be joined, and the fourth weight threshold C4′ = C4 when R4 ≤ K4, and +∞ when R4 > K4.
As a concrete example of this embodiment, the fourth weight threshold can be set to 2, with a floating range of ±3.
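One plausible reading of the difference method in step 5) is a mean absolute per-pixel difference between the current frame and the first frame of the video to be joined. The toy grayscale frames below (nested lists of intensity values) are illustrative assumptions, not patent data:

```python
def frame_difference(frame_a, frame_b):
    """Mean absolute per-pixel difference between two same-size
    grayscale frames, each represented as a list of rows of intensities.
    A smaller value indicates a better match between the two frames."""
    total, count = 0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

current = [[10, 20], [30, 40]]        # current video frame (toy 2x2 example)
first_of_next = [[12, 18], [30, 44]]  # first frame of the video to be joined
r4 = frame_difference(current, first_of_next)  # (2 + 2 + 0 + 4) / 4 = 2.0
```

The resulting R4 then feeds the formula M4 = R4/K4 * C4′ above.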
6) Traverse the video frames in the feature segments, and calculate the distance in time between each frame and the end of the search range; multiply this distance by the weight of Feature 5 (the fifth weight threshold) to obtain the fifth feature value.
Preferably, the fifth feature value can be calculated by the following formula:
M5 = R5/K5 * C5′;
wherein R5 is the absolute value of the difference between the time point of the current video frame and the end point of the specified time range, and the fifth weight threshold C5′ = C5 when R5 ≤ K5, and +∞ when R5 > K5; K5 = T2 − T1, where (T1, T2) is the time range of the video segment and T2 > T1.
As a concrete example of this embodiment, the fifth weight threshold can be set to 4, with a floating range of ±3.
7) Traverse the video frames in the feature segments and calculate, for each frame, the combination of its first, second, third, fourth and fifth feature values as the feature value of that frame.
Preferably, the feature value of a video frame can be calculated by the following formula:
SUM = (M1+M2+M3+M4+M5)/(C1+C2+C3+C4+C5).
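The per-frame score can be sketched directly from the SUM formula. The C weights below reuse the example values from the text (8, 5, 4, 2, 4), while the M values are hypothetical:

```python
def combined_score(m_values, c_weights):
    """SUM = (M1 + ... + M5) / (C1 + ... + C5): the summed weighted
    feature values, normalized by the sum of the base weight thresholds."""
    return sum(m_values) / sum(c_weights)

c = [8, 5, 4, 2, 4]            # C1..C5, the example weights from the text
m = [4.0, 2.5, 1.0, 0.5, 2.0]  # hypothetical M1..M5 for one frame
score = combined_score(m, c)   # 10.0 / 23, roughly 0.435
```

A frame with any infinite M value inherits an infinite SUM, so thresholded-out frames can never be selected as minima.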
Step 22: extract target video frames, where a target video frame is a video frame whose feature value satisfies a preset condition.
In this embodiment, step 22 may specifically comprise the following substeps:
Substep 221: sort the feature segments in ascending order by the minimum feature value of their video frames, and take the top N feature segments as preferred feature segments, where N is a predetermined number threshold;
Substep 222: in each preferred feature segment, extract the video frame with the minimum feature value as a target video frame.
For example, the minimum SUM of each feature segment can be taken as that segment's score; after sorting, the video frames corresponding to the minimum feature values in the N segments with the lowest scores are the target video frames.
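Substeps 221–222 can be sketched as follows. Representing a feature segment as a list of (timestamp, SUM score) pairs is an assumption made for illustration:

```python
def select_target_frames(segments, n):
    """segments: list of feature segments, each a list of
    (timestamp, score) pairs. Rank the segments in ascending order of
    their minimum frame score, keep the best N, and return the
    minimum-score frame of each as a target video frame."""
    ranked = sorted(segments, key=lambda seg: min(score for _, score in seg))
    return [min(seg, key=lambda frame: frame[1]) for seg in ranked[:n]]

segments = [
    [(1.0, 0.9), (1.2, 0.4)],   # each tuple: (time in seconds, frame score)
    [(3.0, 0.2), (3.4, 0.7)],
    [(5.0, 0.6), (5.5, 0.5)],
]
targets = select_target_frames(segments, n=2)
# -> [(3.0, 0.2), (1.2, 0.4)]: the best frame of each of the two best segments
```

The timestamps of the returned frames are the candidate splice points handed to the next step.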
Of course, the above method is only an example; those skilled in the art may adopt any method of extracting target video frames according to actual conditions, and the present application is not limited in this respect.
Step 23: perform video splicing at the time point corresponding to the target video frame.
In a concrete implementation, the screening and video splicing work at the optimal time point can be carried out in this step. Specifically, the target video frames may first be further screened manually, and video splicing then performed at the time point of the selected target video frame (the optimal time point). The feature value of the video frame at the optimal time point can serve as a reference condition for judging the joining quality at that time point; for a low-quality joint (e.g. a higher feature value, which may show as poorly matched pictures or a short silence interval), a special video effect can be inserted (for example a flip effect or a blinds effect).
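The joining-quality check described above can be sketched as a simple rule: splice directly when the joint's feature value is low, and mask a poor joint with a transition effect. The threshold value and the strategy names are illustrative assumptions, not from the patent:

```python
def choose_splice_strategy(joint_score, threshold=0.5):
    """Pick a splicing strategy for a candidate joint: a direct cut when
    the feature value indicates a good match, and a masking transition
    (e.g. a flip or blinds effect) for a low-quality joint."""
    return "direct_cut" if joint_score <= threshold else "blinds_transition"

print(choose_splice_strategy(0.3))  # low score: pictures/audio match well
print(choose_splice_strategy(0.8))  # high score: mask the poor joint
```

In practice the threshold would be tuned alongside the weight thresholds C1..C5, since the SUM scale depends on them.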
It should be noted that, for simplicity of description, the method embodiments above are expressed as a series of action combinations. Those skilled in the art should know, however, that the present application is not limited by the described order of actions, since according to the present application some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
Referring to Fig. 3, a structural block diagram of an embodiment of a video clipping device of the present application is shown, which may specifically comprise the following modules:
Feature value calculation module 301, configured to calculate feature values for video frames according to preset features in a video segment within a specified time range;
Target video frame extraction module 302, configured to extract target video frames, where a target video frame is a video frame whose feature value satisfies a preset condition;
Video splicing module 303, configured to perform video splicing at the time point corresponding to the target video frame.
In a preferred embodiment of the present application, the feature value calculation module 301 may comprise the following submodules:
a feature obtaining submodule, configured to calculate, for each video frame in the video segment within the specified time range, a first feature value, a second feature value, a third feature value, a fourth feature value and a fifth feature value respectively;
wherein the first feature value is calculated from the reciprocal of the video frame's continuous silence length and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and
a feature integration submodule, configured to calculate the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the video segment as the feature value of the current video frame.
In this embodiment, the target video frame extraction module 302 may comprise the following submodules:
a candidate video frame selection submodule, configured to select video frames whose feature values are less than a predetermined threshold as candidate video frames;
a target video frame determination submodule, configured to determine the target video frames among the candidate video frames.
In another preferred embodiment of the present application, the feature value calculation module 301 may comprise the following submodules:
a first feature value calculation submodule, configured to calculate a first feature value for each video frame in the video segment within the specified time range, the first feature value being calculated from the reciprocal of the video frame's continuous silence length and a first weight threshold;
a feature segment extraction submodule, configured to extract feature segments according to the first feature value, the feature segments being one or more;
a traversal submodule, configured to traverse the video frames in the feature segments from back to front and calculate the second, third, fourth and fifth feature values of the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and
a feature segment integration submodule, configured to calculate the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the feature segments as the feature value of the current video frame.
In this embodiment, the target video frame extraction module 302 may comprise the following submodules:
a feature segment selection submodule, configured to sort the feature segments in ascending order by the minimum feature value of their video frames and take the top N feature segments as preferred feature segments, where N is a predetermined number threshold;
a target video frame selection submodule, configured to extract, in each preferred feature segment, the video frame with the minimum feature value as a target video frame.
As for the device embodiments, since they are basically similar to the method embodiments, their description is relatively simple; for relevant parts, refer to the corresponding description of the method embodiments.
The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments comprising any of the above systems or devices, and the like.
The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The present application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
The video clipping method and video clipping device provided by the present application have been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present application; the descriptions of the above embodiments are merely intended to help understand the method of the present application and its core ideas. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific embodiments and application scope according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (11)

1. A video clipping method, characterized by comprising:
calculating feature values for video frames according to preset features in a video segment within a specified time range;
extracting target video frames, wherein a target video frame is a video frame whose feature value satisfies a preset condition;
performing video splicing at the time point corresponding to the target video frame.
2. The method of claim 1, characterized in that the step of calculating feature values for video frames according to preset features in a video segment within a specified time range comprises:
for each video frame in the video segment within the specified time range, calculating a first feature value, a second feature value, a third feature value, a fourth feature value and a fifth feature value respectively;
wherein the first feature value is calculated from the reciprocal of the video frame's continuous silence length and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and
calculating the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the video segment as the feature value of the current video frame.
3. The method of claim 1, characterized in that the step of calculating feature values for video frames according to preset features in a video segment within a specified time range comprises:
calculating a first feature value for each video frame in the video segment within the specified time range, the first feature value being calculated from the reciprocal of the video frame's continuous silence length and a first weight threshold;
extracting feature segments according to the first feature value, the feature segments being one or more;
traversing the video frames in the feature segments from back to front, and calculating the second, third, fourth and fifth feature values of the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and
calculating the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the feature segments as the feature value of the current video frame.
4. The method of claim 2 or 3, characterized in that the first feature value is calculated by the formula:
M1 = R1/K1 * C1′;
wherein R1 is the reciprocal of the video frame's continuous silence length, and the first weight threshold C1′ = C1 when R1 ≤ K1, and +∞ when R1 > K1;
and the second feature value is calculated by the formula:
M2 = R2/K2 * C2′;
wherein R2 is the background motion amplitude of the video frame, and the second weight threshold C2′ = C2 when R2 ≤ K2, and +∞ when R2 > K2;
and the third feature value is calculated by the formula:
M3 = R3/K3 * C3′;
wherein R3 is the foreground motion amplitude of the video frame, and the third weight threshold C3′ = C3 when R3 ≤ K3, and +∞ when R3 > K3;
and the fourth feature value is calculated by the formula:
M4 = R4/K4 * C4′;
wherein R4 is the absolute value of the difference between the current video frame and the first frame of the video to be joined, and the fourth weight threshold C4′ = C4 when R4 ≤ K4, and +∞ when R4 > K4;
and the fifth feature value is calculated by the formula:
M5 = R5/K5 * C5′;
wherein R5 is the absolute value of the difference between the time point of the current video frame and the end point of the specified time range, and the fifth weight threshold C5′ = C5 when R5 ≤ K5, and +∞ when R5 > K5; K5 = T2 − T1, where (T1, T2) is the time range of the video segment and T2 > T1;
and the feature value of the video frame is calculated by the formula:
SUM = (M1+M2+M3+M4+M5)/(C1+C2+C3+C4+C5).
5. The method of claim 2, characterized in that the step of extracting target video frames comprises:
selecting video frames whose feature values are less than a predetermined threshold as candidate video frames;
determining the target video frames among the candidate video frames.
6. The method of claim 3, characterized in that the step of extracting target video frames comprises:
sorting the feature segments in ascending order by the minimum feature value of their video frames, and taking the top N feature segments as preferred feature segments, where N is a predetermined number threshold;
extracting, in each preferred feature segment, the video frame with the minimum feature value as a target video frame.
7. A video clipping device, characterized by comprising:
a feature value calculation module, configured to calculate feature values for video frames according to preset features in a video segment within a specified time range;
a target video frame extraction module, configured to extract target video frames, wherein a target video frame is a video frame whose feature value satisfies a preset condition;
a video splicing module, configured to perform video splicing at the time point corresponding to the target video frame.
8. The device of claim 7, characterized in that the feature value calculation module comprises:
a feature obtaining submodule, configured to calculate, for each video frame in the video segment within the specified time range, a first feature value, a second feature value, a third feature value, a fourth feature value and a fifth feature value respectively;
wherein the first feature value is calculated from the reciprocal of the video frame's continuous silence length and a first weight threshold;
the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and
a feature integration submodule, configured to calculate the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the video segment as the feature value of the current video frame.
9. The device of claim 7, characterized in that the feature value calculation module comprises:
a first feature value calculation submodule, configured to calculate a first feature value for each video frame in the video segment within the specified time range, the first feature value being calculated from the reciprocal of the video frame's continuous silence length and a first weight threshold;
a feature segment extraction submodule, configured to extract feature segments according to the first feature value, the feature segments being one or more;
a traversal submodule, configured to traverse the video frames in the feature segments from back to front and calculate the second, third, fourth and fifth feature values of the video frames;
wherein the second feature value is calculated from the background motion amplitude of the video frame and a second weight threshold;
the third feature value is calculated from the foreground motion amplitude of the video frame and a third weight threshold;
the fourth feature value is calculated from the absolute difference between the video frame and the first frame of the video to be joined and a fourth weight threshold;
the fifth feature value is calculated from the difference between the time point of the video frame and the end point of the specified time range and a fifth weight threshold;
and
a feature segment integration submodule, configured to calculate the weighted mean of the first, second, third, fourth and fifth feature values of each video frame in the feature segments as the feature value of the current video frame.
10. The device of claim 8, characterized in that the target video frame extraction module comprises:
a candidate video frame selection submodule, configured to select video frames whose feature values are less than a predetermined threshold as candidate video frames;
a target video frame determination submodule, configured to determine the target video frames among the candidate video frames.
11. The device of claim 9, characterized in that the target video frame extraction module comprises:
a feature segment selection submodule, configured to sort the feature segments in ascending order by the minimum feature value of their video frames and take the top N feature segments as preferred feature segments, where N is a predetermined number threshold;
a target video frame selection submodule, configured to extract, in each preferred feature segment, the video frame with the minimum feature value as a target video frame.
CN201210036660.6A 2012-02-17 2012-02-17 Method and device for clipping video Active CN102543136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210036660.6A CN102543136B (en) 2012-02-17 2012-02-17 Method and device for clipping video


Publications (2)

Publication Number Publication Date
CN102543136A true CN102543136A (en) 2012-07-04
CN102543136B CN102543136B (en) 2015-05-20

Family

ID=46349855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210036660.6A Active CN102543136B (en) 2012-02-17 2012-02-17 Method and device for clipping video

Country Status (1)

Country Link
CN (1) CN102543136B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101304490A (en) * 2008-06-20 2008-11-12 北京六维世纪网络技术有限公司 Method and device for jointing video
CN101308501A (en) * 2008-06-30 2008-11-19 腾讯科技(深圳)有限公司 Method, system and device for generating video frequency abstract


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139663A (en) * 2013-01-25 2013-06-05 深圳先进技术研究院 Automatic splitting device of video and splitting method thereof
CN103139663B (en) * 2013-01-25 2016-05-11 深圳先进技术研究院 The automatic detachment device of video and the method automatically splitting thereof
CN105072354A (en) * 2015-07-17 2015-11-18 Tcl集团股份有限公司 Method and system of synthesizing video stream by utilizing a plurality of photographs
CN106375872A (en) * 2015-07-24 2017-02-01 三亚中兴软件有限责任公司 Method and device for video editing
CN106559631A (en) * 2015-09-30 2017-04-05 小米科技有限责任公司 Method for processing video frequency and device
CN108268486A (en) * 2016-12-30 2018-07-10 中兴通讯股份有限公司 Association, playback method and the device of multimedia content, terminal
CN108268486B (en) * 2016-12-30 2022-04-19 中兴通讯股份有限公司 Multimedia content association and playing method and device, and terminal
CN107529091A (en) * 2017-09-08 2017-12-29 广州华多网络科技有限公司 Video clipping method and device
CN108391063A (en) * 2018-02-11 2018-08-10 北京秀眼科技有限公司 Video clipping method and device
CN108391063B (en) * 2018-02-11 2021-02-02 北京优聚视微传媒科技有限公司 Video editing method and device
CN109451360A (en) * 2018-11-02 2019-03-08 北京亿幕信息技术有限公司 Video transition effects method and engine
CN109451360B (en) * 2018-11-02 2021-03-05 北京亿幕信息技术有限公司 Video transition special effect method and engine
CN113079420A (en) * 2020-01-03 2021-07-06 北京三星通信技术研究有限公司 Video generation method and device, electronic equipment and computer readable storage medium
CN113923390A (en) * 2021-09-30 2022-01-11 北京字节跳动网络技术有限公司 Video recording method, device, equipment and storage medium
WO2023065663A1 (en) * 2021-10-18 2023-04-27 北京达佳互联信息技术有限公司 Video editing method and apparatus, and electronic device and storage medium

Also Published As

Publication number Publication date
CN102543136B (en) 2015-05-20


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method and device for clipping video

Effective date of registration: 20191014

Granted publication date: 20150520

Pledgee: China Construction Bank Corp Guangzhou Economic and Technological Development Zone sub-branch

Pledgor: Guangzhou Ncast Electronic Science & Technology Co., Ltd.

Registration number: Y2019440000121