CN105611401A - Video cutting method and video cutting device - Google Patents
- Publication number
- CN105611401A (application number CN201510968558.3A)
- Authority
- CN
- China
- Prior art keywords
- video
- video file
- audio
- data frame
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
The invention discloses a video cutting method and a video cutting device, which are used to improve the playback quality of video files after cutting. The method comprises the following steps: segmenting a source video file to acquire at least two segments of first video files to be merged; performing an audio data frame check at the ending position of each segment of the first video files and, when the ending position of a first video file lacks audio data frames, performing audio interpolation on the first video file to acquire a corresponding processed first video file; and merging the segments of the first video files to form a target video file to be played. The segmented first video files are thus adjusted according to the audio data frames and the adjusted first video files are merged, so that when the merged target video file is played no time difference arises between audio and video, and the playback quality of the cut video file is improved.
Description
Technical field
The present invention relates to the field of multimedia technology, and in particular to a video clipping method and apparatus.
Background art
With the development of multimedia technology, the playback of video, audio, pictures, and the like has become widely familiar to users. When consuming multimedia, for example during video playback, a user may not need to watch the entire video content, or may only want to watch certain fragments of it. In such cases the video must be clipped: several clip files are cut from the source video file according to demand and then merged into a new video file for playback.
At present, some video clipping software can perform non-linear editing on a source video file. According to user instructions, the video is searched until a cut point is found, the video is segmented into multiple fragment videos, and the fragment videos to be watched are then merged into a new video file for playback. This operation is very quick, but a time difference may exist between the audio and the video of the new video file, or some video frame data may be incomplete and impossible to play.
Summary of the invention
The present invention provides a video clipping method and apparatus for improving the playback quality of a clipped video file.
The invention provides a video clipping method, the method comprising:
cutting a source video file to obtain at least two segments of first video files to be merged;
performing an audio data frame check at the ending position of each segment of the first video files and, when the ending position of a first video file lacks audio data frames, performing audio fill-in processing on the first video file to obtain a corresponding processed first video file; and
merging the segments of the first video files into a target video file to be played.
In one embodiment of the invention, when the ending position of the first video file lacks audio data frames, performing audio fill-in processing on the first video file comprises:
performing an audio data frame check at the starting position of the next first video file adjacent to the current first video file;
if the starting position of the next first video file has surplus audio data frames without corresponding video data frames, filling the surplus audio data frames into the ending position of the current first video file; and
deleting the surplus audio data frames from the next first video file.
In one embodiment of the invention, cutting the source video file to obtain at least two segments of first video files to be merged comprises:
receiving a cutting instruction containing a cut-point time, wherein the cut-point time comprises a start time and an end time; and
obtaining, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point time, thereby obtaining the first video file.
In one embodiment of the invention, when the ending position of the first video file lacks audio data frames, performing audio fill-in processing on the first video file comprises:
determining the time difference between the audio stream and the video stream in the first video file; and
filling in audio data frames corresponding to the time difference.
In one embodiment of the invention, determining the time difference between the audio stream and the video stream in the first video file comprises:
comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference;
comparing the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and
obtaining the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
The invention also provides a video clipping device, the device comprising:
a cutting unit, configured to cut a source video file to obtain at least two segments of first video files to be merged;
a processing unit, configured to perform an audio data frame check at the ending position of each segment of the first video files and, when the ending position of a first video file lacks audio data frames, to perform audio fill-in processing on the first video file to obtain a corresponding processed first video file; and
a merging unit, configured to merge the segments of the first video files into a target video file to be played.
In one embodiment of the invention, the processing unit comprises:
a checking subunit, configured to perform an audio data frame check at the starting position of the next first video file adjacent to the current first video file;
a first fill-in subunit, configured to, if the starting position of the next first video file has surplus audio data frames without corresponding video data frames, fill the surplus audio data frames into the ending position of the current first video file; and
a deleting subunit, configured to delete the surplus audio data frames from the next first video file.
In one embodiment of the invention, the cutting unit comprises:
a receiving subunit, configured to receive a cutting instruction containing a cut-point time, wherein the cut-point time comprises a start time and an end time; and
an obtaining subunit, configured to obtain, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point time, thereby obtaining the first video file.
In one embodiment of the invention, the processing unit comprises:
a determining subunit, configured to determine the time difference between the audio stream and the video stream in the first video file; and
a second fill-in subunit, configured to fill in audio data frames corresponding to the time difference.
In one embodiment of the invention, the determining subunit is specifically configured to compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; to compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and to obtain the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
Some beneficial effects of the embodiments of the present invention may include the following.
The first video files obtained by cutting are adjusted according to audio data frames, and the adjusted first video files are then merged, so that when the merged target video file is played no time difference exists between audio and video, which improves the playback quality of the clipped video file.
Other features and advantages of the present invention will be set forth in the following description, will in part become apparent from the description, or may be learned by practicing the invention. The objects and other advantages of the invention may be realized and obtained by the structure particularly pointed out in the written description, the claims, and the accompanying drawings.
The technical solution of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flow chart of video clipping according to an exemplary embodiment;
Fig. 2 is a flow chart of video clipping according to exemplary embodiment one;
Fig. 3 is a flow chart of video clipping according to exemplary embodiment two;
Fig. 4 is a structural diagram of a video clipping device according to an exemplary embodiment;
Fig. 5 is a structural diagram of the cutting unit 410 according to an exemplary embodiment;
Fig. 6 is a structural diagram of the processing unit 420 according to an exemplary embodiment;
Fig. 7 is a structural diagram of the processing unit 420 according to another exemplary embodiment.
Detailed description of the invention
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are only intended to describe and explain the present invention, not to limit it.
In the technical solution provided by the embodiments of the disclosure, a source video file is cut into at least two first video files, the first video files obtained by cutting are adjusted according to audio data frames, and the adjusted first video files are merged, so that when the merged target video file is played no time difference exists between audio and video, which improves the playback quality of the clipped video file.
Fig. 1 is a flow chart of video clipping according to an exemplary embodiment. As shown in Fig. 1, the video clipping process comprises:
Step 101: cut a source video file to obtain at least two segments of first video files to be merged.
Here, the video file to be clipped is called the source video file. The source video file contains a video stream and an audio stream; each video data frame in the video stream has a corresponding video timestamp, and each audio data frame in the audio stream has a corresponding audio timestamp.
Therefore, the source video file can be cut according to an input cut-point time together with the video timestamps and audio timestamps. Specifically, this may comprise: receiving a cutting instruction containing a cut-point time, wherein the cut-point time comprises a start time and an end time; and then, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, obtaining the video data frames and audio data frames corresponding to the cut-point time, thereby obtaining the first video file.
Taking the video stream as an example, suppose the playback rate of the source video file is 25 FPS, i.e. 25 video data frames are played per second, so that the playback time of each frame is 40 ms. Taking 8:00:00 as the reference starting point, the video timestamp of the first video data frame is 8:00:00.000, the video timestamp of the second video data frame is 8:00:00.040, the video timestamp of the third video data frame is 8:00:00.080, and so on; the video timestamp of the 9000th video data frame is 8:06:00.000.
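The timestamp arithmetic described above can be sketched in code. This is an illustrative sketch only, not part of the patent; the function name `video_timestamp` and the use of Python's `datetime` are assumptions made for the example.

```python
from datetime import datetime, timedelta

FPS = 25
FRAME_DURATION_MS = 1000 // FPS  # 40 ms per video data frame at 25 FPS

def video_timestamp(frame_index: int, reference: datetime) -> datetime:
    """Return the video timestamp of the frame_index-th frame (0-based),
    counted from the reference starting point."""
    return reference + timedelta(milliseconds=frame_index * FRAME_DURATION_MS)

reference = datetime(2015, 12, 21, 8, 0, 0)  # 8:00:00 reference point (date arbitrary)
print(video_timestamp(0, reference).time())  # first frame
print(video_timestamp(1, reference).time())  # second frame, 40 ms later
print(video_timestamp(2, reference).time())  # third frame, 80 ms later
```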
After the cut-point time is determined according to the application demand, a cutting instruction containing the cut-point time can be input and received. For example, the cut-point time contained in the cutting instruction comprises a start time of 8:00:00.000 and an end time of 8:05:00.120; the video data frames whose video timestamps lie between the start time and the end time can then be obtained.
Likewise, the audio data frames of the audio stream in the source video file also carry audio timestamps, so the audio data frames whose audio timestamps lie between the start time and the end time can be obtained in the same way, thereby obtaining the first video file.
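The frame extraction just described, keeping exactly the video and audio data frames whose timestamps fall between the start time and the end time, can be sketched as follows. The representation of frames as `(timestamp_ms, payload)` tuples and the name `cut_frames` are assumptions made for illustration.

```python
def cut_frames(frames, start_ms, end_ms):
    """Keep the (timestamp_ms, payload) frames whose timestamps fall
    within the cut-point time range [start_ms, end_ms]."""
    return [f for f in frames if start_ms <= f[0] <= end_ms]

# Hypothetical streams: one frame every 40 ms (25 FPS).
video = [(i * 40, f"v{i}") for i in range(10)]
audio = [(i * 40, f"a{i}") for i in range(10)]

# Cut the first video file from 0 ms to 120 ms.
first_video_file = {
    "video": cut_frames(video, 0, 120),
    "audio": cut_frames(audio, 0, 120),
}
print(len(first_video_file["video"]))  # -> 4 (frames at 0, 40, 80, 120 ms)
```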
Of course, first video files corresponding to other cut-point times can also be obtained, for example a first video file from 8:20:00 to 8:25:10.
Step 102: perform an audio data frame check at the ending position of each segment of the first video files, and when the ending position of a first video file lacks audio data frames, perform audio fill-in processing on the first video file to obtain a corresponding processed first video file.
The source video file contains a video stream and an audio stream, and the two are not necessarily synchronized, so a time difference may exist between the audio stream and the video stream of a first video file. For example, the source video file may contain video data frames from 8:00:00 onward while the sound, and therefore the audio data frames, only begins at 8:00:30; if the source video file is cut starting from 8:00:00, a time difference may then exist between the audio stream and the video stream of the resulting first video file.
As another example, suppose that after cutting, the video timestamp corresponding to the last video data frame of the video stream in a first video file is 8:05:00 while the audio timestamp corresponding to the last audio data frame of the audio stream is 8:04:00. The ending position of the first video file then lacks audio data frames, approximately 250 audio data frames if each frame plays for 40 ms.
In this case, audio fill-in processing can be performed on the first video file to obtain the corresponding processed first video file. Since two or more first video files are to be merged, the video file lacking audio data frames is taken as the current first video file, and an audio data frame check can be performed at the starting position of the next first video file adjacent to the current first video file. If the starting position of the next first video file has surplus audio data frames without corresponding video data frames, the surplus audio data frames are filled into the ending position of the current first video file, and the surplus audio data frames are then deleted from the next first video file.
For example, suppose the ending position of the current first video file lacks 250 audio data frames, and the starting position of the adjacent next first video file has only audio data frames and no video data frames: in the next first video file, the video timestamp corresponding to the first video data frame of the video stream is 8:11:00 while the audio timestamp corresponding to the first audio data frame of the audio stream is 8:10:00, so the starting position of this first video file has surplus audio data frames, roughly 250 of them if each video data frame plays for 40 ms. These 250 surplus audio data frames can then be appended to the ending position of the current first video file. Of course, the numbers of audio data frames may not match exactly; how many frames to supplement can be determined from the number of audio data frames missing at the ending position of the current first video file. If the next first video file has surplus audio data frames to spare, the shortfall at the ending position of the current first video file can be made up completely.
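The surplus-audio fill-in just described can be sketched as follows, under simplifying assumptions: frames are `(timestamp_ms, payload)` tuples, every audio data frame lasts a fixed 40 ms, and all names are hypothetical. In a real container format the moved frames' timestamps would additionally be rewritten during the later merge step.

```python
FRAME_MS = 40  # assumed fixed audio data frame duration

def fill_in_surplus_audio(current, nxt):
    """Move surplus audio data frames (audio at the start of the next first
    video file that precedes its first video data frame) into the ending
    position of the current first video file, then delete them from the
    next file; move only as many as the current file's ending position lacks."""
    first_video_ts = nxt["video"][0][0] if nxt["video"] else float("inf")
    surplus = [f for f in nxt["audio"] if f[0] < first_video_ts]

    missing_ms = current["video"][-1][0] - current["audio"][-1][0]
    needed = max(0, missing_ms // FRAME_MS)  # frames missing at the ending position

    moved = surplus[:needed]
    current["audio"].extend(moved)  # fill into the current file
    nxt["audio"] = [f for f in nxt["audio"] if f not in moved]  # delete from the next file
    return current, nxt

current = {"video": [(960, "v"), (1000, "v")], "audio": [(880, "a"), (920, "a")]}
nxt = {"video": [(2080, "v")], "audio": [(2000, "a"), (2040, "a"), (2080, "a")]}
current, nxt = fill_in_surplus_audio(current, nxt)
print(len(current["audio"]), len(nxt["audio"]))  # -> 4 1
```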
Of course, once the surplus audio data frames of the next first video file have been filled into the ending position of the current first video file, those surplus audio data frames must be deleted from the next first video file.
In the embodiment of the present invention, audio data frames corresponding to the time difference between the audio stream and the video stream of the first video file can also be filled in: the time difference between the audio stream and the video stream of the first video file is determined, and audio data frames corresponding to that time difference are then filled in.
Since each video data frame in the video stream has a corresponding video timestamp and each audio data frame in the audio stream has a corresponding audio timestamp, comparing the corresponding timestamps makes it possible to determine the time difference between the audio stream and the video stream of the first video file. Specifically, this may comprise: comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; comparing the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtaining the time difference between the audio stream and the video stream according to the first time difference and the second time difference. Alternatively, a first time corresponding to the video stream and a second time corresponding to the audio stream in the first video file can be obtained, and the time difference between the audio stream and the video stream derived from the first time and the second time.
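A minimal sketch of the first-frame/last-frame comparison is given below. The patent does not spell out exactly how the first and second time differences are combined; taking their difference (the drift accumulated between the start and the end of the segment) is one plausible reading, and the function name is hypothetical.

```python
def stream_time_difference(video_ts, audio_ts):
    """Estimate the audio/video time difference of a first video file from
    per-frame timestamps (milliseconds), via first/last frame comparison."""
    first_diff = video_ts[0] - audio_ts[0]     # first time difference
    second_diff = video_ts[-1] - audio_ts[-1]  # second time difference
    return second_diff - first_diff  # assumed combination of the two differences

video_ts = [0, 40, 80, 120]
audio_ts = [0, 40, 80]  # audio stream ends one 40 ms frame early
print(stream_time_difference(video_ts, audio_ts))  # -> 40
```

A constant offset between the streams yields zero here, which matches the intuition that only drift (missing frames at one end) needs to be filled in.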
Because the ending position of the first video file lacks audio data frames and a time difference therefore exists, audio data frames corresponding to the time difference can be filled in to obtain the processed first video file.
Step 103: merge the segments of the first video files into a target video file to be played.
When the ending position of a first video file lacks audio data frames, audio fill-in processing is performed on it to obtain the corresponding processed first video file; when the ending position of a first video file does not lack audio data frames, no processing is needed and the first video file remains as it is. The segments of the first video files can then be merged into the target video file to be played, wherein the timestamps of the video streams in the segments can be converted according to the crystal oscillator frequency to determine the playback time of the target video file.
For example, suppose the first segment of the first video files starts at 8:00:00.000 and ends at 8:05:00.030, and the second segment starts at 8:12:00.500 and ends at 8:20:00.000. When these two segments are merged, the video timestamps of the video streams and the audio timestamps of the audio streams in the first video files must also be modified. If the playback frame rate is still 25 FPS, the video timestamp of the first video data frame in the video stream of the second segment may start at 8:05:00.070, and the video timestamps of the subsequent video data frames of that first video file are modified in turn. The playback frame rate corresponds to the crystal oscillator frequency; for example, when each frame plays for 21 ms, the corresponding crystal oscillator frequency is 1920 Hz. According to the crystal oscillator frequency, the timestamps of the video streams in the segments are converted, and the playback time of the target video file is determined.
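The timestamp rewriting performed during merging can be sketched as follows; this is an illustrative sketch, with a fixed frame duration standing in for the conversion driven by the crystal oscillator frequency, and all names are assumptions.

```python
def merge_segments(segments, frame_ms=40):
    """Concatenate segments of (timestamp_ms, payload) video frames,
    rewriting each frame's timestamp so that playback is continuous at
    the given frame duration (here 40 ms, i.e. 25 FPS)."""
    merged, next_ts = [], 0
    for segment in segments:
        for _, payload in segment:
            merged.append((next_ts, payload))
            next_ts += frame_ms
    return merged

seg1 = [(0, "v0"), (40, "v1")]              # first segment, from 8:00:00.000
seg2 = [(750_500, "v2"), (750_540, "v3")]   # second segment, original timestamps
print(merge_segments([seg1, seg2]))
# -> [(0, 'v0'), (40, 'v1'), (80, 'v2'), (120, 'v3')]
```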
It can be seen that in the embodiment of the present invention, the first video files obtained by cutting are adjusted according to audio data frames and the adjusted first video files are merged, so that when the merged target video file is played no time difference exists between audio and video, which improves the playback quality of the clipped video file.
The method provided by the embodiments of the disclosure is illustrated below through the operation flows of specific embodiments.
Embodiment one. Referring to Fig. 2, in this embodiment the video clipping process comprises:
Step 201: cut a source video file to obtain two segments of first video files to be merged.
The cut-point times in one cutting instruction are 8:00:00 and 8:10:00; in response to this cutting instruction, the first segment of the first video files is obtained. The cut-point times in another cutting instruction are 8:20:00 and 8:30:00; in response to that cutting instruction, the second segment of the first video files is obtained. Specifically, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times are obtained, yielding the first video files.
Step 202: judge whether the ending position of each segment of the first video files lacks audio data frames; if so, execute step 203; otherwise, execute step 206.
Here, an audio data frame check is performed at the ending position of each segment of the first video files. If the ending position lacks audio data frames, step 203 is executed; otherwise, step 206 is executed.
Step 203: determine the first video file as the current first video file.
Step 204: judge whether the starting position of the next first video file adjacent to the current first video file has surplus audio data frames; if so, execute step 205; otherwise, execute step 206.
Step 205: fill the surplus audio data frames of the next first video file into the ending position of the current first video file, and delete the surplus audio data frames from the next first video file, thereby obtaining the processed first video files.
Step 206: merge the segments of the first video files into a target video file to be played.
After the first video files are obtained, some may have been processed while others were obtained directly by cutting; the two segments of the first video files can be merged to obtain the target video file to be played. Of course, during the merging process the timestamps of the video streams in the first video files can be modified according to the reference time point and the crystal oscillator frequency to obtain the correct playback time.
It can be seen that in this implementation, by processing the audio stream, a target video file with no time difference between the audio stream and the video stream is obtained, which further improves the playback quality of the video file.
Embodiment two. Referring to Fig. 3, in this embodiment the video clipping process comprises:
Step 301: cut a source video file to obtain three segments of first video files to be merged.
The cut-point times in one cutting instruction are 8:00:00 and 8:10:00; in response to this cutting instruction, the first segment of the first video files is obtained. The cut-point times in another cutting instruction are 8:20:00 and 8:30:00; in response, the second segment of the first video files is obtained. The cut-point times in a third cutting instruction are 8:45:00 and 8:55:00; in response, the third segment of the first video files is obtained.
Specifically, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point times are obtained, yielding the first video files.
Step 302: whether the final position that judges every section of first video file lacks audio data frame, if so,Execution step 303, otherwise, execution step 306.
Step 303: the first video file is defined as to current the first video file.
Step 304: determine the time difference between that current the first video file sound intermediate frequency flows and video flowing.
According to the video time stamp corresponding with video flowing in current the first video file, and corresponding with audio streamAudio time stamp, obtain corresponding audio stream and video flowing between time difference. Particularly, by currentThe video time stamp of the first frame of video flowing in the first video file, with the audio frequency time of the first frame in audio streamStamp compares, and obtains the very first time poor; By looking of the last frame of video flowing in current the first video fileFrequently timestamp, compares with the audio time stamp of last frame in audio stream, obtains for the second time difference; WithAnd, poor and the second time difference according to the very first time, obtain audio stream and video flowing between time difference.
Step 305: Fill in the audio data frames corresponding to the time difference to obtain a processed first video file.
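Step 305 can be sketched as appending placeholder (e.g. silent) audio frames covering the time difference. The 20 ms audio frame duration is an assumed codec property, not stated in the patent:

```python
import math

def fill_missing_audio(frames, time_diff, frame_duration=0.02):
    """frames: list of (kind, ts) tuples; time_diff: seconds of audio to add."""
    # Start padding after the last existing audio timestamp.
    last_audio = max((ts for kind, ts in frames if kind == "audio"), default=0.0)
    # Number of placeholder audio frames needed to cover the gap.
    n = math.ceil(time_diff / frame_duration)
    padding = [("audio", last_audio + (i + 1) * frame_duration) for i in range(n)]
    return frames + padding

padded = fill_missing_audio([("video", 1.0), ("audio", 1.0)], 0.06)
```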
Step 306: Merge each segment of the first video file into the target video file to be played.
Among the first video files obtained, some may have been processed and others may have come directly from cutting; these segments of the first video file can then be merged to obtain the target video file to be played. In the merging process, the timestamps of the video stream in the first video files can also be modified according to a reference time point and the crystal oscillator frequency, so that correct playback times are obtained.
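The timestamp rewrite during merging could look like the sketch below. `tick_rate` stands in for the crystal-oscillator-derived clock (90 kHz is the MPEG-TS convention); the exact clock, container, and the duration simplification are assumptions:

```python
def merge_segments(segments, tick_rate=90000):
    """segments: lists of (kind, ts) tuples with seconds-based timestamps."""
    merged = []
    offset = 0.0  # reference time point for the next segment
    for seg in segments:
        base = min(ts for _, ts in seg)
        # Simplification: segment duration ignores the last frame's own duration.
        duration = max(ts for _, ts in seg) - base
        for kind, ts in seg:
            # Rebase each timestamp so segments play back consecutively,
            # expressed in clock ticks.
            merged.append((kind, round((offset + ts - base) * tick_rate)))
        offset += duration
    return merged

out = merge_segments([[("video", 10.0), ("video", 11.0)],
                      [("video", 50.0), ("video", 51.0)]])
```

Note how the second segment, originally starting at 50 s, is shifted to follow the first segment's 1-second span, which is the "modify timestamps according to a reference time point" behavior the paragraph above describes.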
It can be seen that in this embodiment, by processing the audio stream, a target video file with no time difference between the audio stream and the video stream can be obtained, further improving the playback quality of the video file.
The following are device embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure.
According to the above video clipping process, a video clipping device can be built. As shown in Figure 4, the device comprises: a cutting unit 410, a processing unit 420 and a merging unit 430, wherein,
the cutting unit 410 is configured to cut a source video file to obtain at least two segments of first video files to be merged;
the processing unit 420 is configured to check the end position of each segment of the first video file for audio data frames, and, when the end position of a first video file lacks audio data frames, perform audio fill-in processing on that first video file to obtain a corresponding processed first video file;
the merging unit 430 is configured to merge each segment of the first video file into a target video file to be played.
In one embodiment of the invention, as shown in Figure 5, the cutting unit 410 comprises: a receiving subunit 411 and an obtaining subunit 412, wherein,
the receiving subunit 411 is configured to receive a cutting instruction that comprises cut-point times, wherein the cut-point times include a start time and an end time;
the obtaining subunit 412 is configured to obtain the video data frames and audio data frames corresponding to the cut-point times according to the video timestamps corresponding to the video stream and the audio timestamps corresponding to the audio stream in the source video file, thereby obtaining the first video files.
In one embodiment of the invention, as shown in Figure 6, the processing unit 420 comprises: a checking subunit 421, a first fill-in subunit 422 and a deleting subunit 423, wherein,
the checking subunit 421 is configured to check the start position of the next first video file adjacent to the current first video file for audio data frames;
the first fill-in subunit 422 is configured to, when the start position of the next first video file has surplus audio data frames without video data frames, fill the surplus audio data frames into the end position of the current first video file;
the deleting subunit 423 is configured to delete the surplus audio data frames from the next first video file.
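The three subunits above can be sketched in one function: leading audio-only frames at the start of the next segment (audio frames preceding its first video frame) are treated as the surplus, moved to the current segment's end, and deleted from the next segment. The `(kind, ts)` tuple representation is an assumption for illustration:

```python
def move_surplus_audio(current, nxt):
    """current, nxt: adjacent segments as lists of (kind, ts) tuples."""
    # Checking subunit: locate the next segment's first video frame.
    first_video_ts = next((ts for kind, ts in nxt if kind == "video"), None)
    # Surplus = audio frames at the start with no accompanying video.
    surplus = [(k, ts) for k, ts in nxt
               if k == "audio" and (first_video_ts is None or ts < first_video_ts)]
    # First fill-in subunit: append the surplus to the current segment's end.
    # Deleting subunit: drop those frames from the next segment.
    kept = [f for f in nxt if f not in surplus]
    return current + surplus, kept

cur, nxt = move_surplus_audio(
    [("video", 1.0), ("audio", 1.0)],
    [("audio", 1.5), ("audio", 1.8), ("video", 2.0), ("audio", 2.1)])
```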
In one embodiment of the invention, as shown in Figure 7, the processing unit 420 can comprise: a determining subunit 424 and a second fill-in subunit 425, wherein,
the determining subunit is configured to determine the time difference between the audio stream and the video stream in the first video file;
the second fill-in subunit is configured to fill in the audio data frames corresponding to the time difference.
In one embodiment of the invention, the determining subunit 424 is specifically configured to compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
It can be seen that the video clipping device of the embodiments of the present invention can adjust the cut first video files according to the audio data frames and merge the adjusted first video files, so that when the merged target video file is played there is no time difference between the audio and the video, improving the playback quality of the clipped video file.
Those skilled in the art should understand that embodiments of the invention can be provided as a method, a system or a computer program product. Therefore, the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disc storage, optical storage and the like) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of the method, the equipment (system) and the computer program product according to the embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing equipment to work in a particular manner, so that the instructions stored in the computer-readable memory produce a manufactured article comprising an instruction device, which realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thereby provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from the spirit and scope of the invention. Thus, if these changes and modifications of the invention fall within the scope of the claims of the invention and their technical equivalents, the invention is also intended to include them.
Claims (10)
1. A video clipping method, characterized by comprising:
cutting a source video file to obtain at least two segments of first video files to be merged;
checking the end position of each segment of said first video file for audio data frames, and when the end position of said first video file lacks audio data frames, performing audio fill-in processing on said first video file to obtain a corresponding processed first video file;
merging each segment of the first video file into a target video file to be played.
2. The method as claimed in claim 1, characterized in that, when the end position of said first video file lacks audio data frames, performing audio fill-in processing on said first video file comprises:
checking the start position of the next first video file adjacent to the current first video file for audio data frames;
when the start position of said next first video file has surplus audio data frames without video data frames, filling said surplus audio data frames into the end position of said current first video file;
deleting the surplus audio data frames from said next first video file.
3. the method for claim 1, is characterized in that, described source video file cut,Obtaining at least two sections of first video files to be combined comprises:
The cutting instruction that reception comprises the point of contact time, wherein, the described point of contact time comprises: initial time and endThe only time;
According to the video time stamp corresponding with video flowing in institute's source video file, and the sound corresponding with audio streamFrequently timestamp, obtains the video data frame corresponding with the described point of contact time and audio data frame, obtains described theOne video file.
4. method as claimed in claim 3, is characterized in that, described when described the first video fileWhen final position lacks audio data frame, to described the first video file carry out audio frequency fill into process comprise:
Determine the time difference between that described the first video file sound intermediate frequency flows and video flowing;
Fill into the audio data frame corresponding with the described time difference.
5. method as claimed in claim 4, is characterized in that, described definite described the first video fileSound intermediate frequency stream and video flowing between time difference comprise:
By the video time stamp of the first frame of video flowing in described the first video file, with the first frame in audio streamAudio time stamp compare, obtain the very first time poor;
By the video time stamp of the last frame of video flowing in described the first video file, with last in audio streamThe audio time stamp of one frame compares, and obtains for the second time difference;
Poor and described the second time difference according to described very first time, obtain described audio stream and described video flowingBetween time difference.
6. A video clipping device, characterized by comprising:
a cutting unit, configured to cut a source video file to obtain at least two segments of first video files to be merged;
a processing unit, configured to check the end position of each segment of said first video file for audio data frames, and when the end position of said first video file lacks audio data frames, perform audio fill-in processing on said first video file to obtain a corresponding processed first video file;
a merging unit, configured to merge each segment of the first video file into a target video file to be played.
7. The device as claimed in claim 6, characterized in that said processing unit comprises:
a checking subunit, configured to check the start position of the next first video file adjacent to the current first video file for audio data frames;
a first fill-in subunit, configured to, when the start position of said next first video file has surplus audio data frames without video data frames, fill said surplus audio data frames into the end position of said current first video file;
a deleting subunit, configured to delete the surplus audio data frames from said next first video file.
8. The device as claimed in claim 6, characterized in that said cutting unit comprises:
a receiving subunit, configured to receive a cutting instruction that comprises cut-point times, wherein said cut-point times comprise a start time and an end time;
an obtaining subunit, configured to obtain the video data frames and audio data frames corresponding to said cut-point times according to the video timestamps corresponding to the video stream and the audio timestamps corresponding to the audio stream in said source video file, thereby obtaining said first video files.
9. The device as claimed in claim 8, characterized in that said processing unit comprises:
a determining subunit, configured to determine the time difference between the audio stream and the video stream in said first video file;
a second fill-in subunit, configured to fill in the audio data frames corresponding to said time difference.
10. The device as claimed in claim 6, characterized in that
said determining subunit is specifically configured to: compare the video timestamp of the first frame of the video stream in said first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream in said first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain the time difference between said audio stream and said video stream according to said first time difference and said second time difference.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510968558.3A CN105611401B (en) | 2015-12-18 | 2015-12-18 | A kind of method and apparatus of video clipping |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510968558.3A CN105611401B (en) | 2015-12-18 | 2015-12-18 | A kind of method and apparatus of video clipping |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105611401A true CN105611401A (en) | 2016-05-25 |
CN105611401B CN105611401B (en) | 2018-08-24 |
Family
ID=55990886
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510968558.3A Expired - Fee Related CN105611401B (en) | 2015-12-18 | 2015-12-18 | A kind of method and apparatus of video clipping |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105611401B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108616768A (en) * | 2018-05-02 | 2018-10-02 | 腾讯科技(上海)有限公司 | Synchronous broadcast method, device, storage location and the electronic device of multimedia resource |
CN111601162A (en) * | 2020-06-08 | 2020-08-28 | 北京世纪好未来教育科技有限公司 | Video segmentation method and device and computer storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101771869A (en) * | 2008-12-30 | 2010-07-07 | 深圳市万兴软件有限公司 | AV (audio/video) encoding and decoding device and method |
CN102316358A (en) * | 2011-09-02 | 2012-01-11 | 惠州Tcl移动通信有限公司 | Method for recording streaming media file and corresponding equipment |
CN103096184A (en) * | 2013-01-18 | 2013-05-08 | 深圳市龙视传媒有限公司 | Method and device for video editing |
CN103167342A (en) * | 2013-03-29 | 2013-06-19 | 天脉聚源(北京)传媒科技有限公司 | Audio and video synchronous processing device and method |
US20140186003A1 (en) * | 2009-11-06 | 2014-07-03 | Telefonaktiebolaget L M Ericsson (Publ) | File Format for Synchronized Media |
- 2015-12-18: CN CN201510968558.3A patent filed; granted as CN105611401B, now not active (Expired - Fee Related)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101771869A (en) * | 2008-12-30 | 2010-07-07 | 深圳市万兴软件有限公司 | AV (audio/video) encoding and decoding device and method |
US20140186003A1 (en) * | 2009-11-06 | 2014-07-03 | Telefonaktiebolaget L M Ericsson (Publ) | File Format for Synchronized Media |
CN102316358A (en) * | 2011-09-02 | 2012-01-11 | 惠州Tcl移动通信有限公司 | Method for recording streaming media file and corresponding equipment |
CN103096184A (en) * | 2013-01-18 | 2013-05-08 | 深圳市龙视传媒有限公司 | Method and device for video editing |
CN103167342A (en) * | 2013-03-29 | 2013-06-19 | 天脉聚源(北京)传媒科技有限公司 | Audio and video synchronous processing device and method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108616768A (en) * | 2018-05-02 | 2018-10-02 | 腾讯科技(上海)有限公司 | Synchronous broadcast method, device, storage location and the electronic device of multimedia resource |
CN108616768B (en) * | 2018-05-02 | 2021-10-15 | 腾讯科技(上海)有限公司 | Synchronous playing method and device of multimedia resources, storage position and electronic device |
CN111601162A (en) * | 2020-06-08 | 2020-08-28 | 北京世纪好未来教育科技有限公司 | Video segmentation method and device and computer storage medium |
CN111601162B (en) * | 2020-06-08 | 2022-08-02 | 北京世纪好未来教育科技有限公司 | Video segmentation method and device and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105611401B (en) | 2018-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6920181B1 (en) | Method for synchronizing audio and video streams | |
CN105721811A (en) | Live video recording method and system | |
CN106331869A (en) | Video-based picture re-editing method and device | |
CN101331761B (en) | Information processing device and information processing method | |
CN105979267A (en) | Video compression and play method and device | |
JP2007060286A (en) | Content-editing device and reproducing device thereof | |
CN105828220A (en) | Method and device of adding audio file in video file | |
CN104185088B (en) | A kind of method for processing video frequency and device | |
CN105681891A (en) | Mobile terminal used method for embedding user video in scene | |
WO2016168984A1 (en) | Media editing method, a media editor and a media computer | |
CN1259735A (en) | Additive information prodn. method, recording medium, and recording, edit and producing device | |
CN105611401A (en) | Video cutting method and video cutting device | |
EP2159797B1 (en) | Audio signal generator, method of generating an audio signal, and computer program for generating an audio signal | |
CN104822087B (en) | A kind of processing method and processing device of video-frequency band | |
WO2019042217A1 (en) | Video editing method and terminal | |
CN105530534A (en) | Video clipping method and apparatus | |
CN104780456A (en) | Video dotting and playing method and device | |
JP2007535781A (en) | Frame unit (FRAME-ACCURATE) editing method and system | |
CN105592321A (en) | Method and device for clipping video | |
JP6269734B2 (en) | Movie data editing device, movie data editing method, playback device, and program | |
US10217489B2 (en) | Systems and methods for media track management in a media editing tool | |
US20050201724A1 (en) | Method and system for effect addition in a single multimedia clip | |
CN105578261A (en) | Video editing method and device | |
CN105578260A (en) | Video editing method and device | |
CN104135628A (en) | Video editing method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PE01 | Entry into force of the registration of the contract for pledge of patent right | ||
Denomination of invention: A method and device for video editing Effective date of registration: 20210104 Granted publication date: 20180824 Pledgee: Inner Mongolia Huipu Energy Co.,Ltd. Pledgor: WUXI TVMINING MEDIA SCIENCE & TECHNOLOGY Co.,Ltd. Registration number: Y2020990001517 |
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20180824 |