CN105592321A - Method and device for clipping video - Google Patents

Method and device for clipping video

Info

Publication number
CN105592321A
Authority
CN
China
Prior art keywords
video
video file
time
audio
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510954690.9A
Other languages
Chinese (zh)
Inventor
武悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Tvmining Juyuan Media Technology Co Ltd
Original Assignee
Wuxi Tvmining Juyuan Media Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Tvmining Juyuan Media Technology Co Ltd filed Critical Wuxi Tvmining Juyuan Media Technology Co Ltd
Priority to CN201510954690.9A
Publication of CN105592321A
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a method and device for clipping a video, to improve the playback quality of a clipped video file. The method comprises the steps of: cutting a source video file to obtain at least two first video files to be merged; determining a time difference between the audio stream and the video stream in each first video file; processing, according to the determined time difference, the audio data frames in each first video file to obtain a corresponding second video file, wherein the audio stream and the video stream in each second video file have no time difference; and merging the second video files into a target video file to be played. Because the cut first video files are adjusted through their audio data frames, the audio stream and the video stream in each second video file have no time difference. Therefore, when the merged target video file is played there is no time difference between its audio stream and video stream, so the playback quality of the clipped video file is improved.

Description

Method and apparatus for video clipping
Technical field
The present invention relates to the field of multimedia technology, and in particular to a method and apparatus for video clipping.
Background art
With the development of multimedia technology, the playback of video, audio, pictures, and other media has become familiar to users. When playing multimedia content such as video, a user may not need to watch the entire video, or may only want to watch certain segments of it. In such cases the video must be clipped: several segments are cut from the source video file according to the user's needs and then merged into a new video file for playback.
At present, some video-clipping software can perform non-linear editing on a source video file: following user instructions, it searches the video until a cut point is found, splits the video into multiple segment videos, and then merges the segments the user wants to watch into a new video file for playback. This operation is very fast, but the new video file may have a time difference between its audio and video, or some video frame data may be incomplete and impossible to play.
Summary of the invention
The present invention provides a method and apparatus for video clipping, in order to improve the playback quality of a clipped video file.
The invention provides a method of video clipping, the method comprising:
cutting a source video file to obtain at least two segments of first video files to be merged;
determining the time difference between the audio stream and the video stream in each segment of the first video files;
processing, according to the determined time difference, the audio data frames in each segment of the first video files to obtain a corresponding second video file, wherein there is no time difference between the audio stream and the video stream in each second video file; and
merging each segment of the second video files into a target video file to be played.
In one embodiment of the invention, cutting the source video file to obtain at least two segments of first video files to be merged comprises:
receiving a cutting instruction that contains a cut-point time, wherein the cut-point time comprises a start time and an end time; and
obtaining, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point time, to obtain the first video files.
In one embodiment of the invention, determining the time difference between the audio stream and the video stream in each segment of the first video files comprises:
comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream, to obtain a first time difference;
comparing the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream, to obtain a second time difference; and
obtaining the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
In one embodiment of the invention, processing the audio data frames in each segment of the first video files according to the determined time difference to obtain the corresponding second video file comprises:
if the duration of the audio stream in the current first video file is less than that of the video stream, padding in the audio data frames corresponding to the time difference, to obtain the second video file; and
if the duration of the audio stream in the current first video file is greater than that of the video stream, deleting the audio data frames corresponding to the time difference, to obtain the second video file.
In one embodiment of the invention, merging each segment of the second video files into the video file to be played comprises:
converting the timestamps of the video stream in each segment of the second video files according to a crystal-oscillator frequency, to determine the playback time of the target video file.
The invention also provides an apparatus for video clipping, the apparatus comprising:
a cutting unit, configured to cut a source video file to obtain at least two segments of first video files to be merged;
a determining unit, configured to determine the time difference between the audio stream and the video stream in each segment of the first video files;
a processing unit, configured to process, according to the determined time difference, the audio data frames in each segment of the first video files to obtain a corresponding second video file, wherein there is no time difference between the audio stream and the video stream in each second video file; and
a merging unit, configured to merge each segment of the second video files into a target video file to be played.
In one embodiment of the invention, the cutting unit comprises:
a receiving sub-unit, configured to receive a cutting instruction that contains a cut-point time, wherein the cut-point time comprises a start time and an end time; and
an obtaining sub-unit, configured to obtain, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point time, to obtain the first video files.
In one embodiment of the invention, the determining unit is specifically configured to compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
In one embodiment of the invention, the processing unit is specifically configured to: if the duration of the audio stream in the current first video file is less than that of the video stream, pad in the audio data frames corresponding to the time difference to obtain the second video file; and if the duration of the audio stream in the current first video file is greater than that of the video stream, delete the audio data frames corresponding to the time difference to obtain the second video file.
In one embodiment of the invention, the merging unit is specifically configured to convert the timestamps of the video stream in each segment of the second video files according to a crystal-oscillator frequency, to determine the playback time of the target video file.
Some beneficial effects of the embodiments of the present invention include:
The first video files obtained by cutting are adjusted through their audio data frames, so that there is no time difference between the audio stream and the video stream in the resulting second video files. Therefore, when the merged target video file is played, there is no time difference between its audio and video, which improves the playback quality of the clipped video file.
Other features and advantages of the present invention will be set forth in the following description, will in part become apparent from the description, or may be learned by practicing the invention. The objects and other advantages of the invention may be realized and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention they serve to explain the present invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flow chart of video clipping according to an exemplary embodiment;
Fig. 2 is a flow chart of video clipping according to exemplary embodiment one;
Fig. 3 is a flow chart of video clipping according to exemplary embodiment two;
Fig. 4 is a structure chart of an apparatus for video clipping according to an exemplary embodiment;
Fig. 5 is a structure chart of the cutting unit 410 according to an exemplary embodiment.
Detailed description of the invention
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are only intended to describe and explain the present invention, not to limit it.
In the technical solution provided by the embodiments of the disclosure, a source video file is cut into at least two first video files, and the cut first video files are adjusted through their audio data frames, so that there is no time difference between the audio stream and the video stream in the resulting second video files. Therefore, when the merged target video file is played, there is no time difference between its audio and video, which improves the playback quality of the clipped video file.
Fig. 1 is a flow chart of video clipping according to an exemplary embodiment. As shown in Fig. 1, the process of video clipping comprises:
Step 101: cut the source video file to obtain at least two segments of first video files to be merged.
Here, the video file to be clipped is the source video file. The source video file contains a video stream and an audio stream; each video data frame in the video stream has a corresponding video timestamp, and each audio data frame in the audio stream has a corresponding audio timestamp.
Therefore, the source video file can be cut according to an input cut-point time together with the video timestamps and audio timestamps. Specifically, this may comprise: receiving a cutting instruction that contains a cut-point time, wherein the cut-point time comprises a start time and an end time; and then obtaining, according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, the video data frames and audio data frames corresponding to the cut-point time, to obtain the first video files.
Taking the video stream as an example, suppose the playback speed of the source video file is 25 FPS, that is, 25 video data frames are played per second and each frame plays for 40 ms. Then, taking 8:00 as the reference starting point, the video timestamp of the first video data frame is 8:00:00.000; the video timestamp of the second video data frame is 8:00:00.040; the video timestamp of the third video data frame is 8:00:00.080; and so on, the video timestamp of the 9000th video data frame being 8:06:00.000.
After the cut-point time has been determined according to application requirements, a cutting instruction containing the cut-point time can be input and received. For example, if the cut-point time in the cutting instruction comprises a start time of 8:00:00.000 and an end time of 8:05:00.120, the video data frames whose video timestamps fall between the start time and the end time can be obtained.
Of course, the audio data frames of the audio stream of the source video file also carry audio timestamps, so the audio data frames whose audio timestamps fall between the start time and the end time can likewise be obtained, thereby obtaining the first video file.
Of course, first video files corresponding to other cut-point times can also be obtained, for example a first video file from 8:20:00 to 8:25:10.
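As a hedged illustration of this cutting step (the patent gives no code, so the `Frame` type, millisecond timestamps, and function names below are assumptions made for the sketch), selecting the data frames of one stream whose timestamps fall inside the cut-point interval can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pts_ms: int           # presentation timestamp, in ms from the reference point
    payload: bytes = b""  # encoded frame data (omitted in this sketch)

def cut_stream(frames, start_ms, end_ms):
    """Keep the frames whose timestamp lies inside [start_ms, end_ms]."""
    return [f for f in frames if start_ms <= f.pts_ms <= end_ms]

# 25 FPS video as in the example above: one frame every 40 ms.
video = [Frame(pts_ms=i * 40) for i in range(9001)]

# Cut-point: start 8:00:00.000, end 8:05:00.120 (i.e. 0 ms .. 300120 ms).
clip = cut_stream(video, 0, 5 * 60_000 + 120)
print(len(clip))  # 7504 frames: timestamps 0, 40, ..., 300120 ms
```

The same selection would be applied independently to the audio data frames using their audio timestamps, and the two selected streams together form the first video file.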
Step 102: determine the time difference between the audio stream and the video stream in each segment of the first video files.
The source video file contains a video stream and an audio stream, and the two are not necessarily synchronized; thus there may be a time difference between the audio stream and the video stream in a first video file. For example, the source video file may have video data frames starting from 8:00:00, but no sound, and hence no audio data frames, until 8:00:30. If the source video file is cut starting from 8:00:00, there may then be a time difference between the audio stream and the video stream in the first video file.
Here, after each segment of the first video files is obtained, the time difference between the audio stream and the video stream in each segment can be determined.
Since each video data frame in the video stream has a corresponding video timestamp and each audio data frame in the audio stream has a corresponding audio timestamp, comparing the corresponding timestamps can determine the time difference between the audio stream and the video stream in a first video file. Specifically, this may comprise: comparing the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; comparing the video timestamp of the last frame of the video stream with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtaining the time difference between the audio stream and the video stream according to the first time difference and the second time difference. Alternatively, a first time corresponding to the video stream in the first video file and a second time corresponding to the audio stream can each be obtained, and the time difference between the audio stream and the video stream then obtained from the difference between the first time and the second time.
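A minimal sketch of this determination follows, under the assumption (not spelled out by the patent) that the first and second time differences are combined by averaging; timestamps are plain millisecond integers:

```python
def av_time_diff(video_pts, audio_pts):
    """Per-frame timestamp lists in ms; a negative result means the
    audio stream starts later than the video stream."""
    first_diff = video_pts[0] - audio_pts[0]     # "first time difference"
    second_diff = video_pts[-1] - audio_pts[-1]  # "second time difference"
    return (first_diff + second_diff) / 2        # combining rule: our assumption

# Audio starts 30 s after video, as in the 8:00:00 vs 8:00:30 example above.
video = [0, 40, 80, 120]
audio = [30_000, 30_040, 30_080, 30_120]
print(av_time_diff(video, audio))  # -30000.0
```

Averaging the two differences tolerates small jitter at either end; using only the first-frame difference would be an equally plausible reading of the text.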
Step 103: process, according to the determined time difference, the audio data frames in each segment of the first video files to obtain a corresponding second video file, wherein there is no time difference between the audio stream and the video stream in the second video file.
After the time difference has been determined, the audio data frames of the first video file corresponding to that time difference can be processed according to it, to obtain the corresponding second video file. The processing may operate on the audio stream only, and may comprise: if the duration of the audio stream in the current first video file is less than that of the video stream, padding in the audio data frames corresponding to the time difference to obtain the second video file; if the duration of the audio stream in the current first video file is greater than that of the video stream, deleting the audio data frames corresponding to the time difference to obtain the second video file. For example: the audio stream and the video stream of the first video file start synchronously, but the timestamp of the last audio data frame of the audio stream is 8:29:59.400 while the timestamp of the last video data frame of the video stream is 8:30:00.000; the audio data frames corresponding to the missing 600 ms can then be padded in, obtaining the second video file.
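The pad-or-delete step can be sketched as below; the 20 ms audio-frame duration is an illustrative assumption (the patent does not fix one), and frames are represented by their timestamps alone:

```python
AUDIO_FRAME_MS = 20  # assumed audio frame duration; not specified by the patent

def align_audio(audio_pts, diff_ms, frame_ms=AUDIO_FRAME_MS):
    """audio_pts: per-frame audio timestamps in ms.
    diff_ms: video duration minus audio duration.
    Positive -> audio is short, pad frames in; negative -> delete trailing frames."""
    n = abs(diff_ms) // frame_ms
    if diff_ms > 0:
        start = audio_pts[-1] + frame_ms
        return audio_pts + [start + i * frame_ms for i in range(n)]
    if diff_ms < 0 and n:
        return audio_pts[:-n]
    return audio_pts

# The 600 ms example above: audio ends 600 ms before the video does.
audio = list(range(0, 1000, 20))   # 50 frames, last at 980 ms
padded = align_audio(audio, 600)
print(len(padded), padded[-1])     # 80 1580  (30 silent frames padded in)
```

In practice the padded frames would carry silent audio payloads so the decoder has something to play during the gap.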
Step 104: merge each segment of the second video files into a target video file to be played.
Having obtained the second video files, each segment of the second video files can be merged to obtain the target video file to be played.
For example, the first second-video-file segment starts at 8:00:00.000 and ends at 8:05:00.030, while the second segment starts at 8:12:00.500 and ends at 8:20:00.000. When merging these two segments, the video timestamps of the video stream in the second video files, and the audio timestamps of the audio stream, must also be modified. The timestamps of the video stream in each segment of the second video files can be converted according to a crystal-oscillator frequency, to determine the playback time of the target video file. If the playback frame rate is still 25 FPS, the video timestamp of the first video data frame in the video stream of the second segment may start at 8:05:00.070, and the video timestamps of every video data frame of that second video file are modified in turn. The playback frame rate corresponds to the crystal-oscillator frequency; for example, when each frame plays for 21 ms, the corresponding crystal oscillator is 1920 Hz. The timestamps of the video stream in each segment of the second video files are converted according to the crystal-oscillator frequency, and the playback time of the target video file is thereby determined.
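Rebasing the timestamps when concatenating segments can be sketched as follows. The function works in milliseconds at a 40 ms (25 FPS) frame period and assumes, as in the 8:05:00.030 → 8:05:00.070 example above, that each segment starts exactly one frame period after the previous one ends; conversion to a crystal-derived clock is left out:

```python
def merge_segments(segments, frame_ms=40):
    """segments: list of per-frame timestamp lists (ms).
    Returns one timestamp list, rebased relative to the first segment's start."""
    merged, offset = [], 0
    for seg in segments:
        base = seg[0]
        merged.extend(offset + (pts - base) for pts in seg)
        offset = merged[-1] + frame_ms  # next segment starts one frame period later
    return merged

first = [0, 40, 80]                   # clip starting at 8:00:00.000
second = [720_500, 720_540, 720_580]  # clip originally starting at 8:12:00.500
print(merge_segments([first, second]))  # [0, 40, 80, 120, 160, 200]
```

The audio timestamps would be rebased with the same offsets so that the streams stay aligned after merging.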
It can be seen that in the embodiments of the present invention, the first video files obtained by cutting are adjusted through their audio data frames, so that there is no time difference between the audio stream and the video stream in the resulting second video files. Therefore, when the merged target video file is played, there is no time difference between its audio and video, which improves the playback quality of the clipped video file.
The method provided by the embodiments of the disclosure is illustrated below through the operating processes set out in specific embodiments.
Embodiment one. In this embodiment, a source video file whose playback time runs from 8:00:00 to 9:00:00 is cut, obtaining one first video file whose timestamps run from 8:00:00 to 8:10:00 and another first video file whose timestamps run from 8:20:00 to 8:30:00. Each first video file is then adjusted to obtain a corresponding second video file, and the two second video files are merged into a target video file to be played. Referring to Fig. 2, in this embodiment the process of video clipping comprises:
Step 201: cut the source video file to obtain two segments of first video files to be merged.
The cut-point times in one cutting instruction are 8:00:00 and 8:10:00 respectively; in response to this cutting instruction, the first first-video-file segment is obtained. The cut-point times in another cutting instruction are 8:20:00 and 8:30:00 respectively; in response to that cutting instruction, the second first-video-file segment is obtained. Specifically, the video data frames and audio data frames corresponding to the cut-point times can be obtained according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, obtaining the first video files.
Step 202: determine the time difference between the audio stream and the video stream in each segment of the first video files.
Having obtained the two segments of first video files, the time difference between the corresponding audio stream and video stream can be obtained according to the video timestamps corresponding to the video stream and the audio timestamps corresponding to the audio stream in each segment. Specifically: compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream to obtain a first time difference; compare the video timestamp of the last frame of the video stream with the audio timestamp of the last frame of the audio stream to obtain a second time difference; and obtain the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
Step 203: process, according to the determined time difference, the audio data frames in the first video file corresponding to the time difference, to obtain the corresponding second video file.
In this embodiment, only the audio stream is processed, obtaining a second video file with no time difference between its audio stream and video stream. Specifically, if the duration of the audio stream in the current first video file is less than that of the video stream, the audio data frames corresponding to the time difference are padded in to obtain the second video file; if the duration of the audio stream in the current first video file is greater than that of the video stream, the audio data frames corresponding to the time difference are deleted to obtain the second video file.
Step 204: merge the two segments of second video files into a target video file to be played.
After the second video files are obtained, the two segments can be merged to obtain the target video file to be played. Of course, during merging, the timestamps of the video stream in the second video files can be modified according to a reference time point and the crystal-oscillator frequency, obtaining the correct playback time.
It can be seen that in this embodiment, by processing the audio stream, second video files with no time difference between audio stream and video stream are obtained, thereby producing a higher-quality target video file and further improving the playback quality of the video file.
Embodiment two. In this embodiment, a source video file whose playback time runs from 8:00:00 to 9:00:00 is cut, obtaining a first video file whose timestamps run from 8:00:00 to 8:10:00, a first video file from 8:20:00 to 8:30:00, and a first video file from 8:45:00 to 8:55:00. Each first video file is then adjusted to obtain a corresponding second video file, and the three second video files are merged into a target video file to be played. Referring to Fig. 3, in this embodiment the process of video clipping comprises:
Step 301: cut the source video file to obtain three segments of first video files to be merged.
The cut-point times in one cutting instruction are 8:00:00 and 8:10:00 respectively; in response to this cutting instruction, the first first-video-file segment is obtained. The cut-point times in another cutting instruction are 8:20:00 and 8:30:00 respectively; in response to that cutting instruction, the second first-video-file segment is obtained. The cut-point times in a further cutting instruction are 8:45:00 and 8:55:00 respectively; in response to that cutting instruction, the third first-video-file segment is obtained.
Specifically, the video data frames and audio data frames corresponding to the cut-point times can be obtained according to the video timestamps corresponding to the video stream in the source video file and the audio timestamps corresponding to the audio stream, obtaining the first video files.
Step 302: determine the time difference between the audio stream and the video stream in each segment of the first video files.
Having obtained the first video files, the time difference between the corresponding audio stream and video stream can be obtained according to the video timestamps corresponding to the video stream and the audio timestamps corresponding to the audio stream in each segment. Specifically: a first time is computed from the video timestamp of the first frame and the video timestamp of the last frame of the video stream in the first video file; a second time is computed from the audio timestamp of the first frame and the audio timestamp of the last frame of the audio stream; and the difference between the first time and the second time can serve as the time difference.
Step 303: process, according to the determined time difference and the audio data frames, the first video file corresponding to the time difference, to obtain the corresponding second video file.
Here, if the duration of the audio stream in the current first video file is less than that of the video stream, the audio data frames corresponding to the time difference are padded in to obtain the second video file. Of course, compensation can also be performed by adding video data frames: if the duration of the audio stream in the current first video file is greater than that of the video stream, the video data frames corresponding to the time difference are padded in, obtaining the second video file.
Step 304: three section of second video file is merged into target video file to be played.
Get after the second video file, three section of second video file can be merged, obtain to be playedTarget video file. Certainly in the process merging, according to reference time point, and crystal oscillator frequency, canTimestamp to the second video file video flowing is modified, and obtains correct reproduction time.
It can be seen that in this embodiment the audio stream or the video stream can be processed to obtain second video files in which there is no time difference between the audio stream and the video stream, so a higher-quality target video file is obtained, further improving the playback quality of the video file.
The following are device embodiments of the present disclosure, which can be used to perform the method embodiments of the present disclosure.
According to the above video clipping process, a video clipping device can be built. As shown in Figure 4, the device comprises: a cutting unit 410, a determining unit 420, a processing unit 430 and a merging unit 440, wherein,
the cutting unit 410 is configured to cut the source video file, to obtain at least two segments of first video files to be merged.
The determining unit 420 is configured to determine the time difference between the audio stream and the video stream in each segment of the first video file.
The processing unit 430 is configured to process the audio data frames in each segment of the first video file according to the determined time difference, to obtain corresponding second video files, wherein there is no time difference between the audio stream and the video stream in the second video file.
The merging unit 440 is configured to merge each segment of the second video file into the target video file to be played.
In one embodiment of the invention, as shown in Figure 5, the cutting unit 410 comprises: a receiving subunit 411 and an obtaining subunit 412.
The receiving subunit 411 is configured to receive a cutting instruction containing cut-point times, wherein the cut-point times include: a start time and an end time.
The obtaining subunit 412 is configured to obtain the video data frames and audio data frames corresponding to the cut-point times according to the video timestamps corresponding to the video stream and the audio timestamps corresponding to the audio stream in the source video file, to obtain the first video file.
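The obtaining subunit's frame selection can be sketched as follows: keep exactly the frames whose timestamps fall between the start and end cut-point times. The `Frame` record and the 90 kHz clock are assumptions used only for illustration.

```python
from collections import namedtuple

Frame = namedtuple("Frame", ["pts"])  # hypothetical frame with a PTS in clock ticks
CLOCK_HZ = 90_000                      # assumed 90 kHz system clock

def cut_stream(frames, start_s, end_s, clock_hz=CLOCK_HZ):
    """Keep the frames whose timestamps lie inside the cut-point interval
    [start_s, end_s] (both in seconds)."""
    lo, hi = start_s * clock_hz, end_s * clock_hz
    return [f for f in frames if lo <= f.pts <= hi]

video = [Frame(i * 3_600) for i in range(250)]  # 10 s of 25 fps video (assumed)
clip = cut_stream(video, 2.0, 4.0)
print(len(clip))  # 51 frames, PTS 180_000 .. 360_000 inclusive
```

The same selection would be applied independently to the audio stream using its own timestamps, which is precisely why the two cut streams can end up with slightly different durations: the time difference that steps 302-303 then correct.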
In one embodiment of the invention, the determining unit 420 is specifically configured to: compare the video timestamp of the first frame of the video stream in the first video file with the audio timestamp of the first frame of the audio stream, to obtain a first time difference; compare the video timestamp of the last frame of the video stream in the first video file with the audio timestamp of the last frame of the audio stream, to obtain a second time difference; and obtain the time difference between the audio stream and the video stream according to the first time difference and the second time difference.
In one embodiment of the invention, the processing unit 430 is specifically configured to: if the duration of the audio stream in the current first video file is less than that of the video stream, pad in audio data frames corresponding to the time difference, to obtain the second video file; if the duration of the audio stream in the current first video file is greater than that of the video stream, delete audio data frames corresponding to the time difference, to obtain the second video file.
In one embodiment of the invention, the merging unit 440 is specifically configured to convert the timestamps of the video stream in each segment of the second video file according to the crystal oscillator frequency, and determine the playback time of the target video file.
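The timestamp-to-playback-time conversion can be sketched in one line: divide the rebased tick span by the clock frequency. The 90 kHz "crystal oscillator" rate and the 3 600-tick display period of the last frame are assumptions for the example, not values from the patent.

```python
CLOCK_HZ = 90_000  # assumed crystal-oscillator / system-clock frequency

def playback_duration_seconds(first_pts, last_pts, ticks_per_frame=3_600):
    """Convert the merged file's video timestamps into wall-clock playback
    time, counting the display period of the final frame as well."""
    return (last_pts - first_pts + ticks_per_frame) / CLOCK_HZ

# 250 frames at 25 fps: last PTS is 249 * 3_600 = 896_400 ticks.
print(playback_duration_seconds(0, 896_400))  # 10.0
```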
It can be seen that the video clipping device of this embodiment of the invention can adjust the cut first video file according to audio frames, and there is no time difference between the audio stream and the video stream in the second video file obtained after adjustment. Thus, when the merged target video file is played, no time difference exists between audio and video, improving the playback quality of the clipped video file.
Those skilled in the art should understand that embodiments of the invention can be provided as a method, a system, or a computer program product. Therefore, the present invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device, which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (10)

1. A video clipping method, characterized in that it comprises:
cutting a source video file, to obtain at least two segments of first video files to be merged;
determining a time difference between an audio stream and a video stream in each segment of said first video file;
processing audio data frames in each segment of said first video file according to the determined time difference, to obtain corresponding second video files, wherein there is no time difference between the audio stream and the video stream in said second video file;
merging each segment of said second video file into a target video file to be played.
2. the method for claim 1, is characterized in that, described source video file cut,Obtaining at least two sections of first video files to be combined comprises:
The cutting instruction that reception comprises the point of contact time, wherein, the described point of contact time comprises: initial time and endThe only time;
According to the video time stamp corresponding with video flowing in the video file of described source, and corresponding with audio streamAudio time stamp, obtains the video data frame corresponding with the described point of contact time and audio data frame, described in obtainingThe first video file.
3. The method of claim 2, characterized in that said determining the time difference between the audio stream and the video stream in each segment of said first video file comprises:
comparing the video timestamp of the first frame of the video stream in said first video file with the audio timestamp of the first frame of the audio stream, to obtain a first time difference;
comparing the video timestamp of the last frame of the video stream in said first video file with the audio timestamp of the last frame of the audio stream, to obtain a second time difference;
obtaining the time difference between said audio stream and said video stream according to said first time difference and said second time difference.
4. The method of claim 3, characterized in that said processing the audio data frames in each segment of said first video file according to the determined time difference, to obtain the corresponding second video file, comprises:
if the duration of the audio stream in the current first video file is less than that of the video stream, padding in audio data frames corresponding to said time difference, to obtain the second video file;
if the duration of the audio stream in the current first video file is greater than that of the video stream, deleting audio data frames corresponding to said time difference, to obtain the second video file.
5. the method for claim 1, is characterized in that, described by every section of described second video literary compositionPart is merged into video file to be played and comprises:
According to crystal oscillator frequency, the timestamp of video flowing in every section of second video file is converted, determine instituteState the reproduction time of target video file.
6. A video clipping device, characterized in that it comprises:
a cutting unit, configured to cut a source video file, to obtain at least two segments of first video files to be merged;
a determining unit, configured to determine a time difference between an audio stream and a video stream in each segment of said first video file;
a processing unit, configured to process audio data frames in each segment of said first video file according to the determined time difference, to obtain corresponding second video files, wherein there is no time difference between the audio stream and the video stream in said second video file;
a merging unit, configured to merge each segment of said second video file into a target video file to be played.
7. The device of claim 6, characterized in that said cutting unit comprises:
a receiving subunit, configured to receive a cutting instruction containing cut-point times, wherein said cut-point times include: a start time and an end time;
an obtaining subunit, configured to obtain video data frames and audio data frames corresponding to said cut-point times according to video timestamps corresponding to the video stream and audio timestamps corresponding to the audio stream in said source video file, to obtain said first video file.
8. The device of claim 7, characterized in that
said determining unit is specifically configured to: compare the video timestamp of the first frame of the video stream in said first video file with the audio timestamp of the first frame of the audio stream, to obtain a first time difference; compare the video timestamp of the last frame of the video stream in said first video file with the audio timestamp of the last frame of the audio stream, to obtain a second time difference; and obtain the time difference between said audio stream and said video stream according to said first time difference and said second time difference.
9. The device of claim 7, characterized in that
said processing unit is specifically configured to: if the duration of the audio stream in the current first video file is less than that of the video stream, pad in audio data frames corresponding to said time difference, to obtain the second video file; and if the duration of the audio stream in the current first video file is greater than that of the video stream, delete audio data frames corresponding to said time difference, to obtain the second video file.
10. The device of claim 6, characterized in that
said merging unit is specifically configured to convert the timestamps of the video stream in each segment of the second video file according to a crystal oscillator frequency, and determine the playback time of said target video file.
CN201510954690.9A 2015-12-18 2015-12-18 Method and device for clipping video Pending CN105592321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510954690.9A CN105592321A (en) 2015-12-18 2015-12-18 Method and device for clipping video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510954690.9A CN105592321A (en) 2015-12-18 2015-12-18 Method and device for clipping video

Publications (1)

Publication Number Publication Date
CN105592321A true CN105592321A (en) 2016-05-18

Family

ID=55931491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510954690.9A Pending CN105592321A (en) 2015-12-18 2015-12-18 Method and device for clipping video

Country Status (1)

Country Link
CN (1) CN105592321A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844578A (en) * 2017-11-10 2018-03-27 阿基米德(上海)传媒有限公司 Method and device for identifying repeated segments in an audio stream
CN111666446A (en) * 2020-05-26 2020-09-15 珠海九松科技有限公司 Method and system for judging AI automatic editing video material

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101374231A (en) * 2007-04-30 2009-02-25 Vixs Systems Inc. System and method for combining a plurality of video streams
CN101771869A (en) * 2008-12-30 2010-07-07 深圳市万兴软件有限公司 AV (audio/video) encoding and decoding device and method
CN101996662A (en) * 2010-10-22 2011-03-30 深圳市万兴软件有限公司 Method and device for connecting and outputting video files
CN103096184A (en) * 2013-01-18 2013-05-08 深圳市龙视传媒有限公司 Method and device for video editing
CN103167342A (en) * 2013-03-29 2013-06-19 天脉聚源(北京)传媒科技有限公司 Audio and video synchronous processing device and method
CN103269460A (en) * 2013-04-28 2013-08-28 天脉聚源(北京)传媒科技有限公司 Device and method for computing duration of audio/video file
US8559793B2 (en) * 2011-05-26 2013-10-15 Avid Technology, Inc. Synchronous data tracks in a media editing system
CN103458271A (en) * 2012-05-29 2013-12-18 北京数码视讯科技股份有限公司 Audio-video file splicing method and audio-video file splicing device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101374231A (en) * 2007-04-30 2009-02-25 Vixs Systems Inc. System and method for combining a plurality of video streams
CN101771869A (en) * 2008-12-30 2010-07-07 深圳市万兴软件有限公司 AV (audio/video) encoding and decoding device and method
CN101996662A (en) * 2010-10-22 2011-03-30 深圳市万兴软件有限公司 Method and device for connecting and outputting video files
US8559793B2 (en) * 2011-05-26 2013-10-15 Avid Technology, Inc. Synchronous data tracks in a media editing system
CN103458271A (en) * 2012-05-29 2013-12-18 北京数码视讯科技股份有限公司 Audio-video file splicing method and audio-video file splicing device
CN103096184A (en) * 2013-01-18 2013-05-08 深圳市龙视传媒有限公司 Method and device for video editing
CN103167342A (en) * 2013-03-29 2013-06-19 天脉聚源(北京)传媒科技有限公司 Audio and video synchronous processing device and method
CN103269460A (en) * 2013-04-28 2013-08-28 天脉聚源(北京)传媒科技有限公司 Device and method for computing duration of audio/video file

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844578A (en) * 2017-11-10 2018-03-27 阿基米德(上海)传媒有限公司 Method and device for identifying repeated segments in an audio stream
CN107844578B (en) * 2017-11-10 2021-08-13 阿基米德(上海)传媒有限公司 Method and device for identifying repeated segments in audio stream
CN111666446A (en) * 2020-05-26 2020-09-15 珠海九松科技有限公司 Method and system for judging AI automatic editing video material
CN111666446B (en) * 2020-05-26 2023-07-04 珠海九松科技有限公司 Method and system for judging automatic video editing material of AI

Similar Documents

Publication Publication Date Title
CN111741233B (en) Video dubbing method and device, storage medium and electronic equipment
CN104244023B (en) Video cloud editing system and method
CN105721811A (en) Live video recording method and system
EP3080810A1 (en) Providing beat matching
JP2007060286A (en) Content-editing device and reproducing device thereof
CN104410930A (en) A method and device for controlling playing speed of transport stream TS media file
CN105898500A (en) Network video play method and device
KR20140145584A (en) Method and system of playing online video at a speed variable in real time
US20170164010A1 (en) Method and device for generating and playing video
CN105592321A (en) Method and device for clipping video
WO2016168984A1 (en) Media editing method, a media editor and a media computer
CN105530534B (en) A kind of method and apparatus of video clipping
CN112468741A (en) Video generation method, electronic device and storage medium
CN104822087B (en) A kind of processing method and processing device of video-frequency band
JP2019003185A (en) Acoustic signal auxiliary information conversion transmission apparatus and program
CN105578261B (en) A kind of method and apparatus of video clipping
CN104780456A (en) Video dotting and playing method and device
CN105611401A (en) Video cutting method and video cutting device
CN105578260A (en) Video editing method and device
US20050201724A1 (en) Method and system for effect addition in a single multimedia clip
JP5457867B2 (en) Image display device, image display method, and image display program
JP6269734B2 (en) Movie data editing device, movie data editing method, playback device, and program
CN105323653B (en) A kind of method and apparatus playing segment video
CN109413443A (en) A kind of implementation method and device of time-shifting function
CN113905321A (en) Object-based audio channel metadata and generation method, device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160518

RJ01 Rejection of invention patent application after publication