CN110891192B - Video editing method and device - Google Patents

Video editing method and device

Info

Publication number
CN110891192B
CN110891192B (application number CN201811056917.8A)
Authority
CN
China
Prior art keywords
information
clipping
video
clipped
clip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811056917.8A
Other languages
Chinese (zh)
Other versions
CN110891192A (en)
Inventor
张敏杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youku Culture Technology Beijing Co ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN201811056917.8A priority Critical patent/CN110891192B/en
Publication of CN110891192A publication Critical patent/CN110891192A/en
Application granted granted Critical
Publication of CN110891192B publication Critical patent/CN110891192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4398Processing of audio elementary streams involving reformatting operations of audio signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a video clipping method and apparatus. The method is applied to a terminal and includes: in the process of playing a video to be clipped online, obtaining clipping information of the video to be clipped, wherein the video to be clipped comprises video slices; determining the video slices corresponding to the clipping information as slices to be clipped, and determining the slice clipping information corresponding to each slice to be clipped according to the slices to be clipped and the clipping information; downloading the slices to be clipped from a server and clipping them according to the slice clipping information to generate clip slices; and merging the clip slices to generate a clip video. In the embodiments of the disclosure, the clipping information for the video to be clipped is split into slice clipping information corresponding to individual video slices, only the slices to be clipped are downloaded from the server, and clipping is performed slice by slice. Slices that do not need clipping are never downloaded, so the efficiency and timeliness of video clipping can be improved.

Description

Video editing method and device
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video editing method and apparatus.
Background
In conventional video clipping, the entire video must be downloaded to the terminal, clipped locally, and then uploaded back to the server. Both the download and the transcoding of the video take a long time, the resource consumption on the terminal and the server is high, the clipping process is time-consuming, and clipping efficiency is low.
Disclosure of Invention
In view of the above, the present disclosure provides a video clipping method and apparatus, so as to solve the problems of large resource consumption, long time required for clipping process, and low clipping efficiency in the conventional video clipping.
According to an aspect of the present disclosure, there is provided a video clipping method applied to a terminal, the method including:
in the process of playing a video to be clipped on line, obtaining clipping information of the video to be clipped, wherein the video to be clipped comprises video fragments;
determining the video fragment corresponding to the clipping information as a fragment to be clipped, and determining the clipping information corresponding to the fragment to be clipped according to the fragment to be clipped and the clipping information;
downloading the fragments to be clipped from a server, and clipping the downloaded fragments to be clipped according to the clipping information to generate clipping fragments;
and combining the clip fragments to generate a clip video.
In a possible implementation manner, in the process of playing a video to be clipped online, obtaining clipping information of the video to be clipped includes:
requesting the server for the fragment information of the video to be clipped, and according to the fragment information, concurrently caching the video fragments after the current playing progress in the video to be clipped;
and acquiring the clipping information of the video to be clipped in the process of playing the video to be clipped on line according to the video fragments.
In a possible implementation manner, the clipping information includes clipping time information, and determining a video segment corresponding to the clipping information as a segment to be clipped includes:
and determining the video fragments corresponding to the clipping information as fragments to be clipped according to the clipping time period or the clipping time in the clipping time information.
In a possible implementation manner, determining slice clipping information corresponding to the slice to be clipped according to the slice to be clipped and the clipping information includes:
and when the clipping information corresponds to a plurality of fragments to be clipped, splitting the clipping information into fragment clipping information corresponding to each fragment to be clipped.
In a possible implementation manner, clipping the downloaded fragments to be clipped according to the clip information to generate clip fragments includes:
and clipping the downloaded fragments to be clipped in parallel according to the clip information to generate clip fragments.
In one possible implementation, the method further includes:
uploading the clip video to the server.
In one possible implementation, the clip information includes clip content information including at least one of the following information: multi-track video clip information, LOGO removing information, subtitle adding information, audio track superposition information, mark pressing information and multi-video splicing information.
According to another aspect of the present disclosure, there is provided a video clipping apparatus applied to a terminal, the apparatus including:
the clipping information acquisition module is used for acquiring clipping information of a video to be clipped in the process of playing the video to be clipped online, wherein the video to be clipped comprises video fragments;
the clip information determining module is used for determining the video segments corresponding to the clip information as segments to be clipped, and determining the clip information corresponding to the segments to be clipped according to the segments to be clipped and the clip information;
the piece clipping module is used for downloading the pieces to be clipped from a server and clipping the downloaded pieces to be clipped according to the piece clipping information to generate clipping pieces;
and the merging module is used for merging the clip fragments to generate a clip video.
In one possible implementation manner, the clipping information obtaining module includes:
the concurrent cache submodule is used for requesting the server for the fragment information of the video to be edited and concurrently caching the video fragments after the current playing progress in the video to be edited according to the fragment information;
and the clipping information acquisition sub-module is used for acquiring the clipping information of the video to be clipped in the process of playing the video to be clipped on line according to the video fragments.
In one possible implementation manner, the clip information determination module includes:
and the to-be-clipped segment determining submodule is used for determining the video segment corresponding to the clipping information as the to-be-clipped segment according to the clipping time period or the clipping time in the clipping time information.
In one possible implementation manner, the clip information determination module includes:
and the piece clipping information determining sub-module is used for splitting the clipping information into piece clipping information corresponding to each piece to be clipped when the clipping information corresponds to a plurality of pieces to be clipped.
In one possible implementation, the clip module includes:
and the parallel clipping sub-module is used for clipping the downloaded fragments to be clipped in parallel according to the clip information to generate clip fragments.
In one possible implementation, the apparatus further includes:
and the uploading module is used for uploading the clip video to the server.
In one possible implementation, the clip information includes clip content information including at least one of the following information: multi-track video clip information, LOGO removing information, subtitle adding information, audio track superposition information, mark pressing information and multi-video splicing information.
According to another aspect of the present disclosure, there is provided a video clipping device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above method.
According to an aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In this embodiment, in the process of playing the video to be clipped online, the clipping information of the video to be clipped is acquired, and the slices to be clipped and the slice clipping information are determined. The slices to be clipped can then be downloaded from the server, clipped according to the slice clipping information to generate clip slices, and the clip slices merged to generate the clip video. In the embodiment of the present disclosure, the clipping information for the video to be clipped is split, according to the video slicing, into slice clipping information corresponding to individual slices; the slices to be clipped are downloaded from the server, and clipping is performed slice by slice. Slices that do not need clipping are never downloaded, which reduces the resources consumed by video clipping and improves its efficiency and timeliness.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a video clipping method according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a video clipping method according to an embodiment of the present disclosure;
FIG. 3 shows a flow diagram of a video clipping method according to an embodiment of the present disclosure;
FIG. 4 shows a flow diagram of a video clipping method according to an embodiment of the present disclosure;
FIG. 5 illustrates a display interface diagram of a video clip application in a video clip method according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a video clipping device according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of a video clipping device according to an embodiment of the present disclosure;
FIG. 8 is a block diagram illustrating a video clipping device in accordance with an exemplary embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a video clipping method according to an embodiment of the present disclosure, which may be applied to a terminal, such as a mobile phone, a tablet computer, and the like. As shown in fig. 1, the video clipping method includes:
step S10, in the process of playing the video to be clipped on line, the clipping information of the video to be clipped is obtained, and the video to be clipped comprises video fragments.
In one possible implementation, video clips may be made on various terminals using a video clip application. The video to be clipped can be played on line by using a video clipping application program, clipping information of the video to be clipped is acquired in the process of playing the video to be clipped on line, and video clipping is performed on the video to be clipped according to the method in the embodiment of the disclosure. The present disclosure does not limit the specific implementation of the video clip application.
In one possible implementation, the video to be clipped may include videos shot with various shooting devices. The terminal can request the playing address of the video to be clipped from the server, and cache the video to be clipped according to the requested playing address, so as to realize the online playing of the video to be clipped.
In one possible implementation, the video to be clipped may include one or more videos. One clip video may be generated from a plurality of videos to be clipped, and a plurality of clip videos may likewise be generated from a single video to be clipped. The present disclosure is not limited thereto.
In one possible implementation, the video clipping method in the embodiments of the present disclosure may be implemented by an application program. A playing interface and options for clipping information may be provided in the application. The user can display, in the playing interface, the frame images of the video to be clipped that need clipping, and select the required clipping information via those options. While rendering a frame image, the application can also render the clipping information selected by the user, presenting the post-clip effect so that the user can clip the video to be clipped online.
In one possible implementation, the clip information includes clip content information including at least one of the following information: multi-track video clip information, LOGO removing information, subtitle adding information, audio track superposition information, mark pressing information and multi-video splicing information.
In one possible implementation manner, multi-track video clipping may include setting multiple tracks on a time axis and clipping multiple videos on different tracks, so that several pictures are displayed simultaneously within one frame. Removing the LOGO may include covering or blurring the LOGO information contained in a frame picture, so that the frame no longer shows the LOGO during playback. Adding subtitles may include adding subtitle information at a set position of the frame picture; the subtitle information may include text, image, or symbol information. Superimposing an audio track may include overlaying an audio track on the frames of the video, adding sound that plays along with the pictures. Mark pressing may include adding mark information, such as sign or logo information, to a frame of the video so that the frame is played with the mark. Multi-video splicing may include splicing frame pictures from multiple videos, so that the played video contains frames from each of them. The clip content information may include the parameter information required for any of the above operations, such as a clip type identifier, the picture position the clip relates to, a track identifier, and the like.
In a possible implementation manner, a single frame image in a video to be clipped may be video-clipped, a continuous frame image within a time period may be video-clipped, or a plurality of discontinuous frame images may be video-clipped. The frame image to be clipped, the clip type of the video clip, and the clip effect can be determined as needed. The present disclosure is not limited thereto.
For example, the user uses the application to clip the video 1 to be clipped. The clip content information determined by the user through the clipping-information options is: cover the station logo in each frame image of video 1. In the playing interface of the application, whenever a frame image of video 1 is rendered, the station logo is covered at the same time, so the user can visually see the post-clip effect. The present disclosure does not limit the specific content or implementation of the clipping information.
In one possible implementation, the video to be clipped may include at least two video slices and may be stored on the server in sliced form. The number of video slices may be determined according to the resolution: a video to be clipped with higher resolution has more slices. For example, for a 1080P video, the duration of each slice may be 2-3 minutes, so a movie of around 100 minutes may have more than 50 video slices.
In a possible implementation manner, the video slices can be connected to recover the complete video to be clipped according to the start time and slice length of each slice recorded in the slice information, or according to the start time and end time of each slice. The present disclosure does not limit the number or length of the video slices, nor the manner in which they are connected.
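The two connection schemes just described can be sketched as contiguity checks over slice metadata. This is an illustrative sketch only; the function names and the (start, length) / (start, end) tuple shapes are assumptions, not from the patent:

```python
def connect_by_length(slices):
    """slices: list of (start_s, length_s) pairs. Return True if, sorted by
    start time, each slice ends exactly where the next begins."""
    ordered = sorted(slices)
    for (s0, l0), (s1, _) in zip(ordered, ordered[1:]):
        if s0 + l0 != s1:
            return False
    return True


def connect_by_end(slices):
    """slices: list of (start_s, end_s) pairs; the same contiguity check
    using explicit end times instead of lengths."""
    ordered = sorted(slices)
    return all(e0 == s1 for (_, e0), (s1, _) in zip(ordered, ordered[1:]))
```

Either check passing means the slices can be concatenated in sorted order to reproduce the full video timeline.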
In a possible implementation manner, the terminal may request the server for address information of each video segment of the video to be clipped, and the terminal may cache each video segment according to the address information of each video segment, thereby implementing online playing of the video to be clipped. The clipping information of the video to be clipped can be acquired in the process of playing the video fragment.
Step S20, determining the video segment corresponding to the clipping information as a segment to be clipped, and determining the clipping information corresponding to the segment to be clipped according to the segment to be clipped and the clipping information.
In a possible implementation manner, the clipping information includes clipping time information, and determining a video segment corresponding to the clipping information as a segment to be clipped includes: and determining the video fragments corresponding to the clipping information as fragments to be clipped according to the clipping time period or the clipping time in the clipping time information.
In one possible implementation, the clipping information may include clipping time information and clipping content information. From the clip time information, the frame image to which the clip content information is directed can be determined. The clip time information may include a clip period or a clip time. When the clipping time information is a clipping period, a frame image corresponding to the clipping period may be clipped according to the clipping content information. When the clipping time information is the clipping time, the frame image corresponding to the clipping time can be clipped according to the clipping content information.
For example, suppose the clip content information is the LOGO removing information. When the clip time information is the period 00:01:10-00:02:12, LOGO removal is performed on the frame images whose display time falls within 00:01:10-00:02:12. When the clip time information is the clip times 00:01:10 and 00:02:12, LOGO removal is performed on the frame images whose display times are 00:01:10 and 00:02:12.
In one possible implementation manner, when the clipping time information is a clipping period, the video slices covered by the period may be determined as the slices to be clipped according to its start time and end time. When the clipping time information is a clipping time, the video slice containing that time may be determined as the slice to be clipped. For example, suppose each video slice has a duration of 15 seconds. When the clipping time information is the period 00:01:10-00:02:12, the slices to be clipped are the five slices from the fifth through the ninth. When the clipping time information is the times 00:01:10 and 00:02:12, the slices to be clipped are two slices: the fifth and the ninth.
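With the 15-second slices of the worked example, the time-to-slice mapping reduces to integer arithmetic. A minimal sketch, assuming 1-based slice indices where slice k covers the interval ((k-1)·15 s, k·15 s] — a boundary convention chosen to match the fifth/ninth-slice numbers above:

```python
import math

SLICE_SECONDS = 15  # slice length assumed in the example above


def slice_index(t_seconds):
    """1-based index of the slice containing playback time t_seconds."""
    return max(1, math.ceil(t_seconds / SLICE_SECONDS))


def slices_for_period(start_s, end_s):
    """All slice indices covered by a clip period [start_s, end_s]."""
    return list(range(slice_index(start_s), slice_index(end_s) + 1))
```

For 00:01:10 (70 s) through 00:02:12 (132 s) this yields slices 5 through 9, and the two individual clip times map to slices 5 and 9, as in the example.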
In a possible implementation manner, determining slice clipping information corresponding to the slice to be clipped according to the slice to be clipped and the clipping information includes:
and when the clipping information corresponds to a plurality of fragments to be clipped, splitting the clipping information into fragment clipping information corresponding to each fragment to be clipped.
In one possible implementation manner, the slice clipping information corresponding to each slice to be clipped may be determined according to the slices to be clipped and the clipping information. One slice to be clipped may correspond to one or more pieces of slice clipping information, which represent the clip content and clip time to be applied to that slice. For example, suppose a video is clipped with two pieces of clipping information: clipping information 1 has the clip period 00:01:10-00:02:12 and its content is LOGO removal; clipping information 2 has the clip period 00:01:30-00:02:00 and its content is adding subtitle information. The slices to be clipped corresponding to clipping information 1 are the fifth through ninth video slices; those corresponding to clipping information 2 are the sixth through eighth. For the sixth through eighth slices, the slice clipping information is therefore: remove the LOGO and add the subtitle information; for the fifth and ninth slices, it is: remove the LOGO.
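The splitting step in this example can be sketched as grouping clipping operations by the slices their periods cover. The 15-second slice length, the ceiling-based boundary convention, and the operation names are assumptions chosen to reproduce the numbers in the example, not details from the patent:

```python
import math

SLICE_SECONDS = 15  # assumed, as in the example


def covered(start_s, end_s):
    """1-based slice indices covered by a clip period [start_s, end_s]."""
    first = max(1, math.ceil(start_s / SLICE_SECONDS))
    last = max(1, math.ceil(end_s / SLICE_SECONDS))
    return range(first, last + 1)


def split_clip_info(clip_infos):
    """clip_infos: list of (start_s, end_s, op) tuples. Returns a mapping
    {slice_index: set_of_ops} -- the per-slice clipping information."""
    per_slice = {}
    for start_s, end_s, op in clip_infos:
        for idx in covered(start_s, end_s):
            per_slice.setdefault(idx, set()).add(op)
    return per_slice
```

Feeding it the two example periods (70-132 s for LOGO removal, 90-120 s for subtitles) assigns both operations to slices 6-8 and only LOGO removal to slices 5 and 9, matching the text.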
And step S30, downloading the fragments to be clipped from the server, and clipping the downloaded fragments to be clipped according to the clipping information to generate clipping fragments.
In a possible implementation manner, the terminal may request the slices to be clipped from the server and download them locally. According to the slice clipping information corresponding to each slice to be clipped, the downloaded slices are clipped to generate the clip slice corresponding to each slice.
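The download-and-clip step, combined with the parallel clipping mentioned in the implementation manners above, can be sketched with a thread pool. The download and clip functions here are stand-ins with hypothetical names; real ones would fetch slice data over HTTP and run the actual clipping operations:

```python
from concurrent.futures import ThreadPoolExecutor


def download_slice(index):
    """Stand-in for fetching one slice from the server (hypothetical)."""
    return f"slice-{index}-data"


def clip_slice(index, ops):
    """Stand-in for downloading one slice and applying its clipping ops."""
    data = download_slice(index)
    return (index, data, tuple(sorted(ops)))


def clip_all(per_slice):
    """Download and clip only the referenced slices, in parallel.
    per_slice: {slice_index: set_of_ops}, as produced by the split step."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(clip_slice, i, ops)
                   for i, ops in sorted(per_slice.items())]
        return [f.result() for f in futures]
```

Because only the slice indices present in `per_slice` are submitted, slices that need no clipping are never downloaded, which is the resource saving the patent claims.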
In a possible implementation manner, point location information corresponding to clip content information may also be included in the clip information. The point location information may include: and position information of pixels in the frame images of the fragments to be clipped corresponding to the clipping content information. For example, the point location information is (X, Y), and the pixel at the (X, Y) position in the frame image of the slice to be clipped can be clipped according to the clip content information.
In one possible implementation, a confirmation option may be set in the application. The user can preview the clipping effect of the video to be clipped in the application and repeatedly modify or adjust different frame images or pieces of clipping information. When satisfied with the clipping effect, the user can click the confirmation option to fix the final video clipping scheme. Upon the instruction from the confirmation option, the application can determine the slices to be clipped and their slice clipping information from the obtained clipping information and the video slicing of the video to be clipped. It can then send a download request to the server for the determined slices, download them to the terminal, and clip the downloaded slices according to the slice clipping information to obtain the clip slices.
Step S40: merge the clip fragments to generate a clip video.
In one possible implementation, the clip fragments may be merged to generate the clip video. The clip fragments may be merged according to their time information, or according to a splicing order of the clip fragments determined in the clipping information. The present disclosure is not limited thereto.
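A minimal sketch of step S40, assuming each clip fragment carries an id, a start time, and an opaque payload (all field names are illustrative); fragments are merged either by time information or by an explicit splicing order:

```python
# Sketch of merging clip fragments into a clip video, under the assumption
# that each fragment is a dict with "id", "start", and "data" fields.

def merge_fragments(fragments, order=None):
    """Concatenate fragment payloads; `order` is an optional list of ids."""
    if order is not None:                       # explicit splicing order
        by_id = {f["id"]: f for f in fragments}
        fragments = [by_id[i] for i in order]
    else:                                       # fall back to time information
        fragments = sorted(fragments, key=lambda f: f["start"])
    return b"".join(f["data"] for f in fragments)

clips = [
    {"id": "c2", "start": 10.0, "data": b"BBB"},
    {"id": "c1", "start": 0.0, "data": b"AAA"},
]
video = merge_fragments(clips)                    # merged by time
video2 = merge_fragments(clips, order=["c2", "c1"])  # merged by given order
```

A real implementation would concatenate container-level media data (e.g. via a demuxer/muxer) rather than raw bytes.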
In this embodiment, in the process of playing the video to be clipped online, the clipping information of the video to be clipped is acquired, and the fragments to be clipped and the piece clipping information are determined. The fragments to be clipped can be downloaded from the server, clipped according to the piece clipping information to generate clip fragments, and the clip fragments merged to generate the clip video. In the embodiments of the present disclosure, the clipping information for the video to be clipped is split, according to the video fragments, into piece clipping information corresponding to each fragment; only the fragments to be clipped are downloaded from the server, and the video clipping is performed on those fragments. Fragments that are not to be clipped need not be downloaded from the server, so the resources consumed by the video clipping can be reduced, and the efficiency and timeliness of the video clipping are improved.
Fig. 2 shows a flowchart of a video clipping method according to an embodiment of the present disclosure, and as shown in fig. 2, step S10 in the video clipping method includes:
Step S11: request the fragment information of the video to be clipped from the server, and concurrently cache, according to the fragment information, the video fragments after the current playing progress in the video to be clipped.
Step S12: obtain the clipping information of the video to be clipped in the process of playing the video to be clipped online according to the video fragments.
In one possible implementation, when the video to be clipped includes video fragments, the fragment information of the video to be clipped may be requested from the server. The fragment information may include the start time and length of each video fragment. The video fragments can then be cached concurrently according to the fragment information.
In one possible implementation, the video fragments after the current playing progress of the video to be clipped may be cached. When there are multiple video fragments after the current playing progress, they can be cached concurrently, improving the caching efficiency of the video to be clipped. The video to be clipped can be played online from the concurrently cached fragments, and the clipping information is obtained during online playing.
In this embodiment, the video to be clipped may be concurrently cached according to its fragment information. Concurrent caching can improve the caching efficiency and the playing speed of the video to be clipped, thereby improving the efficiency of the video clipping.
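The concurrent cache of steps S11–S12 can be sketched as follows; the fetch function and the fragment records are placeholders for the terminal's real segment downloader, and all names are assumptions:

```python
# Sketch of concurrently caching the video fragments after the current
# playing progress, based on the fragment information (index, start, length).
from concurrent.futures import ThreadPoolExecutor

def fetch_fragment(frag):        # stand-in for an HTTP segment request
    return (frag["index"], b"data-%d" % frag["index"])

def cache_fragments(fragment_info, current_index, workers=4):
    """Download all fragments after the current playing progress in parallel."""
    pending = [f for f in fragment_info if f["index"] > current_index]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(fetch_fragment, pending))

info = [{"index": i, "start": i * 10.0, "length": 10.0} for i in range(5)]
cache = cache_fragments(info, current_index=1)   # caches fragments 2, 3, 4
```

Threads are a reasonable choice here because the work is network-bound.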
Fig. 3 shows a flowchart of a video clipping method according to an embodiment of the present disclosure, and as shown in fig. 3, step S30 in the video clipping method includes:
Step S31: according to the piece clipping information, clip the downloaded fragments to be clipped in parallel to generate clip fragments.
In one possible implementation, the fragments to be clipped correspond one-to-one to the piece clipping information; for example, the piece clipping information corresponding to fragment 1 to be clipped is independent of fragment 2 to be clipped. The fragments to be clipped can therefore be clipped in parallel, each according to its own piece clipping information, improving the efficiency of the video clipping.
In this embodiment, each fragment to be clipped may be clipped in parallel according to its piece clipping information. Parallel processing can improve the clipping efficiency of the video clipping.
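Because each fragment's piece clipping information is independent of the others, step S31 maps naturally onto a worker pool. A hedged sketch, with trimming reduced to a byte slice as a stand-in for a real encoder invocation (all names are illustrative):

```python
# Sketch of step S31: clip each downloaded fragment in parallel, pairing it
# one-to-one with its piece clipping information. Threads suffice when the
# real work is an external encoder process (e.g. an ffmpeg subprocess).
from concurrent.futures import ThreadPoolExecutor

def clip_one(pair):
    fragment, piece_info = pair
    start, end = piece_info["start"], piece_info["end"]
    return fragment["data"][start:end]       # stand-in for real trimming

def clip_in_parallel(fragments, piece_infos):
    pairs = list(zip(fragments, piece_infos))  # one-to-one correspondence
    with ThreadPoolExecutor() as pool:
        return list(pool.map(clip_one, pairs))

frags = [{"data": b"abcdefgh"}, {"data": b"ijklmnop"}]
infos = [{"start": 2, "end": 5}, {"start": 0, "end": 3}]
clipped = clip_in_parallel(frags, infos)
```

The result preserves fragment order, so the clip fragments can be merged directly afterward.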
Fig. 4 shows a flowchart of a video clipping method according to an embodiment of the present disclosure, as shown in fig. 4, the video clipping method further includes:
Step S50: upload the clip video to the server.
In one possible implementation, the server may include an OSS (Object Storage Service) server and a CDN (Content Delivery Network) server. The content provider of the video service can store video resources in the OSS server, push the video resources in the OSS server to the CDN servers, and use the CDN servers to provide services for users in different geographic regions.
In a possible implementation manner, the terminal may request the CDN server for segment information of a video to be clipped, and concurrently cache video segments in the video to be clipped according to the segment information. The terminal may upload the clip video to the OSS server. The OSS server may push the clip video to each CDN server for viewing by the user.
Application example:
Fig. 5 shows a schematic display interface of a video clipping application program in a video clipping method according to an embodiment of the present disclosure. The interface shown in fig. 5 is schematic, and the positions and settings of the options in the interface can be adjusted as required. The present disclosure is not limited thereto.
As shown in fig. 5, the videos to be clipped can be retrieved and displayed in the material area. The playing interface can play the selected video to be clipped; in fig. 5, video 1 to be clipped is the video being played in the playing interface. In the progress bar of the playing interface, it can be seen that video 1 to be clipped is divided into a plurality of video fragments, and the current playing progress is within video fragment 2. The clipping area also includes clipping information options, and the user can preview the effect of a clip in the playing interface by clicking a clipping information option. When the user has determined the final video clipping scheme, the confirmation button in the clipping area can be clicked to confirm that the video clipping is to be performed. The steps of the video clipping may include:
1. Request the fragment information of video 1 to be clipped from the CDN server, and concurrently cache each video fragment of video 1 according to the received fragment information. A mapping between the playing time of the video to be clipped and each video fragment can be built from the start time and fragment length in the fragment information. The playing interface can play each video fragment according to this mapping, rendering the video to be clipped in the playing interface. By concurrently caching the video fragments, the user can quickly play video 1 to be clipped online.
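The mapping described in step 1 can be sketched as an interval lookup built from each fragment's start time and length (field names are assumptions for illustration):

```python
# Sketch: map a playback time to the video fragment that contains it,
# using the start time and length carried in the fragment information.

def build_mapping(fragment_info):
    """Return (start, end, index) intervals for each fragment."""
    return [(f["start"], f["start"] + f["length"], f["index"])
            for f in fragment_info]

def fragment_for(mapping, t):
    """Return the index of the fragment whose interval contains time t."""
    for start, end, index in mapping:
        if start <= t < end:
            return index
    return None                       # t is outside the video

info = [{"index": 0, "start": 0.0, "length": 10.0},
        {"index": 1, "start": 10.0, "length": 10.0}]
m = build_mapping(info)
```

With such a mapping, the playing interface knows which cached fragment to render for any position of the progress bar.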
2. During online playing of video 1 to be clipped, the clipping information is obtained according to the instruction sent when the user clicks a clipping information option, and the clipping effect is rendered in the playing interface. The clipping information may include clipping time information, clipping content information, point location information, and the like. For details, refer to the related content of the video clipping method above, which is not repeated here.
3. When the user clicks the confirmation button to determine the video clipping scheme and the confirmation button sends an instruction, the fragments to be clipped and the piece clipping information are determined according to the clipping information and the fragment information. When one piece of clipping information corresponds to multiple fragments to be clipped, the clipping information is split into piece clipping information corresponding to each fragment to be clipped. For details, refer to the related content of the video clipping method above, which is not repeated here.
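Splitting one piece of clipping information across the fragments it spans, as in step 3, amounts to clamping the clipping time period to each fragment's boundaries. A hedged sketch, with all field names assumed for illustration:

```python
# Sketch: split clipping information whose time period spans several video
# fragments into per-fragment piece clipping information.

def split_clip_info(clip_start, clip_end, fragment_info):
    """Return piece clipping information for each overlapped fragment."""
    per_piece = []
    for f in fragment_info:
        frag_start = f["start"]
        frag_end = f["start"] + f["length"]
        lo, hi = max(clip_start, frag_start), min(clip_end, frag_end)
        if lo < hi:                    # fragment overlaps the clip period
            per_piece.append({"index": f["index"], "start": lo, "end": hi})
    return per_piece

info = [{"index": i, "start": i * 10.0, "length": 10.0} for i in range(4)]
pieces = split_clip_info(7.0, 23.0, info)
# fragments 0, 1, and 2 become fragments to be clipped; fragment 3 is skipped
```

Only the fragments that appear in the result need to be downloaded from the server, which is the source of the bandwidth saving described in this embodiment.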
4. Request the CDN server to download the fragments to be clipped. According to the downloaded fragments to be clipped and their piece clipping information, clip the fragments in parallel to generate the clip fragment corresponding to each fragment to be clipped. The clipping parameters can be obtained by parsing the piece clipping information, and the fragments to be clipped are clipped using those parameters.
5. Merge the clip fragments to generate the clip video.
6. Upload the clip video to the OSS server.
Fig. 6 shows a block diagram of a video clipping apparatus according to an embodiment of the present disclosure, the apparatus being applied to a terminal, as shown in fig. 6, the apparatus including:
the device comprises a clipping information acquisition module 10, a video editing module and a video editing module, wherein the clipping information acquisition module is used for acquiring clipping information of a video to be clipped in the process of playing the video to be clipped on line, and the video to be clipped comprises video fragments;
a clip information determining module 20, configured to determine a video segment corresponding to the clip information as a segment to be clipped, and determine clip information corresponding to the segment to be clipped according to the segment to be clipped and the clip information;
a clip module 30, configured to download the to-be-clipped segment from a server, and clip the downloaded to-be-clipped segment according to the clip information to generate a clip segment;
and the merging module 40 is used for merging the clip fragments to generate a clip video.
Fig. 7 shows a block diagram of a video clipping device according to an embodiment of the present disclosure, as shown in fig. 7,
in a possible implementation manner, the clipping information obtaining module 10 includes:
the concurrent cache submodule 11 is configured to request the server for the fragment information of the video to be edited, and concurrently cache, according to the fragment information, the video fragment after the current playing progress in the video to be edited;
and the clipping information obtaining sub-module 12 is configured to obtain the clipping information of the video to be clipped during the process of playing the video to be clipped on line according to the video clips.
In one possible implementation, the clip information determining module 20 includes:
and the to-be-clipped segment determining submodule 21 is configured to determine, according to the clipping time period or the clipping time in the clipping time information, the video segment corresponding to the clipping information as the to-be-clipped segment.
In one possible implementation, the clip information determining module 20 includes:
the clipping information determining sub-module 22 is configured to split the clipping information into the clipping information corresponding to each to-be-clipped slice when the clipping information corresponds to a plurality of to-be-clipped slices.
In one possible implementation, the clip module 30 includes:
and the parallel clipping sub-module 31 is configured to clip the downloaded to-be-clipped clips in parallel according to the clip information to generate clip clips.
In one possible implementation, the apparatus further includes:
an upload module 50 configured to upload the clip video to the server.
In one possible implementation, the clip information includes clip content information including at least one of the following information: multi-track video clip information, LOGO removing information, subtitle adding information, audio track superposition information, mark pressing information and multi-video splicing information.
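Purely for illustration, the clipping information described above — clipping time information plus one or more kinds of clipping content information — might be modeled as a record such as the following. Every field name here is an assumption, not part of the disclosure:

```python
# Hypothetical record for clipping information: a clipping time period
# (or moment) plus optional kinds of clipping content information.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ClipInfo:
    start: float                               # clipping time period start
    end: Optional[float] = None                # None => a single clip moment
    remove_logo: Optional[Tuple[int, int]] = None  # point location of a LOGO
    subtitles: List[str] = field(default_factory=list)  # subtitle lines to add
    overlay_audio: Optional[str] = None        # path of an extra audio track

info = ClipInfo(start=7.0, end=23.0, remove_logo=(6, 1))
```

Such a record would be what the clipping information determining module 20 splits into per-fragment piece clipping information.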
FIG. 8 is a block diagram illustrating a video clipping device 800 according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

1. A video clipping method applied to a terminal, the method comprising:
in the process of playing a video to be clipped on line, clip information of the video to be clipped is acquired, the clip information comprises clip time information and clip content information, the clip time information comprises a clip time interval or a clip moment, and the video to be clipped comprises video fragments;
determining the video fragment corresponding to the clipping information as a fragment to be clipped, and splitting the clipping information into piece clipping information corresponding to the fragment to be clipped according to the fragment to be clipped, wherein the piece clipping information represents clipping content and clipping time to be carried out on the fragment to be clipped;
downloading the fragments to be clipped from a server, and clipping the downloaded fragments to be clipped according to the clipping information to generate clipping fragments;
and combining the clip fragments to generate a clip video.
2. The method of claim 1, wherein the obtaining the clipping information of the video to be clipped during the online playing of the video to be clipped comprises:
requesting the server for the fragment information of the video to be clipped, and according to the fragment information, concurrently caching the video fragments after the current playing progress in the video to be clipped;
and acquiring the clipping information of the video to be clipped in the process of playing the video to be clipped on line according to the video fragments.
3. The method of claim 1, wherein the clipping information includes clipping time information, and determining the video slice corresponding to the clipping information as the slice to be clipped comprises:
and determining the video fragments corresponding to the clipping information as fragments to be clipped according to the clipping time period or the clipping time in the clipping time information.
4. The method according to claim 1, wherein determining slice clipping information corresponding to the slice to be clipped according to the slice to be clipped and the clipping information comprises:
and when the clipping information corresponds to a plurality of fragments to be clipped, splitting the clipping information into fragment clipping information corresponding to each fragment to be clipped.
5. The method of claim 1, wherein clipping the downloaded fragments to be clipped according to the piece clipping information to generate clipping fragments comprises:
clipping the downloaded fragments to be clipped in parallel according to the piece clipping information to generate clipping fragments.
6. The method of claim 1, further comprising:
uploading the clip video to the server.
7. The method of claim 1, wherein the clipping information comprises clipping content information, the clipping content information comprising at least one of: multi-track video clip information, LOGO removing information, subtitle adding information, audio track superposition information, mark pressing information and multi-video splicing information.
8. A video clipping apparatus, characterized in that the apparatus is applied to a terminal, the apparatus comprising:
the device comprises a clipping information acquisition module, a video editing module and a video editing module, wherein the clipping information acquisition module is used for acquiring clipping information of a video to be clipped in the process of playing the video to be clipped on line, the clipping information comprises clipping time information and clipping content information, the clipping time information comprises clipping time periods or clipping moments, and the video to be clipped comprises video fragments;
the clip information determining module is used for determining the video segments corresponding to the clip information as segments to be clipped, and splitting the clip information into clip information corresponding to the segments to be clipped according to the segments to be clipped, wherein the clip information represents clipping content and clipping time to be carried out on the segments to be clipped;
the piece clipping module is used for downloading the pieces to be clipped from a server and clipping the downloaded pieces to be clipped according to the piece clipping information to generate clipping pieces;
and the merging module is used for merging the clip fragments to generate a clip video.
9. The apparatus of claim 8, wherein the clipping information obtaining module comprises:
the concurrent cache submodule is used for requesting the server for the fragment information of the video to be edited and concurrently caching the video fragments after the current playing progress in the video to be edited according to the fragment information;
and the clipping information acquisition sub-module is used for acquiring the clipping information of the video to be clipped in the process of playing the video to be clipped on line according to the video fragments.
10. The apparatus of claim 8, wherein the clip information determining module comprises:
and the to-be-clipped segment determining submodule is used for determining the video segment corresponding to the clipping information as the to-be-clipped segment according to the clipping time period or the clipping time in the clipping time information.
11. The apparatus of claim 8, wherein the clip information determining module comprises:
and the piece clipping information determining sub-module is used for splitting the clipping information into piece clipping information corresponding to each piece to be clipped when the clipping information corresponds to a plurality of pieces to be clipped.
12. The apparatus of claim 8, wherein the clip module comprises:
and the parallel clipping sub-module is configured to clip the downloaded segments to be clipped in parallel according to the piece clipping information to generate clipped segments.
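The parallel clipping sub-module of claim 12 might be realized with a thread pool, one task per downloaded segment. This is a hedged sketch: `clip_one` is a stand-in for the per-segment clipping operation, which the patent does not specify.

```python
# Hypothetical sketch of claim 12: clip downloaded segments in parallel
# while preserving playback order for the later merging step.

from concurrent.futures import ThreadPoolExecutor

def clip_one(segment, piece_info):
    # Placeholder: a real implementation would cut/transform the media here.
    return {"segment": segment, "applied": piece_info}

def clip_in_parallel(segments, piece_infos, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map yields results in input order even though the
        # per-segment work runs concurrently, so merging stays trivial.
        return list(pool.map(clip_one, segments, piece_infos))
```

Order preservation matters here: because `map` returns results in submission order, the merging module of claim 8 can simply concatenate them.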
13. The apparatus of claim 8, further comprising:
an uploading module configured to upload the clip video to the server.
14. The apparatus of claim 8, wherein the clipping information comprises clipping content information, and wherein the clipping content information comprises at least one of: multi-track video clipping information, logo removal information, subtitle addition information, audio track overlay information, watermark overlay information and multi-video splicing information.
15. A video clipping apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 7.
16. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 7.
CN201811056917.8A 2018-09-11 2018-09-11 Video editing method and device Active CN110891192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811056917.8A CN110891192B (en) 2018-09-11 2018-09-11 Video editing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811056917.8A CN110891192B (en) 2018-09-11 2018-09-11 Video editing method and device

Publications (2)

Publication Number Publication Date
CN110891192A CN110891192A (en) 2020-03-17
CN110891192B true CN110891192B (en) 2021-10-15

Family

ID=69745496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811056917.8A Active CN110891192B (en) 2018-09-11 2018-09-11 Video editing method and device

Country Status (1)

Country Link
CN (1) CN110891192B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113301409B (en) * 2021-05-21 2023-01-10 北京大米科技有限公司 Video synthesis method and device, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964894A (en) * 2010-08-24 2011-02-02 中国科学院深圳先进技术研究院 Method and system for parallel trans-coding of video slicing
CN105227999A (en) * 2015-09-29 2016-01-06 北京奇艺世纪科技有限公司 A kind of method and apparatus of video cutting
CN107124568A (en) * 2016-02-25 2017-09-01 掌赢信息科技(上海)有限公司 A kind of video recording method and electronic equipment
CN107281709A (en) * 2017-06-27 2017-10-24 深圳市酷浪云计算有限公司 The extracting method and device, electronic equipment of a kind of sport video fragment
CN107888988A (en) * 2017-11-17 2018-04-06 广东小天才科技有限公司 A kind of video clipping method and electronic equipment
CN108156407A (en) * 2017-12-13 2018-06-12 深圳市金立通信设备有限公司 A kind of video clipping method and terminal

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101740082A (en) * 2009-11-30 2010-06-16 孟智平 Method and system for clipping video based on browser
US8631047B2 (en) * 2010-06-15 2014-01-14 Apple Inc. Editing 3D video
US20160014482A1 (en) * 2014-07-14 2016-01-14 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Generating Video Summary Sequences From One or More Video Segments
CN104980773B (en) * 2014-09-23 2019-12-13 腾讯科技(深圳)有限公司 streaming media processing method and device, terminal and server
CN105493512B (en) * 2014-12-14 2018-07-06 深圳市大疆创新科技有限公司 A kind of method for processing video frequency, video process apparatus and display device
CN105338368B (en) * 2015-11-02 2019-03-15 腾讯科技(北京)有限公司 A kind of method, apparatus and system of the live stream turning point multicast data of video
WO2018023553A1 (en) * 2016-08-04 2018-02-08 SZ DJI Technology Co., Ltd. Parallel video encoding
CN106791933B (en) * 2017-01-20 2019-11-12 杭州当虹科技股份有限公司 The method and system of online quick editor's video based on web terminal
CN107911739A (en) * 2017-10-25 2018-04-13 北京川上科技有限公司 A kind of video acquiring method, device, terminal device and storage medium
CN108391142B (en) * 2018-03-30 2019-11-19 腾讯科技(深圳)有限公司 A kind of method and relevant device of video source modeling
CN108449651B (en) * 2018-05-24 2021-11-02 腾讯科技(深圳)有限公司 Subtitle adding method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a Cloud Editing System for Mobile Video; Zhou Li; China Master's Theses Full-text Database (Information Science and Technology Series); 20150615; full text *

Also Published As

Publication number Publication date
CN110891192A (en) 2020-03-17

Similar Documents

Publication Publication Date Title
CN106791893B (en) Video live broadcasting method and device
CN107729522B (en) Multimedia resource fragment intercepting method and device
CN108093315B (en) Video generation method and device
CN108259991B (en) Video processing method and device
CN106911967B (en) Live broadcast playback method and device
CN107948708B (en) Bullet screen display method and device
CN107820131B (en) Comment information sharing method and device
CN108260020B (en) Method and device for displaying interactive information in panoramic video
CN109947981B (en) Video sharing method and device
CN110519655B (en) Video editing method, device and storage medium
CN107277628B (en) video preview display method and device
CN108924644B (en) Video clip extraction method and device
CN107122430B (en) Search result display method and device
CN108495168B (en) Bullet screen information display method and device
CN110493627B (en) Multimedia content synchronization method and device
CN110234030B (en) Bullet screen information display method and device
CN109063101B (en) Video cover generation method and device
CN108174269B (en) Visual audio playing method and device
CN112188230A (en) Virtual resource processing method and device, terminal equipment and server
CN111182328B (en) Video editing method, device, server, terminal and storage medium
CN106998493B (en) Video previewing method and device
CN108521579B (en) Bullet screen information display method and device
CN109756783B (en) Poster generation method and device
CN109151553B (en) Display control method and device, electronic equipment and storage medium
CN108574860B (en) Multimedia resource playing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200430

Address after: 310052 room 508, floor 5, building 4, No. 699, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Applicant after: Alibaba (China) Co.,Ltd.

Address before: 200241, room 2, floor 02, building 555, Dongchuan Road, Minhang District, Shanghai

Applicant before: CHUANXIAN NETWORK TECHNOLOGY (SHANGHAI) Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240619

Address after: 101400 Room 201, 9 Fengxiang East Street, Yangsong Town, Huairou District, Beijing

Patentee after: Youku Culture Technology (Beijing) Co.,Ltd.

Country or region after: China

Address before: 310052 room 508, 5th floor, building 4, No. 699 Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: Alibaba (China) Co.,Ltd.

Country or region before: China