WO2019120013A1 - Video editing method, apparatus and smart mobile terminal

Video editing method, apparatus and smart mobile terminal (视频编辑方法、装置及智能移动终端)

Info

Publication number
WO2019120013A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
edited
frame
display area
preset
Application number
PCT/CN2018/115916
Other languages
English (en)
French (fr)
Inventor
张奇
Original Assignee
北京达佳互联信息技术有限公司
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2019120013A1
Priority to US16/906,761 (US11100955B2)
Priority to US17/381,842 (US11568899B2)

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements
    • G11B 27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B 27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B 27/036 Insert-editing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/47 End-user applications
    • H04N 21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47217 End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • the embodiment of the present invention relates to the field of video editing, and in particular, to a video editing method, device, and smart mobile terminal.
  • Video editing refers to the process of recording the desired footage with a camera and then using video editing software on a computer to make the recorded video into a finished work, such as a disc.
  • When the video is edited on the smart mobile terminal, the user cannot observe the specific editing position due to the limited space of the operation interface of the smart mobile terminal. That is to say, in the prior art, the user cannot accurately select the video frame on which video editing is desired.
  • In addition, although the smart mobile terminal can display the edited segment and the original video in a split screen, that is, in the prior art the smart mobile terminal performs split-screen processing on the display screen, with one of the two resulting screen portions used to play the original video and the other used to display the video frame pictures that the user has selected for video editing, this greatly compresses the space of the existing editing interface, so that the user cannot accurately tap the function buttons in the operation interface, resulting in an increased editing error rate.
  • The inventor of the present application found in research that, in the video editing technology of the prior art, when an editing segment is selected from the original video, the user cannot accurately determine the position of the video segment to be edited, and therefore the content of the video clip to be edited cannot be accurately determined, so that the error rate during editing is high and the user cannot accurately perform video editing.
  • the embodiment of the present application provides a video editing method, an apparatus, and an intelligent mobile terminal that can represent a duration of an original video by using a display area, and can display a duration of a video to be edited in the display area.
  • A technical solution adopted by the embodiment of the present application is to provide a video editing method, where the method includes: acquiring an editing instruction to be executed by a user; selecting, according to the editing instruction, a frame image representing a duration of a video segment to be edited in a preset display area, wherein the display area is used to display frame images captured, according to a preset time span, from the original video corresponding to the video segment to be edited; and rendering the frame image by calling a preset rendering script, so that the rendered frame image is highlighted within the display area.
  • The step of selecting, according to the editing instruction, a frame image representing the duration of a video segment to be edited in a preset display area includes: selecting, according to the editing instruction, at least one frame picture interval representing the duration of the video segment to be edited in the preset display area.
  • The step of invoking a preset rendering script to render the frame image so that the rendered frame image is highlighted in the display area includes: invoking the preset rendering script to render the frame picture interval, so that the rendered frame picture interval is highlighted within the display area.
  • The method further includes: saving the video segment to be edited in a preset first storage area; and saving the reserved video segment in a preset second storage area.
  • The method further includes the following steps: sequentially capturing frame images of the reserved video segment according to a preset first time span; performing image scaling processing on the frame images at a preset scaling ratio; and arranging the scaled frame images in the display area in the order in which they were captured, so that frame images representing the duration of the reserved video segment are displayed in the display area.
  • The method further includes: acquiring a revocation instruction to be executed by the user; extracting, according to the revocation instruction, a video segment to be edited stored in the first storage area in a stack manner; and inserting the extracted video segment to be edited into the second storage area.
  • The step of inserting the extracted video segment to be edited into the second storage area includes: acquiring a start time of the video segment to be edited; determining, in the reserved video segment, a start insertion time of the video segment to be edited according to the start time; and inserting the video segment to be edited at the start insertion time of the reserved video segment.
  • The step of selecting, according to the editing instruction, a frame image representing the duration of a video segment to be edited in a preset display area includes: acquiring a start position and an end position in the display area indicated by the editing instruction; determining a start frame picture image according to the start position, and determining an end frame picture image according to the end position; and determining the frame picture interval whose start frame is the start frame picture image and whose end frame is the end frame picture image as the frame picture images representing the duration of the video segment to be edited.
  • The method further includes: acquiring a save instruction to be executed by the user; and, according to the save instruction, deleting the video information in the first storage area and storing the video information in the second storage area in a local storage space.
  • the embodiment of the present application further provides a video editing apparatus, where the apparatus includes:
  • An obtaining module configured to acquire an editing instruction to be executed by a user;
  • An execution module configured to select, according to the editing instruction, a frame image representing a duration of a video segment to be edited in a preset display area, wherein the display area is used to display frame images captured, according to a preset time span, from the original video corresponding to the video segment to be edited;
  • a generating module configured to invoke a preset rendering script to render the frame image, so that the rendered frame image is highlighted in the display area.
  • the execution module includes a first execution sub-module, configured to select, according to the editing instruction, at least one frame picture interval that represents a duration of a video segment to be edited in a preset display area;
  • the generating module includes a first generating submodule, configured to invoke a preset rendering script to render the frame picture interval, so that the rendered frame picture interval is highlighted in the display area.
  • the video editing device further includes:
  • a first storage module configured to save the video segment to be edited in a preset first storage area after the preset rendering script renders the frame image so that the rendered frame image is highlighted in the display area;
  • a second storage module configured to save the reserved video segment in a preset second storage area.
  • the video editing device further includes:
  • a first calling module configured to sequentially capture a frame image of the reserved video segment according to a preset first time span
  • a first scaling module configured to perform image scaling processing on the frame image by a preset scaling ratio
  • a second generation sub-module configured to arrange the scaled frame images in the display area in the order in which they were captured, so that frame images representing the duration of the reserved video segment are displayed in the display area.
  • the video editing device further includes:
  • a second obtaining module configured to acquire a revocation instruction of the user to be executed
  • a second calling module configured to extract, according to the revocation instruction, a video segment to be edited stored in the first storage area by way of stacking
  • a third storage module configured to insert the extracted video segment to be edited into the second storage area.
  • the third storage module includes:
  • a third obtaining submodule configured to acquire a start time of the video segment to be edited
  • a first determining submodule configured to determine, in the reserved video segment, a starting insertion time of the to-be-edited video segment according to the start time
  • a video insertion sub-module for inserting the to-be-edited video clip at the initial insertion time of the reserved video clip.
  • the obtaining module includes:
  • a fourth obtaining submodule configured to acquire a starting position and an ending position in the display area indicated by the editing instruction
  • a second execution submodule configured to determine a start frame picture image according to the start position, and acquire an end frame picture image according to the end position;
  • an image determining sub-module configured to determine the frame picture interval whose start frame is the start frame picture image and whose end frame is the end frame picture image as the frame picture images representing the duration of the video segment to be edited.
  • the video editing device further includes:
  • a fifth obtaining module configured to acquire a save instruction of the user to be executed
  • a third execution module configured to delete the video information in the first storage area according to the save instruction, and store the video information in the second storage area in a local storage space.
  • The embodiment of the present application further provides an intelligent mobile terminal, including:
  • one or more processors;
  • a memory; and
  • one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the video editing method described above.
  • The embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium stores a video editing program, and the video editing program, when executed by a processor, implements the steps of any of the above video editing methods.
  • The embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to perform the steps of any of the above video editing methods.
  • The beneficial effects of the embodiment of the present application are as follows: when video editing is performed, the duration of the original video is represented by the display area in the video editing area, and when the user performs video editing, the corresponding frame picture images representing the duration of the video segment to be edited are selected in the display area, so that the corresponding duration of the video to be edited can be determined.
  • the selected frame picture image is script rendered so that it can be displayed differently from other frame picture images, and the selected video segment to be edited is displayed to the user in an intuitive manner, so that the user can determine the position of the video to be edited.
  • This editing method can increase the utilization rate in a limited space, and is convenient for the user to browse and operate.
  • the frame picture image representing the duration of the video segment to be edited is selected in the display area, and the selected frame picture image is highlighted.
  • the user can intuitively and accurately determine the content of the video to be edited in the original video, thereby reducing the error rate during editing and enabling the user to accurately edit the video.
  • FIG. 1 is a schematic diagram of a basic process of a video editing method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a video editing page provided in the embodiment of the present application;
  • FIG. 3 is a schematic flowchart diagram of another embodiment of a video editing method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart of video clipping according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of updating the display area according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a revocation procedure according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of inserting a video segment to be edited into a reserved video segment according to an embodiment of the present disclosure
  • FIG. 8 is a schematic flowchart of determining a duration of a video segment to be edited according to an embodiment of the present disclosure
  • FIG. 9 is a schematic flowchart of a save program according to an embodiment of the present application.
  • FIG. 10 is a block diagram showing a basic structure of a video editing apparatus according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a basic structure of an intelligent mobile terminal according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a basic process of a video editing method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a video editing page provided by this embodiment.
  • the display area of the smart mobile terminal includes: a video display area and a display area.
  • the upper half of the display area after the split screen of the smart mobile terminal may be a video display area
  • the lower half area may be a display area.
  • the display area of the smart mobile terminal including the video display area and the display area may be referred to as a video editing area.
  • the video display area is used to play the original video or the video to be edited. When the playback is paused, the video display area displays the video frame picture at the stop of the playback progress.
  • The original video is a video that has been captured by the camera and has not yet been edited.
  • The video to be edited is a video segment that the user has selected for editing in the original video. When the user has not yet performed a cut operation on the video segment to be edited, the video segment to be edited can be played in the video display area.
  • When the video segment to be edited is playing in the video display area, the user can edit it again; that is, the user can select, within the video segment to be edited, a further segment for secondary editing. In that case, the video segment to be edited that is currently playing in the video display area can be regarded as the original video of the segment to be subjected to secondary editing.
  • the display area displays a number of frame image images.
  • the frame image of the original video is periodically acquired according to a preset time span, and the captured frame image is displayed in the display area.
  • For example, if the length of the original video is 36 s and a frame image is extracted with a time span of 2 s, the frame images are displayed in the display area in the order of extraction, and each frame image represents a 2 s video picture. The display area is therefore, in effect, a progress bar of the original video, and the play position of the original video indicated by the progress pointer can be obtained from the play time in the original video corresponding to the frame picture image at the position of the progress pointer in the display area.
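  • For illustration only, the sampling described above can be sketched in a few lines of Kotlin; the names FrameThumb and sampleThumbnails are assumed for this sketch and are not identifiers from the embodiment.

```kotlin
// Minimal sketch: one thumbnail per fixed time span, so that the display area
// doubles as a progress bar of the original video.
data class FrameThumb(val startMs: Long, val spanMs: Long)

fun sampleThumbnails(videoDurationMs: Long, spanMs: Long): List<FrameThumb> =
    (0 until videoDurationMs step spanMs).map { t -> FrameThumb(t, spanMs) }

fun main() {
    // A 36 s video sampled with a 2 s span yields 18 thumbnails,
    // each representing a 2 s slice of the original video.
    println(sampleThumbnails(36_000L, 2_000L).size) // 18
}
```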
  • a video editing method includes the following steps:
  • the user edits the captured or locally stored video using the smart mobile terminal.
  • After entering the editing state, the smart mobile terminal receives click or slide instructions sent by the user through a finger or a stylus to perform video editing. Therefore, in the video editing state, the user's slide instruction or click instruction is the editing instruction to be executed.
  • the display area of the smart mobile terminal can be as shown in FIG. 2, including the video display area and the display area.
  • Specifically, after the smart mobile terminal enters the editing state, the user can trigger an editing instruction to be executed by clicking or sliding on the display area with a finger or a stylus, so that the smart mobile terminal acquires the user's pending editing instruction after detecting the user's click or slide operation on the display area.
  • the instruction to be executed may indicate which of the plurality of video frame pictures displayed by the display area are frame picture images representing the duration of the video segment to be edited.
  • the display area is used to display a frame picture image collected in the original video corresponding to the to-be-edited video segment according to a preset time span.
  • The editing instruction can be a click instruction or a slide instruction from the user. By selecting one frame picture image or several consecutive frame picture images according to the editing instruction sent by the user, and obtaining the start time represented by the initial frame picture image and the end time represented by the end frame picture image, the duration of the video segment to be edited can be calculated, that is, the duration of the video that the user's editing instruction has selected for editing.
  • the smart mobile terminal can respond to the editing instruction. Since the editing instruction can indicate which frame picture images of the plurality of video frame pictures displayed by the display area are frame picture images representing the duration of the video segment to be edited, the smart mobile terminal can be in response to the editing instruction. A frame picture image representing the duration of the video segment is selected in the display area. Furthermore, based on the start time of the selected initial frame picture image representation and the end time of the end frame picture image representation, the duration of the video segment to be edited can be calculated.
  • The smart mobile terminal can determine the play time corresponding to the video segment to be edited in the original video, and then obtain the video segment to be edited from the original video according to the determined play time.
  • the video segment to be edited can be determined in the original video according to the playing time in the original video corresponding to the selected frame picture image.
  • The start frame picture image is the first frame picture image among the selected plurality of frame picture images, and the end frame picture image is the last frame picture image among the selected plurality of frame picture images. The duration of the video clip to be edited is the duration represented by these frame picture images.
  • S1300 Call a preset rendering script to render the frame image, so that the rendered frame image is highlighted in the display area.
  • After the frame images representing the duration of the video segment selected for editing by the user are obtained, the selected frame images need to be rendered, so that the rendered frame images are displayed differently from the unrendered frame images.
  • the user can intuitively observe the position and length of the pre-selected video clip to be edited in the original video.
  • the rendering script is a preset program for rendering the selected frame image.
  • The rendering modes that the rendering script can apply include (but are not limited to): covering the frame image with a colored translucent mask, changing the border color of the frame image, or enlarging the selected frame image for display.
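  • As an illustrative sketch only, one of the rendering modes mentioned above (covering the selected thumbnail with a colored translucent mask) could be written against the Android graphics API as follows; the embodiment does not prescribe this particular API, and highlightThumbnail is an assumed name.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint

// Sketch: overlay a colored translucent mask on a selected thumbnail so that
// it is displayed differently from the unrendered thumbnails.
fun highlightThumbnail(src: Bitmap): Bitmap {
    val out = src.copy(Bitmap.Config.ARGB_8888, true)                 // mutable copy
    val mask = Paint().apply { color = Color.argb(96, 255, 200, 0) }  // roughly 38% alpha
    Canvas(out).drawRect(0f, 0f, out.width.toFloat(), out.height.toFloat(), mask)
    return out
}
```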
  • In this embodiment, the duration of the original video is represented by the display area in the video editing area. When performing video editing, the user selects in the display area the corresponding frame picture images representing the duration of the video segment to be edited, so that the duration of the corresponding video to be edited can be determined.
  • the selected frame picture image is script rendered so that it can be displayed differently from other frame picture images, and the selected video segment to be edited is displayed to the user in an intuitive manner, so that the user can determine the position of the video to be edited.
  • This editing method can increase the utilization rate in a limited space, and is convenient for the user to browse and operate.
  • the frame picture image representing the duration of the video segment to be edited is selected in the display area, and the selected frame picture image is highlighted.
  • the user can intuitively and accurately determine the content of the video to be edited in the original video, thereby reducing the error rate during editing, so that the user can accurately edit the video.
  • The video clip to be edited in the embodiment of the present application can be (but is not limited to) a video clip selected by the user for adding an effect, a video clip selected by the user to be deleted from the original video, or a video clip that the user selects to be reserved in the original video, and so on.
  • FIG. 3 is a schematic flowchart diagram of another embodiment of a video editing method according to an embodiment.
  • the embodiment may include:
  • S1111 Select at least one frame picture interval that represents a duration of the video segment to be edited in the preset display area according to the editing instruction;
  • A plurality of consecutive frame picture images are selected according to the user's sliding instruction, and the duration of the video segment to be edited can be calculated by acquiring the start time represented by the initial frame picture image and the end time represented by the end frame picture image, that is, the duration of the video that the user's editing instruction has selected for editing.
  • the edit instruction of the user to be executed acquired by the smart mobile terminal is triggered by the sliding operation of the display area in the video editing area of the smart mobile terminal by the user using a finger or a stylus.
  • The user selects, with a finger or a stylus, a frame picture image displayed in the display area and, starting from that frame picture image, slides across the frame picture images in sequence in their front-to-back playback order in the original video until the sliding stops and the finger or stylus leaves the display screen of the smart terminal.
  • the frame picture image that the user passes during the process of sliding the finger or the stylus in the display area constitutes at least one frame picture interval representing the duration of the video segment to be edited.
  • The frame picture image at which the user starts the selection is the start frame picture image, and the frame picture image at which the finger or the stylus stops is the end frame picture image.
  • The smart mobile terminal can determine the play time corresponding to the video segment to be edited in the original video, and then obtain the video segment to be edited from the original video according to the determined play time.
  • For example, if each frame image in the display area represents a video duration of 2 s, the start time in the original video represented by the user-selected start frame picture is 6 s, and the user's sliding instruction selects four consecutive frame images, then the duration of the video to be edited is 8 s, and its position in the original video is the video clip with a start time of 6 s and an end time of 14 s.
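  • The arithmetic behind this example can be sketched as follows (illustrative names; an index of 3 corresponds to the thumbnail whose represented start time is 6 s when the span is 2 s).

```kotlin
// Sketch of the selection arithmetic in the example above.
fun selectedClipMs(startFrameIndex: Int, frameCount: Int, spanMs: Long): Pair<Long, Long> {
    val startMs = startFrameIndex * spanMs     // 3 * 2000 ms = 6000 ms
    val endMs = startMs + frameCount * spanMs  // 6000 ms + 4 * 2000 ms = 14000 ms
    return startMs to endMs                    // selected duration = 8000 ms
}
```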
  • S1112 Calling a preset rendering script to render the frame picture interval, so that the rendered frame picture interval is highlighted in the display area.
  • the selected frame picture interval needs to be rendered, so that the rendered frame picture interval is different from the unrendered frame picture image.
  • the user can intuitively observe the position and length of the pre-selected video clip to be edited in the original video.
  • the rendering script is a preset program for rendering the selected frame image.
  • The rendering modes that the rendering script can apply include (but are not limited to): covering the frame image with a colored translucent mask, changing the border color of the frame image, or enlarging the selected frame image for display.
  • That is, step S1200 in the video editing method shown in FIG. 1 above, selecting a frame image representing the duration of the video segment to be edited in a preset display area, includes: S1111, selecting, according to the editing instruction, at least one frame picture interval representing the duration of the video segment to be edited in the preset display area; and step S1300, calling a preset rendering script to render the frame picture image so that the rendered frame picture image is highlighted in the display area, includes: S1112, calling a preset rendering script to render the frame picture interval, so that the rendered frame picture interval is highlighted within the display area.
  • The video clip to be edited in the embodiment of the present application can be (but is not limited to) a video clip selected by the user for adding an effect, a video clip selected by the user to be deleted from the original video, or a video clip that the user selects to be reserved in the original video, and so on.
  • the video clip to be edited is selected for video cropping, ie the video clip to be edited will be deleted from the original video.
  • FIG. 4 is a schematic flowchart of video clipping provided by this embodiment.
  • After step S1300, the following steps are further included:
  • S1211 save the to-be-edited video segment in a preset first storage area
  • the video segment to be edited is actually cut, and after the video segment to be edited is acquired, the video segment to be edited is stored in the preset first storage area.
  • the smart mobile terminal After selecting the frame picture image representing the duration of the video segment to be edited in the display area, the smart mobile terminal can determine the video segment to be edited in the original video. In addition, the user can send a video cut instruction to the smart mobile terminal by performing a predetermined operation on the smart mobile terminal, so that the smart mobile terminal saves the to-be-edited video clip to the preset first after acquiring the video cut instruction. Within the storage area.
  • the user can trigger a video cut instruction by clicking a designated button within the video editing area of the smart mobile terminal; for example, the user can trigger a video cut instruction by performing a preset sliding operation within the video editing area of the smart mobile terminal.
  • The first storage area is a cache area of the smart mobile terminal, that is, part of the RAM (random access memory) of the smart mobile terminal. The data stored in this cache area is cleared when power is lost or according to a preset erasing procedure (for example, when the user chooses not to save the video, or closes the application while the video is not saved), after which the data saved in the cache area is completely deleted.
  • the application sets a part of the cache space in the cache area of the smart mobile terminal as the first storage area, and specifies that the acquired video clip to be edited is stored in the first storage area.
  • S1212 The reserved video segment is saved in a preset second storage area.
  • The video clip to be edited is the clip that is cut out, and the video other than the video clip to be edited is the reserved video clip. When the video clip to be edited is actually cut, the reserved video clip is stored in the preset second storage area.
  • the frame picture image that is not highlighted in the display area is a frame picture image that characterizes the duration of the reserved video segment.
  • the smart mobile terminal can determine the reserved video segment in the original video. Further, when the smart mobile terminal saves the video clip to be edited into the preset first storage area, the reserved video clip can be saved into the preset second storage area.
  • The second storage area is a cache area of the smart mobile terminal, that is, part of the RAM of the smart mobile terminal. The data stored in this cache area is cleared when power is lost or according to a preset erasing procedure (for example, when the user chooses not to save the video, or closes the application while the video is not saved), after which the data saved in the cache area is completely deleted.
  • the application sets a part of the cache space in the cache area of the smart mobile terminal as the second storage area, and specifies that the acquired reserved video clip is stored in the second storage area.
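  • As an illustrative sketch (class and member names are assumptions, not taken from the embodiment), the two cache areas and the cut operation could be modelled as follows.

```kotlin
// Sketch: the cut clip goes into the first (cache) storage area and the
// reserved video into the second; both live in RAM until an explicit save.
data class Clip(val startMs: Long, val endMs: Long)

class EditSession {
    val firstStorageArea = ArrayDeque<Clip>()        // cut clips, stacked for possible undo
    var secondStorageArea: List<Clip> = emptyList()  // the reserved video

    fun cut(toEdit: Clip, reserved: List<Clip>) {
        firstStorageArea.addLast(toEdit)
        secondStorageArea = reserved
    }
}
```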
  • the reserved video clip needs to be re-displayed, and the content of the display area needs to be updated.
  • FIG. 5 is a schematic flowchart of a method for updating the display area according to the embodiment.
  • After step S1212, the following steps are further included:
  • S1221 sequentially capture a frame image of the reserved video segment according to a preset first time span
  • the first time span is a time interval for acquiring a frame image of the reserved video segment, such as a frame image of a reserved video segment every 1.5 s.
  • the duration of the first time span is not limited thereto.
  • the duration of the first time span can be longer or shorter, and the criterion of the selection is limited by the duration of the reserved video. The longer the duration of the reserved video, the longer the duration of the first time span.
  • the frame picture images of the reserved video segments are sequentially acquired according to the first time span.
  • the captured frame image is scaled by a preset ratio, and the scaling ratio needs to be determined according to the size of the container for displaying the frame image in the display area. For example, if the ratio of the container to the frame is 1:9, the frame image is reduced by a factor of nine.
  • The scaled frame images are sequentially arranged in the display area in the order in which they were captured, so that frame images representing the duration of the reserved video clip are displayed in the display area.
  • the zoomed frame image is sequentially arranged in the display area in the order of extraction to complete the update of the display area.
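  • For illustration, the scaling step could be sketched with the Android Bitmap API as follows; the 1/9 ratio follows the example above, and buildDisplayStrip is an assumed name.

```kotlin
import android.graphics.Bitmap

// Sketch: scale each sampled thumbnail of the reserved clip by the preset
// ratio, keeping the sampling order so the strip can be laid out directly.
fun buildDisplayStrip(reservedThumbs: List<Bitmap>, scale: Float = 1f / 9f): List<Bitmap> =
    reservedThumbs.map { b ->
        Bitmap.createScaledBitmap(
            b,
            (b.width * scale).toInt().coerceAtLeast(1),
            (b.height * scale).toInt().coerceAtLeast(1),
            true // filter for smoother downscaling
        )
    }
```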
  • At this time, the display area represents a progress bar of the duration of the reserved video.
  • That is, the display area may no longer display the frame picture images representing the duration of the video segment to be edited; at this time, the frame picture images displayed in the display area are frame picture images representing the duration of the reserved video clip.
  • the frame picture image displayed in the display area has a corresponding relationship with the video segment stored in the second storage area, that is, the frame picture image displayed in the display area is a frame that represents the duration of the video segment stored in the second storage area. Picture image.
  • the frame image representing the duration of the reserved video clip can continue to be displayed in the display area.
  • the frame picture image of the original video within the display area is updated to a frame picture image characterizing the duration of the reserved video segment.
  • the smart mobile terminal may perform the foregoing steps S1221-S1223 immediately after saving the reserved video clip in the preset second storage area, or after saving the reserved video clip in the preset second storage area, Receiving an area update instruction sent by the user to the smart mobile terminal by performing a predetermined operation on the smart mobile terminal, so that after acquiring the area update instruction, the smart mobile terminal displays the frame picture image displayed in the display area from the frame picture of the original video. The image is updated to a frame picture image that characterizes the duration of the reserved video segment.
  • FIG. 6 is a schematic flowchart of the revocation procedure provided by the embodiment.
  • As shown in FIG. 6, after step S1212, the following steps are further included:
  • the smart mobile terminal receives the click or slide command sent by the user through the finger or the stylus to perform the undo operation. Therefore, in the video editing state, the user's swipe command or click command in the undo area is the pending revocation command.
  • Specifically, the smart mobile terminal can receive the revocation instruction to be executed, triggered by the user's click or slide operation on the display screen of the smart mobile terminal with a finger or a stylus.
  • the to-be-executed revocation instruction may indicate which of the to-be-edited video segments stored in the first storage area are video segments that need to be recalled.
  • The operated area of the display screen may be the video display area or the display area; either is reasonable.
  • S1232 Extract, according to the revocation instruction, a video segment to be edited stored in the first storage area by way of stacking;
  • There can be a plurality of video clips to be edited in the first storage area, and the times at which these video clips entered the first storage area form a sequence. According to the stack principle, that is, the first-in, last-out principle, the video clip to be edited that was moved into the first storage area first is, after the revocation instruction is received, the last to be moved out of the first storage area in response to the corresponding revocation instruction.
  • the smart mobile terminal can respond to the revocation instruction. Since the revocation instruction indicates which of the to-be-edited video segments stored in the first storage area are video segments that need to be retracted, the smart mobile terminal can follow the stacking principle according to the stacking principle, according to the revocation instruction. Editing the storage time of the video clip starts from the last stored video clip to be edited, and sequentially extracts the video clip to be edited indicated by the revocation command from the first storage area.
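  • A minimal sketch of this first-in, last-out withdrawal is given below; the names are illustrative and the clip data is simplified to a time range.

```kotlin
// Sketch: the most recently cut clip is popped first; clips cut earlier are
// only restored by later revocation instructions.
data class Clip(val startMs: Long, val endMs: Long)

fun undoLastCut(firstStorageArea: ArrayDeque<Clip>): Clip? =
    firstStorageArea.removeLastOrNull()
```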
  • the video clip to be edited extracted from the first storage area is inserted into the corresponding location of the reserved video clip in the second storage area. After the insertion, steps S1221-S1223 are performed.
  • the smart mobile terminal can insert the extracted video segment to be edited into the corresponding location of the reserved video segment in the second storage area.
  • Since the frame picture images displayed in the display area correspond to the video segment stored in the second storage area, that is, the frame picture images displayed in the display area are frame picture images representing the duration of the video segment stored in the second storage area, the frame picture images displayed by the smart mobile terminal can be updated again after the extracted video segment to be edited is inserted into the second storage area, so that after the update the display area shows the frame picture images representing the duration of the video segment obtained after the video segment to be edited has been inserted in the second storage area.
  • FIG. 7 is a schematic flowchart of inserting a video segment to be edited into a reserved video segment according to the embodiment.
  • step S1233 includes the following steps:
  • The start time of the video clip to be edited is read, that is, the moment in the reserved video clip to which the start picture of the video clip to be edited corresponds; for example, the moment in the reserved video clip corresponding to the start picture of the video to be edited is 5 s.
  • the smart mobile terminal can simultaneously extract the segment duration of the video segment to be edited.
  • S1242 Determine a start insertion time of the to-be-edited video segment in the reserved video segment according to the start time, and insert the to-be-edited video segment at the initial insertion time of the reserved video segment.
  • The start insertion time is the position in the reserved video clip at which the video clip to be edited is to be inserted; for example, if the moment in the reserved video clip corresponding to the start picture of the video to be edited is 5 s, the start insertion time is 5 s.
  • the smart mobile terminal may further determine the end insertion time of the video segment to be edited according to the initial insertion time and the segment duration. For example, the initial insertion time of the video clip to be edited is 5s, and the clip duration is 6s, and the end insertion time is 11s.
  • the smart mobile terminal After determining the initial insertion time, the smart mobile terminal can insert the video clip to be edited at the initial insertion time of the reserved video clip.
  • the reserved video segment is saved in the second storage area.
  • The play duration between the start insertion time and the end insertion time is the same as the clip duration of the video clip to be edited.
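  • The insertion-time arithmetic of the example above (a 5 s start time and a 6 s clip giving an 11 s end insertion time) can be sketched as follows; WithdrawnClip and insertionWindow are assumed names.

```kotlin
// Sketch: the start insertion time is the clip's original start time relative
// to the reserved video, and the end insertion time is that start plus the clip duration.
data class WithdrawnClip(val startMs: Long, val durationMs: Long)

fun insertionWindow(clip: WithdrawnClip): Pair<Long, Long> =
    clip.startMs to (clip.startMs + clip.durationMs)   // e.g. 5000 ms to 11000 ms
```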
  • FIG. 8 is a schematic flowchart of determining a duration of a video segment to be edited according to an embodiment.
  • step S1200 specifically includes the following steps:
  • S1251 Acquire a starting position and an ending position in the display area indicated by the editing instruction
  • the start coordinate and the end coordinate in the display area indicated by the user slide instruction are obtained, that is, the start position and the end position in the display area indicated by the edit instruction can be obtained.
  • the starting coordinate corresponds to the starting position and the ending coordinate corresponds to the ending position.
  • In step S1251, when the user uses a finger or a stylus to trigger the editing instruction through a sliding operation on the display area, the specific operation by which the user sends the editing instruction is: the user starts from one position in the display area with the finger or the stylus and slides to another position; the coordinates of the position where the sliding starts are the start coordinates of the editing instruction in the display area, and the coordinates of the position where the sliding stops are the end coordinates of the editing instruction in the display area.
  • S1252 Determine a start frame picture image according to the start position, and determine an end frame picture image according to the end position;
  • The start frame picture image is obtained from the frame picture image corresponding to the start position coordinate in the display area, and the end frame picture image is obtained from the frame picture image corresponding to the end position in the display area.
  • That is, the frame picture image in the display area corresponding to the start position coordinate can be determined as the start frame picture image, and the frame picture image in the display area corresponding to the end position can be determined as the end frame picture image.
  • When the user triggers the editing instruction by clicking, the specific operation by which the user sends the editing instruction is: the user uses a finger or a stylus to start from a frame image in the display area and successively clicks multiple frame images; the first frame image the user clicks is the start frame image, and the last frame image the user clicks is the end frame image.
  • S1253 Determine the frame picture interval whose start frame is the start frame picture image and whose end frame is the end frame picture image as the frame picture images representing the duration of the video segment to be edited.
  • Specifically, the smart mobile terminal may determine the frame picture interval whose start frame is the start frame picture image and whose end frame is the end frame picture image as the frame picture images representing the duration of the video clip to be edited.
  • In addition, the start time information represented by the start frame picture image and the end time information represented by the end frame picture image may be acquired, and the difference between the end time information and the start time information may be determined as the duration of the video clip to be edited.
  • That is, the frame picture interval in which the start frame picture image is the start frame and the end frame picture image is the end frame is determined as the frame picture images representing the duration of the video segment to be edited.
  • For example, if the length of the original video is 36 s and a frame image is extracted using 2 s as the time span, the frame images are displayed in the display area in the order of extraction; the time represented by the first frame image is 0 s, the time represented by the second frame image is 2 s, and so on. From the difference between the time information of the start frame picture image and that of the end frame picture image, the duration of the video segment to be edited can be calculated.
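  • A sketch of this mapping from slide coordinates to a clip duration is given below; thumbWidthPx and clipDurationMs are assumed names, and the end frame's represented span is counted as included, consistent with the four-frame, 8 s example earlier.

```kotlin
// Sketch: map the slide gesture's start/end x-coordinates to thumbnail indices,
// and from the index difference derive the duration of the selected clip.
fun clipDurationMs(startXPx: Float, endXPx: Float, thumbWidthPx: Float, spanMs: Long): Long {
    val startIndex = (minOf(startXPx, endXPx) / thumbWidthPx).toInt()
    val endIndex = (maxOf(startXPx, endXPx) / thumbWidthPx).toInt()
    // The start frame represents startIndex * spanMs; the end frame's span is included.
    return (endIndex - startIndex + 1) * spanMs
}
```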
  • FIG. 9 is a schematic flowchart of a save program according to an embodiment.
  • The smart mobile terminal receives the click or slide instruction sent by the user through a finger or a stylus to perform the save operation. Therefore, in the video editing state, the user's slide instruction or click instruction in the save area is the save instruction to be executed.
  • Specifically, the smart mobile terminal can receive the save instruction to be executed, triggered by the user's click or slide operation on the display screen of the smart mobile terminal with a finger or a stylus.
  • the to-be-executed save instruction may indicate to which storage location of the smart mobile terminal the reserved video clip stored in the second storage area is stored, and may also indicate which video information in the first storage area is deleted.
  • The operated area of the display screen may be the video display area or the display area; either is reasonable.
  • S1312 Delete video information in the first storage area according to the save instruction, and store video information in the second storage area in a local storage space.
  • The smart mobile terminal can respond to the save instruction. Since the save instruction to be executed indicates to which storage location of the smart mobile terminal the reserved video stored in the second storage area is to be stored, and which video information in the first storage area is to be deleted, in response to the save instruction the smart mobile terminal can store the video information in the second storage area into the local storage space at the indicated location, and delete the indicated video information in the first storage area.
  • Since the first storage area and the second storage area are both cache areas, when the save is performed, the clips to be edited in the first storage area need to be deleted, and the reserved video clips in the second storage area are stored in the memory of the smart mobile terminal.
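  • A minimal sketch of this save step is shown below, under the simplifying assumptions that the cut clips are held in an in-memory list and that the reserved video has already been encoded to a byte array; saveEdit is an assumed name, not the embodiment's API.

```kotlin
import java.io.File

// Sketch: discard the cut clips held in the first (cache) area and persist the
// reserved video from the second area into local storage.
fun saveEdit(firstStorageArea: MutableList<ByteArray>,
             reservedVideoBytes: ByteArray,
             output: File) {
    firstStorageArea.clear()              // cut clips are no longer needed
    output.writeBytes(reservedVideoBytes) // store the reserved video locally
}
```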
  • FIG. 10 is a block diagram showing the basic structure of a video editing apparatus according to this embodiment.
  • a video editing apparatus includes: an obtaining module 2100, an executing module 2200, and a generating module 2300.
  • the obtaining module 2100 is configured to acquire a to-be-executed editing instruction of the user;
  • the executing module 2200 is configured to select, according to the editing instruction, a frame image that represents a duration of the video segment to be edited in the preset display area;
  • The generating module 2300 is configured to invoke a preset rendering script to render the frame image, so that the rendered frame image is highlighted in the display area.
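  • For illustration only, the relationship between these three modules could be sketched as below; the interface and function names are assumptions made for this sketch, not the embodiment's API.

```kotlin
// Sketch: the obtaining module captures the pending editing instruction, the
// execution module turns it into a selection of thumbnails, and the generating
// module invokes the rendering step that highlights that selection.
data class EditInstruction(val startXPx: Float, val endXPx: Float)
data class Selection(val startIndex: Int, val endIndex: Int)

interface ObtainingModule { fun acquire(): EditInstruction }
interface ExecutionModule { fun select(instruction: EditInstruction): Selection }
interface GeneratingModule { fun render(selection: Selection) }

class VideoEditingApparatus(
    private val obtaining: ObtainingModule,
    private val execution: ExecutionModule,
    private val generating: GeneratingModule
) {
    fun onEdit() {
        val instruction = obtaining.acquire()
        generating.render(execution.select(instruction))
    }
}
```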
  • the length of the original video is represented by the display area in the video editing area.
  • When the user performs video editing, by selecting the corresponding frame picture images representing the duration of the video segment to be edited in the display area, the duration of the corresponding video to be edited can be determined.
  • the selected frame picture image is script rendered so that it can be displayed differently from other frame picture images, and the selected video segment to be edited is displayed to the user in an intuitive manner, so that the user can determine the position of the video to be edited.
  • This editing method can increase the utilization rate in a limited space, and is convenient for the user to browse and operate.
  • the frame picture image representing the duration of the video segment to be edited is selected in the display area, and the selected frame picture image is highlighted.
  • the user can intuitively and accurately determine the content of the video to be edited in the original video, thereby reducing the error rate during editing, so that the user can accurately edit the video.
  • the execution module 2200 includes a first execution sub-module
  • the generation module 2300 includes a first generation sub-module.
  • the first execution sub-module is configured to select at least one frame picture interval that represents a duration of the video segment to be edited in the preset display area according to the editing instruction; the first generation sub-module is configured to invoke the preset rendering script to the frame picture. The interval is rendered so that the rendered frame frame interval is highlighted within the display area.
  • the video editing apparatus further includes: a first storage module and a second storage module.
  • The first storage module is configured to save the video segment to be edited in the preset first storage area after the preset rendering script renders the frame image so that the rendered frame image is highlighted in the display area.
  • the second storage module is configured to save the reserved video segment in the preset second storage area.
  • the video editing apparatus further includes: a first retrieval module, a first scaling module, and a second generation module.
  • the first calling module is configured to sequentially capture a frame image of the reserved video segment according to the preset first time span;
  • the first scaling module is configured to perform image scaling processing on the frame image according to a preset scaling ratio;
  • The second generation module is configured to arrange the scaled frame images in the display area in the order in which they were captured, so that frame images representing the duration of the reserved video segment are displayed in the display area.
  • the video editing apparatus further includes: a second acquisition module, a second retrieval module, and a third storage module.
  • the second obtaining module is configured to acquire a to-be-executed revocation instruction of the user;
  • the second retrieving module is configured to extract, according to the revocation instruction, the to-be-edited video segment stored in the first storage area according to a stack manner;
  • The third storage module is configured to insert the extracted video clip to be edited into the second storage area.
  • The foregoing third storage module includes: a third acquisition submodule, a first determination submodule, and a video insertion submodule.
  • the third obtaining sub-module is configured to obtain a start time of the video segment to be edited and a segment duration;
  • the first determining sub-module is configured to determine, in the reserved video segment, a starting insertion time of the to-be-edited video segment according to the starting time, and Determining an end insertion time of the to-be-edited video segment according to the duration of the segment; the video insertion sub-module, for inserting the to-be-edited video segment from the position corresponding to the initial insertion time in the second storage region, until The position corresponding to the end insertion time is stopped.
  • the obtaining module 2100 includes: a fourth acquiring submodule, a second executing submodule, and a first computing submodule.
  • the fourth obtaining sub-module is configured to obtain a starting position and an ending position of the editing instruction in the display area; the second executing sub-module is configured to determine a starting frame picture image according to the starting position, and obtain an ending frame picture according to the ending position. And an image; wherein a difference between the start time information of the start frame picture image representation and the end time information of the end frame picture image representation is a duration of the video segment to be edited.
  • the video editing apparatus further includes: a fifth obtaining module and a third executing module.
  • the fifth obtaining module is configured to acquire a save instruction of the user to be executed
  • the third execution module is configured to delete the video information in the first storage area according to the save instruction, and store the video information in the second storage area in the local storage.
  • FIG. 11 is a schematic diagram of a basic structure of an intelligent mobile terminal according to an embodiment of the present disclosure.
  • It should be noted that in this embodiment, all the programs of the video editing method are stored in the memory 1520 of the smart mobile terminal, and the processor 1580 can call the programs in the memory 1520 to perform all the functions listed for the video editing method. Since the functions of the smart mobile terminal are described in detail in the video editing method of this embodiment, details are not repeated here.
  • the length of the original video is represented by the display area in the video editing area.
  • When the user performs video editing, the corresponding frame images representing the duration of the video segment to be edited are selected in the display area, and the duration of the corresponding video to be edited can thereby be determined.
  • the selected frame picture image is script rendered so that it can be displayed differently from other frame picture images, and the selected video segment to be edited is displayed to the user in an intuitive manner, so that the user can determine the position of the video to be edited.
  • This editing method can increase the utilization rate in a limited space, and is convenient for the user to browse and operate.
  • the frame picture image representing the duration of the video segment to be edited is selected in the video editing process, and the selected frame picture image is highlighted.
  • the user can intuitively and accurately determine the content of the video to be edited in the original video, thereby reducing the error rate during editing, so that the user can accurately edit the video.
  • the embodiment of the present application further provides a smart mobile terminal.
  • a smart mobile terminal As shown in FIG. 11 , for the convenience of description, only the parts related to the embodiment of the present application are shown. For details that are not disclosed, refer to the method part of the embodiment of the present application.
  • The terminal may be any terminal device including a smart mobile terminal, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like. The following description takes the terminal being a smart mobile terminal as an example:
  • FIG. 11 is a block diagram showing a partial structure of an intelligent mobile terminal related to a terminal provided by an embodiment of the present application.
  • the smart mobile terminal includes: a radio frequency (RF) circuit 1510 , a memory 1520 , an input unit 1530 , a display unit 1540 , a sensor 1550 , an audio circuit 1560 , and a wireless fidelity (Wi-Fi) module 1570. , processor 1580, and power supply 1590 and other components.
  • those skilled in the art will understand that the smart mobile terminal structure shown in FIG. 11 does not constitute a limitation on the smart mobile terminal, which may include more or fewer components than those illustrated, combine some components, or have a different arrangement of components.
  • the RF circuit 1510 can be used for receiving and transmitting signals during the transmission or reception of information or during a call; in particular, after downlink information from the base station is received, it is passed to the processor 1580 for processing, and uplink data is sent to the base station.
  • RF circuit 1510 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • RF circuitry 1510 can also communicate with the network and other devices via wireless communication.
  • the above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 1520 can be used to store software programs and modules, and the processor 1580 executes various functional applications and data processing of the smart mobile terminal by running software programs and modules stored in the memory 1520.
  • the memory 1520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a voiceprint playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the smart mobile terminal (such as audio data, a phone book, etc.).
  • memory 1520 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input unit 1530 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the smart mobile terminal.
  • the input unit 1530 may include a touch panel 1531 and other input devices 1532.
  • the touch panel 1531, also referred to as a touch screen, can collect touch operations performed by the user on or near it (such as operations performed by the user on or near the touch panel 1531 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connecting device according to a preset program.
  • the touch panel 1531 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends the coordinates to the processor 1580; it can also receive commands sent by the processor 1580 and execute them.
  • the touch panel 1531 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 1530 may also include other input devices 1532.
  • other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
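To relate this input pipeline back to the editing method, the hedged sketch below shows one way the contact coordinates reported by a touch controller could be folded into the start and end positions of an editing instruction; the event types, the vertical-bounds check for the display area, and all names are assumptions of this sketch rather than part of the patent.

```kotlin
// Illustrative sketch only: a slide gesture that starts and ends inside the display
// area is reduced to a (startX, endX) pair, which the editing logic can then map to
// start and end frame picture images.
sealed class TouchEvent(val x: Float, val y: Float) {
    class Down(x: Float, y: Float) : TouchEvent(x, y)
    class Up(x: Float, y: Float) : TouchEvent(x, y)
}

class EditGestureDetector(
    private val displayAreaTop: Float,     // upper edge of the display area, in pixels
    private val displayAreaBottom: Float   // lower edge of the display area, in pixels
) {
    private var downX: Float? = null

    /** Returns a (startX, endX) pair once a slide inside the display area completes. */
    fun onTouch(event: TouchEvent): Pair<Float, Float>? {
        val insideDisplayArea = event.y in displayAreaTop..displayAreaBottom
        when (event) {
            is TouchEvent.Down -> downX = if (insideDisplayArea) event.x else null
            is TouchEvent.Up -> {
                val startX = downX ?: return null
                downX = null
                if (insideDisplayArea) return startX to event.x
            }
        }
        return null
    }
}
```

The resulting pair could then be fed to the position-to-duration mapping sketched earlier.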
  • the display unit 1540 can be used to display information input by the user or information provided to the user as well as various menus of the smart mobile terminal.
  • the display unit 1540 can include a display panel 1541.
  • the display panel 1541 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel 1531 may cover the display panel 1541; after the touch panel 1531 detects a touch operation on or near it, the operation is transmitted to the processor 1580 to determine the type of the touch event, and the processor 1580 then provides a corresponding visual output on the display panel 1541 according to the type of the touch event.
  • although the touch panel 1531 and the display panel 1541 are shown as two independent components that implement the input and output functions of the smart mobile terminal, in some embodiments the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the smart mobile terminal.
  • the smart mobile terminal may also include at least one type of sensor 1550, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1541 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 1541 and/or the backlight when the smart mobile terminal is moved close to the ear.
  • as one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually on three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used in applications that identify the posture of the smart mobile terminal (such as switching between landscape and portrait screens, related games, and magnetometer posture calibration) and in vibration-recognition related functions (such as a pedometer or tap detection); other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured on the smart mobile terminal, and details are not described herein.
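Purely as an illustration of the posture identification just mentioned, the sketch below decides landscape versus portrait from a still-device gravity reading; the axis convention and the simple comparisons are assumptions of this sketch, not part of this application.

```kotlin
import kotlin.math.abs

// Illustrative sketch only: when the device is still, the accelerometer reads roughly
// the gravity vector, so comparing its axis components gives a coarse posture decision.
enum class Posture { PORTRAIT, LANDSCAPE, FLAT }

fun postureFromGravity(gx: Float, gy: Float, gz: Float): Posture = when {
    abs(gz) > abs(gx) && abs(gz) > abs(gy) -> Posture.FLAT  // lying face up or face down
    abs(gy) >= abs(gx) -> Posture.PORTRAIT                  // gravity mostly along the long axis
    else -> Posture.LANDSCAPE
}
```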
  • An audio circuit 1560, a speaker 1561, and a microphone 1562 can provide an audio interface between the user and the smart mobile terminal.
  • the audio circuit 1560 can convert the received audio data into an electrical signal and transmit it to the speaker 1561, and the speaker 1561 converts it into a voiceprint signal for output; conversely, the microphone 1562 converts a collected voiceprint signal into an electrical signal, which is received by the audio circuit 1560 and converted into audio data; after the audio data is output to the processor 1580 for processing, it is sent via the RF circuit 1510 to, for example, another smart mobile terminal, or the audio data is output to the memory 1520 for further processing.
  • Wi-Fi is a short-range wireless transmission technology.
  • the smart mobile terminal can help users to send and receive emails, browse web pages and access streaming media through the Wi-Fi module 1570. It provides users with wireless broadband Internet access.
  • although FIG. 11 shows the Wi-Fi module 1570, it can be understood that it is not an essential component of the smart mobile terminal and can be omitted as needed without changing the essence of the invention.
  • the processor 1580 is the control center of the smart mobile terminal; it connects the various parts of the entire smart mobile terminal using various interfaces and lines, and performs the various functions of the smart mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 1520 and calling the data stored in the memory 1520, thereby monitoring the smart mobile terminal as a whole.
  • the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 1580.
  • the smart mobile terminal also includes a power source 1590 (such as a battery) for supplying power to various components.
  • the power source can be logically connected to the processor 1580 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the smart mobile terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the embodiment of the present application further provides a non-transitory computer readable storage medium including instructions, for example, the memory 1520 including instructions, where the instructions are executable by the processor 1580 of the smart mobile terminal to perform the above method.
  • the non-transitory computer readable storage medium may be, for example, a ROM (Read-Only Memory), a RAM, a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
  • when the instructions in the non-transitory computer readable storage medium are executed by a processor of a smart mobile terminal, the smart mobile terminal is enabled to perform the steps of any of the video editing methods described herein.
  • the embodiment of the present application provides a computer program product, when it is run on a computer, causing the computer to perform the steps of the video editing method described in any of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

Embodiments of the present application disclose a video editing method and apparatus and a smart mobile terminal: a to-be-executed editing instruction of a user is acquired; according to the editing instruction, frame picture images representing the duration of a video segment to be edited are selected in a preset display area; and a preset rendering script is called to render the frame picture images, so that the rendered frame picture images are highlighted in the display area. When the user edits a video, the duration of the video to be edited can be determined by selecting, in the display area, the corresponding frame picture images representing the duration of the video segment to be edited. The selected frame picture images are rendered by the script so that they are displayed differently from the other frame picture images, and the selected video segment to be edited is shown to the user intuitively, which makes it convenient for the user to determine the position of the video to be edited. This editing approach can increase the utilization rate of the limited space and is convenient for the user to browse and operate.

Description

视频编辑方法、装置及智能移动终端
本申请要求于2017年12月20日提交中国专利局、申请号为201711386533.8发明名称为“视频编辑方法、装置及智能移动终端”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及视频编辑领域,尤其是涉及一种视频编辑方法、装置及智能移动终端。
背景技术
视频编辑是指先用摄影机摄录下预期的影像,再在电脑上用视频编辑软件将摄录下的影像制作成碟片的编辑过程。当前,随着智能移动终端的处理能力越来越好,通过智能移动终端对拍摄的视频进行编辑成为视频编辑的新需求。
现有技术中,通过智能移动终端对视频进行编辑时,由于智能移动终端的操作界面空间的限制,用户无法观察到具体的编辑位置。也就是说,在现有技术中,用户无法准确选取到想要进行视频编辑的视频帧。此外,在编辑时,由于智能移动终端可以将编辑片段与原始视频进行分屏显示,也就是说,在现有技术中,智能移动终端会对显示屏进行分屏处理,分屏后的两部分屏幕中,一部分用于播放原始视频,另外一部分用于显示用户所选取到的、想要进行视频编辑的视频帧画面。这样,会大大压缩现有的编辑界面的空间,使用户无法准确点击到操作界面中的功能按键,造成编辑错误率上升。
本申请的发明人在研究中发现,在现有技术中的视频编辑技术中,当在原始视频中选择编辑片段时,由于用户无法准确地确定待编辑视频片段的位置,从而导致在编辑时用户无法准确地确定待编辑视频片段内容,使得编辑时错误率较高,用户无法准确地进行视频编辑。
发明内容
本申请实施例提供一种通过展示区域表征原始视频时长,且能够在显示区域显示待编辑视频时长的视频编辑方法、装置及智能移动终端。
为解决上述技术问题,本申请的实施例采用的一个技术方案是:提供一种视频编辑方法,所述方法包括:
获取用户的待执行编辑指令;
根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像;其中,所述展示区域用于展示按照预设时间跨度在所述待编辑视频片段对应的原始视频中采集到的帧画面图像;
调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示。
可选地,所述根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像的步骤,包括:
根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段时长的至少一个帧画面区间;
所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示的步骤,包括:
调用预设的渲染脚本对所述帧画面区间进行渲染,以使所述渲染后的帧画面区间在所述展示区域内突出显示。
可选地,所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使所述渲染后的帧画面图像在所述展示区域内突出显示的步骤之后,所述方法还包括:
将所述待编辑视频片段保存在预设的第一存储区域内;
将保留视频片段保存在预设的第二存储区域内。
可选地,所述将保留视频片段保存在预设的第二存储区域内的步骤之后,还包括下述步骤:
根据预设第一时间跨度依次调取所述保留视频片段的帧画面图像;
将所述帧画面图像按预设缩放比例进行图像缩放处理;
将经过缩放处理的所述帧画面图像按调取的先后顺序依次排布在所述展示区域内,以使所述展示区域内显示表征所述保留视频片段的时长的帧画面图像。
可选地,所述将保留视频片段保存在预设的第二存储区域内的步骤之后,所述方法还包括:
获取用户的待执行撤销指令;
根据所述撤销指令,将存储在所述第一存储区域内的待编辑视频片段按堆栈的方式进行提取;
将提取的待编辑视频片段***到所述第二存储区域内。
可选地,所述将提取的待编辑视频片段***到所述第二存储区域内的步骤,包括:
获取所述待编辑视频片段的起始时间;
根据所述起始时间在所述保留视频片段中确定所述待编辑视频片段的起始***时间,在所述保留视频片段的所述起始***时间处***所述待编辑视频片段。
可选地,所述根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像的步骤,包括:
获取所述编辑指令所指示的在所述展示区域内的起始位置和结束位置;
根据所述起始位置确定起始帧画面图像,并根据所述结束位置确定结束帧画面图像;
将以所述起始帧画面图像为起始帧且以所述结束帧画面图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
可选地,所述将保留视频片段保存在预设的第二存储区域内的步骤之后,所述方法还包括:
获取用户的待执行保存指令;
根据所述保存指令,删除所述第一存储区域内的视频信息,并将所述第二存储区域内的视频信息存储在本地存储空间内。
为解决上述技术问题,本申请实施例还提供一种视频编辑装置,所述装置包括:
获取模块,用于获取用户的待执行编辑指令;
执行模块,用于根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像;其中,所述展示区域用于展示按照预设时间跨度在所述待编辑视频片段对应的原始视频中采集到的帧画面图像;
生成模块,用于调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示。
可选地,所述执行模块包括第一执行子模块,用于根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段时长的至少一个帧画面区间;
所述生成模块包括第一生成子模块,用于调用预设的渲染脚本对所述帧画面区间进行渲染,以使所述渲染后的帧画面区间在所述展示区域内突出显示。
可选地,所述视频编辑装置还包括:
第一存储模块,用于在所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示之后,将所述待编辑视频片段保存在预设的第一存储区域内;
第二存储模块,用于将保留视频片段保存在预设的第二存储区域内。
可选地,所述视频编辑装置还包括:
第一调取模块,用于根据预设第一时间跨度依次调取所述保留视频片段的帧画面图像;
第一缩放模块,用于将所述帧画面图像按预设缩放比例进行图像缩放处理;
第二生成子模块,用于将经过缩放处理的所述帧画面图像按调取的先后顺序依次排布在所述展示区域内,以使所述展示区域内显示表征所述保留视频片段的时长的帧画面图像。
可选地,所述视频编辑装置还包括:
第二获取模块,用于获取用户的待执行撤销指令;
第二调取模块,用于根据所述撤销指令将存储在所述第一存储区域内的待编辑视频片段按堆栈的方式进行提取;
第三存储模块,用于将提取的待编辑视频片段***到所述第二存储区域内。
可选地,所述第三存储模块包括:
第三获取子模块,用于获取所述待编辑视频片段的起始时间;
第一确定子模块,用于根据所述起始时间在所述保留视频片段中确定所述待编辑视频片段的起始***时间;
视频***子模块,用于在所述保留视频片段的所述起始***时间处***所述待编辑视频片段。
可选地,所述获取模块包括:
第四获取子模块,用于获取所述编辑指令所指示的在所述展示区域内的起始位置和结束位置;
第二执行子模块,用于根据所述起始位置确定起始帧画面图像,并根据所述结束位置获取结束帧画面图像;
图像确定子模块,用于将以所述起始帧画面图像为起始帧且以所述结束帧画面图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
可选地,所述视频编辑装置还包括:
第五获取模块,用于获取用户的待执行保存指令;
第三执行模块,用于根据所述保存指令,删除所述第一存储区域内的视频信息,并将所述第二存储区域内的视频信息存储在本地存储空间内。
为解决上述技术问题,本申请实施例还提供一种智能移动终端,包括:
一个或多个处理器;
存储器;
一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行上述所述的视频编辑方法。
为了解决上述技术问题,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质存储有视频编辑程序,所述视频编辑程序被处理器执行时实现上述任意一种视频编辑方法的步骤。
为了解决上述技术问题,本申请实施例还提供一种计算机程序产品,当其在计算机上运行时,使得计算机实现上述任意一种视频编辑方法的步骤。
本申请实施例的有益效果是:进行视频编辑时,在视频编辑区域通过展示区域表征原始视频的时长,在用户进行视频编辑时,通过在展示区域选择对应的表征待编辑视频片段的时长的帧画面图像,就能够确定对应的待编辑视频时长。对选取的帧画面图像进行脚本渲染,使其能够区别于其他帧画面图像进行显示,直观的向用户展示选取的待编辑视频片段,方便用户确定待编辑视频的位置。采用这种编辑方式能够在有限得空间内,增大利用率,方便用户浏览和操作。也就是说,应用本申请实施例提供的方案,在视频编辑过程中,通过在展示区域内选取表征待编辑视频片段的时长的帧画面图像,并对所选取的帧画面图像进行突出显示。用户可以直观、准确地在原始视频中确定待编辑视频的内容,从而降低编辑时的错误率,使用户可以准确地编 辑视频。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的视频编辑方法的一种基本流程示意图;
图2为本申请实施例频提供的编辑页面的一种示意图;
图3为本申请实施例提供的视频编辑方法的另一种实施方式的流程示意图;
图4为本申请实施例提供的视频剪切的一种流程示意图;
图5为本申请实施例提供的更新所述展示区域的一种流程示意图;
图6为本申请实施例提供的撤销程序的一种流程示意图;
图7为本申请实施例提供的将待编辑视频片段***保留视频片段的一种流程示意图;
图8为本申请实施例提供的确定待编辑视频片段的时长的一种流程示意图;
图9为本申请实施例提供的保存程序的一种流程示意图;
图10为本申请实施例视频编辑装置基本结构框图;
图11为本申请实施例智能移动终端基本结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
在本申请的说明书和权利要求书及上述附图中的描述的一些流程中,包含了按照特定顺序出现的多个操作,但是应该清楚了解,这 些操作可以不按照其在本文中出现的顺序来执行或并行执行,操作的序号如101、102等,仅仅是用于区分开各个不同的操作,序号本身不代表任何的执行顺序。另外,这些流程可以包括更多或更少的操作,并且这些操作可以按顺序执行或并行执行。需要说明的是,本文中的“第一”、“第二”等描述,是用于区分不同的消息、设备、模块等,不代表先后顺序,也不限定“第一”和“第二”是不同的类型。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
实施例
请参阅图1和图2,图1为本实施例提供的视频编辑方法的一种基本流程示意图;图2为本实施例提供的视频编辑页面的一种示意图。
为了便于更好地理解本实施例提供的一种视频编辑方法,下面首先对图2所示的视频编辑页面进行介绍。
在视频编辑状态下,智能移动终端的显示区域包括:视频显示区域和展示区域。如图2所示,智能移动终端分屏后的显示区域中的上半区域可以为视频显示区域,下半区域可以为展示区域。其中,在视频编辑状态下,智能移动终端包括视频显示区域和展示区域的显示区域可以称为视频编辑区域。
下面分别对上述视频显示区域和展示区域进行介绍。
视频显示区域用于播放原始视频或待编辑视频,在播放暂停时,视频显示区域显示播放进度停止时刻的视频帧画面。需要说明的是,上述原始视频为:摄像机对预期的影像进行摄录得到的、还未进行编辑的视频,上述待编辑视频为:用户在原始视频中选取待编辑的视频片段,当用户不对该待编辑的视频片段执行剪切操作时,该待编辑的视频片段便可以在该视频显示区域内进行播放。
其中,当视频显示区域内播放的是用户已经选择的待编辑的视频片段时,用户可以对该待编辑的视频片段进行再次编辑,也就是说,用户可以在该待编辑的视频片段中再次选择待进行二次编辑的视频片段。那么,可以理解的,对于待进行二次编辑的视频片段而言,当前视频显示区域内播放的用户已经选择的待编辑的视频片段可以看做待进行二次编辑的视频片段的原始视频。
展示区域显示有若干个帧画面图像,在进行视频编辑时,按照预设时间跨度,定时采集原始视频的帧画面图像,并在展示区域内显示所采集的帧画面图像。例如:原始视频长度为36s,以2s为一个时间跨度提取一幅帧画面图像,将帧画面图像按提取的先后顺序在展示区域内进行显示,则每一幅帧画面图像表征一段长度为2s的视频画面。因此,展示区域实际为原始视频的进度条,根据进度指针在展示区域内的位置所对应的帧画面图像所对应的原始视频的播放时间,能够获取该进度指针所指示的原始视频的播放位置。
下面,对图1所示的、本申请实施例提供的一种视频编辑方法进行介绍。
如图1所示,一种视频编辑方法,包括下述步骤:
S1100、获取用户的待执行编辑指令;
用户使用智能移动终端对拍摄或者本地存储的视频进行编辑,进入到编辑状态后,智能移动终端接收用户通过手指或触控笔发送的点击或滑动指令进行视频编辑,因此,在视频编辑状态下,用户的滑动指令或点击指令为待执行编辑指令。
也就是说,在智能移动终端进入编辑状态后,智能移动终端的显示区域可以如图2所示,包括视频显示区域和展示区域。这样,用户便可以使用手指或触控笔,通过对该展示区域的点击或滑动操作触发待执行编辑指令,以使得智能移动终端在检测到用户对该展示区域的点击或滑动操作后,获取用户的待执行编辑指令。
其中,该待执行编辑指令可以指示该展示区域所显示的若干个视频帧画面中哪些帧画面图像为表征待编辑视频片段的时长的帧画面 图像。
S1200、根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像;
其中,所述展示区域用于展示按照预设时间跨度在所述待编辑视频片段对应的原始视频中采集到的帧画面图像。
根据用户的编辑指令,该编辑指令能够是用户的点击指令或滑动指令。根据用户发送的编辑指令选择一幅或连续的多幅帧画面图像,通过获取起始帧画面图像表征的起始时间,和结束帧画面图像表征的结束时间,能够计算出待编辑视频片段的时长,即用户发送的编辑指令选择的需要进行编辑的视频的时长。
也就是说,在获取用户的待执行编辑指令后,智能移动终端便可以响应该编辑指令。由于该编辑指令可以指示该展示区域所显示的若干个视频帧画面中哪些帧画面图像为表征待编辑视频片段的时长的帧画面图像,因此,在响应该编辑指令时,智能移动终端便可以在该展示区域内选取表征视频片段的时长的帧画面图像。进而,根据所选择的起始帧画面图像表征的起始时间,以及结束帧画面图像表征的结束时间,便能够计算出待编辑视频片段的时长。这样,在获取起始帧画面图像表征的起始时间和待编辑视频片段的时长后,智能移动终端便可以确定待编辑视频片段在原始视频中所对应的播放时间,进而,便可以根据所确定的播放时间在原始视频中获取待编辑视频片段。
也就是说,通过在展示区域选取表征待编辑视频片段的时长的帧画面图像,便可以根据所选取的帧画面图像对应的在原始视频中的播放时间,在原始视频中确定待编辑视频片段。
其中,当所选取的表征视频片段的时长的帧画面图像为多幅帧画面图像时,则起始帧画面图像为所选取的多幅帧画面图像中的第一幅帧画面图像,结束帧画面图像为所选取的多幅帧画面图像中的最后一幅帧画面图像。
当所选取的表征视频片段的时长的帧画面图像为一幅帧画面图像时,则起始帧画面图像和结束帧画面图像均为所选择的该帧画图像。 则待编辑视频片段的时长即为:该帧画面图像所表征的时长。
S1300、调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示。
获取到用户选定的表征待编辑视频片段的时长的帧画面图像后,需要对选中的帧画面图像进行渲染,以使该渲染后的帧画面图像与未渲染的帧画面图像之间具有区别,能够使用户直观的观察到预选的待编辑视频片段在原始视频中的位置和长度。
渲染脚本是预先设定的对选中的帧画面图像进行渲染的程序,渲染脚本能够设置的渲染方式为(不限于):在帧画面图像上覆盖有色的半透明蒙版、改变帧画面图像边框的颜色或将选中的帧画面图像进行放大显示等。
上述实施方式,通过进行视频编辑时,在视频编辑区域通过展示区域表征原始视频的时长,在用户进行视频编辑时,通过在展示区域选择对应的表征待编辑视频片段的时长的帧画面图像,就能够确定对应的待编辑视频的时长。对选取的帧画面图像进行脚本渲染,使其能够区别于其他帧画面图像进行显示,直观的向用户展示选取的待编辑视频片段,方便用户确定待编辑视频的位置。采用这种编辑方式能够在有限得空间内,增大利用率,方便用户浏览和操作。也就是说,应用本申请实施例提供的方案,在视频编辑过程中,通过在展示区域内选取表征待编辑视频片段的时长的帧画面图像,并对所选取的帧画面图像进行突出显示。用户可以直观、准确地在原始视频中确定待编辑视频的内容,从而降低编辑时的错误率,使用户可以准确地编辑视频。
需要指出的是,本申请实施例中的待编辑视频片段能够是(不限于)用户选定添加特效的视频片段、用户选定在原始视频中进行删除的视频片段或用户选择在原始视频中进行保留的视频片段等。
在一些实施方式中,用户通过滑动指令在展示区域内选择待编辑视频,因此,用户的编辑指令选择的帧画面图像为连续的多个帧画面图像。具体请请参阅图3,图3为本实施例提供的视频编辑方法的另 一种实施方式的流程示意图。
如图3所示,在步骤S1100之后,本实施方式可以包括:
S1111、根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段的时长的至少一个帧画面区间;
在本实施方式中,根据用户的滑动指令选择连续的多幅帧画面图像,通过获取起始帧画面图像表征的起始时间,和结束帧画面图像表征的结束时间,能够计算出待编辑视频片段的时长,即用户编辑指令选择的需要进行编辑的视频的时长。
也就是说,在本实施方式中,智能移动终端获取的用户的待执行编辑指令是用户使用手指或触控笔,通过对智能移动终端的视频编辑区域中的展示区域的滑动操作触发的。
其中,用户使用手指或触控笔,选中该展示区域所展示的一个帧画面图像,从该帧画面图像开始,按照帧画面图像在原始视频中对应的从前到后的播放顺序,依次滑过多幅帧画面图像,直至停止滑动,将手指或触控笔离开智能终端的显示屏。这样,用户在展示区域内滑动手指或触控笔的过程中所经过的帧画面图像便构成了表征待编辑视频片段的时长的至少一个帧画面区间。
在用户滑过的多幅帧画面图像中,用户选中并开始滑动的帧画面图像为起始帧画面图像,用户停止滑动时,手指或触控笔所选中的帧画面图像为结束帧画面图像。这样,根据起始帧画面图像表征的起始时间,和结束帧画面图像表征的结束时间,便能够计算出待编辑视频片段的时长。
这样,在获取起始帧画面图像表征的起始时间和待编辑视频片段的时长后,智能移动终端便可以确定待编辑视频片段在原始视频中所对应的播放时间,进而,便可以根据所确定的播放时间在原始视频中获取待编辑视频片段。
举例说明,展示区域每一幅帧画面图像表征2s时长的视频时长,用户选择的起始帧画面表征的原始视频的起始时间为第6s,用户的滑动指令选择连续的四幅帧画面图像,则待编辑视频的时长为8s,在原 始视频中的位置为起始时间为6s,结束时间为14s的视频片段。
S1112、调用预设的渲染脚本对所述帧画面区间进行渲染,以使渲染后的帧画面区间在所述展示区域内突出显示。
获取到用户选定的表征待编辑视频片段的时长的帧画面区间后,需要对选中的帧画面区间进行渲染,以使该渲染后的帧画面区间与未渲染的帧画面图像之间具有区别,能够使用户直观的观察到预选的待编辑视频片段在原始视频中的位置和长度。
渲染脚本是预先设定的对选中的帧画面图像进行渲染的程序,渲染脚本能够设置的渲染方式为(不限于):在帧画面图像上覆盖有色的半透明蒙版、改变帧画面图像边框的颜色或将选中的帧画面图像进行放大显示等。
也就是说,在图3所示的实施方式中,在步骤S1100之后,上述图1所示的视频编辑方法中的步骤S1200、根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像,包括:S1111、根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段的时长的至少一个帧画面区间;步骤S1300、调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示,包括:S1112、调用预设的渲染脚本对所述帧画面区间进行渲染,以使渲染后的帧画面区间在所述展示区域内突出显示。
需要指出的是,本申请实施例中的待编辑视频片段能够是(不限于)用户选定添加特效的视频片段、用户选定在原始视频中进行删除的视频片段或用户选择在原始视频中进行保留的视频片段等。
在一些实施方式中,待编辑视频片段被选取进行视频剪切,即待编辑视频片段将从原始视频中删除。具体请参阅4,图4为本实施例提供的视频剪切的一种流程示意图。
如图4所示,步骤S1300之后还包括下述步骤:
S1211、将所述待编辑视频片段保存在预设的第一存储区域内;
本实施方式中,在视频编辑时,实际是对待编辑视频片段进行剪 切,在获取待编辑视频片段后,将该待编辑视频片段存储在预设的第一存储区域内。
在展示区域内选取表征待编辑视频片段的时长的帧画面图像后,智能移动终端便可以在原始视频中确定待编辑视频片段。进而,用户便可以通过对智能移动终端执行预定操作向智能移动终端发送视频剪切指令,以使得智能移动终端在获取该视频剪切指令后,将该待编辑视频片段保存到预设的第一存储区域内。
例如,用户可以通过点击智能移动终端的视频编辑区域内的指定按钮触发视频剪切指令;又例如,用户可以通过在智能移动终端的视频编辑区域内执行预设的滑动操作触发视频剪切指令。
第一存储区域为智能移动终端的缓存区域,即智能移动终端的RAM(random access memory,随机存取存储器),存储在该缓存区域内的数据信息在断电时或者根据预设的擦除程序(例如,用户选择不保存视频或在视频未保存的状态下关闭该应用程序时)被清空,该缓存区域内保存的数据被彻底删除。
应用程序将智能移动终端的缓存区域中的一部分缓存空间设定为第一存储区域,并指定获取的待编辑视频片段存储在该第一存储区域内。
S1212、将保留视频片段保存在预设的第二存储区域内。
在原始视频中,待编辑视频片段为剪切的视频片段,除待编辑视频片段之外的视频片段为保留视频片段。
在视频编辑时,实际是对待编辑视频片段进行剪切,并将保留视频片段存储在预设的第二存储区域内。
也就是说,在上述步骤S1300之后,在展示区域中未被突出显示的帧画面图像为表征保留视频片段的时长的帧画面图像。根据展示区域内的帧画面图像与原始视频的播放时间的对应关系,在确定待编辑视频片段后,智能移动终端便可以确定原始视频中的保留视频片段。进而,当智能移动终端将待编辑视频片段保存到预设的第一存储区域内,便可以将保留视频片段保存到预设的第二存储区域内。
第二存储区域为智能移动终端的缓存区域,即智能移动终端的RAM,存储在该缓存区域内的数据信息在断电时或者根据预设的擦除程序(例如,用户选择不保存视频或在视频未保存的状态下关闭该应用程序时)被清空,该缓存区域内保存的数据被彻底删除。
应用程序将智能移动终端的缓存区域中的一部分缓存空间设定为第二存储区域,并指定获取的保留视频片段存储在该第二存储区域内。
通过将待编辑视频片段与保留视频片段分别存储在不同的缓存区域内,方便对保留视频片段进行重新拼接。
在一些实施方式中,将待编辑视频进行剪切后,需要对保留视频片段进行重新的展示,需要对展示区域的内容进行更新。
具体请参阅图5,图5为本实施例提供的更新所述展示区域的一种的流程示意图。
如图5所示,步骤S1212之后还包括下述步骤:
S1221、根据预设第一时间跨度依次调取所述保留视频片段的帧画面图像;
第一时间跨度为采集保留视频片段的帧画面图像的时间间隔,如每隔1.5s采集一张保留视频片段的帧画面图像。但第一时间跨度的时长不局限于此,根据具体应用场景的不同,在一些实施方式中,第一时间跨度的时长能够更长或更短,其选择的标准受到保留视频的时长的限制,保留视频的时长越长,则第一时间跨度的时长越长。
根据第一时间跨度依次采集保留视频片段的帧画面图像。
S1222、将所述帧画面图像按预设缩放比例进行图像缩放处理;
根据预设图像处理脚本,将采集的帧画面图像进行预设比例的缩放,缩放的比例需要根据展示区域内用于展示帧画面图像的容器的大小进行确定。例如,容器与帧画面的比例为1:9,则将帧画面图像缩小九倍。
S1223、将经过缩放处理的所述帧画面图像按调取的先后顺序依 次排布在所述展示区域内,以使所述展示区域内显示表征所述保留视频片段的时长的帧画面图像。
将通过缩放后的帧画面图像按提取的顺序依次排布在展示区域内,完成对展示区域的更新,此时,展示区域表征的是保留视频的时长的进度条。
也就是说,在将待编辑视频片段保存到预设的第一存储区域内后,展示区域可以不再展示表征待编辑视频片段的时长的帧画面图像,此时,展示区域内显示的帧画面图像为表征保留视频片段的时长的帧画面图像。进一步的,展示区域内所显示的帧画面图像与第二存储区域存储的视频片段具有对应关系,即展示区域所显示的帧画面图像是表征第二存储区域内所存储的视频片段的时长的帧画面图像。
这样,在将保留视频片段保存在预设的第二存储区域内后,展示区域内可以继续显示表征保留视频片段的时长的帧画面图像。以使得展示区域内的原始视频的帧画面图像被更新为表征保留视频片段的时长的帧画面图像。
其中,智能移动终端可以在将保留视频片段保存在预设的第二存储区域内后,立即执行上述步骤S1221-S1223,也可以在将保留视频片段保存在预设的第二存储区域内后,接收用户通过对智能移动终端执行预定操作向智能移动终端发送的区域更新指令,以使得智能移动终端在获取该区域更新指令后,将展示区域内所展示的帧画面图像,由原始视频的帧画面图像更新为表征保留视频片段的时长的帧画面图像。
在一些实施方式中,用户需要对已经剪切编辑完成的指令进行撤回,具体请参阅图6,图6为本实施例提供的撤销程序的一种流程示意图。
如图6所示步骤S1212之后还包括下述步骤:
S1231、获取用户的待执行撤销指令;
智能移动终端接收用户通过手指或触控笔发送的点击或滑动指令进行撤销操作,因此,在视频编辑状态下,用户在撤销区域内的滑 动指令或点击指令为待执行撤销指令。
也就是说,在将待编辑视频片段保存到预设的第一存储区域内后,智能移动终端可以接收用户使用手指或触控笔,通过对智能移动终端的显示屏的点击或滑动操作触发的待执行撤销指令。该待执行撤销指令可以指示了第一存储区域内所存储的哪些待编辑视频片段为需要撤回的视频片段。其中,用户触发待执行撤销指令时,所操作的显示屏中的区域可以是视频显示区域,也可以是展示区域,这都合理的。
S1232、根据所述撤销指令,将存储在所述第一存储区域内的待编辑视频片段按堆栈的方式进行提取;
对原始视频进行剪切时,能够对原始视频的多个位置进行剪切,因此,在第一存储区域内的待编辑视频片段能够是多个。多个待编辑视频片段进入第一存储区域内的时间有先后,根据堆栈原理,即先进后出的原则,最后进入到第一存储区域内的待编辑视频片段,在接收到撤销指令后最先被移出第一存储区域,而最先进入第一存储区域内的待编辑视频片段则根据相应的撤销指令最后被移出第一存储区域。
也就是说,在获取用户的待执行撤销指令后,智能移动终端便可以响应该撤销指令。由于该撤销指令指示第一存储区域内所存储的哪些待编辑视频片段为需要撤回的视频片段,因此,在响应该撤销指令时,智能移动终端便可以按照堆栈原理,按照撤销指令所指示的待编辑视频片段的存储时间,从最后存储的待编辑视频片段开始,依次将撤销指令所指示的待编辑视频片段从第一存储区域内提取出来。
S1233、将提取的待编辑视频片段***到所述第二存储区域内。
将从第一存储区域提取出来的待编辑视频片段,***到第二存储区域内保留视频片段的相应位置。***后执行步骤S1221-S1223。
也就是说,在从第一存储区域内提取出撤销指令所指示的待编辑视频片段后,智能移动终端便可以将提取到的待编辑视频片段***到第二存储区域内保留视频片段的相应位置。
由于展示区域内所显示的帧画面图像与第二存储区域存储的视频片段具有对应关系,展示区域所显示的帧画面图像是表征第二存储 区域内所存储的视频片段的时长的帧画面图像。因此,在将提取到的待编辑视频片段***到第二存储区域内,智能移动终端所显示的帧画面图像便可以进行再次更新,并在更新后显示:表征第二存储区域内***待编辑视频片段后得到的视频片段的时长的帧画面图像。
请参阅图7,图7为本实施例提供的将待编辑视频片段***保留视频片段的一种流程示意图。
如图7所示,步骤S1233包括下述步骤:
S1241、获取所述待编辑视频片段的起始时间;
在提取待编辑视频片段后,读取该待编辑视频片段的起始时间,即该编辑视频片段的起始画面对应在保留视频片段的时刻,如待编辑视频起始画面对应的保留视频片段的时刻为5s。此外,智能移动终端可以同时提取该待编辑视频片段的片段时长。
S1242、根据所述起始时间在所述保留视频片段中确定所述待编辑视频片段的起始***时间,在所述保留视频片段的所述起始***时间处***所述待编辑视频片段
根据起始时间计算出待编辑视频片段在保留视频片段的***位置,例如,待编辑视频起始画面对应的保留视频片段的时刻为5s,则起始***时间为5s。
其中,在智能移动终端提取该待编辑视频片段的片段时长时,智能移动终端还可以根据起始***时间和片段时长,确定待编辑视频片段的结束***时间。例如,待编辑视频片段的起始***时间为5s,片段时长为6s,则结束***时间为11s。
在确定起始***时间后,智能移动终端便可以在保留视频片段的该起始***时间处***待编辑视频片段。其中,该保留视频片段被保存在第二存储区域内。
当确定待编辑视频片段的结束***时间时,在保留视频片段中,起始***时间处与结束***时间处之间所对应的播放时长,与待编辑视频片段的片段时长相同。
请参阅图8,图8为本实施例提供的确定待编辑视频片段的时长 的一种流程示意图。
如图8所示,步骤S1200具体包括下述步骤:
S1251、获取所述编辑指令所指示的在所述展示区域内的起始位置和结束位置;
获取用户滑动指令所指示的在展示区域内的起始坐标和结束坐标,即能够获得编辑指令所指示的在展示区域内的起始位置和结束位置。其中,起始坐标对应于起始位置,结束坐标对应于结束位置。
也就是说,在上述步骤S1251中,当用户使用手指或触控笔,通过对该展示区域的滑动操作触发该编辑指令时,用户发送该编辑指令的具体操作为:用户通过手指或触控笔从展示区域内的一个位置处开始滑动,并在滑动至另一位置处停止,则该开始滑动的位置的坐标即为编辑指令在展示区域内的起始坐标,该停止滑动的位置的坐标即为编辑指令在展示区域内的结束坐标。
S1252、根据所述起始位置确定起始帧画面图像,并根据所述结束位置确定结束帧画面图像;
根据起始位置坐标在展示区域对应的帧画面图像,获取到起始帧画面图像,根据结束位置在展示区域对应的帧画面图像,获取到结束帧画面图像。
也就是说,在上述步骤S1252中,当用户使用手指或触控笔,通过对该展示区域的滑动操作触发该编辑指令时,可以将起始位置坐标在展示区域对应的帧画面图像确定为起始帧画面图像,将结束位置在展示区域对应的帧画面图像确定为结束帧画面图像。
此外,另一种实现方式中,当用户使用手指或触控笔,通过对该展示区域的点击操作触发该编辑指令时,用户发送该编辑指令的具体操作为:用户通过手指或触控笔从展示区域内的一个帧画面图像开始,连续点击多幅帧画面图像,则用户所点击的第一幅帧画面图像即为起始帧画面图像,用户所点击的最后一幅帧画面图像即为结束帧画面图像。
S1253、将以所述起始帧画面图像为起始帧且以所述结束帧画面 图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
在确定起始帧画面图像和结束帧画面图像后,智能移动终端便可以将以所述起始帧画面图像为起始帧且以所述结束帧画面图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
其中,在确定起始帧画面图像和结束帧画面图像后,可以获取所述起始帧画面图像表征的起始时间信息和所述结束帧画面图像表征的结束时间信息,则所述结束时间信息与所述起始时间信息的差值可以被确定为所述待编辑视频片段的时长。这样,将以所述起始帧画面图像为起始帧且以所述结束帧画面图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
例如,原始视频长度为36s,以2s为一个时间跨度提取一张帧画面图像,将帧画面图像按提取的先后顺序在展示区域内进行显示,则第一张帧画面图像表征的时刻为0s,第二张帧画面图像表征的时刻为2s……,依次类推从起始帧画面图像到结束帧画面图像的时间信息的差值就能够计算出待编辑视频片段的时长。
在一些实施方式中,当用户编辑完成后,需要对保留视频片段进行保存。具体请参阅图9,图9为本实施例提供的保存程序的一种流程示意图。
如图9所示包括下述步骤:
S1311、获取用户的待执行保存指令;
智能移动终端接收用户通过手指或触控笔发送的点击或滑动指令进行保存操作,因此,在视频编辑状态下,用户在保存区域内的滑动指令或点击指令为待执行保存指令。
也就是说,在将保留视频片段保存在预设的第二存储区域内后,智能移动终端可以接收用户使用手指或触控笔,通过对智能移动终端的显示屏的点击或滑动操作触发的待执行保存指令。该待执行保存指令可以指示将第二存储区域内所存储的保留视频片段存储到智能移动终端的哪个存储位置,同时也可以指示删除第一存储区域内的哪些 视频信息。其中,用户触发待执行保存指令时,所操作的显示屏中的区域可以是视频显示区域,也可以是展示区域,这都合理的。
S1312、根据所述保存指令,删除所述第一存储区域内的视频信息,并将所述第二存储区域内的视频信息存储在本地存储空间内。
也就说,在获取用户的待执行保存指令后,智能移动终端便可以响应保存指令。由于待执行保存指令指示将第二存储区域内所存储的保留视频片段存储到智能移动终端的哪个存储位置,以及删除第一存储区域内的哪些视频信息,因此,在响应该保护指令时,智能移动终端便可以按照保护执行所指示的位置,将第二存储区域内的视频信息存储到本地存储空间内,并按照保存指令所指示的视频信息,删除第一存储区域内的视频信息。
由于第一存储区域和第二存储区域均为缓存区域,在进行保存时,需要将第一存储区域中被剪切的待编辑视频片段进行删除,同时将第二存储区域内的保留视频片段存储在智能移动终端的存储器内。
为解决上述技术问题,本申请实施例还提供一种视频编辑装置。具体请参阅图10,图10为本实施例视频编辑装置基本结构框图。
如图10所示,一种视频编辑装置,包括:获取模块2100、执行模块2200和生成模块2300。其中,获取模块2100用于获取用户的待执行编辑指令;执行模块2200用于根据编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像;生成模块2300用于调用预设的渲染脚本对帧画面图像进行渲染,以使渲染后的帧画面图像在展示区域内突出显示。
视频编辑装置进行视频编辑时,在视频编辑区域通过展示区域表征原始视频的时长,在用户进行视频编辑时,通过在展示区域选择对应的表征待编辑视频片段时长的帧画面图像,就能够确定对应的待编辑视频时长。对选取的帧画面图像进行脚本渲染,使其能够区别于其他帧画面图像进行显示,直观的向用户展示选取的待编辑视频片段,方便用户确定待编辑视频的位置。采用这种编辑方式能够在有限得空 间内,增大利用率,方便用户浏览和操作。也就是说,应用本申请实施例提供的方案,在视频编辑过程中,通过在展示区域内选取表征待编辑视频片段的时长的帧画面图像,并对所选取的帧画面图像进行突出显示。用户可以直观、准确地在原始视频中确定待编辑视频的内容,从而降低编辑时的错误率,使用户可以准确地编辑视频。
在一些实施方式中,上述执行模块2200包括第一执行子模块,上述生成模块2300包括第一生成子模块。其中,第一执行子模块用于根据编辑指令在预设的展示区域内选取表征待编辑视频片段的时长的至少一个帧画面区间;第一生成子模块用于调用预设的渲染脚本对帧画面区间进行渲染,以使渲染后的帧画面区间在展示区域内突出显示。
在一些实施方式中,视频编辑装置还包括:第一存储模块和第二存储模块。其中,第一存储模块用于在所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示之后,将待编辑视频片段保存在预设的第一存储区域内;第二存储模块用于将保留视频片段保存在预设的第二存储区域内。
在一些实施方式中,视频编辑装置还包括:第一调取模块、第一缩放模块和第二生成模块。其中,第一调取模块用于根据预设第一时间跨度依次调取保留视频片段的帧画面图像;第一缩放模块用于将帧画面图像按预设缩放比例进行图像缩放处理;第二生成模块用于将经过缩放处理的帧画面图像按调取的先后顺序依次排布在展示区域内,以使展示区域内显示表征保留视频片段的时长的帧画面图像。
在一些实施方式中,视频编辑装置还包括:第二获取模块、第二调取模块和第三存储模块。其中,第二获取模块用于获取用户的待执行撤销指令;第二调取模块用于根据撤销指令将存储在第一存储区域内的待编辑视频片段按堆栈的方式进行提取;第三存储模块用于将提取的待编辑视频片段***到第二存储区域内。
在一些实施方式中,上述第三存储模块包括:第三获取子模块和第一确定子模块。其中,第三获取子模块用于获取待编辑视频片段的 起始时间和片段时长;第一确定子模块用于根据起始时间在保留视频片段中确定待编辑视频片段的起始***时间,并根据片段时长确定待编辑视频片段的结束***时间;视频***子模块,用于在所述第二存储区域内,从所述起始***时间所对应的位置开始***所述待编辑视频片段,直至所述结束***时间所对应的位置停止。
在一些实施方式中,上述获取模块2100包括:第四获取子模块、第二执行子模块和第一计算子模块。其中,第四获取子模块用于获取编辑指令在展示区域内的起始位置和结束位置;第二执行子模块用于根据起始位置确定起始帧画面图像,并根据结束位置获取结束帧画面图像;其中,所述起始帧画面图像表征的起始时间信息和所述结束帧画面图像表征的结束时间信息的差值为所述待编辑视频片段的时长。
在一些实施方式中,视频编辑装置还包括:第五获取模块和第三执行模块。其中,第五获取模块用于获取用户的待执行保存指令;第三执行模块用于根据保存指令,删除第一存储区域内的视频信息,并将第二存储区域内的视频信息存储在本地存储空间内。
本实施例还提供一种智能移动终端。具体请参阅图11,图11为本实施例智能移动终端基本结构示意图。
需要指出的是本实施列中,智能移动终端的存储器1520内存储用于实现本实施例中视频编辑方法中的所有程序,处理器1580能够调用该存储器1520内的程序,执行上述视频编辑方法所列举的所有功能。由于智能移动终端实现的功能在本实施例中的视频编辑方法进行了详述,在此不再进行赘述。
智能移动终端进行视频编辑时,在视频编辑区域通过展示区域表征原始视频的时长,在用户进行视频编辑时,通过在展示区域选择对应的表征待编辑视频片段时长的帧画面图像,就能够确定对应的待编辑视频时长。对选取的帧画面图像进行脚本渲染,使其能够区别于其他帧画面图像进行显示,直观的向用户展示选取的待编辑视频片段,方便用户确定待编辑视频的位置。采用这种编辑方式能够在有限得空间内,增大利用率,方便用户浏览和操作。也就是说,应用本申请实 施例提供的方案,在视频编辑过程中,通过在展示区域内选取表征待编辑视频片段的时长的帧画面图像,并对所选取的帧画面图像进行突出显示。用户可以直观、准确地在原始视频中确定待编辑视频的内容,从而降低编辑时的错误率,使用户可以准确地编辑视频。
本申请实施例还提供了智能移动终端,如图11所示,为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请实施例方法部分。该终端可以为包括智能移动终端、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑等任意终端设备,以终端为智能移动终端为例:
图11示出的是与本申请实施例提供的终端相关的智能移动终端的部分结构的框图。参考图11,智能移动终端包括:射频(Radio Frequency,RF)电路1510、存储器1520、输入单元1530、显示单元1540、传感器1550、音频电路1560、无线保真(wireless fidelity,Wi-Fi)模块1570、处理器1580、以及电源1590等部件。本领域技术人员可以理解,图7中示出的智能移动终端结构并不构成对智能移动终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图11对智能移动终端的各个构成部件进行具体的介绍:
RF电路1510可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器1580处理;另外,将设计上行的数据发送给基站。通常,RF电路1510包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路1510还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯***(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、 长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器1520可用于存储软件程序以及模块,处理器1580通过运行存储在存储器1520的软件程序以及模块,从而执行智能移动终端的各种功能应用以及数据处理。存储器1520可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声纹播放功能、图像播放功能等)等;存储数据区可存储根据智能移动终端的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1520可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元1530可用于接收输入的数字或字符信息,以及产生与智能移动终端的用户设置以及功能控制有关的键信号输入。具体地,输入单元1530可包括触控面板1531以及其他输入设备1532。触控面板1531,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1531上或在触控面板1531附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板1531可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1580,并能接收处理器1580发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1531。除了触控面板1531,输入单元1530还可以包括其他输入设备1532。具体地,其他输入设备1532可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元1540可用于显示由用户输入的信息或提供给用户的信息以及智能移动终端的各种菜单。显示单元1540可包括显示面板 1541,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1541。进一步的,触控面板1531可覆盖显示面板1541,当触控面板1531检测到在其上或附近的触摸操作后,传送给处理器1580以确定触摸事件的类型,随后处理器1580根据触摸事件的类型在显示面板1541上提供相应的视觉输出。虽然在图7中,触控面板1531与显示面板1541是作为两个独立的部件来实现智能移动终端的输入和输入功能,但是在某些实施例中,可以将触控面板1531与显示面板1541集成而实现智能移动终端的输入和输出功能。
智能移动终端还可包括至少一种传感器1550,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1541的亮度,接近传感器可在智能移动终端移动到耳边时,关闭显示面板1541和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别智能移动终端姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于智能移动终端还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路1560、扬声器1561,传声器1562可提供用户与智能移动终端之间的音频接口。音频电路1560可将接收到的音频数据转换后的电信号,传输到扬声器1561,由扬声器1561转换为声纹信号输出;另一方面,传声器1562将收集的声纹信号转换为电信号,由音频电路1560接收后转换为音频数据,再将音频数据输出处理器1580处理后,经RF电路1510以发送给比如另一智能移动终端,或者将音频数据输出至存储器1520以便进一步处理。
Wi-Fi属于短距离无线传输技术,智能移动终端通过Wi-Fi模块1570可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图7示出了Wi-Fi模块 1570,但是可以理解的是,其并不属于智能移动终端的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器1580是智能移动终端的控制中心,利用各种接口和线路连接整个智能移动终端的各个部分,通过运行或执行存储在存储器1520内的软件程序和/或模块,以及调用存储在存储器1520内的数据,执行智能移动终端的各种功能和处理数据,从而对智能移动终端进行整体监控。可选的,处理器1580可包括一个或多个处理单元;优选的,处理器1580可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作***、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1580中。
智能移动终端还包括给各个部件供电的电源1590(比如电池),优选的,电源可以通过电源管理***与处理器1580逻辑相连,从而通过电源管理***实现管理充电、放电、以及功耗管理等功能。
尽管未示出,智能移动终端还可以包括摄像头、蓝牙模块等,在此不再赘述。
本申请实施例,还提供了一种包括指令的非临时性计算机可读存储介质,例如,包括指令的存储器1520,上述指令可由智能移动终端的处理器1580执行以完成所述方法。例如,所述非临时性计算机可读存储介质可以是ROM(Read-Only Memory,只读存储器)、RAM、CD-ROM、磁带、软盘和光数据存储设备等。
一种非临时性计算机可读存储介质,当存储介质中的指令由智能移动终端的处理器执行时,使得智能移动终端能够执行本申请中所述的任意一种视频编辑方法的步骤。
相应于上述方法实施例,本申请实施例提供了一种计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例中任一所述的视频编辑方法的步骤。
需要说明的是,本申请的说明书及其附图中给出了本申请的较佳的实施例,但是,本申请可以通过许多不同的形式来实现,并不限于 本说明书所描述的实施例,这些实施例不作为对本申请内容的额外限制,提供这些实施例的目的是使对本申请的公开内容的理解更加透彻全面。并且,上述各技术特征继续相互组合,形成未在上面列举的各种实施例,均视为本申请说明书记载的范围;进一步地,对本领域普通技术人员来说,可以根据上述说明加以改进或变换,而所有这些改进和变换都应属于本申请所附权利要求的保护范围。

Claims (19)

  1. 一种视频编辑方法,其特征在于,所述方法包括:
    获取用户的待执行编辑指令;
    根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像;其中,所述展示区域用于展示按照预设时间跨度在所述待编辑视频片段对应的原始视频中采集到的帧画面图像;
    调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示。
  2. 根据权利要求1所述的视频编辑方法,其特征在于,
    所述根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像的步骤,包括:
    根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段时长的至少一个帧画面区间;
    所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示的步骤,包括:
    调用预设的渲染脚本对所述帧画面区间进行渲染,以使所述渲染后的帧画面区间在所述展示区域内突出显示。
  3. 根据权利要求1或2所述的视频编辑方法,其特征在于,所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使所述渲染后的帧画面图像在所述展示区域内突出显示的步骤之后,还包括下述步骤:
    将所述待编辑视频片段保存在预设的第一存储区域内;将保留视频片段保存在预设的第二存储区域内。
  4. 根据权利要求3所述的视频编辑方法,其特征在于,所述将保留视频片段保存在预设的第二存储区域内的步骤之后,所述方法还包括:
    根据预设第一时间跨度依次调取所述保留视频片段的帧画面图像;
    将所述帧画面图像按预设缩放比例进行图像缩放处理;
    将经过缩放处理的所述帧画面图像按调取的先后顺序依次排布 在所述展示区域内,以使所述展示区域内显示表征所述保留视频片段的时长的帧画面图像。
  5. 根据权利要求3所述的视频编辑方法,其特征在于,所述将保留视频片段保存在预设的第二存储区域内的步骤之后,所述方法还包括:
    获取用户的待执行撤销指令;
    根据所述撤销指令,将存储在所述第一存储区域内的待编辑视频片段按堆栈的方式进行提取;
    将提取的待编辑视频片段***到所述第二存储区域内。
  6. 根据权利要求5所述的视频编辑方法,其特征在于,所述将提取的待编辑视频片段***到所述第二存储区域内的步骤,具体包括下述步骤:
    获取所述待编辑视频片段的起始时间;
    根据所述起始时间在所述保留视频片段中确定所述待编辑视频片段的起始***时间,在所述保留视频片段的所述起始***时间处***所述待编辑视频片段。
  7. 根据权利要求1所述的视频编辑方法,其特征在于,所述根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像的步骤,具体包括下述步骤:
    获取所述编辑指令所指示的在所述展示区域内的起始位置和结束位置;
    根据所述起始位置确定起始帧画面图像,并根据所述结束位置确定结束帧画面图像;
    将以所述起始帧画面图像为起始帧且以所述结束帧画面图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
  8. 根据权利要求3所述的视频编辑方法,其特征在于,所述将保留视频片段保存在预设的第二存储区域内的步骤之后,所述方法还包括:
    获取用户的待执行保存指令;
    根据所述保存指令,删除所述第一存储区域内的视频信息,并将所述第二存储区域内的视频信息存储在本地存储空间内。
  9. 一种视频编辑装置,其特征在于,所述装置包括:
    获取模块,用于获取用户的待执行编辑指令;
    执行模块,用于根据所述编辑指令,在预设的展示区域内选取表征待编辑视频片段的时长的帧画面图像;其中,所述展示区域用于展示按照预设时间跨度在所述待编辑视频片段对应的原始视频中采集到的帧画面图像;
    生成模块,用于调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示。
  10. 根据权利要求9所述的视频编辑装置,其特征在于,所述执行模块包括第一执行子模块,用于根据所述编辑指令在预设的展示区域内选取表征待编辑视频片段时长的至少一个帧画面区间;
    所述生成模块包括第一生成子模块,用于调用预设的渲染脚本对所述帧画面区间进行渲染,以使所述渲染后的帧画面区间在所述展示区域内突出显示。
  11. 根据权利要求9或10所述的视频编辑装置,其特征在于,所述视频编辑装置还包括:
    第一存储模块,用于在所述调用预设的渲染脚本对所述帧画面图像进行渲染,以使渲染后的帧画面图像在所述展示区域内突出显示之后,将所述待编辑视频片段保存在预设的第一存储区域内;
    第二存储模块,用于将保留视频片段保存在预设的第二存储区域内。
  12. 根据权利要求11所述的视频编辑装置,其特征在于,所述视频编辑装置还包括:
    第一调取模块,用于根据预设第一时间跨度依次调取所述保留视频片段的帧画面图像;
    第一缩放模块,用于将所述帧画面图像按预设缩放比例进行图像 缩放处理;
    第二生成子模块,用于将经过缩放处理的所述帧画面图像按调取的先后顺序依次排布在所述展示区域内,以使所述展示区域内显示表征所述保留视频片段的时长的帧画面图像。
  13. 根据权利要求11所述的视频编辑装置,其特征在于,所述视频编辑装置还包括:
    第二获取模块,用于获取用户的待执行撤销指令;
    第二调取模块,用于根据所述撤销指令,将存储在所述第一存储区域内的待编辑视频片段按堆栈的方式进行提取;
    第三存储模块,用于将提取的待编辑视频片段***到所述第二存储区域内。
  14. 根据权利要求13所述的视频编辑装置,其特征在于,所述第三存储模块包括:
    第三获取子模块,用于获取所述待编辑视频片段的起始时间;
    第一确定子模块,用于根据所述起始时间在所述保留视频片段中确定所述待编辑视频片段的起始***时间;
    视频***子模块,用于在所述保留视频片段的所述起始***时间处***所述待编辑视频片段。
  15. 根据权利要求9所述的视频编辑装置,其特征在于,所述获取模块包括:
    第四获取子模块,用于获取所述编辑指令所指示的在所述展示区域内的起始位置和结束位置;
    第二执行子模块,用于根据所述起始位置确定起始帧画面图像,并根据所述结束位置确定结束帧画面图像;
    图像确定子模块,用于将以所述起始帧画面图像为起始帧且以所述结束帧画面图像为结束帧的帧画面区间,确定为表征待编辑视频片段的时长的帧画面图像。
  16. 根据权利要求11所述的视频编辑装置,其特征在于,所述视频编辑装置还包括:
    第五获取模块,用于获取用户的待执行保存指令;
    第三执行模块,用于根据所述保存指令,删除所述第一存储区域内的视频信息,并将所述第二存储区域内的视频信息存储在本地存储空间内。
  17. 一种智能移动终端,其特征在于,包括:
    一个或多个处理器;
    存储器;
    一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行权利要求1-8中任意一项所述的视频编辑方法。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储有视频编辑程序,所述视频编辑程序被处理器执行时实现如权利要求1-8中任意一项所述的视频编辑方法。
  19. 一种计算机程序产品,其特征在于,当其在计算机上运行时,使得计算机执行权利要求1-8中任意一项所述的视频编辑方法。
PCT/CN2018/115916 2017-12-20 2018-11-16 视频编辑方法、装置及智能移动终端 WO2019120013A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/906,761 US11100955B2 (en) 2017-12-20 2020-06-19 Method, apparatus and smart mobile terminal for editing video
US17/381,842 US11568899B2 (en) 2017-12-20 2021-07-21 Method, apparatus and smart mobile terminal for editing video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711386533.8A CN108040288B (zh) 2017-12-20 2017-12-20 视频编辑方法、装置及智能移动终端
CN201711386533.8 2017-12-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/906,761 Continuation US11100955B2 (en) 2017-12-20 2020-06-19 Method, apparatus and smart mobile terminal for editing video

Publications (1)

Publication Number Publication Date
WO2019120013A1 true WO2019120013A1 (zh) 2019-06-27

Family

ID=62100232

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115916 WO2019120013A1 (zh) 2017-12-20 2018-11-16 视频编辑方法、装置及智能移动终端

Country Status (3)

Country Link
US (2) US11100955B2 (zh)
CN (1) CN108040288B (zh)
WO (1) WO2019120013A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115515005A (zh) * 2021-06-07 2022-12-23 京东方科技集团股份有限公司 一种节目切换的封面获取方法、装置及显示设备

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108040288B (zh) 2017-12-20 2019-02-22 北京达佳互联信息技术有限公司 视频编辑方法、装置及智能移动终端
US11683464B2 (en) * 2018-12-28 2023-06-20 Canon Kabushiki Kaisha Electronic device, control method, and non-transitorycomputer readable medium
CN112311961A (zh) * 2020-11-13 2021-02-02 深圳市前海手绘科技文化有限公司 一种短视频中镜头设置的方法和装置
CN112738416B (zh) * 2020-12-23 2023-05-02 上海哔哩哔哩科技有限公司 缩略图预览方法、***、设备及计算机可读存储介质
CN113099287A (zh) * 2021-03-31 2021-07-09 上海哔哩哔哩科技有限公司 视频制作方法及装置
CN113490051B (zh) * 2021-07-16 2024-01-23 北京奇艺世纪科技有限公司 一种视频抽帧方法、装置、电子设备及存储介质
CN114125556B (zh) * 2021-11-12 2024-03-26 深圳麦风科技有限公司 视频数据的处理方法、终端和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102638658A (zh) * 2012-03-01 2012-08-15 盛乐信息技术(上海)有限公司 音视频编辑方法及***
CN104093090A (zh) * 2014-06-13 2014-10-08 北京奇艺世纪科技有限公司 一种视频处理方法和装置
US9502073B2 (en) * 2010-03-08 2016-11-22 Magisto Ltd. System and method for semi-automatic video editing
CN107295416A (zh) * 2017-05-05 2017-10-24 中广热点云科技有限公司 截取视频片段的方法和装置
CN108040288A (zh) * 2017-12-20 2018-05-15 北京达佳互联信息技术有限公司 视频编辑方法、装置及智能移动终端

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3665456B2 (ja) * 1997-11-19 2005-06-29 株式会社東芝 映像情報の記録再生システム及び同システムに適用する映像編集方法
JP5381459B2 (ja) * 2009-07-27 2014-01-08 富士通株式会社 動画像編集装置、動画像編集方法及び動画像編集用コンピュータプログラム
US9299389B2 (en) * 2012-09-24 2016-03-29 Adobe Systems Incorporated Interpretation of free-form timelines into compositing instructions
US20140310598A1 (en) * 2013-01-10 2014-10-16 Okappi, Inc. Multimedia Spiral Timeline
US10334300B2 (en) * 2014-12-04 2019-06-25 Cynny Spa Systems and methods to present content
US9858969B2 (en) * 2015-01-29 2018-01-02 HiPoint Technology Services, INC Video recording and editing system
CN104836963B (zh) * 2015-05-08 2018-09-14 广东欧珀移动通信有限公司 一种视频处理方法和装置
US10623801B2 (en) * 2015-12-17 2020-04-14 James R. Jeffries Multiple independent video recording integration
US9836484B1 (en) * 2015-12-30 2017-12-05 Google Llc Systems and methods that leverage deep learning to selectively store images at a mobile image capture device
EP3535982A1 (en) * 2016-11-02 2019-09-11 TomTom International B.V. Creating a digital media file with highlights of multiple media files relating to a same period of time
US11006180B2 (en) * 2017-05-11 2021-05-11 Broadnet Teleservices, Llc Media clipper system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9502073B2 (en) * 2010-03-08 2016-11-22 Magisto Ltd. System and method for semi-automatic video editing
CN102638658A (zh) * 2012-03-01 2012-08-15 盛乐信息技术(上海)有限公司 音视频编辑方法及***
CN104093090A (zh) * 2014-06-13 2014-10-08 北京奇艺世纪科技有限公司 一种视频处理方法和装置
CN107295416A (zh) * 2017-05-05 2017-10-24 中广热点云科技有限公司 截取视频片段的方法和装置
CN108040288A (zh) * 2017-12-20 2018-05-15 北京达佳互联信息技术有限公司 视频编辑方法、装置及智能移动终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Xiami Mobile Phone Tips Make Easily Become a Video Editing Master", GAO3JI1WANG3, 9 October 2017 (2017-10-09), XP055619790, Retrieved from the Internet <URL:https://v.youku.com/v_show/id_XMzA3MzkyMDQwNA-.html?spm=a2hOk> *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115515005A (zh) * 2021-06-07 2022-12-23 京东方科技集团股份有限公司 一种节目切换的封面获取方法、装置及显示设备

Also Published As

Publication number Publication date
US20200327909A1 (en) 2020-10-15
US20210350831A1 (en) 2021-11-11
US11100955B2 (en) 2021-08-24
CN108040288B (zh) 2019-02-22
US11568899B2 (en) 2023-01-31
CN108040288A (zh) 2018-05-15

Similar Documents

Publication Publication Date Title
WO2019120013A1 (zh) 视频编辑方法、装置及智能移动终端
CN108022279B (zh) 视频特效添加方法、装置及智能移动终端
CN108762954B (zh) 一种对象分享方法及移动终端
WO2019120191A1 (zh) 多段文本复制方法及移动终端
CN110913141B (zh) 一种视频显示方法、电子设备以及介质
US11340777B2 (en) Method for editing text and mobile terminal
JP2021525430A (ja) 表示制御方法及び端末
CN108737904B (zh) 一种视频数据处理方法及移动终端
CN111314784B (zh) 一种视频播放方法及电子设备
CN108920239B (zh) 一种长截屏方法及移动终端
WO2020238938A1 (zh) 信息输入方法及移动终端
CN108616771B (zh) 视频播放方法及移动终端
WO2021104160A1 (zh) 编辑方法及电子设备
WO2021169954A1 (zh) 搜索方法及电子设备
WO2019105446A1 (zh) 视频编辑方法、装置及智能移动终端
WO2019076377A1 (zh) 图像的查看方法及移动终端
WO2023030306A1 (zh) 视频编辑方法、装置及电子设备
KR102186815B1 (ko) 컨텐츠 스크랩 방법, 장치 및 기록매체
CN109542307B (zh) 一种图像处理方法、设备和计算机可读存储介质
CN111491205A (zh) 视频处理方法、装置及电子设备
US20220011914A1 (en) Operation method and terminal device
JP2023519389A (ja) スクラッチパッド作成方法及び電子機器
CN111049977B (zh) 一种闹钟提醒方法及电子设备
CN109800095B (zh) 通知消息的处理方法及移动终端
CN111445929A (zh) 一种语音信息处理方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18890096

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18890096

Country of ref document: EP

Kind code of ref document: A1