WO2021249168A1 - Video processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents

Video processing method and apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2021249168A1
WO2021249168A1 (PCT/CN2021/095502; CN2021095502W)
Authority
WO
WIPO (PCT)
Prior art keywords
video
editing
track
processed
processing
Prior art date
Application number
PCT/CN2021/095502
Other languages
English (en)
French (fr)
Inventor
何彦
李鑫
张文海
李锦敏
熊壮
邓鑫亮
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority to BR112022025162A (published as BR112022025162A2)
Priority to KR1020227043537A (published as KR102575848B1)
Priority to EP21822298.2A (published as EP4152758A4)
Priority to JP2022576468A (published as JP7307864B2)
Publication of WO2021249168A1
Priority to US18/064,128 (published as US20230107220A1)

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 Indicating arrangements 
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications

Definitions

  • the present disclosure relates to the field of computer technology. Specifically, the present disclosure relates to a video processing method, device, electronic equipment, and computer-readable storage medium.
  • this summary is provided to introduce concepts in a brief form; these concepts are described in detail in the following detailed description.
  • this summary is not intended to identify key features or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.
  • embodiments of the present disclosure provide a video processing method, including:
  • when a trigger operation for any processing function is received, the preview screen of the processed video processed by the processing function is displayed in the video preview area, and the editing mark corresponding to the processing function is displayed in the track editing area, where the editing mark and the editing track of the video to be processed are superimposed and displayed in the track editing area.
  • a video processing device including:
  • the to-be-processed video receiving module is used to receive the to-be-processed video
  • the to-be-processed video display module is used to display the preview screen of the to-be-processed video through the video preview area on the display interface, to display the editing track of the to-be-processed video through the track editing area, and to display at least one processing function through the processing function navigation area;
  • the to-be-processed video processing module is used to, when a trigger operation for any processing function is received, display the preview screen of the processed video processed by the processing function in the video preview area and display the editing mark corresponding to the processing function in the track editing area, where the editing mark and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
  • an embodiment of the present disclosure provides an electronic device, including a memory and a processor
  • a computer program is stored in the memory
  • the processor is configured to execute a computer program to implement the method provided in the embodiment of the first aspect.
  • the embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored, and the computer program is executed by a processor to implement the method provided in the embodiment of the first aspect.
  • the preview screen is displayed in the video preview area, the editing logo is displayed in the track editing area, and the processing function to be selected is displayed in the processing function navigation area.
  • when a trigger operation for any processing function is received, the to-be-processed video segment is processed to obtain the processed video; the preview screen of the processed video is displayed in the video preview area, and the editing mark corresponding to the processing function is displayed in the track editing area.
  • because the solution partitions screen preview, track editing, and processing function navigation in the video processing process and sets up a reasonable linkage mechanism among them, users can easily and conveniently access rich processing functions when using the application for video processing, which improves the user experience.
  • the solution can also improve the functional scalability of the application, thereby satisfying the needs of users and enhancing the user's operating experience.
  • FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the disclosure
  • FIG. 2a is a schematic diagram of a display interface after a user uploads a video to be processed in an embodiment of the disclosure
  • FIG. 2b is a schematic diagram of the display interface after the user has made a leftward sliding operation on the track editing area in the display interface of FIG. 2a in an embodiment of the disclosure;
  • FIG. 3a is a schematic diagram of a track editing area corresponding to a special effect processing function added in an example of an embodiment of the disclosure
  • FIG. 3b is a schematic diagram of a track editing area corresponding to a divided video processing function in an example of an embodiment of the disclosure
  • FIG. 4 is a structural block diagram of a video processing device provided by an embodiment of the disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present disclosure.
  • the execution body of the method may be an electronic device on which a video processing application is installed. As shown in FIG. 1, the method may include:
  • Step S101 Receive a video to be processed.
  • the video to be processed may include an unprocessed video uploaded by a user, a processed draft video saved by an application, or an unprocessed video formed by splicing multiple pictures uploaded by the user.
  • the user can choose to upload a locally stored unprocessed video through the new video add button set in the application interface, or can directly select the last processed draft video saved in the application.
  • the user can also choose to upload multiple locally stored pictures through the new video add button set in the application interface, and the application can splice the received multiple pictures into a video as a video to be processed.
  • the present disclosure does not limit the receiving method of the to-be-processed video.
  • the application program performs related display and processing operations after receiving the to-be-processed video.
  • Step S102 Display a preview screen of the video to be processed in the video preview area on the display interface, display the editing track of the video to be processed in the track editing area, and display at least one processing function through the processing function navigation area.
  • the display interface of the application program may include three areas to display related information of the video to be processed, and the three areas respectively include: a video preview area, a track editing area, and a processing function navigation area.
  • the display interface includes a video preview area 201, a track editing area 202, and a processing function navigation area 203.
  • the video preview area 201 can be regarded as a playback interface for displaying preview images.
  • the video preview area 201 can display preview images corresponding to each video segment of the video to be processed, or can play the entire video to be processed.
  • the track editing area 202 is provided with a time axis 204 and a time axis ruler 205, and is used to display at least one editing mark.
  • the editing track 206 of the video to be processed is displayed in the track editing area 202.
  • the track start point of the editing track 206 of the video to be processed is aligned with the start point of the timeline 204, and each video clip of the video to be processed is displayed on the editing track 206.
  • the user can perform stretching and compression operations on the editing track 206 of the video to be processed in the length direction of the editing track 206 of the video to be processed.
  • when receiving a stretching operation for the editing track 206, the application reduces the time axis 204; when receiving a compression operation for the editing track 206, the application enlarges the time axis 204.
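The stretch/compress linkage described above can be sketched roughly as follows. This is a minimal illustration, not the disclosed implementation; `Timeline`, `visible_seconds`, and the handler names are invented for the example:

```python
class Timeline:
    """Hypothetical model of the time axis shown in the track editing area."""

    def __init__(self, visible_seconds: float = 60.0):
        # span of the time axis currently visible on screen, in seconds
        self.visible_seconds = visible_seconds

    def on_stretch(self, factor: float) -> None:
        # stretching the editing track reduces the time axis (zooms in)
        self.visible_seconds /= factor

    def on_compress(self, factor: float) -> None:
        # compressing the editing track enlarges the time axis (zooms out)
        self.visible_seconds *= factor
```

Under this reading, stretching narrows the visible time span so each clip occupies more screen width, and compressing widens it again.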
  • the processing function navigation area 203 is used to display various processing functions for processing the to-be-processed video, for example, editing function, filter function, special effect function, etc.
  • the processing function navigation area 203 may include a multi-level navigation bar, and the corresponding processing function is displayed on each level of the navigation bar.
  • the user can trigger the corresponding processing function by opening the multi-level navigation bar as needed.
  • By setting a multi-level navigation bar in the processing function navigation area 203, it is possible to provide users with a simpler, easier-to-use operation interface and more selectable processing functions. At the same time, it can also improve the functional scalability of the application, thereby meeting users' needs and improving the operating experience.
  • Step S103 When a trigger operation for any processing function is received, display the preview screen of the processed video processed by that processing function in the video preview area, and display the editing mark corresponding to that processing function in the track editing area, where the editing mark and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
  • the edit flag indicates that the user has triggered the corresponding processing function.
  • the editing mark may include an editing track and/or an editing effect mark.
  • when the user triggers a processing function, a processing instruction to execute that processing function is issued.
  • the application then uses the processing function to process the to-be-processed segment of the video to be processed and obtain the processed video.
  • the application program displays the preview screen of the processed video in the video preview area 201, and displays the editing identifier corresponding to the processing function in the track editing area 202.
  • the editing identifier corresponding to the processing function and the editing track of the to-be-processed video are superimposed and displayed.
  • when the displayed editing mark includes the editing track corresponding to the processing function, the application superimposes the editing track corresponding to the processing function and the editing track of the video to be processed in parallel in the track editing area;
  • when the editing mark includes the editing effect mark corresponding to the processing function, the application superimposes the editing effect mark corresponding to the processing function on the editing track of the to-be-processed video for display.
  • the preview screen is displayed through the video preview area, the editing mark is displayed through the track editing area, and the processing function to be selected is displayed through the processing function navigation area.
  • the video clip to be processed is processed to obtain the processed video
  • the preview screen of the processed video is displayed in the video preview area, and the editing mark corresponding to the processing function is displayed in the track editing area.
  • the method may further include:
  • the starting editing point of the video to be processed can be selected by the user by sliding the editing track of the video to be processed in the track editing area left and right.
  • the user's sliding operation on the track editing area can be understood as changing the relative position of the editing track of the video to be processed and the time axis ruler: because the position of the time axis ruler is fixed (for example, at the center of the track editing area) and perpendicular to the editing track of the video to be processed, the sliding operation changes the position of the ruler on the editing track of the video to be processed.
  • the time point on the time axis corresponding to the intersection of the time axis ruler and the editing track of the to-be-processed video is taken as the starting editing point of the to-be-processed video.
  • the application can start processing the to-be-processed video from the starting editing point. Take as an example the user sliding the editing track 206 of the video to be processed to the left in the track editing area 202 shown in FIG. 2a, producing the state shown in FIG. 2b.
  • the time point on the time axis 204 corresponding to intersection point A of the timeline ruler 205 and the editing track 206 of the video to be processed is the starting edit point. Taking FIG. 2b as an example, the starting edit point corresponds to the time 00:05 on the time axis 204.
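Because the ruler is fixed while the track slides under it, the starting edit point reduces to a small calculation. The function name, the scroll-offset convention, and the pixels-per-second scale below are illustrative assumptions, not taken from the disclosure:

```python
def starting_edit_point(scroll_offset_px: float, pixels_per_second: float) -> float:
    """Time (in seconds) on the editing track currently under the fixed ruler.

    scroll_offset_px is how far the track has been slid to the left from its
    initial position, where the track's start point sat under the ruler.
    """
    return scroll_offset_px / pixels_per_second
```

For example, at 50 px per second, sliding the track 250 px to the left puts 00:05 under the ruler, consistent with the FIG. 2b example.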
  • displaying a preview screen of the processed video processed by the processing function in the video preview area and displaying the editing mark corresponding to the processing function in the track editing area can include:
  • a processing instruction corresponding to the processing function is acquired, where the processing instruction includes the function identification and processing parameters of the processing function;
  • the corresponding preview screen is displayed in the video preview area, and the editing mark corresponding to the processing function is displayed in the track editing area, where the editing mark takes the starting editing point as its reference point and is aligned with the editing track of the video clip to be processed on the time axis.
  • a processing instruction to execute the processing function is issued, and the processing instruction includes the function identification and processing parameters of the processing function.
  • the function identifier is used to indicate the processing function selected by the user.
  • the processing parameter includes the parameter for processing the video clip to be processed. Taking the processing function of adding media resource as an example, the processing parameter includes the corresponding media resource identifier.
  • the application program first obtains the duration corresponding to the corresponding processing function according to the function identifier therein.
  • the duration corresponding to the processing function can be preset.
  • the duration corresponding to the sticker function can be set to 1 second, and after receiving the function identifier of the sticker function, the application can determine that the duration corresponding to the processing function is 1 second.
  • the application determines the corresponding video clip to be processed according to the starting edit point and the duration corresponding to the processing function. Specifically, the application uses the starting editing point as the starting point of the to-be-processed video clip and the duration corresponding to the processing function as the duration of the clip; the to-be-processed video clip can then be determined.
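The segment determination just described can be sketched in a few lines. Clamping to the end of the video is my own assumption for when the function's duration overruns the clip; the disclosure does not specify this case:

```python
def to_be_processed_segment(start_edit_point: float,
                            function_duration: float,
                            video_duration: float) -> tuple:
    """Determine the clip a processing function applies to: it starts at the
    starting edit point and lasts the function's preset duration (clamped,
    as an assumption, to the end of the video)."""
    end = min(start_edit_point + function_duration, video_duration)
    return (start_edit_point, end)
```

With a 1-second sticker function and a starting edit point of 00:05, this yields the clip from 00:05 to 00:06.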
  • the application program uses the processing function corresponding to the function identifier and the corresponding processing parameter to process the determined to-be-processed video segment to obtain a processed video.
  • the application displays the preview screen corresponding to the processed video in the video preview area. It may display only the preview screen at a certain moment (for example, the moment of the starting editing point), or play the processed video in response to a received play trigger operation.
  • the editing track corresponding to the processing function and the editing track of the to-be-processed video are superimposed and displayed in parallel in the track editing area, and the editing track corresponding to the processing function takes the starting editing point as the reference point and is aligned with the editing track of the video clip to be processed on the time axis.
  • take the processing function of adding a special effect as an example, and suppose the duration corresponding to the added special effect is 30 seconds.
  • with the starting edit point at 00:05, the application determines that the video clip to be processed is the clip from 00:05 to 00:35.
  • after the application adds the special effect to the video clip between 00:05 and 00:35, the editing track corresponding to the added special effect is displayed in parallel with the editing track of the to-be-processed video in the track editing area; the editing track corresponding to the special-effect function and the video clip to be processed both correspond to 00:05 to 00:35 on the time axis, that is, the two are aligned on the time axis with the starting editing point as the reference point.
  • the application can also generate the reverse processing instruction of the current instruction.
  • the reverse instruction of the add instruction is delete
  • the delete instruction includes the identification and processing parameters corresponding to the processing function.
  • the application stores the reverse processing instruction in the cancel queue and displays a cancel button on the user interface. When the user clicks the cancel button, the reverse processing instruction is executed, that is, the processing of the previous processing instruction is cancelled.
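The reverse-instruction and cancel-queue mechanism could be sketched like this. `Instruction`, `UndoManager`, and the field names are hypothetical; the disclosure only states that an "add" reverses to a "delete" carrying the same identifier and parameters:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instruction:
    op: str            # "add" or "delete"
    function_id: str   # identifier of the processing function
    params: dict       # processing parameters, e.g. a media resource identifier

def reverse_of(instr: Instruction) -> Instruction:
    """An 'add' undoes as a 'delete' with the same identifier and parameters."""
    inverse = {"add": "delete", "delete": "add"}[instr.op]
    return Instruction(inverse, instr.function_id, instr.params)

class UndoManager:
    def __init__(self):
        self.cancel_queue = []  # reverse instructions, most recent last

    def record(self, instr: Instruction) -> None:
        # store the reverse instruction whenever a processing instruction runs
        self.cancel_queue.append(reverse_of(instr))

    def on_cancel_clicked(self) -> Optional[Instruction]:
        # pop and return the reverse instruction to execute, if any
        return self.cancel_queue.pop() if self.cancel_queue else None
```

Clicking cancel thus pops and executes the most recent reverse instruction, undoing the last applied processing function.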
  • when the processing function is a processing function of adding a media resource, the corresponding processing parameter includes the media resource identifier corresponding to the processing function, and processing the to-be-processed video clip based on the processing parameter to obtain the processed video includes:
  • the application obtains the corresponding media content from the media resource library of the processing function according to the media resource identifier. The media resource library includes multiple media resources corresponding to the processing function, and each media resource has its own media resource identifier, so the corresponding media resource can be obtained by matching the media resource identifier.
  • the application program mounts the media resource on the video clip to be processed, and the processed video can be obtained.
  • the application obtains the corresponding filter resource package from the filter resource library according to the filter resource identifier, and adds the filter resource package to the video clip to be processed for rendering to get the processed image video.
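A toy illustration of matching a media resource identifier against a resource library and mounting the result on a clip. The library contents, dictionary shape, and field names are all invented for the example:

```python
# Hypothetical resource library: each media resource keyed by its identifier.
FILTER_LIBRARY = {
    "filter_warm": {"name": "Warm", "lut": "warm.cube"},
    "filter_mono": {"name": "Mono", "lut": "mono.cube"},
}

def apply_media_resource(clip: dict, resource_id: str, library: dict) -> dict:
    """Match the media resource identifier in the library and mount the
    resource on the to-be-processed clip, returning the processed clip."""
    resource = library[resource_id]
    processed = dict(clip)
    processed["resources"] = list(clip.get("resources", [])) + [resource]
    return processed
```

The original clip is left untouched, which makes it easy to pair this lookup with the reverse-instruction mechanism described earlier.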
  • when the processing function is a processing function of adding media resources, the corresponding processing parameters include the media resource identifier corresponding to the processing function and the content parameters to be added, and processing the to-be-processed video clip based on the processing parameters to obtain the processed video includes:
  • the processing parameters received by the application include the text content parameters to be added input by the user and the text effect resource package identifier selected by the user.
  • the application obtains the corresponding text effect resource package from the text effect resource library according to the text effect resource package identifier, and processes the text to be added according to that package to obtain the processed text carrying the text effect; the processed text is then used as the media resource to be added and is added to the video clip to be processed for rendering to obtain the processed video.
  • displaying the editing identifier corresponding to the processing function in the track editing area includes:
  • the editing track corresponding to the processing function is displayed in the track editing area, where the editing track corresponding to the processing function takes the starting editing point as a reference point and is aligned with the editing track of the corresponding to-be-processed video clip on the time axis and is superimposed and displayed in parallel.
  • that is, the editing track corresponding to the processing function is superimposed and displayed in parallel with the editing track of the to-be-processed video, the editing track corresponding to the processing function is aligned with the corresponding to-be-processed video clip on the time axis, and the reference point for this alignment is the starting editing point of the video to be processed.
  • as shown in FIG. 3a, the editing track 301 corresponding to special effect type 1 takes starting editing point B as the reference point and is aligned on the time axis 304 with the editing track 302 of the corresponding to-be-processed video segment (that is, the part of the editing track 302 starting at the timeline ruler 303 and extending rightward to the dotted line in the editing track 302). As can be seen from FIG. 3a, the editing track 301 of special effect type 1 and the editing track 302 of the video clip to be processed are aligned and both correspond to 00:05 to 00:20 on the time axis 304.
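The alignment shown in FIG. 3a can be expressed as one shared mapping from track time to screen pixels; because both tracks render through the same mapping, equal time spans land on equal pixel spans. The function and parameter names, and the example figures, are illustrative:

```python
def track_x_span(start_time: float, duration: float,
                 ruler_x: float, time_under_ruler: float,
                 pixels_per_second: float) -> tuple:
    """On-screen horizontal span of a track segment.

    ruler_x is the fixed screen x of the time-axis ruler, and
    time_under_ruler is the track time currently sitting under it.
    """
    x0 = ruler_x + (start_time - time_under_ruler) * pixels_per_second
    return (x0, x0 + duration * pixels_per_second)
```

For a segment at 00:05–00:20 with the ruler over 00:05 at x = 100 and 10 px/s, both the effect track and the video segment map to the same pixel span, which is exactly the parallel alignment in the figure.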
  • the method may further include:
  • the navigation bar corresponding to the processing function is displayed in the processing function navigation area.
  • the application can realize the linkage between the track editing area and the processing function navigation area by receiving the user's selection of an editing mark. Specifically, when the user selects the editing mark corresponding to a processing function in the track editing area, the application displays the navigation bar corresponding to that processing function in the processing function navigation area, and the navigation bar displays the processing functions associated with that processing function.
  • processing function 1, for example, "replace special effect";
  • processing function 2, for example, "copy special effect";
  • processing function n, for example, "delete special effect";
  • and other trigger buttons for processing functions related to adding special effects.
  • displaying the navigation bar corresponding to the processing function in the processing function navigation area includes:
  • upon receiving the selection operation on the editing track corresponding to the processing function, the application updates the state information in the view model and sends the updated state information to the navigation bar manager through the view model;
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • the view model and the navigation bar manager are both functional components within the application.
  • the state information indicates the selected state of the edit flag
  • the state information may indicate that the edit flag is selected or not.
• the application program updates the state information of the edit mark corresponding to the processing function in the view model to the selected state, and the view model sends the updated state information to the navigation bar manager.
• the navigation bar manager determines that the state information of the edit mark corresponding to the processing function has been updated to the selected state, then creates the navigation bar corresponding to that processing function and displays it; for example, the processing functions associated with the processing function can be displayed in the navigation bar.
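The view-model linkage above can be sketched as a simple observer pattern. This is a minimal illustrative sketch; the class names, field names, and the mapping of associated functions are all assumptions for the example, not the application's actual components.

```python
# Minimal sketch of the linkage described above: selecting an edit mark
# updates state in the view model, which notifies the navigation bar
# manager to create a navigation bar listing the associated functions.

ASSOCIATED_FUNCTIONS = {  # hypothetical mapping, following the example above
    "special_effect": ["replace special effect", "copy special effect",
                       "delete special effect"],
}

class NavigationBarManager:
    def __init__(self):
        self.navigation_bar = None  # contents shown in the navigation area

    def on_state_updated(self, state):
        # Create the navigation bar only when an edit mark enters the
        # selected state, and display its associated processing functions.
        if state.get("selected"):
            self.navigation_bar = ASSOCIATED_FUNCTIONS[state["function"]]

class ViewModel:
    def __init__(self, nav_manager):
        self.nav_manager = nav_manager
        self.state = {"selected": False, "function": None}

    def select_edit_mark(self, function):
        # The application updates the state information to the selected
        # state; the view model forwards it to the navigation bar manager.
        self.state = {"selected": True, "function": function}
        self.nav_manager.on_state_updated(self.state)

manager = NavigationBarManager()
view_model = ViewModel(manager)
view_model.select_edit_mark("special_effect")
```

After the selection, `manager.navigation_bar` holds the associated processing functions to be displayed in the navigation area.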
  • the method may further include:
  • the updated editing identification information is sent to the track manager through the view model, so that the track manager displays the editing identification corresponding to the processing function in the track editing area according to the updated editing identification information.
  • view model and track manager are both functional components within the application.
• after obtaining the processed video, the application sends a processing completion notification to the view model to inform the view model that processing of the current to-be-processed video segment is complete.
  • the view model sends the updated editing identification information to the track manager, for example, the updated editing identification information is determined according to the corresponding processing function.
  • the track manager creates a corresponding edit mark according to the received updated edit mark information, and displays the created edit mark in the track edit area.
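The edit-mark creation flow above can be sketched in the same illustrative style. All class and field names here are assumptions; the sketch only shows the view model deriving updated edit-mark information and the track manager displaying it.

```python
# Illustrative sketch of the flow above: after processing completes, the
# view model determines updated edit-mark information from the processing
# function and sends it to the track manager, which creates and displays
# the corresponding edit mark in the track editing area.

class TrackManager:
    def __init__(self):
        self.track_editing_area = []  # edit marks currently displayed

    def on_edit_mark_updated(self, mark_info):
        # Create an edit mark from the received information and display it.
        self.track_editing_area.append(mark_info)

class ViewModel:
    def __init__(self, track_manager):
        self.track_manager = track_manager

    def on_processing_complete(self, function, start, end):
        # Updated edit-mark information is determined according to the
        # processing function and forwarded to the track manager.
        mark_info = {"function": function, "start": start, "end": end}
        self.track_manager.on_edit_mark_updated(mark_info)

track_manager = TrackManager()
view_model = ViewModel(track_manager)
view_model.on_processing_complete("special effect 1", 5, 20)
```

The mark's start and end mirror the interval of the processed segment, so it can be superimposed on the clip's editing track.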
• before receiving a trigger operation for any processing function, the method further includes:
  • the navigation bar corresponding to the processing function used to process the video clip to be processed is displayed.
• in addition to sliding the editing track of the to-be-processed video in the track editing area to select the starting edit point, the user can also select the editing track of the to-be-processed video, or the system can automatically select the editing track of the to-be-processed video.
  • the to-be-processed video may include at least one video segment, for example, the selected video segment is determined to be the to-be-processed video segment.
• while determining the to-be-processed video clip, the application can also display, in the processing function navigation area, the navigation bar corresponding to the processing functions for processing that clip; that is, upon receiving the user's selection operation on the editing track of the to-be-processed video, the application program displays the navigation bar corresponding to those processing functions in the processing function navigation area.
• displaying, in the video preview area, a preview screen of the processed video processed by the processing function, and displaying, in the track editing area, the edit identifier corresponding to the processing function, includes:
  • the corresponding preview screen is displayed in the video preview area, and the editing logo corresponding to the processing function is displayed in the track editing area, where the editing logo and the editing track of the to-be-processed video are overlapped and displayed in the track editing area.
  • the application program has determined the corresponding segment to be processed by receiving the user's selection operation on the editing track of the to-be-processed video.
• the application program processes the to-be-processed video clip based on the function identifier and processing parameters in the processing instruction to obtain the processed video.
  • the application program displays the corresponding preview screen in the video preview area, and displays the editing logo corresponding to the processing function in the track editing area.
  • the editing identifier may include an editing effect identifier, and the editing effect identifier may be overlapped and displayed in the track editing area with the editing track of the video to be processed.
  • the selection operation of the editing track of the video to be processed may correspond to at least one processing function of editing the video to be processed.
  • the application program upon receiving the user's selection operation on the editing track of the to-be-processed video, the application program displays a navigation bar corresponding to the processing function for processing the to-be-processed video clip in the processing function navigation area 313.
• when the application receives the user's click operation on the video segmentation processing function in the navigation bar, and also receives the user's movement operation on the to-be-processed video segment 311 in the editing track of the to-be-processed video, the application can obtain the processing instruction corresponding to the video segmentation function.
  • the processing instruction may include the video segmentation function identifier and processing parameters (for example, the processing parameters include the current operating video segment identifier and the target location time of the video segment, etc.).
• the application program processes the to-be-processed video segment based on the processing instruction to obtain the processed video.
• the application can also display the corresponding preview screen in the video preview area and display the editing effect identifier 312 corresponding to the processing function in the track editing area; as shown in FIG. 3b, the editing effect identifier 312 and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
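The segmentation instruction described above can be sketched as follows. The instruction format, segment representation, and all names here are illustrative assumptions; the sketch only shows a function identifier plus processing parameters (the identifier of the segment being operated on and a target time) driving a split.

```python
# Hedged sketch of a video-segmentation processing instruction: it carries
# the function identifier and processing parameters (the identifier of the
# segment currently operated on and the target split time).

def split_segment(segments, instruction):
    """Apply a 'split' processing instruction to a list of
    (segment_id, start, end) tuples and return the processed list."""
    assert instruction["function_id"] == "split"
    seg_id = instruction["params"]["segment_id"]
    t = instruction["params"]["target_time"]
    result = []
    for sid, start, end in segments:
        if sid == seg_id and start < t < end:
            # Split the targeted segment at the requested time point.
            result.append((sid + "a", start, t))
            result.append((sid + "b", t, end))
        else:
            result.append((sid, start, end))
    return result

# Example: split the to-be-processed segment "311" (spanning 5 s to 20 s)
# at the 12-second mark on the time axis.
instruction = {"function_id": "split",
               "params": {"segment_id": "311", "target_time": 12}}
processed = split_segment([("311", 5, 20)], instruction)
```

Segments whose identifier does not match, or for which the target time falls outside their span, are left unchanged.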
• displaying, in the processing function navigation area, the navigation bar corresponding to the processing function for processing the to-be-processed video clip includes:
• when receiving the user's selection operation on the editing track of the to-be-processed video, the application program updates the state information in the view model, and sends the updated state information to the navigation bar manager through the view model;
  • the application creates a navigation bar through the navigation bar manager and displays the navigation bar in the processing function navigation area.
  • the linkage between the track editing area and the processing function navigation area can be realized by selecting the editing track of the video to be processed.
• the processing function navigation area will display a navigation bar corresponding to the processing functions used to process the to-be-processed video clip, and the navigation bar displays the processing functions that can be used to process that clip.
• the editing identifier corresponding to the triggered processing function can be displayed in the track editing area, thereby realizing linkage between the processing function navigation area and the track editing area.
  • the application updates the state information of the editing track corresponding to the processing function in the view model to the selected state, and the view model sends the updated state information to the track manager.
  • the track manager determines that the status information of the editing track corresponding to the processing function is updated to the selected state, and displays or selects the corresponding editing track.
• the application program displays the editing track of the to-be-processed video as selected in the track editing area; when receiving a trigger operation on the add-media-resource processing function in the processing function navigation area, the application program displays the editing track corresponding to that processing function in the track editing area.
  • a video clip adding button is displayed on the editing track of the to-be-processed video. Therefore, the method may further include:
  • the editing track of the video to be processed is updated.
  • the application program updates the editing track of the video to be processed.
• the application program can determine the time point corresponding to the position of the timeline ruler and, according to where that time point falls on the time axis of the current to-be-processed video, add the to-be-added video clip to the current to-be-processed video and update the editing track of the to-be-processed video.
• if the time point is located to the left of the center point of the time axis corresponding to the current to-be-processed video, the application program will add the to-be-added video clip into the current to-be-processed video starting from the start time point; if the time point is located to the right of the center point of the time axis corresponding to the current to-be-processed video, the application will add the to-be-added video segment starting from the end time point of the current to-be-processed video.
  • the application program can also add the to-be-added video segment to the current to-be-processed video according to other implementations, which is not limited in the present disclosure.
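The placement rule above can be sketched as a single comparison. This is a rough illustration under assumed names; the tie-breaking behavior when the time point falls exactly at the center is not specified in the text and is an assumption here.

```python
# Sketch of the clip-placement rule described above: if the timeline-ruler
# time point lies to the left of the center of the current video's time
# axis, the new clip is added from the start time point; otherwise it is
# added from the end time point.

def placement_for(current_duration, ruler_time):
    """Return 'start' or 'end' depending on where the ruler time point
    falls relative to the center of the current to-be-processed video."""
    center = current_duration / 2
    # Assumption: a time point exactly at the center is treated as "end".
    return "start" if ruler_time < center else "end"

# A 60-second to-be-processed video: a ruler position at 00:10 adds the
# clip from the start; a position at 00:50 adds it from the end.
placement_left = placement_for(60, 10)
placement_right = placement_for(60, 50)
```

As the surrounding text notes, other placement strategies are equally possible; this only illustrates the left/right-of-center rule.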
  • the trigger operation includes click, text/voice input, and touch input (for example, the movement operation of moving the to-be-processed video clip 311 in FIG. 3b), etc., and the user sends out the trigger operation to execute the corresponding processing function.
  • the application program can obtain the processing instruction of the corresponding processing function according to the user's trigger operation.
• the device 400 may include: a to-be-processed video receiving module 401, a to-be-processed video display module 402, and a to-be-processed video processing module 403, wherein:
  • the to-be-processed video receiving module 401 is used to receive the to-be-processed video
  • the to-be-processed video display module 402 is configured to display the preview screen of the to-be-processed video through the video preview area on the display interface, display the editing track of the to-be-processed video through the track editing area, and display at least one processing function through the processing function navigation area;
• the to-be-processed video processing module 403 is used to, when a trigger operation for any processing function is received, display in the video preview area a preview screen of the processed video processed by that processing function, and display in the track editing area the editing logo corresponding to that processing function, where the editing logo and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
  • the preview screen is displayed through the video preview area, the editing mark is displayed through the track editing area, and the processing function to be selected is displayed through the processing function navigation area.
  • the video clip to be processed is processed to obtain the processed video
  • the preview screen of the processed video is displayed in the video preview area
  • the editing logo corresponding to the processing function is displayed in the track editing area.
  • the device further includes a starting edit point determination module, configured to:
  • the to-be-processed video processing module may include: a first processing instruction acquisition module, a first processed video acquisition module, and a first preview and track display module, wherein:
  • the first processing instruction acquisition module is configured to acquire a processing instruction corresponding to the processing function when a trigger operation for any processing function is received, where the processing instruction includes the function identification and processing parameters of the processing function;
  • the first processed video acquisition module is used to determine the video clip to be processed based on the function identifier and the starting edit point, and process the video clip to be processed based on the processing parameters to obtain the processed video;
• the first preview and track display module is used to display the corresponding preview screen in the video preview area based on the processed video, and to display the editing logo corresponding to the processing function in the track editing area, where the editing logo takes the starting edit point as a reference point and is aligned with the editing track of the to-be-processed video clip on the timeline.
  • the processing parameter includes a media resource identifier corresponding to the processing function
  • the first processed video acquisition module is specifically configured to:
  • the first preview and track display module is specifically used for:
  • the editing track corresponding to the processing function is displayed in the track editing area, where the editing track corresponding to the processing function takes the starting editing point as a reference point and is aligned with the editing track of the corresponding to-be-processed video clip on the time axis and is superimposed and displayed in parallel.
  • the device may further include a first linkage module for:
  • the navigation bar corresponding to the processing function is displayed in the processing function navigation area.
  • the first linkage module is specifically used for:
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • the device further includes an edit logo creation and display module, configured to:
  • the updated editing identification information is sent to the track manager through the view model, so that the track manager displays the editing identification corresponding to the processing function in the track editing area according to the updated editing identification information.
  • the device further includes a second linkage module for:
• the navigation bar corresponding to the processing function used to process the to-be-processed video clip is displayed.
  • the to-be-processed video processing module may include: a second processing instruction acquisition module, a second processed video acquisition module, and a second preview and track display module, wherein:
  • the second processing instruction acquisition module is configured to acquire the processing instruction corresponding to the processing function when a trigger operation for any processing function is received;
  • the second processed video acquisition module is configured to process the to-be-processed video segment based on the processing instruction to obtain the processed video;
• the second preview and track display module is used to display the corresponding preview screen in the video preview area based on the processed video, and to display the editing logo corresponding to the processing function in the track editing area, where the editing logo and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
  • the second linkage module is specifically used for:
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • a video clip adding button is displayed on the editing track of the to-be-processed video, and the device further includes an editing track update module for:
  • the editing track of the video to be processed is updated.
  • modules may be implemented as software components executed on one or more general-purpose processors, and may also be implemented as hardware that performs certain functions or a combination thereof, such as programmable logic devices and/or application specific integrated circuits.
• these modules may be embodied in the form of software products, and the software products may be stored in non-volatile storage media; such software products include a number of instructions to cause a computing device (such as a personal computer, a server, a network device, or a mobile terminal) to implement the methods described in the embodiments of the present disclosure.
  • the aforementioned modules may also be implemented on a single device or distributed on multiple devices. The functions of these modules can be combined with each other, or can be further split into multiple sub-modules.
• the video processing devices in the foregoing embodiments may include mobile terminals, such as smart phones, palmtop computers, tablet computers, and wearable devices with display screens, and may also include computer equipment, such as desktop computers, notebook computers, and all-in-one computers.
  • FIG. 5 shows a schematic structural diagram of an electronic device (for example, a terminal device or a server that executes the method shown in FIG. 1) 500 suitable for implementing the embodiments of the present disclosure.
• the electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (e.g., car navigation terminals), and wearable devices, as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 5 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device includes: a memory and a processor, where the memory is used to store programs for executing the methods described in the foregoing method embodiments; the processor is configured to execute the programs stored in the memory.
• the processor here may be referred to as the processing device 501 described below, and the memory may include at least one of a read-only memory (ROM) 502, a random access memory (RAM) 503, and a storage device 508, as specifically shown below:
• the electronic device 500 may include a processing device (such as a central processing unit or a graphics processor) 501, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503.
• in the RAM 503, various programs and data necessary for the operation of the electronic device 500 are also stored.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
• the following devices can be connected to the I/O interface 505: input devices 506 such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, and gyroscopes; output devices 507 such as liquid crystal displays (LCDs), speakers, and vibrators; storage devices 508 such as magnetic tapes and hard disks; and a communication device 509.
  • the communication device 509 may allow the electronic device 500 to perform wireless or wired communication with other devices to exchange data.
• although FIG. 5 shows an electronic device with various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 509, or installed from the storage device 508, or installed from the ROM 502.
• when the computer program is executed by the processing device 501, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the aforementioned computer-readable storage medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
• the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
• the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
• the client and server can communicate using any currently known or future developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
• Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (for example, the Internet), and peer-to-peer networks (for example, ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being installed in the electronic device.
• the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, the electronic device is caused to:
• when any processing function is triggered, display the preview screen of the processed video processed by the processing function in the video preview area, and display the editing logo corresponding to the processing function in the track editing area, where the editing logo and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
• the above-mentioned programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
• the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, connected via the Internet using an Internet service provider).
• each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logic function. It should also be noted that the functions marked in the blocks can occur in an order different from the order marked in the drawings; for example, two blocks shown in succession can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
• each block in the block diagram and/or flowchart, and the combination of the blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • modules or units involved in the described embodiments of the present disclosure can be implemented in software or hardware.
  • the name of the module or unit does not constitute a limitation on the unit itself under certain circumstances.
• for example, the to-be-processed video receiving module can also be described as a "module for receiving the to-be-processed video".
• exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Products (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), and so on.
  • a machine-readable medium may be a tangible medium, which may contain or store a program for use by the instruction execution system, apparatus, or device or in combination with the instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • the machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or equipment, or any suitable combination of the foregoing.
• more specific examples of machine-readable storage media would include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • the present disclosure provides a video processing method, including:
• the preview screen of the processed video processed by the processing function is displayed in the video preview area, and the editing logo corresponding to the processing function is displayed in the track editing area, where the editing logo and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
• before receiving a trigger operation for any processing function, the method further includes:
• displaying, in the video preview area, the preview screen of the processed video processed by the processing function, and displaying, in the track editing area, the editing logo corresponding to the processing function, includes:
  • a processing instruction corresponding to the processing function is acquired, where the processing instruction includes the function identification and processing parameters of the processing function;
  • the corresponding preview screen is displayed in the video preview area, and the editing logo corresponding to the processing function is displayed in the track editing area.
  • the editing mark uses the starting editing point as a reference point and is aligned with the editing track of the to-be-processed video clip on the time axis.
  • the processing parameter includes the processing function and the media resource identifier corresponding to the processing function
  • processing the to-be-processed video segment based on the processing parameter to obtain the processed video includes:
  • displaying the editing identifier corresponding to the processing function in the track editing area includes:
  • the editing track corresponding to the processing function is displayed in the track editing area, where the editing track corresponding to the processing function takes the start editing point as the reference point and is aligned with the editing track of the corresponding to-be-processed video clip on the time axis and superimposed and displayed in parallel.
  • the method further includes:
  • a navigation bar corresponding to the processing function is displayed in the processing function navigation area.
  • displaying the navigation bar corresponding to the processing function in the processing function navigation area includes:
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • the method further includes:
  • the updated editing identification information is sent to the track manager through the view model, so that the track manager displays the editing identification corresponding to the processing function in the track editing area according to the updated editing identification information.
• before receiving a trigger operation for any processing function, the method further includes:
• the navigation bar corresponding to the processing function used to process the to-be-processed video clip is displayed.
• displaying, in the video preview area, the preview screen of the processed video processed by the processing function, and displaying, in the track editing area, the editing logo corresponding to the processing function, includes:
  • the corresponding preview screen is displayed in the video preview area, and the editing logo corresponding to the processing function is displayed in the track editing area, where the editing logo and the editing track of the to-be-processed video are overlapped and displayed in the track editing area.
• displaying, in the processing function navigation area, the navigation bar corresponding to the processing function for processing the to-be-processed video clip includes:
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • a video clip adding button is displayed on the edit track of the to-be-processed video, and the method further includes:
  • the editing track of the video to be processed is updated.
  • the present disclosure provides a video processing device, including:
  • the to-be-processed video receiving module is used to receive the to-be-processed video
  • the to-be-processed video display module is used to display the preview screen of the to-be-processed video through the video preview area on the display interface, to display the editing track of the to-be-processed video through the track editing area, and to display at least one processing function through the processing function navigation area;
• the to-be-processed video processing module is used to, when a trigger operation for any processing function is received, display in the video preview area the preview screen of the processed video processed by the processing function, and display in the track editing area the editing logo corresponding to the processing function, where the editing logo and the editing track of the to-be-processed video are superimposed and displayed in the track editing area.
  • the device further includes a starting edit point determination module, configured to:
  • the to-be-processed video processing module may include: a first processing instruction acquisition module, a first processed video acquisition module, and a first preview and track display module, wherein:
  • the first processing instruction acquisition module is configured to acquire a processing instruction corresponding to the processing function when a trigger operation for any processing function is received, where the processing instruction includes the function identification and processing parameters of the processing function;
  • the first processed video acquisition module is used to determine the video clip to be processed based on the function identifier and the starting edit point, and process the video clip to be processed based on the processing parameters to obtain the processed video;
  • the first preview and track display module is used to display, based on the processed video, the corresponding preview screen in the video preview area, and to display the editing identifier corresponding to the processing function in the track editing area, where the editing identifier takes the starting edit point as a reference point and is aligned with the editing track of the to-be-processed video clip on the timeline.
  • the processing parameter includes a media resource identifier corresponding to the processing function
  • the first processed video acquisition module is specifically configured to:
  • the first preview and track display module is specifically used for:
  • the editing track corresponding to the processing function is displayed in the track editing area, where the editing track corresponding to the processing function takes the starting editing point as a reference point and is aligned with the editing track of the corresponding to-be-processed video clip on the time axis and is superimposed and displayed in parallel.
  • the device may further include a first linkage module for:
  • the navigation bar corresponding to the processing function is displayed in the processing function navigation area.
  • the first linkage module is specifically used for:
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • the device further includes an editing identifier creation and display module, which is used to:
  • send the updated editing identifier information to the track manager through the view model, so that the track manager displays the editing identifier corresponding to the processing function in the track editing area according to the updated editing identifier information.
  • the device further includes a second linkage module for:
  • the navigation bar containing the processing function used to process the to-be-processed video clip is displayed.
  • the to-be-processed video processing module may include: a second processing instruction acquisition module, a second processed video acquisition module, and a second preview and track display module, where:
  • the second processing instruction acquisition module is configured to acquire the processing instruction corresponding to the processing function when a trigger operation for any processing function is received;
  • the second processed video acquisition module is configured to process the to-be-processed video segment based on the processing instruction to obtain the processed video;
  • the second preview and track display module is used to display, based on the processed video, the corresponding preview screen in the video preview area, and to display the editing identifier corresponding to the processing function in the track editing area, where the editing identifier and the editing track of the to-be-processed video are overlapped and superimposed in the track editing area.
  • the second linkage module is specifically used for:
  • a navigation bar is created through the navigation bar manager and displayed in the processing function navigation area.
  • a video clip adding button is displayed on the editing track of the to-be-processed video, and the device further includes an editing track update module for:
  • the editing track of the video to be processed is updated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a video processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: receiving a video to be processed; displaying, on a display interface, a preview screen of the video to be processed in a video preview area, an editing track of the video in a track editing area, and at least one processing function in a processing function navigation area; and, upon receiving a trigger operation for any processing function, displaying in the video preview area a preview screen of the video processed by that function and displaying in the track editing area the editing identifier corresponding to the function, where the editing identifier and the editing track of the video to be processed are superimposed in the track editing area. Because the solution partitions picture preview, track editing, and processing-function navigation during video processing and establishes a reasonable linkage mechanism among them, a user can obtain rich processing functions simply and conveniently, improving the user experience.

Description

视频处理方法、装置、电子设备及计算机可读存储介质
本申请要求于2020年6月10日递交的中国专利申请第202010525242.8号的优先权,该中国专利申请的全文以引入的方式并入以作为本申请的一部分。
技术领域
本公开涉及计算机技术领域,具体而言,本公开涉及一种视频处理方法、装置、电子设备及计算机可读存储介质。
背景技术
随着移动互联网的快速发展,移动端的视频处理类APP(Application,应用程序)的需求也越来越大,现有的视频处理类APP要么完全拷贝PC(Personal Computer,个人计算机)端的视频处理软件的功能和逻辑,要么简化PC端的视频处理软件的功能和逻辑,前者提供的视频处理功能复杂不易用,而后者由于功能扩展性较差又无法满足用户需求。
发明内容
提供该发明内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该发明内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
第一方面,本公开实施例提供了一种视频处理方法,包括:
接收待处理视频;
在显示界面通过视频预览区显示待处理视频的预览画面,通过轨道编辑区显示待处理视频的编辑轨道,并通过处理功能导航区显示至少一个处理功能;
在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
第二方面,本公开实施例提供了一种视频处理装置,包括:
待处理视频接收模块,用于接收待处理视频;
待处理视频显示模块,用于在显示界面通过视频预览区显示待处理视频的预览画面,通过轨道编辑区显示待处理视频的编辑轨道,并通过处理功能导航区显示至少一个处理功能;
待处理视频处理模块,用于在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
第三方面,本公开实施例提供了一种电子设备,包括存储器和处理器;
存储器中存储有计算机程序;
处理器,用于执行计算机程序以实现第一方面实施例中所提供的方法。
第四方面,本公开实施例提供了一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,计算机程序被处理器执行时实现第一方面实施例中所提供的方法。
本公开实施例提供的技术方案带来的有益效果是:
在处理待处理视频过程中,通过视频预览区显示预览画面,通过轨道编辑区显示编辑标识,通过处理功能导航区显示待选取的处理功能,在接收到用户触发所需处理功能后,对待处理视频片段进行处理得到处理后的视频,并在视频预览区显示处理后的视频的预览画面、在轨道编辑区显示处理功能对应的编辑标识。由于该方案对视频处理过程中的画面预览、轨道编辑和处理功能导航进行了分区并设置了合理的联动机制,保证了用户在使用该方案进行视频处理时,能够简单方便的获取丰富的处理功能,提高了用户体验。此外,该方案也能够提高应用程序的功能可扩展性,进而满足用户的需求并提升用户的操作体验。
附图说明
为了更清楚地说明本公开实施例中的技术方案,下面将对本公开实施例描述中所需要使用的附图作简单地介绍。
图1为本公开实施例提供的一种视频处理方法的流程示意图;
图2a为本公开实施例中用户上传待处理视频后显示界面示意图;
图2b为本公开实施例中用户对图2a的显示界面中的轨道编辑区发出向左滑动操作后的显示界面示意图;
图3a为本公开实施例的一个示例中添加特效处理功能对应的轨道编辑区的示意图;
图3b为本公开实施例的一个示例中分割视频处理功能对应的轨道编辑区的示意图;
图4为本公开实施例提供的一种视频处理装置的结构框图;
图5为本公开实施例提供的一种电子设备的结构示意图。
具体实施方式
下面详细描述本公开的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,仅用于解释本公开,而不能解释为对本发明的限制。
本技术领域技术人员可以理解，除非特意声明，这里使用的单数形式“一”、“一个”、“所述”和“该”也可包括复数形式。应该进一步理解的是，本公开的说明书中使用的措辞“包括”是指存在所述特征、整数、步骤、操作、元件和/或组件，但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元件、组件和/或它们的组。应该理解，当我们称元件被“连接”或“耦接”到另一元件时，它可以直接连接或耦接到其他元件，或者也可以存在中间元件。此外，这里使用的“连接”或“耦接”可以包括无线连接或无线耦接。这里使用的措辞“和/或”包括一个或更多个相关联的列出项的全部或任一单元和全部组合。
为使本公开的目的、技术方案和优点更加清楚,下面将结合附图对本公开实施方式作进一步地详细描述。
图1为本公开实施例提供的一种视频处理方法的流程示意图,该方法的执行主体可以包括安装视频处理服务应用程序的电子设备,如图1所示,该方法可以包括:
步骤S101,接收待处理视频。
例如,待处理视频可以包括用户上传的未经处理的视频,也可以包括应用程序保存的处理过的草稿视频,还可以包括用户上传的多张图片拼接后形成的未处理的视频。
具体地,用户打开应用程序后,可以通过应用程序界面中设置的新视频添加按钮选择上传本地存储的未经处理的视频,或者可以直接选取应用程序中保存的上次处理过的草稿视频。此外,用户还可以通过应用程序界面中设置的新视频添加按钮选择上传本地存储的多张图片,应用程序可以将接收到的多张图片拼接成视频作为待处理视频。本公开对待处理视频的接收方式不进行限定。应用程序在接收到待处理视频后进行相关显示和处理操作。
步骤S102,在显示界面通过视频预览区显示待处理视频的预览画面,通过轨道编辑区显示待处理视频的编辑轨道,并通过处理功能导航区显示至少一个处理功能。
例如,应用程序的显示界面可以包括三个区域以对待处理视频的相关信息进行显示,该三个区域分别包括:视频预览区、轨道编辑区和处理功能导航区。
如图2a所示,显示界面中包括视频预览区201、轨道编辑区202以及处理功能导航区203。
在一个实施例中,视频预览区201可以看作播放界面,用于显示预览画面。视频预览区201可以显示待处理视频的各视频片段对应的预览画面,也可以播放整个待处理视频。
轨道编辑区202中设置有时间轴204和时间轴标尺205,并用于显示至少一种编辑标识。当接收到待处理视频时,轨道编辑区202中显示待处理视频的编辑轨道206,待处理视频的编辑轨道206的轨道起始点与时间轴204的起始点对齐,在待处理视频的编辑轨道206上显示有待处理视频的各视频片段。用户可以在待处理视频的编辑轨道206的长度方向上对待处理视频的编辑轨道206进行拉伸和压缩操作。当接收到针对编辑轨道206的拉伸操作时,应用程序缩小时间轴204;当接收到针对编辑轨道206的压缩操作时,应用程序放大时间轴204。
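The stretch/compress behavior described above maps naturally onto a single zoom parameter. Below is a minimal Python sketch of that idea; the class and method names (`Timeline`, `on_stretch`, `on_compress`) are hypothetical and not part of the patent.

```python
class Timeline:
    """Minimal timeline model: maps seconds on the time axis to on-screen pixels."""

    def __init__(self, pixels_per_second: float = 50.0):
        self.pixels_per_second = pixels_per_second

    def on_stretch(self, factor: float) -> None:
        # Stretching the editing track zooms in: each second occupies more
        # pixels, so the visible span of the time axis shrinks.
        self.pixels_per_second *= factor

    def on_compress(self, factor: float) -> None:
        # Compressing the track zooms out: a longer span becomes visible.
        self.pixels_per_second /= factor

    def x_for_time(self, seconds: float) -> float:
        """Horizontal position of a given time point on the track."""
        return seconds * self.pixels_per_second


tl = Timeline()
tl.on_stretch(2.0)
print(tl.x_for_time(3.0))  # 300.0
```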
处理功能导航区203用于显示各种用于对待处理视频进行处理的处理功能，例如，剪辑功能、滤镜功能、特效功能等。处理功能导航区203可包括多级导航栏，每级导航栏上显示对应的处理功能。用户可以根据需要通过打开多级导航栏来触发相应的处理功能。通过在处理功能导航区203中设置多级导航栏，可以向用户提供更为简单易用的操作界面和更多可选择的处理功能，同时也能够提高应用程序的功能可扩展性，满足用户的需求并提升用户的操作体验。
步骤S103,在接收到针对任一处理功能的触发操作时,在视频预览区显示经任一处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示任一处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
例如,编辑标识指示用户触发了对应的处理功能。在一个实施例中,编辑标识可以包括编辑轨道和编辑效果标识。
具体地,用户在处理功能导航区里发出针对任一处理功能的触发操作时,即发出了执行该处理功能的处理指令。应用程序在接收到执行该处理功能的处理指令后,利用该处理功能对待处理视频的待处理片段进行处理并得到处理后的视频。应用程序将处理后的视频的预览画面显示在视频预览区201,并在轨道编辑区202中显示该处理功能对应的编辑标识。并且,在轨道编辑区202中,该处理功能对应的编辑标识与待处理视频的编辑轨道叠加显示。在一个实施例中,当显示的编辑标识包括该处理功能对应的编辑轨道时,应用程序将该处理功能对应的编辑轨道与待处理视频的编辑轨道在轨道编辑区中并行叠加显示;当显示的编辑标识包括该处理功能对应的编辑效果标识时,应用程序则将该处理功能对应的编辑效果标识重叠于待处理视频的编辑轨道上进行叠加显示。
本公开提供的方案,在处理待处理视频过程中,通过视频预览区显示预览画面,通过轨道编辑区显示编辑标识,通过处理功能导航区显示待选取的处理功能,在接收到用户触发所需处理功能后,对待处理视频片段进行处理得到处理后的视频,并在视频预览区显示处理后的视频的预览画面,在轨道编辑区显示处理功能对应的编辑标识。由于该方案对视频处理过程中的画面预览、轨道编辑和处理功能导航进行了分区并设置了合理的联动机制,保证了用户在使用该方案进行视频处理时,能够简单方便的获取丰富的处理功能,提高了用户体验。此外,该方案也能够提高应用程序的功能可扩展性,进而满足用户的需求并提升用户的操作体验。
在本公开的一种可选实施例中,在接收到针对任一处理功能的触发操作之前,该方法还可以包括:
接收轨道编辑区上的滑动操作,确定待处理视频的起始编辑点。
待处理视频的起始编辑点可以通过用户左右滑动轨道编辑区中待处理视频的编辑轨道来选定。具体来说,用户在轨道编辑区上的滑动操作可以理解为改变待处理视频的编辑轨道和时间轴标尺的相对位置,由于时间轴标尺的位置固定不变(例如位于轨道编辑区的中央)且与待处理视频的编辑轨道垂直,因此,通过滑动操作可以改变时间轴在待处理视频的编辑轨道上的位置。
具体地，将时间轴标尺与待处理视频的编辑轨道的交点所对应的时间轴上的时间点作为待处理视频的起始编辑点。在接收到用户选择的处理功能后，应用程序可以从该起始编辑点开始对待处理视频进行处理。以用户在图2a所示的轨道编辑区202内向左滑动待处理视频的编辑轨道206得到图2b所示的情形为例，如图2b所示，时间轴标尺205与待处理视频的编辑轨道206的交点A所对应的时间轴204上的时间点即为对应的起始编辑点，以图2b为例可以看出，该起始编辑点对应于时间轴204上的时刻00:05。
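Since the timeline ruler is fixed while the track scrolls under it, the starting edit point reduces to a simple offset-to-time conversion. A sketch, assuming the track scroll is tracked in pixels and the timeline scale in pixels per second (both names are illustrative assumptions):

```python
def start_edit_point(scroll_offset_px: float, pixels_per_second: float) -> float:
    """Time (in seconds) on the video track currently under the fixed ruler.

    scroll_offset_px: how far the track has been dragged to the left, in
    pixels; 0 means the track start is aligned with the ruler.
    """
    return scroll_offset_px / pixels_per_second


# Dragging the track 250 px to the left at 50 px/s puts the ruler at 00:05.
print(start_edit_point(250.0, 50.0))  # 5.0
```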
在本公开的一种可选实施例中,在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,可以包括:
在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令,其中,处理指令包括处理功能的功能标识和处理参数;
基于功能标识和起始编辑点确定待处理视频片段,并基于处理参数对待处理视频片段进行处理,得到处理后的视频;
基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识以起始编辑点为基准点并与待处理视频片段的编辑轨道在时间轴上对齐。
由前文描述可知,用户在处理功能导航区里发出针对任一处理功能的触发操作时,即发出了执行该处理功能的处理指令,该处理指令包括处理功能的功能标识和处理参数。例如,功能标识用于指示用户所选择的处理功能,处理参数包括对待处理视频片段进行处理的参数,以添加媒体资源的处理功能为例,其处理参数包括对应的媒体资源标识。
具体地,应用程序在接收到处理指令后,首先根据其中的功能标识获取对应的处理功能对应的时长。在一个实施例中,处理功能对应的时长可以预先设定。例如,可以将贴纸功能对应的时长设置为1秒,应用程序在接收到贴纸功能的功能标识后,可以确定该处理功能对应的时长为1秒。之后,应用程序根据起始编辑点和处理功能对应的时长确定对应的待处理视频片段,具体来说,应用程序将起始编辑点作为待处理视频片段的起始点,将处理功能的对应时长作为待处理视频片段的时长,进而可以确定出待处理视频片段。在确定出待处理视频片段后,应用程序利用功能标识对应的处理功能和对应的处理参数对确定出的待处理视频片段进行处理,从而得到处理后的视频。
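The segment-resolution step described above (start point plus a preset per-function duration) can be sketched as follows. The duration table and function identifiers are illustrative assumptions; only the 1-second sticker value comes from the example in the text.

```python
# Preset duration per function identifier; only the 1-second sticker value
# comes from the text above, the rest is made up for illustration.
FUNCTION_DURATIONS = {"sticker": 1.0, "filter": 3.0}


def resolve_segment(function_id: str, start_point: float,
                    video_duration: float) -> tuple:
    """Segment to process: starts at the edit point and lasts the preset
    duration of the triggered function, clamped to the video length."""
    end = min(start_point + FUNCTION_DURATIONS[function_id], video_duration)
    return (start_point, end)


print(resolve_segment("sticker", 5.0, 60.0))  # (5.0, 6.0)
```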
进一步地,基于得到的处理后的视频,应用程序在视频预览区显示该处理后的视频对应的预览画面,可以只显示某一时刻(例如,起始编辑点所在时刻)的预览画面,还可以基于接收到的播放触发操作播放处理后的视频。
在执行用户所选的功能后，该处理功能对应的编辑轨道与待处理视频的编辑轨道在轨道编辑区中并行叠加显示，且该处理功能对应的编辑轨道以起始编辑点为基准点并与待处理视频片段的编辑轨道在时间轴上对齐。以处理功能为添加特效功能为例，该添加特效对应的时长为0.3秒，当起始编辑点对应于时间轴上的00:05时刻，应用程序确定待处理视频片段为00:05至00:35之间的待处理视频片段。当应用程序对该00:05至00:35之间的待处理视频片段进行添加特效之后，在轨道编辑区中与待处理视频的编辑轨道并行显示添加特效对应的编辑轨道，且该添加特效功能对应的编辑轨道和待处理视频片段都分别与时间轴上00:05至00:35对应，即两者以起始编辑点为基准点并在时间轴上对齐。此外，在处理视频的同时，应用程序还可以生成当前指令的逆处理指令。例如，添加指令的逆指令是删除，删除指令包括处理功能对应的标识和处理参数。应用程序将所述逆处理指令存入撤销队列中，并在用户界面上显示撤销按钮。当接收到用户对用户界面上的撤销按钮的点击操作时，执行该逆处理指令，即撤销上一步处理指令对应的处理功能。
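The inverse-instruction mechanism described above is essentially an undo stack: each executed instruction records its inverse, and the undo button pops and executes the most recent one. A minimal sketch, with all names hypothetical:

```python
class UndoQueue:
    """Holds inverse instructions; the undo button executes the latest one."""

    def __init__(self):
        self._inverses = []

    def record(self, inverse):
        self._inverses.append(inverse)

    def undo(self, editor):
        if self._inverses:
            self._inverses.pop()(editor)


class Editor:
    def __init__(self):
        self.effects = []

    def add_effect(self, effect_id, undo_queue):
        # Executing "add" records "delete" (its inverse) in the undo queue.
        self.effects.append(effect_id)
        undo_queue.record(lambda ed: ed.effects.remove(effect_id))


editor, undo = Editor(), UndoQueue()
editor.add_effect("effect_type_1", undo)
undo.undo(editor)  # reverts the add
print(editor.effects)  # []
```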
在本公开的一种可选实施例中,当处理功能为添加媒体资源的处理功能时,对应的处理参数包括处理功能对应的媒体资源标识,基于处理参数对待处理视频片段进行处理,得到处理后的视频,包括:
基于媒体资源标识获取对应的媒体资源;以及
将媒体资源挂载在待处理视频片段上,得到处理后的视频。
在一个实施例中,应用程序根据媒体资源标识在对应的处理功能的媒体资源库中获取对应的媒体内容,该媒体资源库中包括该处理功能对应的多种媒体资源,且每一种媒体资源都有对应的媒体资源标识,从而通过媒体资源标识的匹配即可获取到对应的媒体资源。在获取到媒体资源并确定出待处理视频片段后,应用程序将该媒体资源挂载在待处理视频片段上,即可得到处理后的视频。
以添加滤镜功能为例,应用程序根据滤镜资源标识从滤镜资源库中获取对应的滤镜资源包,并将该滤镜资源包添加到待处理视频片段进行渲染即可得到处理后的视频。
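The lookup-and-mount flow for media resources (e.g., the filter example above) can be sketched as a dictionary lookup by resource identifier followed by attaching the resource to the segment. The library contents and field names below are illustrative assumptions, not the patent's data model:

```python
# Illustrative resource library keyed by media resource identifier.
FILTER_LIBRARY = {"filter_01": {"name": "warm", "intensity": 0.8}}


def mount_media_resource(segment: dict, resource_id: str) -> dict:
    """Fetch the resource by its identifier and mount it on the segment;
    rendering the result yields the processed video."""
    resource = FILTER_LIBRARY[resource_id]
    return {**segment, "resources": segment.get("resources", []) + [resource]}


segment = {"start": 5.0, "end": 6.0}
processed = mount_media_resource(segment, "filter_01")
print(processed["resources"][0]["name"])  # warm
```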
此外,在本公开的一种可选实施例中,当处理功能为添加媒体资源的处理功能时,对应的处理参数包括处理功能对应的媒体资源标识和待添加内容参数,基于处理参数对待处理视频片段进行处理,得到处理后的视频,包括:
基于媒体资源标识获取对应的媒体资源;以及
基于所述媒体资源和所述待添加内容参数,获得待添加媒体资源;
将所述待添加媒体资源挂载在待处理视频片段上,得到处理后的视频。
以添加文字功能为例,应用程序接收到的处理参数包括用户输入的待添加文字内容参数和用户选择的文字效果资源包标识,应用程序根据文字效果资源包标识从文字效果资源库中获取对应的文字效果资源包,并根据该文字效果资源包对待添加文字进行处理,得到处理后的带有文字效果的待添加文字,并将其作为待添加媒体资源,应用程序将该带有文字效果的待添加文字添加到待处理视频片段进行渲染从而得到处理后的视频。
在本公开的一种可选实施例中,在轨道编辑区显示处理功能对应的编辑标识,包括:
在轨道编辑区显示处理功能对应的编辑轨道,其中,处理功能对应的编辑轨道以起始编辑点为基准点并与对应的待处理视频片段的编辑轨道在时间轴上对齐且并行叠加显示。
处理功能对应的编辑轨道与待处理视频的编辑轨道并行叠加显示，且该处理功能对应的编辑轨道与对应的待处理视频片段在时间轴上对齐，该处理功能对应的编辑轨道与对应的待处理视频片段对齐的基准点为待处理视频的起始编辑点。以处理功能为添加特效、且对应的媒体资源为特效类型1为例进行说明，如图3a所示，在执行增加特效类型1的处理指令后，特效类型1对应的编辑轨道301以起始编辑点B为基准点，并与对应的待处理视频片段(即待处理视频片段的编辑轨道302中，从时间轴标尺303开始向右直至编辑轨道302中的虚线结束的视频片段)的编辑轨道302在时间轴304上对齐。由图3a可以看出，特效类型1的编辑轨道301和待处理视频片段的编辑轨道302对齐且对应于时间轴304上的00:05至00:20。
在本公开的一种可选实施例中,该方法还可以包括:
在接收到针对所述处理功能对应的编辑轨道的选中操作时,在所述处理功能导航区显示所述处理功能对应的导航栏。
在一个实施例中,应用程序通过接收用户对编辑标识的选中操作可以实现轨道编辑区和处理功能导航区的联动。具体来说,当接收到用户在轨道编辑区选中某一处理功能对应编辑标识时,应用程序在处理功能导航区会显示该处理功能对应的导航栏,该导航栏中显示与该处理功能相关联的处理功能。例如,以处理功能为添加特效为例,图3a中,当用户通过点击选中其对应的特效类型1的编辑轨道时,在处理功能导航区将显示处理功能1(例如,“替换特效”)、处理功能2(例如,“复制特效”)……处理功能n(例如,“删除特效”)等与增加特效相关的处理功能的触发按钮。
在本公开的一种可选实施例中,在处理功能导航区显示处理功能对应的导航栏,包括:
在接收到处理功能对应的编辑轨道的选中操作时,应用程序更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
例如,视图模型和导航栏管理器都是应用程序内的功能组件。
例如,状态信息指示编辑标识的选中状态,状态信息可以指示编辑标识被选中或未被选中。
具体地,当接收到用户对某一处理功能的编辑标识选中操作后,应用程序将视图模型中的该处理功能对应的编辑标识的状态信息更新为选中状态,视图模型将更新后的状态信息发送至导航栏管理器,导航栏管理器接收到该更新后的状态信息后确定该处理功能对应的编辑标识的状态信息被更新至选中状态,则创建该编辑标识对应的处理功能对应的导航栏并显示该导航栏,例如,该导航栏中可以显示该处理功能关联的处理功能。
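The view-model/navigation-bar-manager linkage described above follows an observer pattern: selecting an editing identifier updates state in the view model, which notifies the navigation bar manager to create and show the corresponding navigation bar. A rough sketch under that assumption (all names hypothetical):

```python
class ViewModel:
    """Holds selection state and notifies registered managers on change."""

    def __init__(self):
        self._observers = []
        self.selected_function = None

    def observe(self, callback):
        self._observers.append(callback)

    def select(self, function_id):
        self.selected_function = function_id
        for callback in self._observers:
            callback(function_id)


class NavBarManager:
    def __init__(self):
        self.visible_bar = None

    def on_state_updated(self, function_id):
        # Create the navigation bar for the selected function and show it.
        self.visible_bar = "navbar:" + function_id


view_model, nav_manager = ViewModel(), NavBarManager()
view_model.observe(nav_manager.on_state_updated)
view_model.select("add_effect")
print(nav_manager.visible_bar)  # navbar:add_effect
```

The same update path works for the track manager: registering a second observer on the view model lets it show or select the matching editing track when the state changes.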
在本公开的一种可选实施例中,在得到处理后的视频之后,该方法还可以包括:
向视图模型发送处理完成通知;
响应于处理完成通知,通过视图模型向轨道管理器发送更新后的编辑标识信息,以使轨道管理器根据更新后的编辑标识信息,在轨道编辑区显示处理功能对应的编辑标识。
例如,视图模型和轨道管理器都是应用程序内的功能组件。
在得到处理后的视频后,应用程序向视图模型发送处理完成通知,以告知视图模型已完成对当前待处理视频片段的处理。视图模型将该更新后的编辑标识信息发送给轨道管理器,例如,该更新后的编辑标识信息是根据对应的处理功能确定的。轨道管理器根据接收到的更新后的编辑标识信息创建对应的编辑标识,并在轨道编辑区显示创建的编 辑标识。
在本公开的一种可选实施例中,在接收到针对任一处理功能的触发操作之前,方法还包括:
接收待处理视频的编辑轨道上的选中操作,确定待处理视频的待处理视频片段;以及
在处理功能导航区显示用于处理待处理视频片段的处理功能对应的导航栏。
在一个实施例中，除了滑动轨道编辑区中待处理视频的编辑轨道来选定待处理视频的起始编辑点，用户还可以选中待处理视频的编辑轨道，或者***自动选中待处理视频的编辑轨道。待处理视频可以包括至少一个视频片段，例如，被选中的视频片段确定为待处理视频片段。在这种选择视频片段的方式下，应用程序在确定了待处理视频片段的同时还可以在处理功能导航区显示用于处理待处理视频片段的处理功能对应的导航栏，即，通过接收用户在待处理视频的编辑轨道的选中操作，应用程序在处理功能导航区显示用于处理待处理视频片段的处理功能对应的导航栏。当应用程序接收到用户在该导航栏内对任一处理功能的触发操作，即可实现对待处理视频片段的处理。
在本公开的一种可选实施例中,在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,包括:
在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令;
基于处理指令对待处理视频片段进行处理,得到处理后的视频;
基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中重叠并叠加显示。
具体地,由前文描述可知,在处理功能被触发之前,应用程序通过接收用户在待处理视频的编辑轨道的选中操作已经确定了对应的待处理片段。在接收到用户对处理功能的触发操作时,也即用户发出执行该处理操作的处理指令时,应用程序基于该处理指令中的功能标识和处理参数对待处理视频片段进行处理,得到处理后的视频。在得到处理后的视频后,应用程序在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识。例如,编辑标识可以包括编辑效果标识,并且,该编辑效果标识可以与待处理视频的编辑轨道在轨道编辑区中重叠并叠加显示。在一个实施例中,对待处理视频的编辑轨道的选中操作可以对应于为所述待处理视频进行剪辑的至少一个处理功能。
以处理功能为分割视频为例进行说明。参照图3b，在接收到用户在待处理视频的编辑轨道的选中操作时，应用程序在处理功能导航区313显示用于处理待处理视频片段的处理功能对应的导航栏。当应用程序接收到用户在该导航栏内对视频分割处理功能的点击操作，并且还接收到了用户在待处理视频的编辑轨道中针对待处理视频片段311的移动操作时，应用程序可以获取到视频分割功能对应的处理指令，该处理指令可以包括视频分割功能标识和处理参数（例如，处理参数包括当前操作的视频片段标识以及视频片段的目标位置时间等），应用程序基于上述处理指令对待处理视频片段进行处理，得到处理后的视频。此外，基于处理后的视频，应用程序还可以在视频预览区显示对应的预览画面，并在轨道编辑区显示处理功能对应的编辑效果标识312，如图3b所示，编辑效果标识312与待处理视频的编辑轨道在轨道编辑区中重叠并叠加显示。
在本公开的一种可选实施例中,在处理功能导航区显示用于处理待处理视频片段的处理功能所在的导航栏,包括:
在接收到用户在待处理视频的编辑轨道的选中操作时,应用程序更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,应用程序通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
通过选中待处理视频的编辑轨道可以实现轨道编辑区和处理功能导航区的联动。用户在轨道编辑区选中某一视频片段时,在处理功能导航区将会显示用于处理该待处理视频片段的处理功能对应的导航栏,该导航栏中显示能够用于处理该待处理视频片段的处理功能。
此外,在一个实施例中,通过对功能导航区中的处理功能的触发操作,可以在轨道编辑区中显示与所触发的处理功能相对应的编辑标识,由此,实现处理功能导航区和轨道编辑区的联动。当接收到用户对某一处理功能的选中操作后,应用程序将视图模型中的该处理功能对应的编辑轨道的状态信息更新为选中状态,视图模型将更新后的状态信息发送至轨道管理器,轨道管理器接收到该更新后的状态信息后确定该处理功能对应的编辑轨道的状态信息被更新至选中状态,显示或者选中该对应的编辑轨道。例如,在接收到功能导航区中的针对视频进行剪辑的处理功能的触发操作,应用程序在轨道编辑区中显示对待处理视频的编辑轨道的选中;在接收到功能导航区中的添加媒体资源处理功能的触发操作时,应用程序在轨道编辑区中显示该处理功能对应的编辑轨道。
在本公开的一种可选实施例中,在待处理视频的编辑轨道上显示有视频片段添加按钮,由此,该方法还可以包括:
在通过视频片段添加按钮接收到视频添加操作时,获取视频添加操作对应的待添加视频片段;
根据待添加视频片段,更新待处理视频的编辑轨道。
用户在处理待处理视频过程中，若还需要添加新的视频片段，可以点击视频片段添加按钮触发视频添加操作，进而从本地上传待添加视频片段。在接收到该待添加视频的上传后，应用程序更新待处理视频的编辑轨道。在一个实施例中，应用程序可以确定时间轴标尺所在位置对应的时间点，应用程序根据所述时间点在当前待处理视频所对应的时间轴上的位置，将待添加视频片段添加到当前待处理视频中并更新待处理视频的编辑轨道。在一个实施例中，如果所述时间点位于当前待处理视频所对应的时间轴的中心点左边的位置，则应用程序将待添加视频片段从时间轴的起始点开始，添加到当前待处理视频中；如果所述时间点位于当前待处理视频所对应的时间轴的中心点右边的位置，则应用程序将待添加视频片段从当前待处理视频的结尾时间点处开始添加。此外，应用程序还可以根据其他实施方式将待添加视频片段添加到当前待处理视频中，本公开不对此进行限制。
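The center-point rule for adding a new clip (ruler left of the timeline midpoint prepends, right of it appends) is one of the embodiments described above; a sketch of that rule, with hypothetical names:

```python
def insert_clip(clips: list, new_clip: str, ruler_time: float,
                video_duration: float) -> list:
    """Prepend when the ruler sits left of the timeline midpoint,
    append otherwise."""
    if ruler_time < video_duration / 2:
        return [new_clip] + clips
    return clips + [new_clip]


print(insert_clip(["a", "b"], "new", 10.0, 60.0))  # ['new', 'a', 'b']
print(insert_clip(["a", "b"], "new", 50.0, 60.0))  # ['a', 'b', 'new']
```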
在本公开实施例中,触发操作包括点击、文字/语音输入以及触摸输入(例如,图3b中移动待处理视频片段311的移动操作)等形式,用户发出触发操作即发出了执行对应处理功能的处理指令,应用程序根据用户的触发操作可以获取到对应处理功能的处理指令。
图4为本公开实施例提供的一种视频处理装置的结构框图,如图4所示,该装置400可以包括:待处理视频接收模块401、待处理视频显示模块402以及待处理视频处理模块403,其中:
待处理视频接收模块401用于接收待处理视频;
待处理视频显示模块402用于在显示界面通过视频预览区显示待处理视频的预览画面、通过轨道编辑区显示待处理视频的编辑轨道、并通过处理功能导航区显示至少一个处理功能;
待处理视频处理模块403用于在接收到针对任一处理功能的触发操作时,在视频预览区显示经该任一处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示该任一处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
本公开提供的方案,在处理待处理视频过程中,通过视频预览区显示预览画面,通过轨道编辑区显示编辑标识,通过处理功能导航区显示待选取的处理功能,在接收到用户触发所需处理功能后,对待处理视频片段进行处理得到处理后的视频,并在视频预览区显示处理后的视频的预览画面、在轨道编辑区显示处理功能对应的编辑标识。由于该方案对视频处理过程中的画面预览、轨道编辑和处理功能导航进行了分区并设置了合理的联动机制,保证了用户在使用该方案进行视频处理时,能够简单方便的获取丰富的处理功能,提高了用户体验。此外,该方案也能够提高应用程序的功能可扩展性,进而满足用户的需求并提升用户的操作体验。
在本公开的一种可选实施例中,该装置还包括起始编辑点确定模块,用于:
在接收到针对任一处理功能的触发操作之前,接收轨道编辑区上的滑动操作,确定待处理视频的起始编辑点。
在本公开的一种可选实施例中,待处理视频处理模块可以包括:第一处理指令获取模块、第一处理后的视频获取模块以及第一预览和轨道显示模块,其中:
第一处理指令获取模块,用于在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令,其中,处理指令包括处理功能的功能标识和处理参数;
第一处理后的视频获取模块,用于基于功能标识和起始编辑点确定待处理视频片段,并基于处理参数对待处理视频片段进行处理,得到处理后的视频;
第一预览和轨道显示模块,用于基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识以起始编辑点为基准点并与待处理视频片段的编辑轨道在时间轴上对齐。
在本公开的一种可选实施例中,处理参数包括处理功能对应的媒体资源标识,第一处理后的视频获取模块具体用于:
基于媒体资源标识获取对应的媒体资源;以及
将媒体资源挂载在待处理视频片段上,得到处理后的视频。
在本公开的一种可选实施例中,第一预览和轨道显示模块具体用于:
在轨道编辑区显示处理功能对应的编辑轨道,其中,处理功能对应的编辑轨道以起始编辑点为基准点并与对应的待处理视频片段的编辑轨道在时间轴上对齐且并行叠加显示。
在本公开的一种可选实施例中,该装置还可以包括第一联动模块,用于:
在接收到针对处理功能对应的编辑轨道的选中操作时,在处理功能导航区显示处理功能对应的导航栏。
在本公开的一种可选实施例中,第一联动模块具体用于:
在接收到选中操作时,更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
在本公开的一种可选实施例中,该装置还包括编辑标识创建和显示模块,用于:
在得到处理后的视频之后,向视图模型发送处理完成通知;
响应于处理完成通知,通过视图模型向轨道管理器发送更新后的编辑标识信息,以使轨道管理器根据更新后的编辑标识信息,在轨道编辑区显示处理功能对应的编辑标识。
在本公开的一种可选实施例中,该装置还包括第二联动模块,用于:
在接收到针对任一处理功能的触发操作之前,接收待处理视频的编辑轨道上的选中操作,确定待处理视频的待处理视频片段;以及
在处理功能导航区显示用于处理待处理视频片段的处理功能所在的导航栏。
在本公开的一种可选实施例中,待处理视频处理模块可以包括:第二处理指令获取模块、第二处理后的视频获取模块以及第二预览和轨道显示模块,其中:
第二处理指令获取模块,用于在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令;
第二处理后的视频获取模块,用于基于处理指令对待处理视频片段进行处理,得到处理后的视频;
第二预览和轨道显示模块,用于基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中重叠叠加显示。
在本公开的一种可选实施例中，所述第二联动模块具体用于：
在接收到选中操作时,更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
在本公开的一种可选实施例中,在待处理视频的编辑轨道上显示有视频片段添加按钮,该装置还包括编辑轨道更新模块,用于:
在通过视频片段添加按钮接收到视频添加操作时,获取视频添加操作对应的待添加视频片段;
根据待添加视频片段,更新待处理视频的编辑轨道。
上述模块可以被实现为在一个或多个通用处理器上执行的软件组件,也可以被实现为诸如执行某些功能或其组合的硬件,诸如可编程逻辑设备和/或专用集成电路。在一些实施例中,这些模块可以体现为软件产品的形式,该软件产品可以存储在非易失性存储介质中,这些非易失性存储介质中包括使得计算机设备(例如个人计算机、服务器、网络设备、移动终端等)实现本发明实施例中描述的方法。在一个实施例中,上述模块还可以在单个设备上实现,也可以分布在多个设备上。这些模块的功能可以相互合并,也可以进一步拆分为多个子模块。
上述各实施例中的视频处理装置可以包括移动终端,例如智能手机、掌上电脑、平板电脑、带显示屏的可穿戴设备等等,还可以包括计算机设备,如台式机、笔记本电脑、一体机等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的视频处理装置中的模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
下面参考图5,其示出了适于用来实现本公开实施例的电子设备(例如执行图1所示方法的终端设备或服务器)500的结构示意图。本公开实施例中的电子设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)、可穿戴设备等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图5示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
电子设备包括:存储器以及处理器,存储器用于存储执行上述各个方法实施例所述方法的程序;处理器被配置为执行存储器中存储的程序。其中,这里的处理器可以称为下文所述的处理装置501,存储器可以包括下文中的只读存储器(ROM)502、随机访问存储器(RAM)503以及存储装置508中的至少一项,具体如下所示:
如图5所示，电子设备500可以包括处理装置（例如中央处理器、图形处理器等）501，其可以根据存储在只读存储器（ROM）502中的程序或者从存储装置508加载到随机访问存储器（RAM）503中的程序而执行各种适当的动作和处理。在RAM503中，还存储有电子设备500操作所需的各种程序和数据。处理装置501、ROM 502以及RAM503通过总线504彼此相连。输入/输出（I/O）接口505也连接至总线504。
通常,以下装置可以连接至I/O接口505:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置506;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置507;包括例如磁带、硬盘等的存储装置508;以及通信装置509。通信装置509可以允许电子设备500与其他设备进行无线或有线通信以交换数据。虽然图5示出了具有各种装置的电子设备,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置509从网络上被下载和安装,或者从存储装置508被安装,或者从ROM502被安装。在该计算机程序被处理装置501执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是,本公开上述的计算机可读存储介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的***、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的；也可以是单独存在，而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:
接收待处理视频;在显示界面通过视频预览区显示待处理视频的预览画面、通过轨道编辑区显示待处理视频的编辑轨道、并通过处理功能导航区显示至少一个处理功能;在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的***、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的模块或单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,模块或单元的名称在某种情况下并不构成对该单元本身的限定,例如,待处理视频接收模块还可以被描述为“接收待处理视频的模块”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上***(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中，机器可读介质可以是有形的介质，其可以包含或存储以供指令执行***、装置或设备使用或与指令执行***、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体***、装置或设备，或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器（RAM）、只读存储器（ROM）、可擦除可编程只读存储器（EPROM或快闪存储器）、光纤、便捷式紧凑盘只读存储器（CD-ROM）、光学储存设备、磁储存设备、或上述内容的任何合适组合。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的计算机可读介质被电子设备执行时实现的具体方法,可以参考前述方法实施例中的对应过程,在此不再赘述。
根据本公开的一个或多个实施例,本公开提供了一种视频处理方法,包括:
接收待处理视频;
在显示界面通过视频预览区显示待处理视频的预览画面,通过轨道编辑区显示待处理视频的编辑轨道,并通过处理功能导航区显示至少一个处理功能;
在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
根据本公开的一个或多个实施例,在接收到针对任一处理功能的触发操作之前,方法还包括:
接收轨道编辑区上的滑动操作,确定待处理视频的起始编辑点。
根据本公开的一个或多个实施例,在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,包括:
在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令,其中,处理指令包括处理功能的功能标识和处理参数;
基于功能标识和起始编辑点确定待处理视频片段,并基于处理参数对待处理视频片段进行处理,得到处理后的视频;
基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,
其中,编辑标识以起始编辑点为基准点并与待处理视频片段的编辑轨道在时间轴上对齐。
根据本公开的一个或多个实施例,处理参数包括处理功能与处理功能对应的媒体资源标识,基于处理参数对待处理视频片段进行处理,得到处理后的视频,包括:
基于媒体资源标识获取对应的媒体资源;以及
将媒体资源挂载在待处理视频片段上,得到处理后的视频。
根据本公开的一个或多个实施例,在轨道编辑区显示处理功能对应的编辑标识,包括:
在轨道编辑区显示处理功能对应的编辑轨道，其中，处理功能对应的编辑轨道以起始编辑点为基准点并与对应的待处理视频片段的编辑轨道在时间轴上对齐且并行叠加显示。
根据本公开的一个或多个实施例,该方法还包括:
在接收到针对处理功能对应的编辑轨道的选中操作时,在处理功能导航区显示与处理功能对应的导航栏。
根据本公开的一个或多个实施例,在处理功能导航区显示与处理功能对应的导航栏,包括:
在接收到选中操作时,更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
根据本公开的一个或多个实施例,在得到处理后的视频之后,该方法还包括:
向视图模型发送处理完成通知;
响应于处理完成通知,通过视图模型向轨道管理器发送更新后的编辑标识信息,以使轨道管理器根据更新后的编辑标识信息,在轨道编辑区显示处理功能对应的编辑标识。
根据本公开的一个或多个实施例,在接收到针对任一处理功能的触发操作之前,该方法还包括:
接收待处理视频的编辑轨道上的选中操作,确定待处理视频的待处理视频片段;以及
在处理功能导航区显示用于处理待处理视频片段的处理功能所在的导航栏。
根据本公开的一个或多个实施例,在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,包括:
在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令;
基于处理指令对待处理视频片段进行处理,得到处理后的视频;
基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中重叠叠加显示。
根据本公开的一个或多个实施例,在处理功能导航区显示用于处理待处理视频片段的处理功能所在的导航栏,包括:
在接收到选中操作时,更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
根据本公开的一个或多个实施例,在待处理视频的编辑轨道上显示有视频片段添加按钮,该方法还包括:
在通过视频片段添加按钮接收到视频添加操作时，获取视频添加操作对应的待添加视频片段；
根据待添加视频片段,更新待处理视频的编辑轨道。
根据本公开的一个或多个实施例,本公开提供了一种视频处理装置,包括:
待处理视频接收模块,用于接收待处理视频;
待处理视频显示模块,用于在显示界面通过视频预览区显示待处理视频的预览画面、通过轨道编辑区显示待处理视频的编辑轨道、并通过处理功能导航区显示至少一个处理功能;
待处理视频处理模块,用于在接收到针对任一处理功能的触发操作时,在视频预览区显示经处理功能处理得到的处理后的视频的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中叠加显示。
根据本公开的一个或多个实施例,该装置还包括起始编辑点确定模块,用于:
在接收到针对任一处理功能的触发操作之前,接收轨道编辑区上的滑动操作,确定待处理视频的起始编辑点。
根据本公开的一个或多个实施例,待处理视频处理模块可以包括:第一处理指令获取模块、第一处理后的视频获取模块以及第一预览和轨道显示模块,其中:
第一处理指令获取模块,用于在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令,其中,处理指令包括处理功能的功能标识和处理参数;
第一处理后的视频获取模块,用于基于功能标识和起始编辑点确定待处理视频片段,并基于处理参数对待处理视频片段进行处理,得到处理后的视频;
第一预览和轨道显示模块,用于基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识以起始编辑点为基准点并与待处理视频片段的编辑轨道在时间轴上对齐。
根据本公开的一个或多个实施例,处理参数包括处理功能对应的媒体资源标识,第一处理后的视频获取模块具体用于:
基于媒体资源标识获取对应的媒体资源;以及
将媒体资源挂载在待处理视频片段上,得到处理后的视频。
根据本公开的一个或多个实施例,第一预览和轨道显示模块具体用于:
在轨道编辑区显示处理功能对应的编辑轨道,其中,处理功能对应的编辑轨道以起始编辑点为基准点并与对应的待处理视频片段的编辑轨道在时间轴上对齐且并行叠加显示。
根据本公开的一个或多个实施例,该装置还可以包括第一联动模块,用于:
在接收到针对处理功能对应的编辑轨道的选中操作时,在处理功能导航区显示处理功能对应的导航栏。
根据本公开的一个或多个实施例,第一联动模块具体用于:
在接收到选中操作时,更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
根据本公开的一个或多个实施例,该装置还包括编辑标识创建和显示模块,用于:
在得到处理后的视频之后,向视图模型发送处理完成通知;
响应于处理完成通知,通过视图模型向轨道管理器发送更新后的编辑标识信息,以使轨道管理器根据更新后的编辑标识信息,在轨道编辑区显示处理功能对应的编辑标识。
根据本公开的一个或多个实施例,该装置还包括第二联动模块,用于:
在接收到针对任一处理功能的触发操作之前,接收待处理视频的编辑轨道上的选中操作,确定待处理视频的待处理视频片段;以及
在处理功能导航区显示用于处理待处理视频片段的处理功能所在的导航栏。
根据本公开的一个或多个实施例,待处理视频处理模块可以包括:第二处理指令获取模块、第二处理后的视频获取模块以及第二预览和轨道显示模块,其中:
第二处理指令获取模块,用于在接收到针对任一处理功能的触发操作时,获取处理功能对应的处理指令;
第二处理后的视频获取模块,用于基于处理指令对待处理视频片段进行处理,得到处理后的视频;
第二预览和轨道显示模块,用于基于处理后的视频,在视频预览区显示对应的预览画面,并在轨道编辑区显示处理功能对应的编辑标识,其中,编辑标识与待处理视频的编辑轨道在轨道编辑区中重叠叠加显示。
根据本公开的一个或多个实施例，所述第二联动模块具体用于：
在接收到选中操作时,更新视觉模型中的状态信息,并通过视觉模型将更新后的状态信息发送至导航栏管理器;
响应于更新后的状态信息,通过导航栏管理器创建导航栏并在处理功能导航区显示导航栏。
根据本公开的一个或多个实施例,在待处理视频的编辑轨道上显示有视频片段添加按钮,该装置还包括编辑轨道更新模块,用于:
在通过视频片段添加按钮接收到视频添加操作时,获取视频添加操作对应的待添加视频片段;
根据待添加视频片段,更新待处理视频的编辑轨道。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外，虽然采用特定次序描绘了各操作，但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下，多任务和并行处理可能是有利的。同样地，虽然在上面论述中包含了若干具体实现细节，但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地，在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (15)

  1. 一种视频处理方法,其特征在于,包括:
    接收待处理视频;
    在显示界面通过视频预览区显示所述待处理视频的预览画面,通过轨道编辑区显示所述待处理视频的编辑轨道,并通过处理功能导航区显示至少一个处理功能;
    在接收到针对任一处理功能的触发操作时,在所述视频预览区显示经所述任一处理功能处理得到的处理后的视频的预览画面,并在所述轨道编辑区显示所述任一处理功能对应的编辑标识,其中,所述编辑标识与所述待处理视频的编辑轨道在所述轨道编辑区中叠加显示。
  2. 根据权利要求1所述的方法,其特征在于,在接收到针对任一处理功能的触发操作之前,所述方法还包括:
    接收所述轨道编辑区上的滑动操作,确定所述待处理视频的起始编辑点。
  3. 根据权利要求2所述的方法,其特征在于,所述在接收到针对任一处理功能的触发操作时,在所述视频预览区显示经所述任一处理功能处理得到的处理后的视频的预览画面,并在所述轨道编辑区显示所述任一处理功能对应的编辑标识,包括:
    在接收到针对任一处理功能的触发操作时,获取所述任一处理功能对应的处理指令,其中,所述处理指令包括所述任一处理功能的功能标识和处理参数;
    基于所述功能标识和所述起始编辑点确定待处理视频片段,并基于所述处理参数对所述待处理视频片段进行处理,得到所述处理后的视频;
    基于所述处理后的视频,在所述视频预览区显示对应的预览画面,并在所述轨道编辑区显示所述任一处理功能对应的编辑标识,
    其中,所述编辑标识以所述起始编辑点为基准点并与所述待处理视频片段的编辑轨道在时间轴上对齐。
  4. 根据权利要求3所述的方法,其特征在于,所述处理参数包括与所述任一处理功能对应的媒体资源标识,
    所述基于所述处理参数对所述待处理视频片段进行处理,得到所述处理后的视频,包括:
    基于所述媒体资源标识获取对应的媒体资源;以及
    将所述对应的媒体资源挂载在所述待处理视频片段上,得到所述处理后的视频。
  5. 根据权利要求3或4所述的方法,其特征在于,所述在所述轨道编辑区显示所述任一处理功能对应的编辑标识,包括:
    在所述轨道编辑区显示所述任一处理功能对应的编辑轨道,其中,所述任一处理功能对应的编辑轨道以所述起始编辑点为基准点并与对应的待处理视频片段的编辑轨道在时间轴上对齐且并行叠加显示。
  6. 根据权利要求5所述的方法,其特征在于,所述方法还包括:
    在接收到针对所述任一处理功能对应的编辑轨道的选中操作时,在所述处理功能导航区显示与所述任一处理功能对应的导航栏。
  7. 根据权利要求6所述的方法,其特征在于,所述在所述处理功能导航区显示与所述任一处理功能对应的导航栏,包括:
    在接收到所述选中操作时,更新视觉模型中的状态信息,并通过所述视觉模型将更新后的状态信息发送至导航栏管理器;
    响应于所述更新后的状态信息,通过所述导航栏管理器创建所述导航栏并在所述处理功能导航区显示所述导航栏。
  8. 根据权利要求3-7任一项所述的方法,其特征在于,在得到所述处理后的视频之后,所述方法还包括:
    向视图模型发送处理完成通知;
    响应于所述处理完成通知,通过所述视图模型向轨道管理器发送更新后的编辑标识信息,以使所述轨道管理器根据所述更新后的编辑标识信息,在所述轨道编辑区显示所述处理功能对应的编辑标识。
  9. 根据权利要求1所述的方法,其特征在于,在接收到针对任一处理功能的触发操作之前,所述方法还包括:
    接收所述待处理视频的编辑轨道上的选中操作,确定所述待处理视频的待处理视频片段;以及
    在所述处理功能导航区显示用于处理所述待处理视频片段的处理功能所在的导航栏。
  10. 根据权利要求9所述的方法,其特征在于,所述在接收到针对任一处理功能的触发操作时,在所述视频预览区显示经所述任一处理功能处理得到的处理后的视频的预览画面,并在所述轨道编辑区显示所述任一处理功能对应的编辑标识,包括:
    在接收到针对所述任一处理功能的触发操作时,获取所述任一处理功能对应的处理指令;
    基于所述处理指令对所述待处理视频片段进行处理,得到所述处理后的视频;
    基于所述处理后的视频,在所述视频预览区显示对应的预览画面,并在所述轨道编辑区显示所述任一处理功能对应的编辑标识,其中,所述编辑标识与所述待处理视频的编辑轨道在所述轨道编辑区中重叠叠加显示。
  11. 根据权利要求9或10所述的方法,其特征在于,所述在所述处理功能导航区显示用于处理所述待处理视频片段的处理功能所在的导航栏,包括:
    在接收到所述选中操作时,更新视觉模型中的状态信息,并通过所述视觉模型将更新后的状态信息发送至导航栏管理器;
    响应于所述更新后的状态信息,通过所述导航栏管理器创建所述导航栏并在所述处理功能导航区显示所述导航栏。
  12. 根据权利要求1-11任一项所述的方法，其特征在于，在所述待处理视频的编辑轨道上显示有视频片段添加按钮，
    所述方法还包括:
    在通过所述视频片段添加按钮接收到视频添加操作时,获取所述视频添加操作对应的待添加视频片段;
    根据所述待添加视频片段,更新所述待处理视频的编辑轨道。
  13. 一种视频处理装置,其特征在于,包括:
    待处理视频接收模块,用于接收待处理视频;
    待处理视频显示模块,用于在显示界面通过视频预览区显示所述待处理视频的预览画面,通过轨道编辑区显示所述待处理视频的编辑轨道,并通过处理功能导航区显示至少一个处理功能;
    待处理视频处理模块,用于在接收到针对任一处理功能的触发操作时,在所述视频预览区显示经所述任一处理功能处理得到的处理后的视频的预览画面,并在所述轨道编辑区显示所述任一处理功能对应的编辑标识,其中,所述编辑标识与所述待处理视频的编辑轨道在所述轨道编辑区中叠加显示。
  14. 一种电子设备,其特征在于,包括存储器和处理器;
    所述存储器中存储有计算机程序;
    所述处理器,用于执行所述计算机程序,在所述计算机程序被执行时,使得所述电子设备实现根据权利要求1至12中任一项所述的方法。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现根据权利要求1至12中任一项所述的方法。
PCT/CN2021/095502 2020-06-10 2021-05-24 视频处理方法、装置、电子设备及计算机可读存储介质 WO2021249168A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
BR112022025162A BR112022025162A2 (pt) 2020-06-10 2021-05-24 Método e aparelho de processamento de vídeo, dispositivo eletrônico e meio de armazenamento legível por computador
KR1020227043537A KR102575848B1 (ko) 2020-06-10 2021-05-24 비디오 처리 방법 및 장치, 전자 장치, 및 컴퓨터 판독가능 저장매체
EP21822298.2A EP4152758A4 (en) 2020-06-10 2021-05-24 VIDEO PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE RECORDING MEDIUM
JP2022576468A JP7307864B2 (ja) 2020-06-10 2021-05-24 ビデオ処理方法、装置、電子機器及びコンピュータ可読記憶媒体
US18/064,128 US20230107220A1 (en) 2020-06-10 2022-12-09 Video processing method and apparatus, electronic device, and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010525242.8A CN111629252B (zh) 2020-06-10 2020-06-10 视频处理方法、装置、电子设备及计算机可读存储介质
CN202010525242.8 2020-06-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/064,128 Continuation US20230107220A1 (en) 2020-06-10 2022-12-09 Video processing method and apparatus, electronic device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2021249168A1 true WO2021249168A1 (zh) 2021-12-16

Family

ID=72272209

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/095502 WO2021249168A1 (zh) 2020-06-10 2021-05-24 视频处理方法、装置、电子设备及计算机可读存储介质

Country Status (7)

Country Link
US (1) US20230107220A1 (zh)
EP (1) EP4152758A4 (zh)
JP (1) JP7307864B2 (zh)
KR (1) KR102575848B1 (zh)
CN (1) CN111629252B (zh)
BR (1) BR112022025162A2 (zh)
WO (1) WO2021249168A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111629252B (zh) * 2020-06-10 2022-03-25 北京字节跳动网络技术有限公司 视频处理方法、装置、电子设备及计算机可读存储介质
CN112822543A (zh) * 2020-12-30 2021-05-18 北京达佳互联信息技术有限公司 视频处理方法及装置、电子设备、存储介质
CN112804590B (zh) * 2020-12-31 2023-03-28 上海深柯视觉艺术设计有限公司 一种基于ue4的视频编辑***
CN113473204B (zh) * 2021-05-31 2023-10-13 北京达佳互联信息技术有限公司 一种信息展示方法、装置、电子设备及存储介质
CN113347479B (zh) * 2021-05-31 2023-05-26 网易(杭州)网络有限公司 多媒体素材的编辑方法、装置、设备及存储介质
CN113891127A (zh) * 2021-08-31 2022-01-04 维沃移动通信有限公司 视频编辑方法、装置及电子设备
CN113784165B (zh) * 2021-09-17 2023-05-05 北京快来文化传播集团有限公司 短视频滤镜叠加方法、***、电子设备及可读存储介质
CN114253653A (zh) * 2021-09-27 2022-03-29 北京字节跳动网络技术有限公司 视频处理方法、视频处理装置和计算机可读存储介质
CN113873329A (zh) * 2021-10-19 2021-12-31 深圳追一科技有限公司 视频处理方法、装置、计算机存储介质及电子设备
CN114125181B (zh) * 2021-11-22 2024-06-21 北京达佳互联信息技术有限公司 视频处理方法和视频处理装置
CN115460455B (zh) * 2022-09-06 2024-02-09 上海硬通网络科技有限公司 一种视频剪辑方法、装置、设备及存储介质
CN117453085B (zh) * 2023-12-22 2024-06-25 荣耀终端有限公司 显示方法、电子设备及存储介质

Citations (8)

Publication number Priority date Publication date Assignee Title
KR20100086136A (ko) * 2009-01-22 2010-07-30 (주)코드엑트 동영상 편집 시스템
CN104811629A (zh) * 2015-04-21 2015-07-29 上海极食信息科技有限公司 一种在同一界面内获取视频素材并对其制作的方法及***
CN109120997A (zh) * 2018-09-30 2019-01-01 北京微播视界科技有限公司 视频处理方法、装置、终端和介质
CN109495791A (zh) * 2018-11-30 2019-03-19 北京字节跳动网络技术有限公司 一种视频贴纸的添加方法、装置、电子设备及可读介质
CN110198486A (zh) * 2019-05-28 2019-09-03 上海哔哩哔哩科技有限公司 一种预览视频素材的方法、计算机设备及可读存储介质
CN110381371A (zh) * 2019-07-30 2019-10-25 维沃移动通信有限公司 一种视频剪辑方法及电子设备
CN110636382A (zh) * 2019-09-17 2019-12-31 北京达佳互联信息技术有限公司 在视频中添加可视对象的方法、装置、电子设备及存储介质
CN111629252A (zh) * 2020-06-10 2020-09-04 北京字节跳动网络技术有限公司 视频处理方法、装置、电子设备及计算机可读存储介质

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5999173A (en) * 1992-04-03 1999-12-07 Adobe Systems Incorporated Method and apparatus for video editing with video clip representations displayed along a time line
US9131078B2 (en) * 2007-07-27 2015-09-08 Lagavulin Limited Apparatuses, methods, and systems for a portable, image-processing transmitter
US8555170B2 (en) * 2010-08-10 2013-10-08 Apple Inc. Tool for presenting and editing a storyboard representation of a composite presentation
KR101260834B1 (ko) * 2010-12-14 2013-05-06 Samsung Electronics Co., Ltd. Touch screen control method using a timeline bar, apparatus therefor, recording medium storing a program therefor, and user terminal
KR20130107863A (ko) * 2012-03-23 2013-10-02 Samsung Techwin Co., Ltd. Video search apparatus
KR101528312B1 (ko) * 2014-02-14 2015-06-11 KT Corporation Video editing method and apparatus therefor
CN107005675B (zh) * 2014-09-05 2019-08-06 FUJIFILM Corporation Moving image editing device, moving image editing method, and storage medium
US20170294212A1 (en) * 2015-04-10 2017-10-12 OMiro IP LLC Video creation, editing, and sharing for social media
US10109319B2 (en) * 2016-01-08 2018-10-23 Gopro, Inc. Digital media editing
US10622021B2 (en) * 2016-02-19 2020-04-14 Avcr Bilgi Teknolojileri A.S Method and system for video editing
CN110582018B (zh) * 2019-09-16 2022-06-10 Tencent Technology (Shenzhen) Co., Ltd. Video file processing method, related apparatus, and device
CN111078348B (zh) * 2019-12-25 2023-06-23 Guangzhou Baiguoyuan Information Technology Co., Ltd. Interface management method, apparatus, device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4152758A1 *

Also Published As

Publication number Publication date
KR20230016049A (ko) 2023-01-31
CN111629252A (zh) 2020-09-04
BR112022025162A2 (pt) 2022-12-27
JP7307864B2 (ja) 2023-07-12
JP2023527250A (ja) 2023-06-27
CN111629252B (zh) 2022-03-25
EP4152758A4 (en) 2023-09-06
KR102575848B1 (ko) 2023-09-06
EP4152758A1 (en) 2023-03-22
US20230107220A1 (en) 2023-04-06

Similar Documents

Publication Publication Date Title
WO2021249168A1 (zh) Video processing method and apparatus, electronic device, and computer-readable storage medium
WO2021196903A1 (zh) Video processing method and apparatus, readable medium, and electronic device
WO2021008223A1 (zh) Information determination method and apparatus, and electronic device
WO2021218325A1 (zh) Video processing method and apparatus, computer-readable medium, and electronic device
WO2021135626A1 (zh) Menu item selection method and apparatus, readable medium, and electronic device
WO2020207085A1 (zh) Information sharing method and apparatus, electronic device, and storage medium
WO2022077996A1 (zh) Multimedia data processing method and apparatus, electronic device, and storage medium
CN111629151B (zh) Video co-shooting method and apparatus, electronic device, and computer-readable medium
WO2021135648A1 (zh) Method and apparatus for configuring a live-streaming room gift list, medium, and electronic device
WO2021244480A1 (zh) Theme video generation method and apparatus, electronic device, and readable storage medium
WO2022194031A1 (zh) Video processing method and apparatus, electronic device, and storage medium
WO2021197024A1 (zh) Video special-effect configuration file generation method, and video rendering method and apparatus
WO2022042389A1 (zh) Search result display method and apparatus, readable medium, and electronic device
WO2023165515A1 (zh) Shooting method and apparatus, electronic device, and storage medium
WO2021218318A1 (zh) Video transmission method, electronic device, and computer-readable medium
CN110070592B (zh) Special-effect package generation method and apparatus, and hardware apparatus
WO2023116479A1 (zh) Video publishing method and apparatus, electronic device, storage medium, and program product
JP2023528398A (ja) Live-streaming room creation method and apparatus, electronic device, and storage medium
CN115278275B (zh) Information display method, apparatus, device, storage medium, and program product
WO2021227953A1 (zh) Image special-effect configuration method, image recognition method, apparatus, and electronic device
WO2021089002A1 (zh) Multimedia information processing method and apparatus, electronic device, and medium
WO2024041568A1 (zh) Live video processing method and apparatus, device, and medium
WO2021057738A1 (zh) User interface display method and apparatus, computer-readable medium, and electronic device
WO2024041556A1 (zh) Co-hosting display method and apparatus, electronic device, and computer-readable medium
WO2024037491A1 (zh) Media content processing method and apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21822298; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2022576468; Country of ref document: JP; Kind code of ref document: A
    Ref document number: 20227043537; Country of ref document: KR; Kind code of ref document: A
ENP Entry into the national phase
    Ref document number: 2021822298; Country of ref document: EP; Effective date: 20221212
REG Reference to national code
    Ref country code: BR; Ref legal event code: B01A; Ref document number: 112022025162; Country of ref document: BR
ENP Entry into the national phase
    Ref document number: 112022025162; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20221208
NENP Non-entry into the national phase
    Ref country code: DE