CN113709575A - Video editing processing method and device, electronic equipment and storage medium - Google Patents

Video editing processing method and device, electronic equipment and storage medium

Info

Publication number
CN113709575A
CN113709575A (application CN202110371612.1A)
Authority
CN
China
Prior art keywords
shot
video
editing
picture
video editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110371612.1A
Other languages
Chinese (zh)
Other versions
CN113709575B (en)
Inventor
韩瑞
王丽云
沈艳慧
张仁寿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110371612.1A priority Critical patent/CN113709575B/en
Publication of CN113709575A publication Critical patent/CN113709575A/en
Application granted granted Critical
Publication of CN113709575B publication Critical patent/CN113709575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting
    • G06F40/186Templates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4398Processing of audio elementary streams involving reformatting operations of audio signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440245Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video editing processing method and apparatus, an electronic device, and a computer-readable storage medium, relating to computer vision technology in the field of artificial intelligence. The method includes: displaying a video editing script template in a document editing interface; in response to a video editing operation, displaying, in the video editing script template, a plurality of set shot pictures and the parameters corresponding to each shot picture; and in response to a video preview operation received during video editing, generating a preview video based on at least one shot picture and the parameters corresponding to the at least one shot picture, and displaying the preview video. With the method and apparatus, the corresponding preview video can be viewed promptly during script editing, thereby improving video editing efficiency.

Description

Video editing processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to artificial intelligence technologies and internet technologies, and in particular, to a video editing processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Artificial Intelligence (AI) refers to theories, methods, techniques, and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. As artificial intelligence technology has been researched and developed, it has been applied in a growing number of fields.
Taking video editing as an example, a shot script (storyboard) is a series of sketches that a video planner prepares before formally producing a video; it serves as a visual preview of the video to be made. In the related art, a video planner formulates a static shot script according to a video theme. A static shot script cannot convey the rhythm of the video and does not allow transitions and dwell times to be estimated accurately, so the parameters in the shot script must be optimized repeatedly during video editing, resulting in low video editing efficiency. The related art has no effective solution to this problem.
Disclosure of Invention
The embodiments of the application provide a video editing processing method and apparatus, an electronic device, and a computer-readable storage medium, which support promptly viewing the corresponding preview video during script editing; by generating a preview video directly from the shot pictures and their parameters, video editing efficiency is improved.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a video editing processing method, which comprises the following steps:
displaying a video editing script template in a document editing interface;
in response to a video editing operation, displaying, in the video editing script template, a plurality of set shot pictures and the parameters corresponding to each shot picture; and
in response to a video preview operation received during video editing, generating a preview video based on at least one shot picture and the parameters corresponding to the at least one shot picture, and displaying the preview video.
In the foregoing solution, performing parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures includes performing one of the following processes for each shot picture:
performing object recognition processing on the shot picture and determining a duration adapted to the shot picture according to the number of recognized objects, where the number of recognized objects is positively correlated with the duration;
identifying the similarity between the shot picture and the adjacent shot picture and determining a duration adapted to the shot picture according to the similarity, where the similarity is negatively correlated with the duration.
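The two duration heuristics above can be combined into a single sketch. The following Python snippet is illustrative only; the function name, constants, and linear weighting are assumptions, not part of this disclosure, which only specifies the positive and negative correlations.

```python
def duration_for_shot(num_objects, similarity_to_next,
                      base=2.0, per_object=0.5, max_extra=3.0):
    """Pick a duration (in seconds) for a shot picture.

    More recognized objects -> longer duration (positive correlation);
    higher similarity to the adjacent shot -> shorter duration
    (negative correlation). All constants are illustrative.
    """
    # Cap the object-count contribution so busy shots do not grow unbounded.
    extra = min(per_object * num_objects, max_extra)
    # Similarity in [0, 1] scales the extra time down toward the base.
    return base + extra * (1.0 - similarity_to_next)

# A busy, dissimilar shot gets more screen time than a simple, similar one.
print(duration_for_shot(4, 0.2))
print(duration_for_shot(1, 0.9))
```

A monotonic lookup table keyed on object-count buckets would serve equally well; the essential property is only the direction of each correlation.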
In the foregoing solution, performing parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures includes performing the following processing for each shot picture:
performing object recognition processing on the shot picture and querying, in a mapping table, the sound effect that has a mapping relationship with the recognized object, where the mapping table includes a plurality of objects and a plurality of sound effects in one-to-one correspondence with the objects;
determining the sound effect that has a mapping relationship with the recognized object as the sound effect adapted to the shot picture.
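The sound-effect lookup described above amounts to a dictionary query. A minimal sketch, in which the table entries, file names, and default value are all hypothetical:

```python
# Hypothetical mapping table: recognized object -> adapted sound effect.
SOUND_EFFECT_TABLE = {
    "sea": "waves.mp3",
    "street": "traffic.mp3",
    "forest": "birdsong.mp3",
}

def sound_effect_for(recognized_object, default="ambient.mp3"):
    """Return the sound effect mapped to the recognized object,
    falling back to a default when no mapping relationship exists."""
    return SOUND_EFFECT_TABLE.get(recognized_object, default)
```

In practice the table would be maintained alongside the video editing script templates, one sound effect per object as the text specifies.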
In the foregoing solution, performing parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures includes performing the following processing for each shot picture:
identifying the similarity between the shot picture and the adjacent shot picture and querying the transition matched with the similarity;
determining the transition matched with the similarity as the transition adapted to the shot picture.
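Matching a transition to inter-shot similarity can be sketched as a threshold lookup. The thresholds and transition names below are illustrative assumptions, not values from this disclosure:

```python
def transition_for(similarity):
    """Map shot-to-shot similarity (in [0, 1]) to a matching transition.

    Near-identical adjacent frames blend smoothly, so a crossfade fits;
    very different scenes read better with an abrupt cut.
    """
    if similarity >= 0.8:
        return "crossfade"
    if similarity >= 0.4:
        return "slide"
    return "hard-cut"
```

The same structure generalizes to any monotone table of (similarity range, transition) pairs queried per adjacent shot pair.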
In the foregoing solution, performing parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures includes performing the following processing for each shot picture:
identifying historical parameters adapted to the shot picture and using them as the parameters adapted to the shot picture;
where the type of the historical parameters includes one of the following:
historical parameters corresponding to the historical shot picture with the highest similarity to the shot picture;
historical parameters set most frequently during video editing;
historical parameters set in the video editing session closest in time to the current time point.
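The three types of historical parameters correspond to three selection strategies over an editing history. A minimal sketch, assuming a history of (timestamp, feature, duration) records and a toy one-dimensional similarity; the record layout and strategy names are assumptions:

```python
from collections import Counter

def pick_historical_duration(history, shot_feature, strategy="most_frequent"):
    """Select a default duration from the editing history.

    `history` is a list of (timestamp, feature, duration) records.
    `strategy` mirrors the three history types in the text:
    "most_similar"  -> the record whose feature is closest to this shot;
    "most_frequent" -> the duration set most often during editing;
    "most_recent"   -> the duration set in the latest editing session.
    """
    if strategy == "most_similar":
        return min(history, key=lambda r: abs(r[1] - shot_feature))[2]
    if strategy == "most_frequent":
        return Counter(r[2] for r in history).most_common(1)[0][0]
    if strategy == "most_recent":
        return max(history, key=lambda r: r[0])[2]
    raise ValueError(f"unknown strategy: {strategy}")
```

A real system would compare shot pictures with an image-similarity model rather than a scalar feature; the selection logic stays the same.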
In the foregoing solution, the method further includes:
in response to a collaborative editing trigger operation received in the document editing interface, displaying a collaboration account setting page of a first account;
where the first account is the account logged in to the document editing interface, and the collaboration account setting page includes at least one candidate account;
in response to an account selection operation received on the collaboration account setting page, determining the selected at least one candidate account as a second account that edits the video editing script template in cooperation with the first account, and sending the video editing script template to the second account.
In the foregoing solution, before sending the video editing script template to the second account, the method further includes:
displaying an editing permission setting entry on the collaboration account setting page;
in response to a permission setting operation on the editing permission setting entry, acquiring the set permission, where the type of the permission includes viewing permission and editing permission;
determining that the video editing script template to which the permission is applied is to be sent to the second account.
In the foregoing solution, while the preview video is displayed, the method further includes:
in response to a modification operation on the preview video, displaying a video modification page, where the video modification page includes at least one shot picture and the parameters corresponding to each shot picture;
in response to a parameter modification operation received on the video modification page, updating the displayed preview video according to the modified parameters.
In the foregoing solution, displaying the video editing script template in the document editing interface includes:
in response to a video editing trigger operation, displaying a video type selection page, where the video type selection page includes a plurality of candidate video types;
in response to a video type selection operation received on the video type selection page, displaying the video editing script template corresponding to the selected video type.
An embodiment of the present application provides a video editing processing apparatus, including:
a display module, configured to display a video editing script template in a document editing interface;
an editing module, configured to display, in response to a video editing operation, a plurality of set shot pictures and the parameters corresponding to each shot picture in the video editing script template;
a generation module, configured to generate, in response to a video preview operation received during video editing, a preview video based on at least one shot picture and the parameters corresponding to the at least one shot picture;
the display module being further configured to display the preview video.
In the foregoing solution, the parameters include clipping parameters and linking parameters. The generation module is further configured to perform the following processing for each shot picture: clipping the shot picture according to its clipping parameters to obtain the shot segment corresponding to the shot picture; when the at least one shot picture is a single shot picture, determining the shot segment corresponding to that shot picture as the preview video; and when the at least one shot picture is a plurality of shot pictures, combining the shot segments corresponding to the shot pictures according to the linking parameters of each shot picture to obtain the preview video.
In the foregoing solution, the clipping parameters include at least one of: caption text, duration, and sound effect. The generation module is further configured to clip the shot picture into a pre-processing segment whose playing duration is the duration in the clipping parameters of the shot picture, add the caption text in the clipping parameters to the pre-processing segment, and add the sound effect in the clipping parameters to the pre-processing segment, to obtain the shot segment corresponding to the shot picture.
In the foregoing solution, the linking parameters include at least one of: shot sequence number and transition. The generation module is further configured to sort the plurality of shot segments in the order of the shot sequence numbers in the linking parameters to obtain a shot segment sequence; sequentially perform the following connection processing for each shot segment in the sequence: connecting the shot segment with its adjacent shot segment according to the transition in the linking parameters corresponding to the shot segment; and use the connected shot segment sequence as the preview video.
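The clipping-then-linking pipeline described in the paragraphs above can be sketched as building an edit decision list: clip each shot to its duration, sort by shot sequence number, and attach the transition that links each segment to the next. The field names and dictionary layout below are assumptions; a real implementation would hand this plan to a rendering engine.

```python
def build_preview(shots):
    """Assemble a preview plan from per-shot parameters.

    Each shot dict carries clipping parameters (caption, duration, sound)
    and linking parameters (seq, transition). Segments are ordered by
    sequence number; each transition joins a segment to the next one.
    """
    ordered = sorted(shots, key=lambda s: s["seq"])
    plan, t = [], 0.0
    for i, s in enumerate(ordered):
        plan.append({
            "start": t,
            "duration": s["duration"],
            "caption": s.get("caption", ""),
            "sound": s.get("sound"),
            # The last segment has nothing to transition into.
            "transition": s["transition"] if i < len(ordered) - 1 else None,
        })
        t += s["duration"]
    return plan
```

For a single shot picture the plan degenerates to one segment with no transition, matching the single-shot case in the text.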
In the foregoing solution, the editing module is further configured to display, in response to a video editing operation submitted by at least one of a first account and a second account in the video editing script template, the plurality of shot pictures set by the video editing operation and the parameters corresponding to each shot picture; where the first account is the account logged in to the document editing interface, and the second account is an account that edits the video editing script template in cooperation with the first account.
In the foregoing solution, the editing module is further configured to query the state of a first shot picture, where the first shot picture is the shot picture that the video editing operation requests to edit in the video editing script template, or the shot picture corresponding to the parameters that the video editing operation requests to edit; determine that processing in response to the video editing operation is to be performed when both the first shot picture and a second shot picture related to the first shot picture are in an unedited state; and display first prompt information when the first shot picture and the second shot picture related to the first shot picture are in an editing state, the first prompt information prompting that the video editing operation cannot be responded to immediately due to an editing conflict.
In the foregoing solution, the first shot picture and the second shot picture satisfy at least one of the following association conditions: the first shot picture and the second shot picture were edited into the video editing script template by the same account; the first shot picture and the second shot picture belong to the same scene.
In the foregoing solution, the editing module is further configured to query the state of a first parameter, where the first parameter is the parameter corresponding to a third shot picture that the video editing operation requests to edit in the video editing script template; determine that processing in response to the video editing operation is to be performed when both the first parameter and a second parameter related to the first parameter are in an unedited state; and display second prompt information when the first parameter and the second parameter related to the first parameter are both in an editing state, the second prompt information prompting that the video editing operation cannot be responded to immediately due to an editing conflict.
In the foregoing solution, the first parameter and the second parameter satisfy at least one of the following association conditions: the first parameter and the second parameter were edited into the video editing script template by the same account; the shot picture corresponding to the first parameter is the same as the shot picture corresponding to the second parameter; the first parameter is of the same type as the second parameter.
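The conflict check described above reduces to testing whether the requested item, or any item related to it (edited by the same account, belonging to the same scene, or of the same type), is currently being edited. A minimal sketch with assumed names; the association predicate is passed in so the same check covers both shot pictures and parameters:

```python
def edit_allowed(target, locks, related):
    """Return True when `target` and everything related to it are unedited.

    `locks` is the set of item ids currently in an editing state;
    `related(a, b)` encodes the association conditions from the text
    (same account, same scene, same shot picture, or same type).
    """
    if target in locks:
        return False
    return not any(related(target, other) for other in locks)
```

For example, with shots grouped by scene, editing a shot is blocked while any other shot of the same scene is locked, which would trigger the prompt information instead of the edit.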
In the foregoing solution, the video editing operation includes a shot picture editing operation and a parameter editing operation. The editing module is further configured to display, in response to the shot picture editing operation, the plurality of shot pictures set by the shot picture editing operation in the video editing script template; and display, in response to the parameter editing operation, the parameters set by the parameter editing operation for the plurality of shot pictures in the video editing script template.
In the foregoing solution, the editing module is further configured to perform parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures, use these parameters as default parameters, and display the default parameters in the video editing script template; and, in response to a parameter editing operation on the plurality of shot pictures, replace the default parameters displayed in the video editing script template with the parameters set by the parameter editing operation.
In the foregoing solution, the editing module is further configured to perform the following processing for each shot picture: performing object recognition processing on the shot picture and determining the recognized object as the caption text matched with the shot picture, where the types of the object include scene, person, and event.
In the foregoing solution, the editing module is further configured to perform one of the following processes for each shot picture: performing object recognition processing on the shot picture and determining a duration adapted to the shot picture according to the number of recognized objects, where the number of recognized objects is positively correlated with the duration; or identifying the similarity between the shot picture and the adjacent shot picture and determining a duration adapted to the shot picture according to the similarity, where the similarity is negatively correlated with the duration.
In the foregoing solution, the editing module is further configured to perform the following processing for each shot picture: performing object recognition processing on the shot picture and querying, in a mapping table, the sound effect that has a mapping relationship with the recognized object, where the mapping table includes a plurality of objects and a plurality of sound effects in one-to-one correspondence with the objects; and determining the sound effect that has a mapping relationship with the recognized object as the sound effect adapted to the shot picture.
In the foregoing solution, the editing module is further configured to perform the following processing for each shot picture: identifying the similarity between the shot picture and the adjacent shot picture, querying the transition matched with the similarity, and determining the transition matched with the similarity as the transition adapted to the shot picture.
In the foregoing solution, the editing module is further configured to perform the following processing for each shot picture: identifying historical parameters adapted to the shot picture and using them as the parameters adapted to the shot picture; where the type of the historical parameters includes one of the following: historical parameters corresponding to the historical shot picture with the highest similarity to the shot picture; historical parameters set most frequently during video editing; historical parameters set in the video editing session closest in time to the current time point.
In the foregoing solution, the editing module is further configured to display, in response to a collaborative editing trigger operation received in the document editing interface, a collaboration account setting page of a first account, where the first account is the account logged in to the document editing interface and the collaboration account setting page includes at least one candidate account; and, in response to an account selection operation received on the collaboration account setting page, determine the selected at least one candidate account as a second account that edits the video editing script template in cooperation with the first account, and send the video editing script template to the second account.
In the foregoing solution, the editing module is further configured to display an editing permission setting entry on the collaboration account setting page; acquire, in response to a permission setting operation on the editing permission setting entry, the set permission, where the type of the permission includes viewing permission and editing permission; and determine that the video editing script template to which the permission is applied is to be sent to the second account.
In the foregoing solution, the video editing processing apparatus further includes a modification module, configured to display, in response to a modification operation on the preview video, a video modification page, where the video modification page includes at least one shot picture and the parameters corresponding to each shot picture; and update the displayed preview video according to the modified parameters in response to a parameter modification operation received on the video modification page.
In the foregoing solution, the display module is further configured to display, in response to a video editing trigger operation, a video type selection page including a plurality of candidate video types; and display, in response to a video type selection operation received on the video type selection page, the video editing script template corresponding to the selected video type.
An embodiment of the present application provides an electronic device, including:
a memory for storing computer executable instructions;
and the processor is used for realizing the video editing processing method provided by the embodiment of the application when executing the computer executable instructions stored in the memory.
The embodiments of the application provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the video editing processing method provided by the embodiments of the application.
The embodiments of the application provide a computer program product including computer-executable instructions that, when executed by a processor, implement the video editing processing method provided by the embodiments of the application.
The embodiment of the application has the following beneficial effects:
the shot pictures and the corresponding parameters in the video editing script template can be identified accurately and efficiently, and a preview video can be generated from the document editing interface, so that a user can intuitively perceive the visual effect of the video editing script. This reduces the number of optimization passes over the shot pictures and their parameters, saves editing resources, and improves video editing efficiency.
Drawings
Fig. 1 is a schematic diagram of a shot script provided by the related art;
fig. 2 is a schematic architecture diagram of a video editing processing system 100 according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal 400 provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application;
fig. 7A and fig. 7B are schematic application scenarios of a video editing processing method provided in an embodiment of the present application;
fig. 8 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application;
fig. 9A and 9B are schematic application scenarios of a video editing processing method provided in an embodiment of the present application;
fig. 10 is a schematic application scenario diagram of a video editing processing method provided in an embodiment of the present application;
fig. 11A and fig. 11B are schematic application scenarios of a video editing processing method provided in an embodiment of the present application;
fig. 12 is an application scenario diagram of a video editing processing method provided in an embodiment of the present application;
fig. 13 is an application scenario schematic diagram of a video editing processing method according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described below in further detail with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, references to the terms "first/second" are only to distinguish similar items and do not denote a particular order, but rather the terms "first/second" may, where permissible, be interchanged with a particular order or sequence so that embodiments of the application described herein may be practiced in other than the order shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) In response to: indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, one or more of the performed operations may occur in real time or with a set delay; unless otherwise specified, there is no restriction on the execution order of the operations performed.
2) Computer Vision (CV) is a science that studies how to make machines "see". More specifically, it uses cameras and computers, in place of human eyes, to identify, track, and measure targets, and further performs graphics processing so that the resulting images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
3) Shot script (storyboard script), or video editing script: a series of sketches that a video planner prepares before formal production (shooting), serving as a visual preview of the video to be made. It is not the final form of the video; rather, it is the basis for early communication among collaborators, and because it requires the cooperation of multiple people, it typically goes through repeated revision. It generally comprises several elements: shot number, picture content, copy (caption/commentary), scene (full/medium/close/close-up shot), transition (hard cut/push-pull/dissolve), duration (seconds), music, and so on.
Referring to fig. 1, fig. 1 is a schematic diagram of a shot script provided by the related art; it shows the handwritten shot scripts currently used in the film and television industry, in which the shot pictures are hand-drawn sketches. The process of editing a video in the related art generally includes: after establishing a video theme, a planner drafts a shot script such as the one shown in fig. 1 and communicates it to collaborators; the shot script is then handed to an editor, who produces the video with professional editing software, which has a high learning cost. Meanwhile, because a static shot script cannot convey the rhythm of the video or accurately estimate transitions and dwell times, the output video must be regenerated after every parameter change, so the editing cost is high. Moreover, a static shot script is still a traditional storyboard: users must imagine the resulting video themselves, and multi-person collaboration on the shot script cannot be implemented.
In view of the above technical problems, embodiments of the present application provide a video editing processing method, which can support timely viewing of a corresponding preview video in a script editing process, thereby improving video editing efficiency. An exemplary application of the video editing processing method provided by the embodiment of the present application is described below, and the video editing processing method provided by the embodiment of the present application can be implemented by various electronic devices, for example, can be applied to various types of user terminals (hereinafter also referred to as simply terminals) such as smart phones, tablet computers, in-vehicle terminals, and smart wearable devices.
Next, taking the electronic device being a terminal as an example, an exemplary application system architecture for implementing the video editing processing method provided by the embodiments of the present application is described. Referring to fig. 2, fig. 2 is an architecture schematic diagram of a video editing processing system 100 provided by an embodiment of the present application. The video editing processing system 100 includes the server 200, the network 300, and the terminal 400, which are described separately below.
The server 200 is a background server of the client 410, configured to receive a plurality of shot pictures and the parameters corresponding to each shot picture sent by the client 410; and further configured to generate a preview video based on at least one shot picture and its corresponding parameters, and send the preview video to the client 410.
The network 300, which is used as a medium for communication between the server 200 and the terminal 400, may be a wide area network or a local area network, or a combination of both.
The terminal 400 is used to run the client 410. The client 410 is configured to display the video editing script template in the document editing interface; further configured to display, in response to a video editing operation, a plurality of set shot pictures and the parameters corresponding to each shot picture in the video editing script template, and send the plurality of shot pictures and the parameters corresponding to each shot picture to the server 200; and further configured to receive the preview video sent by the server 200 and display the preview video.
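As an illustration of the exchange above, the sketch below shows the server-side step of assembling the received shot pictures and durations into a preview timeline. This is a minimal stand-in, not the actual implementation: the function name, payload fields, and in-memory representation are assumptions for illustration only.

```python
def generate_preview(shots):
    """Server-side stub: order the shots by shot number and lay their
    durations end to end into a preview timeline."""
    timeline, t = [], 0.0
    for shot in sorted(shots, key=lambda s: s["shot_number"]):
        timeline.append({"picture": shot["picture"],
                         "start": t,
                         "end": t + shot["duration_s"]})
        t += shot["duration_s"]
    return {"duration_s": t, "timeline": timeline}

# The client 410 would send a payload like this and display the result.
preview = generate_preview([
    {"shot_number": 2, "picture": "b.png", "duration_s": 3.0},
    {"shot_number": 1, "picture": "a.png", "duration_s": 2.0},
])
print(preview["duration_s"])  # total preview length: 5.0
```

In practice the rendering would of course involve actual video encoding; the sketch only shows that the preview is fully determined by the shot pictures plus their parameters, which is what lets the preview be regenerated whenever a parameter changes.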
The embodiments of the present application may be implemented by means of Cloud Technology (Cloud Technology), which refers to a hosting Technology for unifying series resources such as hardware, software, and network in a wide area network or a local area network to implement data calculation, storage, processing, and sharing.
Cloud technology is a general term for the network, information, integration, management-platform, and application technologies based on the cloud computing business model. It can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of a technical network system require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
The structure of the terminal 400 in fig. 2 is explained next. Referring to fig. 3, fig. 3 is a schematic structural diagram of a terminal 400 according to an embodiment of the present application, where the terminal 400 shown in fig. 3 includes: at least one processor 410, memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components. The bus system 440 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 440 in FIG. 3.
The processor 410 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
The operating system 451 includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, and is used to implement various basic services and process hardware-based tasks.
A network communication module 452, for communicating with other computing devices via one or more (wired or wireless) network interfaces 420; exemplary network interfaces 420 include: Bluetooth, Wireless Fidelity (Wi-Fi), and Universal Serial Bus (USB), among others.
A presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430.
An input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the video editing processing apparatus provided in the embodiments of the present application may be implemented in software, and fig. 3 illustrates a video editing processing apparatus 455 stored in a memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: a display module 4551, an editing module 4552 and a generation module 4553, which are logical and thus may be arbitrarily combined or further divided according to the functions implemented. The functions of the respective modules will be explained below.
In the following, the video editing processing method provided by the embodiment of the present application is executed by the terminal 400 in fig. 2 alone as an example. Referring to fig. 4, fig. 4 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
It should be noted that the method shown in fig. 4 can be executed by various forms of computer programs executed by the terminal 400, and is not limited to the client 410, such as the operating system 451, the software modules, and the scripts, described above, and therefore the examples of the client in the following should not be construed as limiting the embodiments of the present application.
In step S101, a video editing script template is displayed in the document editing interface.
In some embodiments, the document editing interface may be the editing interface of a local document, i.e., a document opened in a document editing program installed and running locally on the terminal, for example, the editing interface of a local text (Word) document, a local spreadsheet (Excel) document, or a local slide (PPT, PowerPoint) document. It may also be the editing interface of an online document, i.e., a document opened in a text editing program running on a server (e.g., in the cloud) and displayed on the terminal, for example, the editing interface of an online Word document, an online Excel document, or an online PPT document.
In some embodiments, the video editing script template includes a shot picture editing area and a parameter editing area.
As an example, referring to fig. 9A, fig. 9A is a schematic view of an application scenario of a video editing processing method provided in an embodiment of the present application. In fig. 9A, a video editing script template 901 includes a shot picture editing area 902 and a parameter editing area 903. A user may edit shot pictures in the shot picture editing area 902, for example, by directly uploading a picture or video as a shot picture; and may edit parameters such as the shot number, copy, duration, transition, scene, sound effect, and music in the parameter editing area 903.
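One row of the video editing script template pairs a shot picture from the editing area 902 with the parameter columns of the editing area 903. A minimal sketch of such a row follows, assuming a plain dictionary representation; the field names are illustrative, since the patent does not prescribe a storage format.

```python
def new_script_row(shot_number, picture=None):
    """One editable row of the template: a shot picture plus the
    parameter columns shown in the parameter editing area 903."""
    return {
        "shot_number": shot_number,
        "picture": picture,      # image/video uploaded in the editing area 902
        "copy": "",              # caption/commentary text
        "duration_s": None,      # dwell time in seconds
        "transition": None,      # e.g. hard cut / push-pull / dissolve
        "scene": None,           # e.g. full / medium / close-up shot
        "sound_effect": None,
        "music": None,
    }

# A fresh template for a three-shot script.
template = [new_script_row(i) for i in (1, 2, 3)]
print(len(template), template[0]["shot_number"])
```

Keeping each row self-describing in this way is what allows the parameter identification step described later to fill defaults column by column, and a later parameter editing operation to overwrite them.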
In some embodiments, in response to a video editing triggering operation, displaying a video type selection page, wherein the video type selection page comprises a plurality of candidate video types; and responding to the video type selection operation received in the video type selection page, and displaying a video editing script template corresponding to the selected video type.
As an example, the candidate video types include a plurality of video types of different styles, for example, a favorite video type, a cool video type, a gourmet video type, a fashion video type, and the like, where video editing script templates corresponding to different video types are also different, and thus, a corresponding video editing script template may be generated according to the video type selected by the user, so as to meet the personalized video editing requirement of the user.
For example, referring to fig. 9B, fig. 9B is a schematic view of an application scenario of a video editing processing method provided in the embodiment of the present application, in fig. 9B, when a user triggers a video editing script entry 904, a video type selection page 905 is presented, where the video type selection page 905 includes a plurality of candidate video types, and when the user triggers an entry of "favorite video type", a video editing script template corresponding to "favorite video type" is presented.
In step S102, in response to a video editing operation, a plurality of set shot screens and parameters corresponding to each shot screen are displayed in a video editing script template.
In some embodiments, in response to a video editing operation submitted by at least one of the first account and the second account in the video editing script template, displaying a plurality of shot pictures set by the video editing operation and parameters corresponding to each shot picture in the video editing script template in real time; the first account is an account for logging in a document editing interface, and the second account is an account for editing the video editing script template in cooperation with the first account.
As an example, when the document editing interface is an editing interface of an online document, the first account may be an account that logs into the online document.
As an example, the video editing script template supports a first account and a second account to perform editing, and the display effects of a shot picture and corresponding parameters set in the video editing script template by different accounts are different, for example, a border color of the shot picture set in the video editing script template by the first account is different from a border color of a shot picture set in the video editing script template by the second account, a font (or a font size, a font color) of the parameters set in the video editing script template by the first account is different from a font (or a font size, a font color) of the parameters set in the video editing script template by the second account, and the like.
For example, referring to fig. 10, fig. 10 is a schematic view of an application scenario of a video editing processing method according to an embodiment of the present application, in fig. 10, a parameter 101 is a parameter set by a first account in a video editing script template, a parameter 102 is a parameter set by a second account in the video editing script template, and fonts of the parameter 101 and the parameter 102 are different in size.
As an example, before responding to a video editing operation submitted in the video editing script template by at least one of the first account and the second account, the method may further include: querying the state of a first shot picture, where the first shot picture is the shot picture that the video editing operation requests to edit in the video editing script template, or the shot picture corresponding to the parameters that the video editing operation requests to edit; when both the first shot picture and a second shot picture related to the first shot picture are in an unedited state, determining that the processing responsive to the video editing operation is to be performed; and when the first shot picture or a second shot picture related to the first shot picture is in an editing state, displaying first prompt information, where the first prompt information is used to prompt that the video editing operation cannot be responded to immediately because of an editing conflict.
For example, the first shot may be any shot in the video editing script template, or may be a shot specified by the first account or the second account.
For example, the first prompt message is further used to prompt the first lens picture, the second lens picture, the parameter corresponding to the first lens picture, or the parameter corresponding to the second lens picture to be in the editing state; the first prompt message may further include an account number that is editing the first shot picture, the second shot picture, the parameter corresponding to the first shot picture, or the parameter corresponding to the second shot picture.
For example, when the first shot picture is updated from the editing state to the unedited state, the editing of the first shot picture may have been completed or cancelled. If it was completed, the editing result needs to be displayed synchronously in the video editing script template, together with prompt information indicating that the video editing operation can now be responded to; if it was cancelled, the editing result is discarded in the video editing script template, and the same prompt information is displayed.
For example, referring to fig. 11A, fig. 11A is a schematic view of an application scenario of a video editing processing method provided in an embodiment of the present application. In fig. 11A, the state 111 corresponding to each shot picture, either an editing state or an unedited state, is displayed in the upper right corner of the shot picture. When a user edits a shot picture that is in the editing state, first prompt information 112 is presented, which includes the account that is editing the shot picture. When the user edits a shot picture in the unedited state, the editing operation can be responded to directly.
For example, the first shot picture and the second shot picture satisfy at least one of the following association conditions: the first shot picture and the second shot picture were edited into the video editing script template by the same account; or the first shot picture and the second shot picture belong to the same scene, for example, both depict scenes in a room.
In the embodiments of the present application, shot pictures belonging to the same scene are more strongly associated, so when a user edits one shot picture, the user is more likely to also edit the other shot pictures of that scene. Locking the shot pictures of the same scene in an exclusive mode therefore reduces the number of optimization passes over the shot pictures in the script and saves editing resources. Similarly, shot pictures edited into the video editing script template by the same account are also strongly associated, so locking them in an exclusive mode likewise reduces the number of optimization passes and saves editing resources.
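The exclusive-locking behavior described above can be sketched as follows, assuming an in-memory list of shot records. The two association rules implemented (same editing account, same scene) are the ones named in the text; all identifiers and the record layout are illustrative.

```python
def related(a, b):
    """Association conditions from the text: same editing account, or same scene."""
    return a is not b and (a["account"] == b["account"] or a["scene"] == b["scene"])

def can_edit(target, shots):
    """Respond to the editing operation only when the target shot picture and
    every related shot picture are unedited; otherwise report the conflict
    (first prompt information) with the blocking accounts."""
    candidates = [target] + [s for s in shots if related(target, s)]
    blockers = sorted({s["account"] for s in candidates if s["state"] == "editing"})
    return (not blockers, blockers)

shots = [
    {"id": 1, "account": "A", "scene": "room",   "state": "editing"},
    {"id": 2, "account": "B", "scene": "room",   "state": "unedited"},
    {"id": 3, "account": "B", "scene": "street", "state": "unedited"},
]
# Shot 2 shares the "room" scene with shot 1, which account A is editing,
# so an attempt to edit shot 2 is refused with the conflicting account.
print(can_edit(shots[1], shots))
```

The same shape of check applies to parameters, with the association rules swapped for the parameter-level ones (same account, same shot picture, same parameter type).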
As an example, before responding to a video editing operation submitted in the video editing script template by at least one of the first account and the second account, the method may further include: querying the state of a first parameter, where the first parameter is a parameter, corresponding to a third shot picture, that the video editing operation requests to edit in the video editing script template; when both the first parameter and a second parameter related to the first parameter are in an unedited state, determining that the processing responsive to the video editing operation is to be performed; and when the first parameter or a second parameter related to the first parameter is in an editing state, displaying second prompt information, where the second prompt information is used to prompt that the video editing operation cannot be responded to immediately because of an editing conflict.
For example, the first parameter may be any parameter in the video editing script template, or may be a parameter specified by the first account or the second account.
For example, the second prompt information is further used for prompting that the parameter corresponding to the third shot image is in an editing state; the second prompt message may further include an account number that is editing the parameter corresponding to the third shot.
For example, when the first parameter is updated from the editing state to the unedited state, the editing of the first parameter may have been completed or cancelled. If it was completed, the editing result needs to be displayed synchronously in the video editing script template, together with prompt information indicating that the video editing operation can now be responded to; if it was cancelled, the editing result is discarded in the video editing script template and the same prompt information is displayed.
For example, referring to fig. 11B, fig. 11B is a schematic view of an application scenario of the video editing processing method provided in an embodiment of the present application. In fig. 11B, the upper right corner of each column of parameters displays the state 113 corresponding to the parameters, either an editing state or an unedited state. When the user edits a parameter that is in the editing state, second prompt information 114 is presented, which includes the account that is editing the parameter. When the user edits a parameter in the unedited state, the editing operation can be responded to directly.
For example, the first parameter and the second parameter satisfy at least one of the following association conditions: the first parameter and the second parameter were edited into the video editing script template by the same account; the shot picture corresponding to the first parameter is the same as the shot picture corresponding to the second parameter; or the first parameter and the second parameter are of the same type.
In the embodiments of the present application, parameters edited into the video editing script template by the same account are more strongly associated, so when a user edits one parameter, the user is more likely to also edit the other parameters that same account edited into the template. Locking those parameters in an exclusive mode therefore reduces the number of optimization passes over the parameters in the script and saves editing resources. Similarly, parameters corresponding to the same shot picture, and parameters of the same type, are also strongly associated, so locking them in an exclusive mode likewise reduces the number of optimization passes and saves editing resources.
In some embodiments, the video editing operations include a shot editing operation and a parameter editing operation; referring to fig. 5, fig. 5 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application, and based on fig. 4, step S102 may include step S1021 to step S1022.
In step S1021, in response to a shot picture editing operation, a plurality of shot pictures set by the shot picture editing operation are displayed in the video editing script template.
In some embodiments, in response to a shot picture editing operation for the shot picture editing area, the plurality of shot pictures set by the operation are acquired through the shot picture editing area and displayed in the shot picture editing area.
As an example, the shot editing operation may be submitted by the first account or the second account. The display effect of the shot picture set in the video editing script template by different account numbers is different, for example, the border color of the shot picture set in the video editing script template by the first account number is different from the border color of the shot picture set in the video editing script template by the second account number.
For example, in fig. 9A, a user may edit a shot in the shot editing area 902, for example, the user may upload a picture or video as a shot directly in the shot editing area 902.
In step S1022, in response to a parameter editing operation, the parameters, corresponding to the plurality of shot pictures, set by the parameter editing operation are displayed in the video editing script template.
In some embodiments, in response to a parameter editing operation for the parameter editing area, the parameters of the plurality of shot pictures set by the operation are acquired through the parameter editing area and displayed in the parameter editing area.
As an example, the parameter editing operation may be submitted by the first account or the second account. The display effect of the parameters set in the video editing script template by different account numbers is different, for example, the font (or font size, font color) of the parameter set in the video editing script template by the first account number is different from the font (or font size, font color) of the parameter set in the video editing script template by the second account number.
For example, in fig. 9A, the user can edit parameters such as the shot number, copy, duration, transition, scene, sound effect, and music in the parameter editing area 903.
In some embodiments, parameter identification processing is performed on each shot picture to obtain parameters adapted to the plurality of shot pictures, and these adapted parameters are displayed in the video editing script template as default parameters; in response to a parameter editing operation for the plurality of shot pictures, the default parameters displayed in the video editing script template are replaced with the parameters set by the parameter editing operation.
The following describes the parameter identification processing performed for each shot picture with reference to several examples.
As a first example, the following processing is performed for each shot picture: object recognition processing is performed on the shot picture, and the recognized object is determined as the copy (caption text) adapted to the shot picture; the types of recognizable objects include scenes, persons, and events.
For example, when the shot picture shows a person riding a bicycle, the text "riding a bicycle" may be automatically filled into the copy parameter. By automatically filling the copy parameter, the user does not need to write the corresponding copy in the video editing script template for each shot picture, which improves video editing efficiency.
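A minimal sketch of this auto-fill step, assuming a hypothetical `recognize_objects()` detector that returns labels for the scenes, persons, or events in a frame (a real system would run an image-recognition model here):

```python
def recognize_objects(frame):
    # Hypothetical detector: for this sketch, a frame is a dict that
    # already carries its recognized labels.
    return frame.get("labels", [])

def default_copy_for_shot(frame):
    """Join the recognized objects into a default copy (caption) string."""
    labels = recognize_objects(frame)
    return " ".join(labels) if labels else ""

shot = {"labels": ["riding", "bicycle"]}
print(default_copy_for_shot(shot))
```

The resulting string is only a default; as described above, the user may still overwrite it through a parameter editing operation.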
As a second example, one of the following processes is performed for each shot picture: object recognition processing is performed on the shot picture, and the duration adapted to the shot picture is determined according to the number of recognized objects, where the number of recognized objects is positively correlated with the duration; alternatively, the similarity between the shot picture and the adjacent shot picture is identified, and the duration adapted to the shot picture is determined according to the similarity, where the similarity is negatively correlated with the duration.
For example, the more objects a shot picture contains, the more information it carries and the more time a viewer needs to understand it, so a duration parameter positively correlated with the number of recognized objects can be set; this reduces the number of user operations while preserving video editing accuracy. Similarly, the higher the similarity between a shot picture and its adjacent shot picture, the stronger the association between the two and the less time a viewer needs to understand that association, so a duration parameter negatively correlated with the similarity can be set, again reducing the number of user operations while preserving accuracy.
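The two correlations above can be sketched as simple default-duration formulas; the base durations, per-object increment, and cap are illustrative assumptions, not values from the embodiment:

```python
def duration_from_objects(num_objects, base=1.0, per_object=0.5, cap=10.0):
    # More objects -> more information -> longer stay (positive correlation).
    return min(base + per_object * num_objects, cap)

def duration_from_similarity(similarity, max_duration=5.0, min_duration=1.0):
    # Higher similarity to the adjacent shot -> shorter stay (negative correlation).
    # similarity is assumed normalized to [0, 1].
    return max_duration - similarity * (max_duration - min_duration)
```

Either value would then be shown in the template as the default duration parameter for the shot picture.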
As a third example, the following processing is performed for each shot picture: a historical parameter adapted to the shot picture is identified and used as the parameter adapted to the shot picture; the type of the historical parameter includes one of the following: the historical parameter corresponding to the historical shot picture with the highest similarity to the shot picture; the historical parameter set most frequently during video editing; and the historical parameter set in the video editing session closest to the current time point. By reusing historical parameters, the user does not need to upload corresponding parameters in the video editing script template for each shot picture, which improves video editing efficiency.
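The three selection strategies can be sketched over a hypothetical history list, where each record carries a timestamp, the similarity of its shot picture to the current one, and the parameter value (a transition, in this illustration):

```python
from collections import Counter

history = [
    {"time": 100, "similarity": 0.4, "transition": "hard cut"},
    {"time": 200, "similarity": 0.9, "transition": "overlap"},
    {"time": 300, "similarity": 0.7, "transition": "hard cut"},
]

# Strategy 1: parameter of the most similar historical shot picture.
most_similar = max(history, key=lambda h: h["similarity"])["transition"]

# Strategy 2: most frequently set historical parameter.
most_frequent = Counter(h["transition"] for h in history).most_common(1)[0][0]

# Strategy 3: parameter from the session closest to the current time point.
most_recent = max(history, key=lambda h: h["time"])["transition"]
```

Whichever strategy is configured, the chosen value is displayed as the shot picture's default parameter.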
As a fourth example, the following processing is performed for each shot picture: object recognition processing is performed on the shot picture, and a mapping table is queried for the sound effect mapped to the recognized object, where the mapping table contains a plurality of objects and a plurality of sound effects in one-to-one correspondence; the sound effect mapped to the recognized object is determined as the sound effect adapted to the shot picture.
For example, when the shot picture shows a person riding a bicycle, the object "bicycle" can be recognized, so the sound effect mapped to "bicycle", such as a "bicycle bell", can be looked up in the local mapping table and used as the sound effect parameter; alternatively, the sound effect corresponding to "bicycle" can be retrieved directly from the network. By automatically filling the sound effect parameter, the user does not need to upload a corresponding sound effect in the video editing script template for each shot picture, which improves video editing efficiency.
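A minimal sketch of the mapping-table lookup; the table entries are illustrative, and the network-search fallback mentioned above is represented only by a `None` return:

```python
# One-to-one mapping from recognized objects to sound effects (illustrative).
SOUND_EFFECT_TABLE = {
    "bicycle": "bicycle bell",
    "clock": "clock tick",
}

def sound_effect_for(recognized_objects):
    """Return the sound effect mapped to the first known object, if any."""
    for obj in recognized_objects:
        if obj in SOUND_EFFECT_TABLE:
            return SOUND_EFFECT_TABLE[obj]
    return None  # no local mapping -> fall back to a network search
```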
As a fifth example, the following processing is performed for each shot picture: the similarity between the shot picture and the adjacent shot picture is identified, the transition matching the similarity is queried, and that transition is determined as the transition adapted to the shot picture.
For example, the process of identifying the similarity between a shot picture and an adjacent shot picture may include: extracting a first image feature from the shot picture, extracting a second image feature from the adjacent shot picture, determining a geometric distance (e.g., a Chebyshev distance, a Euclidean distance, or a Minkowski distance) between the first image feature and the second image feature, and determining the similarity between the two shot pictures based on the geometric distance.
For example, when the similarity is smaller than a similarity threshold, the transition adapted to the shot picture is set to "hard cut", where the similarity threshold may be a default value or a value set by a user, the client, or the server; when the similarity is not smaller than the similarity threshold, the transition is set to "push-pull" or "overlap", so that the transition between a shot picture and a highly similar adjacent shot picture is smooth and natural.
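A minimal sketch of this similarity-to-transition rule, using the Euclidean distance mentioned above and mapping it into (0, 1] so that identical features give similarity 1.0; the distance-to-similarity mapping and the threshold value are illustrative assumptions:

```python
import math

def similarity(feat_a, feat_b):
    # Euclidean distance between feature vectors, mapped into (0, 1].
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))
    return 1.0 / (1.0 + dist)

def pick_transition(feat_a, feat_b, threshold=0.5):
    # Below the threshold: hard cut; otherwise a smooth transition.
    return "hard cut" if similarity(feat_a, feat_b) < threshold else "overlap"
```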
In step S103, in response to a video preview operation received in the video editing process, a preview video is generated based on at least one shot picture and the parameters corresponding to the at least one shot picture.
In some embodiments, the client may invoke a corresponding service (e.g., a preview video generation service) of the terminal, and the process of generating the preview video is completed by the terminal. The client may also call a corresponding service (e.g., a preview video generation service) of the server, and the process of generating the preview video is completed through the server.
As an example, when the client calls the corresponding service of the server to complete the process of generating the preview video, the alternative step of step S103 may be: the client side responds to the video preview operation received in the video editing process and sends a video preview generation request to the server; the server responds to a preview video generation request, and generates a preview video based on at least one shot picture and parameters corresponding to the at least one shot picture; and sending the preview video to the client.
In the following, the process in which the client calls a corresponding service of the terminal and the terminal generates the preview video is described as an example. It should be noted that the process in which the client calls a corresponding service of the server to generate the preview video is similar and is not repeated.
As an example, in fig. 7B, when a user triggers the video generation portal 703, a preview video 704 may be generated according to the content edited by the user in the video editing script template 702.
In some embodiments, a preview video may be generated based on all of the shots and corresponding parameters in the video editing script template; the preview video may also be generated based on portions of the take frames and corresponding parameters in the video editing script template.
As an example, in response to a shot screen selection operation, a preview video is generated based on the selected shot screen and the corresponding parameters. Therefore, the user can watch the preview video comprising the selected shot picture, so that the personalized requirements of the user are met, and the resources consumed by generating the preview video are saved.
Here, the parameters include clipping parameters and linking parameters; the clipping parameters include at least one of: copy, duration, and sound effect; the linking parameters include at least one of: shot number and transition.
In some embodiments, referring to fig. 6, fig. 6 is a schematic flowchart of a video editing processing method provided in an embodiment of the present application, and based on fig. 4, step S103 may include steps S1031 to S1033.
In step S1031, in response to a video preview operation received in the video editing process, the following processing is performed for each shot picture: the shot picture is clipped according to its clipping parameters to obtain the shot segment corresponding to the shot picture.
In some embodiments, the following processing is performed for each shot picture: the shot picture is clipped into a pre-processed segment whose playing duration is the duration in the clipping parameters of the shot picture; the copy in the clipping parameters is then added to the pre-processed segment, and the sound effect in the clipping parameters is added to the pre-processed segment, yielding the shot segment corresponding to the shot picture.
For example, the shot picture is first clipped into a pre-processed segment, the copy in the clipping parameters is then superimposed on the pre-processed segment as a floating layer, and finally the sound effect in the clipping parameters is retrieved and filled into the pre-processed segment, yielding the shot segment corresponding to the shot picture.
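The three steps above can be sketched as a single assembly function over a hypothetical shot record; the dict-based segment representation and the 1-second default duration (stated elsewhere in the embodiment) are illustrative:

```python
def build_shot_segment(shot):
    # Step 1: clip the picture into a pre-processed segment whose play
    # length is the "duration" clipping parameter (default 1 s).
    segment = {
        "frames": shot["picture"],
        "duration": shot["params"].get("duration", 1.0),
    }
    # Step 2: superimpose the copy on the segment as a floating layer.
    copy_text = shot["params"].get("copy")
    if copy_text:
        segment["overlay"] = copy_text
    # Step 3: fill the segment with the sound effect; the renderer would
    # trim the audio to the segment duration.
    effect = shot["params"].get("sound_effect")
    if effect:
        segment["audio"] = effect
    return segment
```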
In step S1032, when the number of the at least one shot picture is one, the shot segment corresponding to that shot picture is determined as the preview video.
In some embodiments, when the preview video is generated based on one shot picture and its corresponding parameters, the single shot picture corresponds to a single shot segment and no splicing is needed, so the linking parameters of the shot picture can be ignored and the preview video can be generated from the clipping parameters alone.
In step S1033, when the number of the at least one shot picture is multiple, the shot segments corresponding to the shot pictures are combined according to the linking parameters of each shot picture, to obtain the preview video.
In some embodiments, when the number of the at least one shot picture is multiple, the multiple shot segments are sorted according to the order of the shot numbers in the linking parameters to obtain a shot segment sequence; the following joining processing is then performed for each shot segment in the sequence: the shot segment is joined to its adjacent shot segment according to the transition in its linking parameters; the joined shot segment sequence is used as the preview video.
As an example, the way adjacent shot segments are joined is based on the transition in the linking parameters. For example, when the transition is an overlap of N seconds, the shot segment can be extended by N seconds, during which its opacity gradually decreases from 100 to 0 while the opacity of the next adjacent shot segment gradually increases from 0 to 100, achieving a smooth hand-off between shot segments.
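The opacity ramp described above can be sketched as a keyframe generator for an N-second overlap; the number of keyframe steps is an illustrative assumption:

```python
def crossfade_keyframes(overlap_seconds, steps=5):
    """Opacity keyframes (t, outgoing, incoming) for an overlap transition.

    The outgoing shot segment fades 100 -> 0 while the incoming segment
    fades 0 -> 100 over the overlap window.
    """
    frames = []
    for i in range(steps + 1):
        t = overlap_seconds * i / steps
        out_alpha = 100 * (1 - i / steps)
        in_alpha = 100 * (i / steps)
        frames.append((round(t, 3), round(out_alpha, 1), round(in_alpha, 1)))
    return frames
```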
In step S104, the preview video is displayed.
In some embodiments, displaying the preview video may mean displaying it in a display box integrated into the document editing interface; alternatively, the video editing program may call a player program to display the preview video in a pop-up window, or call the player program to display the preview video in the video editing program's own window.
In some embodiments, after step S104, the preview video (i.e., the exported video file) may also be downloaded in response to the video editing completion operation, so that the user may save the generated video after previewing the video.
By way of example, in fig. 7B, a preview video 704 is displayed and a video download entry 705 is included in the document editing interface, and when the user triggers the video download entry 705, the preview video 704 can be downloaded.
In the embodiment of the application, the shot pictures and corresponding parameters in the video editing script template can be identified accurately and efficiently, and a preview video can be generated on the document editing interface, so that the user can intuitively grasp the content logic of the video. This addresses the technical problem in the related art that the rhythm of a video cannot be confirmed from a static storyboard script and the video must be regenerated after every parameter modification, which drives up editing cost; it also reduces the number of optimization passes over shot pictures and parameters, saves editing resources, and improves video editing efficiency.
In some embodiments, after step S104, the method may further include: responding to a modification operation aiming at the preview video, and displaying a video modification page, wherein the video modification page comprises at least one shot picture and a parameter corresponding to each shot picture; and responding to the parameter modification operation received in the video modification page, and updating the displayed preview video according to the modified parameters.
As an example, referring to fig. 12, fig. 12 is a schematic diagram of an application scenario of the video editing processing method provided in an embodiment of the present application. In fig. 12, when the user clicks a frame of a shot picture in the preview video 121, a floating window displays the video modification page 122 corresponding to the clicked shot picture; the user may modify the parameters of that shot picture in the video modification page 122 and, after finishing, trigger the confirm button 123 to update the preview video 121. This improves human-computer interaction efficiency during modification of the preview video.
In some embodiments, after step S101, the method may further include: responding to a collaboration editing triggering operation received in a document editing interface, and displaying a collaboration account setting page of a first account; the first account is an account for logging in a document editing interface, and the collaboration account setting page comprises at least one candidate account; and in response to the account selection operation received on the collaborative account setting page, determining the selected at least one candidate account as a second account for editing the video editing script template in cooperation with the first account, and sending the video editing script template to the second account.
As an example, an editing-permission setting entry is displayed on the collaboration account setting page; in response to a permission setting operation on the editing-permission setting entry, the set permission is acquired, where the types of permission include viewing permission and editing permission; it is then determined that the video editing script template, with the set permission applied, is to be sent to the second account.
For example, for an account with viewing authority, the update of the video editing script template can be viewed in real time; for the account with the editing authority, the updating of the video editing script template can be checked in real time, and the video editing script template can be edited.
For example, the same authority may be globally set for all shots and parameters in the video editing script template, for example, the second account may view and edit all shots and parameters in the video editing script template; different permissions can also be set for each shot picture and parameter or each type of shot picture and parameter in the video editing script template, for example, the second account can view the update of the parameter 1 in real time and edit the parameter 1, but can only view the update of the parameter 2 in real time and cannot edit the parameter 2.
For example, in addition to manual assignment through the first account, the permissions for the video editing script template may be assigned automatically by the client: corresponding permissions may be assigned according to the role of the second account, where the role may be set by the first account or the second account; or according to the activity level of the second account, where the activity level is positively correlated with the number of times the second account participates in editing and with the number of times the second account interacts based on the edits.
For example, referring to fig. 13, fig. 13 is a schematic diagram of an application scenario of the video editing processing method provided in an embodiment of the present application. In fig. 13, when the user triggers the collaboration entry 131, the collaboration account setting page 132 is displayed; the user can select the recipient accounts in the collaboration account setting page 132 and set the permissions of the video editing script template in the editing-permission setting entry 133. After the permissions and recipient accounts are selected, the video editing script template with the applied permissions can be sent to the selected accounts.
The embodiment of the application provides a multi-user collaboration function for the video editing script template. It addresses the technical problems of high editing cost and low editing efficiency caused by the inability to collaborate on static storyboard scripts in the related art, improves communication efficiency during video editing, and reduces the number of optimization passes over shot pictures and parameters, thereby saving editing resources.
The following describes a video editing processing method provided in an embodiment of the present application, taking an online document as an example.
According to the embodiment of the application, a video can be generated quickly from the textual editing of the storyboard script: after a video planner fills in and selects the structured text (i.e., the parameters) for the corresponding shot pictures, the system automatically recognizes them and generates a preview video, improving video production efficiency. In addition, the embodiment exploits the fact that an online document can be modified collaboratively in real time, which improves communication efficiency; the structured information of the storyboard script is processed to generate the preview video directly, so the user no longer needs to imagine the result, further improving video generation efficiency without changing industry workflows or user habits.
Referring to fig. 7A, 7B and 8, fig. 7A and 7B are schematic application scenarios of a video editing processing method provided in an embodiment of the present application, and fig. 8 is a schematic flow diagram of the video editing processing method provided in the embodiment of the present application. Next, a specific implementation of the embodiment of the present application will be described with reference to fig. 7A, 7B, and 8.
In step S801, the terminal displays a video editing script template in response to a video editing trigger operation received in the online document.
In some embodiments, in fig. 7A, when the user clicks the video editing script entry 701 in the online document, a video editing script template 702 as shown in fig. 7B may be created and displayed in the human-computer interaction interface.
In step S802, the terminal displays a plurality of shot pictures and the parameters corresponding to each shot picture in the video editing script template in response to a video editing operation.
In some embodiments, in fig. 7B, the user may fill in (or upload) multiple shot pictures and the parameters corresponding to each shot picture (including the shot number, copy, duration, transition, sound effect, music, and motion inside the shot) in the video editing script template 702. A shot picture can be edited by copy-pasting or uploading pictures or videos; multiple pictures are arranged in order, and for a video the usage period must be filled in at the upload stage. The shot number is usually filled in by default. The copy can be edited directly during document editing, and the user can choose whether to use it as a subtitle or a voice-over; if neither is chosen, it is not displayed by default. The duration indicates how long that row's shot picture stays on screen; the user fills in "number + seconds (s)", otherwise 1s is shown by default. The transition indicates how the shot picture connects to the next shot picture; the user can select an option (hard cut, push-pull, overlap, etc.) and must enter a duration, otherwise a direct hard cut is used by default. For the sound effect, the user may upload an audio file in mp3 format or enter text (such as "clock sound"), in which case the server performs a network search and download based on the entered text to obtain the corresponding sound effect; the usage duration of the sound effect depends on the duration of the shot picture. For music, the user may upload an audio file in mp3 format and fill in the usage period at the upload stage for full-video use. The specific data-editing method is determined by the actual scenario and is not limited here.
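The "number + seconds (s)" convention with a 1s fallback can be sketched as a small parser; the exact accepted syntax (e.g. whether decimals are allowed) is an assumption of this sketch:

```python
import re

def parse_duration(text, default=1.0):
    """Parse a duration cell like "3s" or "2.5s"; fall back to 1 s."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*s\s*", text or "")
    return float(m.group(1)) if m else default
```

The same pattern (parse if well-formed, otherwise apply the documented default) would apply to the transition cell, where an empty value falls back to a direct hard cut.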
In some embodiments, the content in the video editing script template may be edited by the account (i.e., the first account mentioned above) that logs into the online document, or may be edited by the collaboration account (i.e., the second account mentioned above).
In step S803, the terminal transmits the plurality of shot screens and the parameters corresponding to each shot screen to the server, and the server generates a preview video from the plurality of shot screens and the parameters corresponding to each shot screen and transmits the preview video to the terminal.
In some embodiments, the process of generating the preview video may include: first generating the shot segment of each shot picture according to its shot number, then combining all shot segments into a complete preview video of X seconds; the "music" parameter in the document is retrieved and played across the whole video, starting from the shot number of the row in which the "music" parameter is located, with a playing duration of X seconds.
As an example, the process of generating the shot segment for each shot picture may include the following. For the video part, the "shot picture" in the document is retrieved first; for example, the pictures or videos in the shot picture are compressed or enlarged proportionally to a 1080 x 720 specification to ensure adaptation without cropping, and the stay duration of the picture uses the "duration" parameter of M seconds in the document. It is then determined whether copy exists and whether the subtitle option is checked; if so, the text information in the copy is retrieved and superimposed on the picture in 32-point white SimSun on a black background bar sized to the total text length. For the audio part, the "sound effect" parameter in the document is retrieved and filled in first (silence if empty), with a playing duration of M seconds; then, if the voice-over option is checked in the "copy" parameter, the text information is retrieved and converted into speech through Text To Speech (TTS) technology. If the playing duration of the speech does not exceed M seconds (i.e., the "duration" parameter in the document), it is left unchanged; if it exceeds M seconds, the speech playback is accelerated.
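The TTS speed-up rule above can be sketched as a playback-rate computation; representing the speed-up as a simple linear rate that makes the speech finish exactly within M seconds is an assumption of this sketch:

```python
def tts_playback_rate(voice_seconds, shot_seconds):
    """Playback rate for a synthesized voice-over inside an M-second shot.

    If the voice fits within the shot's duration M, play at 1x; otherwise
    speed it up just enough to finish within M seconds.
    """
    if voice_seconds <= shot_seconds:
        return 1.0
    return voice_seconds / shot_seconds
```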
As an example, the way adjacent shot segments are joined is based on the "transition" parameter in the document. For example, when a shot segment's "transition" parameter is an overlap of N seconds, the shot segment is extended by N seconds, during which its opacity gradually decreases from 100 to 0 while the opacity of the next shot segment gradually increases from 0 to 100, achieving a smooth hand-off. If the "transition" parameter is empty, no processing is performed and a direct hard cut is used.
In step S804, in response to the video preview operation, a preview video is displayed.
In some embodiments, in fig. 7B, when the user triggers the video generation entry 703, the server may generate a preview video 704 according to the content the user filled into the video editing script template 702, where the size of the preview video 704 may default to 1080 x 720 and the material of each shot picture is adapted without cropping; the user may then modify the content in the video editing script template 702 according to the preview video 704. Also, when the user triggers the video download entry 705, the preview video 704 can be downloaded.
According to the embodiment of the application, a video can be generated quickly from the textual editing of the storyboard script, and multi-user collaboration with real-time modification reduces the cost of video generation and improves video generation efficiency.
An exemplary structure of a video editing processing apparatus provided in an embodiment of the present application, which is implemented as a software module, is described below with reference to fig. 3.
In some embodiments, as shown in fig. 3, the software modules stored in the video editing processing device 455 of the memory 450 may include: a display module 4551 configured to display a video editing script template in the document editing interface; an editing module 4552 configured to display, in the video editing script template in response to a video editing operation, the plurality of set shot pictures and the parameters corresponding to each shot picture; a generating module 4553 configured to generate a preview video based on at least one shot picture and the parameters corresponding to the at least one shot picture in response to a video preview operation received in the video editing process; the display module 4551 is further configured to display the preview video.
In the above scheme, the parameters include clipping parameters and linking parameters; the generating module 4553 is further configured to perform the following processing for each shot picture: clipping the shot picture according to its clipping parameters to obtain the shot segment corresponding to the shot picture; when the number of the at least one shot picture is one, determining the shot segment corresponding to that shot picture as the preview video; and when the number of the at least one shot picture is multiple, combining the shot segments corresponding to the shot pictures according to the linking parameters of each shot picture to obtain the preview video.
In the above scheme, the clipping parameters include at least one of: copy, duration, and sound effect; the generating module 4553 is further configured to clip the shot picture into a pre-processed segment whose playing duration is the duration in the clipping parameters of the shot picture; and to add the copy in the clipping parameters to the pre-processed segment, and the sound effect in the clipping parameters to the pre-processed segment, to obtain the shot segment corresponding to the shot picture.
In the above scheme, the linking parameters include at least one of: shot number and transition; the generating module 4553 is further configured to sort the multiple shot segments according to the order of the shot numbers in the linking parameters to obtain a shot segment sequence; to sequentially perform the following joining processing for each shot segment in the sequence: joining the shot segment to its adjacent shot segment according to the transition in its linking parameters; and to use the joined shot segment sequence as the preview video.
In the above scheme, the editing module 4552 is further configured to, in response to a video editing operation submitted by at least one of the first account and the second account in a video editing script template, display, in the video editing script template, a plurality of shot pictures set by the video editing operation and parameters corresponding to each shot picture; the first account is an account for logging in a document editing interface, and the second account is an account for editing the video editing script template in cooperation with the first account.
In the above scheme, the editing module 4552 is further configured to query the state of a first shot picture, where the first shot picture is the shot picture that the video editing operation requests to edit in the video editing script template, or the shot picture corresponding to the parameter that the video editing operation requests to edit in the video editing script template; when both the first shot picture and a second shot picture related to the first shot picture are in an unedited state, it is determined that the processing responsive to the video editing operation is to be performed; and when the first shot picture and the second shot picture related to the first shot picture are in an editing state, first prompt information is displayed, prompting that the video editing operation cannot be responded to immediately due to an editing conflict.
In the above scheme, the first lens picture and the second lens picture satisfy at least one of the following association conditions: the first shot picture and the second shot picture are edited into the video editing script template by the same account; the first shot picture and the second shot picture belong to the same scene.
In the above scheme, the editing module 4552 is further configured to query the state of a first parameter, where the first parameter is the parameter, corresponding to a third shot picture, that the video editing operation requests to edit in the video editing script template; when both the first parameter and a second parameter related to the first parameter are in an unedited state, it is determined that the processing responsive to the video editing operation is to be performed; and when the first parameter and the second parameter related to the first parameter are in an editing state, second prompt information is displayed, prompting that the video editing operation cannot be responded to immediately due to an editing conflict.
In the above scheme, the first parameter and the second parameter satisfy at least one of the following association conditions: the first parameter and the second parameter were edited into the video editing script template by the same account; the shot picture corresponding to the first parameter is the same as the shot picture corresponding to the second parameter; the first parameter is of the same type as the second parameter.
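The editing-conflict checks described above can be sketched as follows. This is purely an illustrative model: the `state` flags, the `related` rule, and the dictionary layout are assumptions for demonstration, not the patent's implementation.

```python
# Illustrative sketch of the editing-conflict check: an edit proceeds only when
# the target shot picture and every related shot picture are unedited.
# All names and the data layout are assumptions for demonstration.

def related(a, b):
    # Association conditions: added by the same account, or in the same scene.
    return a["account"] == b["account"] or a["scene"] == b["scene"]

def try_edit(target, shots):
    # Returns (allowed, prompt). A prompt is shown when an editing conflict exists.
    conflicted = [s for s in shots
                  if s["state"] == "editing" and (s is target or related(target, s))]
    if conflicted:
        return False, "Editing conflict: the operation cannot be responded to immediately"
    return True, ""
```

The same shape of check applies to parameters, with the association conditions swapped for the parameter-level ones (same account, same shot picture, same parameter type).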
In the above scheme, the video editing operation includes a shot picture editing operation and a parameter editing operation; the editing module 4552 is further configured to display, in response to the shot picture editing operation, the plurality of shot pictures set by the shot picture editing operation in the video editing script template; and to display, in response to the parameter editing operation, the parameters corresponding to the plurality of shot pictures set by the parameter editing operation in the video editing script template.
In the above scheme, the editing module 4552 is further configured to perform parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures, use these adapted parameters as default parameters, and display the default parameters in the video editing script template; and in response to the parameter editing operation for the plurality of shot pictures, replace the default parameters of the plurality of shot pictures displayed in the video editing script template with the parameters set by the parameter editing operation.
In the above scheme, the editing module 4552 is further configured to perform the following processing for each shot picture: carrying out object recognition processing on the shot picture, and determining the recognized object as a file matched with the shot picture; wherein the types of objects include: scene, person, event.
In the above scheme, the editing module 4552 is further configured to perform, for each shot picture, one of the following processes: performing object recognition processing on the shot picture, and determining the duration adapted to the shot picture according to the number of recognized objects, where the number of recognized objects is positively correlated with the duration; or recognizing the similarity between the shot picture and the adjacent shot picture, and determining the duration adapted to the shot picture according to the similarity, where the similarity is negatively correlated with the duration.
In the above scheme, the editing module 4552 is further configured to perform the following processing for each shot picture: carrying out object identification processing on the shot picture, and inquiring sound effects with a mapping relation between the shot picture and the identified objects in a mapping table, wherein the mapping table comprises a plurality of objects and a plurality of sound effects which are in one-to-one correspondence with the objects; and determining the sound effect having the mapping relation with the identified object as the sound effect matched with the lens picture.
In the above scheme, the editing module 4552 is further configured to perform the following processing for each shot picture: recognizing the similarity between the shot picture and the adjacent shot picture, and querying for the transition matching the similarity; and determining the transition matching the similarity as the transition adapted to the shot picture.
In the above scheme, the editing module 4552 is further configured to perform the following processing for each shot picture: identifying a historical parameter adapted to the shot picture as the parameter adapted to the shot picture; where the type of the historical parameter includes one of the following: the historical parameter corresponding to the historical shot picture with the highest similarity to the shot picture; the historical parameter set with the highest frequency in the video editing process; or the historical parameter set in the video editing session closest in time to the current time point.
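The parameter-identification heuristics in the last few paragraphs (duration from object count or from similarity, sound effect from a mapping table) could be sketched like this. The linear duration model, the constants, the mapping-table contents, and all names are illustrative assumptions, not the patent's actual formulas.

```python
# Illustrative heuristics for default parameters; constants and table are assumed.

def duration_from_objects(objects, base=1.0, per_object=0.5):
    # Duration positively correlated with the number of recognized objects.
    return base + per_object * len(objects)

def duration_from_similarity(similarity, max_seconds=5.0):
    # Duration negatively correlated with similarity to the adjacent shot picture.
    return max_seconds * (1.0 - similarity)

SOUND_EFFECTS = {"rain": "rain.wav", "car": "engine.wav"}  # assumed mapping table

def sound_effect_for(objects):
    # Query the mapping table for a sound effect mapped to a recognized object.
    for obj in objects:
        if obj in SOUND_EFFECTS:
            return SOUND_EFFECTS[obj]
    return None
```

Any monotone functions would satisfy the positive/negative correlation stated in the scheme; the linear forms here are just the simplest choice.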
In the above scheme, the editing module 4552 is further configured to display a collaboration account setting page of the first account in response to a collaborative editing trigger operation received in the document editing interface; the first account is an account for logging in to the document editing interface, and the collaboration account setting page includes at least one candidate account; and in response to an account selection operation received on the collaboration account setting page, determine the selected at least one candidate account as a second account for editing the video editing script template in cooperation with the first account, and send the video editing script template to the second account.
In the above scheme, the editing module 4552 is further configured to display an editing permission setting entry on the collaboration account setting page; and in response to a permission setting operation for the editing permission setting entry, obtain the set permission, where the types of permission include: viewing permission and editing permission; and determine that the processing of sending the video editing script template, to which the set permission applies, to the second account is to be performed.
In the above scheme, the video editing processing apparatus 455 further includes: a modification module configured to display a video modification page in response to a modification operation for the preview video, where the video modification page includes at least one shot picture and parameters corresponding to each shot picture; and to update the displayed preview video according to the modified parameters in response to a parameter modification operation received in the video modification page.
In the above scheme, the display module 4551 is further configured to display a video type selection page in response to a video editing trigger operation, where the video type selection page includes a plurality of candidate video types; and responding to the video type selection operation received in the video type selection page, and displaying a video editing script template corresponding to the selected video type.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the video editing processing method described in the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to execute the video editing processing method provided by the embodiments of the present application, for example, the video editing processing method shown in Figs. 4, 5, 6, and 8; here, the computer includes various computing devices, including intelligent terminals and servers.
In some embodiments, the logic of the video editing processing method provided by the embodiments of the present application may be implemented in a smart contract: a node (e.g., a server) generates a preview video by invoking the smart contract and stores the preview video in a blockchain network, so that the blockchain network responds to a client's preview request for the preview video according to the stored preview video, thereby improving the reliability of obtaining the preview video through the blockchain network.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or it may be any device including one of, or any combination of, the above memories.
In some embodiments, the computer-executable instructions may be in the form of programs, software modules, scripts or code written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and they may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, computer-executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, e.g., in one or more scripts in a hypertext markup language document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, computer-executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, in the embodiments of the present application, the shot pictures and the corresponding parameters in the video editing script template can be accurately and efficiently recognized in the document editing interface to generate the preview video, so that the user can intuitively perceive the visual effect of the video editing script. This reduces the number of optimization passes over the shot pictures and their parameters, saves editing resources, and improves video editing efficiency.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A video editing processing method, characterized in that the method comprises:
displaying a video editing script template in a document editing interface;
displaying a plurality of set shot pictures and parameters corresponding to each shot picture in the video editing script template in response to a video editing operation;
and responding to a video preview operation received in a video editing process, generating a preview video based on at least one shot picture and parameters corresponding to the at least one shot picture, and displaying the preview video.
2. The method of claim 1,
the parameters comprise clipping parameters and linking parameters;
the generating a preview video based on at least one shot picture and parameters corresponding to the at least one shot picture comprises:
performing the following processing for each of the shot pictures: clipping the shot picture according to the clipping parameters of the shot picture to obtain a shot segment corresponding to the shot picture;
when the number of the at least one shot picture is one, determining the shot segment corresponding to the shot picture as the preview video;
and when the number of the at least one shot picture is multiple, combining the shot segments corresponding to each shot picture according to the linking parameters of each shot picture to obtain the preview video.
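A minimal sketch of claim 2's flow (clip each shot picture, then either return the single segment or combine several). Here `clip_shot` and `combine` stand in for the per-shot clipping of claim 3 and the linking of claim 4; all names are illustrative assumptions.

```python
def generate_preview(shots, clip_shot, combine):
    # Clip every shot picture into a shot segment using its clipping parameters.
    segments = [clip_shot(s) for s in shots]
    if len(segments) == 1:
        # A single shot picture: its segment is the preview video.
        return segments[0]
    # Multiple shot pictures: combine segments by each shot's linking parameters.
    return combine(shots, segments)
```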
3. The method of claim 2, wherein the clipping parameters comprise at least one of: caption, duration, sound effect;
the clipping the shot picture according to the clipping parameters of the shot picture to obtain the shot segment corresponding to the shot picture comprises:
clipping the shot picture into a preprocessed segment, wherein the playing duration of the preprocessed segment is the duration in the clipping parameters of the shot picture;
and adding the caption in the clipping parameters of the shot picture to the preprocessed segment, and adding the sound effect in the clipping parameters to the preprocessed segment, to obtain the shot segment corresponding to the shot picture.
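Claim 3's clipping step might look like the following sketch, where a segment is modeled as a plain dictionary; every field name here is an assumption for illustration.

```python
def clip_shot(picture, clip_params):
    # Clip the shot picture into a preprocessed segment whose playing duration
    # is the duration from the clipping parameters.
    segment = {"frames": picture, "duration": clip_params["duration"]}
    # Attach the caption and sound effect from the clipping parameters.
    segment["caption"] = clip_params.get("caption", "")
    segment["sound"] = clip_params.get("sound")
    return segment
```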
4. The method of claim 2, wherein the linking parameters include at least one of: shot sequence number and transition;
the combining the shot segments corresponding to each shot picture according to the linking parameters of each shot picture to obtain the preview video comprises:
sorting the plurality of shot segments according to the order of the shot sequence numbers in the linking parameters to obtain a shot segment sequence;
sequentially performing the following linking processing for each of the shot segments in the shot segment sequence: connecting the shot segment with the adjacent shot segment according to the transition in the linking parameters corresponding to the shot segment;
and taking the shot segment sequence after the linking processing as the preview video.
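Claim 4's linking step, sort by shot sequence number and then connect neighbors with each segment's transition, can be sketched as below; the flat timeline representation and the default "cut" transition are assumptions for illustration.

```python
def link_segments(segments):
    # Sort shot segments by the shot sequence number in their linking parameters.
    ordered = sorted(segments, key=lambda seg: seg["seq"])
    timeline = [ordered[0]]
    for cur in ordered[1:]:
        # Connect each segment with its neighbor using the segment's transition.
        timeline.append(("transition", cur.get("transition", "cut")))
        timeline.append(cur)
    return timeline
```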
5. The method according to claim 1, wherein the displaying, in the video editing script template, a plurality of set shot pictures and parameters corresponding to each shot picture in response to a video editing operation comprises:
responding to a video editing operation submitted by at least one of a first account and a second account in the video editing script template, and displaying, in the video editing script template, a plurality of shot pictures set by the video editing operation and parameters corresponding to each shot picture;
wherein the first account is an account for logging in to the document editing interface, and the second account is an account for editing the video editing script template in cooperation with the first account.
6. The method of claim 5, wherein prior to the video editing operation submitted in the video editing script template in response to at least one of the first account number and the second account number, the method further comprises:
querying the state of a first shot picture, wherein the first shot picture is the shot picture that the video editing operation requests to edit in the video editing script template, or the shot picture corresponding to the parameter that the video editing operation requests to edit in the video editing script template;
determining that the processing responsive to the video editing operation is to be performed when the first shot picture and a second shot picture related to the first shot picture are both in an unedited state;
and when the first shot picture or the second shot picture related to the first shot picture is in an editing state, displaying first prompt information, wherein the first prompt information is used for prompting that the video editing operation cannot be responded to immediately because of an editing conflict.
7. The method of claim 6,
the first shot picture and the second shot picture satisfy at least one of the following association conditions:
the first shot picture and the second shot picture were edited into the video editing script template by the same account;
the first shot picture and the second shot picture belong to the same scene.
8. The method of claim 5, wherein prior to the video editing operation submitted in the video editing script template in response to at least one of the first account number and the second account number, the method further comprises:
querying the state of a first parameter, wherein the first parameter is the parameter corresponding to a third shot picture that the video editing operation requests to edit in the video editing script template;
determining that the processing responsive to the video editing operation is to be performed when the first parameter and a second parameter related to the first parameter are both in an unedited state;
and when the first parameter or the second parameter related to the first parameter is in an editing state, displaying second prompt information, wherein the second prompt information is used for prompting that the video editing operation cannot be responded to immediately because of an editing conflict.
9. The method of claim 8,
the first parameter and the second parameter satisfy at least one of the following association conditions:
the first parameter and the second parameter were edited into the video editing script template by the same account;
the shot picture corresponding to the first parameter is the same as the shot picture corresponding to the second parameter;
the first parameter is of the same type as the second parameter.
10. The method according to claim 1, wherein the video editing operation includes a shot picture editing operation and a parameter editing operation;
the displaying a plurality of set shot pictures and parameters corresponding to each shot picture in the video editing script template in response to a video editing operation comprises:
responding to the shot picture editing operation, and displaying, in the video editing script template, a plurality of shot pictures set by the shot picture editing operation;
and responding to the parameter editing operation, and displaying, in the video editing script template, the parameters corresponding to the plurality of shot pictures set by the parameter editing operation.
11. The method according to claim 10, wherein the displaying, in response to the parameter editing operation, the parameters corresponding to the plurality of shot pictures set by the parameter editing operation in the video editing script template comprises:
performing parameter identification processing on each shot picture to obtain parameters adapted to the plurality of shot pictures, taking the parameters adapted to the plurality of shot pictures as default parameters, and displaying the default parameters in the video editing script template;
and in response to the parameter editing operation for the plurality of shot pictures, replacing the default parameters of the plurality of shot pictures displayed in the video editing script template with the parameters set by the parameter editing operation.
12. The method according to claim 11, wherein said performing parameter identification processing on each of the shot pictures to obtain parameters adapted to the plurality of shot pictures comprises:
performing the following processing for each of the shot pictures:
performing object recognition processing on the shot picture, and determining the recognized objects as the caption adapted to the shot picture;
wherein the types of the objects include: scene, person, and event.
13. A video editing processing apparatus, comprising:
the display module is used for displaying the video editing script template in the document editing interface;
the editing module is used for responding to a video editing operation and displaying, in the video editing script template, a plurality of set shot pictures and parameters corresponding to each shot picture;
the generation module is used for responding to a video preview operation received in the video editing process and generating a preview video based on at least one shot picture and parameters corresponding to the at least one shot picture;
the display module is further used for displaying the preview video.
14. An electronic device, comprising:
a memory for storing computer executable instructions;
a processor for implementing the video editing processing method of any one of claims 1 to 12 when executing computer executable instructions stored in the memory.
15. A computer-readable storage medium having stored thereon computer-executable instructions for implementing the video editing processing method of any one of claims 1 to 12 when executed.
CN202110371612.1A 2021-04-07 2021-04-07 Video editing processing method and device, electronic equipment and storage medium Active CN113709575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110371612.1A CN113709575B (en) 2021-04-07 2021-04-07 Video editing processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110371612.1A CN113709575B (en) 2021-04-07 2021-04-07 Video editing processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113709575A true CN113709575A (en) 2021-11-26
CN113709575B CN113709575B (en) 2024-04-16

Family

ID=78647961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110371612.1A Active CN113709575B (en) 2021-04-07 2021-04-07 Video editing processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113709575B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302174A (en) * 2021-12-31 2022-04-08 上海爱奇艺新媒体科技有限公司 Video editing method and device, computing equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839562A (en) * 2014-03-17 2014-06-04 杨雅 Video creation system
CN104965816A (en) * 2015-07-22 2015-10-07 网易(杭州)网络有限公司 Editing method and device for data sheet
US20160006946A1 (en) * 2013-01-24 2016-01-07 Telesofia Medical Ltd. System and method for flexible video construction
CN111277905A (en) * 2020-03-09 2020-06-12 新华智云科技有限公司 Online collaborative video editing method and device
CN112422831A (en) * 2020-11-20 2021-02-26 广州太平洋电脑信息咨询有限公司 Video generation method and device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN113709575B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
WO2022037260A1 (en) Multimedia processing method and apparatus based on artificial intelligence, and electronic device
US11321667B2 (en) System and method to extract and enrich slide presentations from multimodal content through cognitive computing
US20230013601A1 (en) Program trial method, system, apparatus, and device, and medium
CN115082602B (en) Method for generating digital person, training method, training device, training equipment and training medium for model
US20180143741A1 (en) Intelligent graphical feature generation for user content
CN113655999A (en) Rendering method, device and equipment of page control and storage medium
CN113207039B (en) Video processing method and device, electronic equipment and storage medium
CN113709575B (en) Video editing processing method and device, electronic equipment and storage medium
CN113095056B (en) Generation method, processing method, device, electronic equipment and medium
CN113268232B (en) Page skin generation method and device and computer readable storage medium
Fischer et al. Brassau: automatic generation of graphical user interfaces for virtual assistants
CN116978028A (en) Video processing method, device, electronic equipment and storage medium
CN116962807A (en) Video rendering method, device, equipment and storage medium
US11532111B1 (en) Systems and methods for generating comic books from video and images
US11526578B2 (en) System and method for producing transferable, modular web pages
CN115543291A (en) Development and application method and device of interface template suite
JP7153052B2 (en) Online Picture Book Content Acquisition Method, Apparatus, and Smart Screen Device
CN114443022A (en) Method for generating page building block and electronic equipment
Shim et al. CAMEO-camera, audio and motion with emotion orchestration for immersive cinematography
CN113010129A (en) Virtual studio full-flow multi-terminal blackboard writing extraction method and device
CN113806596B (en) Operation data management method and related device
CN117251231B (en) Animation resource processing method, device and system and electronic equipment
CN115713578B (en) Animation interactive courseware manufacturing method, platform and electronic equipment
CN118338090A (en) Method and equipment for generating multimedia resources

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant