CN111930994A - Video editing processing method and device, electronic equipment and storage medium - Google Patents

Video editing processing method and device, electronic equipment and storage medium

Info

Publication number
CN111930994A
Authority
CN
China
Prior art keywords
video
template
target video
filling position
presenting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010672591.2A
Other languages
Chinese (zh)
Inventor
阳萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010672591.2A
Publication of CN111930994A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G06F16/7328 Query by example, e.g. a complete video frame or video sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837 Retrieval characterised by using metadata automatically derived from the content using objects detected or recognised in the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Abstract

The invention provides a video editing processing method, a video editing processing device, an electronic device, and a computer-readable storage medium. The method includes: presenting at least one video sample in response to a video template viewing operation; acquiring a target video template in response to a template multiplexing operation on any one of the video samples, wherein the target video template is the template used to edit and form the video sample selected by the template multiplexing operation; and presenting material adapted to the target video template, wherein the adapted material is used to fill the target video template to form a video preview similar to the video sample selected by the template multiplexing operation. With the method and the device, suitable materials can be selected intelligently and efficiently according to the video sample.

Description

Video editing processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to internet technologies, and in particular, to a method and an apparatus for processing video editing, an electronic device, and a computer-readable storage medium.
Background
With the development of the internet and the popularization of the network, editing and publishing videos has become an indispensable way for people to show their lives and express themselves.
The related art generally provides preset video templates for a user, and the user selects materials from a local material library according to a video template in order to clip a highlight video and share it to a social network. However, when the material library contains many materials, the user often cannot determine suitable materials from those presented; frequent attempts not only take a lot of time, but also heavily consume device and network resources.
Disclosure of Invention
The embodiment of the invention provides a video editing processing method and device, electronic equipment and a computer readable storage medium, which can intelligently and efficiently select appropriate materials according to video samples.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a video editing processing method, which comprises the following steps:
presenting at least one video sample in response to a video template viewing operation;
acquiring a target video template in response to a template multiplexing operation on any one of the video samples;
wherein the target video template is the template used to edit and form the video sample selected by the template multiplexing operation;
presenting material adapted to the target video template;
wherein the adapted material is used to fill the target video template to form a video preview similar to the video sample selected by the template multiplexing operation.
In the above solution, when presenting the video preview, the method further comprises:
in response to an export operation for the video preview, a corresponding video file is generated from the video preview.
In the above solution, when presenting the material adapted to the target video template, the method further comprises:
presenting the candidate materials;
presenting at least one material filling position included in the target video template, and presenting, at each material filling position, the requirements of that material filling position, so as to prompt manual selection of an adapted material;
and responding to the selection operation of the candidate materials, and taking the selected candidate materials as materials matched with the target video template.
In the above solution, before the presenting the material adapted to the target video template, the method further comprises:
acquiring a plurality of candidate materials;
performing image recognition on the candidate materials to determine the type of each candidate material;
and selecting the material with the type consistent with the filling requirement of each material filling position in the target video template from the candidate materials to serve as the material matched with the target video template.
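As a non-authoritative sketch of the selection step above (the patent specifies no data structures, so the dictionaries and field names here are assumptions), the type-matching filter might look like this, with material types coming from the image-recognition step:

```python
def adapted_materials(candidates, fill_positions):
    """For each material filling position, keep only the candidate
    materials whose recognized type matches the position's required type."""
    return {
        pos["slot"]: [m for m in candidates if m["type"] == pos["type"]]
        for pos in fill_positions
    }

# Hypothetical candidates and template filling positions for illustration.
candidates = [
    {"id": "a", "type": "scenery"},
    {"id": "b", "type": "person"},
    {"id": "c", "type": "scenery"},
]
fill_positions = [{"slot": 0, "type": "scenery"}, {"slot": 1, "type": "person"}]
```

Applied to these hypothetical inputs, slot 0 keeps the two scenery candidates and slot 1 keeps the single person candidate.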
In the above solution, when the candidate materials are pictures, performing image recognition on the candidate materials to determine the type of each candidate material includes:
the following processing is performed for each picture:
dividing the picture into a plurality of candidate frames;
predicting the type of the target included in each candidate frame according to the feature vector of the candidate frame;
determining the type of the target as the type of the picture.
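The per-picture flow above can be sketched as follows. This is a minimal stand-in: the patent names no detection model, so the grid-based candidate boxes and the feature-to-type classifier here are illustrative assumptions:

```python
from collections import Counter
from typing import List, Tuple

def candidate_boxes(width: int, height: int, grid: int = 3) -> List[Tuple[int, int, int, int]]:
    """Divide the picture into a grid of candidate boxes (x, y, w, h)."""
    bw, bh = width // grid, height // grid
    return [(i * bw, j * bh, bw, bh) for j in range(grid) for i in range(grid)]

def classify_box(feature: float) -> str:
    """Stand-in classifier: maps a box's feature (a scalar here, a feature
    vector in practice) to the type of the target it contains."""
    return "person" if feature > 0.5 else "scenery"

def picture_type(box_features: List[float]) -> str:
    """Predict a target type per candidate box, then take the most common
    predicted type as the type of the picture."""
    labels = [classify_box(f) for f in box_features]
    return Counter(labels).most_common(1)[0][0]
```

For example, a 300x300 picture yields nine candidate boxes under a 3x3 grid, and a picture whose boxes mostly classify as "person" is typed "person".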
In the above solution, when the candidate materials are videos, performing image recognition on the plurality of candidate materials to determine the type of each candidate material includes:
the following processing is performed for each video:
extracting a plurality of image frames contained in the video;
dividing each of the image frames into a plurality of candidate frames;
predicting the type of the target included in each candidate frame according to the feature vector of the candidate frame;
determining the type of the target as the type of the corresponding image frame, so as to obtain the type of each image frame;
determining the most frequently occurring type among the types of the plurality of image frames as the type of the video.
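The final majority-vote step can be sketched as below; frame extraction and per-frame classification are assumed to happen upstream, as described above:

```python
from collections import Counter
from typing import List

def video_type(frame_types: List[str]) -> str:
    """Take the most frequently occurring type among the extracted
    image frames as the type of the whole video."""
    return Counter(frame_types).most_common(1)[0][0]
```

For instance, a video whose sampled frames are typed `["beach", "beach", "city"]` is typed `"beach"`.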
In the above scheme, the selecting, as the material adapted to the target video template, of materials whose type is consistent with the type required at each material filling position in the target video template includes:
selecting a plurality of materials whose types are consistent with the type required at each material filling position in the target video template;
performing aesthetic scoring processing on the plurality of materials corresponding to each material filling position;
sorting the plurality of materials corresponding to each material filling position in descending order of aesthetic score, and taking the top-ranked material as the material adapted to that material filling position;
and taking the set of materials adapted to the respective material filling positions as the material adapted to the target video template.
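Assuming each filling position already has its type-matched candidates together with an aesthetic score per candidate (both hypothetical here), the ranking-and-pick step could be sketched as:

```python
from typing import Dict, List, Tuple

def pick_per_position(scored: Dict[int, List[Tuple[str, float]]]) -> Dict[int, str]:
    """For each material filling position, sort its candidate materials in
    descending order of aesthetic score and keep the top-ranked one; the
    resulting set is the material adapted to the whole template."""
    return {
        pos: sorted(cands, key=lambda m: m[1], reverse=True)[0][0]
        for pos, cands in scored.items()
    }
```

With candidates `{0: [("a", 0.3), ("b", 0.9)], 1: [("c", 0.7)]}`, position 0 receives material `"b"` and position 1 receives `"c"`.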
In the above solution, the performing an aesthetic scoring process on the plurality of materials corresponding to each material filling position includes:
calling the neural network model to execute the following processing:
extracting a feature vector of each material;
mapping the extracted feature vectors to corresponding aesthetic scores;
wherein the neural network model is obtained through training in which sample materials and the aesthetic scores labeled for the sample materials serve as training samples.
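As an illustrative stand-in (the patent does not specify the network architecture), the mapping from an extracted feature vector to a scalar aesthetic score can be pictured as a linear head whose parameters would be learned from the labeled sample materials:

```python
from typing import Sequence

def aesthetic_score(features: Sequence[float], weights: Sequence[float], bias: float = 0.0) -> float:
    """Map an extracted feature vector to a scalar aesthetic score.
    A real model would learn `weights` and `bias` from sample materials
    labeled with aesthetic scores; these values are illustrative."""
    return sum(f * w for f, w in zip(features, weights)) + bias
```

For example, features `[1.0, 2.0]` with weights `[0.5, 0.25]` score `1.0`.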
An embodiment of the present invention provides a processing apparatus for video editing, including:
the video presenting module is used for responding to the video template viewing operation and presenting at least one video sample;
the material presentation module is used for responding to template multiplexing operation aiming at any one video sample and acquiring a target video template;
the target video template is used for editing and forming the video sample selected by the template multiplexing operation;
the material presenting module is also used for presenting the material matched with the target video template;
wherein the adapted material is used to populate into the target video template to form a video preview similar to the video sample selected by the template multiplexing operation.
In the above scheme, the video presentation module is further configured to present a cover page of at least one video sample, and present a multiplexing template button; or playing the video sample and presenting a multiplexing template button corresponding to the video sample; wherein the operation for triggering the multiplexing template button is a template multiplexing operation for the video sample, and the multiplexing template button is used for characterizing multiplexing the target video template to edit a new video.
In the above scheme, the material presenting module is further configured to present at least one material filling position included in the target video template, and present a material that meets the requirement of the corresponding material filling position in each material filling position.
In the above scheme, the material presenting module is further configured to present one corresponding material at each material filling position; the presentation duration of each presented material is consistent with the duration required by the corresponding material filling position, and its type is consistent with the type required by the corresponding material filling position.
In the foregoing solution, the processing apparatus for video editing further includes: a selecting module, configured to apply, in response to an overall selecting operation for a material adapted to the target video template, the material adapted to the target video template to generate a corresponding video preview; presenting the video preview.
In the above scheme, the material adapted to the target video template is a material adapted to the requirement of the material filling position in the target video template; the selecting module is further used for filling materials which are matched with the requirements of the material filling positions in each material filling position of the target video template; wherein the requirements comprise the type and duration of the material to be filled; and generating corresponding video preview for the filled target video template.
In the foregoing solution, the processing apparatus for video editing further includes: and the clipping module is used for clipping the video to be consistent with the duration of the material required to be filled in the material filling position when the material is a video and the duration of the video is greater than the duration of the material required to be filled in the material filling position.
In the above solution, the clipping module is further configured to perform at least one of the following operations on the video: extracting a segment that starts at the video's starting time point and whose duration is the duration of the material required at the material filling position; extracting a segment that ends at the video's termination time point and whose duration is the duration of the material required at the material filling position; extracting the segment of the video that has the highest aesthetic score and whose duration is the duration of the material required at the material filling position; and, in response to a clipping operation on the video, extracting the segment that corresponds to the clipping operation and whose duration is the duration of the material required at the material filling position.
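The first two clipping strategies above (a segment anchored at the video's start, and one anchored at its end) can be sketched as follows; the function names and second-based times are illustrative assumptions, not from the patent:

```python
from typing import Tuple

def clip_from_start(video_duration: float, target: float) -> Tuple[float, float]:
    """Segment that starts at the video's starting time point, with the
    duration required by the material filling position (in seconds)."""
    return (0.0, min(target, video_duration))

def clip_from_end(video_duration: float, target: float) -> Tuple[float, float]:
    """Segment that ends at the video's termination time point, with the
    duration required by the material filling position (in seconds)."""
    return (max(0.0, video_duration - target), video_duration)
```

A 10-second video clipped to a 3-second slot yields the interval (0.0, 3.0) from the start, or (7.0, 10.0) from the end.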
In the foregoing solution, the processing apparatus for video editing further includes: and the material editing module is used for responding to the material editing operation matched with the target video template so as to update the material matched with the target video template.
In the above solution, the material editing module is further configured to present at least one candidate material, and execute at least one of: in response to an interchange operation for material adapted to the target video template, interchanging at least one of the adapted material as the candidate material; interchanging the material filled in at least two material filling positions in the target video template in response to a position adjustment operation for the material adapted to the target video template; updating the duration of the material adapted to the target video template in response to a duration adjustment operation for the material adapted to the target video template; adding a special effect in the material adapted to the target video template in response to a special effect operation on the material adapted to the target video template.
In the foregoing solution, the processing apparatus for video editing further includes: the generation module is used for presenting material filling positions included in the target video template and the materials filled in each material filling position by the video preview; in response to the editing operation of the material corresponding to each material filling position of the video preview, updating the material corresponding to at least one material filling position of the video preview; and generating a new video preview according to the material corresponding to the target video template at each material filling position, and presenting the new video preview.
In the foregoing solution, the generating module is further configured to execute at least one of: responding to the time length adjustment operation of the material corresponding to each material filling position aiming at the video preview, and updating the time length of the material corresponding to at least one material filling position of the video preview; in response to the special effect operation aiming at the material corresponding to each material filling position of the video preview, updating the special effect used by the video preview in the material corresponding to at least one material filling position; and in response to the position adjustment operation of the material corresponding to each material filling position aiming at the video preview, interchanging the material corresponding to the video preview in any two material filling positions.
In the foregoing solution, the processing apparatus for video editing further includes: and the export module is used for responding to the export operation aiming at the video preview and generating a corresponding video file according to the video preview.
In the above scheme, the selecting module is further configured to present candidate materials; presenting at least one material filling position included in the target video template, and presenting requirements of the material filling position in each material filling position to prompt manual selection of an adapted material; and responding to the selection operation of the candidate materials, and taking the selected candidate materials as materials matched with the target video template.
In the foregoing solution, the processing apparatus for video editing further includes: the identification module is used for acquiring a plurality of candidate materials; performing image recognition on the candidate materials to determine the type of each candidate material; and selecting the material with the type consistent with the filling requirement of each material filling position in the target video template from the candidate materials to serve as the material matched with the target video template.
In the foregoing solution, when the candidate material is a picture, the identifying module is further configured to perform the following processing on each picture: dividing the picture into a plurality of candidate frames; predicting the type of the target included in each candidate frame according to the feature vector of the candidate frame; determining the type of the target as the type of the picture.
In the foregoing solution, when the candidate materials are videos, the identifying module is further configured to perform the following processing for each video: extracting a plurality of image frames contained in the video; dividing each of the image frames into a plurality of candidate frames; predicting the type of the target included in each candidate frame according to the feature vector of the candidate frame; determining the type of the target as the type of the corresponding image frame, so as to obtain the type of each image frame; and determining the most frequently occurring type among the types of the plurality of image frames as the type of the video.
In the above scheme, the identification module is further configured to select a plurality of materials whose types are consistent with the type required at each material filling position in the target video template; perform aesthetic scoring processing on the plurality of materials corresponding to each material filling position; sort the plurality of materials corresponding to each material filling position in descending order of aesthetic score, and take the top-ranked material as the material adapted to that material filling position; and take the set of materials adapted to the respective material filling positions as the material adapted to the target video template.
In the above solution, the identification module is further configured to invoke a neural network model to perform the following processing: extracting a feature vector of each material; and mapping the extracted feature vector to a corresponding aesthetic score; wherein the neural network model is obtained through training in which sample materials and the aesthetic scores labeled for the sample materials serve as training samples.
An embodiment of the present invention provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the video editing processing method provided by the embodiment of the invention when the executable instructions stored in the memory are executed.
The embodiment of the invention provides a computer-readable storage medium, which stores executable instructions and is used for causing a processor to execute the executable instructions so as to realize the video editing processing method provided by the embodiment of the invention.
The embodiment of the invention has the following beneficial effects:
the method has the advantages that the materials matched with the target video template are intelligently presented according to the video, so that the time for manually selecting the materials and continuously trying the materials is saved, the efficiency of video production based on the materials and the quality of the video are improved, and the resources of equipment and a network are effectively saved.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided by the related art;
fig. 2 is a schematic structural diagram of a processing system 100 for video editing according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a processing method for video editing according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a processing method for video editing according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating a processing method for video editing according to an embodiment of the present invention;
fig. 7A and fig. 7B are schematic diagrams of application scenarios provided by an embodiment of the present invention;
fig. 8 is a flowchart illustrating a processing method for video editing according to an embodiment of the present invention;
fig. 9 is a schematic flowchart of intelligently recommending materials according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before further detailed description of the embodiments of the present invention, terms and expressions mentioned in the embodiments of the present invention are explained, and the terms and expressions mentioned in the embodiments of the present invention are applied to the following explanations.
1) The client is an application program running in the terminal and used for providing various services, such as a video client, a video editing client, a short video client or an instant messaging client.
2) In response to: used to indicate the condition or state on which a performed operation depends; when the dependent condition or state is satisfied, the one or more operations performed may be real-time or may have a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) A video template, including metadata describing how to edit the video, such as the number of materials, the type of the materials, and the duration of the materials.
4) The video sample is formed by applying sample materials to a video template.
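Terms 3) and 4) suggest that a template carries per-slot metadata. A hypothetical representation (the field names and values are assumptions for illustration, not from the patent) could be:

```python
# Hypothetical metadata for one video template: each material filling
# position specifies the type and duration of the material it accepts.
video_template = {
    "template_id": "tpl_001",
    "fill_positions": [
        {"slot": 0, "type": "scenery", "duration_s": 3.0},
        {"slot": 1, "type": "person", "duration_s": 2.5},
    ],
}

# A video sample is formed by applying sample materials to these slots;
# the number of materials the template needs equals the slot count.
material_count = len(video_template["fill_positions"])
```

Under this sketch, the template above requires two materials, one per filling position.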
With the development of the internet and the popularization of the network, editing and publishing videos has become an indispensable way for people to show their lives and express themselves. The related art provides many video templates, and a user can quickly clip a highlight video by selecting local videos or photos according to a video template. Although these editing tools make it easier for a user to edit a video, the user may spend too much time selecting a local video or picture for a video template, because the choice is difficult or the selection process is complex.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario provided in the related art. In fig. 1, a user is first presented with a list of video templates, the user selects a video template in the list of video templates, and triggers a clip button 101; the client calls the local photo album of the user and presents all materials (such as photos or videos) in the local photo album to the user; the user selects the material according to the video template, and the corresponding video preview can be presented by triggering the preview button 102 after the selection is completed.
In the related art, a user spends much time selecting local materials: on the one hand, the user has too many local materials; on the other hand, the user finds it difficult to choose, so a suitable material is hard to select in a short time.
The embodiment of the invention provides a processing method and device for video editing, electronic equipment and a computer readable storage medium.
An exemplary application of the electronic device provided by the embodiment of the present invention is described below, and the electronic device provided by the embodiment of the present invention can be implemented as various types of user terminals such as a notebook computer, a tablet computer, a desktop computer, a set-top box, and a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device). In the following, an exemplary application will be explained when the device is implemented as a terminal.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a processing system 100 for video editing according to an embodiment of the present invention. The processing system 100 for video editing includes: the server 200, the network 300, and the terminal 400 will be separately described.
The server 200 is a background server of the client 410, and is configured to respond to a video sample acquisition request of the client 410 and send a corresponding video sample to the client 410; the system is further used for receiving a material acquisition request of the client 410 and determining a corresponding target video template; and is further configured to determine material adapted to the target video template according to the target video template (a specific implementation of determining material adapted to the target video template according to the target video template will be described in detail below), and send the material adapted to the target video template to the client 410.
The network 300, which is used as a medium for communication between the server 200 and the terminal 400, may be a wide area network or a local area network, or a combination of both.
The terminal 400 is used for operating a client 410, and the client 410 is a client with a video playing function or a video editing function. The client 410 is used for receiving the video sample sent by the server 200 and presenting the video sample in a human-computer interaction interface; the system is also used for responding to template multiplexing operation of a user on the video sample and sending a material acquisition request corresponding to the template multiplexing operation to the server 200; the system is also used for receiving the material which is sent by the server 200 and is matched with the target video template, and presenting the material in the human-computer interaction interface; and the video processing device is also used for responding to the selection operation of the user on the material matched with the target video template, and applying the selected material to the target video template to generate the corresponding video.
The embodiment of the invention can be implemented by means of Cloud Technology, a hosting technology that unifies a series of resources, such as hardware, software, and network, in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like that are applied in the cloud computing business model; it can form a resource pool to be used on demand, flexibly and conveniently. Cloud computing technology will become an important support, because the background services of technical network systems, such as video portals, require a large amount of computing and storage resources.
As an example, the server 200 may be an independent physical server, may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present invention is not limited thereto.
The embodiment of the invention can be applied to video watching scenes (such as short video watching scenes and live broadcasting watching scenes) and video editing scenes. Taking a short video watching scene as an example, in the process that a user watches a short video through the client 410, the client 410 responds to template multiplexing operation for the short video, and sends a material acquisition request corresponding to the template multiplexing operation to the server 200 so as to receive a material which is sent by the server 200 and is adapted to a target video template; the client 410 responds to the selection operation of the material adaptive to the target video template, and applies the selected material to the target video template to generate a short video meeting the requirements of the user; the user can select to share the generated short video to the short video platform or store the short video in a local cache.
Next, a structure of an electronic device according to an embodiment of the present invention is described, where the electronic device may be the terminal 400 shown in fig. 2, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present invention, and the electronic device 500 shown in fig. 3 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 3.
The processor 510 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in connection with embodiments of the invention is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the processing apparatus for video editing provided by the embodiments of the present invention may be implemented in software, for example as a short video client, a video editing client, a live broadcast client, or another client having a video playing function or a video editing function. Fig. 3 shows a processing means 555 for video editing stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, comprising the following software modules: a video presentation module 5551 and a material presentation module 5552. These modules are logical and thus can be arbitrarily combined or further divided according to the functions implemented. The functions of the respective modules will be explained below.
Next, a video editing processing method according to an embodiment of the present invention will be described by taking as an example that the terminal 400 and the server 200 in fig. 2 cooperate with each other. Referring to fig. 4, fig. 4 is a flowchart illustrating a processing method for video editing according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 4.
In step S101, the client presents at least one video sample in response to a video template viewing operation.
Here, the client may be an Application (APP) having a video playing function or a video editing function, such as a short video APP, a video editing APP, or a live APP; it may also be a video applet or video editing applet that can be embedded into any APP; it may also be a browser with a video playing function or a video editing function.
In some embodiments, the client obtains at least one video sample and presents the at least one video sample through the server in response to the video template viewing operation.
In some embodiments, the client presents a cover page of at least one video sample and presents a reuse template button.
Here, the multiplexing template button is used to indicate that the target video template (i.e., the editing template of the video sample) can be multiplexed to edit a new video; the multiplexing template button can be embedded in the cover of the video sample or located in an area outside the cover. The operation that triggers the multiplexing template button is the template multiplexing operation for the video sample.
As an example, the cover of a video sample may consist of an image and/or text. The image may be a key frame representing the style and/or type of the video sample; a concatenation of multiple key frames representing the style and/or type of the video sample; or a key frame containing an object (e.g., a face) recognized in the video sample. The text is content introducing the style and/or type of the video sample. In this way, the user can quickly understand a video sample by viewing the image or text of the corresponding video sample, making it convenient for the user to select a video template of interest.
It should be noted that, while the client presents the cover of at least one video sample and the reuse template button in the human-computer interaction interface, various types of interaction buttons are also presented, for example a share button, a comment button, or a like button. These interaction buttons indicate the popularity of the video sample to the user; for example, the like button shows the number of likes from users across the network, and the popularity of the video sample can be determined from that number. In this way, the user can select the video template corresponding to a video sample with higher popularity to complete video production, so as to obtain more attention.
In other embodiments, the client plays the video sample and presents a reuse template button corresponding to the video sample.
Here, the multiplexing template button may be embedded in the video playback page or may be located in an area outside the video playback page.
It should be noted that, while the client plays the video sample and presents the multiplexing template button in the human-computer interaction interface, various types of interaction buttons are also presented, for example a share button, a comment button, or a like button. These interaction buttons indicate the popularity of the video sample to the user; for example, the like button shows the number of likes from users across the network, and the popularity of the video sample can be determined from that number. In this way, the user can select the video template corresponding to a video sample with higher popularity to complete video production, so as to obtain more attention.
Continuing the example of fig. 2, the client 410 sends a video sample acquisition request to the server 200 in response to a video sample viewing operation; the server 200 responds to the video sample acquisition request and sends the corresponding video sample to the client 410; the client 410 presents the received video sample in a video playback interface.
The embodiment of the invention supports the user to obtain the interested video template at any time while watching the video sample so as to facilitate the subsequent video production, improves the connectivity between the watched video and the produced video, enables the user to experience the watched video and the produced video seamlessly, and reduces the operation path of the user.
In step S102, the client transmits a material acquisition request to the server in response to a template multiplexing operation for any one video sample.
Here, the template multiplexing operation may be various forms of operations that are preset by the operating system and do not conflict with the registered operation; or may be various forms of operations that are user-defined and that do not conflict with registered operations. The template multiplexing operation includes at least one of: click operations (e.g., single-finger click operations, multi-finger click operations, multiple continuous click operations, etc.); a sliding operation in a specific track or direction; performing voice operation; a motion sensing operation (e.g., an operation of moving up and down, a curved motion operation, or the like). Thus, the operation experience of the user can be improved.
Taking the template multiplexing operation as a somatosensory operation as an example: the client presents the cover of one video sample, or plays one video sample, at a time in the human-computer interaction interface, and supports the user in sliding to switch the presented cover or the played video sample. When the client acquires, through the input device 532, the somatosensory operation associated with the template multiplexing instruction, the client determines that the operation is a template multiplexing operation and sends a material acquisition request to the server.
For example, in fig. 7A, the client sends a material acquisition request to the server in response to a trigger operation for the clip button 701.
In step S103, the server determines a target video template corresponding to the video sample, and determines a material adapted to the target video template according to the target video template.
Here, the target video template is used to edit the video sample selected by the template multiplexing operation. The adapted material is used to populate the target video template to form a video preview similar to the video sample selected by the template multiplexing operation.
In some embodiments, the client may call itself or a corresponding service (e.g., a material selection service) of the operating system 551, and complete the material selection process. The client may also invoke a corresponding service (e.g., a material selection service) of the server, and the server completes the material selection process.
Thus, alternative steps of step S102 and step S103 may be: the client side responds to template multiplexing operation aiming at any one video sample to obtain a target video template; and determining the material matched with the target video template. Thus, step S104 can be omitted.
Here, the target video template may be obtained locally from the client or may be obtained through the server.
Next, taking the example that the server completes the material selection process, the specific implementation of the material selection will be described.
Referring to fig. 5, fig. 5 is a schematic flowchart of a processing method for video editing according to an embodiment of the present invention, and based on fig. 4, step S103 may specifically include steps S1031 to S1033.
In step S1031, the server acquires a plurality of candidate materials.
Here, the material may be a picture or a video. The candidate material may be a material (e.g., a photo or video) contained in the user's local album; or the materials can be stored in the cloud by the user; the method can also be used for crawling the materials acquired in the whole network through the network.
Taking the candidate materials as the materials contained in the local album of the user as an example, the client calls the materials of the local album stored in the terminal cache and sends the materials of the local album to the server.
Here, the client may transmit the material of the local album to the server together at the same time of transmitting the material acquisition request to the server.
In step S1032, the server performs image recognition on a plurality of candidate materials to determine the type of each candidate material.
Here, the type of the material includes at least one of: person; landscape; food; pet; funny; nostalgia; youth; emotion. For example, when the candidate material is a portrait, the type of the material may be determined to be person; when the candidate material is a video whose background music is comedic, the type of the material may be determined to be funny.
As an example, when the candidate material is a picture, the server performs the following processing for each picture: dividing a picture into a plurality of candidate frames; predicting the type of the target included in each candidate frame according to the feature vector of each candidate frame; and determining the type of the target as the type of the picture. Therefore, the type of the picture material can be accurately determined, and the subsequent selection of the picture material matched with the target video template is facilitated.
Here, the type of the picture includes at least one of: person; landscape; food; pet; funny; nostalgia; youth; emotion.
As another example, when the candidate material is a video, the server performs the following processing for each video: extracting a plurality of image frames contained in a video; dividing each image frame into a plurality of candidate frames; predicting the type of the target included in the candidate frame according to the feature vector of each candidate frame; determining the type of the target as the type of the corresponding image frame to obtain the type of each image frame; and determining the type which is most distributed in the types of the plurality of image frames as the type of the video. Therefore, the type of the video material can be accurately determined, and the subsequent selection of the video material matched with the target video template is facilitated.
Here, the type of the video includes at least one of: person; landscape; food; pet; funny; nostalgia; youth; emotion.
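The frame-level voting described above for videos can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: `classify_frame` is a hypothetical stand-in for the candidate-box detection and feature-vector prediction step, and the frame representation is a toy structure.

```python
from collections import Counter

def classify_frame(frame):
    # Placeholder: a real implementation would divide the frame into
    # candidate boxes and predict each box's target type from its
    # feature vector; here the prediction is assumed to be precomputed.
    return frame["predicted_type"]

def video_type(frames):
    """Return the most common per-frame type as the type of the video."""
    counts = Counter(classify_frame(f) for f in frames)
    return counts.most_common(1)[0][0]

frames = [{"predicted_type": t} for t in ["person", "person", "landscape"]]
print(video_type(frames))  # person
```

For a picture, the same pipeline degenerates to classifying a single frame, so the majority vote is trivially the type of that one image.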
In step S1033, the server selects, from the plurality of candidate materials, one or more materials of the type that is consistent with the filling requirement of each material filling position in the target video template as materials that are adapted to the target video template.
In some embodiments, step S1033 may be preceded by: the server performs image recognition on the video sample to determine the type of material that each material fill location included in the target video template requires to be filled.
Here, the target video template includes at least one material fill location, and each material fill location requires that the type of material filled may or may not be the same.
As an example, the server performs the following processing on the video sample: determining at least one material contained in a video sample, wherein the at least one material contained in the video sample and at least one material filling position included in a target video template are in one-to-one correspondence; the type of each material contained in the video sample is determined and the type of each material is determined as the type of material that the corresponding material fill location requires to fill.
Here, the determination of the type of the material included in the video sample is similar to the above-described implementation of determining the type of the candidate material, and will not be described again.
In some embodiments, the server selects a plurality of materials whose types are consistent with the type required by each material filling position in the target video template; performs aesthetic scoring processing on the plurality of materials corresponding to each material filling position; sorts the aesthetic scores of the plurality of materials corresponding to each material filling position in descending order, and takes the material ranked first in the descending order as the material matched with that material filling position; and takes the set of materials matched with each material filling position as the materials adapted to the target video template.
The following describes a specific implementation of the aesthetic scoring process.
By way of example, the server performs aesthetic scoring processing on the plurality of materials corresponding to each material filling position according to the aesthetic scoring dimension to obtain an aesthetic score of each material.
Here, the aesthetic scoring dimension includes at least one of: analyzing the person; the definition of the picture; the scale of the picture; layout of the screen.
Specifically, the server calls a neural network model to perform the following processing: extracting a feature vector of each material and mapping the extracted feature vector to a corresponding aesthetic score; the neural network model is obtained by training with sample materials and the aesthetic scores labeled for those sample materials as training samples.
According to the embodiment of the invention, the aesthetic quality of a material can be accurately judged through the neural network model. On the basis of selecting materials of the same type as each material filling position in the target video template requires, materials of higher aesthetic quality can be preferentially recommended to the user; compared with materials a user selects without aesthetic judgment, these are of higher quality, helping the user edit a more refined video.
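The per-position selection flow of the preceding paragraphs (filter by required type, score, sort in descending order, take the first) can be sketched as below. All names are illustrative assumptions; `aesthetic_score` stands in for the neural scoring model, which is not specified here.

```python
def aesthetic_score(material):
    # Placeholder for the neural network model's output; assumed precomputed.
    return material["score"]

def select_adapted_materials(fill_positions, candidates):
    """Pick the highest-scoring type-matching candidate per fill position."""
    adapted = {}
    for pos in fill_positions:
        matches = [m for m in candidates if m["type"] == pos["required_type"]]
        matches.sort(key=aesthetic_score, reverse=True)  # descending order
        if matches:
            adapted[pos["id"]] = matches[0]  # material ranked first
    return adapted

positions = [{"id": 0, "required_type": "person"}]
candidates = [{"type": "person", "score": 0.6},
              {"type": "person", "score": 0.9},
              {"type": "pet", "score": 0.95}]
print(select_adapted_materials(positions, candidates)[0]["score"])  # 0.9
```

Note that the pet-type candidate is excluded despite its higher score, since type matching is applied before aesthetic ranking.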
It should be noted that the process of the client selecting the material is similar to the process of the server selecting the material, and will not be described again.
In step S104, the server transmits the material adapted to the target video template.
Continuing the example of fig. 2, in response to a template multiplexing operation for any one video sample, the client 410 sends a material acquisition request corresponding to the template multiplexing operation to the server 200; the server 200 transmits the material corresponding to the material acquisition request and adapted to the target video template.
Here, the material acquisition request includes information indicating a video sample, so that the server 200 can determine the video sample corresponding to the template multiplexing operation, thereby determining a target video template corresponding to the video sample, and further determining a material adapted to the target video template to be transmitted to the client.
In step S105, the client presents the material adapted to the target video template.
Here, the page presenting the material adapted to the target video template and the page presenting the video sample in step S101 may be displayed simultaneously. For example, the two pages may be displayed in a split screen; or the page presenting the material may be displayed above the page presenting the video sample as a floating layer with transparency, so that it does not completely block the page presenting the video sample. Of course, the two pages may also not be displayed simultaneously; for example, when the client responds to a template multiplexing operation, it switches from the page presenting the video sample to the page presenting the material adapted to the target video template.
It should be noted that, while presenting the material adapted to the target video template, the client may also present the material in the local material library (e.g., local album).
As an example, the client presents the intelligent recommendation module and the local material library in the human-computer interaction interface, wherein the intelligent recommendation module includes material adapted to the target video template. Therefore, the client can respectively present the materials matched with the target video template and the materials in the local material library in different areas.
For example, in fig. 7A, material adapted to the target video template is presented in the intelligent recommendation module 702, and material in the local material library is presented below the intelligent recommendation module 702.
As another example, the client presents a local corpus in a human-machine interaction interface; presenting the material adapted to the target video template in a presentation form (e.g., color mark or shape mark) different from the rest of the material in a local material library; wherein the rest of the material is the material which is not adapted to the target video template. Therefore, the user can be reminded to select the material matched with the target video template without independently displaying the material matched with the target video template in a human-computer interaction interface.
In some embodiments, the client presents at least one material fill location included in the target video template and presents material in each material fill location that meets the requirements of the respective material fill location.
Here, each material filling position is used to fill one material, and thus, the number of material filling positions included in the target video template is the same as the number of materials that the target video template requires to fill, that is, the number of materials that the target video template requires to fill is the same as the number of materials included in the video sample.
The client can also present the requirement of the corresponding material filling position in each material filling position, for example, the type of the required material or the duration of the required material; information of the material, such as the type of the material and the duration of the material, can also be presented in each material filling position; the amount of material required by the target video template may also be presented.
For example, in fig. 7B, the intelligent recommendation template 707 includes three videos adapted to the target video template, and the "00:10", "00:22" and "00:17" in the upper left corners of the videos refer to the playing durations of the videos.
It should be noted that the requirement of the material filling position includes at least one of the following: the type of material that needs to be filled; the length of time the material needs to be filled. Here, when the material is a video, the duration of the material refers to the duration of video playing; when the material is a picture, the duration of the material refers to the duration of the picture being presented, wherein during the presentation of the picture, the presentation mode of the picture may be static presentation or dynamic presentation, for example, the picture is presented with an animated special effect (e.g., flying out, fading or erasing).
In the following, a specific implementation of rendering adapted material for the requirements of the material filling position is described.
As a first example, when the requirements of the material filling positions include types of materials needing to be filled, the client presents the materials corresponding to one another at each material filling position; wherein each material presented is consistent with the type of corresponding material fill location requirement.
Here, when no material of the type required by the material filling position is acquired, a material of a type similar to that required may be used as the material meeting the requirement of the corresponding material filling position. For example, when the required type is pet and no pet-type material is acquired, a person-type material, as a type similar to pet, may be used instead, thereby avoiding a missing material at the corresponding material filling position.
As a second example, when the requirement of the material filling position includes the duration of the material to be filled, the client presents the material corresponding to one at each material filling position; wherein the presentation time length of each presented material is consistent with the time length required by the filling position of the corresponding material.
Here, when the duration of the acquired material is longer than the duration required by the material filling position, the acquired material may be cut or shortened (for example, the frame number of the material is kept unchanged, and the frame rate is increased, that is, the material is quickly played) so that the duration of the processed material is the same as the duration required by the material filling position; when the duration of the acquired material is less than the duration required by the material filling position, the acquired material may be extended (for example, the frame number of the material is kept unchanged, and the frame rate is reduced, that is, the material is played slowly) so that the duration of the processed material is the same as the duration required by the material filling position. Therefore, the time length of the material can be adaptively adjusted, the adjusted material is used as the material matched with the requirement of the corresponding material filling position, and the material loss of the corresponding material filling position is avoided.
As a third example, when the requirements of the material filling positions include the duration of the material to be filled and the type of the material to be filled, the client presents the material corresponding to one at each material filling position; and the presenting time length of each presented material is consistent with the time length required by the filling position of the corresponding material and is consistent with the type required by the filling position of the corresponding material.
Here, when no material of the type required by the material filling position is acquired, when the duration of the acquired material is less than the duration required by the material filling position, or when the duration of the acquired material is greater than the duration required by the material filling position, the material adapted to the material filling position may be selected by processing the material in the manner of the above two examples, so as to avoid a missing material at the corresponding material filling position.
The embodiment of the invention not only supports the presentation of the material with the same type as the filling position requirement of the corresponding material, but also supports the presentation of the material with the same duration as the filling position requirement of the corresponding material, and can improve the matching degree of the material and the target video template, thereby improving the aesthetic degree of the generated video.
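The duration adaptation described above (keeping the frame count unchanged and changing the frame rate) reduces to simple arithmetic, since duration = frame count / frame rate. The sketch below is illustrative only; function and parameter names are assumptions, not part of the patent.

```python
def adjusted_fps(frame_count, target_duration):
    """Frame rate at which `frame_count` frames play in `target_duration` seconds."""
    return frame_count / target_duration

# A 300-frame clip recorded at 30 fps lasts 10 s. To fit a 6 s fill
# position the frame rate is raised (fast playback); to fill a 15 s
# position it is lowered (slow playback).
print(adjusted_fps(300, 6))   # 50.0 -> faster playback, shorter duration
print(adjusted_fps(300, 15))  # 20.0 -> slower playback, longer duration
```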
Referring to fig. 6, fig. 6 is a flowchart of a processing method for video editing according to an embodiment of the present invention, and based on fig. 4, step S106 to step S108 may be included after step S105, and it should be noted that step S109 is an optional step.
In step S106, in response to the overall selection operation for the material adapted to the target video template, the client applies the material adapted to the target video template to generate a corresponding video preview.
In some embodiments, the material adapted to the target video template is material adapted to the requirements of the material fill locations in the target video template; in this way, the specific implementation manner of applying the material adapted to the target video template to generate the corresponding video preview may be: filling materials matched with the requirements of the material filling positions in each material filling position of the target video template by the client; and generating corresponding video preview by the filled target video template.
Here, the requirement of the material filling position includes at least one of: the type of material that needs to be filled; the length of time the material needs to be filled. After the material filling position is filled with the material which is matched with the requirement of the material filling position, the rendering special effect corresponding to the material filling position can be superposed to generate the corresponding video preview. Wherein rendering the special effect comprises at least one of: a filter; pasting a paper; background music; a segment switching effect; and (5) picture drawing.
As a first example, when the requirement of the material filling position includes the type of the material to be filled, the client fills the material, which is consistent with the type of the material required to be filled by the material filling position, into the corresponding position, and superimposes the filled material on the rendering special effect corresponding to the material filling position to generate the corresponding video preview.
As a second example, when the requirement of the material filling position includes the duration of the material to be filled, the client fills the material with the duration consistent with the duration of the material required to be filled by the material filling position to the corresponding position, and superimposes the filled material on the rendering special effect corresponding to the material filling position to generate the corresponding video preview.
As a third example, when the requirements of the material filling position include both the duration and the type of the material to be filled, the client fills the material whose duration and type are both consistent with the requirements of the material filling position into the corresponding position, and superimposes the rendering special effect corresponding to the material filling position on the filled material to generate the corresponding video preview.
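The preview generation described in the examples above (fill each position, then superimpose the position's rendering special effects) can be sketched as follows. The dictionary layout and all names here are hypothetical, chosen only to illustrate the data flow; a real client would operate on decoded media, not on this toy structure.

```python
def build_preview(template, adapted):
    """Fill each position with its adapted material and attach the
    position's rendering effects (filter, sticker, transition, captions)."""
    segments = []
    for pos in template["fill_positions"]:
        segments.append({
            "material": adapted[pos["id"]],
            "effects": pos.get("effects", []),  # superimposed special effects
        })
    # Background music is applied template-wide rather than per segment.
    return {"segments": segments, "music": template.get("background_music")}

template = {"fill_positions": [{"id": 0, "effects": ["filter"]}],
            "background_music": "bgm"}
preview = build_preview(template, {0: "clip"})
print(preview["segments"][0]["material"])  # clip
```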
Here, when the material is a video and the duration of the video is less than the duration of the material that the material filling position requires to be filled, before filling the material adapted to the requirement of the material filling position, the method may further include: extending the video to coincide with the duration of the material that the material filling position requires to be filled. For example, keeping the number of frames of the video unchanged and reducing the frame rate, i.e., playing the video slowly.
Here, when the material is a video and the duration of the video is less than the duration of the material that the material filling position requires to be filled, before filling the material that is adapted to the requirement of the material filling position, the method may further include: the video is repeatedly spliced to coincide with the duration of the material that the material filling position requires to be filled. For example, when the duration of a video is 5 seconds and the duration of a material required to be filled in a material filling position is 10 seconds, the video is copied and then spliced end to obtain a 10-second video.
Here, when the material is a video and the duration of the video is greater than the duration of the material that the material filling position requires to be filled, before filling the material that is adapted to the requirement of the material filling position, the method may further include: the video is shortened to coincide with the duration of the material that the material fill location requires to fill. For example, keeping the number of frames of the video constant, the frame rate is lowered, i.e., the video is played slowly.
Here, when the material is a video and the duration of the video is greater than the duration of the material that the material filling position requires to be filled, before filling the material that is adapted to the requirement of the material filling position, the method may further include: the video is cropped to coincide with the duration of the material that the material fill location requires to fill.
A specific implementation of clipping the video is described below.
As a first example, the client extracts a segment whose start is the starting time point of the video and whose duration is the duration required by the material filling position.

In this way, the opening portion of the video can be cropped as material adapted to the requirement of the corresponding material filling position.

As a second example, the client extracts a segment whose end is the termination time point of the video and whose duration is the duration required by the material filling position.

In this way, the ending portion of the video can be cropped as material adapted to the requirement of the corresponding material filling position.

As a third example, the client extracts the segment of the video that has the highest aesthetic score among segments whose duration is the duration required by the material filling position.

Specifically, the client performs the following processing on the video: extracting a plurality of image frames contained in the video; performing aesthetic scoring on each image frame to obtain its aesthetic score; and sliding a window whose length is the required duration over the video, determining the sum of the aesthetic scores of the image frames contained in the window after each slide, and extracting the segment contained in the window with the highest sum.
Here, the specific process of the aesthetic scoring is similar to that described above and will not be described in detail.
In this way, the most exciting or most aesthetically pleasing segment of the video can be cropped as material adapted to the requirement of the corresponding material filling position.
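The sliding-window selection in the third example can be sketched as follows. The per-frame scores are assumed to come from the aesthetic scoring model described elsewhere in the document; the linear one-frame-at-a-time slide is one reasonable reading of "sliding" here, not a detail the patent fixes:

```python
def best_segment(frame_scores, fps, slot_duration):
    """Return (start_frame, end_frame) of the window with the highest total
    aesthetic score, where the window length equals the duration of material
    that the fill position requires."""
    window = int(fps * slot_duration)      # number of frames per window
    if window >= len(frame_scores):
        return 0, len(frame_scores)        # video is no longer than the slot
    total = sum(frame_scores[:window])     # score of the first window
    best_start, best_total = 0, total
    for start in range(1, len(frame_scores) - window + 1):
        # Slide by one frame: drop the leaving frame, add the entering frame.
        total += frame_scores[start + window - 1] - frame_scores[start - 1]
        if total > best_total:
            best_start, best_total = start, total
    return best_start, best_start + window

scores = [0.2, 0.9, 0.8, 0.1, 0.3, 0.7, 0.95, 0.9, 0.2, 0.1]
print(best_segment(scores, fps=1, slot_duration=3))  # (5, 8): frames 5..7 score highest
```

The incremental update keeps the scan linear in the number of frames instead of recomputing each window sum from scratch.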
As a fourth example, in response to a clipping operation on the video, the client extracts the segment corresponding to the clipping operation, whose duration is the duration required by the material filling position.

In this way, the user can cut out the segment of the video that best meets his or her needs as material adapted to the requirement of the corresponding material filling position.
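The duration-adaptation strategies above (slow down, splice, speed up, crop) can be summarized in one dispatch sketch. The function name, the `prefer_speed_change` switch, and the return convention are illustrative assumptions:

```python
def adapt_duration(video_duration, slot_duration, prefer_speed_change=True):
    """Pick a duration-adaptation strategy for a video relative to a fill slot.

    Returns (strategy, parameter): 'speed' carries the playback-rate factor,
    'splice' the number of copies to join, 'crop' the seconds to keep.
    """
    if video_duration == slot_duration:
        return "none", 1.0
    if video_duration < slot_duration:
        if prefer_speed_change:
            # Keep every frame and lower the frame rate: slower playback stretches the video.
            return "speed", video_duration / slot_duration
        # Repeat the clip end to end until it covers the slot (last copy cropped).
        copies = -(-slot_duration // video_duration)  # ceiling division
        return "splice", copies
    if prefer_speed_change:
        # Keep every frame and raise the frame rate: faster playback compresses the video.
        return "speed", video_duration / slot_duration
    return "crop", slot_duration

print(adapt_duration(5, 10))                             # ('speed', 0.5)
print(adapt_duration(5, 10, prefer_speed_change=False))  # ('splice', 2)
print(adapt_duration(12, 10, prefer_speed_change=False)) # ('crop', 10)
```

A playback-rate factor below 1 corresponds to the extend case and a factor above 1 to the shorten case, matching the worked 5-second/10-second splicing example above.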
In some embodiments, before applying the material adapted to the target video template, the method may further include: the client updating, in response to a material editing operation directed to the target video template, the material adapted to the target video template that is presented in the material filling positions of the target video template. In this way, the user can adjust the adapted material before it is applied, so that the generated video better meets the user's needs.
A specific implementation of editing the material adapted to the target video template is described below.
As a first example, the client presents at least one candidate material and, in response to an interchange operation on material adapted to the target video template, replaces at least one piece of the adapted material with a candidate material.

Here, the candidate material may be material (e.g., a photo or a video) contained in the user's local album, material stored by the user in the cloud, or material crawled from the Internet. The interchange operation may be an operation of dragging the selected candidate material onto the corresponding material filling position, whereby the adapted material presented in that position is replaced with the selected candidate material.

It should be noted that when the interchanged candidate material is a video longer than the duration required by the material filling position, the candidate material is cropped or shortened to match the required duration; when it is a video shorter than the required duration, the candidate material is extended or repeatedly spliced to match the required duration. The specific implementations of cropping, shortening, extending, and splicing are similar to those described above and are not repeated.
For example, in fig. 7A, material adapted to the target video template is presented in the intelligent recommendation module 702, and the user may replace the material in the intelligent recommendation module 702 with material in a local material library below the intelligent recommendation module 702.
In this way, when the adapted material presented by the client does not meet the user's needs, the user can replace it, which improves the operation experience.
As a second example, the client interchanges the material filled in at least two material filling positions of the target video template in response to a position adjustment operation on material adapted to the target video template.

Here, the position adjustment operation may be an operation of dragging the material of one material filling position onto another material filling position, whereby the materials filled in the two positions are interchanged.

It should be noted that when an interchanged material is a video longer than the duration required by its new material filling position, it is cropped or shortened to match that duration; when it is a video shorter than the required duration, it is extended or repeatedly spliced to match that duration. The specific implementations are similar to those described above and are not repeated.

In this way, when the ordering of the adapted material presented by the client does not meet the user's needs, the user can reorder the material, which improves the operation experience.
As a third example, the client updates the duration of material adapted to the target video template in response to a duration adjustment operation on that material.

Here, when the material to be adjusted is triggered, a time axis corresponding to the material and a synchronized preview page are presented; a sliding-preview operation on the time axis is the duration adjustment operation, allowing the user to select a clip that meets his or her needs.

In this way, when the presentation duration of the adapted material does not meet the user's needs, the user can adjust it, which improves the operation experience.
As a fourth example, the client adds a special effect to material adapted to the target video template in response to a special effect operation on that material.

Here, when the material to be operated on is triggered, a special effect button and a synchronized preview page are presented; an operation on the special effect button is the special effect operation, allowing the user to add a special effect.

In this way, when the special effect of the adapted material does not meet the user's needs, the user can add a special effect to the material, which improves the operation experience.
As a fifth example, the client removes a selected object from material adapted to the target video template in response to a removal operation on that material.

Here, the object may be an arbitrary region of the material, such as a human face or the background. The removal manner includes at least one of: matting; mosaic; and sticker overlay. When the material to be adjusted is triggered, a removal button and a synchronized preview page are presented; an operation on the removal button is the removal operation, allowing the user to remove the selected object.

In this way, regions that do not meet the user's needs are removed from the adapted material, which improves the operation experience.
As can be seen from the above, in step S106, material adapted to the target video template is selected automatically and applied to the target video template to generate the corresponding video preview.

As an alternative, material adapted to the target video template may be selected manually to generate the corresponding video preview. In that case, step S106 may be replaced with: the client presents candidate materials; presents at least one material filling position included in the target video template, presenting in each material filling position the requirement of that position so as to prompt manual selection of adapted material; takes, in response to a selection operation on the candidate materials, the selected candidate materials as material adapted to the target video template; and applies the adapted material to generate the corresponding video preview.
As an example, the client presents the candidate materials in at least one of the following orders: in ascending or descending order of shooting time; in descending order of viewing count; in descending order of interaction (e.g., sharing) count; or in descending order of aesthetic score. Sorting the candidate materials along multiple dimensions makes it more likely that materials meeting the user's needs are presented first.
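The four ordering dimensions above reduce to one sort with different keys. A minimal sketch, in which the dictionary field names (`shot_time`, `views`, `shares`, `aesthetic`) are illustrative assumptions:

```python
from datetime import datetime

def rank_candidates(materials, key="aesthetic"):
    """Order candidate materials for presentation along one listed dimension.

    Each material is a dict; the field names here are assumptions for the sketch.
    """
    if key == "shot_time_asc":
        return sorted(materials, key=lambda m: m["shot_time"])
    if key == "shot_time_desc":
        return sorted(materials, key=lambda m: m["shot_time"], reverse=True)
    if key in ("views", "shares", "aesthetic"):
        # Viewing count, interaction count, and aesthetic score all sort descending.
        return sorted(materials, key=lambda m: m[key], reverse=True)
    raise ValueError(f"unknown sort dimension: {key}")

library = [
    {"name": "a.jpg", "shot_time": datetime(2020, 1, 3), "views": 4, "shares": 1, "aesthetic": 0.71},
    {"name": "b.jpg", "shot_time": datetime(2020, 1, 1), "views": 9, "shares": 5, "aesthetic": 0.64},
    {"name": "c.jpg", "shot_time": datetime(2020, 1, 2), "views": 2, "shares": 3, "aesthetic": 0.88},
]
print([m["name"] for m in rank_candidates(library, "aesthetic")])  # ['c.jpg', 'a.jpg', 'b.jpg']
```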
As an example, in response to a moving operation that moves a candidate material to a material filling position in the target video template, the client takes the moved candidate material as the material adapted to that material filling position.
In some embodiments, the client presents a material shooting button; in response to a shooting operation on the shooting button, takes the shot material as material adapted to the target video template; and applies the adapted material to generate the corresponding video preview. In this way, when the user cannot obtain suitable materials, the user can shoot materials that meet the requirements.

As an example, the client fills the shot materials into the material filling positions of the target video template in sequence and supports editing the material in each position; the specific implementation of editing the material in a material filling position is similar to that of editing material adapted to the target video template and is not repeated.
In step S107, the client presents a video preview.
In some embodiments, while presenting the video preview, the client also presents the material filling positions included in the target video template and the material that the video preview fills at each material filling position. The client can then, in response to an editing operation on the material corresponding to a material filling position of the video preview, update the material filled at at least one material filling position, and generate and present a new video preview according to the material corresponding to the target video template at each material filling position.
A specific implementation of editing the material corresponding to each material filling position for the video preview is described below.
As a first example, the client updates the duration of the material corresponding to at least one material filling position of the video preview in response to a duration adjustment operation on the material corresponding to a material filling position.

Here, when the material to be adjusted is triggered, a time axis corresponding to the material and a synchronized preview page are presented; a sliding-preview operation on the time axis is the duration adjustment operation, allowing the user to select a clip that meets his or her needs.

In this way, when the duration of the material corresponding to a material filling position of the video preview does not meet the user's needs, the user can adjust the presentation duration, which improves the operation experience.
As a second example, the client updates the special effect used in the material corresponding to at least one material filling position of the video preview in response to a special effect operation on the material corresponding to a material filling position.

Here, when the material to be operated on is triggered, a special effect button and a synchronized preview page are presented; an operation on the special effect button is the special effect operation, allowing the user to add a special effect.

In this way, when the special effect of the material corresponding to a material filling position of the video preview does not meet the user's needs, the user can add a special effect to the material, which improves the operation experience.
As a third example, the client interchanges the material corresponding to the video preview in any two material filling positions in response to a position adjustment operation on the material corresponding to a material filling position.

Here, the position adjustment operation may be an operation of dragging the material of one material filling position onto another material filling position, whereby the materials filled in the two positions are interchanged.

It should be noted that when an interchanged material is a video longer than the duration required by its new material filling position, it is cropped or shortened to match that duration; when it is a video shorter than the required duration, it is extended or repeatedly spliced to match that duration. The specific implementations are similar to those described above and are not repeated.

In this way, when the ordering of the material corresponding to the material filling positions of the video preview does not meet the user's needs, the user can reorder the material, which improves the operation experience.
In step S108, the client generates a corresponding video file from the video preview in response to the export operation for the video preview.
After generating the video file, the client allows the user to save it locally and/or share it with a sharing object, where the sharing object includes a social friend or a social platform. Seamlessly connecting the video generation process with the video sharing process shortens the user's operation path and improves the operation experience.

For example, in fig. 7A, when the export button 705 is triggered, the corresponding video file is generated, and the user is prompted by a pop-up window to save the generated video file locally and/or share it with a sharing object.
The following describes a processing method for video editing according to an embodiment of the present invention.
Referring to fig. 7A and 7B, fig. 7A and 7B are schematic diagrams of application scenarios provided by an embodiment of the present invention.
Fig. 7A shows the process of intelligently recommending local photos according to a video template. In fig. 7A, when the user triggers the clip button 701, the background (i.e., the server) calls the user's local album and analyzes it; after the analysis is completed, an intelligent recommendation module 702 is presented in the photo selection interface, and the photos recommended in the intelligent recommendation module 702 are those that better match the video template. When the user triggers the selection button 703, all photos in the intelligent recommendation module 702 are selected; when the selection is completed, the user clicks the preview button 704 to generate the corresponding video preview. When the user triggers the export button 705, the video clip is completed. In addition, when the user does not select photos in the intelligent recommendation module 702, the user may scroll below the intelligent recommendation module 702 to make an autonomous selection.

Fig. 7B shows the process of intelligently recommending local videos according to a video template. In fig. 7B, when the user triggers the clip button 706, the background (i.e., the server) calls the user's local album and analyzes it; after the analysis is completed, an intelligent recommendation module 707 is presented in the video selection interface, and the videos recommended in the intelligent recommendation module 707 are those that better match the video template. When the user triggers the selection button 708, all videos in the intelligent recommendation module 707 are selected; after the selection is completed, the user clicks the preview button 709 to generate the corresponding video preview. When the user triggers the export button 710, the video clip is completed. In addition, when the user does not select videos in the intelligent recommendation module 707, the user may scroll below the intelligent recommendation module 707 to make an autonomous selection.
It should be noted that the presentation manner of the intelligent recommendation module is not limited to the separate presentation described above: the recommended materials may instead be marked (e.g., with a color mark or a shape mark) among the materials of the local album to distinguish them from the rest; they may be presented on a separate page; or they may be presented in a pop-up window.
Next, a specific implementation of the video editing processing method provided by the embodiment of the present invention is described.
Referring to fig. 8, fig. 8 is a flowchart illustrating a video editing processing method according to an embodiment of the present invention, which will be described in detail with reference to fig. 8.
In step S801, the client initiates a request to the server in response to a clipping operation for a video template, and the server acquires video template information.
In some embodiments, the client allows the user to select a favorite video template. When the user triggers the clip button, a request is sent to the server, and the server obtains the corresponding video template information (or attribute information), such as the material type, the number of segments, and the duration.
In step S802, the server obtains local material (e.g., photos or videos), and filters the material that conforms to the video template.
In some embodiments, the server acquires the materials in the user's local album and performs image recognition and classification on them to determine the type of each material, such as person, scenery, food, or pet; it then sifts out the materials of the same type as required by the video template.
In step S803, the server performs aesthetic scoring on the screened materials and sorts them in descending order of aesthetic score.

In some embodiments, the server scores the screened materials with an aesthetic scoring model and sorts them by score from high to low.

In step S804, the server determines the number N of materials required by the video template and sends the top N materials of the descending order to the client for presentation in the intelligent recommendation module of the material presentation page.
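The server-side pipeline of steps S802 to S804 (filter by the template's required type, score aesthetically, return the top N) can be sketched as follows. The `classify` and `score` callables stand in for the image-recognition and aesthetic-scoring models; their interfaces are assumptions for illustration, not a real API:

```python
def recommend(materials, template_type, n, classify, score):
    """Filter materials by the template's required type, score them
    aesthetically, and return the top-n in descending order of score."""
    matching = [m for m in materials if classify(m) == template_type]  # S802
    ranked = sorted(matching, key=score, reverse=True)                 # S803
    return ranked[:n]                                                  # S804

# Toy stand-ins for the recognition and scoring models: each album entry is
# (name, recognized type, aesthetic score).
album = [("dog.jpg", "pet", 0.4), ("cat.jpg", "pet", 0.9), ("hill.jpg", "scenery", 0.8)]
top = recommend(album, "pet", 1,
                classify=lambda m: m[1],
                score=lambda m: m[2])
print([name for name, _, _ in top])  # ['cat.jpg']
```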
In step S805, the client presents a video preview in response to the selection operation; the client generates a corresponding video in response to the export operation.
Referring to fig. 9, fig. 9 is a schematic flowchart of the intelligent recommendation of materials according to the embodiment of the present invention. In fig. 9, the process of intelligently recommending the material is specifically divided into three steps, which are respectively: (1) analyzing a video template; (2) identifying and screening local materials; (3) aesthetic scoring and ranking.
Since the standard for intelligently recommending materials is mainly the video template, the server needs to analyze the video template before requesting the user's local album: it analyzes the main content of the video included in the template (such as person, scenery, food, or pet) and the style of the whole video (such as fun, nostalgia, youth, or emotion), which serve as the main reference dimensions for the first-layer screening of the materials in the local album.
After the server acquires the local photo album, the materials in the local photo album need to be identified and classified, and the materials matched with the video template are extracted.
The server performs aesthetic scoring on the materials in the local album along aesthetic scoring dimensions (such as picture definition, picture proportion, and picture layout, without being limited thereto) to obtain the aesthetic score of each material; the materials of highest aesthetic quality (i.e., aesthetic degree) in the local album can then be obtained from the level of the aesthetic score.
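One simple way to combine the listed scoring dimensions into a single aesthetic score is a weighted sum. The patent names the dimensions but fixes no formula, so the weights and the linear combination below are purely illustrative assumptions:

```python
def aesthetic_score(clarity, proportion, layout, weights=(0.4, 0.3, 0.3)):
    """Combine per-dimension scores (each in [0, 1]) into one aesthetic score.

    The dimensions correspond to picture definition, picture proportion, and
    picture layout; the weights are an assumption for this sketch.
    """
    w_clarity, w_proportion, w_layout = weights
    return w_clarity * clarity + w_proportion * proportion + w_layout * layout

print(round(aesthetic_score(0.9, 0.8, 0.7), 2))  # 0.81
```

In a real system the per-dimension scores would themselves come from a learned model rather than being hand-supplied.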
Through intelligent analysis, the embodiment of the invention helps the user screen, from the materials of the local album, materials better suited to the target video template, so that the user completes the material selection step merely by triggering the selection button and can finish video editing in a shorter time. Moreover, the video recommended by intelligent aesthetics tends to be better than one assembled by a user lacking an eye for beauty, helping the user edit a more exquisite video.
Continuing with the exemplary structure of the video editing processing apparatus 555 provided by the embodiment of the present invention implemented as software modules, in some embodiments, as shown in fig. 3, the software modules stored in the video editing processing apparatus 555 of the memory 550 may include:
a video presentation module 5551 for presenting at least one video sample in response to a video template viewing operation;
a material presentation module 5552, configured to, in response to a template multiplexing operation for any one of the video samples, obtain a target video template;
the target video template is used for editing and forming the video sample selected by the template multiplexing operation;
the material presenting module 5552 is further configured to present a material adapted to the target video template;
wherein the adapted material is used to populate into the target video template to form a video preview similar to the video sample selected by the template multiplexing operation.
In the above solution, the video presenting module 5551 is further configured to present a cover of at least one video sample, and present a multiplexing template button; or playing the video sample and presenting a multiplexing template button corresponding to the video sample; wherein the operation for triggering the multiplexing template button is a template multiplexing operation for the video sample, and the multiplexing template button is used for characterizing multiplexing the target video template to edit a new video.
In the above solution, the material presenting module 5552 is further configured to present at least one material filling position included in the target video template, and present material in each material filling position, which meets the requirement of the corresponding material filling position.
In the above solution, the material presenting module 5552 is further configured to present one material in each material filling position in a one-to-one correspondence, wherein the presentation duration of each presented material matches the duration required by the corresponding material filling position, and its type matches the type required by the corresponding material filling position.
In the above solution, the video editing processing apparatus 555 further includes: a selecting module, configured to apply, in response to an overall selecting operation for a material adapted to the target video template, the material adapted to the target video template to generate a corresponding video preview; presenting the video preview.
In the above scheme, the material adapted to the target video template is a material adapted to the requirement of the material filling position in the target video template; the selecting module is further used for filling materials which are matched with the requirements of the material filling positions in each material filling position of the target video template; wherein the requirements comprise the type and duration of the material to be filled; and generating corresponding video preview for the filled target video template.
In the above solution, the video editing processing apparatus 555 further includes: and the clipping module is used for clipping the video to be consistent with the duration of the material required to be filled in the material filling position when the material is a video and the duration of the video is greater than the duration of the material required to be filled in the material filling position.
In the above solution, the cropping module is further configured to perform at least one of the following operations on the video: intercepting a segment which takes the starting time point of the video as the starting position of the clipping and has the duration as the duration of the material required to be filled in the material filling position; intercepting a segment which takes the termination time point of the video as the termination position of clipping and has the duration as the duration of the material required to be filled in the material filling position; intercepting a segment which has the highest aesthetic score and the duration of which is the duration of the material required to be filled at the material filling position in the video; and in response to the clipping operation aiming at the video, a segment which corresponds to the clipping operation and has the duration of the material required to be filled by the material filling position is intercepted.
In the above solution, the video editing processing apparatus 555 further includes: and the material editing module is used for responding to the material editing operation matched with the target video template so as to update the material matched with the target video template.
In the above solution, the material editing module is further configured to present at least one candidate material, and execute at least one of: in response to an interchange operation for material adapted to the target video template, interchanging at least one of the adapted material as the candidate material; interchanging the material filled in at least two material filling positions in the target video template in response to a position adjustment operation for the material adapted to the target video template; updating the duration of the material adapted to the target video template in response to a duration adjustment operation for the material adapted to the target video template; adding a special effect in the material adapted to the target video template in response to a special effect operation on the material adapted to the target video template.
In the above solution, the video editing processing apparatus 555 further includes: the generation module is used for presenting material filling positions included in the target video template and the materials filled in each material filling position by the video preview; in response to the editing operation of the material corresponding to each material filling position of the video preview, updating the material corresponding to at least one material filling position of the video preview; and generating a new video preview according to the material corresponding to the target video template at each material filling position, and presenting the new video preview.
In the foregoing solution, the generation module is further configured to perform at least one of the following: in response to a duration adjustment operation on the material corresponding to a material filling position of the video preview, updating the duration of the material at at least one material filling position; in response to a special effect operation on the material corresponding to a material filling position of the video preview, updating the special effect used by the material at at least one material filling position; and, in response to a position adjustment operation on the material corresponding to a material filling position of the video preview, interchanging the materials at any two material filling positions.
In the above solution, the video editing processing apparatus 555 further includes an export module, configured to generate, in response to an export operation on the video preview, a corresponding video file from the video preview.
In the above solution, the selecting module is further configured to present candidate materials; present at least one material filling position included in the target video template, showing at each material filling position the requirements of that position so as to prompt manual selection of adapted material; and, in response to a selection operation on the candidate materials, take the selected candidate materials as the material adapted to the target video template.
In the above solution, the video editing processing apparatus 555 further includes an identification module, configured to acquire a plurality of candidate materials; perform image recognition on the candidate materials to determine the type of each one; and select from them the materials whose type matches what each material filling position in the target video template requires to be filled, as the material adapted to the target video template.
In the foregoing solution, when the candidate material is a picture, the identification module is further configured to perform the following processing on each picture: dividing the picture into a plurality of candidate boxes; predicting, from the feature vector of each candidate box, the type of the target it contains; and taking the type of the target as the type of the picture.
In the foregoing solution, when the candidate material is a video, the identification module is further configured to perform the following processing on each video: extracting a plurality of image frames contained in the video; dividing each image frame into a plurality of candidate boxes; predicting, from the feature vector of each candidate box, the type of the target it contains; taking the type of the target as the type of the corresponding image frame, thereby obtaining the type of each image frame; and determining the most frequently occurring type among the image frames as the type of the video.
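The per-video type determination above — classify each extracted image frame, then take the most frequently occurring frame type as the video type — can be sketched as follows. This is an illustrative sketch under the assumption that a per-frame classifier exists; the `classify_frame` callable stands in for the candidate-box prediction step and is a hypothetical name.

```python
from collections import Counter

def classify_video(frames, classify_frame):
    """Determine a video's material type from its sampled image frames.

    classify_frame is a hypothetical per-image classifier returning a
    type label (e.g. "person", "food", "landscape") for one frame; the
    description above obtains it by splitting the frame into candidate
    boxes and predicting the target type from each box's feature vector.
    """
    frame_types = [classify_frame(f) for f in frames]
    # the most frequently occurring frame type becomes the video type
    return Counter(frame_types).most_common(1)[0][0]
```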
In the above solution, the identification module is further configured to select, for each material filling position in the target video template, a plurality of materials of the same type as that position requires to be filled; perform aesthetic scoring on the materials corresponding to each material filling position; sort the materials for each position by aesthetic score in descending order and take the first-ranked material as the material adapted to that position; and take the set of materials adapted to all of the material filling positions as the material adapted to the target video template.
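The selection pipeline above (filter candidates by the required type, score them aesthetically, sort in descending order, take the first-ranked material per position) can be sketched as below; `pick_materials` and the material dictionaries are illustrative assumptions, not the patented implementation.

```python
def pick_materials(slots, candidates, aesthetic_score):
    """For each material filling position, keep the candidates of the
    required type, sort them by aesthetic score in descending order,
    and take the first-ranked one as the adapted material.

    slots maps a position id to the type it requires; aesthetic_score
    is a hypothetical callable returning a material's score.
    """
    adapted = {}
    for slot_id, required_type in slots.items():
        same_type = [m for m in candidates if m["type"] == required_type]
        ranked = sorted(same_type, key=aesthetic_score, reverse=True)
        # first-ranked material, or None if no candidate has the type
        adapted[slot_id] = ranked[0] if ranked else None
    return adapted
```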
In the above solution, the identification module is further configured to invoke a neural network model to extract a feature vector from each material and map the extracted feature vector to a corresponding aesthetic score; the neural network model is trained on samples, each consisting of a sample material and the aesthetic score labelled for it.
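As a stand-in for the neural network described above, the mapping from a feature vector to an aesthetic score can be illustrated with a single linear layer followed by a sigmoid. This is a deliberately minimal sketch, not the patented model: `aesthetic_score` is a hypothetical name, and in practice the weights would be learned from (sample material, labelled aesthetic score) training pairs rather than supplied by hand.

```python
import math

def aesthetic_score(features, weights, bias=0.0):
    """Map a material's feature vector to an aesthetic score in (0, 1).

    A linear-plus-sigmoid stand-in for the scoring network: the dot
    product of features and weights is squashed into a bounded score.
    """
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```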
Embodiments of the present invention provide a computer program product or computer program that includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the video editing processing method provided by the embodiments of the present invention.
Embodiments of the present invention provide a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the video editing processing method provided by the embodiments of the present invention, for example the method shown in Fig. 4, Fig. 5, Fig. 6, or Fig. 8. Here, the computer includes various computing devices, including intelligent terminals and servers.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM, or may be any of various devices including one of, or any combination of, the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions can correspond, but do not necessarily correspond, to files in a file system, and can be stored in a portion of a file that holds other programs or data, e.g., in one or more scripts stored in a hypertext markup language document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the embodiments of the present invention have the following beneficial effects:
(1) While watching a video sample, the user can acquire a video template of interest at any time for subsequent video production. This strengthens the linkage between watching videos and producing them, lets the user move between the two activities seamlessly, and shortens the user's operation path.
(2) The neural network model can accurately judge the aesthetic quality of materials. Beyond selecting materials whose type matches what each material filling position in the target video template requires to be filled, materials with higher aesthetic scores can be preferentially recommended to the user. Compared with materials picked by a user with a poor eye, these are of higher quality and help the user edit a more polished video.
(3) The method supports presenting not only materials whose type matches the requirement of the corresponding material filling position, but also materials whose duration matches it, improving the fit between the materials and the target video template and thereby the aesthetic quality of the generated video.
(4) While a video sample is being presented, the user's template multiplexing operation is responded to by presenting the user with material adapted to the target video template corresponding to that sample. Compared with directly presenting every material in the material library, this makes material selection during subsequent video production more efficient; and because the presented material fits the target video template better, the produced video has a better aesthetic effect.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (15)

1. A method for processing video editing, the method comprising:
presenting at least one video sample in response to a video template viewing operation;
in response to a template multiplexing operation on any one of the video samples, acquiring a target video template;
the target video template is used for editing and forming the video sample selected by the template multiplexing operation;
presenting material adapted to the target video template;
wherein the adapted material is used to populate into the target video template to form a video preview similar to the video sample selected by the template multiplexing operation.
2. The method of claim 1, wherein said presenting at least one video sample comprises:
presenting a cover of at least one video sample and presenting a reuse template button; or,
playing the video sample, and presenting a multiplexing template button corresponding to the video sample;
wherein the operation for triggering the multiplexing template button is a template multiplexing operation for the video sample, and the multiplexing template button is used for characterizing multiplexing the target video template to edit a new video.
3. The method of claim 1, wherein the presenting material adapted to the target video template comprises:
presenting at least one material fill location included in the target video template and presenting material in each of the material fill locations that meets the requirements of the respective material fill location.
4. A method according to claim 3, wherein said presenting material in each of said material fill locations that meets the requirements of the respective material fill location comprises:
presenting, at each material filling position, the material corresponding one-to-one to that position;
wherein the presentation duration of each presented material is consistent with the duration required by the corresponding material filling position, and the type of each presented material is consistent with the type required by the corresponding material filling position.
5. The method of claim 1, further comprising:
responding to the overall selection operation aiming at the material adapted to the target video template, and applying the material adapted to the target video template to generate a corresponding video preview;
presenting the video preview.
6. The method of claim 5, wherein
the material matched with the target video template is matched with the material filling position in the target video template;
the applying the material adapted to the target video template to generate a corresponding video preview includes:
filling, at each material filling position of the target video template, material that matches the requirements of that material filling position, the requirements comprising the type and duration of the material to be filled;
and generating corresponding video preview for the filled target video template.
7. The method of claim 6, wherein prior to said filling material that is adapted to the requirements of said material filling location, said method further comprises:
when the material is a video and the duration of the video is greater than the duration of the material required to be filled in the material filling position, clipping the video to be consistent with the duration of the material required to be filled in the material filling position.
8. The method of claim 7, wherein said cropping the video to coincide with a duration of material that the material fill location requires to fill comprises:
performing at least one of the following operations with respect to the video:
intercepting a segment that starts at the start time point of the video and whose duration equals the duration of the material required to be filled at the material filling position;
intercepting a segment that ends at the end time point of the video and whose duration equals the duration of the material required to be filled at the material filling position;
intercepting the segment of the video that has the highest aesthetic score and whose duration equals the duration of the material required to be filled at the material filling position; and
in response to a clipping operation on the video, intercepting a segment that corresponds to the clipping operation and whose duration equals the duration of the material required to be filled at the material filling position.
9. The method of claim 5, wherein prior to said applying material adapted to the target video template, the method further comprises:
updating, in response to a material editing operation on the material adapted to the target video template, the material adapted to the target video template.
10. The method of claim 9, wherein the updating the material adapted to the target video template in response to the material editing operation for the material adapted to the target video template comprises:
presenting at least one candidate material and performing at least one of:
in response to an interchange operation for material adapted to the target video template, interchanging at least one of the adapted material as the candidate material;
interchanging the material filled in at least two material filling positions in the target video template in response to a position adjustment operation for the material adapted to the target video template;
updating the duration of the material adapted to the target video template in response to a duration adjustment operation for the material adapted to the target video template;
adding a special effect in the material adapted to the target video template in response to a special effect operation on the material adapted to the target video template.
11. The method of claim 5, wherein when presenting the video preview, the method further comprises:
presenting material filling positions included in the target video template and materials filled in each material filling position by the video preview;
in response to the editing operation of the material corresponding to each material filling position of the video preview, updating the material corresponding to at least one material filling position of the video preview;
and generating a new video preview according to the material corresponding to the target video template at each material filling position, and presenting the new video preview.
12. The method of claim 11, wherein updating the material corresponding to the video preview at the at least one material fill location in response to the editing operation on the material corresponding to the video preview at each material fill location comprises:
performing at least one of:
responding to the time length adjustment operation of the material corresponding to each material filling position aiming at the video preview, and updating the time length of the material corresponding to at least one material filling position of the video preview;
in response to the special effect operation aiming at the material corresponding to each material filling position of the video preview, updating the special effect used by the video preview in the material corresponding to at least one material filling position;
and in response to the position adjustment operation of the material corresponding to each material filling position aiming at the video preview, interchanging the material corresponding to the video preview in any two material filling positions.
13. A processing apparatus for video editing, the apparatus comprising:
the video presenting module is used for responding to the video template viewing operation and presenting at least one video sample;
the material presentation module is used for responding to template multiplexing operation aiming at any one video sample and acquiring a target video template;
the target video template is used for editing and forming the video sample selected by the template multiplexing operation;
the material presenting module is also used for presenting the material matched with the target video template;
wherein the adapted material is used to populate into the target video template to form a video preview similar to the video sample selected by the template multiplexing operation.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the method of video editing as claimed in any one of claims 1 to 12 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium storing executable instructions for implementing the method of processing video editing of any one of claims 1 to 12 when executed by a processor.
CN202010672591.2A 2020-07-14 2020-07-14 Video editing processing method and device, electronic equipment and storage medium Pending CN111930994A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010672591.2A CN111930994A (en) 2020-07-14 2020-07-14 Video editing processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010672591.2A CN111930994A (en) 2020-07-14 2020-07-14 Video editing processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111930994A true CN111930994A (en) 2020-11-13

Family

ID=73313526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010672591.2A Pending CN111930994A (en) 2020-07-14 2020-07-14 Video editing processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111930994A (en)


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113849088A (en) * 2020-11-16 2021-12-28 阿里巴巴集团控股有限公司 Target picture determining method and device
CN113157972B (en) * 2021-04-14 2023-09-19 北京达佳互联信息技术有限公司 Recommendation method and device for video cover document, electronic equipment and storage medium
CN113157972A (en) * 2021-04-14 2021-07-23 北京达佳互联信息技术有限公司 Recommendation method and device for video cover documents, electronic equipment and storage medium
CN115269889A (en) * 2021-04-30 2022-11-01 北京字跳网络技术有限公司 Clipping template searching method and device
CN115484399B (en) * 2021-06-16 2023-12-12 荣耀终端有限公司 Video processing method and electronic equipment
CN115484399A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Video processing method and electronic equipment
CN113411667A (en) * 2021-06-19 2021-09-17 杭州影笑科技有限责任公司 Video clip editing system and optimization method applied to smart phone APP
CN114157917A (en) * 2021-11-29 2022-03-08 北京百度网讯科技有限公司 Video editing method and device and terminal equipment
CN114157917B (en) * 2021-11-29 2024-04-16 北京百度网讯科技有限公司 Video editing method and device and terminal equipment
WO2023093907A1 (en) * 2021-11-29 2023-06-01 北京字跳网络技术有限公司 Video processing method and apparatus, and device and medium
CN116095412A (en) * 2022-05-30 2023-05-09 荣耀终端有限公司 Video processing method and electronic equipment
CN116095412B (en) * 2022-05-30 2023-11-14 荣耀终端有限公司 Video processing method and electronic equipment
CN114979495B (en) * 2022-06-28 2024-04-12 北京字跳网络技术有限公司 Method, apparatus, device and storage medium for content shooting
CN114979495A (en) * 2022-06-28 2022-08-30 北京字跳网络技术有限公司 Method, apparatus, device and storage medium for content shooting
CN115146087A (en) * 2022-09-01 2022-10-04 北京达佳互联信息技术有限公司 Resource recommendation method, device, equipment and storage medium
CN116347009A (en) * 2023-02-24 2023-06-27 荣耀终端有限公司 Video generation method and electronic equipment
CN116347009B (en) * 2023-02-24 2023-12-15 荣耀终端有限公司 Video generation method and electronic equipment
CN116471452A (en) * 2023-05-10 2023-07-21 武汉亿臻科技有限公司 Video editing platform based on intelligent AI
CN116471452B (en) * 2023-05-10 2024-01-19 武汉亿臻科技有限公司 Video editing platform based on intelligent AI

Similar Documents

Publication Publication Date Title
CN111930994A (en) Video editing processing method and device, electronic equipment and storage medium
CN112449231B (en) Multimedia file material processing method and device, electronic equipment and storage medium
KR102290419B1 (en) Method and Appratus For Creating Photo Story based on Visual Context Analysis of Digital Contents
US20220360825A1 (en) Livestreaming processing method and apparatus, electronic device, and computer-readable storage medium
CN111835986B (en) Video editing processing method and device and electronic equipment
CN109547819B (en) Live list display method and device and electronic equipment
US20240107127A1 (en) Video display method and apparatus, video processing method, apparatus, and system, device, and medium
CN110968736B (en) Video generation method and device, electronic equipment and storage medium
CN103686344B (en) Strengthen video system and method
KR102117433B1 (en) Interactive video generation
CN114339285B (en) Knowledge point processing method, video processing method, device and electronic equipment
CN110868635A (en) Video processing method and device, electronic equipment and storage medium
CN113746875B (en) Voice packet recommendation method, device, equipment and storage medium
CN108737903B (en) Multimedia processing system and multimedia processing method
JP7240505B2 (en) Voice packet recommendation method, device, electronic device and program
US20230018502A1 (en) Display apparatus and method for person recognition and presentation
CN113411674A (en) Video playing control method and device, electronic equipment and storage medium
CN110647374A (en) Interaction method and device for holographic display window and electronic equipment
CN113157972A (en) Recommendation method and device for video cover documents, electronic equipment and storage medium
CN113709575B (en) Video editing processing method and device, electronic equipment and storage medium
CN114691926A (en) Information display method and electronic equipment
CN112165626A (en) Image processing method, resource acquisition method, related device and medium
CN112235516A (en) Video generation method, device, server and storage medium
CN110850996A (en) Picture/video processing method and device applied to input method
CN115499672B (en) Image display method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination