CN110971957B - Video editing method and device and mobile terminal - Google Patents


Info

Publication number
CN110971957B
CN110971957B
Authority
CN
China
Prior art keywords
audio
file
video
floating layer
track
Prior art date
Legal status
Active
Application number
CN201811161279.6A
Other languages
Chinese (zh)
Other versions
CN110971957A (en)
Inventor
王丽娜 (Wang Lina)
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Application filed by Alibaba Group Holding Ltd
Priority: CN201811161279.6A
Publication of CN110971957A
Application granted; publication of CN110971957B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video editing method, a video editing device and a mobile terminal. The method comprises the following steps: displaying an audio editing interface based on the triggering of an audio operation interface in a video editing interface, wherein the audio editing interface comprises a video preview area, a first floating layer for displaying an audio file list and a second floating layer for performing audio editing; based on a selection operation on an audio file in the audio file list in the first floating layer, collapsing the first floating layer and displaying the second floating layer; generating an audio track file corresponding to the audio file based on a first operation on the selected audio file in the second floating layer; and adding the generated audio track file to the video file. The invention enables music and video to be trimmed in real time side by side, making it easy for users to see the mapping relation between music and video.

Description

Video editing method and device and mobile terminal
Technical Field
The invention relates to the field of mobile internet, in particular to a video editing method and device and a mobile terminal.
Background
With the rapid development of the mobile internet, more and more users use mobile terminals not only as communication tools but also as personal information management tools and entertainment platforms. A user may install various applications on a mobile terminal, via which the user may shop, socialize, be entertained, and manage personal finances on the mobile internet. One popular category is short-video applications such as Douyin (TikTok) and Kuaishou.
In short video applications, there is a need to add music to short videos. In existing schemes, however, the music to be added must first be selected on a music library interface, after which the user jumps to a video editing interface to add the selected music to the short video. This approach cannot trim the music and the video in real time side by side: the user cannot see the mapping relation between the music and the video and can only jump back and forth between the separate music and video interfaces for repeated editing. Moreover, only a single piece of music can be imported; editing of multiple pieces of music is not supported.
Disclosure of Invention
In view of the above, the present invention has been made to provide a video editing scheme that overcomes or at least partially solves the above-mentioned problems.
According to an aspect of the present invention, there is provided a video editing method including:
displaying an audio editing interface based on the triggering of an audio operation interface in the video editing interface, wherein the audio editing interface comprises a video preview area, a first floating layer for displaying an audio file list and a second floating layer for performing audio editing;
based on a selection operation on an audio file in the audio file list in the first floating layer, collapsing the first floating layer and displaying the second floating layer;
generating an audio track file corresponding to the audio file based on a first operation on the selected audio file in the second floating layer;
and adding the generated audio track file to the video file.
Optionally, in the video editing method according to the present invention, the method further includes: displaying the video editing interface with an audio operation interface based on the selected one or more video files.
Optionally, in the video editing method according to the present invention, the adding the generated audio track file to the video file includes: splicing the selected one or more video files; and adding the generated one or more audio track files to the spliced video file to generate a new video file.
Optionally, in the video editing method according to the present invention, the second floating layer includes: a timeline component for displaying video thumbnails; and one or more horizontal bar components arranged sequentially from top to bottom, wherein each bar component displays the audio name of an associated audio file, and the position of each bar component corresponds to the position, in the timeline component, of the audio track file corresponding to that audio file.
Optionally, in the video editing method according to the present invention, the second floating layer further includes an audio editing panel, the audio editing panel includes a track component for displaying a track, the track component has a first mask layer, and the audio clip covered by the first mask layer is the sound source for generating the audio track file; the timeline component has a second mask layer, the video clip covered by the second mask layer is the video clip corresponding to the audio track file, and the positions of the first mask layer and the second mask layer correspond to each other.
Optionally, in the video editing method according to the present invention, the first operation includes: a pinch or spread operation of multiple fingers within the area of the track component covered by the first mask layer; and the generating of the audio track file corresponding to the audio file comprises: based on the first operation, shrinking or enlarging the audio clip of the audio file under the first mask layer, taking the start position of the audio clip in the audio file, the position in the timeline component corresponding to the start position, and the length of the audio clip as the audio track parameters, and generating the audio track file corresponding to the audio file according to the audio track parameters.
Optionally, in the video editing method according to the present invention, the first operation includes: a drag operation in an area of the track component not covered by the first mask layer; and the generating of the audio track file corresponding to the audio file comprises: moving the audio track based on the first operation, taking the start position, in the audio file, of the audio clip located under the first mask layer, the position in the timeline component corresponding to the start position, and the length of the audio clip as the audio track parameters, and generating the audio track file corresponding to the audio file according to the audio track parameters.
Optionally, in the video editing method according to the present invention, the audio editing panel further includes: an audio setting interface for setting fade-in and fade-out effects; the generating of the audio track file corresponding to the audio file further includes: fade-in and fade-out parameters of the audio clip are added to the track parameters in accordance with operation of the audio setting interface.
Optionally, in the video editing method according to the present invention, the second floating layer further includes an interface for continuing to add audio files, and when the interface is triggered, the second floating layer is retracted and the first floating layer is displayed, so that audio files in the audio file list continue to be selected in the first floating layer.
According to another aspect of the present invention, there is provided a video editing apparatus comprising:
the interface display unit is configured to display an audio editing interface based on triggering of an audio operation interface in the video editing interface, wherein the audio editing interface comprises a video preview area, a first floating layer for displaying an audio file list and a second floating layer for performing audio editing, and, based on a selection operation on an audio file in the audio file list in the first floating layer, to collapse the first floating layer and display the second floating layer;
and the processing unit is configured to generate an audio track file corresponding to the selected audio file based on the first operation on the audio file in the second floating layer, and to add the generated audio track file to the video file.
According to another aspect of the present invention, there is provided a mobile terminal including:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the video editing methods according to the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a mobile terminal, cause the mobile terminal to perform any of the video editing methods according to the present invention.
According to the video editing scheme of the present invention, music and video are trimmed in real time side by side, the user can conveniently see the mapping relation between music and video, and jumping between pages is reduced. Moreover, the scheme supports importing multiple pieces of music, meeting new user needs while making the interaction quicker and more convenient.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 illustrates a mobile terminal configuration diagram according to an embodiment of the present invention;
FIG. 2 shows a flow diagram of a video editing method according to one embodiment of the invention;
fig. 3 to 7 are interface diagrams of a mobile terminal in a video editing method according to an embodiment of the present invention;
fig. 8 shows a block diagram of a video editing apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a schematic diagram of a mobile terminal according to an embodiment of the present invention. Referring to fig. 1, the mobile terminal 100 includes: a memory interface 102, one or more data processors, image processors and/or central processing units 104, and a peripheral interface 106. The memory interface 102, the one or more processors 104, and/or the peripherals interface 106 can be discrete components or can be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems can be coupled to peripheral interface 106 to facilitate a variety of functions. For example, motion sensors 110, light sensors 112, and distance sensors 114 may be coupled to peripheral interface 106 to facilitate directional, lighting, and ranging functions. Other sensors 116 may also be coupled to the peripheral interface 106, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functions.
The camera subsystem 120 and optical sensor 122 may be used to facilitate camera functions such as taking photographs and recording video clips, where the optical sensor 122 may be, for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor. Communication functions may be facilitated by one or more wireless communication subsystems 124, which may include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The particular design and implementation of the wireless communication subsystem 124 may depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a communication subsystem 124 designed to support GSM, GPRS, EDGE, 3G, 4G, Wi-Fi or WiMax, and Bluetooth™ networks. The audio subsystem 126 may be coupled to a speaker 128 and a microphone 130 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.
The I/O subsystem 140 may include a touch screen controller 142 and/or one or more other input controllers 144. The touch screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and touch screen controller 142 may detect contact and movement or pauses made therewith using any of a variety of touch sensing technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies. One or more other input controllers 144 may be coupled to other input/control devices 148 such as one or more buttons, rocker switches, thumbwheels, infrared ports, USB ports, and/or pointing devices such as styluses. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the speaker 128 and/or microphone 130.
The memory interface 102 may be coupled with a memory 150. The memory 150 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 150 may store an operating system 152, such as Android, iOS or Windows Phone. The operating system 152 may include instructions for handling basic system services and performing hardware-dependent tasks. The memory 150 may also store one or more programs 154. When run, the programs 154 are loaded from the memory 150 onto the processor 104, run on top of the operating system 152 already being run by the processor 104, and use the interfaces provided by the operating system 152 and the underlying hardware to implement various user-desired functions, such as instant messaging, web browsing, picture management, and the like. The programs 154 may be provided separately from the operating system or may be bundled with it.
In some embodiments, the mobile terminal 100 is configured to perform a video editing method 200 according to the present invention. Wherein the one or more programs 154 of the mobile terminal 100 include instructions for performing the video editing method 200 according to the present invention.
Fig. 2 shows a flow diagram of a video editing method 200 according to one embodiment of the invention. Referring to fig. 2, the method 200 begins in step S210, and in step S210, an audio editing interface is displayed based on a trigger to an audio operation interface in the video editing interface.
When a user needs to add an audio file (e.g., music) to a video file, the video file to which the audio file is to be added is first selected. It should be noted that, in the embodiments of the present invention, a typical video file is a short video, but this is not limiting; it may be any other type of video. Short video is an internet content distribution form, generally referring to video content of no more than five minutes distributed over new internet media. As shown in fig. 3, a list of video files is presented in a display interface of the mobile terminal, and a user can select one or more video files in the list. If one video file is selected, that video file is the video file to which the audio file is to be added.
If a plurality of video files are selected (6 are selected in the figure), they can be spliced into one video file, and the spliced video file is the video file to which the audio file is to be added. Alternatively, splicing may be deferred: the selected video files can be spliced into one video file after the addition of audio files is completed.
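The offset bookkeeping implied by splicing can be sketched as follows (a minimal illustration; the function and variable names are not from the patent): each spliced file starts in the combined timeline at the sum of the durations of the files before it.

```python
def splice_offsets(durations):
    """Given the durations (in seconds) of the selected video files, return
    each file's start offset in the spliced timeline plus the total length.
    Illustrative only; the patent does not prescribe this bookkeeping."""
    offsets, total = [], 0.0
    for d in durations:
        offsets.append(total)
        total += d
    return offsets, total
```

For six selected clips, for example, the sixth clip's offset is simply the sum of the first five durations, so track positions chosen later against the spliced timeline remain well defined.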
In this embodiment, one or more pieces of music may be added to one video file, or one or more pieces of music may be added to multiple video files, depending on the selection and operation of the user. A typical application scenario is to add music of different moods to each of a plurality of pieces of video.
And after the operation of selecting the video file is completed, presenting a video editing interface in a display interface of the mobile terminal according to the trigger of the user (for example, clicking a 'next' button). With continued reference to fig. 3, the video editing interface includes a video preview area in which the imported video may be played and a video editing toolbar located below the video preview area. The video editing toolbar includes a plurality of video editing icons or buttons, such as "filter", "music", "animation", "subtitle", and "MV", etc. The music button is the audio operation interface of the embodiment of the invention, and a user enters the audio editing interface by clicking the music button, so that music is added to the video file in the audio editing interface.
As shown in fig. 4, the audio editing interface has a video preview area (e.g., the area above the display interface) and includes two tab pages, each of which is a floating display layer: a first floating layer corresponding to the "song library" tab page and a second floating layer corresponding to the "music editing" tab page. While audio is being added and edited, the video preview area can play the video to which the audio is added, so that the user can see the effect of the addition. At any time, only one of the two floating layers is displayed: the second floating layer is hidden when the first floating layer is displayed, and the first floating layer is hidden when the second floating layer is displayed. The audio file list is displayed in the first floating layer, and audio editing is performed in the second floating layer.
After displaying the audio editing interface, the method 200 proceeds to step S220. In step S220, based on the selection operation of the audio file in the audio file list in the first floating layer (song library tab), the first floating layer is collapsed, and the second floating layer (music editing tab) is displayed.
When music is not added to the video file, a first floating layer, namely a song library tab, is displayed on an audio editing page by default. After one or more pieces of music have been added to the video file, the second floating layer is displayed by default on the audio editing interface. The user may also choose to display the first or second floating layer by clicking on it (e.g., clicking on the title "gallery" or "music edit" of the tab page), or, as described below, by clicking on the "+" button in the second floating layer (i.e., continuing the interface to add audio files).
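The default-tab rule described above can be captured in a one-line helper (the names and string labels are assumed for illustration, not taken from the patent):

```python
def default_tab(num_added_tracks: int) -> str:
    """Which floating layer the audio editing interface shows by default:
    the song-library layer when the video has no music yet, otherwise the
    music-editing layer."""
    return "library" if num_added_tracks == 0 else "edit"
```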
With continued reference to FIG. 4, a list of audio files is displayed in the first floating layer, and the user can select one audio file at a time to add to the video file. It should be noted that only 4 audio files are shown in the figure: song 1, song 2, song 3, and song 4; by performing a touch operation such as swiping up in the audio file list display area, other songs in the audio file list can also be shown. After the user selects an audio file, such as song 1, the selected audio file can be previewed, and both a "use" button and an "edit" button are evoked. Clicking the "use" button adds the audio file directly to the video file and returns to the video editing interface. Clicking the "edit" button collapses the first floating layer and displays the second floating layer.
After displaying the second float layer, the method 200 proceeds to step S230. In step S230, based on the first operation on the selected audio file in the second floating layer, a track file corresponding to the audio file is generated.
As shown in fig. 5, the second floating layer (music editing tab) includes a timeline component and one or more horizontal bar components. The timeline component can itself be a bar; it presents the duration of the video file and displays video thumbnails of the corresponding time points in time order. The horizontal bar components are arranged sequentially from top to bottom and display the selected one or more audio files. Specifically, each bar component is associated with one selected audio file and displays the audio name of the associated file. The left drawing has one bar component, indicating that one audio file has been added to the video file; the right drawing has three bar components, indicating that three audio files have been added.
Moreover, the audio file associated with each bar component corresponds to an audio track file, and the track file can be added to the video file according to its parameters (how track files are generated and added to the video file is described later). Thus, each track file corresponds to a section of the video (a video clip of the video file), and its position in the timeline component (the section between the start and end time points of the video clip) is also referred to as the position of the track file in the timeline. For example, in the right drawing, the first bar corresponds to a first section of the timeline component, the second bar to a second section, and the third bar to a third section.
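The quantities a track file carries, and the timeline section it occupies, can be modeled as a small data structure (the field names are assumed for illustration; the patent names the parameters but not a concrete layout):

```python
from dataclasses import dataclass

@dataclass
class TrackFile:
    audio_name: str
    start_in_audio: float   # start of the audio clip within the source audio file (s)
    start_in_video: float   # position in the timeline component, i.e. the video timestamp (s)
    clip_length: float      # length of the audio clip (s)

    def timeline_section(self):
        # The section of the timeline this track occupies: the interval
        # between the start and end time points of the matching video clip.
        return (self.start_in_video, self.start_in_video + self.clip_length)
```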
By default, an added audio file corresponds to the entire video (shown in the left drawing). When a selected audio file needs to be edited, an audio editing panel (also referred to as a track editing drawer) is presented in the second floating layer. The audio editing panel includes a track component for displaying a track. A track (soundtrack) is one of the parallel strips seen in sequencer software; each track defines the properties of its strip, such as timbre, timbre library, number of channels, input/output ports, and volume, and each track corresponds to a track file.
The track component has a first mask layer (the area between the two sliders of the track component in the right drawing); the audio clip it covers is the sound source used to generate the track file. Correspondingly, the timeline component has a second mask layer (the black area of the timeline component in the right drawing); the video clip it covers is the video clip corresponding to the track file, and the position of the first mask layer corresponds to that of the second. In the embodiments of the present invention, position correspondence means that the two display objects are vertically aligned in the display interface (they have the same width and cover the same abscissa range).
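Because the two mask layers cover the same abscissa range, a single pixel-to-time mapping relates a point on the track to the matching point in the video. A sketch of that mapping (all names are illustrative assumptions):

```python
def x_to_time(x_px: float, area_left_px: float, area_width_px: float,
              duration_s: float) -> float:
    """Map an abscissa inside the timeline component to a timestamp.
    Since the first and second mask layers are vertically aligned and
    equally wide, the same mapping works for both layers."""
    frac = (x_px - area_left_px) / area_width_px
    return frac * duration_s
```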
As shown in fig. 6, this embodiment provides two operation modes for the audio file: the first operation may be a pinch or spread operation of multiple fingers in the area of the track component covered by the first mask layer (mode 1), or a drag operation in the area of the track component not covered by the first mask layer (mode 2).
In mode 1, generating the audio track file corresponding to the audio file includes:
shrinking or enlarging the audio clip of the audio file under the first mask layer based on the pinch or spread operation of the multiple fingers;
recording, as the audio track parameters, the start position of the audio clip in the audio file, the position in the timeline component corresponding to the start position (i.e., the timestamp of the start moment of the audio data in the video file), and the length of the audio clip;
and generating an audio track file corresponding to the audio file according to the audio track parameter.
Thus, in mode 1, the track selection can be pinched to intercept a desired duration. As the user's gesture shrinks the track selection, the video thumbnail and song-title preview area (the bar component) shrink correspondingly, so that the user can intuitively perceive the mapping between the music and the video.
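Mode 1 can be sketched as the pinch scale applied to the clip length, then clamped so the clip stays inside both the source audio and the video timeline. The anchor-at-clip-start choice and the clamping are assumptions, since the patent only names the three recorded track parameters:

```python
def pinch_track_params(start_in_audio, start_in_video, clip_length, scale,
                       audio_duration, video_duration):
    """Shrink (scale < 1) or enlarge (scale > 1) the clip under the first
    mask layer and return the three recorded track parameters: start
    position in the audio file, the corresponding timeline position, and
    the clip length."""
    new_length = clip_length * scale
    # keep the clip inside both the source audio and the video timeline
    new_length = min(new_length,
                     audio_duration - start_in_audio,
                     video_duration - start_in_video)
    return (start_in_audio, start_in_video, new_length)
```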
In mode 2, generating the audio track file corresponding to the audio file includes:
moving the audio track based on the drag operation;
taking, as the track parameters, the start position in the audio file of the audio clip located under the first mask layer, the position in the timeline component corresponding to the start position (i.e., the timestamp of the start moment of the audio data in the video file), and the length of the audio clip;
and generating an audio track file corresponding to the audio file according to the audio track parameter.
Mode 2 suits the scenario in which the position of the video segment to which music is added is fixed: the starting position of the music is chosen by sliding the song track left and right, while the selected section of music remains visible.
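Mode 2 keeps the video section fixed and slides the selection window over the source audio. A sketch with an assumed sign convention (here, a positive drag offset moves the window later into the audio; the patent does not fix the convention):

```python
def drag_track_params(start_in_audio, start_in_video, clip_length,
                      drag_offset_s, audio_duration):
    """Move the selection window over the audio while the covered video
    clip stays put, then return the three track parameters."""
    new_start = start_in_audio + drag_offset_s
    # keep the window inside the source audio
    new_start = max(0.0, min(new_start, audio_duration - clip_length))
    return (new_start, start_in_video, clip_length)
```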
After the audio editing is completed (e.g., the user clicks the check icon in the figure), the method 200 proceeds to step S240. In step S240, the generated track file is added to the video file, and a new video file is generated. It should be noted that, each time an audio file is edited, an audio track file corresponding to the audio file is added to a video file; after the plurality of audio files are edited, the corresponding plurality of audio track files may be added to the video file. The video file may be one video file selected by the user, or may be a video file obtained by splicing a plurality of video files selected by the user.
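Since several track files may cover overlapping video sections, adding them to the video file implies mixing every track audible at a given instant. A sketch of the lookup (the tuple layout is an assumption for illustration):

```python
def active_tracks(tracks, t):
    """tracks: list of (name, start_in_video, clip_length) tuples.
    Return the names of all track files audible at video time t; several
    may be active at once because overlapping sections are allowed."""
    return [name for name, start, length in tracks
            if start <= t < start + length]
```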
As shown in fig. 7, the second floating layer may also include an interface for continuing to add audio files (e.g., the "+" icon in the figure). When the user clicks this icon, the interface is triggered: the second floating layer collapses and the first floating layer is displayed. In this way, the user can continue to select audio files from the audio file list in the first floating layer and thereby add further audio files to the video file. The first drawing in fig. 7 is an audio editing interface with one audio file added (currently displaying the music editing tab with one bar component); after clicking the "+" icon, the user enters the song library tab (second drawing); selecting the next piece of music in the song library tab leads to the music editing tab (third drawing); the third drawing shows two bar components, indicating that two audios have been added, with the track component corresponding to the second bar displayed so that the user can edit the second audio; when editing of the second audio is completed, the track component collapses (fourth drawing); clicking the "+" icon again adds and edits more audio; the fifth drawing shows five bar components, indicating that five pieces of audio have been added to the video file and five track files have been generated. In addition, as the fifth drawing shows, the same music can be reused several times in overlapping fashion, and the video clips corresponding to different track files can also overlap.
According to another embodiment, the audio editing panel in the second floating layer also provides an audio setting interface for configuring fade-in and fade-out effects, such as the "fade-in" and "fade-out" buttons in the figure. When an audio file is edited with the "fade-in" or "fade-out" effect selected, the fade-in and fade-out parameters of the audio clip are added to the track parameters when the corresponding track file is generated, so that transitions between pieces of music are smoother.
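The patent does not specify the fade curve; a linear gain envelope is a common choice and makes the fade-in/fade-out parameters concrete. The function and its parameter names below are assumptions for illustration:

```python
# Hypothetical linear fade envelope for a clip whose fade-in and fade-out
# durations were stored in the track parameters.
def fade_gain(t: float, clip_len: float, fade_in: float, fade_out: float) -> float:
    """Gain in [0, 1] at time t (seconds from the start of the clip)."""
    gain = 1.0
    if fade_in > 0 and t < fade_in:
        gain = min(gain, t / fade_in)              # ramp up at the start
    if fade_out > 0 and t > clip_len - fade_out:
        gain = min(gain, (clip_len - t) / fade_out)  # ramp down at the end
    return max(0.0, min(1.0, gain))
```

When the new video file is generated, each audio sample of the clip would be multiplied by `fade_gain` at its timestamp, giving the smoother transitions the embodiment describes.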
Fig. 8 shows a block diagram of a video editing apparatus according to an embodiment of the present invention. Referring to fig. 8, the video editing apparatus includes:
the interface display unit 810 is configured to display an audio editing interface when an audio operation interface in the video editing interface is triggered, the audio editing interface including a video preview area, a first floating layer for displaying an audio file list and a second floating layer for performing audio editing, and, upon a selection operation on an audio file in the audio file list in the first floating layer, to collapse the first floating layer and display the second floating layer;
and the processing unit 820 is configured to generate a track file corresponding to the selected audio file based on a first operation on the audio file in the second floating layer, and to add the generated track file to the video file.
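The division of labor between the two units of fig. 8 can be sketched as two small classes. All names and the dictionary-based state are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the fig. 8 apparatus: the interface display unit
# manages the floating layers, the processing unit produces track files.
class InterfaceDisplayUnit:
    def on_audio_entry_triggered(self) -> dict:
        """Audio operation interface triggered: show the editing interface."""
        return {"preview": True, "first_layer": "audio list", "second_layer": None}

    def on_audio_selected(self, state: dict, audio_name: str) -> dict:
        """Selection in the list: collapse the first layer, show the second."""
        state["first_layer"] = None
        state["second_layer"] = audio_name
        return state

class ProcessingUnit:
    def generate_track_file(self, audio_name: str, clip_start: float,
                            timeline_start: float, length: float) -> dict:
        """Turn the first operation's result into track parameters."""
        return {"audio": audio_name, "clip_start": clip_start,
                "timeline_start": timeline_start, "length": length}

ui = InterfaceDisplayUnit()
state = ui.on_audio_entry_triggered()
state = ui.on_audio_selected(state, "song.mp3")
track = ProcessingUnit().generate_track_file("song.mp3", 3.0, 0.0, 8.0)
```

Keeping display logic and track-file generation in separate units mirrors the method 200's split between interface steps and the generation/adding steps.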
It should be noted that the processing performed by the interface display unit 810 and the processing unit 820 is the same as that of the method 200; reference may be made to the description of the method 200 above, which is not repeated here.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in a video editing apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

Claims (12)

1. A video editing method, comprising:
displaying an audio editing interface based on the triggering of an audio operation interface in the video editing interface, wherein the audio editing interface comprises a video preview area, a first floating layer for displaying an audio file list and a second floating layer for performing audio editing;
based on a selection operation on an audio file in the audio file list in the first floating layer, collapsing the first floating layer and displaying the second floating layer, wherein the second floating layer is generated in real time according to the selected audio file and visually represents the time mapping relationship between the selected audio file and the video to be edited;
generating an audio track file corresponding to the audio file based on a first operation on the selected audio file in the second floating layer;
and adding the generated audio track file to the video file.
2. The method of claim 1, further comprising:
displaying the video editing interface with an audio operation interface based on the selected one or more video files.
3. The method of claim 2, wherein adding the generated audio track file to a video file comprises:
splicing the selected one or more video files;
and adding the generated one or more audio track files to the spliced video file to generate a new video file.
4. The method of claim 1, wherein the second float layer comprises:
a timeline component for displaying video thumbnails;
the audio file system comprises one or more horizontal bar-shaped parts which are sequentially arranged from top to bottom, wherein each horizontal bar-shaped part is used for displaying the audio name of an associated audio file, and the position of each horizontal bar-shaped part corresponds to the position of the audio track file corresponding to the audio file in a time shaft part.
5. The method of claim 4, wherein the second floating layer further comprises an audio editing panel, the audio editing panel comprising a track component for displaying an audio track, the track component having a first mask layer, wherein the audio clip covered by the first mask layer is the sound source for generating the track file;
the timeline component is provided with a second mask layer, wherein the video clip covered by the second mask layer is the video clip corresponding to the track file, and the positions of the first mask layer and the second mask layer correspond to each other.
6. The method of claim 5, wherein the first operation comprises: a multi-finger pinch or spread operation within the area covered by the first mask layer of the track component;
the generating of the track file corresponding to the audio file comprises: based on the first operation, shrinking or enlarging the audio clip of the audio file under the first mask layer, taking the start position of the audio clip in the audio file, the position in the timeline component corresponding to the start position, and the length of the audio clip as track parameters, and generating the track file corresponding to the audio file according to the track parameters.
7. The method of claim 5, wherein the first operation comprises: a drag operation in an area of the track component not covered by the first mask layer;
the generating of the track file corresponding to the audio file comprises: moving the audio track based on the first operation, taking the start position, in the audio file, of the audio clip located under the first mask layer, the position in the timeline component corresponding to the start position, and the length of the audio clip as track parameters, and generating the track file corresponding to the audio file according to the track parameters.
8. The method of claim 6 or 7, wherein the audio editing panel further comprises: an audio setting interface for setting fade-in and fade-out effects;
the generating of the audio track file corresponding to the audio file further includes: fade-in and fade-out parameters of the audio clip are added to the track parameters in accordance with operation of the audio setting interface.
9. The method of claim 1, wherein the second floating layer further comprises an interface for continuing to add audio files, and when the interface is triggered, the second floating layer is collapsed and the first floating layer is displayed so that audio files in the list of audio files continue to be selected in the first floating layer.
10. A video editing apparatus comprising:
the interface display unit is configured to display an audio editing interface when an audio operation interface in the video editing interface is triggered, the audio editing interface including a video preview area, a first floating layer for displaying an audio file list and a second floating layer for performing audio editing, and, upon a selection operation on an audio file in the audio file list in the first floating layer, to collapse the first floating layer and display the second floating layer, wherein the second floating layer is generated in real time according to the selected audio file and visually represents the time mapping relationship between the selected audio file and the video to be edited;
and the processing unit is used for generating a sound track file corresponding to the selected audio file based on the first operation on the audio file in the second floating layer and adding the generated sound track file to the video file.
11. A mobile terminal, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-9.
12. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a mobile terminal, cause the mobile terminal to perform any of the methods of claims 1-9.
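The track-parameter computation recited in claims 6 and 7 can be made concrete with a small numeric sketch. The function names, the scale-about-the-start convention, and the dictionary shape are all assumptions for illustration, not the claimed implementation:

```python
# Hypothetical sketch of claims 6-7: a pinch scales the audio clip under
# the first mask layer; a drag shifts which audio segment lies under it.
# Either way, the resulting (clip start, timeline position, clip length)
# triple forms the track parameters.
def pinch_track_params(clip_start: float, timeline_start: float,
                       clip_len: float, scale: float) -> dict:
    """Claim 6: scale the clip length about its start position."""
    new_len = clip_len * max(scale, 1e-6)  # keep the length positive
    return {"clip_start": clip_start,
            "timeline_start": timeline_start,
            "clip_length": new_len}

def drag_track_params(clip_start: float, timeline_start: float,
                      clip_len: float, delta: float) -> dict:
    """Claim 7: dragging the track shifts the clip's start within the
    audio file while the mask (and thus the timeline position) stays put."""
    return {"clip_start": clip_start + delta,
            "timeline_start": timeline_start,
            "clip_length": clip_len}

pinched = pinch_track_params(2.0, 1.0, 8.0, 0.5)   # pinch in: halve the clip
dragged = drag_track_params(2.0, 1.0, 8.0, 1.5)    # drag: later audio segment
```

Under this reading, pinch edits *how much* audio plays and drag edits *which part* of the audio plays, which matches the two first-operation variants of claims 6 and 7.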
CN201811161279.6A 2018-09-30 2018-09-30 Video editing method and device and mobile terminal Active CN110971957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811161279.6A CN110971957B (en) 2018-09-30 2018-09-30 Video editing method and device and mobile terminal


Publications (2)

Publication Number Publication Date
CN110971957A CN110971957A (en) 2020-04-07
CN110971957B true CN110971957B (en) 2022-04-15

Family

ID=70029268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811161279.6A Active CN110971957B (en) 2018-09-30 2018-09-30 Video editing method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN110971957B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111479158B (en) * 2020-04-16 2022-06-10 北京达佳互联信息技术有限公司 Video display method and device, electronic equipment and storage medium
CN113535289A (en) * 2020-04-20 2021-10-22 北京破壁者科技有限公司 Method and device for page presentation, mobile terminal interaction and audio editing
CN111741231B (en) * 2020-07-23 2022-02-22 北京字节跳动网络技术有限公司 Video dubbing method, device, equipment and storage medium
CN111901534B (en) * 2020-07-23 2022-03-29 北京字节跳动网络技术有限公司 Audio and video segmentation interaction method, device, equipment and storage medium
CN112291627B (en) * 2020-10-12 2022-12-09 广州市百果园网络科技有限公司 Video editing method and device, mobile terminal and storage medium
CN112468864A (en) 2020-11-24 2021-03-09 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium
CN113038014A (en) * 2021-03-17 2021-06-25 北京字跳网络技术有限公司 Video processing method of application program and electronic equipment
CN112987999B (en) * 2021-04-13 2023-02-07 杭州网易云音乐科技有限公司 Video editing method and device, computer readable storage medium and electronic equipment
CN113810538B (en) * 2021-09-24 2023-03-17 维沃移动通信有限公司 Video editing method and video editing device
CN116055799B (en) * 2022-05-30 2023-11-21 荣耀终端有限公司 Multi-track video editing method, graphical user interface and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090979A (en) * 2014-07-23 2014-10-08 上海天脉聚源文化传媒有限公司 Method and device for editing webpage
WO2017107441A1 (en) * 2015-12-22 2017-06-29 乐视控股(北京)有限公司 Method and device for capturing continuous video pictures

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6628303B1 (en) * 1996-07-29 2003-09-30 Avid Technology, Inc. Graphical user interface for a motion video planning and editing system for a computer
CA2748309A1 (en) * 2008-12-23 2010-07-01 Vericorder Technology Inc. Digital media editing interface
US9117483B2 (en) * 2011-06-03 2015-08-25 Michael Edward Zaletel Method and apparatus for dynamically recording, editing and combining multiple live video clips and still photographs into a finished composition
US20130073960A1 (en) * 2011-09-20 2013-03-21 Aaron M. Eppolito Audio meters and parameter controls
US9716909B2 (en) * 2013-11-19 2017-07-25 SketchPost, LLC Mobile video editing and sharing for social media
CN103761985B (en) * 2014-01-24 2016-04-06 北京华科飞扬科技股份公司 A kind of hyperchannel video and audio is online performs in a radio or TV programme editing system
CN105530440B (en) * 2014-09-29 2019-06-07 北京金山安全软件有限公司 Video production method and device
US20170024110A1 (en) * 2015-07-22 2017-01-26 Funplus Interactive Video editing on mobile platform
CN105336348B (en) * 2015-11-16 2019-03-05 合一网络技术(北京)有限公司 The processing system and method for Multi-audio-frequency track in video editing
TWI635482B (en) * 2017-03-20 2018-09-11 李宗盛 Instant editing multi-track electronic device and processing method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Digital audio processing technology in multimedia courseware production; Gan Lijun; Computer Knowledge and Technology; 2008-04-23 (No. 12); entire document *
Audio editing software Adobe Audition 3.0 from beginner to advanced (4): multi-track editing, mixing and mixdown, and extended functions; Li Chuanzhong et al.; Audio Engineering; 2011-01-20 (No. 01); entire document *


Similar Documents

Publication Publication Date Title
CN110971957B (en) Video editing method and device and mobile terminal
JP7181320B2 (en) Method, device, terminal and medium for selecting background music and shooting video
US10839855B2 (en) Systems and methods for video clip creation, curation, and interaction
CN108616696B (en) Video shooting method and device, terminal equipment and storage medium
RU2413292C2 (en) Graphic display
US11334619B1 (en) Configuring a playlist or sequence of compositions or stream of compositions
JP5666122B2 (en) Display information control apparatus and method
KR101899819B1 (en) Mobile terminal and method for controlling thereof
US9135901B2 (en) Using recognition-segments to find and act-upon a composition
KR101032634B1 (en) Method and apparatus of playing a media file
CN108900771B (en) Video processing method and device, terminal equipment and storage medium
US20120311443A1 (en) Displaying menu options for media items
CN105204737B (en) Application rollouts method and apparatus
WO2022143924A1 (en) Video generation method and apparatus, electronic device, and storage medium
KR20090029138A (en) The method of inputting user command by gesture and the multimedia apparatus thereof
US8716584B1 (en) Using recognition-segments to find and play a composition containing sound
KR20150022601A (en) Method for displaying saved information and an electronic device thereof
JP2012058858A (en) Information processor, data division method and data division program
WO2009153628A1 (en) Music browser apparatus and method for browsing music
JP2010057145A (en) Electronic device, and method and program for changing moving image data section
CN114564604B (en) Media collection generation method and device, electronic equipment and storage medium
CN113918522A (en) File generation method and device and electronic equipment
CN103366783A (en) Method, device and equipment for adjusting video information play time points
KR102040287B1 (en) Acoustic output device and control method thereof
JP7277635B2 (en) Method and system for generating video content based on image-to-speech synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant