CN115484399B - Video processing method and electronic equipment

Info

Publication number: CN115484399B (application published as CN115484399A)
Application number: CN202210039516.1A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: video, interface, duration, adjustment, user
Inventor: 韩笑 (Han Xiao)
Assignee: Honor Device Co Ltd
Legal status: Active (granted)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/76 - Television signal recording
    • H04N 5/91 - Television signal processing therefor

Abstract

The application provides a video processing method and an electronic device, relates to the field of terminals, and aims to allow audio and video effects to be added to videos of any duration, improving user experience. The method comprises the following steps: displaying a gallery display interface; displaying a video detail interface of a target video in response to a selection operation on the thumbnail of the target video in the gallery display interface; in response to a triggering operation on a template configuration control in the video detail interface of the target video, if the duration of the target video is longer than a first preset duration, displaying a first clipping interface that displays a first clipped video of the first preset duration, obtained by clipping the target video from its starting moment; in response to an adjustment operation on a first adjustment control in the first clipping interface, updating the start time and duration of the first clipped video to obtain a second clipped video; and in response to a triggering operation on a first confirm option in the first clipping interface, displaying a film preview interface of a first video obtained by randomly adding a video template to the second clipped video.

Description

Video processing method and electronic equipment
The present application claims priority to the Chinese patent application No. 202110676709.3, entitled "A video creation method for users based on storyline mode and electronic device", filed with the China National Intellectual Property Administration on the 16th, 2021, the entire contents of which are incorporated herein by reference.
The present application claims priority to the Chinese patent application No. 202111434026.3, entitled "A video processing method and electronic device", filed with the China National Intellectual Property Administration on November 29, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of terminals, and in particular, to a video processing method and an electronic device.
Background
To enhance the user experience, electronic devices such as mobile phones and tablet computers are often equipped with one or more cameras (e.g., a front-facing camera and a rear-facing camera). A user can select a suitable shooting mode, such as a single-lens shooting mode or a multi-lens shooting mode, according to his or her needs.
After the user finishes shooting a video with the electronic device, the electronic device also supports adding a video template to a video whose duration is shorter than a certain duration (for example, 30 s) (the video template is used to add an audio and video effect to the video), so as to form a video with a special look and feel and improve the user experience. However, current electronic devices cannot process videos longer than that duration in this way, and the user experience is therefore not good enough.
Disclosure of Invention
The embodiment of the application provides a video processing method and electronic equipment, which can add audio and video effects to all videos and improve the use experience of users.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
in a first aspect, the present application provides a video processing method, which is applied to an electronic device. The method comprises the following steps: the electronic equipment displays a gallery display interface; the gallery display interface comprises at least one thumbnail of a video; the electronic equipment responds to the selection operation of the user on the thumbnail of the target video in the gallery display interface, and a video detail interface of the target video is displayed; the video detail interface comprises a template configuration control, the template configuration control is used for triggering the electronic equipment to randomly add a video template for the target video, and the video template is used for adding an audio and video effect for the target video; the electronic equipment responds to the triggering operation of the template configuration control by a user, and if the duration of the target video is longer than a first preset duration, a first cut-off interface is displayed; the first intercepting interface is used for displaying a first intercepting video obtained by intercepting the target video from the starting moment of the target video, and the duration of the first intercepting video is a first preset duration; the first cut interface also comprises a first adjusting control, and the first adjusting control is used for adjusting the starting time and duration of the first cut video; the electronic equipment responds to the adjustment operation of the user on the first adjustment control, and the starting time and the duration of the first video capture are updated to obtain a second video capture of the target video; the electronic equipment responds to the triggering operation of a user on a first determined option in a first cut interface, and a film preview interface of the first video is displayed; the first video is obtained by randomly adding a video template to the second selected video.
Based on the above embodiment, when the duration of the target video to which the video template needs to be added by the user is longer than the first preset duration (i.e., the duration corresponding to the video template), the second selected video with the first preset duration can be obtained through selecting the video, so as to meet the duration requirement of the video template. And then the purpose of adding a video template to the video with the time length longer than the first preset time length to generate the video with the special audio/video effect is achieved. Based on the above, the technical scheme provided by the application can smoothly finish the purpose of adding video templates to all videos.
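To make the branch concrete, the following minimal Kotlin sketch summarizes the duration check described above. It is only an illustration of the described flow: the constant value, the data class, and all function names (openClipInterface, applyRandomTemplate, showFilmPreview) are assumptions, not part of the patent.

```kotlin
// Illustrative sketch only; all names below are hypothetical, not from the patent or any real API.
const val FIRST_PRESET_DURATION_MS = 29_000L   // duration a video template can carry

data class VideoClip(val startMs: Long, val durationMs: Long)

// Stubs standing in for the UI and template logic described in the text.
fun openClipInterface(defaultClip: VideoClip) = println("clip UI with $defaultClip")
fun applyRandomTemplate(clip: VideoClip) = "film(${clip.startMs}..${clip.startMs + clip.durationMs})"
fun showFilmPreview(film: String) = println("preview $film")

fun onTemplateControlTriggered(targetDurationMs: Long) {
    if (targetDurationMs > FIRST_PRESET_DURATION_MS) {
        // Long video: open the clipping interface preloaded with a clip that starts at
        // the beginning of the target video and lasts the first preset duration.
        openClipInterface(VideoClip(startMs = 0L, durationMs = FIRST_PRESET_DURATION_MS))
    } else {
        // Short video: add a randomly chosen template directly and show the film preview.
        showFilmPreview(applyRandomTemplate(VideoClip(0L, targetDurationMs)))
    }
}
```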
In one possible design of the first aspect, the first clipping interface further includes a first cancel option. The method further comprises: the electronic device displays a film preview interface of a second video in response to a triggering operation by the user on the first cancel option in the first clipping interface; the second video is obtained by randomly adding a video template to the first clipped video.
Based on the above scheme, when the user does not want to clip the target video, or does not want to continue clipping, the first preset duration at the beginning of the target video is taken by default as the clipped video to which the video template is added, so that the video template can still be added smoothly. On the basis of ensuring that a video with the video template added can be generated from the target video, the clipping operations required of the user are reduced and the user experience is improved.
In one possible design of the first aspect, the method further includes: the electronic device plays the second clipped video from its first frame image in response to the adjustment operation by the user on the first adjustment control.
Based on the above scheme, after the user finishes adjusting the first adjustment control (i.e., when the user lifts the finger and no longer touches the first adjustment control), the electronic device automatically plays the second clipped video from the beginning (i.e., from its first frame image). The user can thus see the adjustment result in time, which improves the user experience.
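As an illustration only, the replay behaviour described above could be expressed as follows, assuming a generic Player interface; the patent does not define such an API.

```kotlin
// Minimal sketch (hypothetical Player interface): once the user lifts the finger from the
// adjustment control, restart playback of the updated clip from its first frame.
interface Player {
    fun seekTo(positionMs: Long)
    fun start()
}

fun onAdjustmentFinished(player: Player) {
    player.seekTo(0L)   // jump back to the first frame of the adjusted clip
    player.start()      // play automatically so the user sees the adjustment result
}
```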
In one possible design of the first aspect, the film preview interface of the first video further includes a first clip control. After the electronic device displays the film preview interface of the first video in response to the triggering operation by the user on the first confirm option in the first clipping interface, the method further comprises: the electronic device displays a second clipping interface in response to a triggering operation by the user on the first clip control; the second clipping interface is used to display the second clipped video; the second clipping interface also includes an adjustment control for adjusting the duration and start time of the second clipped video.
Based on this scheme, after the user clips the target video and obtains the first video (i.e., the second clipped video with a randomly added video template), if the user is not satisfied, the user can conveniently enter the second clipping interface through the first clip control to re-clip the target video, and can adjust the duration and start time of the second clipped video through the adjustment control in the second clipping interface.
In one possible design of the first aspect, the first clipping interface further includes a playing progress bar, where the progress bar is used to indicate the playing progress of the first clipped video. The method further comprises: the electronic device adjusts the playing progress of the first clipped video in response to an adjustment operation by the user on the progress bar.
Based on this scheme, while clipping the target video, the user can view any part of the first clipped video at any time, which improves the user experience.
In one possible design of the first aspect, the first adjustment control includes a first thumbnail strip and a first adjustment frame. The first thumbnail strip is formed by arranging multiple frames of images from the target video, the frames inside the first adjustment frame display the first clipped video, and the length of the first adjustment frame corresponds to the duration of the first clipped video. The step in which the electronic device updates the start time and duration of the first clipped video in response to the adjustment operation by the user on the first adjustment control comprises: the electronic device, in response to a sliding operation by the user on the first thumbnail strip, adjusts the start time of the first clipped video according to the sliding direction and sliding distance of the sliding operation, and updates the frames displayed in the first adjustment frame; and the electronic device, in response to an adjustment operation by the user on the first adjustment frame, adjusts the length of the first adjustment frame, updates the duration, or the start time and the duration, of the first clipped video according to the length of the first adjustment frame, and updates the frames displayed in the first adjustment frame.
Based on this scheme, through the first thumbnail strip and the first adjustment frame in the first adjustment control, the user can smoothly update the start time and duration of the first clipped video.
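One possible mapping between the adjustment control and the clip parameters is sketched below. The pixel-to-millisecond scale, the class, and all names are assumptions made for illustration; the patent does not specify this implementation.

```kotlin
// Hedged sketch: sliding the thumbnail strip shifts the start time, resizing the frame
// changes the duration (and, for the left handle, also the start time).
data class Clip(val startMs: Long, val durationMs: Long)

class ClipAdjuster(
    private val videoDurationMs: Long,
    private val msPerPixel: Long = 100L            // assumed scale of the thumbnail strip
) {
    // Sliding the thumbnail strip left/right shifts the clip's start time.
    fun onThumbnailSlide(clip: Clip, slideDx: Float): Clip {
        val shift = (slideDx * msPerPixel).toLong()
        val maxStart = (videoDurationMs - clip.durationMs).coerceAtLeast(0L)
        val newStart = (clip.startMs + shift).coerceIn(0L, maxStart)
        return clip.copy(startMs = newStart)
    }

    // Dragging a handle of the adjustment frame changes its length, i.e. the clip duration.
    fun onFrameResize(clip: Clip, newFrameWidthPx: Int, leftHandleDragged: Boolean): Clip {
        val newDuration = (newFrameWidthPx * msPerPixel).coerceAtMost(videoDurationMs)
        return if (leftHandleDragged) {
            // Left handle: the end stays fixed, so the start time moves as well.
            val end = clip.startMs + clip.durationMs
            val newStart = (end - newDuration).coerceAtLeast(0L)
            Clip(startMs = newStart, durationMs = end - newStart)
        } else {
            // Right handle: only the duration changes.
            clip.copy(durationMs = newDuration.coerceAtMost(videoDurationMs - clip.startMs))
        }
    }
}
```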
In one possible design of the first aspect, the method further includes: in response to the adjustment operation by the user on the first adjustment frame, the electronic device indicates the maximum adjustment range of the first adjustment frame while its length is being adjusted.
Based on this scheme, the user can clearly see the maximum adjustment range of the first adjustment frame. The maximum adjustment range of the first adjustment frame corresponds to the maximum duration range to which the first clipped video can be adjusted; in other words, the maximum range within which the duration of the first clipped video can be adjusted is indirectly made clear to the user.
In one possible design of the first aspect, the duration of the second clipped video is less than or equal to the first preset duration and greater than or equal to a second preset duration. The method further comprises: in response to the adjustment operation by the user on the first adjustment frame, if the length of the first adjustment frame is adjusted to the maximum length during the adjustment, the electronic device displays first prompt information and no longer increases the length of the first adjustment frame, where the first prompt information is used to prompt that the duration of the first clipped video has reached the first preset duration, and the first preset duration is the duration corresponding to the maximum length of the first adjustment frame; if the length of the first adjustment frame is adjusted to the minimum length, the electronic device displays second prompt information and no longer decreases the length of the first adjustment frame, where the second prompt information is used to prompt that the duration of the first clipped video has reached the second preset duration, and the second preset duration is the duration corresponding to the minimum length of the first adjustment frame.
Based on this scheme, a corresponding prompt is issued when the first adjustment frame becomes the shortest (i.e., its length reaches the minimum length) or the longest (i.e., its length reaches the maximum length) during adjustment, and the frame length is not decreased (in the shortest case) or increased (in the longest case) any further. The user therefore knows the adjustable range of the first clipped video during adjustment and can stop the corresponding increase or decrease operation in time once the duration of the first clipped video reaches its maximum or minimum. This reduces invalid user operations to a certain extent and improves the user experience.
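A minimal sketch of this clamping behaviour is given below, assuming 29 s and 10 s for the first and second preset durations (values consistent with the examples in this application but not mandated by it); the function and prompt texts are illustrative.

```kotlin
// Sketch of the clamping behaviour described above (all names and durations assumed).
const val MAX_CLIP_MS = 29_000L   // first preset duration: maximum length of the adjustment frame
const val MIN_CLIP_MS = 10_000L   // second preset duration: minimum length of the adjustment frame

fun clampFrameDuration(requestedMs: Long, showPrompt: (String) -> Unit): Long = when {
    requestedMs >= MAX_CLIP_MS -> {
        showPrompt("The clip has reached the maximum duration")   // first prompt information
        MAX_CLIP_MS                                               // frame length stops growing
    }
    requestedMs <= MIN_CLIP_MS -> {
        showPrompt("The clip has reached the minimum duration")   // second prompt information
        MIN_CLIP_MS                                               // frame length stops shrinking
    }
    else -> requestedMs
}
```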
In one possible design of the first aspect, after the electronic device displays the video detail interface of the target video in response to the selection operation by the user on the target video in the gallery display interface, the method further includes: in response to the triggering operation on the template configuration control by the user, if the duration of the target video is less than or equal to the first preset duration, the electronic device displays a film preview interface of a third video, where the third video is obtained by randomly adding a video template to the target video, and the film preview interface of the third video comprises a second clip control; the electronic device displays a third clipping interface in response to a triggering operation by the user on the second clip control in the film preview interface of the third video, where the third clipping interface is used to display a third clipped video obtained by clipping the third video from the starting moment of the third video, the duration of the third clipped video is the duration of the third video, and the third clipping interface further comprises a second adjustment control used to adjust the start time and duration of the third clipped video; the electronic device updates the start time and duration of the third clipped video in response to an adjustment operation by the user on the second adjustment control, to obtain a fourth clipped video of the target video; and the electronic device displays a film preview interface of a fourth video in response to a triggering operation by the user on a second confirm option in the third clipping interface, where the fourth video is obtained by randomly adding a video template to the fourth clipped video.
Based on this scheme, when the duration of the target video to which the user needs to add a video template is less than or equal to the first preset duration (i.e., the duration that the video template can carry), a third video with a video template added can be generated directly and its film preview interface displayed. The user can then enter the third clipping interface through the second clip control in the film preview interface of the third video to clip the target video, so that the user can select the required part of the target video for the video template and obtain the fourth video. In this way, the user can conveniently clip the required part out of the target video, which improves the user experience.
In one possible design of the first aspect, the third clipping interface further includes a second cancel option. The method further comprises: the electronic device displays the film preview interface of the third video in response to a triggering operation by the user on the second cancel option in the third clipping interface.
Based on this scheme, when the duration of the target video is less than or equal to the first preset duration (i.e., the duration that the video template can carry), if the user does not want to clip the target video on the third clipping interface, or does not want to continue clipping, the video template can again be added to the whole target video to obtain the third video, and the film preview interface of the third video is displayed, which improves the user experience.
In a second aspect, the present application provides an electronic device comprising: at least one camera, a display screen, a memory, and one or more processors; the camera, the display screen and the memory are coupled with the processor; wherein the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the video processing method as provided in the first aspect.
In a third aspect, the present application provides a computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the video processing method as provided in the first aspect.
In a fourth aspect, the present application provides a computer program product comprising executable instructions which, when run on an electronic device, cause the electronic device to perform the video processing method as provided in the first aspect.
It may be appreciated that the advantages achieved by the technical solutions provided in the second aspect to the fourth aspect may refer to the advantages in the first aspect and any possible design manner thereof, and are not described herein.
Drawings
Fig. 1 is a schematic diagram of an interface for recording video in a front-back dual-camera mode of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic diagram of a scenario in which a mobile phone adds a random template to a video according to the prior art;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 6 is a schematic view of a scene of selecting video according to an embodiment of the present application;
fig. 7 is a schematic view of a video processing method according to an embodiment of the present application;
fig. 8 is a schematic view of a progress bar modification scene of a video clip according to an embodiment of the present application;
fig. 9 is a first schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 10 is a second schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 11 is a third schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 12 is a fourth schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 13 is a fifth schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 14 is a sixth schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 15 is a seventh schematic diagram of an adjustment scene of a video clip according to an embodiment of the present application;
fig. 16 is a flowchart of another video processing method according to an embodiment of the present application;
fig. 17 is a schematic view of a clipping interface according to an embodiment of the present application;
fig. 18 is a schematic view of a scene of jumping from a film preview interface to a clipping interface according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relation between associated objects and indicates that three relations may exist; for example, A and/or B may indicate that A exists alone, that A and B both exist, or that B exists alone.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the described embodiments of the application may be combined with other embodiments.
The terms "first" and "second" in the following embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the application, unless otherwise indicated, "a plurality" means two or more.
Currently, electronic devices such as mobile phones and tablet computers are often equipped with a plurality of cameras to support various shooting requirements of users. Taking a mobile phone as an example, the mobile phone may be provided with a front camera and a plurality of rear cameras to support a single-lens shooting mode and a multi-lens shooting mode. Of course, a plurality of front cameras may be configured in the mobile phone, which is not limited in the embodiment of the present application.
Taking the electronic device being a mobile phone as an example, the following describes how, in the prior art, a user prepares to add an audio and video effect to a previously shot multi-mirror video (a multi-mirror video shot in the front-back dual-camera mode, which is one of the multi-mirror shooting modes):
taking a multi-mirror video captured in a front-back dual-camera mode as an example, referring to fig. 1, a video detail interface 101 of the multi-mirror video may include a first preview area 102 and a second preview area 103, where the first preview area 102 may be used to display a rear Jing Shipin captured by a rear camera of a mobile phone, and the second preview area 103 may be used to display a foreground video captured by a front camera of the mobile phone.
When the user wants to add audio and video effects (such as a video style (e.g., warm, summer, dynamic, happy, etc.) and background music) to a previously shot video, the user can control the mobile phone to open the gallery application and enter the gallery display interface so as to conveniently select the video to be processed.
Specifically, referring to fig. 2 (a), the mobile phone may receive a triggering operation (e.g., a clicking operation) by the user on the icon of the gallery application on the desktop of the mobile phone. In response to the triggering operation, the mobile phone may display a gallery display interface 201 as shown in fig. 2 (b). At least one thumbnail of a video may be included in the gallery display interface 201. Of course, at least one photo may also be included in the gallery display interface 201. A thumbnail of a video in the gallery display interface 201 is distinguished from a photo in that the thumbnail of the video may include a video identifier, such as the video identifier 203 in the thumbnail 202 of the target video in the gallery display interface shown in fig. 2 (b). Of course, in practice, the gallery display interface may also be displayed by the mobile phone in response to a triggering operation on a gallery option in the camera preview interface or the shooting preview interface after the mobile phone receives that triggering operation. The gallery option may be used to trigger opening of the gallery application and display of the gallery display interface. The thumbnail of a video can be any frame image of the video.
Thereafter, the handset may receive a trigger operation for thumbnail 202 of the target video in gallery presentation interface 201. And responding to the triggering operation, and displaying a video detail interface of the target video.
If the duration of the target video is less than or equal to the first preset duration (for example, 29 seconds), the mobile phone may, in response to the triggering operation, display a first video detail interface 204 of the target video as shown in fig. 2 (c). The first video detail interface 204 may include a one-touch film option 205, where the one-touch film option 205 is used to trigger the mobile phone to randomly add a video template to the target video. The video template is used to add an audio and video effect to the target video so as to enrich its look and feel. The audio and video effect in the application may consist of a preset style (composed of stickers and special effects) and background music. The mobile phone may then, in response to a triggering operation by the user on the one-touch film option 205 in the first video detail interface, display a film preview interface of the target video with a randomly added video template.
In addition, if the user is using the mobile phone for the first time, or uses the gallery application for the first time after an update, referring to fig. 2 (c), the mobile phone may display a prompt pop-up window 206 in the first video detail interface 204. As shown in fig. 2 (c), the prompt pop-up window 206 may be displayed with some transparency in a region near the one-touch film option 205. The prompt pop-up window 206 includes prompt information for explaining the function of the one-touch film option 205 to the user, for example, "The brand-new one-touch film provides intelligent editing for multi-mirror videos with a shooting duration of 10-29 seconds and automatically generates a highlight film". Within the 10-29 second range, the minimum duration limit of 10 seconds mainly reflects that for videos shorter than 10 seconds the effect of adding a video template is poor or not prominent; this limit can be omitted in practice, and the corresponding prompt information changed accordingly. The maximum duration limit of 29 seconds reflects that the preset video templates are all 30 seconds long (with a fixed one-second end-of-film image), so a multi-mirror video longer than 29 seconds cannot be matched with a suitable template; the video duration is therefore limited to 29 seconds or less.
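The arithmetic behind the 10-29 second limit can be spelled out as a small check; the function name is illustrative, and the values are the ones given in the text above.

```kotlin
// Worked arithmetic for the prior-art limits mentioned above (illustrative function name).
const val TEMPLATE_LENGTH_S = 30      // every preset video template is 30 seconds long
const val END_CARD_S = 1              // fixed one-second end-of-film image
const val MIN_USEFUL_S = 10           // below this, the template effect is considered weak

fun isEligibleForOneTouchFilm(videoDurationS: Int): Boolean {
    val maxMatchableS = TEMPLATE_LENGTH_S - END_CARD_S   // 30 - 1 = 29 seconds
    return videoDurationS in MIN_USEFUL_S..maxMatchableS // i.e. 10..29 seconds
}
```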
If the user no longer needs to view the prompt pop-up window 206, the user may trigger the OK button 207 in the pop-up window or tap any location in the first video detail interface 204. The mobile phone then stops displaying the prompt pop-up window 206 in response to the user's trigger of the OK button 207 or the tap at any location in the first video detail interface 204.
If the duration of the target video is greater than the first preset duration (e.g., 29 seconds), the mobile phone may, in response to the triggering operation, display a second video detail interface 208 of the target video as shown in fig. 2 (d). The second video detail interface 208 does not include the one-touch film option, so the user cannot add a video template to the target video.
It can be seen that in the existing video processing scheme, when adding a video template to a video, only videos whose duration is not longer than the first preset duration can be processed, and the user experience is not good enough.
In view of the above problems, an embodiment of the present application provides a video processing method, which is applied to an electronic device. The method can add a video template to videos of any duration and improves the user experience.
For better illustration of the embodiments of the present application, the following description is made with respect to the foregoing background art.
Taking the electronic device being a mobile phone as an example, in a single-lens shooting mode the mobile phone shoots with one camera (such as a front camera or a rear camera). In a multi-lens shooting mode, the mobile phone can shoot with any two or more of its cameras. The single-lens shooting mode may include a front single-lens shooting mode and a rear single-lens shooting mode. The multi-mirror shooting mode may include a rear dual-camera mode, a front-back dual-camera mode, and the like. Of course, depending on the software and hardware design of the mobile phone, the multi-lens shooting mode may in practice be any other feasible mode, which is not specifically limited in the application. The shooting mentioned here may be either photographing or video recording; for example, the multi-mirror shooting mode may be a multi-mirror photographing mode or a multi-mirror video recording mode, and the same applies below.
Specifically, in the front single-lens shooting mode, the mobile phone adopts a front camera to shoot. In the rear single-lens shooting mode, the mobile phone adopts a rear camera to shoot. In the rear-mounted double-shot mode, the mobile phone adopts two rear-mounted cameras to shoot. In the front and rear double-shooting mode, the mobile phone adopts a front camera and a rear camera for shooting.
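Purely for illustration, the shooting modes described above can be summarized as a simple enumeration; the names and the camera labels are assumptions, not terms defined by the patent.

```kotlin
// Illustrative enumeration of the shooting modes described above (names assumed).
enum class ShootingMode { FRONT_SINGLE, REAR_SINGLE, REAR_DUAL, FRONT_REAR_DUAL }

fun camerasUsed(mode: ShootingMode): List<String> = when (mode) {
    ShootingMode.FRONT_SINGLE    -> listOf("front camera")
    ShootingMode.REAR_SINGLE     -> listOf("rear camera")
    ShootingMode.REAR_DUAL       -> listOf("rear camera 1", "rear camera 2")
    ShootingMode.FRONT_REAR_DUAL -> listOf("front camera", "rear camera")
}
```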
By way of example, the electronic device implementing the video processing method in the embodiments of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device; the embodiments of the present application do not particularly limit the specific type of the electronic device.
Taking an electronic device as an example of a mobile phone, fig. 3 shows a schematic structural diagram of the electronic device according to an embodiment of the present application.
As shown in fig. 3, the electronic device may have a plurality of cameras 293, such as a front-mounted normal camera, a front-mounted low power consumption camera, a rear-mounted normal camera, a rear-mounted wide-angle camera, and the like. In addition, the electronic device may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a display 294, a subscriber identity module (subscriber identification module, SIM) card interface 295, and the like. The sensor module 280 may include, among other things, a pressure sensor 280A, a gyroscope sensor 280B, a barometric sensor 280C, a magnetic sensor 280D, an acceleration sensor 280E, a distance sensor 280F, a proximity light sensor 280G, a fingerprint sensor 280H, a temperature sensor 280J, a touch sensor 280K, an ambient light sensor 280L, a bone conduction sensor 280M, and the like.
Processor 210 may include one or more processing units such as, for example: the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that the processor 210 has just used or recycled. If the processor 210 needs to reuse the instruction or data, it may be called directly from the memory. Repeated accesses are avoided and the latency of the processor 210 is reduced, thereby improving the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The external memory interface 220 may be used to connect an external non-volatile memory to expand the storage capability of the electronic device. The external non-volatile memory communicates with the processor 210 through the external memory interface 220 to implement a data storage function. For example, files such as music and videos are stored in the external non-volatile memory.
The internal memory 221 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM). The random access memory can be read and written directly by the processor 210; it may be used to store executable programs (e.g., machine instructions) of an operating system or other running programs, and may also be used to store data of users and applications, and the like. The non-volatile memory may store executable programs, data of users and applications, and the like, and may be loaded into the random access memory in advance for the processor 210 to read and write directly. In the embodiment of the present application, the internal memory 221 may store picture files or recorded video files shot by the electronic device in the single-mirror shooting mode, the multi-mirror shooting mode, or the like.
The touch sensor 280K is also referred to as a "touch device". The touch sensor 280K may be disposed on the display 294, and the touch sensor 280K and the display 294 form a touch screen, also referred to as a "touchscreen". The touch sensor 280K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display 294. In other embodiments, the touch sensor 280K may also be disposed on a surface of the electronic device at a location different from that of the display 294.
In the embodiment of the present application, the touch sensor 280K may detect a touch operation performed by a user on a location where a camera application icon is located, and transmit information of the touch operation to the processor 210, and the processor 210 analyzes a function executed by the touch operation, for example, opens a camera application program (specifically, may open a rear camera to capture a background picture, and display the background picture in a camera preview interface). The touch sensor 280K may also detect a touch operation performed by a user on a location where the front and rear dual-camera video options are located, and transmit information of the touch operation to the processor 210, where the processor 210 analyzes a function corresponding to the touch operation, for example, opening a multi-mirror video preview interface (specifically, may open a front camera and a rear camera, and display a foreground image and a background image captured by the front camera and the rear camera on the multi-mirror video preview interface according to a certain proportion).
In some embodiments, the electronic device may include 1 or N cameras 293, N being a positive integer greater than 1. In the embodiment of the present application, the type of the camera 293 may be distinguished according to hardware configuration and physical location. For example, the plurality of cameras included in the camera 293 may be disposed on the front and back of the electronic device: a camera disposed on the side of the display screen 294 of the electronic device may be referred to as a front camera, and a camera disposed on the side of the rear cover of the electronic device may be referred to as a rear camera. For another example, the cameras included in the camera 293 may have different focal lengths and viewing angles: a camera with a short focal length and a large viewing angle may be referred to as a wide-angle camera, and a camera with a longer focal length and a smaller viewing angle may be referred to as a normal camera. The content of the images collected by different cameras differs: the front camera is used to collect scenery facing the front of the electronic device, while the rear camera is used to collect scenery facing the back of the electronic device; the wide-angle camera can shoot a larger area within a shorter shooting distance, and at the same shooting distance the scenery occupies a smaller portion of the picture than it would in an image shot with a normal lens. The focal length and viewing angle are relative concepts and are not limited by specific parameters, so the wide-angle camera and the normal camera are also relative concepts, which can be distinguished according to physical parameters such as focal length and viewing angle.
The electronic device implements display functions through the GPU, the display screen 294, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The electronic device may implement shooting functions through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device may include 1 or N displays 294, N being a positive integer greater than 1.
In an embodiment of the present application, the display screen 294 may be used to display an interface of an electronic device (for example, a camera preview interface, a multi-mirror video preview interface, a film preview interface, etc. in the foregoing embodiment), and display an image captured from any one of the cameras 293 in the interface. Such as the foreground or background frames described above.
In some embodiments, when the electronic device is in a multi-mirror shooting mode (e.g., the front-back dual-camera mode), the display 294 can display multiple images from the plurality of cameras 293 by stitching or picture-in-picture, so that the multiple images from the plurality of cameras 293 can be presented to the user at the same time.
In some embodiments, in a multi-mirror shooting mode (e.g., the front-back dual-camera mode) of the electronic device, the processor 210, such as a controller or a GPU, may synthesize different images from the plurality of cameras 293. For example, by combining multiple video streams from the plurality of cameras 293 into one video stream, a video encoder in the processor 210 may encode the combined video stream data to generate one video file. In this way, each frame of image in the video file may contain multiple images from the plurality of cameras 293. When a frame of the video file is played, the display screen 294 may display multiple images from the plurality of cameras 293, so as to present to the user multiple image frames of different contents, different depths of field, or different pixels at the same moment or in the same scene. For another example, by combining multiple photos from the plurality of cameras 293 into one photo file, a video encoder in the processor 210 may encode the combined photo data. In this way, one photo in the photo file may contain multiple photos from the plurality of cameras 293. When the photo is viewed, the display 294 may display multiple photos from the plurality of cameras 293 to present to the user multiple image frames of different contents, different depths of field, or different pixels at the same moment or in the same scene.
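As a conceptual illustration only (this is not the patent's actual encoding pipeline), two frames from different cameras could be spliced into one composite frame with standard Android graphics classes before being handed to an encoder:

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Conceptual sketch: splice a front-camera frame and a rear-camera frame side by side
// into one composite frame, matching the idea that each encoded frame can contain
// multiple images from multiple cameras.
fun spliceSideBySide(front: Bitmap, rear: Bitmap): Bitmap {
    val height = maxOf(front.height, rear.height)
    val composite = Bitmap.createBitmap(front.width + rear.width, height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(composite)
    canvas.drawBitmap(rear, 0f, 0f, null)                      // background picture on the left
    canvas.drawBitmap(front, rear.width.toFloat(), 0f, null)   // foreground picture on the right
    return composite
}
```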
It will be understood, of course, that the above illustration of fig. 3 is merely exemplary of the case where the electronic device is in the form of a cellular phone. If the electronic device is a tablet computer, a handheld computer, a PC, a PDA, a wearable device (e.g., a smart watch, a smart bracelet), etc., the electronic device may include fewer structures than those shown in fig. 3, or may include more structures than those shown in fig. 3, which is not limited herein.
Fig. 4 is a block diagram illustrating a software structure of an electronic device according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a clear role and division of labour. The layers communicate with each other through software interfaces. Taking an Android operating system as an example, in some embodiments the Android system in the electronic device is divided into four layers, from top to bottom: an application layer, an application framework layer, Android Runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a clip manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is for providing communication functions of the electronic device. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The clip manager is used to invoke a function module that supports video clipping when an application needs to clip a video, so as to implement the clipping of the video.
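As a purely hypothetical sketch, such a framework-layer clip manager could expose an interface of roughly the following shape to applications; the patent does not define this API.

```kotlin
// Hypothetical sketch of a framework-layer clip ("interception") manager interface.
interface ClipManager {
    /** Cuts [sourcePath] to the segment [startMs, startMs + durationMs) and returns the path of the new file. */
    fun clipVideo(sourcePath: String, startMs: Long, durationMs: Long): String
}
```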
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
Android Runtime includes a core library and a virtual machine. Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
The video processing method provided by the present application is described below, and the methods in the following embodiments may be implemented in an electronic device having the above hardware structure and the above software structure.
Fig. 5 is a flowchart of a video processing method provided in an embodiment of the present application. Referring to fig. 5, the method may include S501-S513:
S501, the electronic device displays a gallery display interface; the gallery display interface includes a thumbnail of at least one video.
The gallery display interface may be displayed by the electronic device in response to a triggering operation (such as a clicking operation) performed on the gallery application icon by the user after the triggering operation is received. The gallery display interface may also be displayed by the electronic device in response to a triggering operation performed by the user on a gallery option existing in a camera preview interface (an interface when the user triggers the electronic device to open a camera application and prepare for shooting) after the triggering operation is received. Wherein gallery options may be used to trigger opening of a gallery application and display of a gallery presentation interface.
For example, taking an electronic device as a mobile phone and a video as a multi-mirror video, the mobile phone may display a desktop as shown in fig. 6 (a). The desktop may include gallery application icon X therein. The mobile phone may display a gallery presentation interface 601 as shown in fig. 6 (b) in response to a user's triggering operation of the gallery application icon X shown in fig. 6 (a). The gallery presentation interface 601 may include at least one thumbnail of a video. Of course, at least one photo may be included in the gallery display interface 601. The thumbnail of the video in the gallery presentation interface 601 differs from the photo in that a video identifier may be included in the thumbnail of the video, such as the video identifier 603 in the thumbnail 602 of the target video in the gallery presentation interface shown in fig. 6 (b).
S502, the electronic device displays a video detail interface of the target video in response to a selection operation by the user on the thumbnail of the target video in the gallery display interface; the video detail interface includes a one-touch film option.
The one-touch film option can be understood here as the template configuration control in the present application. The one-touch film option is used to trigger the electronic device to randomly add a video template or preset shooting template to the target video. The shooting template is used to add an audio and video effect to the target video so as to enrich its look and feel. Here, the audio and video effect may consist of a preset style and background music.
For example, taking the electronic device being a mobile phone and the target video being a multi-mirror video, based on the gallery display interface 601 shown in (b) in fig. 6, the mobile phone may receive a selection operation (such as a clicking operation) by the user on the thumbnail 602 of the target video in the gallery display interface 601. In response to the user's selection operation on the thumbnail 602 of the target video, the mobile phone may display a video detail interface 604 of the target video as shown in fig. 6 (c). The video detail interface 604 includes a one-touch film option 605.
In addition, if the user is using the mobile phone for the first time, or uses the gallery application for the first time after an update, the mobile phone may display a prompt pop-up window 606 in the video detail interface 604. As shown in fig. 6 (c), the prompt pop-up window 606 may be displayed with a certain transparency in a region near the one-touch film option 605. The prompt pop-up window 606 includes prompt information for explaining the function of the one-touch film option 605 to the user, for example, "The brand-new one-touch film provides intelligent editing for multi-mirror videos with a shooting duration longer than 10 seconds and automatically generates a highlight film". Here, the minimum duration limit of 10 seconds mainly reflects that for videos shorter than 10 seconds the effect of adding a video template is poor or not prominent. In practice this limitation can be removed, and the corresponding prompt information changed to "The brand-new one-touch film provides intelligent editing for multi-mirror videos and automatically generates a highlight film".
If the user no longer needs to view the prompt pop-up window 606, the user may trigger the OK button 607 in the pop-up window or tap any location in the video detail interface 604. The mobile phone then stops displaying the prompt pop-up window 606 in response to the user's trigger of the OK button 607 or the tap at any location in the video detail interface 604. Of course, the mobile phone may also automatically dismiss the prompt pop-up window 606 after displaying it for a certain period of time (e.g., 10 seconds).
S503, the electronic device receives a triggering operation by the user on the one-touch film option in the video detail interface of the target video.
For example, with the electronic device being a mobile phone, based on the scenario example shown in fig. 6, the mobile phone may display the video detail interface 701 of the target video shown in (a) in fig. 7 in response to the user's selection operation on the thumbnail of the target video in the gallery display interface. A one-touch film option 702 may be included in the video detail interface 701. The mobile phone may receive a trigger operation (e.g., a click) by the user on the one-touch film option 702.
S504, the electronic device judges whether the duration of the target video is longer than a first preset duration.
If the electronic device determines that the duration of the target video is greater than the first preset duration, S505 is executed; if the electronic device determines that the duration of the target video is less than or equal to the first preset duration, S509 is executed.
The first preset duration may be, for example, 29 seconds. In the embodiment of the application, the first preset duration is mainly determined by the duration of the video template that the electronic device randomly adds to the target video in response to the user triggering the one-touch tab option.
For example, if each video template that the one-touch tab option can trigger the electronic device to add is 30 seconds long, then, excluding the specified one-second tail, the video that each template can accommodate is 29 seconds. For a video of 29 seconds or less, any video template may be added directly. For a video longer than 29 seconds, no template can meet its requirement. The videos in these two duration ranges therefore need to be processed differently, so after receiving the trigger operation of the user on the one-touch tab option, the electronic device needs to determine whether the duration of the target video is longer than 29 seconds (i.e., the first preset duration) so as to decide how to process the video. Of course, the one-second tail may not exist, in which case the first preset duration may be 30 seconds.
Of course, in practice, the duration corresponding to each video template that the one-touch option triggers the electronic device to add may also be different. At this time, the first preset duration may be a duration corresponding to a certain video template that is determined randomly by the electronic device in response to a triggering operation of the user on the one-touch large-size option. Or if the video template has the set tail, the first preset duration may be a duration corresponding to a portion of the video template other than the tail.
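To make the S504 check concrete, the following Kotlin sketch derives the first preset duration from an assumed template description (total duration minus an optional fixed tail) and decides which branch applies. The VideoTemplate type and its field names are assumptions for illustration only, and the 30-second and 1-second figures are simply the example values used above.

    // Illustrative sketch of the S504 branch; VideoTemplate and its fields are assumed names.
    data class VideoTemplate(val totalSeconds: Int, val tailSeconds: Int = 0)

    // First preset duration = template duration minus any fixed tail.
    fun firstPresetDuration(template: VideoTemplate): Int =
        template.totalSeconds - template.tailSeconds

    // true -> the target video must be clipped first (S505); false -> add a template directly (S509).
    fun needsClipping(targetVideoSeconds: Int, template: VideoTemplate): Boolean =
        targetVideoSeconds > firstPresetDuration(template)

    fun main() {
        val template = VideoTemplate(totalSeconds = 30, tailSeconds = 1)  // 29 s usable
        println(needsClipping(45, template))  // true: show the first clip interface
        println(needsClipping(20, template))  // false: generate the film preview directly
    }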
S505, the electronic device displays a first clip interface in response to the triggering operation by the user on the one-touch tab option in the video detail interface of the target video; the first clip interface is used for displaying a first clip video obtained by clipping the target video from the starting moment of the target video, and the duration of the first clip video is the first preset duration; the first clip interface further includes a first adjustment control, where the first adjustment control is configured to adjust the start time and duration of the first clip video.
S506 is performed after S505.
Specifically, for a target video whose duration is longer than the first preset duration, because of the duration limit of the video template, a portion of the target video whose duration is shorter than or equal to the first preset duration needs to be selected first, and the video template is then added to that portion. Therefore, when the electronic device determines that the duration of the target video is longer than the first preset duration, the electronic device may display the first clip interface, so that the user can select the desired portion of the video as the video to which the video template is finally added.
Take the electronic device being a mobile phone as an example, based on the video detail interface 701 of the target video shown in (a) of fig. 7. When the duration of the target video is longer than the first preset duration, the mobile phone may receive a trigger operation (e.g., a click operation) by the user on the one-touch tab option 702 in the video detail interface 701. In response to the trigger operation, the mobile phone may display a first clip interface 703 as shown in fig. 7 (b). The first clip interface 703 is used for displaying a first clip video 704. The first clip video 704 is a video obtained by clipping the target video from the starting time of the target video, with a duration equal to the first preset duration. The first preset duration is illustrated as 29 seconds. The first clip interface 703 further includes a first adjustment control 705. The user may perform an adjustment operation on the first adjustment control 705 to adjust the start time and duration (or start time and end time) of the first clip video 704 according to the user's own needs.
For example, referring to fig. 7 (b), the first adjustment control 705 may include a first thumbnail 7051 and a first adjustment frame 7052. The first thumbnail 7051 is formed by arranging multiple frames of images from the target video (in fig. 7 (b), several simple images are used as examples of these frames). The multiple frames of images displayed in the first adjustment frame 7052 are used for displaying the first clip video 704, and the length of the first adjustment frame 7052 corresponds to the duration of the first clip video 704. The user may slide the first thumbnail or drag either side of the first adjustment frame, thereby triggering the electronic device to change the start time and duration of the first clip video 704. Changing the start time of the first clip video 704 here specifically means changing its start time from the start time of the target video to some other time in the target video.
Specifically, the multi-frame image forming the first thumbnail 7051 may be images of all frames of the target video, or may be images of part of frames, which is specifically determined according to actual requirements. Each of the multiple frames of images may represent a time instant of the target video. Specifically, the time represented by each frame image in the multi-frame image may be the time in the target video when the frame image in the target video is captured.
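The relationship between the first adjustment control and the first clip video can be pictured with a small Kotlin sketch that converts the adjustment frame's offset and length on the thumbnail strip into a start time and a duration. The uniform pixels-per-second scale and the type names are assumptions for illustration; an actual implementation could map thumbnails to times differently.

    // Hypothetical model: a thumbnail strip plus an adjustment frame whose position and
    // length determine the start time and duration of the clip video.
    data class AdjustmentFrame(val offsetPx: Float, val lengthPx: Float)

    class AdjustmentControl(private val pxPerSecond: Float) {

        // Start time of the clip = how far the frame's left edge sits into the thumbnail strip.
        fun startSeconds(frame: AdjustmentFrame): Float = frame.offsetPx / pxPerSecond

        // Duration of the clip corresponds to the frame's length.
        fun durationSeconds(frame: AdjustmentFrame): Float = frame.lengthPx / pxPerSecond
    }

    fun main() {
        val control = AdjustmentControl(pxPerSecond = 20f)
        val frame = AdjustmentFrame(offsetPx = 200f, lengthPx = 580f)
        println(control.startSeconds(frame))     // 10.0 s into the target video
        println(control.durationSeconds(frame))  // 29.0 s clip
    }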
In the embodiment of the present application, in order to prompt the user to change the starting time (and also change the ending time) of the first capturing video by sliding the first thumbnail, referring to (b) in fig. 7, the mobile phone (i.e. the electronic device) may further display the adjustment prompt information Y in a preset area (for example, under the first adjustment control) of the first capturing interface. For example, the adjustment prompt may be "drag screen select start point".
In the embodiment of the application, in order to make it convenient for the user to view the content of the latest first clip video as needed while adjusting the first clip video in the first clip interface, the electronic device may display, on the first clip interface, a progress bar capable of indicating and controlling the playing progress of the first clip video. For example, taking the electronic device being a mobile phone, referring to fig. 7 (b), the mobile phone may display a progress bar 706 in the first clip interface 703, where the progress bar can indicate and control the playing progress of the first clip video 704. The progress bar 706 may include an adjustment control 7061, an adjustment track 7062, a current playing time 7063, and a total duration 7064 (e.g., 00:29). When the user slides the adjustment control 7061 along the adjustment track 7062, the electronic device can change the playing progress of the first clip video 704, and can also update the value of the current playing time 7063 to match the position of the adjustment control 7061 on the adjustment track 7062.
In this way, the electronic device can adjust the playing progress of the first clip video in response to the user's adjustment operation on the progress bar, so that the user can view any part of the first clip video at will.
Take the electronic device being a mobile phone as an example, based on the first clip interface shown in (b) of fig. 7. Referring to fig. 8 (a), the mobile phone may receive a rightward sliding operation by the user on the adjustment control 8061 of the progress bar 806 in the first clip interface 801. In response to the rightward sliding operation, referring to (b) of fig. 8, the mobile phone may update the position of the adjustment control 8061 on the adjustment track 8062 according to the sliding distance, and update the current playing time 8063 according to the position of the adjustment control 8061 on the adjustment track 8062. In the figure, the updated current playing time 8063 is 00:14 as an example. Of course, to make it easy for the user to know how much progress has been adjusted, after the adjustment control 8061 changes position, the color of the portion of the adjustment track 8062 from its start (the left end in fig. 8) to the adjustment control 8061 may change (e.g., turn from white to black).
Of course, in practice, the user may also perform a leftward sliding operation on the adjustment control, so that the playing progress of the first clip video moves backward, for example, from 00:14 to 00:05. The specific implementation is similar to the process shown in fig. 8 and is not described again here.
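The progress bar described above essentially maps a position on the adjustment track to a playing time and back. The Kotlin sketch below shows that mapping under the assumption of a linear track; the track length, the seek callback and the type names are illustrative assumptions.

    // Illustrative progress bar: slider position on the track <-> playing time of the clip video.
    class ClipProgressBar(
        private val trackLengthPx: Float,
        private val totalDurationSec: Float,
        private val onSeek: (Float) -> Unit   // assumed callback that seeks the player
    ) {
        // Called when the user drags the adjustment control along the track.
        fun onSliderMoved(positionPx: Float) {
            val clamped = positionPx.coerceIn(0f, trackLengthPx)
            onSeek(clamped / trackLengthPx * totalDurationSec)   // update current playing time
        }

        // Called as playback advances; returns where the slider should sit on the track.
        fun sliderPositionFor(currentTimeSec: Float): Float =
            (currentTimeSec / totalDurationSec).coerceIn(0f, 1f) * trackLengthPx
    }

    fun main() {
        val bar = ClipProgressBar(trackLengthPx = 290f, totalDurationSec = 29f) { t ->
            println("seek to %.1f s".format(t))   // dragging to 140 px seeks to 14.0 s
        }
        bar.onSliderMoved(140f)
    }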
In addition, in order to facilitate the user to control the playing of the first video clip in the first video clip interface, the first video clip interface displayed by the electronic device may further include a playing control button. The electronic device may change the first clip video from the play state to the pause play state or from the pause play state to the play state in response to a trigger operation of the play control button by the user. Illustratively, the play control button may be the play control button X shown in (b) of fig. 7. When the play control button X is triggered by the user, the first video clip 704 is changed from the play state to the pause state, or from the pause state to the play state, and the form of the play control button X is changed. The play control button X may have two modes, which correspond to the play state and the pause play state of the first clip video 704, respectively. The form of the play control button X shown in fig. 7 (b) may be a form corresponding to the play state of the first clip video 704.
S506, the electronic equipment responds to the adjustment operation of the user on the first adjustment control, and the starting time and the duration of the first video capture are updated to obtain a second video capture of the target video.
In the application, in order to enable the user to see the entire second clip video when the start time and duration of the first clip video are adjusted to obtain the second clip video, the electronic device may, in response to the user's adjustment operation on the first adjustment control, play the second clip video from the beginning; playing from the beginning here specifically means playing from the first frame image of the second clip video.
Of course, in practice, in order to let the user clearly see the result of the adjustment, the electronic device plays the first clip video corresponding to the adjusted first adjustment control (which may also be referred to as the second clip video) from the beginning only after the user finishes the adjustment, that is, when the user releases the finger and no longer touches the first adjustment control. Based on this, the adjustment operation on the first adjustment control in step S506 may be performed multiple times, with the user releasing the finger and no longer touching the first adjustment control between two consecutive operations. Alternatively, step S506 may be executed multiple times in succession, where the adjustment operation in each execution of S506 is a single operation (i.e., the user touches the first adjustment control and then releases it), and the second clip video obtained after each execution serves as the first clip video for the next execution of S506.
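The "replay from the first frame once the user lets go" rule described above could look roughly like the following Kotlin sketch; the ClipPlayer interface and the touch-event names are assumptions used only to illustrate the sequencing.

    // Sketch of replaying the adjusted clip from its first frame when the user releases the control.
    interface ClipPlayer {
        fun playFromStart()   // assumed: plays the current (second) clip video from its first frame
    }

    class AdjustmentSession(private val player: ClipPlayer) {
        private var adjusting = false

        // Called repeatedly while the finger is still on the first adjustment control.
        fun onAdjustmentChanged() {
            adjusting = true
        }

        // Called when the user releases the finger and no longer touches the control.
        fun onTouchReleased() {
            if (adjusting) {
                adjusting = false
                player.playFromStart()
            }
        }
    }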
In addition, because the duration of the video template is already set (i.e., the first preset duration), in the embodiment of the present application, the duration of the second video clip is less than or equal to the first preset duration. Further, considering that the video with too small duration is not good enough in appearance, the duration of the second video is greater than or equal to the second preset duration. The second preset time period may be, for example, 10 seconds.
In an implementation manner, if the first adjustment control includes a first thumbnail and a first adjustment frame, the first thumbnail is formed by arranging multiple frames of images in the target video, the multiple frames of images in the first adjustment frame are used for displaying the first captured video, and the length of the first adjustment frame corresponds to the duration of the first captured video, S506 may specifically include:
(1) The electronic device responds to the sliding operation of the user on the first thumbnail, adjusts the starting moment of the first video according to the sliding direction and the sliding distance of the sliding operation, and updates the multi-frame image displayed in the first adjusting frame.
For example, taking the electronic device being a mobile phone, based on the scene example shown in (b) of fig. 7, referring to (a) of fig. 9, the mobile phone may receive a sliding operation by the user on the first thumbnail 9051 (in (a) of fig. 9, a leftward slide is used as an example). In response to the sliding operation, the mobile phone can adjust the start time of the first clip video 904 according to the sliding direction and sliding distance of the operation (for example, the distance of one thumbnail in the length direction of the first adjustment frame), and update the multi-frame image displayed in the first adjustment frame 9052. If the duration corresponding to each thumbnail is 10 seconds, the mobile phone adjusts the start time of the first clip video 904 to the 11th second of the target video. Of course, since the electronic device plays the second clip video from the beginning in response to the user's adjustment operation on the first adjustment control, after the mobile phone adjusts the start time of the first clip video according to the sliding operation, if the user is no longer performing any adjustment operation on the first adjustment control 905, the mobile phone plays the adjusted first clip video (i.e., the second clip video) from the beginning. At this time, the current playing time in the progress bar of the first clip interface is still 00:00.
After the adjustment is completed, the mobile phone may display the first clip interface 903 shown in fig. 9 (b). In the first clip interface 903, it can be seen that the first thumbnail 9051 has moved by the distance of one thumbnail relative to the first adjustment frame, with the rest unchanged. A rightward sliding operation is handled in the same way as the leftward sliding operation.
In addition, in an embodiment of the present application, in order to ensure a user's viewing experience of a clip video (e.g., the first clip video 904) in a clip interface (e.g., the first clip interface 903), the playing state of the clip video should be unchanged, i.e., played as usual, during the process of the user sliding the thumbnail (e.g., the first thumbnail 9051) in the adjustment control (e.g., the first adjustment control 905). In this case, the mobile phone does not change the play state of the clip video in response to a sliding operation of the thumbnail by the user. After that, after the user has slid the thumbnail, in order to make the user aware of the condition of the clip video after the thumbnail is slid, the clip video after the thumbnail change can be played back from the beginning (i.e., played back from the first frame image).
In some embodiments, if the leftmost side of the first thumbnail coincides with the leftmost side of the first adjustment frame, the first thumbnail can no longer be slid to the right. At this time, there are two special cases. First: during a rightward slide with a large sliding distance, the first thumbnail may reach this rightmost position and be unable to slide any further to the right while the user's rightward sliding operation has not yet finished. Second: the user actively slides the first thumbnail to the right even though the first thumbnail has already been slid to this rightmost position and cannot slide any further to the right.
In both special cases, the mobile phone may refrain from modifying the start time and duration of the first clip video in response to the user's rightward sliding operation on the first thumbnail 9051. Meanwhile, in order not to disturb the playing of the first clip video in the first clip interface, after receiving the rightward sliding operation on the first thumbnail 9051, the mobile phone may not play the first clip video (or the second clip video obtained after updating the first clip video) from the beginning, but continue playing it; that is, the playing progress of the first clip video is not affected by the rightward sliding operation and proceeds normally. A leftward sliding operation at the corresponding boundary is handled in the same way.
For example, with the electronic device as a mobile phone, based on the example of the scenario shown in (b) of fig. 7, referring to (a) of fig. 10, when the leftmost side of the first thumbnail 10051 coincides with the leftmost side of the first adjustment frame 10052, the mobile phone may display the first screenshot interface 1001 shown in (b) of fig. 10 in response to the rightward sliding operation of the first thumbnail 10051 by the user. Only progress bar 1006 in the first screenshot 1001 changes. Taking the case where the user takes 2 seconds to slide right, referring to fig. 10 (b), the current playing time 10063 of the progress bar 1006 is 00:03. In addition, the adjustment control 10061 is located in the adjustment track 10062 corresponding to the current playing time 10063, and the color of the portion of the adjustment track 10062 representing the played portion may change, such as darkening.
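The boundary behaviour described in these two special cases can be summarised in a short Kotlin sketch: when the thumbnail strip is already at its limit, a further slide in that direction is ignored and playback simply continues. The offset convention and names are assumptions for illustration.

    // Illustrative handling of a thumbnail-strip slide that hits a boundary.
    // stripOffsetPx is the strip's offset relative to the adjustment frame; the allowed
    // range [0, maxOffsetPx] is an assumed convention for this sketch.
    class ThumbnailStrip(private var stripOffsetPx: Float, private val maxOffsetPx: Float) {

        // Returns true if the strip actually moved (the clip's start time must then change and
        // the updated clip is replayed from its first frame); returns false at the boundary,
        // in which case the start time is left untouched and playback continues undisturbed.
        fun slideBy(deltaPx: Float): Boolean {
            val newOffset = (stripOffsetPx + deltaPx).coerceIn(0f, maxOffsetPx)
            if (newOffset == stripOffsetPx) return false
            stripOffsetPx = newOffset
            return true
        }
    }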
(2) The electronic equipment responds to the adjustment operation of the user on the first adjustment frame, adjusts the length of the first adjustment frame, updates the starting time and the duration of the first cut video according to the length of the first adjustment frame, and updates the multi-frame image displayed in the first adjustment frame.
The adjustment operation here may specifically be shortening or lengthening the first adjustment frame from the end corresponding to the start time of the first clip video. Specifically, this can be achieved by dragging the handle at the end of the first adjustment frame corresponding to the start time of the first clip video toward or away from the other end. Dragging specifically means pressing and holding, then sliding. Of course, since the duration of the second clip video obtained after adjusting the first clip video is limited, and the length of the first adjustment frame corresponds to the duration of the first clip video, the shortening or lengthening of the first adjustment frame is performed within a certain range. For example, if the duration of the second clip video is required to be less than or equal to the first preset duration and greater than or equal to the second preset duration, the length of the first adjustment frame can only vary between the maximum length (the length corresponding to the first preset duration) and the minimum length (the length corresponding to the second preset duration). The same applies to the similar content below.
Illustratively, taking the electronic device being a mobile phone, based on the scene example shown in fig. 9 (b), referring to fig. 11 (a), the mobile phone may receive a rightward drag operation by the user on the left handle of the first adjustment frame 11052. In response to the drag operation, the mobile phone may decrease the length of the first adjustment frame 11052, update the start time and duration of the first clip video 1104 according to the new length of the first adjustment frame 11052, and update the multi-frame image displayed in the first adjustment frame 11052. Specifically, the amount by which the first adjustment frame 11052 is shortened may be equal to the distance dragged in the length direction of the first adjustment frame 11052, and the new start time of the first clip video may be the time corresponding to the thumbnail adjacent to the left handle of the first adjustment frame 11052. The time corresponding to a thumbnail is the time in the target video at which the image corresponding to that thumbnail was shot. Because the length of the first adjustment frame is reduced, the multi-frame image displayed in it is correspondingly reduced.
For example, taking the drag distance of the user's rightward drag on the first adjustment frame 11052 as the length of one thumbnail in the length direction of the first adjustment frame, and the duration corresponding to one thumbnail as 4 seconds, after responding to the rightward drag operation, the mobile phone may display the first clip interface 1103 shown in (b) of fig. 11. The first adjustment frame 11052 in the first clip interface 1103 is shortened from its left side toward the right by the length of one thumbnail, and the multi-frame image displayed in the first adjustment frame 11052 loses one image at its leftmost side. In addition, the total duration 11064 in the progress bar 1106 becomes 00:25.
The specific implementation of increasing the length of the first adjustment frame may be a reverse process of the example shown in fig. 11, which is not described herein.
(3) The electronic equipment responds to the adjustment operation of the user on the first adjustment frame, adjusts the length of the first adjustment frame, updates the duration of the first cut video according to the length of the first adjustment frame, and updates the multi-frame image displayed in the first adjustment frame.
The adjustment operation here may specifically be lengthening or shortening the first adjustment frame from the end corresponding to the end time of the first clip video. This can be achieved by dragging the handle at the end of the first adjustment frame corresponding to the end time of the first clip video away from or toward the other end. Dragging specifically means pressing and holding, then sliding.
For example, taking an electronic device as a mobile phone, before the mobile phone receives the adjustment operation of the first adjustment frame by the user, a first selection interface displayed by the mobile phone may be as shown in fig. 12 (a). Five thumbnails may be displayed in the first adjustment box 12052 in the first screenshot interface 1203 shown in fig. 12 (a), where the duration of the first screenshot 1204 is 15 seconds, i.e., the total duration 12064 in the progress bar 1206 is 00:15. In response to a drag operation of the user on the right side handle of the first adjustment frame 12052 (the handle corresponding to the end of the end time of the first clip video) to the right, the mobile phone may increase the length of the first adjustment frame 12052 from the rightmost side of the first adjustment frame to the right, and update (specifically increase here) the duration of the first clip video 1204 according to the length of the first adjustment frame 12052. Specifically, the increased length of the first adjustment frame 12052 may be equal to the distance that the drag operation drags in the length direction of the first adjustment frame 12052. The multi-frame image displayed in the first adjustment frame 12052 increases as the length of the first adjustment frame 12052 increases. The increased duration of the first clip video 1204 may be the same as the total duration corresponding to the newly displayed thumbnail in the first adjustment frame 12052. The duration corresponding to the thumbnail is the required duration for shooting the image corresponding to the thumbnail in the target video.
For example, taking the example that the drag distance of the user to the rightward drag operation of the first adjustment frame 12052 is the length of two thumbnails in the length direction of the first adjustment frame 12052, and the duration corresponding to one thumbnail is 3 seconds, after the mobile phone responds to the rightward drag operation of the user to the first adjustment frame 12052, the first selection interface 1203 as shown in (b) in fig. 12 may be displayed. The first adjustment frame 12052 in the first clipping interface 1203 is elongated by the length of two thumbnails from the right side to the right, and the multi-frame image displayed in the first adjustment frame 11052 is also increased by two pieces on the rightmost side. In addition, the total duration 11064 in the progress bar 1106 becomes 00:21.
The specific implementation of shortening the length of the first adjustment frame may be a reverse process of the example shown in fig. 12, which is not described herein.
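Cases (2) and (3) can be condensed into one Kotlin sketch: dragging the handle at the start-time end changes both the start time and the duration, while dragging the handle at the end-time end changes only the duration. The pixel scale and type names are assumptions, and range clamping (discussed below) is omitted here for brevity.

    // Illustrative recomputation of the clip after a handle of the adjustment frame is dragged.
    data class ClipRange(val startSec: Float, val durationSec: Float)

    enum class Handle { START_END, END_END }   // the two ends of the adjustment frame

    fun dragHandle(clip: ClipRange, handle: Handle, deltaPx: Float, pxPerSecond: Float): ClipRange {
        val deltaSec = deltaPx / pxPerSecond
        return when (handle) {
            // Case (2): the start-time handle moves the start time and changes the duration.
            Handle.START_END -> ClipRange(clip.startSec + deltaSec, clip.durationSec - deltaSec)
            // Case (3): the end-time handle only lengthens or shortens the duration.
            Handle.END_END -> ClipRange(clip.startSec, clip.durationSec + deltaSec)
        }
    }

    fun main() {
        val clip = ClipRange(startSec = 10f, durationSec = 29f)
        println(dragHandle(clip, Handle.START_END, deltaPx = 80f, pxPerSecond = 20f))  // start 14 s, 25 s long
        println(dragHandle(clip, Handle.END_END, deltaPx = -60f, pxPerSecond = 20f))   // start 10 s, 26 s long
    }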
In some embodiments, taking the electronic device being a mobile phone as an example, before receiving the user's adjustment operation on the first adjustment frame, the mobile phone may display the first clip interface 1303 shown in fig. 13 (a). Five thumbnails may be displayed in the first adjustment frame 13052 in the first clip interface 1303 shown in fig. 13 (a), and the rightmost side of the first adjustment frame 13052 coincides with the rightmost side of the first thumbnail 13051, i.e., no thumbnail exists to the right of the first adjustment frame 13052. Meanwhile, the rightmost side of the first adjustment frame 13052 has not yet reached the rightmost position allowed on the screen of the mobile phone, that is, the first adjustment frame 13052 still has room to move to the right.
In order to respond to as many user operations as possible and improve the user experience, in this case, if the user performs a rightward drag operation on the right handle of the first adjustment frame 13052, the mobile phone may, in response to the drag operation, move the first adjustment control 1305 (including the first adjustment frame 13052 and the first thumbnail 13051) to the right as a whole, until the first adjustment control 1305 reaches the outermost side of the movement range prescribed on the screen of the mobile phone. At this time, the mobile phone may display the first clip interface 1303 shown in (b) of fig. 13. Compared with (a) in fig. 13, the first adjustment control 1305 in this first clip interface 1303 has moved rightward to the far right of the screen.
In addition, when the first clip interface shown in (a) of fig. 13 changes to the first clip interface shown in (b) of fig. 13, the start time and duration of the first clip video 1304 do not change. Therefore, to preserve the user's viewing experience of the first clip video 1304, the first clip video 1304 keeps playing normally without interruption during this process.
In some embodiments, if the duration of the second video clip is less than or equal to the first preset duration and greater than or equal to the second preset duration, the length adjustment of the first adjustment frame will have a maximum length (the length of the first adjustment frame corresponding to the first preset duration) and a minimum length (the length of the first adjustment frame corresponding to the second preset duration).
In this case, the video processing method provided in the embodiment of the present application further includes: in response to the adjustment operation of the user on the first adjustment frame, if the length of the first adjustment frame is adjusted to the maximum length in the process of adjusting the length of the first adjustment frame, the electronic device displays first prompt information and does not increase the length of the first adjustment frame; the first prompt message is used for prompting that the duration of the first video capture reaches a first preset duration, and the first preset duration is the duration corresponding to the maximum length of the first regulating frame. If the length of the first adjusting frame is adjusted to the minimum length, the electronic equipment displays the second prompt information, and the length of the first adjusting frame is not reduced any more; the second prompting information is used for prompting that the duration of the first video capture reaches a second preset duration, and the second preset duration is the duration corresponding to the minimum length of the first regulating frame. In addition, in order to ensure that the operation area can be clearly seen when the user operates, the first prompt information and the second prompt information should be outside the operation area (for example, on the first thumbnail) of the user.
For example, taking the electronic device as a mobile phone, the length of the first adjusting frame is adjusted to the maximum length, and the mobile phone may display the first selection interface 1403 as shown in fig. 14 (a). The first screenshot 1403 may include a first prompt window 1407. For example, the first prompt window 1407 may include first prompt information therein. The first prompt message is used for prompting that the duration of the first video capture reaches a first preset duration, and the first preset duration is the duration corresponding to the maximum length of the first regulating frame. For example, the first hint information may be "to reach a maximum adjustable duration". Thereafter, if the user performs an adjustment operation to adjust the length of the first adjustment frame 14052 again, the mobile phone does not increase the length of the first adjustment frame 14052 in response to the operation, and the first prompt window 1407 is displayed for a certain period of time (for example, 2 seconds).
Similarly, when the length of the first adjustment frame is adjusted to the minimum length, the mobile phone may display the first selection interface 1403 as shown in fig. 14 (b). A second prompt window 1408 may be included in the first screenshot 1403. For example, the second prompt window 1408 may include second prompt information. The second prompting information is used for prompting that the duration of the first video capture reaches a second preset duration, and the second preset duration is the duration corresponding to the minimum length of the first regulating frame. For example, the second hint information may be "to reach a minimum adjustable duration". Thereafter, if the user performs an adjustment operation to adjust the length of the first adjustment frame 14052 again, the mobile phone does not decrease the length of the first adjustment frame 14052 in response to the operation, and the second alert window 1408 is displayed for a certain period (e.g., 2 seconds).
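A rough Kotlin sketch of this clamping behaviour, with the two prompts, might look as follows. The 29-second and 10-second values are the example durations used above, and the showPrompt callback (assumed to display a message for about 2 seconds) and the prompt texts are illustrative assumptions.

    // Illustrative clamp of a requested clip duration to the allowed range, with prompts.
    const val FIRST_PRESET_SEC = 29f    // maximum adjustable duration in the example
    const val SECOND_PRESET_SEC = 10f   // minimum adjustable duration in the example

    fun clampDuration(requestedSec: Float, showPrompt: (String) -> Unit): Float = when {
        requestedSec > FIRST_PRESET_SEC -> {
            showPrompt("Maximum adjustable duration reached")
            FIRST_PRESET_SEC    // do not grow the adjustment frame any further
        }
        requestedSec < SECOND_PRESET_SEC -> {
            showPrompt("Minimum adjustable duration reached")
            SECOND_PRESET_SEC   // do not shrink the adjustment frame any further
        }
        else -> requestedSec
    }

    fun main() {
        println(clampDuration(35f) { msg -> println("prompt: $msg") })  // prompt shown, returns 29.0
        println(clampDuration(8f) { msg -> println("prompt: $msg") })   // prompt shown, returns 10.0
        println(clampDuration(20f) { msg -> println("prompt: $msg") })  // no prompt, returns 20.0
    }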
In other embodiments, if the duration of the second clip video is required to be less than or equal to the first preset duration, the adjustment (decrease or increase) of the length of the first adjustment frame cannot go beyond the range between the maximum length (the length of the first adjustment frame corresponding to the first preset duration) and the minimum length (the length of the first adjustment frame corresponding to the second preset duration). In this case, in order to let the user know the maximum length that can be reached when adjusting the length of the first adjustment frame, the electronic device may, in response to the user's adjustment operation on the first adjustment frame, indicate the maximum adjustment range of the first adjustment frame during the length adjustment.
Take the electronic device being a mobile phone as an example, based on the scene example shown in (a) of fig. 12. Referring to fig. 15 (a), while the user is dragging the first adjustment frame 15052, the mobile phone may display the maximum contour line 15053 of the first adjustment frame 15052. The maximum contour line 15053 may be the contour of the first adjustment frame 15052 at its maximum length, and can thus be used to indicate the maximum adjustment range of the first adjustment frame.
Of course, in practice, the three cases (1), (2) and (3) may exist individually, or any two or all three of them may exist at the same time, depending on the actual requirements.
In addition, in the embodiment of the present application, during the process of adjusting the duration and/or start time of a clip video (e.g., the first clip video 1504) in a clip interface (e.g., the first clip interface 1503) by adjusting the length of the adjustment frame (e.g., the first adjustment frame 15052 in the first adjustment control 1505), the user should be able to know which part of the complete video (i.e., the complete video corresponding to the first clip video, such as the target video) is currently being adjusted. To this end, while the user drags a handle of the adjustment frame (e.g., the left or right handle of the first adjustment frame 15052), the electronic device may display, in the display area of the clip video, the specific image corresponding to the thumbnail at the position of the handle. After the user finishes adjusting the length of the adjustment frame, in order to let the user see the result, the clip video with the adjusted length may be played again from the beginning.
Take the electronic device being a mobile phone as an example, based on the scene example shown in (a) of fig. 15, and take the mobile phone responding to a rightward drag operation by the user on the right handle of the first adjustment frame 15052 as an example. Referring to fig. 15 (B), if the thumbnail at the position of the right handle of the first adjustment frame 15052 is thumbnail a, the mobile phone displays the specific image B corresponding to thumbnail a in the display area of the first clip video 1504. The same applies when the user drags the right handle of the first adjustment frame 15052 to the left, drags the left handle to the left, or drags the left handle to the right.
In this way, while adjusting the length of the adjustment frame, the user can know in time which position of the complete video is currently being adjusted, which improves the user's experience of editing and clipping the complete video.
(4) The electronic device determines the first clip video whose start time and/or duration have been updated as the second clip video, according to the user's sliding operation on the first thumbnail and/or adjustment operation on the first adjustment frame.
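For the preview-while-dragging behaviour described before item (4), a possible Kotlin sketch is given below: the dragged handle's position is converted to a timestamp in the complete video, the display area shows the single frame at that timestamp, and the re-clipped video is replayed from its first frame on release. The FramePreview interface and the pixel scale are assumptions for illustration.

    // Illustrative preview while a handle of the adjustment frame is being dragged.
    interface FramePreview {
        fun showFrameAt(timeSec: Float)   // assumed: renders the single frame at this time of the complete video
        fun playClipFromStart()           // assumed: replays the adjusted clip from its first frame
    }

    class HandleDragPreview(private val preview: FramePreview, private val pxPerSecond: Float) {

        // Called continuously while the handle is dragged; shows the frame under the handle.
        fun onHandleDragged(handlePositionPx: Float) {
            preview.showFrameAt(handlePositionPx / pxPerSecond)
        }

        // Called when the drag ends; lets the user see the re-clipped video from the beginning.
        fun onHandleReleased() {
            preview.playClipFromStart()
        }
    }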
S507, the electronic equipment responds to the triggering operation of a user on a first determined option in the first cut interface, and displays a film preview interface of the first video; the first video is obtained by randomly adding a video template to the second selected video.
For example, taking the electronic device being a mobile phone, referring to fig. 7 (b), the first clip interface 703 may include a first determination option D. Take the first clip interface 1303 shown in fig. 13 (b), i.e., the interface after the user has finished adjusting the start time and duration of the first clip video, as an example. The mobile phone may receive a triggering operation by the user on the first determination option D in the first clip interface 1303. In response to the trigger operation, the mobile phone may display a film preview interface 709 of the first video as shown in fig. 7 (d). The film preview interface 709 is used for the user to preview the first video. It can be seen that the total duration of the first video is 16 seconds (the one-second tail of the template adds one second; in practice the tail may be absent, in which case the total duration is 15 seconds).
S508, the electronic equipment responds to the triggering operation of the user on the first cancel option in the first cut interface, and displays a film preview interface of the second video; and the second video is obtained by randomly adding a video template to the first cut video.
The first cancel option is used for triggering the electronic equipment to cancel all the adjustment of the starting time and duration of the first video capture, and displaying a film preview interface of the second video after taking the initial first video capture as the second video.
For example, taking the electronic device being a mobile phone, referring to fig. 7 (b), the first clip interface 703 may include a first cancel option Q. Take the first clip interface 1103 shown in fig. 11 (b), i.e., the interface after the user has finished adjusting the start time and duration of the first clip video, as an example. The mobile phone may receive a triggering operation by the user on the first cancel option Q in the first clip interface 1103. In response to the trigger operation, the mobile phone may display a film preview interface 711 of the second video similar to that shown in fig. 7 (e). The film preview interface 711 is used for the user to preview the second video. It can be seen that the total duration of the second video is 30 seconds (the one-second tail of the template adds one second; in practice the tail may be absent, in which case the total duration is 29 seconds).
In some embodiments, the film preview interface of the first video may include a first clip control; referring to fig. 16 in conjunction with fig. 5, S507 may be followed by S514:
S514, the electronic equipment responds to the triggering operation of the user on the first cut-off control in the film-forming preview interface of the first video, and displays a second cut-off interface; the second screenshot interface is for displaying a second screenshot video.
For example, taking the electronic device being a mobile phone, referring to fig. 7 (d), the film preview interface 709 of the first video may include a first clip control 710. The mobile phone may display a second clip interface 1701 as shown in fig. 17 in response to a triggering operation (e.g., a click operation) by the user on the first clip control 710. The second clip interface 1701 is used for displaying a second clip video 1702. A third adjustment control 1703 may also be included in the second clip interface 1701, the third adjustment control 1703 including a third thumbnail 17031 and a third adjustment frame 17032. The third adjustment control 1703 may be used to adjust the start time and duration of the second clip video. The total duration of the second clip video in the second clip interface is 15 seconds; the second clip video may specifically be the first video with the video template removed.
In addition, in order to improve the user's look and feel, the mobile phone may center the third adjustment frame 17032 in the middle of the screen. For the specific implementation of adjusting the start time and duration of the second clip video in the second clip interface, reference may be made to the related description of adjusting the start time and duration of the first clip video with the first adjustment control in the foregoing embodiment, which is not repeated here.
Similarly, taking the electronic device being a mobile phone as an example, referring to fig. 7 (e), the film preview interface 711 of the second video may include a third clip control 712. In response to a triggering operation (e.g., a click operation) by the user on the third clip control 712, the mobile phone may display the first clip interface 703 shown in fig. 7 (b). For the related content of the first clip interface, reference may be made to the description in the foregoing embodiment, which is not repeated here.
S509, the electronic equipment responds to the triggering operation of a user on a one-touch large option in a video detail interface of the target video, and a film preview interface of a third video is displayed; the third video is obtained by randomly adding a video template for the target video; the tile preview interface of the third video includes a second cut control.
Take the target video with a duration of 29 seconds and the electronic device being a mobile phone as an example, based on the scene example shown in (a) of fig. 7. In response to a trigger operation on the one-touch tab option 702 in the video detail interface 701, the mobile phone may display a film preview interface 707 of the third video as shown in fig. 7 (c). Referring to fig. 7 (c), a second clip control 708 may be included in the film preview interface 707. The second clip control 708 is used to trigger the electronic device to display a third clip interface of the target video, so that the user can clip a suitable portion from the target video for adding the video template.
S510, the electronic equipment responds to the triggering operation of the second cut-off control in the slicing preview interface of the third video by the user, and the third cut-off interface is displayed; the third video capturing interface is used for displaying a third captured video obtained by capturing the third video from the starting time of the third video, and the duration of the third captured video is the duration of the third video; the third screenshot interface further includes a second adjustment control, where the second adjustment control is configured to adjust a starting time and a duration of the third screenshot video.
Illustratively, taking the target video as 29 seconds and the electronic device being a mobile phone, based on the scene example shown in (c) of fig. 7, referring to (a) of fig. 18, the mobile phone may receive a triggering operation by the user on the second clip control 1802 in the film preview interface 1801 of the third video. In response to the trigger operation, the mobile phone may display a third clip interface 1803 as shown in fig. 18 (b). The third clip interface 1803 is used to present a third clip video 1804. The third clip video 1804 is a video obtained by clipping the target video from the starting time of the target video, with a duration equal to the first preset duration (in the figure, the first preset duration is 29 seconds as an example); that is, the third clip video is initially the target video. The third clip interface 1803 further includes a second adjustment control 1805. The user may perform an adjustment operation on the second adjustment control 1805 to adjust the start time and duration (or start time and end time) of the third clip video 1804 according to the user's own needs.
For example, referring to fig. 18 (b), the second adjustment control 1805 may include a second thumbnail 18051 and a second adjustment box 18052. Specifically, the second thumbnail 18051 and the second adjustment frame 18052 may refer to the related description after S505 in the foregoing embodiment, which is not described herein.
Comparing fig. 18 (b) with fig. 7 (b), the difference is that, when the duration of the target video is less than or equal to the first preset duration, the second thumbnail 18051 is displayed entirely within the second adjustment frame 18052.
In addition, the other content included in the third section interface may refer to the related description of the first section interface after S505 in the foregoing embodiment, which is not described herein.
And S511, the electronic equipment responds to the adjustment operation of the user on the second adjustment control, and the starting time and the duration of the third video capture are updated to obtain a fourth video capture of the target video.
For the specific implementation of S511, reference may be made to the related description after S506 in the foregoing embodiment, which is not repeated here. The second adjustment control in S511 corresponds to the first adjustment control in S506, the third clip video corresponds to the first clip video in S506, and the fourth clip video corresponds to the second clip video in S506.
S512, the electronic equipment responds to the triggering operation of the user on the second determination option in the third interception interface, and a film preview interface of the fourth video is displayed; and the fourth video is obtained by randomly adding a video template to the fourth cut video.
The specific implementation of S512 may refer to the related description after S507 in the foregoing embodiment, which is not repeated herein. The third section interface in S512 corresponds to the first section interface in S507, the fourth video corresponds to the first video in S507, and the fourth section video corresponds to the second section video in S507.
And S513, the electronic equipment responds to the triggering operation of the user on the second cancel option in the third cut interface, and displays a film preview interface of the third video.
The specific implementation of S513 may refer to the related description after S508 of the foregoing embodiment, which is not repeated herein.
Based on the scheme corresponding to S501-S513, when the duration of the target video to which the user needs to add a video template is longer than the first preset duration (i.e., the duration corresponding to the video template), a second clip video whose duration does not exceed the first preset duration can be obtained by clipping the video, so as to meet the duration requirement of the video template. The purpose of adding a video template to a video whose duration is longer than the first preset duration, so as to generate a video with a special audio and video effect, is thereby achieved. On this basis, the technical scheme provided by the application can smoothly accomplish the purpose of adding video templates to all videos.
It will be appreciated that, in order to achieve the above-mentioned functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application can divide the functional modules of the electronic device according to the method example, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
In the case of dividing each functional module by corresponding each function, referring to fig. 19, an embodiment of the present application provides an electronic device including: a display module 191 and a processing module 192.
The display module 191 is configured to display a gallery display interface; the gallery display interface comprises at least one thumbnail of the video. The processing module 192 is configured to control the display module 191 to display a video detail interface of the target video in response to a selection operation of the thumbnail of the target video in the gallery display interface by the user received by the display module 191. The video detail interface comprises a template configuration control, the template configuration control is used for triggering the electronic equipment to randomly add a video template for the target video, and the video template is used for adding an audio and video effect for the target video. The processing module 192 is further configured to control the display module 191 to display a first screenshot interface if the duration of the target video is longer than a first preset duration in response to the triggering operation of the template configuration control by the user received by the display module 191; the first intercepting interface is used for displaying a first intercepting video obtained by intercepting the target video from the starting moment of the target video, and the duration of the first intercepting video is a first preset duration; the first screenshot interface further includes a first adjustment control, where the first adjustment control is configured to adjust a start time and a duration of the first screenshot video. The processing module 192 is further configured to update a start time and a duration of the first video clip to obtain a second video clip of the target video in response to the adjustment operation of the first adjustment control by the user received by the display module 191. The processing module 192 is further configured to display a film preview interface of the first video in response to the triggering operation of the user on the first determination option in the first capturing interface received by the display module 191; the first video is obtained by randomly adding a video template to the second selected video.
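The split between the display module 191 and the processing module 192 can be pictured with two small Kotlin interfaces, one for displaying interfaces and receiving operations and one for deciding what to display next. All names and signatures below are assumptions for illustration and do not reflect the claimed module structure.

    // Illustrative sketch of the functional split in fig. 19; all names are assumed.
    interface DisplayModule {
        fun showVideoDetail(videoId: String)
        fun showClipInterface(videoId: String, presetSec: Int)
        fun showFilmPreview(videoId: String)
    }

    class ProcessingModule(private val display: DisplayModule, private val presetSec: Int = 29) {

        // A thumbnail was selected in the gallery display interface.
        fun onThumbnailSelected(videoId: String) = display.showVideoDetail(videoId)

        // The template configuration control was triggered in the video detail interface.
        fun onTemplateControlTriggered(videoId: String, durationSec: Int) {
            // Branch of S504: clip first when the video exceeds the first preset duration.
            if (durationSec > presetSec) display.showClipInterface(videoId, presetSec)
            else display.showFilmPreview(videoId)
        }
    }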
Optionally, the first selection interface further includes a first cancel option; the processing module 192 is further configured to control the display module 191 to display a film preview interface of the second video in response to the triggering operation of the user on the first cancel option in the first section interface received by the display module 191; and the second video is obtained by randomly adding a video template to the first cut video.
Optionally, the processing module 192 is further configured to play the second video clip from the first frame image of the second video clip in response to the adjustment operation of the first adjustment control by the user received by the display module 191.
Optionally, the film preview interface of the first video further includes a first clip control; the processing module 192 is further configured to, after controlling the display module 191 to display the film preview interface of the first video in response to the triggering operation by the user on the first determination option received by the display module 191, control the display module 191 to display the second clip interface in response to the triggering operation by the user on the first clip control received by the display module 191; the second clip interface is used for displaying the second clip video; the second clip interface also includes an adjustment control for adjusting the duration and start time of the second clip video.
Optionally, the first section interface further includes a playing progress bar, where the progress bar is used to indicate a playing progress of the first section video; the processing module 192 is further configured to adjust a playing progress of the first selected video in response to the adjustment operation of the progress bar received by the display module 191.
Optionally, the first adjustment control includes a first thumbnail and a first adjustment frame, the first thumbnail is formed by arranging multiple frames of images in the target video, the multiple frames of images in the first adjustment frame are used for displaying the first video, and the length of the first adjustment frame corresponds to the duration of the first video; the processing module 192 specifically is configured to: in response to the sliding operation of the user on the first thumbnail received by the display module 191, adjusting the starting time of the first video clip according to the sliding direction and the sliding distance of the sliding operation, and updating the multi-frame image displayed in the first adjusting frame; in response to the adjustment operation of the first adjustment frame by the user received by the display module 191, the length of the first adjustment frame is adjusted, and the duration of the first video clip, or the start time and duration, is updated according to the length of the first adjustment frame, and the multi-frame image displayed in the first adjustment frame is updated.
Optionally, the processing module 192 is further configured to, in response to the adjustment operation of the first adjustment frame by the user received by the display module 191, indicate the maximum adjustment range of the first adjustment frame during the adjustment of the length of the first adjustment frame.
Optionally, the duration of the second video capture is less than or equal to the first preset duration and greater than or equal to the second preset duration; the processing module 192 is further configured to, in response to the adjustment operation of the first adjustment frame by the user received by the display module 191, control the display module 191 to display the first prompt message and not increase the length of the first adjustment frame any more if the length of the first adjustment frame is adjusted to the maximum length in the process of adjusting the length of the first adjustment frame; the first prompt message is used for prompting that the duration of the first video capture reaches a first preset duration, and the first preset duration is the duration corresponding to the maximum length of the first regulating frame; if the length of the first adjusting frame is adjusted to the minimum length, the control display module 191 displays the second prompt message, and does not decrease the length of the first adjusting frame any more; the second prompting information is used for prompting that the duration of the first video capture reaches a second preset duration, and the second preset duration is the duration corresponding to the minimum length of the first regulating frame.
Optionally, the processing module 192 is further configured to, after controlling the display module 191 to display the video detail interface of the target video in the gallery display interface in response to the selection operation of the target video by the user received by the display module 191, control the display module 191 to display the film preview interface of the third video if the duration of the target video is less than or equal to the first preset duration in response to the trigger operation of the template configuration control by the user received by the display module 191; the third video is obtained by randomly adding a video template for the target video; the tile preview interface of the third video includes a second cut control. The processing module 192 is further configured to control the display module 191 to display a third screenshot interface in response to a triggering operation of the second screenshot control in the zoomed preview interface of the third video, which is received by the display module 191 by a user; the third video capturing interface is used for displaying a third captured video obtained by capturing the third video from the starting time of the third video, and the duration of the third captured video is the duration of the third video; the third screenshot interface further includes a second adjustment control, where the second adjustment control is configured to adjust a starting time and a duration of the third screenshot video. The processing module 192 is further configured to update a start time and a duration of the third video clip to obtain a fourth video clip of the target video in response to the adjustment operation of the second adjustment control by the user received by the display module 191. The processing module 192 is further configured to control the display module 191 to display a film preview interface of the fourth video in response to the triggering operation of the second determination option in the third screenshot interface received by the display module 191 by the user; and the fourth video is obtained by randomly adding a video template to the fourth cut video.
Optionally, the third capture interface further includes a second cancel option; the processing module 192 is further configured to control the display module 191 to display the film preview interface of the third video in response to the trigger operation of the second cancel option in the third capture interface by the user received by the display module 191.
In combination with the foregoing embodiments, the display module 191 in the electronic device provided by the present application is mainly configured to support the electronic device in receiving the operation of the user on any interface displayed by the electronic device, and the processing module 192 is configured to support the electronic device in executing the related actions, or controlling the display module 191 to display the corresponding content, in response to the operations received by the display module 191. Illustratively, in the example shown in fig. 5, the display module 191 is configured to support the electronic device in displaying the video detail interface of the target video in S501 and S502, the first capture interface in S503 and S505, the film preview interface of the first video in S507, the film preview interface of the second video in S508, the film preview interface of the third video in S509, the third capture interface in S510, the film preview interface of the fourth video in S512, and the film preview interface of the third video in S513. The processing module 192 is configured to support the electronic device in performing the actions other than those corresponding to the display module 191 in the example shown in fig. 5, and to control the display module 191 to perform actions such as receiving operations and displaying interfaces. The remaining examples, such as the example shown in fig. 16, are handled in a similar manner.
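Illustratively, the division between the display module 191 and the processing module 192 can be pictured as two collaborating components: one that renders interfaces and forwards user operations, and one that decides which interface to display next. The Kotlin sketch below shows one hypothetical shape for such a split; the interface names, screen strings, and operation strings are assumptions and do not correspond to the figures of the present application.

```kotlin
// Sketch of a display-module / processing-module split: the display module
// renders interfaces and forwards user operations; the processing module
// reacts and decides which interface to display next. Names are illustrative.
interface DisplayModule {
    fun show(screen: String)                         // e.g. "video detail interface"
    fun setOperationListener(listener: (String) -> Unit)
}

class ProcessingModule(private val display: DisplayModule) {
    fun start() = display.setOperationListener { operation ->
        when (operation) {
            "tap template configuration control" -> display.show("first capture interface")
            "tap first determination option"     -> display.show("film preview interface of the first video")
            else                                 -> display.show("video detail interface")
        }
    }
}

// A trivial in-memory display module used only to exercise the wiring.
class FakeDisplay : DisplayModule {
    private var listener: ((String) -> Unit)? = null
    override fun show(screen: String) = println("displaying: $screen")
    override fun setOperationListener(listener: (String) -> Unit) {
        this.listener = listener
    }
    fun simulate(operation: String) {
        listener?.invoke(operation)
    }
}

fun main() {
    val display = FakeDisplay()
    ProcessingModule(display).start()
    display.simulate("tap template configuration control")  // -> first capture interface
}
```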
With respect to the electronic device in the above-described embodiments, the specific manner in which each module performs its operations has been described in detail in the foregoing embodiments of the video processing method, and is not described in detail here. For the relevant beneficial effects of the electronic device, reference may likewise be made to the beneficial effects of the video processing method, which are not repeated here.
An embodiment of the present application further provides an electronic device, comprising: a plurality of cameras, a display screen, a memory, and one or more processors; the cameras, the display screen, and the memory are coupled to the processor. The memory stores computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the video processing method provided by the foregoing embodiments. For the specific structure of the electronic device, reference may be made to the structure of the electronic device shown in fig. 6.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions that, when executed on an electronic device, cause the electronic device to perform a video processing method as provided in the foregoing embodiments.
Embodiments of the present application also provide a computer program product containing executable instructions that, when run on an electronic device, cause the electronic device to perform a video processing method as provided by the previous embodiments.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is illustrated by way of example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A video processing method, applied to an electronic device, the method comprising:
the electronic equipment displays a gallery display interface; the gallery display interface comprises a thumbnail of at least one video;
the electronic equipment responds to the selection operation of a user on the thumbnail of the target video in the gallery display interface and displays a video detail interface of the target video; the video detail interface comprises a template configuration control, wherein the template configuration control is used for triggering the electronic equipment to randomly add a video template for the target video, and the video template is used for adding an audio-video effect for the target video;
the electronic equipment displays, in response to a triggering operation of the template configuration control by the user, a first capture interface if the duration of the target video is greater than a first preset duration; the first capture interface is used for displaying a first captured video obtained by capturing the target video from the starting moment of the target video, and the duration of the first captured video is the first preset duration; the first capture interface further comprises a first adjustment control, and the first adjustment control is used for adjusting the starting moment and the duration of the first captured video;
the electronic equipment updates, in response to an adjustment operation of the first adjustment control by the user, the starting moment and the duration of the first captured video to obtain a second captured video of the target video; the duration of the second captured video is greater than or equal to a second preset duration, and the second preset duration is less than the first preset duration;
the electronic equipment displays, in response to a triggering operation of a first determination option in the first capture interface by the user, a film preview interface of a first video; the first video is obtained by randomly adding a video template to the second captured video; the film preview interface of the first video further comprises a first capture control;
the electronic equipment displays, in response to a triggering operation of the first capture control by the user, a second capture interface; the second capture interface is used for displaying the second captured video; and the second capture interface further comprises an adjustment control used for adjusting the duration and the starting moment of the second captured video.
2. The method of claim 1, wherein the first capture interface further comprises a first cancel option; the method further comprises:
the electronic equipment displays, in response to a triggering operation of the user on the first cancel option in the first capture interface, a film preview interface of a second video; and the second video is obtained by randomly adding a video template to the first captured video.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the electronic equipment plays, in response to the adjustment operation of the user on the first adjustment control, the second captured video from the first frame image of the second captured video.
4. The method according to any one of claims 1-3, wherein the first capture interface further comprises a play progress bar for indicating the playing progress of the first captured video; the method further comprises:
the electronic equipment adjusts, in response to an adjustment operation of the user on the play progress bar, the playing progress of the first captured video.
5. The method of any of claims 1-4, wherein the first adjustment control comprises a first thumbnail and a first adjustment frame, the first thumbnail is composed of an arrangement of a plurality of frames of images in the target video, the frames of images within the first adjustment frame are used for displaying the first captured video, and the length of the first adjustment frame corresponds to the duration of the first captured video;
the updating, by the electronic equipment in response to the adjustment operation of the user on the first adjustment control, of the starting moment and the duration of the first captured video comprises:
the electronic equipment adjusts, in response to a sliding operation of the user on the first thumbnail, the starting moment of the first captured video according to the sliding direction and the sliding distance of the sliding operation, and updates the frames of images displayed within the first adjustment frame;
the electronic equipment adjusts, in response to the adjustment operation of the user on the first adjustment frame, the length of the first adjustment frame, updates the duration, or the starting moment and the duration, of the first captured video according to the length of the first adjustment frame, and updates the frames of images displayed within the first adjustment frame.
6. The method of claim 5, wherein the method further comprises:
the electronic equipment indicates, in response to the adjustment operation of the user on the first adjustment frame, the maximum adjustment range of the first adjustment frame during the adjustment of the length of the first adjustment frame.
7. The method of claim 5 or 6, wherein the duration of the second captured video is less than or equal to the first preset duration and greater than or equal to the second preset duration; the method further comprises:
the electronic equipment responds to the adjustment operation of the user on the first adjustment frame, and during the process of adjusting the length of the first adjustment frame:
if the length of the first adjustment frame is adjusted to the maximum length, the electronic equipment displays a first prompt message and no longer increases the length of the first adjustment frame; the first prompt message is used for prompting that the duration of the first captured video has reached the first preset duration, and the first preset duration is the duration corresponding to the maximum length of the first adjustment frame;
if the length of the first adjustment frame is adjusted to the minimum length, the electronic equipment displays a second prompt message and no longer decreases the length of the first adjustment frame; the second prompt message is used for prompting that the duration of the first captured video has reached the second preset duration, and the second preset duration is the duration corresponding to the minimum length of the first adjustment frame.
8. The method of any of claims 1-7, wherein after the electronic equipment displays the video detail interface of the target video in response to the selection operation of the user on the thumbnail of the target video in the gallery display interface, the method further comprises:
the electronic equipment displays, in response to a triggering operation of the template configuration control by the user, a film preview interface of a third video if the duration of the target video is less than or equal to the first preset duration; the third video is obtained by randomly adding a video template to the target video; the film preview interface of the third video comprises a second capture control;
the electronic equipment displays, in response to a triggering operation of the user on the second capture control in the film preview interface of the third video, a third capture interface; the third capture interface is used for displaying a third captured video obtained by capturing the third video from the starting moment of the third video, and the duration of the third captured video is the duration of the third video; the third capture interface further comprises a second adjustment control, and the second adjustment control is used for adjusting the starting moment and the duration of the third captured video;
the electronic equipment updates, in response to an adjustment operation of the user on the second adjustment control, the starting moment and the duration of the third captured video to obtain a fourth captured video of the target video;
the electronic equipment displays, in response to a triggering operation of the user on a second determination option in the third capture interface, a film preview interface of a fourth video; and the fourth video is obtained by randomly adding a video template to the fourth captured video.
9. The method of claim 8, wherein the third capture interface further comprises a second cancel option; the method further comprises:
the electronic equipment displays, in response to a triggering operation of the user on the second cancel option in the third capture interface, the film preview interface of the third video.
10. An electronic device, comprising: at least one camera, a display screen, a memory, and one or more processors; the camera, the display screen and the memory are coupled with the processor; wherein the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the video processing method of any of claims 1-9.
11. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the video processing method of any of claims 1-9.
CN202210039516.1A 2021-06-16 2022-01-13 Video processing method and electronic equipment Active CN115484399B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2021106767093 2021-06-16
CN202110676709 2021-06-16
CN2021114340263 2021-11-29
CN202111434026 2021-11-29

Publications (2)

Publication Number Publication Date
CN115484399A (en) 2022-12-16
CN115484399B (en) 2023-12-12

Family

ID=84420763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210039516.1A Active CN115484399B (en) 2021-06-16 2022-01-13 Video processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115484399B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7362946B1 (en) * 1999-04-12 2008-04-22 Canon Kabushiki Kaisha Automated visual image editing system
CN102830939A (en) * 2012-09-13 2012-12-19 北京富年科技有限公司 Method for cutting video file of mobile terminal and mobile terminal
CN109523609A (en) * 2018-10-16 2019-03-26 华为技术有限公司 A kind of method and terminal of Edition Contains
CN110868636A (en) * 2019-12-06 2020-03-06 广州酷狗计算机科技有限公司 Video material intercepting method and device, storage medium and terminal
CN111541936A (en) * 2020-04-02 2020-08-14 腾讯科技(深圳)有限公司 Video and image processing method and device, electronic equipment and storage medium
CN111930994A (en) * 2020-07-14 2020-11-13 腾讯科技(深圳)有限公司 Video editing processing method and device, electronic equipment and storage medium
CN113325979A (en) * 2021-05-28 2021-08-31 北京中指讯博数据信息技术有限公司 Video generation method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN115484399A (en) 2022-12-16

Similar Documents

Publication Publication Date Title
KR20220082926A (en) Video shooting method and electronic device
US20230217098A1 (en) Shooting method, graphical interface, and related apparatus
WO2022068511A1 (en) Video generation method and electronic device
US20230188830A1 (en) Image Color Retention Method and Device
CN115484399B (en) Video processing method and electronic equipment
CN115484387B (en) Prompting method and electronic equipment
CN115442509B (en) Shooting method, user interface and electronic equipment
CN115480684A (en) Method for returning edited multimedia resource and electronic equipment
CN115484397B (en) Multimedia resource sharing method and electronic equipment
WO2021103919A1 (en) Composition recommendation method and electronic device
CN115002336A (en) Video information generation method, electronic device and medium
CN115484390B (en) Video shooting method and electronic equipment
CN115484394B (en) Guide use method of air separation gesture and electronic equipment
CN115484392B (en) Video shooting method and electronic equipment
EP4277257A1 (en) Filming method and electronic device
WO2023231696A1 (en) Photographing method and related device
CN115484393B (en) Abnormality prompting method and electronic equipment
WO2023226699A1 (en) Video recording method and apparatus, and storage medium
CN116634058B (en) Editing method of media resources, electronic equipment and readable storage medium
WO2023226695A9 (en) Video recording method and apparatus, and storage medium
WO2023226694A1 (en) Video recording method and apparatus, and storage medium
WO2022237317A1 (en) Display method and electronic device
EP4258130A1 (en) Method and apparatus for viewing multimedia content
CN115811656A (en) Video shooting method and electronic equipment
CN115484391A (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant