CN114363527A - Video generation method and electronic equipment - Google Patents

Info

Publication number
CN114363527A
Authority
CN
China
Prior art keywords
video
scene type
segment
type corresponding
electronic device
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011057180.9A
Other languages
Chinese (zh)
Other versions
CN114363527B (en)
Inventor
张韵叠
苏达
陈绍君
胡靓
徐迎庆
徐千尧
郭子淳
高家思
周雪怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Huawei Technologies Co Ltd
Original Assignee
Tsinghua University
Huawei Technologies Co Ltd
Application filed by Tsinghua University and Huawei Technologies Co Ltd
Priority to CN202011057180.9A
Priority to PCT/CN2021/116047 (WO2022068511A1)
Publication of CN114363527A
Application granted
Publication of CN114363527B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides a video generation method and an electronic device. The method comprises the following steps: the electronic device displays a first interface of a first application. After receiving a first operation acting on a first control, the electronic device determines that the arrangement order of a first material, a second material and a third material is a first order, the first order being different from a third order, and generates a first video from the first material, the second material and the third material according to the first order. After receiving a second operation acting on a second control, the electronic device determines that the arrangement order of the first material, the second material and the third material is a second order, the second order being different from the third order, and generates a second video from the first material, the second material and the third material according to the second order. The third order is the temporal order in which the first material, the second material and the third material were stored in the electronic device. In this way, the generated video has coherent visual continuity and a high-quality feel.

Description

Video generation method and electronic equipment
Technical Field
The embodiments of the present application relate to the field of electronic technologies, and in particular to a video generation method and an electronic device.
Background
With the growing popularity of short videos, users increasingly expect to generate videos quickly on electronic devices such as mobile phones. At present, videos generated by electronic devices have poor visual continuity and a low-quality feel, and cannot meet users' high expectations for the composition and cinematic quality of a video. Therefore, a method for generating videos with coherent visual continuity and a high-quality feel is needed.
Disclosure of Invention
The present application provides a video generation method and an electronic device, so that a video can be generated conveniently and quickly, with coherent visual continuity and a high-quality feel, thereby enhancing the sense of camera work and the cinematic feel of the video and improving the user experience.
In a first aspect, the present application provides a video generation method, including: an electronic device displays a first interface of a first application, the first interface including a first control and a second control; after receiving a first operation acting on the first control, the electronic device determines that the arrangement order of a first material, a second material and a third material is a first order, the first order being different from a third order, and generates a first video from the first material, the second material and the third material according to the first order; after receiving a second operation acting on the second control, the electronic device determines that the arrangement order of the first material, the second material and the third material is a second order, the second order being different from the third order, and generates a second video from the first material, the second material and the third material according to the second order. The first material, the second material and the third material are different image materials stored in the electronic device, and the third order is the temporal order in which the first material, the second material and the third material were stored in the electronic device.
In the method provided by the first aspect, the electronic device identifies the scene type of each material, matches a suitable video template, and adjusts the arrangement order of the materials based on the scene type set for each segment of the video template. Combined with the camera movement (moving mirror), speed and transition set for each segment of the template, a video with coherent visual continuity and a high-quality feel can be generated automatically, without manual editing by the user, which enhances the sense of camera work and the cinematic feel of the video and improves the user experience.
In one possible design, the first video is divided into a plurality of segments with beat points of the music as boundaries; the first material, the second material and the third material each appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; likewise, the first material, the second material and the third material each appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different. In this way, the generated video can have a professional sense of camera work and a cinematic feel.
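As an illustration of beat-based segmentation, the following Kotlin sketch (not taken from the patent; the beat times and total duration are hypothetical example values) splits a timeline into segments bounded by the music's beat points:

```kotlin
// Minimal sketch: beat points act as segment boundaries; the span before the
// first beat and after the last beat also become segments.
data class Segment(val startMs: Long, val endMs: Long) {
    val durationMs: Long get() = endMs - startMs
}

fun segmentsFromBeats(beatTimesMs: List<Long>, totalDurationMs: Long): List<Segment> {
    val boundaries = (listOf(0L) + beatTimesMs + listOf(totalDurationMs)).distinct().sorted()
    return boundaries.zipWithNext { start, end -> Segment(start, end) }
}

fun main() {
    val segments = segmentsFromBeats(beatTimesMs = listOf(800L, 1600L, 2500L), totalDurationMs = 3200L)
    println(segments)   // four segments: [0,800), [800,1600), [1600,2500), [2500,3200)
}
```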
In one possible design, the method further includes: the electronic device displays a second interface of the first application; after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material and the third material. In this way, the electronic device can generate a video with coherent visual continuity and a high-quality feel based on the materials selected by the user.
In one possible design, the method further includes: the electronic device determines to generate the first video from the first material, the second material, the third material and a fourth material, where the fourth material is an image material stored in the electronic device that is different from the first material, the second material and the third material. In this way, the electronic device can automatically generate a video based on the stored materials, meeting the user's immediate needs.
In one possible design, the first interface further includes a third control; the method further includes: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, where the third interface includes options for configuration information, and the configuration information includes at least one of the following parameters: duration, filter, frame, material or title; after receiving a fifth operation acting on an option of the configuration information, the electronic device generates a third video from the first material, the second material and the third material in the first order based on the configuration information. This enriches the types of generated videos and meets users' needs to adjust each parameter of a video.
In one possible design, the first interface further includes a fourth control; the method further includes: after generating the first video, the electronic device saves the first video in response to a fourth operation acting on the fourth control. This makes it convenient for the user to view and edit the generated video later.
In one possible design, the method specifically includes: the electronic device determines the scene type corresponding to the first material, the scene type corresponding to the second material and the scene type corresponding to the third material; based on these scene types and the scene type set for each segment in a first video template, the electronic device determines the material matching the scene type corresponding to a first segment, the first segment being any segment in the first video template, and arranges the materials corresponding to all segments in the first video template into the first order; based on the same scene types and the scene type set for each segment in a second video template, the electronic device determines the material matching the scene type corresponding to a second segment, the second segment being any segment in the second video template, and arranges the materials corresponding to all segments in the second video template into the second order. The first video template is different from the second video template; each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
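To make the matching-and-arrangement step concrete, here is a minimal Kotlin sketch under simplifying assumptions: each template segment declares an expected scene type, each material carries a recognized scene type, and a material matches a segment when the two types are equal or adjacent under the preset close/medium/long ordering described below. All names (SceneType, arrange, and so on) are illustrative, not from the patent:

```kotlin
enum class SceneType { CLOSE, MEDIUM, LONG }

data class Material(val id: String, val sceneType: SceneType)
data class TemplateSegment(val index: Int, val sceneType: SceneType)

// Scene types adjacent to each type under the preset close -> medium -> long ordering.
val adjacentTypes: Map<SceneType, Set<SceneType>> = mapOf(
    SceneType.CLOSE to setOf(SceneType.MEDIUM),
    SceneType.MEDIUM to setOf(SceneType.CLOSE, SceneType.LONG),
    SceneType.LONG to setOf(SceneType.MEDIUM),
)

// A material matches a segment when its scene type equals the segment's scene type
// or is adjacent to it under the preset ordering.
fun matches(material: Material, segment: TemplateSegment): Boolean =
    material.sceneType == segment.sceneType ||
        material.sceneType in adjacentTypes.getValue(segment.sceneType)

// Walk the template in order and pick a matching material for each segment,
// preferring an exact scene-type match and a material different from the one
// chosen for the previous segment, so adjacent segments use different materials.
fun arrange(materials: List<Material>, template: List<TemplateSegment>): List<Material> {
    val order = mutableListOf<Material>()
    for (segment in template) {
        val previousId = order.lastOrNull()?.id
        val pick = materials.firstOrNull { it.sceneType == segment.sceneType && it.id != previousId }
            ?: materials.firstOrNull { matches(it, segment) && it.id != previousId }
            ?: materials.first { matches(it, segment) }   // throws if no material matches at all
        order += pick
    }
    return order
}

fun main() {
    val materials = listOf(
        Material("A", SceneType.CLOSE),
        Material("B", SceneType.LONG),
        Material("C", SceneType.MEDIUM),
    )
    val template = listOf(
        TemplateSegment(0, SceneType.CLOSE),
        TemplateSegment(1, SceneType.LONG),
        TemplateSegment(2, SceneType.MEDIUM),
    )
    println(arrange(materials, template).map { it.id })   // prints [A, B, C]
}
```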
In one possible design, the method further includes: the electronic device generates the first video from the first material, the second material and the third material according to the first order and the camera movement (moving mirror) effect, speed effect and transition effect set for each segment in the first video template; and the electronic device generates the second video from the first material, the second material and the third material according to the second order and the camera movement effect, speed effect and transition effect set for each segment in the second video template.
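A video template therefore bundles, for every segment, the expected scene type together with the effects applied to the material filling that segment. The following Kotlin sketch shows one possible shape for such a template; the field names, enum values and asset path are illustrative assumptions, not definitions from the patent:

```kotlin
enum class CameraMove { STATIC, PAN_LEFT, PAN_RIGHT, ZOOM_IN, ZOOM_OUT }
enum class Transition { CUT, FADE, WIPE, DISSOLVE }

// Per-segment settings of a template: the scene type the segment expects plus
// the camera-movement, speed and transition effects applied to its material.
data class SegmentStyle(
    val expectedScene: String,     // e.g. "close", "medium", "long"
    val cameraMove: CameraMove,    // simulated camera movement over the material
    val speedFactor: Float,        // playback-speed multiplier, e.g. 0.5f for slow motion
    val transition: Transition,    // transition into the next segment
)

// A template pairs background music (whose beat points bound the segments)
// with the ordered list of per-segment styles.
data class VideoTemplate(val musicUri: String, val segments: List<SegmentStyle>)

fun main() {
    val template = VideoTemplate(
        musicUri = "asset://music/travel_theme.mp3",   // hypothetical asset path
        segments = listOf(
            SegmentStyle("long", CameraMove.ZOOM_IN, 1.0f, Transition.CUT),
            SegmentStyle("medium", CameraMove.PAN_RIGHT, 0.5f, Transition.FADE),
            SegmentStyle("close", CameraMove.STATIC, 1.0f, Transition.DISSOLVE),
        ),
    )
    println(template.segments.size)   // 3
}
```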
In one possible design, when the first material is a picture material, the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or is adjacent to it in the ordering defined by a preset rule, the electronic device determines the first material to be a material matching the scene type corresponding to the first segment; and when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or is adjacent to it in the ordering defined by the preset rule, the electronic device determines the first material to be a material matching the scene type corresponding to the second segment.
In one possible design, when the first material is a video material, the method specifically includes: when the scene type corresponding to a fourth material is the same as the scene type corresponding to the first segment, or is adjacent to it in the ordering defined by the preset rule, and the duration of the fourth material is equal to the duration of the first segment, the electronic device cuts the fourth material out of the first material and determines it to be a material matching the scene type corresponding to the first segment; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or is adjacent to it in the ordering defined by the preset rule, and the duration of the fourth material is equal to the duration of the second segment, the electronic device cuts the fourth material out of the first material and determines it to be a material matching the scene type corresponding to the second segment. The fourth material is part or all of the first material.
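As an illustration of cutting a sub-clip whose duration equals the target segment, here is a minimal Kotlin sketch; it assumes the matching window inside the source video has already been identified by scene recognition, and all names and numbers are hypothetical:

```kotlin
data class VideoMaterial(val id: String, val durationMs: Long)
data class Clip(val sourceId: String, val startMs: Long, val endMs: Long)

// Cut a clip of exactly the segment's duration out of the matching window of
// the source material, centring the clip inside the window when the window is
// longer than the segment. Returns null if the window is too short.
fun cutToSegment(
    source: VideoMaterial,
    windowStartMs: Long,
    windowEndMs: Long,
    segmentDurationMs: Long,
): Clip? {
    val windowLength = windowEndMs - windowStartMs
    if (windowLength < segmentDurationMs) return null
    val start = windowStartMs + (windowLength - segmentDurationMs) / 2
    return Clip(source.id, start, start + segmentDurationMs)
}

fun main() {
    val source = VideoMaterial("clip_001", durationMs = 12_000L)
    // Hypothetical: scene recognition found a matching window between 2 s and 7 s.
    println(cutToSegment(source, 2_000L, 7_000L, segmentDurationMs = 1_500L))
    // Clip(sourceId=clip_001, startMs=3750, endMs=5250)
}
```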
In one possible design, the scene types, in the ordering defined by the preset rule, are: close shot, medium shot and long shot; the scene type adjacent to the close shot is the medium shot, the scene types adjacent to the medium shot are the close shot and the long shot, and the scene type adjacent to the long shot is the medium shot.
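In other words, "adjacent" here means neighbouring in the preset ordering of scene types. A tiny Kotlin sketch of that reading follows (the ordering list mirrors the close/medium/long rule above; the function name is illustrative):

```kotlin
import kotlin.math.abs

// Preset ordering of scene types: close -> medium -> long.
val presetOrder = listOf("close", "medium", "long")

// Two scene types are adjacent when they sit next to each other in the ordering.
fun isAdjacent(a: String, b: String): Boolean {
    val ia = presetOrder.indexOf(a)
    val ib = presetOrder.indexOf(b)
    return ia >= 0 && ib >= 0 && abs(ia - ib) == 1
}

fun main() {
    println(isAdjacent("close", "medium"))   // true
    println(isAdjacent("close", "long"))     // false
    println(isAdjacent("medium", "long"))    // true
}
```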
In one possible design, the first application is a gallery application of the electronic device.
In a second aspect, the present application provides an electronic device comprising: a memory and a processor; the memory is used for storing program instructions; the processor is configured to invoke the program instructions in the memory to cause the electronic device to perform the video generation method of the first aspect and any one of the possible designs of the first aspect.
In a third aspect, the present application provides a chip system, which is applied to an electronic device including a memory, a display screen, and a sensor; the chip system includes: a processor; the electronic device performs the video generation method of the first aspect and any one of the possible designs of the first aspect when the processor executes the computer instructions stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes an electronic device to implement the video generation method of the first aspect and any one of the possible designs of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising execution instructions stored in a readable storage medium; at least one processor of the electronic device can read the execution instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the electronic device to implement the video generation method of the first aspect and any one of the possible designs of the first aspect.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present application;
FIGS. 3A-3T are schematic diagrams of human-machine interaction interfaces provided by an embodiment of the present application;
fig. 4A to fig. 4J are schematic diagrams illustrating effects of applying camera movement (moving mirror) to a picture material according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating effects of applying different speeds to a picture material according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating an effect of applying a transition between picture materials according to an embodiment of the present application;
FIG. 7 is a schematic illustration of scene types of story type material provided in accordance with an embodiment of the present application;
8A-8E are schematic diagrams illustrating the playing of a video generated based on material according to an embodiment of the present application;
fig. 9 is a schematic diagram of a video generation method according to an embodiment of the present application.
Detailed Description
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following items" or a similar expression refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b and c, where a, b and c may each be singular or plural. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the present application does not constitute a specific limitation to the electronic device 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in the present application is only an exemplary illustration, and does not constitute a limitation on the structure of the electronic device 100. In other embodiments, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), code division multiple access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
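The following Kotlin sketch is purely illustrative of the threshold rule described above (it is not Android API code, and the threshold value is a hypothetical normalized figure): a touch on the same icon triggers different instructions depending on whether its intensity reaches a first pressure threshold.

```kotlin
// Hypothetical normalized touch intensity above which a "hard press" is assumed.
const val FIRST_PRESSURE_THRESHOLD = 0.6f

// Dispatch the instruction for a touch on the short message application icon
// based on the detected touch intensity.
fun dispatchMessageIconTouch(intensity: Float): String =
    if (intensity < FIRST_PRESSURE_THRESHOLD) "view the short message"
    else "create a new short message"

fun main() {
    println(dispatchMessageIconTouch(0.3f))   // view the short message
    println(dispatchMessageIconTouch(0.9f))   // create a new short message
}
```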
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of a flip leather case using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D, and then set features such as automatic unlocking upon flipping open according to the detected opening or closing state of the leather case or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in a leather-case mode or a pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, electronic device 100 implements a temperature processing strategy using the temperature detected by temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs a reduction in performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold to avoid the low temperature causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
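As an illustration of the temperature strategy described above, the following Kotlin sketch maps temperature readings to actions; the threshold values are hypothetical, and in a real device the policy lives in the thermal-management firmware or operating system rather than in application code.

```kotlin
enum class ThermalAction { THROTTLE_NEARBY_PROCESSOR, HEAT_BATTERY, BOOST_BATTERY_VOLTAGE, NONE }

// Hypothetical thresholds: throttle when too hot, heat the battery when cold,
// boost the battery output voltage when very cold to avoid abnormal shutdown.
fun thermalPolicy(tempCelsius: Float): ThermalAction = when {
    tempCelsius > 45f  -> ThermalAction.THROTTLE_NEARBY_PROCESSOR
    tempCelsius < -10f -> ThermalAction.BOOST_BATTERY_VOLTAGE
    tempCelsius < 0f   -> ThermalAction.HEAT_BATTERY
    else               -> ThermalAction.NONE
}

fun main() {
    println(thermalPolicy(50f))    // THROTTLE_NEARBY_PROCESSOR
    println(thermalPolicy(-5f))    // HEAT_BATTERY
    println(thermalPolicy(-20f))   // BOOST_BATTERY_VOLTAGE
}
```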
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic apparatus 100 by being inserted into the SIM card interface 195 or being pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. The same SIM card interface 195 can be inserted with multiple cards at the same time. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as communication and data communication. In some embodiments, the electronic device 100 employs esims, namely: an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example to exemplarily describe the software structure of the electronic device 100. The embodiment of the present application does not limit the type of the operating system of the electronic device; for example, it may be an Android system, a Linux system, a Windows system, an iOS system, a HarmonyOS (Hongmeng OS), or the like.
Referring to fig. 2, fig. 2 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure. As shown in fig. 2, the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, from top to bottom, an application layer (APP), an application framework layer (APP framework), an Android runtime (Android runtime) and system libraries (libraries), and a kernel layer (kernel).
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include Applications (APPs) such as camera, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, game, chat, shopping, travel, instant messaging (e.g., short message), smart home, device control, etc.
The smart home application can be used for controlling or managing home equipment with a networking function. For example, household equipment may include electric lights, televisions, and air conditioners. For another example, the household equipment may further include a security door lock, a sound box, a floor sweeping robot, a socket, a body fat scale, a table lamp, an air purifier, a refrigerator, a washing machine, a water heater, a microwave oven, an electric cooker, a curtain, a fan, a television, a set-top box, a door and window, and the like.
In addition, the application package may further include: main screen (i.e. desktop), minus one screen, control center, notification center, etc.
The minus one screen, which may also be referred to as the "-1 screen" or negative screen, is the user interface (UI) displayed when the user swipes right on the main screen of the electronic device until reaching the leftmost screen. For example, the minus one screen can be used to place shortcut service functions and notification messages, such as global search, shortcut entries to a particular page of an application (payment codes, WeChat, etc.), instant messages and reminders (express delivery messages, expense messages, commute traffic, taxi trip messages, schedule messages, etc.), dynamic follow-ups (football stands, basketball stands, stock information, etc.), and the like. The control center is the swipe-up message notification bar of the electronic device, namely the user interface displayed when the user swipes up from the bottom of the electronic device. The notification center is the pull-down message notification bar of the electronic device, namely the user interface displayed when the user swipes down from the top of the electronic device.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs, such as managing window states, attributes, view addition, deletion and update, window order, and message collection and processing. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, capture the screen (take screenshots), and the like. In addition, the window manager is the entry for external access to windows.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
A resource manager (resource manager) provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to an application.
The notification manager enables applications to display notification information in the status bar and can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scrolling text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The android runtime includes a core library and a virtual machine. And the Android runtime is responsible for scheduling and managing the Android system.
The core library comprises two parts: one part is the functional interfaces that the Java language needs to call, and the other part is the core library of the Android system.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), a 2D graphics engine (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The following describes exemplary software and hardware workflow of the electronic device 100 with a scenario of playing sound using a smart speaker.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a click operation, and the control corresponding to the click operation as the icon of the smart speaker application as an example: the smart speaker application calls the interface of the application framework layer to start the smart speaker application, then starts the audio driver by calling the kernel layer, and converts the audio electrical signal into a sound signal through the speaker 170A.
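For illustration only, a minimal Kotlin sketch of what the application-layer side of this flow might look like is given below. The activity class and the sound resource id are hypothetical placeholders introduced for the example; this is not how the smart speaker application of this embodiment is actually defined.

```kotlin
import android.app.Activity
import android.content.Context
import android.content.Intent
import android.media.MediaPlayer

// Hypothetical activity standing in for the smart speaker application's entry point.
class SmartSpeakerActivity : Activity()

// soundResId is a hypothetical raw audio resource id (e.g. R.raw.chime in a real project).
fun onSmartSpeakerIconClicked(context: Context, soundResId: Int) {
    // After the framework identifies the click as hitting the smart speaker icon,
    // the application starts its activity through the application framework layer.
    context.startActivity(
        Intent(context, SmartSpeakerActivity::class.java)
            .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    )
    // Playing audio ultimately drives the audio driver in the kernel layer; the speaker
    // converts the electrical audio signal into a sound signal.
    MediaPlayer.create(context, soundResId)?.start()
}
```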
It is to be understood that the illustrated structure of the present application does not constitute a specific limitation to the electronic device 100. In other embodiments, electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The technical solutions in the following embodiments can be implemented in the electronic device 100 having the above hardware architecture and software architecture.
The embodiment of the application provides a video generation method and an electronic device. The electronic device identifies the scene types of the materials, matches a suitable video template, adjusts the arrangement order of the materials based on the scene types set in the video template, and, in combination with the moving mirrors (camera movements), speeds and transitions set in the video template, automatically generates a video. As a result, the generated video is visually continuous and of high quality, the sense of camera work and the cinematic feel of the video are enhanced, and the user experience is improved. In addition, the user can manually adjust parameters of the video such as duration, filter and aspect ratio, so that actual user requirements are met and the types of videos are enriched.
The electronic device may be a mobile phone, a tablet computer, a wearable device, an on-board device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a smart television, a smart screen, a high definition television, a 4K television, a smart speaker, a smart projector, and the like, and the specific type of the electronic device is not limited in any way in the embodiment of the present application.
In the following, some terms related to the embodiments of the present application are explained to facilitate understanding by those skilled in the art.
1. The material may be understood as picture material or video material stored in the electronic device. It should be noted that the picture material and the photo material mentioned in the embodiments of the present application have the same meaning. The picture material may be obtained by shooting by the electronic device, may also be obtained by downloading from a server by the electronic device, and may also be received by the electronic device from other electronic devices, which is not limited in this embodiment of the application.
2. The scene type (shot scale) can be understood as the difference in how much of the subject appears in the frame, caused by the difference in distance between the shooting device and the subject. The shooting device may be the electronic device itself, or may be a device in communication connection with the electronic device, which is not limited in this application.
In the embodiment of the present application, the division of scene types may include various implementations. It should be noted that the scene type mentioned in the embodiments of the present application refers to the shot-scale (scene) type described above.
In some embodiments, the scene types may be divided into three types, from near to far: a close view, a medium view, and a distant view. For example, a close view covers the human body above the chest, a medium view covers the human body above the thighs, and a distant view covers the cases other than the close view and the medium view.
In other embodiments, the scene types may be divided into five types, from near to far: close-up, close view, medium view, panorama, and distant view. For example, a close-up covers the human body above the shoulders, a close view covers the human body above the chest, a medium view covers the human body above the knees, a panorama covers the entire human body and its surroundings, and a distant view covers the environment in which the subject is located.
In the embodiment of the present application, the scene type corresponding to a video material may be regarded as a set of the scene types of its frames, treated as a plurality of picture materials. In general, the electronic device may record, for each scene type, a start time and a duration, or a start time and an end time, or a start time, a duration, and an end time. Moreover, by adopting technologies such as face recognition, semantic recognition, salient feature recognition, and semantic segmentation, the electronic device can determine the scene type of a material, that is, determine the scene type of a picture material.
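For clarity, the timed scene-type records described above could be represented roughly as follows. This is a minimal Kotlin sketch; the type names and fields are assumptions for illustration and are not taken from this embodiment.

```kotlin
// Illustrative sketch of how the per-material scene-type records described above could be held.
// Type names and fields are assumptions for clarity, not definitions from this embodiment.
enum class SceneType { CLOSE_UP, CLOSE_VIEW, MEDIUM_VIEW, PANORAMA, DISTANT_VIEW }

// One recognized span inside a video material: a scene type plus its timing.
data class SceneSegment(
    val sceneType: SceneType,
    val startMs: Long,      // start time of the span
    val durationMs: Long    // duration; the end time is startMs + durationMs
) {
    val endMs: Long get() = startMs + durationMs
}

// A picture material has a single scene type; a video material has a list of timed segments.
data class VideoMaterialScenes(val materialId: String, val segments: List<SceneSegment>)
```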
In the following, a specific implementation manner of determining the scene type of any one material by the electronic device will be described with reference to the embodiments.
A. Face close-up and face close view
The electronic equipment determines a face recognition frame of any material based on a face recognition technology.
When the area of the face recognition box is larger than the threshold value A1, the electronic equipment judges that the scene of the material is a face close-up.
When the area of the face recognition frame is larger than the threshold A2 and smaller than the threshold A1, the electronic device determines that the scene type of the material is a face close view.
The specific values of the threshold a1 and the threshold a2 may be set according to factors such as empirical values and human face recognition techniques.
B. Character close-up and character close view
The electronic equipment carries out face recognition on any material based on a face recognition technology.
When the recognition result indicates that no human face is present but person-related semantics (such as a side face or a back view of a person) are present, the electronic device can obtain a person recognition frame based on the area of the head.
When the area of the person recognition frame is larger than the threshold B1, the electronic device determines that the scene type of the material is a character close-up.
When the area of the person recognition frame is larger than the threshold B2 and smaller than the threshold B1, the electronic device determines that the scene type of the material is a character close view.
The specific values of the threshold B1 and the threshold B2 may be set according to factors such as empirical values.
C. Food close-up and food close view
The electronic device determines a semantic recognition result and a salient feature recognition result of any material based on a semantic segmentation technology and a salient feature recognition technology.
When the semantic recognition result indicates that the area of the food is larger than the threshold C1, the salient feature result indicates that the salient area is larger than the threshold C2, and the food area overlaps the salient area, the electronic device determines that the scene type of the material is a food close-up.
When the semantic recognition result indicates that the area of the food is larger than the threshold C1 and the salient feature result indicates that the salient area is smaller than the threshold C2, the electronic device determines that the scene type of the material is a food close view.
The specific values of the threshold C1 and the threshold C2 may be set according to factors such as empirical values.
D. Non-character large-aperture close view
When a picture of any material taken in large-aperture mode is detected, or a heavily defocused (large virtual focus) image of the material is detected, the electronic device determines that the scene type of the material is a non-character large-aperture close view.
E. Salient flower close view and salient pet close view
The electronic device determines a semantic recognition result and a salient feature recognition result of any material based on a semantic segmentation technology and a salient feature recognition technology.
When the semantic recognition result indicates that the area of the flower is larger than the threshold D1, the salient feature result indicates that the salient area is larger than the threshold D2, and the flower area overlaps the salient area, the electronic device determines that the scene type of the material is a salient flower close view.
The specific values of the threshold D1 and the threshold D2 may be set according to factors such as empirical values.
When the semantic recognition result indicates that the area of the pet is larger than the threshold E1, the salient feature result indicates that the salient area is larger than the threshold E2, and the pet area overlaps the salient area, the electronic device determines that the scene type of the material is a salient pet close view.
The specific values of the threshold E1 and the threshold E2 may be set according to factors such as empirical values.
F. Character medium view
The electronic equipment carries out face recognition on any material based on a face recognition technology.
When the recognition result indicates that no face or segmentation result conforming to a character close view appears, or that no complete person fully enters the frame (for example, the torso extends beyond the frame edge and the face or head is smaller than a threshold), the electronic device determines that the scene type of the material is a character medium view.
G. Salient distant view
The electronic equipment determines the salient feature recognition result of any material based on the salient feature recognition technology.
When a saliency result exists and the saliency result indicates that the salient area is smaller than the threshold F (for example, the material is a picture of a camel in the desert, where the camel is the saliency result), the electronic device determines that the scene type of the material is a salient distant view.
H. Landscape distant view
The electronic equipment determines the picture segmentation result of any material based on a semantic segmentation technology.
When the picture segmentation result indicates that a preset target occupies an area of the material larger than the threshold G, the electronic device determines that the scene type of the material is a landscape distant view.
The threshold G may be set to be greater than or equal to 90%, and the specific value of the threshold G is not limited in this embodiment. The preset target may be a landscape feature such as a sea, sky, mountain, etc.
I. Others
When the electronic device cannot identify the scene type of the material based on the above technologies, the electronic device determines that the scene type of the material is a medium view.
Among the above, A to E are close views, and the certainty order is: character close-up (i.e., face close-up and character close-up), food close-up, character close view (i.e., face close view and character close view), non-character large-aperture close view, food close view, salient flower close view and salient pet close view. G and H are distant views, and the certainty order is H > G. F and I are medium views.
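The rules A to I above reduce to area-threshold checks over recognition results. The following Kotlin sketch illustrates only the face-based branch (rule A) with hypothetical threshold values; it illustrates the shape of the decision and is not the classification algorithm of this embodiment.

```kotlin
// Illustrative sketch of the face-based decision (rule A above).
// The thresholds and the FaceResult type are hypothetical; real values would be tuned
// empirically against the device's face-recognition output.
data class FaceResult(val faceBoxArea: Double?)   // area of the face recognition frame, if any

const val THRESHOLD_A1 = 0.25   // hypothetical value: fraction of the picture area
const val THRESHOLD_A2 = 0.10   // hypothetical value: fraction of the picture area

fun classifyByFace(face: FaceResult): String? {
    val area = face.faceBoxArea ?: return null     // no face: fall through to rules B to I
    return when {
        area > THRESHOLD_A1 -> "face close-up"     // first branch of rule A
        area > THRESHOLD_A2 -> "face close view"   // second branch of rule A
        else -> null                               // let the remaining rules decide
    }
}
```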
3. The moving mirror, also called camera movement or lens movement, mainly refers to the movement of the lens. In the embodiment of the application, the moving mirror is related to the type of the material; that is, the moving mirror corresponding to a picture material and the moving mirror corresponding to a video material may be the same or different.
4. A transition can be understood as the change or switch between paragraphs or between scenes. Each paragraph (the smallest unit of a video is a shot; shots connected together form a shot sequence) has a single, relatively complete meaning, such as expressing an action process, a correlation, or an idea. A paragraph is a complete narrative level in a video, like a scene in a drama or a chapter in a novel; the paragraphs connected together form the complete video. Therefore, paragraphs are the most basic structural form of a video, and the structural hierarchy of the video's content is expressed through paragraphs.
5. Scene type, moving mirror, speed and transition set in video template
A video template may be understood as the theme or genre of a video. The types of the video template may include, but are not limited to: travel, parent-child, party, sport, food, scene, vintage, city, night screen, humanity, etc.
The parameters in any one of the video templates may include, but are not limited to: scene type, mirror motion, speed, transition, etc. Typically, the different video templates differ in at least one of corresponding scene type, motion mirror, speed, and transition.
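A rough Kotlin sketch of the parameters a video template bundles, per the description above, might look as follows. The enum members and field names are illustrative assumptions, not definitions from this embodiment.

```kotlin
// Illustrative sketch of the parameters a video template carries, per the description above.
// Enum members and field names are assumptions for readability, not this embodiment's definitions.
enum class CameraMove { STILL, PAN, ZOOM_IN, ZOOM_OUT }   // example "moving mirror" options
enum class Transition { CUT, FADE, WIPE }                 // example transition options

data class TemplateSegment(
    val sceneType: String,       // e.g. "close view", "medium view", "distant view"
    val cameraMove: CameraMove,  // moving mirror applied to this segment
    val speed: Double,           // playback speed factor for this segment
    val transition: Transition   // transition into the next segment
)

data class VideoTemplate(
    val theme: String,                    // e.g. "travel", "parent-child", "food"
    val music: String,                    // music track associated with the template
    val segments: List<TemplateSegment>
)
```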
In the embodiment of the application, the electronic device has the function of generating videos from stored materials, so that one or more picture materials and/or video materials in the electronic device can be used to generate a video. Moreover, the electronic device provides the user with multiple entry ways for video generation, so that the user can generate a video promptly and conveniently, which improves convenience for the user.
In the following, taking the gallery application of the electronic device as the entry for generating a video, the method for generating a video from stored materials by the electronic device according to the embodiment of the present application is described in detail with reference to mode one, mode two and mode three. It should be noted that, in the embodiments of the present application, the entry for generating a video includes, but is not limited to, the gallery application, and the entry ways include, but are not limited to, the three ways described below.
Mode one
Referring to fig. 3A to 3F, fig. 3A to 3F are schematic diagrams of a human-computer interaction interface according to an embodiment of the present application. For convenience of description, fig. 3A to 3F are exemplarily illustrated by taking an electronic device as a mobile phone.
The handset may display a user interface 11 as exemplarily shown in fig. 3A. The user interface 11 may be a Home screen (Home screen) of a desktop, and the user interface 11 may include, but is not limited to: status bar, navigation bar, calendar indicator, weather indicator, and a plurality of application icons, etc. The application icons may include: icon 301 of the gallery application, the application icon may further include: such as an icon of a Huawei video application, an icon of a music application, an icon of a cell phone housekeeper application, an icon of a setting application, an icon of a Huawei market application, an icon of a smart life application, an icon of a sports health application, an icon of a call application, an icon of an instant messaging application, an icon of a browser application, an icon of a camera application, and the like.
After detecting that the user performs an operation of opening the gallery application in the user interface 11 shown in fig. 3A (e.g., clicking the icon 301 of the gallery application), the mobile phone may display the user interface 12 exemplarily shown in fig. 3B, where the user interface 12 is used to display a page corresponding to the album category in the gallery application.
The user interface 12 may include: a control 3021 for entering a display interface containing all picture materials and/or video materials in the mobile phone, and a control 3023 for entering the display interface corresponding to the album category in the gallery application.
In the embodiment of the present application, the specific implementation manner of the user interface 12 may include various manners. For ease of illustration, in FIG. 3B, the user interface 12 is divided into two groups.
The first group includes two parts. The title of the first group is illustrated in fig. 3B using the text "album" as an example.
The first section provides a search box for a user to search for picture material and/or video material by keywords such as photos, people, places, etc.
The second part includes the control 3021, as well as a control for entering a display interface containing only video materials.
The second group displays pictures obtained by screen capture or by certain applications. The title of the second group is illustrated in fig. 3B using the text "other albums (3)" and a rounded rectangular frame as an example.
In addition, the user interface 12 further includes: widget 3022, widget 3024, and widget 3025. The control 3022 is used for entering a display interface corresponding to the photo category in the gallery application. The control 3024 is used to enter a display interface corresponding to the time category in the gallery application. The control 3025 is used to enter a display interface corresponding to a discovery category in the gallery application.
In addition, the user interface 12 may further include: controls for implementing functions in the user interface 12 such as deleting existing groupings, changing names of existing groupings, and controls for adding new groupings in the user interface 12.
After detecting that the user performs an operation such as clicking on the widget 3021 in the user interface 12 shown in fig. 3B, the mobile phone may display the user interface 13 exemplarily shown in fig. 3C, where the user interface 13 is a display interface of all picture materials and/or video materials in the mobile phone. In the embodiment of the present application, parameters such as the display number of picture materials, the display area of picture materials, the display position of picture materials, the display content of video materials, the display number of video materials, the display area of video materials, the display position of video materials, and the material sequence of each type in the user interface 13 are not limited.
For convenience of explanation, in fig. 3C, in time order from near to far from the current time, the user interface 13 displays: a video material 3031, a picture material 3032, a picture material 3033, a video material 3034, a picture material 3035, a picture material 3036, a picture material 3037, and a video material 3038. For any one video material, the electronic device may select an image displayed by any one frame of the video material as a picture displayed to the user by the electronic device. Therefore, in fig. 3C, the screen displayed by the video material 3031, the video material 3034, and the video material 3038 is an image displayed in any one frame of the respective video materials.
After detecting that the user performs an operation (such as a long press operation) for selecting picture materials and/or video materials in the user interface 13 shown in fig. 3C, the mobile phone may display a user interface 14 exemplarily shown in fig. 3D, where the user interface 14 is used for displaying a display interface for selecting picture materials and/or video materials used by the user to generate a video.
In the embodiment of the present application, the specific implementation manner of the user interface 14 may include various manners. For ease of illustration, in fig. 3D, user interface 14 includes user interface 13, and an editing interface overlaid on user interface 13.
For picture materials and/or video materials that are not selected by the user (in fig. 3D, other picture materials/other video materials are exemplified except for the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038), in the editing interface, a control for displaying the picture material/video material in an enlarged manner may be displayed in the upper left corner of each picture material/video material (in fig. 3D, two oblique arrows pointing in opposite directions are exemplified), and a control for selecting the picture material/video material is displayed in the lower right corner of each picture material/video material (in fig. 3D, a rectangular frame is exemplified).
For picture materials and/or video materials that have been selected by a user (which is exemplified by the video materials 3031, 3032, 3033, 3034, 3035, 3036, 3037, and 3038 in fig. 3D), a control for displaying the picture materials/video materials in an enlarged manner may be displayed in the upper left corner of each picture material/video material in the editing interface (which is exemplified by two arrows that are oblique and point oppositely in fig. 3D), and a control for selecting the picture materials/video materials is displayed in the lower right corner of each picture material/video material (which is exemplified by a rounded rectangle in fig. 3D).
And, the editing interface can include: a control 304, the control 304 for authoring picture material and/or video material that has been selected by a user. In addition, the editing interface may further include: the control is used for carrying out operations such as sharing, full selection, deletion and more on the picture material and/or the video material selected by the user, and the embodiment of the application does not limit the operations.
Upon detecting that the user performs an operation such as clicking on the control 304 in the user interface 14 shown in fig. 3D, the mobile phone may display a window 305 exemplarily shown in fig. 3E on the user interface 14 (fig. 3E illustrates the text "movie", the text "jigsaw", and a rounded rectangle frame as an example).
When the user selects a picture material, a video material, or both, the mobile phone may display a user interface for editing a new video if the user performs an operation such as clicking on the text "movie" in the window 305.
When the user selects a picture material, the mobile phone may display a user interface for editing a new picture if the user performs an operation such as clicking on the text "jigsaw" in the window 305.
When the user selects a video material, or both picture materials and video materials, if the user performs an operation such as clicking on the text "jigsaw" in the window 305, the mobile phone cannot display a user interface for editing a new picture, and may display the text "jigsaw does not support video" to prompt the user to cancel the selection of the video material.
After detecting that the user performs an operation such as clicking the text "movie" in the window 305 shown in fig. 3E, the mobile phone may determine that the type of the video template is a parent-child type based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user, so that the video is generated from the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user based on the parent-child type video template, and a user interface 15 exemplarily shown in fig. 3F may be displayed, and the user interface 15 is used for displaying the video generated by the mobile phone.
Wherein the segments in the generated video correspond to the segments in the parent-child type video template. The video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 appear at least once in the generated video, and the same material cannot be placed in any two adjacent segments in the generated video.
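The two constraints just described (every selected material appears at least once, and no material fills two adjacent segments) can be illustrated with a simple round-robin assignment. The Kotlin sketch below only demonstrates the constraints; the actual matching in this embodiment also takes scene types, moving mirrors, speeds and transitions into account.

```kotlin
// Illustrative sketch: fill the template's segments with the user's materials so that every
// material is used at least once and no material occupies two adjacent segments.
// The real matching in this embodiment also considers scene types; this only shows the rule.
fun assignMaterials(materialIds: List<String>, segmentCount: Int): List<String> {
    val distinct = materialIds.distinct()
    require(distinct.size >= 2 || segmentCount <= 1) {
        "At least two distinct materials are needed to avoid adjacent repeats"
    }
    require(segmentCount >= distinct.size) {
        "Each material must appear at least once"
    }
    // Cycling through the distinct materials guarantees that adjacent segments never share a
    // material and, because segmentCount >= distinct.size, every material appears at least once.
    return List(segmentCount) { index -> distinct[index % distinct.size] }
}

fun main() {
    // Example: three materials spread over eight template segments.
    println(assignMaterials(listOf("video3031", "pic3032", "pic3033"), 8))
    // -> [video3031, pic3032, pic3033, video3031, pic3032, pic3033, video3031, pic3032]
}
```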
In summary, the electronic device may automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application. In addition, the user interface 15 is also used to display controls for editing the generated video.
Among them, the user interface 15 may include: preview area 306, progress bar 307, controls 3081, controls 3082, controls 3083, controls 3084, controls 3085, controls 30811, controls 30812, controls 30813, controls 30814, and controls 309.
And the preview area 306 is used for displaying the generated video, so that the user can conveniently watch and adjust the video.
A progress bar 307 for indicating the duration of the video under any one of the video templates (fig. 3F exemplarily indicates the start time of the video by "00:00", exemplarily indicates the end time of the video by "00:32", and exemplarily indicates the progress of the video by a slide bar).
The widget 3081 is used to provide different types of video templates. The widget 30811 is used for representing a parent-child type video template (fig. 3F exemplarily represents a parent-child type video template by using the text "parent-child" and a bold-displayed rounded rectangle box), the widget 30812 is used for representing a travel type video template (fig. 3F exemplarily represents a travel type video template by using the text "travel" and a normally displayed rounded rectangle box), the widget 30813 is used for representing a food type video template (fig. 3F exemplarily represents a food type video template by using the text "food" and a normally displayed rounded rectangle box), and the widget 30814 is used for representing a sport type video template (fig. 3F exemplarily represents a sport type video template by using the text "sport" and a normally displayed rounded rectangle box). Therefore, when the materials are identified as matching a certain type of video template, the electronic device can also provide the user with video templates of other types, so that various requirements of the user can be met.
The control 3082 is used for editing the frame of the video, changing the duration of the video, adding a new picture and/or video in the video, deleting the picture and/or video in the video, and the like. Therefore, the video with the corresponding length and/or the corresponding material is generated based on the user requirements, and the flexibility of video generation is considered.
And a control 3083 for changing the music matched with the video template.
And a control 3084 for changing a filter of the video.
And a control 3085, configured to add text to the video, for example, add text at the head and end of the video.
And a control 309, configured to store the generated video, so as to facilitate use or viewing of the stored video.
Based on the above description, the electronic device may display the generated video to the user through preview area 306.
In addition, the type of the video template determined by the electronic equipment is a parent-child type, so that the electronic equipment displays the rounded rectangle frame in the control 3081 in a bold manner, and a user can be conveniently and quickly informed.
And based on other controls in the user interface 15, the user may perform operations such as selecting a type of a video template, adjusting a frame of a video, adjusting a duration of the video, adding a new picture material and/or a video material to the video, selecting music matched with the video, selecting a filter of the video, adding characters to the video, and the like, so that the electronic device can determine the video template meeting the user's intention and generate a corresponding video.
For example, upon detecting that the user performs an operation such as clicking on control 3081 in user interface 15 shown in fig. 3F, the cell phone may display user interface 15 exemplarily shown in fig. 3F, such that the user may select one video template among controls 30811, 30812, and 30813.
For another example, after detecting that the user performs an operation such as clicking on the widget 3082 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 21 exemplarily shown in fig. 3G, where the user interface 21 is used to display factors of editing a video, such as a picture, a duration, and materials included during playing.
The user interface 21 may include: a video playback area 3171, a control 3172, a control 3173, a control 3174, a control 3175, a material playback area 3176, and a control 3177. The video playback area 3171 is used to show the effect of the video to be generated. The control 3172 is used to enter a user interface for changing the aspect ratio (frame) of the video, which may be 16:9, 1:1, 9:16, or the like. The control 3173 is used to enter a user interface for changing the duration of the video. The control 3174 is used to enter a user interface for adding new materials to the video. The control 3175 is used to enter the materials already in the video. The material playback area 3176 is used to show the playback effect of each material in the video. The control 3177 is used to exit the user interface 21.
For another example, after detecting that the user performs an operation such as clicking on the control 3083 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 22 exemplarily shown in fig. 3H, where the user interface 22 is used to display music corresponding to the edited video.
Included in the user interface 22 may be: a video playing area 3181, a progress bar 3182, a control 3183, a control 3184, and a control 3185. The video playing area 3181 is used to show the effect of the video to be generated. The progress bar 3182 is used to display or change the playing progress of the video to be generated. The control 3183 is used to show various types of video templates, such as displaying the types of characters "parent and child", "travel", "gourmet", "sports", and so on. Control 3184 is used to expose corresponding music under a certain type of video template, such as "song 1", "song 2", and "song 3" as shown. Control 3185 is used to exit user interface 22.
As another example, after detecting that the user performs an operation such as clicking on the control 3084 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 23 exemplarily shown in fig. 3I, where the user interface 23 is used for displaying a filter for editing a video. For example, in fig. 3H, the text "parent-child" is displayed in bold and a check mark is placed in the display column corresponding to the text "song 1" to indicate that the mobile phone currently selects the video template of the parent-child type, and the music corresponding to the video template is song 1. It should be noted that the corresponding rounded rectangle boxes before the characters "song 1", "song 2", and "song 3" are used to display the image of the corresponding song. The embodiment of the present application does not limit the specific display content of the image. For convenience of explanation, the examples of the present application are illustrated with white filling as an example.
Included in the user interface 23 may be: video play area 3191, progress bar 3192, control 3183, control 3184, and control 3185. The video playing area 3191 is used to show the effect of the video to be generated. The progress bar 3192 is used to display or change the playing progress of the video to be generated. The control 3183 is used to display each filter, for example, the characters "filter 1", "filter 2", "filter 3", "filter 4", "filter 5" and the like are displayed. Wherein, different filters and videos have different display effects, such as softening, whitening and blackening, and deepening of color. Control 3194 is used to exit user interface 23. For example, in fig. 3I, the bold display text "filter 1" may indicate that the filter of the currently selected video of the mobile phone is filter 1.
As another example, after detecting that the user performs an operation such as clicking on the control 3085 in the user interface 15 shown in fig. 3F, the mobile phone may display the user interface 24 exemplarily shown in fig. 3J, where the user interface 24 is used for adding text (titles) to the video.
Included in the user interface 24 may be: a video playback area 3201, a control 3202, a control 3203, a control 3204, and a control 3185. The video playback area 3201 is used to show the effect of the video to be generated. The control 3202 is used to select whether to add a title to the head or the tail of the video. The control 3203 is used to show various titles, such as "title 1", "title 2", "title 3", "title 4", "title 5", and the like. For any two different titles (e.g., title 1 and title 2), if the contents of title 1 and title 2 are the same, title 1 and title 2 can be displayed in any one frame of the video with different playback effects. The playback effect can be understood as the effect formed by changing parameters such as the font, weight, and color of the characters in the title. For example, title 1 may be the words "weekend hours" in regular script, and title 2 may be the words "weekend hours" in Song typeface. If the contents of title 1 and title 2 are different, title 1 and title 2 may be displayed in any one frame of the video with the same or different playback effects. For example, title 1 may be the words "weekend hours" and title 2 the words "nice day". The control 3194 is used to exit the user interface 24. For example, in fig. 3J, the bold display of the words "head" and "title 1" may indicate that the mobile phone currently selects to add title 1 to the head of the video.
In conclusion, the electronic equipment can provide a function of manually editing the generated video for the user, so that the user can conveniently configure parameters such as duration, picture width, video template, contained materials and filter of the video based on own will, and the style of the video is enriched.
In addition, the cell phone may save the video after detecting that the user performs an operation such as clicking on the control 309 in the user interface 15 shown in fig. 3F.
Mode two
Please refer to fig. 3A-3B, fig. 3K-3N, and fig. 3F, wherein fig. 3K-3N are schematic diagrams of human-computer interaction interfaces according to an embodiment of the present disclosure.
In some embodiments, after detecting that the user performs an operation such as clicking on the control 3025 in the user interface 12 shown in fig. 3K, the mobile phone may display the user interface 16 exemplarily shown in fig. 3L, where the user interface 16 is used to display a page corresponding to a discovery category in the gallery application. In fig. 3L, control 3023 changes from the bold display to the normal display, and control 3025 changes from the normal display to the bold display.
In other embodiments, after detecting the operation of opening the gallery application (e.g. clicking the icon 301 of the gallery application) indicated by the user, the mobile phone may display the user interface 16 exemplarily shown in fig. 3L, where the user interface 16 is used to display a page corresponding to the discovery category in the gallery application. In FIG. 3L, control 3025 is shown in bold.
Among these, the user interface 16 may include: the control 312, the control 312 is used to enter a display page of picture material and/or video material stored in the mobile phone.
In the embodiment of the present application, the specific implementation manner of the user interface 16 may include various manners. For ease of illustration, in FIG. 3L, user interface 16 is divided into five portions.
The first portion includes a search box for providing a user with a way to search for picture material and/or video material by keywords such as photos, people, places, etc.
The second part includes a control for entering new video authoring by template (illustrated in fig. 3L by the text "template authoring" and an icon), the control 312, and a control for entering new video authoring by jigsaw (illustrated in fig. 3L by the text "jigsaw authoring" and an icon).
The third part displays pictures divided according to the portrait. The title of the third part is illustrated in fig. 3L by taking the characters "portrait" and "more" as examples.
The fourth part includes pictures and/or videos divided according to places, such as pictures and/or videos with a place of "shenzhen city", pictures and/or videos with a place of "Guilin city", and pictures and/or videos with a place of "metropolis", as shown in FIG. 3L. The title of the fourth part is illustrated in fig. 3L by using the text "place" and the text "more" as examples.
Shown in the fifth section are control 3022, control 3023, control 3024, and control 3025.
The title of the user interface 16 is illustrated in fig. 3L by using the word "find" as an example. Also included in the user interface 16 may be: controls for implementing editing of the user interface 16, such as adding new groupings or deleting existing groupings in the user interface 16 (fig. 3L is illustrated with three black dots as an example).
After detecting that the user performs an operation such as clicking on the control 312 in the user interface 16 shown in fig. 3L, the mobile phone may display the user interface 17 exemplarily shown in fig. 3M, where the user interface 17 is used to display picture materials and/or video materials that may be used to generate a new video in a free-authoring manner.
In the embodiment of the present application, the specific implementation manner of the user interface 17 may include various manners. For ease of illustration, in fig. 3M, the user interface 17 includes a display area 313, and a window 314 overlaid on the display area 313.
The display area 313 includes picture materials and/or video materials, and a control for displaying the picture materials/video materials in an enlarged manner is displayed at the upper left corner of each picture material/video material (as illustrated in fig. 3M by using two arrows that are oblique and point in opposite directions), and a control for selecting the picture materials/video materials is displayed at the lower right corner of each picture material/video material (as illustrated in fig. 3M by using a rounded rectangle).
In the embodiment of the present application, parameters such as the display number of the picture materials, the display area of the picture materials, the display position of the picture materials, the display content of the video materials, the display number of the video materials, the display area of the video materials, the display position of the video materials, and the material sequence of each type in the display region 313 are not limited. For convenience of explanation, in fig. 3M, the display area 313 shows: the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 may specifically refer to the description in the first embodiment, and are not described herein again.
The window 314 may include: a widget 3141 (illustrated in fig. 3M by the icon "0/50", where "0" indicates that no one of the picture materials/video materials is selected, "50" indicates that there are 50 picture materials/video materials in the cell phone), the widget 3141 is used to indicate the total number of picture materials/video materials stored in the cell phone and to indicate the number of picture materials/video materials currently selected by the user, and a widget 3142, the widget 3142 is used to enter a display interface where new video production is started, and a preview area 3143, the preview area 3143 is used to show the picture materials and/or video materials selected by the user.
After detecting that the user performs an operation of selecting the picture material/video material in the display area 313 shown in fig. 3M, the mobile phone may display a display change in the user interface 17 exemplarily shown in fig. 3N based on the user operation.
For the picture materials and/or video materials that are not selected by the user (illustrated in fig. 3N by the picture materials/video materials other than the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038), the display of those other picture materials/video materials in the display area of the user interface 17 remains unchanged.
For the picture material and/or the video material that has been selected by the user (which is exemplified by the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 in fig. 3N), the display change of the control for selecting the picture material/the video material in the lower right corner of each picture material/video material in the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 in the display region 313 in the user interface 17 occurs (which is exemplified by adding a hook in a rounded rectangle in fig. 3N).
A widget 3141 in the user interface 17 shows that the number of picture/video material selected by the user changes (fig. 3N is illustrated by an icon "8/50", where "8" indicates that the user selects eight picture/video material, and "50" indicates that there are 50 picture/video material in the mobile phone, which can be freely created to generate new video).
The preview area 3143 in the user interface 17 shows that the selected picture materials/video materials have changed (in fig. 3N, the video material 3031, the picture material 3032, the picture material 3033, and the video material 3034 are displayed directly, and the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 can be displayed by dragging a slider, as an example).
After detecting that the user performs an operation of generating a new video in the user interface 17 shown in fig. 3N (e.g., clicking a widget 3142 in the user interface 17), the mobile phone may determine that the type of the video template is a parent-child type based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user, so as to generate videos from the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user based on the parent-child type video template, and may display the user interface 15 exemplarily shown in fig. 3F. For a specific implementation of the generated video mentioned herein, reference may be made to the description of the generated video in mode 1.
In summary, the electronic device may automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
The specific implementation manner of the user interface 15 can refer to the foregoing description, and is not described herein again. Accordingly, the electronic device may display the generated video to the user through the preview area 306.
In addition, the user interface 15 is also used to display controls for editing the generated video. Therefore, the electronic equipment can provide a function of manually editing the generated video for the user, the user can conveniently configure parameters such as duration, picture width, video template, contained materials and filter of the video based on own will, and the style of the video is enriched. In addition, the cell phone may save the video after detecting that the user performs an operation such as clicking on the control 309 in the user interface 15 shown in fig. 3F.
Mode three
Please refer to fig. 3A-3B, fig. 3O-3Q, fig. 3T, and fig. 3F, and fig. 3O-3Q, and fig. 3T are schematic diagrams of a human-machine interface according to an embodiment of the present disclosure.
In some embodiments, after detecting that the user performs an operation such as clicking on the control 3024 in the user interface 12 shown in fig. 3O, the mobile phone may display the user interface 18 exemplarily shown in fig. 3P, where the user interface 18 is used to display a page corresponding to the time category in the gallery application. In FIG. 3P, control 3023 changes from the bold display to the normal display, and control 3024 changes from the normal display to the bold display.
In other embodiments, after detecting the operation of opening the gallery application (e.g. clicking the icon 301 of the gallery application) indicated by the user, the mobile phone may display the user interface 18 exemplarily shown in fig. 3P, where the user interface 18 is used to display a page corresponding to the time category in the gallery application. In FIG. 3P, control 3024 is shown in bold.
Among other things, the user interface 18 may include: and a control 3151, wherein the control 3151 is used for entering a display page for creating a new video in the manner provided by the embodiment of the application.
In the embodiment of the present application, the specific implementation manner of the user interface 18 may include various manners. For ease of illustration, the user interface 18 is divided into three portions in FIG. 3P.
The first portion includes a search box for providing a user with a way to search for picture material and/or video material by keywords such as photos, people, places, etc.
The second part includes a control 3152 (illustrated in fig. 3P by the text "weekend hours", the date "September 2020", and a picture material), where the control 3152 is used for displaying a video 1 generated from the picture materials and/or video materials in the mobile phone within a period of time; a control 3153 used for displaying a video 2 generated from the picture materials and/or video materials in the mobile phone within a period of time (illustrated in fig. 3P by the text "weekend hours", the date "May 2020", and a picture material); and a control 3154 used for displaying a video 3 generated from the picture materials and/or video materials in the mobile phone within a period of time (illustrated in fig. 3P by the text "weekend hours", the date "April 2020", and a picture material). It should be noted that the picture materials/video materials in the video 1, the video 2, and the video 3 may or may not overlap, which is not limited in the embodiment of the present application.
It should be noted that video 1, video 2, and video 3 are all generated by the electronic device according to the scheme provided in the present application.
In the third section, control 3022, control 3023, control 3024, and control 3025 are displayed.
The title of the user interface 18 is illustrated in fig. 3P by using the text "time" as an example.
In some embodiments, upon detecting that the user performs an operation such as clicking on control 3151 in user interface 18 shown in fig. 3P, the cell phone may display window 316, which is exemplarily shown in fig. 3Q, on user interface 18, where window 316 is used to display picture material and/or video material that may be used to generate movies or puzzles.
The handset, upon detecting that the user performs an operation such as clicking on the text "compose a movie" in the window 316 shown in fig. 3Q, may display the user interface 17 exemplarily shown in fig. 3M. The specific implementation manner of the user interface 17 may refer to the foregoing description, and is not described herein again.
After detecting that the user performs an operation of selecting the picture material/video material in the display area 313 shown in fig. 3M, the mobile phone may display a display change in the user interface 17 exemplarily shown in fig. 3N based on the user operation. The specific implementation manner of the display change of the user interface 17 may refer to the foregoing description, and is not described herein again.
After detecting that the user performs an operation of generating a new video in the user interface 17 shown in fig. 3N (e.g., clicking a widget 3142 in the user interface 17), the mobile phone may determine that the type of the video template is a parent-child type based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user, so as to generate videos from the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038 selected by the user based on the parent-child type video template, and may display the user interface 15 exemplarily shown in fig. 3F. For a specific implementation of the generated video mentioned herein, reference may be made to the description of the generated video in mode 1.
In other embodiments, the cell phone may display the user interface 19 exemplarily shown in fig. 3T upon detecting that the user performs an operation such as clicking on the control 3152 in the user interface 18 shown in fig. 3P.
The user interface 19 may include a control 317, where the control 317 is used to enter an interface capable of playing the video 1, and the video 1 mentioned here is a video generated based on the solution of the present application.
Upon detecting that the user performs an operation such as clicking on the control 317 in the user interface 19 shown in fig. 3T, the mobile phone may display the user interface 15 exemplarily shown in fig. 3F.
In summary, the electronic device may automatically generate a video based on the picture materials and/or video materials selected by the user in the gallery application.
The specific implementation manner of the user interface 15 can refer to the foregoing description, and is not described herein again. Accordingly, the electronic device may display the generated video to the user through the preview area 306.
In addition, the user interface 15 is also used to display controls for editing the generated video. Therefore, the electronic equipment can provide a function of manually editing the generated video for the user, the user can conveniently configure parameters such as duration, picture width, video template, contained materials and filter of the video based on own will, and the style of the video is enriched.
In addition, the cell phone may save the video after detecting that the user performs an operation such as clicking on the control 309 in the user interface 15 shown in fig. 3F.
It should be noted that, in the aforementioned mode one, mode two, and mode three, parameters such as the size of a control, the position of a control, the displayed content, and the jumping manner of a user interface include but are not limited to the foregoing descriptions.
Based on the descriptions of mode one, mode two, and mode three, the mobile phone can store the generated video in the gallery application.
Please refer to fig. 3A and fig. 3R-fig. 3S. Fig. 3R-fig. 3S are schematic diagrams of a human-computer interaction interface according to an embodiment of the present application.
After detecting that the user performs an operation of opening the gallery application (e.g., clicking on the icon 301 of the gallery application) in the user interface 11 shown in fig. 3A, the mobile phone may display a user interface 12 'exemplarily shown in fig. 3R, where the user interface 12' is used for displaying pages of albums in the gallery application.
The user interface 12' is basically the same as the interface layout of the user interface 12 shown in fig. 3B, and the specific implementation manner can be referred to the description of the user interface 12 shown in fig. 3B in the first embodiment, which is not described herein again. Unlike the user interface 12 shown in fig. 3B, the number of videos stored in the user interface 12 'is increased by 1, and thus, the user interface 12' in fig. 3R shows that the number of all photographs is increased from "182" to "183" and the number of videos is increased from "49" to "50".
After detecting that the user performs an operation such as clicking on the control 3021 in the user interface 12 ' shown in fig. 3R, the mobile phone may display the user interface 13 ' exemplarily shown in fig. 3S, where the user interface 13 ' is a display interface of pictures and videos in the mobile phone.
The interface layout of the user interface 13' is substantially the same as that of the user interface 13 shown in fig. 3C, and the specific implementation manner can refer to the description of the user interface 13 shown in fig. 3C in the first embodiment, which is not described herein again. Unlike the user interface 13 shown in fig. 3C, since the pictures/videos stored in the user interface 13' are shifted back by one position as a whole, the first material displayed on the user interface 13' in fig. 3S, in reverse chronological order from the current time, is the newly generated video 3039.
The handset may play the video 3039 upon detecting that the user performs an operation such as clicking on the video 3039 in the user interface 13' shown in fig. 3S.
In the embodiment of the application, each video template can correspond to one piece of music. In general, different video templates correspond to different music. The electronic device may by default keep the music corresponding to each video template unchanged, or may change the music corresponding to a video template based on the selection of the user, which can be flexibly set according to the actual situation. The music may be preset in the electronic device or manually added by the user, which is not limited in the embodiment of the present application.
In one aspect, the video template is also related to mirror motion, speed, and transitions. In general, different video templates differ in at least one of mirror motion, speed, and transition, whether the music to which the video templates correspond is the same or not.
For the music corresponding to any one video template, each segment of the music can be matched with a set moving mirror, speed, and transition. The moving mirror and the transition can be related to the type of the material: the moving mirror adopted for video material and the moving mirror adopted for picture material can be the same or different, and the transition adopted for video material and the transition adopted for picture material can be the same or different. In addition, a playing effect corresponding to speed is generally set for video material.
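For illustration only, the following sketch shows one possible way to describe such a video template in code. The structure, field names, and sample values (for example, the music file name and the transition names) are assumptions made for this example and are not the actual template format of the electronic device.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TemplateSegment:
    duration_in_beats: int   # segment duration as a multiple of the beat length x
    scene_type: str          # "A" = close shot, "B" = medium shot, "C" = long shot
    picture_motion: str      # moving mirror applied to picture material
    video_transition: str    # transition applied to video material entering this segment
    picture_transition: str  # transition applied to picture material entering this segment
    video_speed: float       # playback speed applied to video material (1.0 = original)

@dataclass
class VideoTemplate:
    name: str
    music: str               # each template corresponds to one piece of music by default
    segments: List[TemplateSegment]

# Hypothetical fragment of a parent-child style template; values are illustrative only.
parent_child = VideoTemplate(
    name="parent-child",
    music="template_music_01.mp3",
    segments=[
        TemplateSegment(4, "C", "move diagonally", "white fade-in", "white fade-in", 1.0),
        TemplateSegment(2, "C", "zoom in", "fast move down", "up-down blur transition", 1.0),
        TemplateSegment(2, "B", "move left", "stretch in", "left-right blur push", 3.0),
    ],
)
```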
Referring to fig. 4A-4J, fig. 4A-4J are schematic diagrams illustrating the effect of the picture material 3033 after the moving mirror is applied.
The mobile phone stores the picture material 3033 exemplarily shown in fig. 4A, where the picture material 3033 can refer to the description of the embodiment in fig. 3C, which is not described herein again.
When the cell phone displays the picture material 3033 with a moving mirror effect of moving diagonally, the cell phone can change from displaying the interface 11 exemplarily shown in fig. 4B to displaying the interface 12 exemplarily shown in fig. 4C, where the interface 11 is the region a1 of the picture material 3033, the interface 12 is the region a2 of the picture material 3033, and the region a1 and the region a2 are located at different positions of the picture material 3033.
Besides the mirror moving effect of diagonal movement, the electronic device can also adopt mirror moving effects of upward, leftward, rightward and the like, which is not limited in the embodiment of the application.
When the mobile phone displays the picture material 3033 with the enlarging moving mirror effect, the mobile phone can change from displaying the interface 11 exemplarily shown in fig. 4B to displaying the interface 13 exemplarily shown in fig. 4D, where the interface 11 is the region a1 of the picture material 3033, and the interface 13 is an enlarged view of the region a3 of the picture material 3033.
Besides the enlarging (zoom-in) moving mirror effect, the electronic device can also adopt a shrinking (zoom-out) moving mirror effect, which is not limited in the embodiment of the application.
In addition, when the picture material 20 is a portrait picture as exemplarily shown in fig. 4E and the generated video is in the form of a banner, the electronic device may display the picture material 20 with a moving mirror effect of moving from top to bottom. For example, the cell phone may change from displaying the interface 21 exemplarily shown in fig. 4F to displaying the interface 22 exemplarily shown in fig. 4G, where the interface 21 is the region b1 of the picture material 20, the interface 22 is the region b2 of the picture material 20, and the region b1 and the region b2 are located at different positions of the picture material 20. Optionally, the shape of the region composed of the region b1 and the region b2 may be set to be square. If the picture material 20 includes a person, a face, or the like, the electronic device may select the region b1 and the region b2 so as to cover as much of the person and the face in the material as possible.
When the picture material 30 is a banner picture as exemplarily shown in fig. 4H and the generated video is a portrait picture, the electronic device may display the picture material 30 with a moving mirror effect of moving from left to right. For example, the cell phone may change from displaying the interface 31 exemplarily shown in fig. 4I to displaying the interface 32 exemplarily shown in fig. 4J, where the interface 31 is the region c1 of the picture material 30, the interface 32 is the region c2 of the picture material 30, and the region c1 and the region c2 are located at different positions of the picture material 30. Optionally, the shape of the region composed of the region c1 and the region c2 may be set to be square. If the picture material 30 includes a person, a face, or the like, the electronic device may select the region c1 and the region c2 so as to cover as much of the person and the face in the material as possible.
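As a minimal sketch of the region selection described above, the function below assumes that face bounding regions have already been obtained from some detector, and only chooses the two vertical offsets of a top-to-bottom pan window. The selection rule (maximize covered face height, then pan toward the farther image edge) is an assumption made for illustration, not the exact rule used by the electronic device.

```python
def choose_pan_offsets(material_height, window_height, faces):
    """Pick start and end vertical offsets (regions b1 and b2) for a top-to-bottom
    pan over a portrait picture so that the pan window covers as much of the
    detected faces as possible.

    material_height: height of the picture material in pixels.
    window_height: height of the display window, window_height <= material_height.
    faces: list of (top, bottom) vertical extents of detected faces.
    """
    def covered_face_height(offset):
        # Total face height that falls inside a window starting at this offset.
        return sum(max(0, min(f_bottom, offset + window_height) - max(f_top, offset))
                   for f_top, f_bottom in faces)

    candidates = range(0, material_height - window_height + 1)
    best = max(candidates, key=covered_face_height)
    # Pan between the best offset and the image edge farther away from it, so the
    # union of the two regions is as large as possible.
    farther_edge = 0 if best > (material_height - window_height) / 2 \
        else material_height - window_height
    start, end = sorted((best, farther_edge))
    return start, end
```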
Therefore, this helps the video generated by the electronic device display the materials to the maximum extent, enriches the content of the video, and ensures the movie feeling and picture feeling brought by the video.
Referring to fig. 5, fig. 5 is a schematic diagram illustrating the effects of different speeds of the video material 3038. For the video material 3038, reference may be made to the description of the embodiment in fig. 3C, which is not described herein again.
As shown in fig. 5, assuming that the video generated by the electronic device based on the materials plays the video material 3038 both in the time period t0 to t1 and in the time period t2 to t3, and the time period t2 to t3 is three times as long as the time period t0 to t1, then the electronic device plays the video material 3038 in the time period t0 to t1 at three times the speed at which it plays the video material 3038 in the time period t2 to t3.
It should be noted that, besides triple speed, the speed may be any other speed ratio, which is not limited in the embodiment of the present application.
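A minimal sketch of the speed relationship described for fig. 5: the same clip content fills segments of different lengths, so the playback speed is simply the ratio of the clip duration to the segment duration. The numbers below are chosen only to make the example concrete.

```python
def playback_speed(clip_duration, segment_duration):
    """Speed factor needed for a clip of clip_duration seconds to fill a segment
    of segment_duration seconds (1.0 = original speed)."""
    if segment_duration <= 0:
        raise ValueError("segment duration must be positive")
    return clip_duration / segment_duration

# The same material fills both t0-t1 and t2-t3; if t2-t3 is three times as long
# as t0-t1, the occurrence in t0-t1 is played three times as fast.
print(playback_speed(clip_duration=3.0, segment_duration=1.0))  # -> 3.0
print(playback_speed(clip_duration=3.0, segment_duration=3.0))  # -> 1.0
```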
Referring to fig. 6, fig. 6 is a schematic diagram illustrating the effect of transition between the picture material 3033 and the picture material 3032. The picture material 3033 and the picture material 3032 can refer to the description of the embodiment in fig. 3C, which is not described herein again.
As shown in fig. 6, assume that the electronic device plays the picture material 3033 in the time period t4 to t5, plays the picture material 3032 in the time period t6 to t7, and transitions from the picture material 3033 to the picture material 3032 with the transition effect of "superimposition blurring" in the time period t5 to t6. Then, in the video generated based on the materials, the electronic device plays the picture material 3033 in the time period t4 to t5, plays the gradually enlarged picture material 3032 superimposed on the blur-processed picture material 3033 in the time period t5 to t6, and plays the picture material 3032 in the time period t6 to t7.
It should be noted that, in addition to the "superimposition blurring" effect, the transition may also include effects such as focus blurring, which is not limited in the embodiment of the present application.
Therefore, the electronic equipment can realize scene scheduling and lens scheduling of the material according to the set moving mirror, speed and transition.
Video templates, on the other hand, are associated with scene types. Generally, no matter whether the music corresponding to the video templates is the same or not, the scene types corresponding to different types of video templates are different, and video templates of the same type have the same corresponding scene types.
When the user selects the music corresponding to the video template by default, the electronic device does not need to adjust the duration of each segment of the video template, so that the video is already synchronized to the beat points of the music. When the user selects other music as the music corresponding to the video template, the electronic device needs to perform beat detection on the music selected by the user to obtain the tempo of that music, then judge whether the duration of each segment of the video template is equal to an integral multiple of the obtained beat interval, and adjust the duration of any segment whose duration is not such an integral multiple, so that the duration of each segment in the video template is an integral multiple of the beat interval.
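A minimal sketch of this duration adjustment, under the assumption that the tempo (BPM) of the user-selected music has already been obtained (see the beat detection described next); each segment duration is rounded to the nearest positive integral multiple of the beat interval. The numbers in the example are illustrative only.

```python
def snap_segments_to_beats(segment_durations, bpm):
    """Return segment durations adjusted so that each one is a positive integral
    multiple of the beat interval of the selected music (60 / bpm seconds),
    which keeps every segment boundary on a beat point."""
    beat = 60.0 / bpm
    snapped = []
    for duration in segment_durations:
        beats = max(1, round(duration / beat))  # never shrink a segment to zero
        snapped.append(beats * beat)
    return snapped

# Example: three segment durations re-fitted to music detected at 125 BPM
# (beat interval 0.48 s); the last one is not a beat multiple and gets adjusted.
print(snap_segments_to_beats([1.92, 0.96, 1.00], bpm=125))  # ~[1.92, 0.96, 0.96]
```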
For any piece of music, in the embodiment of the present application, beat detection may be performed on the music by using a BPM (Beats Per Minute) based beat detection method to obtain its tempo (BPM). The electronic device analyzes the audio by means of digital signal processing (DSP) to obtain the beat points of the music. A common algorithm divides the original audio into several segments, obtains a frequency spectrum through fast Fourier transform, and finally obtains the beat points of the music through filtering analysis based on sound energy.
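The following is a simplified, illustrative version of the energy-based approach just described (frame the audio, take the FFT energy of each frame, and keep frames whose energy clearly exceeds a smoothed local average). It assumes a mono sample array at least a few seconds long and is not a production-quality beat tracker.

```python
import numpy as np

def estimate_beat_times(samples, sample_rate, frame_len=1024):
    """Return approximate beat times (in seconds) of a mono audio signal using
    per-frame FFT energy compared against a smoothed local average."""
    n_frames = len(samples) // frame_len
    energy = np.array([
        np.abs(np.fft.rfft(samples[i * frame_len:(i + 1) * frame_len])).sum()
        for i in range(n_frames)
    ])
    local_mean = np.convolve(energy, np.ones(43) / 43, mode="same")
    beat_frames = np.where(energy > 1.5 * local_mean)[0]
    return beat_frames * frame_len / sample_rate

def estimate_bpm(beat_times):
    """Estimate the tempo (beats per minute) from the median interval
    between consecutive detected beat points."""
    intervals = np.diff(beat_times)
    intervals = intervals[intervals > 0]
    return 60.0 / np.median(intervals) if intervals.size else 0.0
```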
It should be noted that the scene type corresponding to each segment in each video template is set in advance based on practical experience (for example, based on positions where the user's perception of the scene type of a single segment, or of a plurality of consecutive segments, is strong).
For music corresponding to any one video template, in the embodiment of the present application, a beat point of the music may be used as a boundary, the whole music is divided into a plurality of segments, and each segment is matched with a set scene type.
The duration of each segment is an integral multiple of the beat interval of the music, so that the boundaries of each segment fall on beat points of the music. It is understood that the beat of music refers to the combination rule of strong beats and weak beats, and specifically refers to the total length of the notes in each measure of the music score; the notes may be, for example, half notes, quarter notes, eighth notes, etc. In general, a piece of music may be composed of a plurality of beats, and the beat of a piece of music is generally fixed.
It will be appreciated that the selection of materials is random, and in practice there is a high probability that the materials will not fully satisfy the scene types set for the segments. Therefore, when the above problem occurs, the electronic device can adjust the arrangement order of the materials in various ways.
In some embodiments, the electronic device may prioritize the segments. The high-priority segments may include, but are not limited to: the beginning, the refrain, the ending, or the accented-beat segments of the music. The electronic device may then preferentially satisfy the scene types set for the high-priority segments, place the materials corresponding to the scene types set for the high-priority segments into those segments, and place the remaining materials into the remaining segments according to the scene types set for the remaining segments, where the scene types of the remaining materials and the scene types set for the remaining segments may be the same or different.
In other embodiments, the electronic device may preferentially meet the scene type set by the segment located at the front, place the material corresponding to the scene type set by the segment located at the front in the segment located at the front, and place the remaining material in the remaining segment according to the scene type set by the remaining segment, where the scene type of the remaining material and the scene type set by the remaining segment may be the same or different.
The electronic equipment can also preferentially meet the scene type set by the front-positioned segment in the rest segments.
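Purely as an illustrative sketch of the priority-based arrangement described above (the data layout and the tie-breaking rules are assumptions): high-priority segments are filled first with materials whose scene type matches exactly, and the remaining segments, processed from front to back, take whatever material is left even if the scene types differ.

```python
def arrange_by_priority(segments, materials):
    """Fill template segments with materials, satisfying the high-priority
    segments (beginning, refrain, ending, accented beats) first and the rest
    from front to back. Each material is used once; a segment with no exact
    scene-type match takes whatever material is left."""
    order = sorted(range(len(segments)),
                   key=lambda i: (-segments[i]["priority"], i))
    unused = list(materials)
    placement = [None] * len(segments)
    for i in order:
        wanted = segments[i]["scene_type"]
        match = next((m for m in unused if m["scene_type"] == wanted), None)
        chosen = match if match is not None else (unused[0] if unused else None)
        if chosen is not None:
            unused.remove(chosen)
            placement[i] = chosen
    return placement

segments = [{"scene_type": "C", "priority": 2},   # e.g. the refrain
            {"scene_type": "B", "priority": 0},
            {"scene_type": "A", "priority": 1}]   # e.g. the ending
materials = [{"name": "pic1", "scene_type": "B"},
             {"name": "vid1", "scene_type": "C"},
             {"name": "pic2", "scene_type": "B"}]
print([m["name"] for m in arrange_by_priority(segments, materials)])
# -> ['vid1', 'pic2', 'pic1']: high-priority segments get exact matches first
```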
For music corresponding to any one video template, in the embodiment of the present application, the whole music may be divided into a plurality of segments by taking the beat point of the music as a boundary, and the set scene types are matched for the plurality of continuous segments, and the scene types of the remaining segments may not be limited. Thus, the shot feeling and movie feeling of the generated video are enhanced. The plurality of continuous segments may be segments of the beginning, end, or refrain of the music.
Taking the division of the scene types into the close shot, the medium shot, and the long shot exemplarily shown in fig. 7 as an example, the scene types corresponding to a plurality of consecutive segments are introduced below. A represents the scene type corresponding to a close shot, B represents the scene type corresponding to a medium shot, and C represents the scene type corresponding to a long shot.
For example, the scene types corresponding to 5 consecutive segments corresponding to the beginning and/or the end of the music may be CCCBA, respectively, so that the beginning portion of the generated video creates a sense of suspense, or the end portion creates a sense of lingering.
As another example, the scene types corresponding to 4 consecutive segments corresponding to the beginning and/or the end of the music may be ABBC, respectively, so that the beginning or the end of the generated video prepares for the expansion of the narration.
For another example, in the embodiment of the present application, the scene types corresponding to 5 consecutive segments after the beginning and/or before the end of the music may be BBBBB, respectively, so that the generated video expands its narration in the corresponding segments.
For another example, in the embodiment of the present application, the scene types corresponding to 5 consecutive segments corresponding to the refrain part of the music may be CCCCA, respectively, so that the narration of the generated video rises to a climax at the refrain.
It should be noted that, the embodiments of the present application include, but are not limited to, specific implementations of scene types corresponding to multiple continuous segments of the above music.
Thus, the electronic apparatus can adjust the arrangement order of the materials in accordance with the scene type of the section set by the beat point of the music.
In summary, the electronic device arranges the materials in the order according to the set scene sequence in the video template, adds the scene sense and the shot sense of the materials according to the moving mirror, the speed and the transition set in the video template, and generates the video with the playing effect corresponding to the video template, so that the generated video has better expressive force and tension in the aspects of description of the movie scenario, expression of the character thought and emotion, processing of the character relationship, and the like, thereby enhancing the artistic appeal of the generated video.
For convenience of explanation, a specific implementation of the video template is described with reference to Table 1 and Table 2, taking a parent-child type video template and a travel type video template as examples. In Table 1 and Table 2, the scene types are exemplified by the three types of close shot, medium shot, and long shot exemplarily shown in fig. 7, where, for convenience of description, A represents the scene type corresponding to a close shot, B represents the scene type corresponding to a medium shot, and C represents the scene type corresponding to a long shot.
TABLE 1 parent-child type video templates
[The body of Table 1 is provided as images in the original publication (image references BDA0002711163620000251 and BDA0002711163620000261) and is not reproduced here as text.]
In table 1, at the beginning of the video, the transition uses the "white fade-in" effect and the "leader fade-out" effect for the video material. For picture materials, transition adopts the effect of 'white gradually-bright'.
At time 6x, the transition takes the effect of "fast downshifting" for video material. For picture materials, the transition adopts the effect of 'up-down fuzzy bevel transition'.
At time 14x, the transition takes the "stretch in" effect for the video material. For picture materials, the transition adopts the effect of 'left-right fuzzy pushing'.
At time 22x, the transition takes the "fast up" effect for the video material. For picture material, transition adopts the effect of 'push-up and focus blur/zoom behind the screen'.
At time 32x, the transition takes the effect of "speed left" for video material. For picture materials, the transition adopts the effect of rotating and blurring towards the right axis.
At time 34x, the transition takes the effect of "right rotation" for the video material. For picture materials, the transition adopts the effect of rotating and blurring towards the left axis.
At time 36x, the transition takes the effect of "fast left-slide" for the video material. For picture materials, the transition adopts the effect of perspective blurring.
At time 38x, the transition takes the effect of "fuzzy aliasing" for the video material. For picture material, transitions do not take any effect.
At time 40x, the transition takes the effect of "speed left" for video material. For picture material, transitions do not take any effect.
At time 42x, the transition takes the effect of "fade-out" and the effect of "right-turn" for the video material. For picture materials, the transition adopts the effect of 'whitening and fading'.
At time 44x, the transition takes the effect of "fast left-slide" for the video material. For picture material, transitions do not take any effect.
At time 46x, the transition takes the effect of "fuzzy aliasing" for the video material. For picture material, transitions do not take any effect.
At time 48x, the transition takes the "fade" effect for the video material. For picture materials, the transition adopts the effect of 'whitening and fading'.
At time 49x, the transition takes the effect of "left turn" for the video material. For picture materials, the transition adopts the effect of perspective blurring.
At time 52x, the transition takes the effect of "left rotation" for the video material. For picture material, transitions do not take any effect.
At time 56x, the transition takes the "fast left" effect for the video material. For picture material, transitions do not take any effect.
At time 60x, the transition takes the "fast left" effect for video material. For picture material, transitions do not take any effect.
At time 62x, the transition takes the "stretch in" effect for the video material. For picture material, transitions do not take any effect.
TABLE 2 video template for types of travel
[The body of Table 2 is provided as images in the original publication (image references BDA0002711163620000271 through BDA0002711163620000301) and is not reproduced here as text.]
The specific implementation manner of transition in table 2 can refer to the description manner of transition in table 1, and is not described herein again.
It should be noted that the video template includes, but is not limited to, parameters related to scene type, moving mirror, speed and transition.
In addition, the video template can also adaptively adjust the lens movement mode of the video based on the frame (aspect ratio) of the material, so as to achieve an optimal playing effect. For example, when a banner video is generated, the electronic device may use a top-to-bottom lens movement mode to display the largest possible region of a portrait material; when a vertical video is generated, the electronic device may use a left-to-right lens movement mode to display the largest possible region of a banner material. Therefore, this helps the video display the materials to the maximum extent, enriches the content of the video, and ensures the movie feeling and picture feeling brought by the video.
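A minimal sketch of this adaptive choice, assuming only the pixel dimensions of the material and of the generated video are known:

```python
def pan_direction(material_w, material_h, video_w, video_h):
    """Choose the lens movement direction so that as much of the material as
    possible is shown: a portrait material in a banner video is panned top to
    bottom, a banner material in a portrait video is panned left to right;
    otherwise no pan is needed for full coverage."""
    material_portrait = material_h > material_w
    video_portrait = video_h > video_w
    if material_portrait and not video_portrait:
        return "top-to-bottom"
    if not material_portrait and video_portrait:
        return "left-to-right"
    return "none"
```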
In the embodiment of the present application, each scene type in the video template corresponds to a segment, and the durations of the segments may be the same or different. The electronic device may first place the materials selected by the user based on the duration of each segment in the video template. Typically, video material is placed into the longer segments in preference to picture material. The electronic device then adjusts the arrangement order of the placed materials based on the scene type corresponding to each segment, so that the scene type of each material matches the scene type of its segment, while ensuring that every material selected by the user appears at least once in the generated video and that the same material is not placed into adjacent segments.
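The sketch below illustrates this placement idea under simplifying assumptions (at least as many segments as materials, and materials identified only by name): video materials are offered to the longest segments first, picture materials fill the rest, and any still-empty segment reuses a material that differs from its neighbours. It is not the exact placement algorithm of the electronic device.

```python
def place_materials(segment_durations, videos, pictures):
    """Place user-selected materials into template segments: video materials are
    offered to the longer segments first, picture materials fill the rest, and
    segments still left empty reuse a material that differs from both of its
    neighbours. Assumes there are at least as many segments as materials."""
    by_length = sorted(range(len(segment_durations)),
                       key=lambda i: -segment_durations[i])
    pool = list(videos) + list(pictures)        # videos first -> longest segments
    placement = [None] * len(segment_durations)
    for slot, material in zip(by_length, pool):
        placement[slot] = material
    # Reuse materials for any remaining segments, avoiding adjacent repeats.
    for i, current in enumerate(placement):
        if current is None:
            neighbours = {placement[i - 1] if i > 0 else None,
                          placement[i + 1] if i + 1 < len(placement) else None}
            placement[i] = next((m for m in pool if m not in neighbours), pool[0])
    return placement

print(place_materials([4, 2, 2, 1, 2], ["vid1"], ["pic1", "pic2", "pic3"]))
# -> ['vid1', 'pic1', 'pic2', 'vid1', 'pic3']
```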
It should be noted that the embodiments of the present application are not limited to the above implementation manner to adjust the arrangement order of the materials in the video.
In addition, when the number of materials selected by the user is large and the set duration of the video is short, the durations of the segments corresponding to the scene types can be set short, so that all the materials can appear in the video once. When the number of materials selected by the user is small and the set duration of the video is long, the electronic device can select one or more segments from the video materials to appear repeatedly N times in the generated video, where N is a positive integer greater than 1. If the long video duration still cannot be met, the electronic device can repeat all the arranged materials M times in the generated video, where M is a positive integer greater than 1.
The electronic device may set a minimum duration and a maximum duration for the duration of the music corresponding to the video template, so as to ensure that the user-selected material appears at least once in the generated video.
Based on the foregoing description, the play effect of the video 3039 in fig. 3S is associated with the video template. In general, the video templates are different, and the playing effect of the video 3039 is different. When the user selects the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038, the video 3039 may include: a video material 3031, a picture material 3032, a picture material 3033, a video material 3034, a picture material 3035, a picture material 3036, a picture material 3037, and a video material 3038.
Taking the division of the scene types into the close shot, the medium shot, and the long shot exemplarily shown in fig. 7 as an example, where A represents the scene type corresponding to a close shot, B represents the scene type corresponding to a medium shot, and C represents the scene type corresponding to a long shot, the scene types corresponding to the materials selected by the user are introduced below.
In this embodiment, the electronic device may recognize that the scene type corresponding to the video material 3031 is BCBBB, the scene type corresponding to the picture material 3032 is B, the scene type corresponding to the picture material 3033 is B, the scene type corresponding to the video material 3034 is CCCC, the scene type corresponding to the picture material 3035 is B, the scene type corresponding to the picture material 3036 is a, the scene type corresponding to the picture material 3037 is a, and the scene type corresponding to the video material 3038 is BCCCC.
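The classification criterion used below (the fraction of the frame occupied by the detected subject, such as a person or face) and its thresholds are assumptions made purely for illustration; the actual recognition used by the electronic device may differ. A video material receives one label per detected shot, which yields sequences such as BCBBB.

```python
def classify_scene_type(subject_area, frame_area):
    """Map a frame to a scene type from the proportion of the frame occupied by
    the detected subject (person/face). Thresholds are illustrative assumptions.
      A (close shot): subject dominates the frame
      B (medium shot): subject is clearly visible but not dominant
      C (long shot): subject is small or absent"""
    ratio = subject_area / frame_area if frame_area else 0.0
    if ratio >= 0.30:
        return "A"
    if ratio >= 0.08:
        return "B"
    return "C"

def classify_video_material(subject_ratios):
    """One label per sampled shot of a video material,
    e.g. [0.1, 0.4] -> 'BA'."""
    return "".join(classify_scene_type(r, 1.0) for r in subject_ratios)
```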
The electronic device recognizes that the generated video 3039 can adopt a parent-child type video template based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037 and the video material 3038.
In some embodiments, if the parent-child type video template shown in Table 1 is used, the electronic device places the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 at the corresponding positions of the music according to the scene type corresponding to each segment of the music given in Table 1, based on the respective scene types of the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038, so as to obtain the video 3039.
In other embodiments, the electronic device may enhance the playing effect of the generated video according to the scene type of the plurality of continuous segments corresponding to the set music in the video template, which is beneficial to improving the shot feeling and the movie feeling of the video.
It should be noted that, in addition to the above two manners, the electronic device may set the scene type in the video template according to an actual situation and an empirical value, and the setting manner of the scene type in the video template is not limited in this embodiment of the application.
In the following, with reference to fig. 8A to 8E, the playback effect of the video generated by the electronic device based on the material selected by the user is illustrated.
Referring to fig. 8A to 8E, fig. 8A to 8E are schematic diagrams illustrating a playing sequence of each material when the electronic device plays the generated video.
As shown in fig. 8A, when the user selects the picture material 11, the picture material 12, the picture material 13, the picture material 14, and the picture material 15, the electronic apparatus determines that: the scene type sequence in the video template is CCCBA, and the durations of the segments corresponding to CCCBA are 4x, 2x, 2x, x, and 2x, respectively, where x is 0.48 seconds; the scene type of the picture material 11 is B, the scene type of the picture material 12 is B, the scene type of the picture material 13 is C, the scene type of the picture material 14 is A, and the scene type of the picture material 15 is C.
Based on the scene types in the video template and the respective scene types of the picture material 11, the picture material 12, the picture material 13, the picture material 14, and the picture material 15, the electronic device can know that a scene type C with a duration of 2x is missing from the materials, so the materials cannot be exactly matched to the scene types in the video template. Since every material needs to appear at least once, the electronic device may change the scene type sequence CCCBA in the video template to CBCBA.
Thus, the electronic apparatus adjusts the arrangement order of the picture materials 11, 12, 13, 14, and 15 based on the scene type CBCBA, generating a video as exemplarily shown in fig. 8A.
In fig. 8A, the playing order of the picture material 11, the picture material 12, the picture material 13, the picture material 14, and the picture material 15 in the generated video is:
between 0 and 4x: the picture material 13;
between 4x and 6x: the picture material 11;
between 6x and 8x: the picture material 15;
between 8x and 9x: the picture material 12;
between 9x and 11x: the picture material 14.
Further, the scene type sequence corresponding to the video exemplarily shown in fig. 8A is CBCBA.
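The adjustment of fig. 8A can be sketched as follows. The greedy fallback below only shows the idea of downgrading template positions whose scene type cannot be supplied by the materials; it ignores segment durations and priorities, so it downgrades a different position (giving CCBBA) than the fig. 8A example (which gives CBCBA).

```python
from collections import Counter

def fit_scene_sequence(template, material_types):
    """If the selected materials cannot supply every scene type required by the
    template, downgrade unmatchable positions to a scene type that is actually
    available, so that every material can still appear once."""
    available = Counter(material_types)
    fitted = []
    for wanted in template:
        if available[wanted] > 0:
            chosen = wanted
        else:
            # Fall back to the most plentiful remaining scene type.
            chosen = available.most_common(1)[0][0]
        available[chosen] -= 1
        fitted.append(chosen)
    return "".join(fitted)

# Materials of fig. 8A have scene types B, B, C, A, C: one long-shot segment
# cannot be filled, so one C position of the template is downgraded to B.
print(fit_scene_sequence("CCCBA", "BBCAC"))
# -> "CCBBA" (fig. 8A instead downgrades the second position, giving CBCBA)
```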
As shown in fig. 8B, when the user selects the picture material 21, the picture material 22, the picture material 23, the picture material 24, and the picture material 25, the electronic apparatus determines that: the scene type sequence in the video template is CCCBA, and the durations of the segments corresponding to CCCBA are 4x, 2x, 2x, x, and 2x, respectively, where x is 0.48 seconds; the scene type of the picture material 21 is C, the scene type of the picture material 22 is B, the scene type of the picture material 23 is C, the scene type of the picture material 24 is A, and the scene type of the picture material 25 is C.
Based on the scene types in the video template and the respective scene types of the picture material 21, the picture material 22, the picture material 23, the picture material 24, and the picture material 25, the electronic device can know that the materials can be exactly matched to the scene types in the video template. Thus, the electronic apparatus adjusts the arrangement order of the picture materials 21, 22, 23, 24, and 25 based on the scene type sequence CCCBA, and generates a video as exemplarily shown in fig. 8B.
In fig. 8B, the playing order of the picture material 21, the picture material 22, the picture material 23, the picture material 24, and the picture material 25 in the generated video is:
between 0 and 4x: the picture material 23;
between 4x and 6x: the picture material 21;
between 6x and 8x: the picture material 25;
between 8x and 9x: the picture material 22;
between 9x and 11x: the picture material 24.
Further, the scene type sequence corresponding to the video exemplarily shown in fig. 8B is CCCBA.
As shown in fig. 8C, when the user selects the picture material 31, the video material 31, the video material 32, the picture material 32, and the picture material 33, the electronic apparatus determines that: the scene type sequence in the video template is CCCBA, and the durations of the segments corresponding to CCCBA are 4x, 2x, 2x, x, and 2x, respectively, where x is 0.48 seconds; the scene type of the picture material 31 is B, the scene type of the video material 31 is B, the duration of the video material 31 is equal to x, the scene type of the video material 32 is C, the duration of the video material 32 is greater than or equal to 4x, the scene type of the picture material 32 is A, and the scene type of the picture material 33 is C.
Based on the scene types in the video template and the respective scene types of the picture material 31, the video material 31, the video material 32, the picture material 32, and the picture material 33, the electronic device can know that a scene type C with a duration of 2x is missing from the materials, so the materials cannot be exactly matched to the scene types in the video template. Since every material needs to appear at least once, the electronic device may change the scene type sequence CCCBA in the video template to CBCBA.
Thus, the electronic apparatus adjusts the arrangement order of the picture material 31, the video material 31, the video material 32, the picture material 32, and the picture material 33 based on the scene type sequence CBCBA, generating a video as exemplarily shown in fig. 8C.
In fig. 8C, the playing order of the picture material 31, the video material 31, the video material 32, the picture material 32, and the picture material 33 in the generated video is:
between 0 and 4x: the video material 32;
between 4x and 6x: the picture material 31;
between 6x and 8x: the picture material 33;
between 8x and 9x: the video material 31;
between 9x and 11x: the picture material 32.
Further, the scene type sequence corresponding to the video exemplarily shown in fig. 8C is CBCBA.
As shown in fig. 8D, when the user selects the picture material 41, the video material 41, the video material 42, the picture material 42, and the picture material 43, the electronic apparatus determines that: the scene type sequence in the video template is CCCBA, and the durations of the segments corresponding to CCCBA are 4x, 2x, 2x, x, and 2x, respectively, where x is 0.48 seconds; the scene type of the picture material 41 is C, the scene type of the video material 41 is B, the duration of the video material 41 is equal to x, the scene type of the video material 42 is C, the duration of the video material 42 is greater than or equal to 4x, the scene type of the picture material 42 is A, and the scene type of the picture material 43 is C.
Based on the scene types in the video template and the respective scene types of the picture material 41, the video material 41, the video material 42, the picture material 42, and the picture material 43, the electronic device can know that the materials can be exactly matched to the scene types in the video template. Thus, the electronic apparatus adjusts the arrangement order of the picture material 41, the video material 41, the video material 42, the picture material 42, and the picture material 43 based on the scene type sequence CCCBA, to generate a video as exemplarily shown in fig. 8D.
In fig. 8D, the playing order of the picture material 41, the video material 41, the video material 42, the picture material 42, and the picture material 43 in the generated video is:
between 0 and 4x: the video material 42;
between 4x and 6x: the picture material 41;
between 6x and 8x: the picture material 43;
between 8x and 9x: the video material 41;
between 9x and 11x: the picture material 42.
Further, the scene type sequence corresponding to the video exemplarily shown in fig. 8D is CCCBA.
As shown in fig. 8E, when the user selects the picture material 51, the video material 51, the video material 52, and the picture material 52, the electronic apparatus determines that: the scene type sequence in the video template is CCCBA, and the durations of the segments corresponding to CCCBA are 4x, 2x, 2x, x, and 2x, respectively, where x is 0.48 seconds; the scene type of the picture material 51 is C, the scene type of the video material 51 is BC, the duration of the segment corresponding to the scene type C in the video material 51 is equal to 2x, the duration of the segment corresponding to the scene type B in the video material 51 is equal to x, the scene type of the video material 52 is C, the duration of the video material 52 is greater than or equal to 4x, and the scene type of the picture material 52 is A.
Based on the scene types in the video template and the respective scene types of the picture material 51, the video material 51, the video material 52, and the picture material 52, the electronic device can know that the materials can be exactly matched to the scene types in the video template. Thus, the electronic apparatus adjusts the arrangement order of the picture material 51, the video material 51, the video material 52, and the picture material 52 based on the scene type sequence CCCBA, to generate a video as exemplarily shown in fig. 8E.
In fig. 8E, the playing order of the picture material 51, the video material 51, the video material 52, and the picture material 52 in the generated video is:
between 0 and 4x: the video material 52;
between 4x and 6x: the segment corresponding to the scene type C in the video material 51;
between 6x and 8x: the picture material 51;
between 8x and 9x: the segment corresponding to the scene type B in the video material 51;
between 9x and 11x: the picture material 52.
Further, the scene type sequence corresponding to the video exemplarily shown in fig. 8E is CCCBA.
Based on the foregoing description, after determining that the scene type in the video template is the CCCBA, the electronic device may perform matching to a preset degree on the scene type of the material and the scene type in the video template based on the scene type corresponding to the material selected by the user, in consideration of factors such as a playing effect of the video, a duration of the video, the scene type in the video, a use condition of the material, a quantity of the material, the scene type of the material, whether the material supports reuse, and the like, thereby generating the video. That is, the video generated by the electronic device corresponds to a scene type that is completely or partially the same as the scene type in the video template. The preset degree may be 100% (i.e., precise matching) or 90% (i.e., fuzzy matching), and usually the preset degree is greater than or equal to 50%. In the embodiment of the application, the electronic equipment adjusts the arrangement sequence of the materials in the generated video based on the arrangement sequence of the scene types in the video template, and then combines the technologies of moving mirrors, speed, transition and the like in the video template, so that the video with coherent sight lines and high quality feeling can be generated.
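As a small illustration of the preset matching degree mentioned above, the sketch below counts the fraction of positions where the scene type of the generated video equals the scene type of the template at the same position; how the electronic device weighs the other factors listed above is not shown here.

```python
def matching_degree(video_scene_types, template_scene_types):
    """Fraction of segment positions whose scene type in the generated video
    equals the scene type at the same position in the template; the scheme
    described above requires this to reach a preset degree (usually >= 50%)."""
    pairs = zip(video_scene_types, template_scene_types)
    same = sum(1 for a, b in pairs if a == b)
    return same / max(len(template_scene_types), 1)

print(matching_degree("CBCBA", "CCCBA"))  # -> 0.8, i.e. an 80% (fuzzy) match
```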
In summary, the video generation method of the embodiment of the application enhances the shot feeling and the movie feeling of the video, and is beneficial to improving the use experience of the user.
Based on the foregoing description, embodiments of the present application may provide a video generation method.
Referring to fig. 9, fig. 9 is a schematic diagram illustrating a video generation method according to an embodiment of the present application. As shown in fig. 9, a video generation method according to an embodiment of the present application may include:
s101, the electronic equipment displays a first interface of a first application, wherein the first interface comprises a first control and a second control.
S102, after receiving a first operation acted on a first control, the electronic equipment determines that the arrangement sequence of a first material, a second material and a third material is a first sequence; and generating the first video from the first material, the second material and the third material according to the first sequence.
S103, after receiving a second operation acting on a second control, the electronic equipment determines that the arrangement sequence of the first material, the second material and the third material is a second sequence, and the second sequence is different from the third sequence; and generating a second video from the first material, the second material and the third material according to a second sequence.
The first material, the second material and the third material are different image materials stored in the electronic equipment, the third sequence is a time sequence of storing the first material, the second material and the third material in the electronic equipment, and the first sequence is different from the third sequence.
In the embodiment of the present application, reference may be made to the foregoing description for specific implementations of the first material, the second material, and the third material. The specific implementation manner of the first control may refer to any one of controls 30811, 30812, 30813 and 30814 exemplarily shown in fig. 3F, and the specific implementation manner of the second control may refer to any one of controls 30811, 30812, 30813 and 30814 exemplarily shown in fig. 3F, where the first control is different from the second control. The first order and the second order may be the same or different, and this is not limited in this application. The playing effect of the first video and the second video is different, and specifically, see the aforementioned video 1, video 2, and video 3, and videos generated based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user. In some embodiments, the first application is a gallery application of the electronic device.
In the embodiment of the application, the electronic equipment matches a proper video template by identifying the scene type of the material, adjusts the arrangement sequence of the material based on the set scene type of each segment in the video template, and automatically generates a video with continuous sight and high quality feeling by combining the set moving mirror, speed and transition of each segment in the video template, without depending on the manual editing of a user, thereby enhancing the shot feeling and movie feeling of the video and improving the use experience of the user.
In some embodiments, the first video is divided into a plurality of segments around a beat point of the music; the first material, the second material and the third material appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different; the first material, the second material and the third material appear at least once in the second video, and the material appearing in any two adjacent segments of the second video is different.
In some embodiments, the method further comprises: the electronic equipment displays a second interface of the first application; the electronic equipment generates the first video from the first material, the second material and the third material after receiving a third operation acting on the second interface.
In the embodiment of the present application, a specific implementation process of the second interface may refer to the description of the user interface 13 exemplarily shown in fig. 3E in the manner one, or may refer to the description of the user interface 17 exemplarily shown in fig. 3N in the manner two, or may refer to the description of the user interface 17 exemplarily shown in fig. 3N in the manner three. A specific implementation process of the third operation may refer to the description of clicking the word "movie" in the window 305 of the user interface 13 exemplarily shown in fig. 3E in the manner one, or may refer to the description of clicking the control 3142 in the user interface 17 exemplarily shown in fig. 3N in the manner two, or may refer to the description of clicking the control 3142 in the user interface 17 exemplarily shown in fig. 3N in the manner three.
In some embodiments, the method further comprises: the electronic equipment determines to generate a first video from the first material, the second material, the third material and the fourth material; the fourth material is an image material which is stored in the electronic equipment and is different from the first material, the second material and the third material.
In the embodiment of the present application, reference may be made to descriptions of video 1, video 2, and video in the user interface 18 exemplarily shown in fig. 3P in manner three.
In some embodiments, a third control is also included in the first interface; the method further comprises the following steps: after receiving a fourth operation acting on the third control, the electronic device displays a third interface, where the third interface includes: options for configuration information, the configuration information comprising: at least one parameter of duration, filters, frames, materials or titles; the electronic device generates a third video from the first material, the second material, and the third material based on the configuration information in the first order after receiving a fifth operation acting on the option of the configuration information.
In this embodiment of the application, a specific implementation manner of the third control may refer to descriptions of the control 3082, the control 3083, the control 3084, and the control 3085 exemplarily shown in fig. 3F, which is not described herein again. The third interface may refer to the description of the user interface 21 exemplarily shown in fig. 3G, or the description of the user interface 22 exemplarily shown in fig. 3H, or the description of the user interface 23 exemplarily shown in fig. 3I, or the description of the user interface 24 exemplarily shown in fig. 3J, which is not repeated herein.
For example, the electronic device may adjust parameters such as the duration, the frame, whether to add new material, whether to delete existing material, etc. of the video 1 through the user interface 21 exemplarily shown in fig. 3G. As another example, the electronic device may adjust the music of video 1 via user interface 22 as exemplarily shown in fig. 3H. As another example, the electronic device may adjust the filter of video 1 via user interface 23, which is illustratively shown in FIG. 3I. As another example, the electronic device may adjust whether a title is added to video 1 via user interface 24 as exemplarily shown in fig. 3J.
In some embodiments, a fourth control is also included in the first interface; the method further comprises the following steps: the electronic device saves the first video in response to a fourth operation acting on the fourth control after generating the first video. In this embodiment of the application, a specific implementation manner of the fourth control may refer to the description of the control 309 exemplarily shown in fig. 3F, and is not described herein again.
In some embodiments, the method specifically comprises: the electronic equipment determines a scene type corresponding to the first material, a scene type corresponding to the second material, and a scene type corresponding to the third material; the electronic equipment determines a material matched with the scene type corresponding to a first segment based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in a first video template, where the first segment is any one segment in the first video template; materials corresponding to all the segments in the first video template are arranged into the first sequence; the electronic equipment determines a material matched with the scene type corresponding to a second segment based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material, and the scene type set for each segment in a second video template, where the second segment is any one segment in the second video template; materials corresponding to all the segments in the second video template are arranged into the second sequence; wherein the first video template is different from the second video template, each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
In this embodiment of the application, the above scheme may refer to the aforementioned descriptions of videos generated based on the video 1, the video 2, and the video 3 and based on the video material 3031, the picture material 3032, the picture material 3033, the video material 3034, the picture material 3035, the picture material 3036, the picture material 3037, and the video material 3038 selected by the user, which is not described herein again.
In some embodiments, the method further comprises: the electronic equipment generates a first video from a first material, a second material and a third material according to a first sequence and a mirror moving effect, a speed effect and a transition effect which are set by each segment in a first video template; and the electronic equipment generates a second video from the first material, the second material and the third material according to the second sequence and the mirror moving effect, the speed effect and the transition effect which are set by each segment in the second video template.
In the embodiment of the present application, the above-mentioned scheme may refer to the foregoing description; a specific implementation manner of the moving mirror effect may refer to the description exemplarily shown in fig. 4A-4J, a specific implementation manner of the speed effect may refer to the description exemplarily shown in fig. 5, and a specific implementation manner of the transition effect may refer to the description exemplarily shown in fig. 6, which is not described herein again.
In some embodiments, when the first material is a picture material, the method specifically includes: when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent to the sequence of the scene type corresponding to the first segment according to a preset rule, the electronic equipment determines the first material as the material matched with the scene type corresponding to the first segment; and when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent to the sequence of the scene type corresponding to the second segment according to a preset rule, the electronic equipment determines the first material as the material matched with the scene type corresponding to the second segment.
In the embodiment of the present application, the specific implementation process of the above scheme may refer to the descriptions exemplarily shown in fig. 8A to 8E, which are not described herein again. Specific implementations of the first material can be seen in the picture materials exemplarily mentioned in fig. 8A to 8E.
In some embodiments, when the first material is a video material, the method specifically includes: when the scene type corresponding to the fourth material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the fourth material is adjacent in ordering, according to the preset rule, to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic equipment intercepts the fourth material from the first material and determines the fourth material as the material matched with the scene type corresponding to the first segment; when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the fourth material is adjacent in ordering, according to the preset rule, to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic equipment intercepts the fourth material from the first material and determines the fourth material as the material matched with the scene type corresponding to the second segment; and the fourth material is part or all of the first material.
In the embodiment of the present application, the specific implementation process of the above scheme may refer to the descriptions exemplarily shown in fig. 8A to 8E, which are not described herein again. The specific implementation of the first material can refer to the video materials exemplarily mentioned in fig. 8A to 8E, and the specific implementation of the fourth material can refer to the video material 51 or the video material 52.
In some embodiments, the scene types, arranged in the order of the preset rule, include: the close shot, the medium shot, and the long shot; the scene type adjacent to the close shot is the medium shot, the scene types adjacent to the medium shot are the close shot and the long shot, and the scene type adjacent to the long shot is the medium shot. In the embodiment of the present application, the scene type division is not limited to the foregoing implementation manner, and reference may be specifically made to the foregoing description, which is not described herein again.
Illustratively, the present application provides an electronic device comprising: a memory and a processor; the memory is used for storing program instructions; the processor is configured to invoke the program instructions in the memory to cause the electronic device to perform the video generation method in the foregoing embodiments.
The application provides a chip system, which is applied to an electronic device comprising a memory, a display screen and a sensor; the chip system includes: a processor; when the processor executes the computer instructions stored in the memory, the electronic device performs the video generation method in the foregoing embodiments.
Illustratively, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, causes an electronic device to implement the video generation method in the foregoing embodiments.
Illustratively, the present application provides a computer program product comprising: executing instructions, which are stored in a readable storage medium, and at least one processor of the electronic device can read the executing instructions from the readable storage medium, and the at least one processor executes the executing instructions to enable the electronic device to implement the video generation method in the foregoing embodiments.
In the above-described embodiments, all or part of the functions may be implemented by software, hardware, or a combination of software and hardware. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.

Claims (15)

1. A method of video generation, comprising:
the electronic device displays a first interface of a first application, wherein the first interface comprises a first control and a second control;
after receiving a first operation acting on the first control, the electronic device determines that the arrangement sequence of a first material, a second material and a third material is a first sequence, wherein the first material, the second material and the third material are different image materials stored in the electronic device, the first sequence is different from a third sequence, and the third sequence is the chronological order in which the first material, the second material and the third material were stored in the electronic device; and the electronic device generates a first video from the first material, the second material and the third material according to the first sequence;
after receiving a second operation acting on the second control, the electronic device determines that the arrangement sequence of the first material, the second material and the third material is a second sequence, wherein the second sequence is different from the third sequence; and the electronic device generates a second video from the first material, the second material and the third material according to the second sequence.
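As a rough, hypothetical illustration of claim 1 (none of the identifiers below come from the patent), the same three stored materials can be rearranged into two different sequences, each differing from the storage-time order, before a video is generated for each:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Material:
    """A hypothetical image or video material stored on the device."""
    name: str
    stored_at: int  # storage timestamp

def generate_video(materials: List[Material]) -> str:
    """Stand-in for the real synthesis step: only records the ordering used."""
    return " -> ".join(m.name for m in materials)

first = Material("first", 1)
second = Material("second", 2)
third = Material("third", 3)

# The "third sequence": the order in which the materials were stored on the device.
storage_order = sorted([second, third, first], key=lambda m: m.stored_at)

# First operation on the first control: a first sequence, different from the storage order.
first_sequence = [second, third, first]
first_video = generate_video(first_sequence)

# Second operation on the second control: a second sequence, also different from the storage order.
second_sequence = [third, first, second]
second_video = generate_video(second_sequence)

print(generate_video(storage_order))  # first -> second -> third
print(first_video)                    # second -> third -> first
print(second_video)                   # third -> first -> second
```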
2. The method according to claim 1, wherein the first video is divided into a plurality of segments according to beat points of music;
the first material, the second material and the third material appear at least once in the first video, and the materials appearing in any two adjacent segments of the first video are different;
the first material, the second material, and the third material appear at least once in the second video, and the materials appearing in any two adjacent segments of the second video are different.
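A minimal sketch of the segmentation described in claim 2, assuming the beat points of the music are already available as timestamps: the timeline is cut into segments at the beat points, and materials are assigned so that each appears at least once and no two adjacent segments share a material. All names are hypothetical.

```python
from typing import List, Tuple

def split_by_beats(total_duration: float, beat_points: List[float]) -> List[Tuple[float, float]]:
    """Cut the [0, total_duration] timeline into segments at the music beat points."""
    cuts = [0.0] + sorted(t for t in beat_points if 0.0 < t < total_duration) + [total_duration]
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

def assign_materials(segments: List[Tuple[float, float]], materials: List[str]) -> List[str]:
    """Assign one material per segment, cycling so that each material appears at least
    once and two adjacent segments never show the same material."""
    if len(materials) < 2 and len(segments) > 1:
        raise ValueError("need at least two materials to keep adjacent segments different")
    assignment = []
    for i in range(len(segments)):
        candidate = materials[i % len(materials)]
        if assignment and candidate == assignment[-1]:
            candidate = materials[(i + 1) % len(materials)]
        assignment.append(candidate)
    return assignment

segments = split_by_beats(12.0, [2.0, 4.5, 7.0, 9.5])
print(segments)
print(assign_materials(segments, ["first", "second", "third"]))
```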
3. The method according to claim 1 or 2, characterized in that the method further comprises:
the electronic device displays a second interface of the first application;
after receiving a third operation acting on the second interface, the electronic device generates the first video from the first material, the second material and the third material.
4. The method according to claim 1 or 2, characterized in that the method further comprises:
the electronic device determines to generate the first video from the first material, the second material, the third material and a fourth material;
wherein the fourth material is an image material stored in the electronic device that is different from the first material, the second material, and the third material.
5. The method according to any one of claims 1 to 4, wherein the first interface further comprises a third control, and the method further comprises:
after receiving a fourth operation acting on the third control, the electronic device displays a third interface, wherein the third interface comprises options for configuration information, and the configuration information comprises at least one of the following parameters: duration, filter, frame, material or title;
after receiving a fifth operation acting on an option of the configuration information, the electronic device generates a third video from the first material, the second material and the third material according to the first sequence and based on the configuration information.
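As a rough illustration of the configuration information in claim 5, the sketch below assumes a simple configuration object whose fields mirror the listed parameters (duration, filter, frame, material, title); the field and function names are hypothetical, not the patent's.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VideoConfig:
    """Hypothetical configuration information chosen on the third interface."""
    duration: Optional[float] = None      # target duration in seconds
    filter_name: Optional[str] = None     # e.g. "warm", "mono"
    frame: Optional[str] = None           # frame / border style
    materials: Optional[List[str]] = None # user-adjusted material selection
    title: Optional[str] = None           # opening title text

def generate_video_with_config(sequence: List[str], config: VideoConfig) -> str:
    """Stand-in for generating the third video: the first sequence is kept,
    and the chosen configuration parameters are applied on top of it."""
    parts = [" -> ".join(sequence)]
    if config.title:
        parts.insert(0, f"title '{config.title}'")
    if config.filter_name:
        parts.append(f"filter '{config.filter_name}'")
    return ", ".join(parts)

third_video = generate_video_with_config(
    ["second", "third", "first"],
    VideoConfig(duration=15.0, filter_name="warm", title="Weekend"))
print(third_video)  # title 'Weekend', second -> third -> first, filter 'warm'
```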
6. The method according to any one of claims 1 to 5, wherein the first interface further comprises a fourth control, and the method further comprises:
after generating the first video, the electronic device saves the first video in response to a fourth operation acting on the fourth control.
7. The method according to any one of claims 1 to 6, characterized in that the method specifically comprises:
the electronic device determines a scene type corresponding to the first material, a scene type corresponding to the second material, and a scene type corresponding to the third material;
based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material and the scene type set for each segment in a first video template, the electronic device determines a material matched with the scene type corresponding to a first segment, wherein the first segment is any one segment in the first video template; and the arrangement sequence of the materials corresponding to all the segments in the first video template is taken as the first sequence;
based on the scene type corresponding to the first material, the scene type corresponding to the second material, the scene type corresponding to the third material and the scene type set for each segment in a second video template, the electronic device determines a material matched with the scene type corresponding to a second segment, wherein the second segment is any one segment in the second video template; and the arrangement sequence of the materials corresponding to all the segments in the second video template is taken as the second sequence;
wherein the first video template is different from the second video template, each segment in the first video corresponds to a segment in the first video template, and each segment in the second video corresponds to a segment in the second video template.
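The following sketch, under the assumption that each material has already been classified into a scene type and that a video template is a list of segments each labeled with a set scene type, illustrates one possible reading of the matching in claim 7: for each template segment a material with the matching scene type is picked, and the order of the picks is the arrangement sequence. All identifiers are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ClassifiedMaterial:
    name: str
    scene_type: str          # e.g. "close shot", "medium shot", "long shot"

@dataclass
class TemplateSegment:
    scene_type: str          # scene type set for this segment of the template
    duration: float

def match_template(materials: List[ClassifiedMaterial],
                   template: List[TemplateSegment]) -> List[ClassifiedMaterial]:
    """For each template segment, pick a material whose scene type matches the
    segment's set scene type; the order of the picks is the video's sequence."""
    sequence = []
    for segment in template:
        candidates = [m for m in materials if m.scene_type == segment.scene_type]
        # Prefer not to repeat the material used in the previous (adjacent) segment.
        if sequence and len(candidates) > 1:
            candidates = [m for m in candidates if m.name != sequence[-1].name] or candidates
        if not candidates:
            candidates = materials  # fall back to any material if nothing matches exactly
        sequence.append(candidates[0])
    return sequence

materials = [
    ClassifiedMaterial("first", "close shot"),
    ClassifiedMaterial("second", "medium shot"),
    ClassifiedMaterial("third", "long shot"),
]
first_template = [TemplateSegment("long shot", 2.0), TemplateSegment("close shot", 1.5),
                  TemplateSegment("medium shot", 2.5)]
second_template = [TemplateSegment("medium shot", 2.0), TemplateSegment("long shot", 2.0),
                   TemplateSegment("close shot", 2.0)]

first_sequence = [m.name for m in match_template(materials, first_template)]
second_sequence = [m.name for m in match_template(materials, second_template)]
print(first_sequence)   # different templates yield different sequences
print(second_sequence)
```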
8. The method of claim 7, further comprising:
the electronic device generates the first video from the first material, the second material and the third material according to the first sequence and the camera movement effect, the speed effect and the transition effect set for each segment in the first video template;
and the electronic device generates the second video from the first material, the second material and the third material according to the second sequence and the camera movement effect, the speed effect and the transition effect set for each segment in the second video template.
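A minimal sketch of how the per-segment effects in claim 8 (camera movement, speed, transition) could be applied to the arranged materials; `SegmentStyle`, `render_segment` and the effect names are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class SegmentStyle:
    camera_movement: str   # e.g. "pan-left", "zoom-in"
    speed: float           # playback speed multiplier
    transition: str        # transition into the next segment, e.g. "fade"

def render_segment(material_name: str, style: SegmentStyle) -> str:
    """Stand-in for the real renderer: describes how the segment would be produced."""
    return (f"{material_name}: {style.camera_movement}, "
            f"speed x{style.speed}, transition '{style.transition}'")

first_template_styles = [
    SegmentStyle("zoom-in", 1.0, "fade"),
    SegmentStyle("pan-left", 1.5, "cut"),
    SegmentStyle("zoom-out", 0.75, "dissolve"),
]

# Apply each segment's configured effects to the material arranged for that segment.
first_sequence = ["second", "third", "first"]
for name, style in zip(first_sequence, first_template_styles):
    print(render_segment(name, style))
```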
9. The method according to claim 7 or 8, wherein when the first material is a picture material, the method specifically comprises:
when the scene type corresponding to the first material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the first material is adjacent, in the ordering of a preset rule, to the scene type corresponding to the first segment, the electronic device determines the first material as a material matched with the scene type corresponding to the first segment;
and when the scene type corresponding to the first material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the first material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, the electronic device determines the first material as a material matched with the scene type corresponding to the second segment.
10. The method according to claim 7 or 8, wherein when the first material is a video material, the method specifically comprises:
when the scene type corresponding to a fourth material is the same as the scene type corresponding to the first segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the first segment, and the duration of the fourth material is equal to the duration of the first segment, the electronic device clips the fourth material out of the first material and determines the fourth material as a material matched with the scene type corresponding to the first segment;
when the scene type corresponding to the fourth material is the same as the scene type corresponding to the second segment, or the scene type corresponding to the fourth material is adjacent, in the ordering of the preset rule, to the scene type corresponding to the second segment, and the duration of the fourth material is equal to the duration of the second segment, the electronic device clips the fourth material out of the first material and determines the fourth material as a material matched with the scene type corresponding to the second segment;
wherein the fourth material is part or all of the first material.
11. The method according to claim 10, wherein the scene types include, in the ordering of the preset rule: a close shot, a medium shot and a long shot, wherein the close shot type is adjacent to the medium shot type, and the medium shot type is adjacent to the long shot type.
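Claims 9 to 11 treat a material as matching a segment when their scene types are the same or adjacent under the preset ordering (close shot, medium shot, long shot), and, for a video material, cut out a portion whose duration equals the segment duration. The sketch below is only one possible reading; the scene-type labels, the dictionary layout of a video material and the helper names are assumptions.

```python
SCENE_ORDER = ["close shot", "medium shot", "long shot"]  # preset rule: neighbours are adjacent

def scene_types_match(material_type: str, segment_type: str) -> bool:
    """Same scene type, or adjacent in the preset ordering (close shot <-> medium shot <-> long shot)."""
    return abs(SCENE_ORDER.index(material_type) - SCENE_ORDER.index(segment_type)) <= 1

def clip_for_segment(video_material: dict, segment_type: str, segment_duration: float):
    """If some portion of a video material matches the segment's scene type (same or
    adjacent), cut out a piece of that portion equal to the segment duration."""
    for portion in video_material["portions"]:   # each portion: scene type + start/end time
        if scene_types_match(portion["scene_type"], segment_type):
            available = portion["end"] - portion["start"]
            if available >= segment_duration:
                return (portion["start"], portion["start"] + segment_duration)
    return None  # no matching portion long enough

material = {"portions": [{"scene_type": "long shot", "start": 0.0, "end": 3.0},
                         {"scene_type": "close shot", "start": 3.0, "end": 6.0}]}
print(scene_types_match("close shot", "medium shot"))   # True: adjacent under the preset rule
print(scene_types_match("close shot", "long shot"))     # False: not adjacent
print(clip_for_segment(material, "medium shot", 2.0))   # (0.0, 2.0) cut from the long-shot portion
```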
12. The method of any of claims 1-11, wherein the first application is a gallery application of the electronic device.
13. An electronic device, comprising: a memory and a processor;
the memory is configured to store program instructions;
the processor is configured to invoke program instructions in the memory to cause the electronic device to perform the video generation method of any of claims 1-12.
14. A chip system, wherein the chip system is applied to an electronic device comprising a memory, a display screen and a sensor; the chip system comprises a processor; and when the processor executes computer instructions stored in the memory, the electronic device performs the video generation method according to any one of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, causes an electronic device to carry out the video generation method according to any one of claims 1 to 12.
CN202011057180.9A 2020-09-29 2020-09-29 Video generation method and electronic equipment Active CN114363527B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011057180.9A CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment
PCT/CN2021/116047 WO2022068511A1 (en) 2020-09-29 2021-09-01 Video generation method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011057180.9A CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114363527A true CN114363527A (en) 2022-04-15
CN114363527B CN114363527B (en) 2023-05-09

Family

ID=80949616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011057180.9A Active CN114363527B (en) 2020-09-29 2020-09-29 Video generation method and electronic equipment

Country Status (2)

Country Link
CN (1) CN114363527B (en)
WO (1) WO2022068511A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115185429A (en) * 2022-05-13 2022-10-14 北京达佳互联信息技术有限公司 File processing method and device, electronic equipment and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118055290A (en) * 2022-05-30 2024-05-17 荣耀终端有限公司 Multi-track video editing method, graphical user interface and electronic equipment
CN116055715B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Scheduling method of coder and decoder and electronic equipment
CN117216312B (en) * 2023-11-06 2024-01-26 长沙探月科技有限公司 Method and device for generating questioning material, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090153744A1 (en) * 2007-08-08 2009-06-18 Funai Electric Co., Ltd. Scene detection system and scene detection method
CN104581380A (en) * 2014-12-30 2015-04-29 联想(北京)有限公司 Information processing method and mobile terminal
WO2018032921A1 (en) * 2016-08-19 2018-02-22 杭州海康威视数字技术股份有限公司 Video monitoring information generation method and device, and camera
US20190045164A1 (en) * 2017-12-15 2019-02-07 Intel Corporation Color Parameter Adjustment Based on the State of Scene Content and Global Illumination Changes
CN111048016A (en) * 2018-10-15 2020-04-21 广东美的白色家电技术创新中心有限公司 Product display method, device and system
CN111083138A (en) * 2019-12-13 2020-04-28 北京秀眼科技有限公司 Short video production system, method, electronic device and readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013063270A1 (en) * 2011-10-25 2013-05-02 Montaj, Inc. Methods and systems for creating video content on mobile devices
CN107437076B (en) * 2017-08-02 2019-08-20 逄泽沐风 The method and system that scape based on video analysis does not divide
CN109618222B (en) * 2018-12-27 2019-11-22 北京字节跳动网络技术有限公司 A kind of splicing video generation method, device, terminal device and storage medium
CN110825912B (en) * 2019-10-30 2022-04-22 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN111541946A (en) * 2020-07-10 2020-08-14 成都品果科技有限公司 Automatic video generation method and system for resource matching based on materials

Also Published As

Publication number Publication date
WO2022068511A1 (en) 2022-04-07
CN114363527B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN113556461B (en) Image processing method, electronic equipment and computer readable storage medium
CN114397979B (en) Application display method and electronic equipment
CN109496423B (en) Image display method in shooting scene and electronic equipment
CN110231905B (en) Screen capturing method and electronic equipment
WO2021000881A1 (en) Screen splitting method and electronic device
WO2020078299A1 (en) Method for processing video file, and electronic device
CN114363527B (en) Video generation method and electronic equipment
CN113727017B (en) Shooting method, graphical interface and related device
CN110377204B (en) Method for generating user head portrait and electronic equipment
CN113838490B (en) Video synthesis method and device, electronic equipment and storage medium
CN111221453A (en) Function starting method and electronic equipment
CN112262563A (en) Image processing method and electronic device
CN109819306B (en) Media file clipping method, electronic device and server
CN109857401B (en) Display method of electronic equipment, graphical user interface and electronic equipment
CN114866860B (en) Video playing method and electronic equipment
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN112529645A (en) Picture layout method and electronic equipment
CN115484380A (en) Shooting method, graphical user interface and electronic equipment
CN115037872B (en) Video processing method and related device
CN115734032A (en) Video editing method, electronic device and storage medium
CN115808997A (en) Preview method, electronic equipment and system
CN113497888A (en) Photo preview method, electronic device and storage medium
WO2023065832A1 (en) Video production method and electronic device
WO2022228010A1 (en) Method for generating cover, and electronic device
WO2023280021A1 (en) Method for generating theme wallpaper, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant