CN114025237B - Video generation method and device and electronic equipment - Google Patents


Info

Publication number
CN114025237B
CN114025237B (application CN202111462332.8A)
Authority
CN
China
Prior art keywords
image
file
display area
frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111462332.8A
Other languages
Chinese (zh)
Other versions
CN114025237A (en)
Inventor
陈喆 (Chen Zhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111462332.8A
Publication of CN114025237A
Application granted
Publication of CN114025237B
Legal status: Active


Classifications

    • H04N 21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N 5/91: Television signal processing for recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video generation method, a video generation device and electronic equipment, and belongs to the technical field of video processing. The video generation method comprises the following steps: displaying a first splicing window and a second splicing window, wherein the first splicing window is used for displaying a first file, the second splicing window is used for displaying a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first splicing window comprises a first display area and a second display area, the second splicing window comprises a third display area and a fourth display area, and a background image displayed by the second display area is the same as a background image displayed by the third display area; and generating a target video based on the image displayed in the first display area, the background image displayed in the second display area and the image displayed in the fourth display area.

Description

Video generation method and device and electronic equipment
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a video generation method, a video generation device and electronic equipment.
Background
With the rise of short videos, users increasingly want to produce interesting videos synthesized from multiple video segments. Such videos can usually be completed only through a large amount of post-production editing work, so the operations required to generate an interesting video are complex and not easily performed by ordinary users.
Disclosure of Invention
The embodiment of the application aims to provide a video generation method, a video generation device and electronic equipment, which can solve the problem of complex operation during generation of interesting videos.
In a first aspect, an embodiment of the present application provides a video generating method, including:
Displaying a first splicing window and a second splicing window, wherein the first splicing window is used for displaying a first file, the second splicing window is used for displaying a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first splicing window comprises a first display area and a second display area, the second splicing window comprises a third display area and a fourth display area, and a background image displayed by the second display area is the same as a background image displayed by the third display area;
and generating a target video based on the image displayed in the first display area, the background image displayed in the second display area and the image displayed in the fourth display area.
In a second aspect, an embodiment of the present application provides an apparatus for generating video, including:
the first display module is used for displaying a first spliced window and a second spliced window, the first spliced window is used for displaying a first file, the second spliced window is used for displaying a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first spliced window comprises a first display area and a second display area, the second spliced window comprises a third display area and a fourth display area, and the background image displayed by the second display area is identical to the background image displayed by the third display area;
and the generation module is used for generating a target video based on the image displayed in the first display area, the background image displayed in the second display area and the image displayed in the fourth display area.
In a third aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, the video generation method, the video generation device and the electronic equipment can display a first splicing window and a second splicing window, wherein the first splicing window is used for displaying a first file and comprises a first display area and a second display area, the second splicing window is used for displaying a second file and comprises a third display area and a fourth display area, a background image displayed by the second display area is the same as a background image displayed by the third display area, and then a target video is generated based on the image displayed by the first display area, the background image displayed by the second display area and the image displayed by the fourth display area.
Therefore, without performing complex editing operations, the user can splice multiple video segments recorded in separate time periods to generate an interesting video with a panoramic background. In addition, based on the images displayed in the first display area, the second display area and the fourth display area, the user can directly see the playing effect of the spliced video, achieving a "what you see is what you get" video splicing process and improving the user's interactive experience.
Drawings
FIG. 1 is a schematic flow chart of a video generating method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of splicing a first file and a second file in the video generating method according to an embodiment of the present application;
FIG. 3 is a first schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 5 is a third schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 6 is a fourth schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 7 is a fifth schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 8 is a sixth schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 9 is an image frame stitching schematic diagram of a video generating method according to an embodiment of the present application;
FIG. 10 is a seventh schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 11 is an eighth schematic diagram of an interface of an electronic device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an electronic device implementing an embodiment of the present application;
FIG. 14 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which are obtained by a person skilled in the art based on the embodiments of the present application, fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and in the claims are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one object or a plurality of objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
With the development of image processing technology, interesting photos and interesting videos are continuously emerging; panoramic photos are one category of interesting photos, and panoramic videos are one category of interesting videos. Currently, the shooting function of an electronic device includes a panoramic shooting mode, in which the camera captures a group of photos of a 360° scene and combines them into a single photo containing the whole scene. However, the panoramic shooting mode is limited to still photos. A dynamic panoramic video typically has to be produced with professional video editing software through a large amount of subsequent editing, so making a panoramic video is difficult and not very operable for ordinary users. Related video production methods therefore struggle to meet users' demand for making interesting videos.
Accordingly, embodiments of the present application provide a video generating method to solve the above-mentioned problems.
The video generating method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a flow chart of a video generating method according to an embodiment of the present application. The video generation method may include the steps of:
Step 101, displaying a first splicing window and a second splicing window, wherein the first splicing window is used for displaying a first file, the second splicing window is used for displaying a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first splicing window comprises a first display area and a second display area, the second splicing window comprises a third display area and a fourth display area, and a background image displayed in the second display area is the same as a background image displayed in the third display area;
step 102, generating a target video based on the image displayed in the first display area, the background image displayed in the second display area and the image displayed in the fourth display area.
In the embodiment of the application, a first splicing window and a second splicing window can be displayed, wherein the first splicing window is used for displaying a first file and comprises a first display area and a second display area, the second splicing window is used for displaying a second file and comprises a third display area and a fourth display area, a background image displayed by the second display area is the same as a background image displayed by the third display area, and then a target video is generated based on the image displayed by the first display area, the background image displayed by the second display area and the image displayed by the fourth display area.
Therefore, without performing complex editing operations, the user can splice multiple video segments recorded in separate time periods to generate an interesting video with a panoramic background. In addition, based on the images displayed in the first display area, the second display area and the fourth display area, the user can directly see the playing effect of the spliced video, achieving a "what you see is what you get" video splicing process and improving the user's interactive experience.
A specific implementation of each of the above steps is described below.
In step 101, the first file and the second file may be videos or photos stored in the electronic device in advance, or may be videos or photos acquired by a camera of the electronic device after receiving an instruction or operation from a user for making an interesting video. For example, the first file may be a first video, in which case the first file comprises multiple frames of the first image, or the first file may be a photo, in which case it comprises a single frame of the first image; likewise, the second file may be a second video comprising multiple frames of the second image, or a photo comprising a single frame of the second image. In other words, the target video may be generated based on the first video and the second video, based on the first video and a single-frame second image, based on a single-frame first image and the second video, or based on a single-frame first image and a single-frame second image.
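Since either file may be a single-frame photo while the other is a multi-frame video, the two files must agree on a frame count before they can be spliced frame by frame. The patent does not specify how this is done; the sketch below shows one straightforward approach, repeating the last (or only) frame to pad the shorter file. The function name and the list-of-frames representation are illustrative assumptions, not from the patent.

```python
def as_frames(file_frames, target_len):
    """Expand a file to exactly target_len frames.

    A single-frame file (a photo) is repeated so that it can be
    spliced with a multi-frame file (a video) of any length; a
    video shorter than target_len holds its last frame.
    """
    if not file_frames:
        raise ValueError("file must contain at least one frame")
    out = []
    for i in range(target_len):
        # Clamp the index so we reuse the final frame once exhausted.
        out.append(file_frames[min(i, len(file_frames) - 1)])
    return out
```

With this normalization, a photo behaves like a video whose every frame is identical, so a per-frame splicing step can treat all four file combinations uniformly.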
As shown in fig. 2, a first stitching window 201 and a second stitching window 202 may be displayed, wherein the first stitching window 201 may be used to display the first file, the second stitching window 202 may be used to display the second file, the first stitching window 201 may include a first display area 2011 and a second display area 2012, and the second stitching window 202 may include a third display area 2021 and a fourth display area 2022.
For example, referring to fig. 2, the second display region 2012 and the third display region 2021 may be overlapping regions, and the background image of the first document displayed in the second display region 2012 and the background image of the second document displayed in the third display region 2021 may be the same.
In step 102, as shown in fig. 2, in the case where the first stitching window 201 and the second stitching window 202 are displayed and the background image displayed in the second display region 2012 is the same as the background image displayed in the third display region 2021, the target video may be generated based on the image displayed in the first display region 2011, the background image displayed in the second display region 2012, and the image displayed in the fourth display region 2022. Equivalently, since the two background images are the same, the target video may be generated based on the image displayed in the first display region 2011, the background image displayed in the third display region 2021, and the image displayed in the fourth display region 2022.
Since the background image displayed in the second display region 2012 is identical to the background image displayed in the third display region 2021, the background image of this region can be used as a reference object when the first document and the second document are spliced. In other words, the same one of the background images may be retained, and then spliced together with the image displayed in the first display area 2011 and the image displayed in the fourth display area 2022, so that the target video with the panoramic background may be generated.
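The per-frame splice described above, keeping the shared background band only once, can be sketched in a few lines. Treating a frame as a flat list of pixel columns is a deliberate simplification for illustration (real frames are 2-D pixel arrays), and `stitch_frame` is a hypothetical helper, not an API from the patent.

```python
def stitch_frame(first_frame, second_frame, overlap):
    """Splice one output frame from two input frames.

    Each frame is modeled as a list of pixel columns. The last
    `overlap` columns of the first frame and the first `overlap`
    columns of the second frame show the same background; that
    shared band is kept exactly once, acting as the reference
    that joins the two frames into a single wider frame.
    """
    if overlap <= 0 or overlap > min(len(first_frame), len(second_frame)):
        raise ValueError("invalid overlap width")
    # Keep all of the first frame (its copy of the shared band
    # included), then append only the non-shared remainder of
    # the second frame.
    return first_frame + second_frame[overlap:]
```

Applying this to every frame pair of the two (length-normalized) files yields the frames of a target video whose width spans both original views, i.e. a panoramic background.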
It can be appreciated that, before the first stitching window 201 displays the first file and the second stitching window 202 displays the second file, an existing pixel comparison algorithm may be used to compare the first file with the second file, and perform processing such as cropping or moving at least one frame of the first image of the first file and/or at least one frame of the second image of the second file, so that after processing, the background image displayed in the second display area 2012 is the same as the background image displayed in the third display area 2021, thereby ensuring the stitching effect of the target video.
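The "existing pixel comparison algorithm" is not detailed in the text. Purely as an illustrative stand-in, one simple approach is to score each candidate overlap width by the mean absolute difference between the trailing columns of the first frame and the leading columns of the second, and keep the best-matching width; the 1-D grayscale-column model and the function name are assumptions for brevity.

```python
def find_overlap(first_frame, second_frame, max_overlap):
    """Estimate the overlap width between two frames.

    Tries every candidate width up to max_overlap and returns the
    one whose trailing/leading column bands match most closely
    (lowest mean absolute pixel difference).
    """
    best_w, best_score = 1, float("inf")
    for w in range(1, max_overlap + 1):
        a = first_frame[-w:]   # trailing band of the first frame
        b = second_frame[:w]   # leading band of the second frame
        score = sum(abs(x - y) for x, y in zip(a, b)) / w
        if score < best_score:
            best_w, best_score = w, score
    return best_w
```

Once the best width is found, cropping or shifting either file so the two bands coincide gives the identical background images required in the second and third display areas.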
In some embodiments, before the step 101, the video generating method may further perform the following steps:
receiving a first input of a user;
in response to the first input, displaying a first shooting preview interface of the first file;
The first shooting preview interface comprises a first preview area and a second preview area, the image displayed in the first display area is identical to the image displayed in the first preview area, and the image displayed in the second display area is identical to the image displayed in the second preview area.
In the embodiment of the present application, before the first and second stitching windows are displayed, a first input of a user may be received, where the first input may be: a click input of the user on the display interface of the electronic device, a voice command input by the user, or a specific gesture input by the user, which may be determined according to actual use requirements and is not limited by the embodiment of the present application. The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application can be a single click input, a double click input, a click input of any number of times, or the like, and can also be a long-press input or a short-press input.
As shown in fig. 3, the electronic device may acquire the first file and the second file when in a preset shooting mode. The preset shooting mode may be enabled by default by the electronic device, or may be enabled in response to a user operation when such an operation is received. For example, the preset shooting mode may be an "interesting panoramic video" mode: the electronic device may display a shooting mode selection interface 301, the shooting mode selection interface 301 may include an "interesting panoramic video" control, and the "interesting panoramic video" mode may be enabled in response to a user operation on that control. In this mode, a first input of the user is received, where the first input may be used to trigger acquisition of the first file.
As shown in fig. 4, a first shooting preview interface 401 of the first file may be displayed in response to the first input, wherein the first shooting preview interface 401 may include a first preview area 4011 and a second preview area 4012, and the image content displayed in the first shooting preview interface 401 constitutes the first file. It can be understood that the image displayed in the first preview area 4011 is the image subsequently displayed in the first display area, and the image displayed in the second preview area 4012 is the image subsequently displayed in the second display area.
Therefore, the first file can be acquired according to the operation of the user, so that the target video generated by splicing the first file can meet the personalized requirements of the user.
In some embodiments, after the first shooting preview interface of the first file is displayed, the video generating method may further perform the following steps:
Receiving a sixth input of a user to the first shooting preview interface;
In response to the sixth input, image parameters of the first file are determined.
In the embodiment of the application, after the first shooting preview interface of the first file is displayed, a sixth input of the user to the first shooting preview interface may be received. Wherein the sixth input may be: the click input of the user to the display interface of the electronic device, or the voice command input by the user, or the specific gesture input by the user, may be specifically determined according to the actual use requirement, which is not limited by the embodiment of the present application. The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application can be single click input, double click input or any time click input, and the like, and can also be long-press input or short-press input.
For example, the sixth input may be used to indicate a capture mode of the first file, where the capture mode may be the "shooting mode" or the "video mode". If the sixth input indicates that the capture mode of the first file is the "shooting mode", then in response to the sixth input it may be determined that the first file is a photo, that is, the first file may include a single frame of the first image; if the sixth input indicates that the capture mode of the first file is the "video mode", then in response to the sixth input it may be determined that the first file is the first video, that is, the first file may include multiple frames of the first image.
In some examples, to further satisfy the user's desire to make interesting videos, the sixth input may further include an operation for the user to select a specific shooting style within the "shooting mode" or the "video mode". As shown in fig. 5, when the electronic device displays the first shooting preview interface 501 of the first file, the electronic device may further display a "shooting mode" control 502 and a "video mode" control 503. The shooting styles corresponding to the "shooting mode" may be displayed in response to the user's operation on the "shooting mode" control 502, for example, styles such as "night view", "moon mode", "starry sky mode", and "double exposure"; the user may select a specific style of the "shooting mode" to shoot the first file, for example, the "moon mode". The shooting styles corresponding to the "video mode" may also be displayed in response to the user's operation on the "video mode" control 503, for example, styles such as "time-lapse shooting", "slow motion", "slow shutter", and "dual view"; the user may select a specific style of the "video mode" to record the first file, for example, "slow motion".
Therefore, the image parameters of the first file can be determined according to the selection of the user, namely, the first file can be the video or the photo acquired according to the requirement of the user, so that the mode of generating the target video is more diversified, and various requirements of the user can be met.
In some embodiments, after the first shooting preview interface of the first file is displayed, the video generating method may further perform the following steps:
receiving eighth input of a user to the first shooting preview interface;
In response to the eighth input, the display scale of the first preview area and the second preview area is updated.
In the embodiment of the application, after the first shooting preview interface of the first file is displayed, an eighth input of the user to the first shooting preview interface can be received. Wherein the eighth input may be: the click input of the user to the display interface of the electronic device, or the voice command input by the user, or the specific gesture input by the user, may be specifically determined according to the actual use requirement, which is not limited by the embodiment of the present application. The specific gesture in the embodiment of the application can be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application can be single click input, double click input or any time click input, and the like, and can also be long-press input or short-press input.
Upon receiving the eighth input, as shown in fig. 4, the display scale of the first preview area 4011 and the second preview area 4012 may be updated in response to the eighth input. For example, the display area of the first preview area 4011 and the display area of the second preview area 4012 may be updated according to the user's demand.
For example, in order to further ensure the accuracy of the stitching, the display area of the second preview area 4012 may be increased. In that case, more background image is displayed in the second display area during stitching, so the proportion of the shared background image in the first file and the second file is higher and the stitching effect is better.
In the embodiment of the application, the display proportion of the first preview area and the second preview area can be set according to the selection of the user, so that different requirements of the user on the operation or effect of generating the target video can be met.
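The trade-off described above, a wider second preview area leaves more shared background but a narrower unique area, can be made concrete with a small sketch. The function name and its parameters are assumptions for illustration, not from the patent.

```python
def split_preview(total_width, overlap_ratio):
    """Split the preview width into the first and second areas.

    overlap_ratio is the fraction of the width reserved for the
    second preview area (the shared-background band). Raising it
    improves the reliability of the later splice at the cost of
    a narrower unique (first) area.
    """
    if not 0.0 < overlap_ratio < 1.0:
        raise ValueError("overlap_ratio must be in (0, 1)")
    second = max(1, round(total_width * overlap_ratio))
    first = total_width - second
    return first, second
```

For instance, raising the ratio from 0.25 to 0.5 on a 100-column preview grows the shared band from 25 to 50 columns, which gives the pixel-comparison step twice as much common background to match against.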
In some embodiments, after the step 101, the video generating method may further perform the following steps:
receiving a second input of a user to the second shooting preview interface under the condition that the second shooting preview interface of the second file is displayed;
and in response to the second input, displaying a second image acquired by the camera in a second stitching window.
In the embodiment of the present application, as shown in fig. 6, after the first file is collected, the first file may be displayed in the first stitching window 601, and then the collection of the second file may be started. For example, a second shooting preview interface 603 of the second file may be displayed, at which time a second input of the user to the second shooting preview interface 603 may be received. Wherein the second input may be: a click input of the user on the display interface of the electronic device, a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements; this is not limited in the embodiment of the present application. The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application may be a single-click input, a double-click input, a click input of any number of times, and the like, and may also be a long-press input or a short-press input.
It may be understood that the first file may be a first video, where the first video may be synchronously played in the first stitching window, or a first image of a certain frame in the first video may be displayed in the first stitching window.
After receiving the second input, a second image captured by the camera may be displayed in the second stitching window 602 in response to the second input. It may be understood that the background image of the second file displayed in the third display area may be the same as the background image of the first file displayed in the second display area.
Therefore, when the second file is acquired, the first file can be synchronously displayed in the first splicing window, the foreground image of the first file can be referred to, the second file is guided to be recorded, and the video playing effect after splicing can be directly seen based on the first file displayed in the first splicing window and the second file displayed in the second splicing window, so that the recorded second file can meet the personalized requirement of a user for manufacturing panoramic video, and the interactive experience of the user is improved.
In some examples, as shown in fig. 6, the second shooting preview interface of the second file may display a shooting preview image of the second file, so that the user can more clearly view the content of the second file, and thus can acquire the second file meeting the requirement of the user.
In other examples, the electronic device may further monitor, in real time, the background image displayed in the third display area and the background image displayed in the second display area, and send a prompt message when the similarity between the background image displayed in the third display area and the background image displayed in the second display area is smaller than a preset similarity threshold, where the prompt message may be used to prompt the user that the background images of the second file and the first file are not matched at the moment, so that the user may adjust the shooting angle according to the prompt message. Therefore, the acquisition effect of the second file can be further ensured, and the splicing effect of the subsequent target video is further ensured.
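The real-time background comparison described above can be illustrated with a minimal sketch. The following Python code assumes grayscale images held as NumPy arrays and uses a simple histogram-intersection similarity; the embodiment specifies neither the similarity metric nor the threshold value, so both are illustrative assumptions, as are the function names:

```python
import numpy as np

def background_similarity(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Histogram-intersection similarity between two grayscale images, in [0, 1]."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    h_b, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    h_a = h_a / h_a.sum()  # normalize so each histogram sums to 1
    h_b = h_b / h_b.sum()
    return float(np.minimum(h_a, h_b).sum())

def check_backgrounds(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.8) -> bool:
    """Return True if the two display-area backgrounds match; a UI layer
    would send the prompt message when this returns False."""
    return background_similarity(img_a, img_b) >= threshold
```

A production implementation would more likely compare feature points or per-block statistics, but the monitoring loop has the same shape: compute a similarity score each frame, compare against a preset threshold, and prompt the user to adjust the shooting angle when the score falls below it.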
In some embodiments, the video generation method may further perform the following steps before receiving the second input of the user to the second photographing preview interface:
Receiving a seventh input of a user to the second shooting preview interface;
In response to the seventh input, image parameters of the second file are determined.
In the embodiment of the application, after the second shooting preview interface of the second file is displayed, a seventh input of the user to the second shooting preview interface may be received. Wherein the seventh input may be: a click input of the user on the display interface of the electronic device, a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements; this is not limited in the embodiment of the present application. The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application may be a single-click input, a double-click input, a click input of any number of times, and the like, and may also be a long-press input or a short-press input.
For example, the seventh input may be used to indicate a shooting mode of the second file, where the shooting mode may be a "shooting mode" or a "video mode". If the seventh input indicates that the shooting mode of the second file is the "shooting mode", it may be determined, in response to the seventh input, that the second file is a photograph, that is, the second file includes one frame of the second image; if the seventh input indicates that the shooting mode of the second file is the "video mode", it may be determined, in response to the seventh input, that the second file is the second video, that is, the second file includes multiple frames of the second image.
In some examples, to further satisfy the user's desire to make interesting video, the seventh input may further include an associated operation for instructing the user to select a specific shooting style of the "shooting mode" or the "video mode". As shown in fig. 7, when the electronic device displays the second shooting preview interface 701 of the second file, the electronic device may further display a "shooting mode" control 702 and a "video mode" control 703. The shooting styles corresponding to the "shooting mode" may be displayed in response to the user's operation of the "shooting mode" control 702, for example, shooting styles such as "night view", "moon mode", "starry sky mode", and "double exposure"; at this time, the user may select a specific shooting style of the "shooting mode" to shoot the second file, for example, the user may select the "moon mode" of the "shooting mode" to shoot the second file. The shooting styles corresponding to the "video mode" may also be displayed in response to the user operating the "video mode" control 703, for example, shooting styles such as "time-lapse photography", "slow motion", "slow shutter", and "dual view"; at this time, the user may select a specific shooting style of the "video mode" to record the second file, for example, the user may select the "slow motion" style of the "video mode" to record the second file.
Therefore, the image parameters of the second file can be determined according to the selection of the user, namely, the second file can be the video or the photo acquired according to the requirement of the user, so that the mode of generating the target video is more diversified, and various requirements of the user can be met.
In some embodiments, the video generation method may further perform the steps of:
and displaying a play speed control under the condition that the first file is played by the first splicing window, wherein the play speed control is used for adjusting the play speed of the first file.
In the embodiment of the present application, as shown in fig. 6, the first file may be a first video, and the first splicing window 601 may play the first video, where a play speed control 604 may be displayed, and the play speed control 604 may be used to adjust the play speed of the first video. For example, the playing speed of the first video may be determined by the user performing related operations such as clicking or sliding on the playing speed control 604, where a specific value of the playing speed may be set according to the needs of the user.
For example, the second file may be a second video, and the play speed of the first video may be adjusted by play speed control 604 when the second video is captured. Therefore, when the second video is acquired, besides the background image displayed in the second display area by referring to the first video, the acquired content of the second video can be correspondingly adjusted by referring to the content of the first video, so that the interest of the target video after subsequent splicing is higher, and the requirement of a user for manufacturing the interesting video can be met.
For example, in some examples, the play speed of the first video may be less than or equal to the recording speed of the second video. The recording speed of the second video may be "1×"; that is, when the second video is recorded at "1×" speed, the first video may be played at "1×" speed or less, for example, the first video may be played at "0.1×" speed.
It can be understood that when a user records the second video, the user needs to observe the content of the second video while also watching the first video as a content reference, which is rather difficult, and the user may be overwhelmed, so that the recorded content of the second video is difficult to adjust in time according to the played content of the first video. By playing the first video slowly, the recording difficulty of the second video can be reduced, so that the user can more conveniently refer to the content of the first video and adjust the recorded content of the second video in time, and further the target video after subsequent splicing can meet the requirements of the user.
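This timing relationship can be sketched concretely. If the first video is played at a speed below "1×" while the second video is recorded at normal speed, the reference frame shown at any moment of recording follows from simple arithmetic; the function name and frame-indexing convention below are illustrative assumptions, not part of the embodiment:

```python
def reference_frame_index(elapsed_s: float, fps: float, play_speed: float) -> int:
    """Index of the first-video frame displayed in the first splicing window
    after `elapsed_s` seconds of recording the second video, when the first
    video (at `fps` frames per second) is played back at `play_speed`."""
    return int(elapsed_s * fps * play_speed)
```

At 30 fps and a "0.5×" play speed, for example, two seconds of recording advance the reference video by only 30 frames, giving the user extra time to match the first video's content.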
In the embodiment of the application, under the condition that the first file is played in the first splicing window, the playing speed of the first file can be determined according to the requirement of a user, so that the flexibility of the acquisition process of the second file is effectively improved, various requirements of the user can be met, the acquired second file can be more in line with the requirement of the user, and further, a target video more satisfactory to the user is generated.
In some embodiments, the step 102 may specifically be performed as follows:
Image stitching is carried out on at least one frame of first image and at least one frame of second image, and at least one frame of target video image is obtained;
A target video is generated based on the at least one frame of target video image.
In the embodiment of the application, at least one frame of the first image and at least one frame of the second image are subjected to image stitching to obtain at least one frame of the target video image, and then the target video is generated. For example, one frame of the first image may be stitched with one frame of the second image to obtain one stitched target video image, thereby generating the target video. Or, each frame of the first image may be respectively stitched with a corresponding frame of the second image to obtain the stitched target video images, thereby generating the target video. Multiple frames of the first image may also be stitched with multiple frames of the second image to obtain the stitched target video images, thereby generating the target video.
In the embodiment of the application, owing to the limitation of the shooting angle of view of the camera, the first image and the second image often include only part of the scene content. By directly performing image stitching on at least one frame of the first image and at least one frame of the second image, a panoramic video can be obtained without a large amount of editing, which effectively improves the convenience of making interesting videos for the user.
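A minimal sketch of such direct stitching follows, under the simplifying assumption that each pair of frames shares a known-width overlap of identical background columns (the region shown in both the second and third display areas); real alignment would be estimated from image features, which this sketch does not attempt, and the function names are illustrative:

```python
import numpy as np

def stitch_pair(first: np.ndarray, second: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two frames that share `overlap` columns of background: the last
    `overlap` columns of `first` show the same scene as the first `overlap`
    columns of `second`, so the shared region is kept only once."""
    return np.concatenate([first, second[:, overlap:]], axis=1)

def stitch_videos(first_frames, second_frames, overlap):
    """Stitch corresponding frame pairs to obtain the target video frames."""
    return [stitch_pair(a, b, overlap) for a, b in zip(first_frames, second_frames)]
```

Each stitched frame is one target video image; encoding the resulting frame list then yields the target video.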
In some embodiments, the image stitching between the at least one first image and the at least one second image may specifically be performed as follows:
respectively carrying out image stitching on a first target image in at least one frame of first image and at least two frames of second images;
or respectively carrying out image stitching on the second target image in the at least one frame of second image and the at least two frames of first images.
In the embodiment of the present application, the first target image may refer to any first image in at least one frame of first image, where the first target image may be respectively image-spliced with at least two frames of second images, for example, when the first file is a photograph and the second file is a second video, the first file may include one frame of first image, and the second file may include multiple frames of second images, where the first target image is the one frame of first image, and the first target image may be respectively image-spliced with at least two frames of second images.
Similarly, the second target image may refer to any second image in at least one frame of second image, where the second target image may be respectively image-spliced with at least two frames of first images, for example, when the second file is a photograph and the first file is a first video, the second file may include one frame of second image, and the first file may include multiple frames of first images, and at this time, the second target image is the one frame of second image, and the second target image may be respectively image-spliced with at least two frames of first images.
In some examples, the first file may be a first video, and the second file may be a second video, where recording speeds of the first video and the second video may be different, for example, when the second video is recorded, a playing speed of the first video is less than "1", where a frame number of a first image included in the first video is less than a frame number of a second image included in the second video, in other words, one frame of the first image may correspond to multiple frames of the second image, and at this time, one frame of the first image and its corresponding multiple frames of the second image may also be respectively subjected to image stitching.
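Under the assumption that the first video was played at a reduced speed (for example "0.2×") while the second video was recorded at normal speed, the one-to-many correspondence above reduces to integer index arithmetic; the helper below is an illustrative sketch, not the embodiment's stated method:

```python
def corresponding_first_frame(second_idx: int, play_speed: float) -> int:
    """First-video frame to stitch with second-video frame `second_idx`,
    when the first video was played at `play_speed` (< 1) during recording."""
    frames_per_first = round(1 / play_speed)  # e.g. 5 when play_speed == 0.2
    return second_idx // frames_per_first
```

With a play speed of "0.2×", second-video frames 0 through 4 all stitch with first-video frame 0, frames 5 through 9 with first-video frame 1, and so on, so each first image is stitched with its multiple corresponding second images.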
In the embodiment of the application, one frame of image can be respectively spliced with multiple frames of images, so that multiple frames of target video images are obtained, the target video is generated, the flexibility of image splicing is improved, the generation mode of the target video is more diversified, and various requirements of users for making interesting videos can be met.
In some embodiments, before the image stitching the at least one frame of the first image and the at least one frame of the second image, the video generating method may further perform the following steps:
displaying at least one first image and at least one second image;
Receiving a third input of a user to at least one frame of the first image;
determining a first starting image frame in response to a third input;
receiving a fourth input of a user to at least one frame of the second image;
Determining a second starting image frame in response to the fourth input;
Correspondingly, the image stitching of the at least one frame of the first image and the at least one frame of the second image may specifically be performed as follows:
And performing image stitching on a first target image frame sequence in at least one first image and a second target image frame sequence in at least one second image, wherein a starting frame of the first target image frame sequence is a first starting image frame, and a starting frame of the second target image frame sequence is a second starting image frame.
In an embodiment of the present application, as shown in fig. 8, at least one first image 801 and at least one second image 802 may be displayed; then a third input of the user to the at least one first image 801 may be received, so that a first starting image frame is determined from the at least one first image 801 in response to the third input, and a fourth input of the user to the at least one second image 802 may be received, so that a second starting image frame is determined from the at least one second image 802 in response to the fourth input.
Wherein the third input and the fourth input may be: a click input of the user on the at least one first image 801 or the at least one second image 802, a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements; this is not limited in the embodiment of the present application. The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application may be a single-click input, a double-click input, a click input of any number of times, and the like, and may also be a long-press input or a short-press input.
It will be appreciated that the first starting image frame may be determined first and then the second starting image frame, or the second starting image frame may be determined first and then the first starting image frame; the order is not limited here.
As shown in fig. 9, after determining the first start image frame 9011 from the at least one first image 901 and the second start image frame 9021 from the at least one second image 902, the first target image frame sequence may be determined from the at least one first image 901 using the first start image frame 9011 as a start frame, and the second target image frame sequence may be determined from the at least one second image 902 using the second start image frame 9021 as a start frame. And then, performing image stitching on the first target image frame sequence and the second target image frame sequence, so as to generate a target video.
It can be appreciated that when the first file and the second file are acquired, a user may need a certain preparation time, and when the second file is recorded by referring to the content of the first file, it is often required to observe the content of the first file first and then record the second file correspondingly, and this process may also need a certain reaction time, which may cause a certain time error between the recorded second file and the first file, so if the first frame first image and the first frame second image are directly spliced, it may cause that the content of the generated target video does not meet the expectations of the user.
In the embodiment of the application, the first initial image frame and the second initial image frame can be determined according to the selection of a user, and the first initial image frame and the second initial image frame can be used as the reference to carry out image stitching on the first target image frame sequence and the second target image frame sequence, so that the corresponding relation between at least one frame of first image and at least one frame of second image can be further ensured, and the stitching effect of the target video is better.
In some examples, one frame of the first image may correspond to multiple frames of the second image. For example, when the second file is recorded, the playing speed of the first file may be "0.2×", in which case one frame of the first image may be considered to correspond to five frames of the second image. When the user selects the first starting image frame and the second starting image frame, the first starting image frame may be determined from the at least one frame of the first image, five initial starting image frames may be determined from the at least one frame of the second image, and then the second starting image frame may be determined from the initial starting image frames.
Based on the fact that the other first images except the first starting image frame in the at least one frame of the first image also correspond to multiple frames of the second image, in order to reduce the operations of the user and quickly align the image frames of the at least one frame of the first image with those of the at least one frame of the second image, after the second starting image frame is determined, the aligned image frame of each other first image in the at least one frame of the second image may be determined at equal time intervals. For example, if the third of the five initial starting image frames corresponding to the first starting image frame is taken as the second starting image frame, the image frame aligned with each other first image is the third of the five second images corresponding to that first image.
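The equal-time-interval alignment just described can be sketched as a walk over both frame sequences; `ratio` is the assumed number of second-video frames per first-video frame (five in the "0.2×" example), and the function name and signature are illustrative assumptions:

```python
def align_frames(num_first: int, num_second: int,
                 first_start: int, second_start: int, ratio: int = 1):
    """Pair each first-video frame (from `first_start`) with its aligned
    second-video frame, spaced `ratio` second-frames apart (equal time
    intervals), stopping when either sequence runs out."""
    pairs = []
    i, j = first_start, second_start
    while i < num_first and j < num_second:
        pairs.append((i, j))
        i += 1
        j += ratio
    return pairs
```

Each returned pair is then one stitching operation, so the resulting pair list directly drives the generation of the target video frames.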
Alternatively, based on the condition that the other first images except the first starting image frame in the at least one frame of the first image also correspond to multiple frames of the second image, in order to further improve the image frame stitching effect of the at least one frame of the first image and the at least one frame of the second image, the second image used for image stitching with each frame of the first image may be determined from the at least one frame of the second image according to the selection of the user.
In some embodiments, after the step 102, the video generating method may further perform the following steps:
Displaying a playing interface of the target video;
Receiving a fifth input of a user to the playing interface;
In response to the fifth input, performing a target process;
Wherein the target process comprises at least one of:
storing the target video;
re-recording the first file displayed in the first splicing window;
re-recording the second file displayed in the second splicing window;
and splicing the target video with a third file, wherein the third file comprises at least one frame of third image.
In the embodiment of the present application, as shown in fig. 10, after the target video is generated, a playing interface 1001 of the target video may be displayed, and a fifth input of the user to the playing interface may be received. Wherein the fifth input may be: a click input of the user on the playing interface, a voice command input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements; this is not limited in the embodiment of the present application. The specific gesture in the embodiment of the application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture and a double-click gesture; the click input in the embodiment of the application may be a single-click input, a double-click input, a click input of any number of times, and the like, and may also be a long-press input or a short-press input.
Upon receiving the fifth input, a target process may be performed in response to the fifth input, wherein the target process may include storing the target video, re-recording the first file displayed in the first stitching window, re-recording the second file displayed in the second stitching window, or stitching the target video with a third file, wherein the third file may include at least one frame of a third image.
Illustratively, as shown in FIG. 10, the play interface 1001 may include a plurality of functionality controls, which may include, for example, a "re-record" control 1002, a "continue record" control 1003, a "save video" control 1004, and the like. If an associated input is received to the "re-record" control 1002, the first file and/or the second file may be re-recorded. If a related input to the "continue recording" control 1003 is received, a third file may be recorded on the basis of obtaining the target video, and image stitching may be performed on the third file and the second file. If relevant input is received to the "save video" control 1004, the target video may be directly output.
In the embodiment of the application, the target video can be further processed according to the needs of the user, so that different requirements of the user for making interesting videos can be met.
In some embodiments, in the case where the target processing includes splicing the target video with the third file, after receiving the fifth input of the user to the playing interface, the video generating method may further perform the following steps:
And displaying a third spliced window, wherein the third spliced window is used for displaying a third file, the second spliced window comprises a fifth display area, the third spliced window comprises a sixth display area, and a background image displayed by the fifth display area is the same as a background image displayed by the sixth display area.
In the embodiment of the present application, as shown in fig. 2, the second stitching window 202 further includes a fifth display area 2023, where the target processing may include stitching the target video with the third file, as shown in fig. 11, a third stitching window 1101 may also be displayed, and the third stitching window 1101 may be used to display the third file. The third file may be a video or a photo stored in the electronic device in advance, or may be a video or a photo collected by a camera of the electronic device after receiving an instruction or operation of making an interesting video by a user.
Referring to fig. 11, the electronic device may display a first splicing window 1103, a second splicing window 1102, and a third splicing window 1101. The third splicing window 1101 may include a sixth display area 11011 overlapping with the fifth display area 11022, and the background image of the second file displayed in the fifth display area 11022 may be the same as the background image of the third file displayed in the sixth display area 11011. A panoramic video may then be generated based on the image displayed in the first display area 11031, the background image displayed in the second display area 11032, the image displayed in the fourth display area 11021, the background image displayed in the fifth display area 11022, and the image displayed in the seventh display area 11012 (the display area of the third splicing window 1101 other than the sixth display area 11011).
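Extending the two-file case, stitching three or more files into one panoramic frame can be sketched as a left-to-right fold over the windows' frames, again under the simplifying assumption of a fixed, known overlap width of identical background between each neighbouring pair of files; the helper below is illustrative only:

```python
import numpy as np

def stitch_chain(frames, overlap):
    """Stitch a list of frames left to right; each neighbouring pair shares
    `overlap` columns of identical background, which are kept only once."""
    result = frames[0]
    for nxt in frames[1:]:
        result = np.concatenate([result, nxt[:, overlap:]], axis=1)
    return result
```

With three files of equal width, the panoramic frame grows by the non-overlapping remainder of each additional file, matching the accumulation of display areas described above.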
It will be appreciated that a fourth splicing window for displaying a fourth file, a fifth splicing window for displaying a fifth file, and the like may also be displayed.
Therefore, after the target video is generated by image stitching based on the first file and the second file, the target video can be further subjected to image stitching with the third file on the basis of the target video, and therefore the diversified requirements of users for making interesting videos can be further met.
According to the video generation method provided by the embodiment of the application, the execution subject can be a video generation device. In the embodiment of the present application, a method for executing video generation by a video generation device is taken as an example, and the video generation device provided in the embodiment of the present application is described.
Fig. 12 is a schematic structural diagram of a video generating apparatus according to an embodiment of the present application. The video generating apparatus 1200 may include:
the first display module 1201 is configured to display a first splicing window and a second splicing window, where the first splicing window is configured to display a first file, the second splicing window is configured to display a second file, the first file includes at least one frame of a first image, the second file includes at least one frame of a second image, the first splicing window includes a first display area and a second display area, the second splicing window includes a third display area and a fourth display area, and the background image displayed in the second display area is the same as the background image displayed in the third display area;
A generating module 1202, configured to generate a target video based on the image displayed in the first display area, the background image displayed in the second display area, and the image displayed in the fourth display area.
In the embodiment of the application, a first splicing window and a second splicing window can be displayed, wherein the first splicing window is used for displaying a first file and comprises a first display area and a second display area, the second splicing window is used for displaying a second file and comprises a third display area and a fourth display area, a background image displayed by the second display area is the same as a background image displayed by the third display area, and then a target video is generated based on the image displayed by the first display area, the background image displayed by the second display area and the image displayed by the fourth display area.
Therefore, the user does not need to carry out complex editing operation, and the multi-section video recorded in time periods can be spliced to generate the interesting video with the panoramic background; in addition, based on the images displayed in the first display area, the second display area and the fourth display area, the spliced video playing effect can be directly seen, the video splicing process of 'what you see is what you get' is achieved, and the interactive experience of a user is improved.
In some embodiments, the video generating apparatus 1200 may further include:
the first receiving module is used for receiving a first input of a user;
the second display module is used for responding to the first input and displaying a first shooting preview interface of the first file;
The first shooting preview interface comprises a first preview area and a second preview area, the image displayed in the first display area is identical to the image displayed in the first preview area, and the image displayed in the second display area is identical to the image displayed in the second preview area.
Therefore, the first file can be acquired according to the operation of the user, so that the target video generated by splicing the first file can meet the personalized requirements of the user.
In some embodiments, the video generating apparatus 1200 may further include:
The second receiving module is used for receiving a second input of a user to the second shooting preview interface under the condition that the second shooting preview interface of the second file is displayed;
And the third display module is used for responding to the second input and displaying a second image acquired by the camera in the second splicing window.
Therefore, when the second file is acquired, the first file can be synchronously displayed in the first splicing window, the foreground image of the first file can be referred to, the second file is guided to be recorded, and the video playing effect after splicing can be directly seen based on the first file displayed in the first splicing window and the second file displayed in the second splicing window, so that the recorded second file can meet the personalized requirement of a user for manufacturing panoramic video, and the interactive experience of the user is improved.
In some embodiments, the video generating apparatus 1200 may further include:
And the fourth display module is used for displaying a play speed control under the condition that the first file is played by the first splicing window, and the play speed control is used for adjusting the play speed of the first file.
In the embodiment of the application, under the condition that the first file is played in the first splicing window, the playing speed of the first file can be determined according to the requirement of a user, so that the flexibility of the acquisition process of the second file is effectively improved, various requirements of the user can be met, the acquired second file can be more in line with the requirement of the user, and further, a target video more satisfactory to the user is generated.
In some embodiments, the generation module 1202 may include:
the splicing unit is used for carrying out image splicing on at least one frame of first image and at least one frame of second image to obtain at least one frame of target video image;
and the generating unit is used for generating the target video based on at least one frame of target video image.
In the embodiment of the application, because the shooting visual angle of the camera is limited, the first image and the second image each typically capture only part of the scene. By directly performing image stitching on at least one frame of first image and at least one frame of second image, a panoramic video can be obtained without extensive editing, effectively improving the convenience for a user to make an interesting video.
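As a minimal illustration of this stitching step (a hypothetical helper, not from the patent; frames are represented as plain lists of pixel rows, and the overlap width between the two views is assumed to be known), two frames sharing a background strip can be joined by counting the shared columns only once:

```python
def stitch_frames(first_img, second_img, overlap):
    """Stitch two frames of equal height side by side.

    The rightmost `overlap` columns of `first_img` and the leftmost
    `overlap` columns of `second_img` are assumed to show the same
    background, so the shared strip is kept only once.
    """
    if len(first_img) != len(second_img):
        raise ValueError("frames must have the same height")
    # For each row, keep the whole first row and append only the
    # non-overlapping part of the second row.
    return [list(row_a) + list(row_b[overlap:])
            for row_a, row_b in zip(first_img, second_img)]
```

For example, stitching `[[1, 2, 3]]` with `[[3, 4, 5]]` at an overlap of one column yields `[[1, 2, 3, 4, 5]]`: the shared column appears once, giving the widened panoramic frame.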
In some embodiments, the stitching unit may be specifically configured to:
respectively carrying out image stitching on a first target image in at least one frame of first image and at least two frames of second images;
or respectively carrying out image stitching on the second target image in the at least one frame of second image and the at least two frames of first images.
In the embodiment of the application, one frame of image can be spliced separately with multiple frames of images to obtain multiple frames of target video images, from which the target video is generated. This improves the flexibility of image stitching, diversifies the ways in which the target video can be generated, and meets users' varied requirements for making interesting videos.
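A sketch of this one-to-many case (hypothetical helper names; frames are lists of pixel rows): a single first-file image, for instance a photo, is stitched with every frame of the second file, producing one target video frame per second-file frame:

```python
def stitch_pair(frame_a, frame_b, overlap):
    """Join two equal-height frames, counting the `overlap` shared
    background columns only once."""
    return [list(ra) + list(rb[overlap:]) for ra, rb in zip(frame_a, frame_b)]

def stitch_one_to_many(still_frame, moving_frames, overlap):
    """Stitch one first-file image with each second-file frame in turn,
    yielding one target video frame per second-file frame."""
    return [stitch_pair(still_frame, frame, overlap) for frame in moving_frames]
```

The symmetric case (one second-file image against multiple first-file frames) follows by swapping the argument order.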
In some embodiments, the generation module 1202 may further include:
A display unit for displaying at least one frame of a first image and at least one frame of a second image;
a first receiving unit for receiving a third input of at least one frame of a first image from a user;
a first determining unit for determining a first starting image frame in response to a third input;
a second receiving unit for receiving a fourth input of at least one frame of a second image from a user;
a second determining unit for determining a second starting image frame in response to a fourth input;
accordingly, the splicing unit may be specifically configured to:
And performing image stitching on a first target image frame sequence in at least one first image and a second target image frame sequence in at least one second image, wherein a starting frame of the first target image frame sequence is a first starting image frame, and a starting frame of the second target image frame sequence is a second starting image frame.
In the embodiment of the application, the first starting image frame and the second starting image frame can be determined according to the user's selection, and image stitching can be performed on the first target image frame sequence and the second target image frame sequence with these starting frames as references. This further ensures the correspondence between the at least one frame of first image and the at least one frame of second image, so that the splicing effect of the target video is better.
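The alignment described above can be sketched as follows (hypothetical helper; the user-chosen start indices stand in for the selected starting image frames, and the paired sequences are truncated to the shorter tail):

```python
def align_and_stitch(first_frames, second_frames,
                     first_start, second_start, overlap):
    """Stitch the first-file frames from `first_start` onward with the
    second-file frames from `second_start` onward, frame by frame.

    Each frame is a list of pixel rows; the `overlap` shared background
    columns are counted only once per stitched frame.
    """
    seq_a = first_frames[first_start:]   # first target image frame sequence
    seq_b = second_frames[second_start:] # second target image frame sequence
    return [
        [list(ra) + list(rb[overlap:]) for ra, rb in zip(fa, fb)]
        for fa, fb in zip(seq_a, seq_b)
    ]
```

Because `zip` stops at the shorter sequence, the stitched result contains as many target frames as the shorter of the two aligned sub-sequences.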
In some embodiments, the video generating apparatus 1200 may further include:
a fifth display module, used for displaying a playing interface of the target video;
a third receiving module, used for receiving a fifth input of a user to the playing interface;
and an execution module, used for executing a target process in response to the fifth input;
wherein the target process comprises at least one of:
storing the target video;
re-recording the first file displayed in the first splicing window;
re-recording the second file displayed in the second splicing window;
and splicing the target video with a third file, wherein the third file comprises at least one frame of third image.
In the embodiment of the application, the target video can be further processed according to the needs of the user, so that different requirements of the user for making interesting videos can be met.
In some embodiments, where the target processing may include stitching the target video with the third file, the video generating apparatus 1200 may further include:
a sixth display module, used for displaying a third splicing window, wherein the third splicing window is used for displaying the third file, the second splicing window comprises a fifth display area, the third splicing window comprises a sixth display area, and the background image displayed in the fifth display area is the same as the background image displayed in the sixth display area.
Therefore, after the target video is generated by image stitching based on the first file and the second file, it can be further stitched with the third file, thereby further meeting users' diversified requirements for making interesting videos.
In some embodiments, the video generating apparatus 1200 may further include:
the fourth receiving module is used for receiving a sixth input of the user to the first shooting preview interface;
The first determining module is used for responding to the sixth input and determining image parameters of the first file.
Therefore, the image parameters of the first file can be determined according to the user's selection; that is, the first file may be a video or a photo captured according to the user's needs, which makes the ways of generating the target video more diversified and meets various user requirements.
In some embodiments, the video generating apparatus 1200 may further include:
the fifth receiving module is used for receiving a seventh input of the user to the second shooting preview interface;
and a second determining module for determining image parameters of the second file in response to the seventh input.
Therefore, the image parameters of the second file can be determined according to the user's selection; that is, the second file may be a video or a photo captured according to the user's needs, which makes the ways of generating the target video more diversified and meets various user requirements.
In some embodiments, the video generating apparatus 1200 may further include:
the sixth receiving module is used for receiving eighth input of the user to the first shooting preview interface;
and an updating module for updating the display scale of the first preview area and the second preview area in response to the eighth input.
In the embodiment of the application, the display proportion of the first preview area and the second preview area can be set according to the selection of the user, so that different requirements of the user on the operation or effect of generating the target video can be met.
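A minimal sketch of the display-proportion update (a hypothetical function; the patent does not specify how the proportion is represented, so here it is taken as the fraction of the total preview width assigned to the first preview area):

```python
def preview_widths(total_width, first_ratio):
    """Split the preview width between the first and second preview areas
    according to a user-chosen fraction for the first area."""
    if not 0.0 < first_ratio < 1.0:
        raise ValueError("ratio must be strictly between 0 and 1")
    first = round(total_width * first_ratio)
    # The second area takes whatever remains, so the two always sum
    # exactly to the total width.
    return first, total_width - first
```

On an eighth input the device would recompute the two widths with the new ratio and relayout the preview areas accordingly.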
The video generating device in the embodiment of the application may be an electronic device or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); it may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not particularly limited in the embodiments of the present application.
The video generating apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present application.
The video generating apparatus provided by the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 11 to achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
Optionally, as shown in fig. 13, the embodiment of the present application further provides an electronic device 1300, including a processor 1301 and a memory 1302, where the memory 1302 stores a program or an instruction that can be executed on the processor 1301, and the program or the instruction implements each step of the embodiment of the video generating method when executed by the processor 1301, and can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
Fig. 14 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1400 includes, but is not limited to: radio frequency unit 1401, network module 1402, audio output unit 1403, input unit 1404, sensor 1405, display unit 1406, user input unit 1407, interface unit 1408, memory 1409, and processor 1410.
Those skilled in the art will appreciate that the electronic device 1400 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1410 through a power management system so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail herein.
Wherein the display unit 1406 may be configured to:
Displaying a first splicing window and a second splicing window, wherein the first splicing window is used for displaying a first file, the second splicing window is used for displaying a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first splicing window comprises a first display area and a second display area, the second splicing window comprises a third display area and a fourth display area, and a background image displayed in the second display area is the same as a background image displayed in the third display area;
The processor 1410 may be configured to:
and generating a target video based on the image displayed in the first display area, the background image displayed in the second display area and the image displayed in the fourth display area.
In the embodiment of the application, a first splicing window and a second splicing window can be displayed, wherein the first splicing window is used for displaying a first file and comprises a first display area and a second display area, the second splicing window is used for displaying a second file and comprises a third display area and a fourth display area, a background image displayed by the second display area is the same as a background image displayed by the third display area, and then a target video is generated based on the image displayed by the first display area, the background image displayed by the second display area and the image displayed by the fourth display area.
Therefore, the user does not need to perform complex editing operations: multiple video segments recorded at different times can be spliced to generate an interesting video with a panoramic background. In addition, based on the images displayed in the first display area, the second display area and the fourth display area, the playing effect of the spliced video can be seen directly, achieving a "what you see is what you get" video splicing process and improving the user's interactive experience.
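The per-frame composition implied above can be sketched as follows (hypothetical helper; each display area is given as a list of pixel rows of equal height, and the background shared by the second and third display areas is counted once):

```python
def compose_target_frame(first_area, shared_background, fourth_area):
    """Concatenate the three regions column-wise: the first splicing
    window's unique part, the background shared by the second and third
    display areas (kept once), and the second splicing window's unique
    part from the fourth display area."""
    return [list(a) + list(bg) + list(b)
            for a, bg, b in zip(first_area, shared_background, fourth_area)]
```

Running this for every frame pair produces the target video frames whose layout matches what the user already sees in the two splicing windows.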
In some embodiments, the user input unit 1407 may be used to receive a first input of a user;
the display unit 1406 may also be configured to display a first shooting preview interface of the first file in response to the first input;
The first shooting preview interface comprises a first preview area and a second preview area, the image displayed in the first display area is identical to the image displayed in the first preview area, and the image displayed in the second display area is identical to the image displayed in the second preview area.
Therefore, the first file can be acquired according to the user's operation, so that the target video generated by splicing the first file meets the user's personalized requirements.
In some embodiments, the user input unit 1407 may also be configured to receive a second input of a user to a second shooting preview interface of the second file in the case that the second shooting preview interface is displayed;
The display unit 1406 may also be configured to display a second image acquired by the camera in a second stitching window in response to a second input.
Therefore, when the second file is acquired, the first file can be displayed synchronously in the first splicing window, so that the foreground image of the first file can be used as a reference to guide the recording of the second file. In addition, based on the first file displayed in the first splicing window and the second file displayed in the second splicing window, the playing effect of the spliced video can be seen directly, so that the recorded second file better meets the user's personalized requirements for making a panoramic video, improving the user's interactive experience.
In some embodiments, the display unit 1406 may be further configured to display a play speed control for adjusting a play speed of the first file in a case where the first file is played in the first splicing window.
In the embodiment of the application, when the first file is played in the first splicing window, its playing speed can be determined according to the user's needs. This effectively improves the flexibility of the process of acquiring the second file and allows various user requirements to be met, so that the acquired second file better matches the user's needs and a more satisfactory target video is generated.
In some embodiments, processor 1410 may be specifically configured to:
Image stitching is carried out on at least one frame of first image and at least one frame of second image, and at least one frame of target video image is obtained;
A target video is generated based on the at least one frame of target video image.
In the embodiment of the application, because the shooting visual angle of the camera is limited, the first image and the second image each typically capture only part of the scene. By directly performing image stitching on at least one frame of first image and at least one frame of second image, a panoramic video can be obtained without extensive editing, effectively improving the convenience for a user to make an interesting video.
In some embodiments, processor 1410 may be specifically configured to:
respectively carrying out image stitching on a first target image in at least one frame of first image and at least two frames of second images;
or respectively carrying out image stitching on the second target image in the at least one frame of second image and the at least two frames of first images.
In the embodiment of the application, one frame of image can be spliced separately with multiple frames of images to obtain multiple frames of target video images, from which the target video is generated. This improves the flexibility of image stitching, diversifies the ways in which the target video can be generated, and meets users' varied requirements for making interesting videos.
In some embodiments, the display unit 1406 may also be used to display at least one frame of a first image and at least one frame of a second image;
the user input unit 1407 may also be configured to receive a third input from a user of the at least one frame of the first image;
the processor 1410 may also be configured to determine a first starting image frame in response to a third input;
the user input unit 1407 may also be used to receive a fourth input of the user for at least one frame of the second image;
The processor 1410 may also be configured to determine a second starting image frame in response to a fourth input;
the processor 1410 may also be used to:
And performing image stitching on a first target image frame sequence in at least one first image and a second target image frame sequence in at least one second image, wherein a starting frame of the first target image frame sequence is a first starting image frame, and a starting frame of the second target image frame sequence is a second starting image frame.
In the embodiment of the application, the first starting image frame and the second starting image frame can be determined according to the user's selection, and image stitching can be performed on the first target image frame sequence and the second target image frame sequence with these starting frames as references. This further ensures the correspondence between the at least one frame of first image and the at least one frame of second image, so that the splicing effect of the target video is better.
In some embodiments, the display unit 1406 may also be used to display a playback interface for the target video;
the user input unit 1407 may also be used to receive a fifth input of the user to the playback interface;
The processor 1410 may also be configured to perform a target process in response to a fifth input;
Wherein the target process comprises at least one of:
storing the target video;
re-recording the first file displayed in the first splicing window;
re-recording the second file displayed in the second splicing window;
and splicing the target video with a third file, wherein the third file comprises at least one frame of third image.
In the embodiment of the application, the target video can be further processed according to the needs of the user, so that different requirements of the user for making interesting videos can be met.
In some embodiments, where the target processing may include stitching the target video with a third file, the display unit 1406 may also be configured to:
display a third splicing window, wherein the third splicing window is used for displaying the third file, the second splicing window comprises a fifth display area, the third splicing window comprises a sixth display area, and the background image displayed in the fifth display area is the same as the background image displayed in the sixth display area.
Therefore, after the target video is generated by image stitching based on the first file and the second file, it can be further stitched with the third file, thereby further meeting users' diversified requirements for making interesting videos.
In some embodiments, the user input unit 1407 may also be configured to receive a sixth input from the user to the first capture preview interface;
The processor 1410 may be configured to determine an image parameter of the first file in response to the sixth input.
Therefore, the image parameters of the first file can be determined according to the user's selection; that is, the first file may be a video or a photo captured according to the user's needs, which makes the ways of generating the target video more diversified and meets various user requirements.
In some embodiments, the user input unit 1407 may also be configured to receive a seventh input of the user to the second capture preview interface;
The processor 1410 may be configured to determine an image parameter of the second file in response to the seventh input.
Therefore, the image parameters of the second file can be determined according to the user's selection; that is, the second file may be a video or a photo captured according to the user's needs, which makes the ways of generating the target video more diversified and meets various user requirements.
In some embodiments, the user input unit 1407 may also be configured to receive an eighth input of the user to the first capture preview interface;
The processor 1410 may be configured to update the display scale of the first preview area and the second preview area in response to the eighth input.
In the embodiment of the application, the display proportion of the first preview area and the second preview area can be set according to the selection of the user, so that different requirements of the user on the operation or effect of generating the target video can be met.
It should be appreciated that in embodiments of the present application, the input unit 1404 may include a graphics processor (Graphics Processing Unit, GPU) 14041 and a microphone 14042, with the graphics processor 14041 processing image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1407 includes at least one of a touch panel 14071 and other input devices 14072. The touch panel 14071 is also referred to as a touch screen. The touch panel 14071 may include two parts, a touch detection device and a touch controller. Other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
Memory 1409 may be used to store software programs as well as various data. The memory 1409 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system and the application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 1409 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synch-link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). Memory 1409 in embodiments of the application includes, but is not limited to, these and any other suitable types of memory.
Processor 1410 may include one or more processing units; optionally, the processor 1410 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1410.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the embodiment of the video generating method, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes computer readable storage medium such as computer readable memory ROM, random access memory RAM, magnetic or optical disk, etc.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the video generation method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chip referred to in the embodiments of the present application may also be called a system-level chip, a chip system, a system-on-chip, or the like.
Embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the video generation method embodiment described above, and achieve the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the methods according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Inspired by the present application, those of ordinary skill in the art may devise many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (20)

1. A method of video generation, the method comprising:
Displaying a first splicing window and a second splicing window, wherein the first splicing window is used for displaying a first file, the second splicing window is used for displaying a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first splicing window comprises a first display area and a second display area, the second splicing window comprises a third display area and a fourth display area, and a background image displayed by the second display area is the same as a background image displayed by the third display area;
generating a target video based on the image displayed in the first display area, the background image displayed in the second display area and the image displayed in the fourth display area;
the second display area and the third display area are overlapped, and the first display area and the fourth display area are respectively positioned at two sides of the overlapped area.
2. The method of generating video according to claim 1, wherein before displaying the first and second mosaic windows, further comprising:
receiving a first input of a user;
responsive to the first input, displaying a first shooting preview interface of a first file;
The first shooting preview interface comprises a first preview area and a second preview area, the image displayed by the first display area is identical to the image displayed by the first preview area, and the image displayed by the second display area is identical to the image displayed by the second preview area.
3. The method of generating video according to claim 1, wherein after the displaying the first and second mosaic windows, further comprising:
Receiving a second input of a user to a second shooting preview interface under the condition that the second shooting preview interface of a second file is displayed;
and responding to the second input, and displaying a second image acquired by the camera in the second splicing window.
4. The video generation method according to claim 1, characterized in that the method further comprises:
And displaying a play speed control under the condition that the first file is played by the first splicing window, wherein the play speed control is used for adjusting the play speed of the first file.
5. The method according to claim 1, wherein the generating the target video based on the image displayed in the first display area, the background image displayed in the second display area, and the image displayed in the fourth display area, comprises:
Image stitching is carried out on the at least one first image and the at least one second image, so that at least one target video image is obtained;
generating a target video based on the at least one frame of target video image.
6. The method of video generation according to claim 5, wherein said image stitching the at least one first image with the at least one second image comprises:
Respectively carrying out image stitching on a first target image in the at least one frame of first image and at least two frames of second images;
Or respectively carrying out image stitching on the second target image in the at least one frame of second image and at least two frames of first images.
7. The video generation method according to claim 5, wherein, before the image stitching of the at least one frame of first image and the at least one frame of second image, the method further comprises:
displaying the at least one frame of first image and the at least one frame of second image;
receiving a third input of a user on the at least one frame of first image;
determining a first starting image frame in response to the third input;
receiving a fourth input of a user on the at least one frame of second image; and
determining a second starting image frame in response to the fourth input;
wherein the image stitching of the at least one frame of first image and the at least one frame of second image comprises:
performing image stitching on a first target image frame sequence in the at least one frame of first image and a second target image frame sequence in the at least one frame of second image, wherein a starting frame of the first target image frame sequence is the first starting image frame, and a starting frame of the second target image frame sequence is the second starting image frame.
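The alignment step of claim 7, trimming each file to its user-selected starting frame before stitching, can be sketched as follows; the start indices and function name are illustrative only:

```python
import numpy as np

def stitch_from_start_frames(first_frames, second_frames,
                             first_start, second_start):
    """Drop frames before each chosen starting frame, then pair the
    remaining frames one-to-one and join each pair side by side."""
    seq_a = first_frames[first_start:]
    seq_b = second_frames[second_start:]
    return [np.hstack([a, b]) for a, b in zip(seq_a, seq_b)]

first = [np.zeros((2, 2, 3), dtype=np.uint8) for _ in range(6)]
second = [np.full((2, 2, 3), 9, dtype=np.uint8) for _ in range(8)]
video = stitch_from_start_frames(first, second, 2, 3)
print(len(video), video[0].shape)  # min(6-2, 8-3) = 4 frames of (2, 4, 3)
```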
8. The video generation method according to claim 1, wherein, after the generating of the target video, the method further comprises:
displaying a playing interface of the target video;
receiving a fifth input of a user on the playing interface; and
performing target processing in response to the fifth input;
wherein the target processing comprises at least one of:
storing the target video;
re-recording the first file displayed in the first splicing window;
re-recording the second file displayed in the second splicing window; and
splicing the target video with a third file, wherein the third file comprises at least one frame of third image.
9. The method according to claim 8, wherein, in a case where the target processing comprises splicing the target video with a third file, after the receiving of the fifth input of the user on the playing interface, the method further comprises:
displaying a third splicing window, wherein the third splicing window is used to display the third file, the second splicing window comprises a fifth display area, the third splicing window comprises a sixth display area, and a background image displayed in the fifth display area is the same as a background image displayed in the sixth display area.
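Extending the splicing of claims 8 and 9 to a third file follows the same pattern as the pairwise case; the three content areas sit side by side, with the matching background regions between them. A hypothetical three-way sketch:

```python
import numpy as np

def stitch_three(first_frames, second_frames, third_frames):
    """Pair frames from three files one-to-one and join each triple
    side by side into one target video image."""
    return [np.hstack([a, b, c])
            for a, b, c in zip(first_frames, second_frames, third_frames)]

frames = lambda v: [np.full((2, 2, 3), v, dtype=np.uint8) for _ in range(3)]
out = stitch_three(frames(0), frames(7), frames(9))
print(len(out), out[0].shape)  # 3 (2, 6, 3)
```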
10. The method according to claim 2, wherein, after the displaying of the first shooting preview interface of the first file, the method further comprises:
receiving a sixth input of a user on the first shooting preview interface; and
determining image parameters of the first file in response to the sixth input.
11. The video generation method according to claim 3, wherein, before the receiving of the second input of the user on the second shooting preview interface, the method further comprises:
receiving a seventh input of a user on the second shooting preview interface; and
determining image parameters of the second file in response to the seventh input.
12. The method according to claim 2, wherein, after the displaying of the first shooting preview interface of the first file, the method further comprises:
receiving an eighth input of a user on the first shooting preview interface; and
updating a display ratio of the first preview area to the second preview area in response to the eighth input.
13. A video generating apparatus, the apparatus comprising:
a first display module, configured to display a first splicing window and a second splicing window, wherein the first splicing window is used to display a first file, the second splicing window is used to display a second file, the first file comprises at least one frame of first image, the second file comprises at least one frame of second image, the first splicing window comprises a first display area and a second display area, the second splicing window comprises a third display area and a fourth display area, and a background image displayed in the second display area is the same as a background image displayed in the third display area; and
a generation module, configured to generate a target video based on the image displayed in the first display area, the background image displayed in the second display area, and the image displayed in the fourth display area;
wherein the second display area and the third display area overlap, and the first display area and the fourth display area are located on two sides of the overlapping area, respectively.
14. The video generating apparatus according to claim 13, wherein the apparatus further comprises:
a first receiving module, configured to receive a first input of a user; and
a second display module, configured to display a first shooting preview interface of the first file in response to the first input;
wherein the first shooting preview interface comprises a first preview area and a second preview area, an image displayed in the first display area is the same as an image displayed in the first preview area, and an image displayed in the second display area is the same as an image displayed in the second preview area.
15. The video generating apparatus according to claim 13, wherein the apparatus further comprises:
a second receiving module, configured to receive, in a case where a second shooting preview interface of the second file is displayed, a second input of a user on the second shooting preview interface; and
a third display module, configured to display, in response to the second input, a second image captured by the camera in the second splicing window.
16. The video generating apparatus according to claim 13, wherein the apparatus further comprises:
a fourth display module, configured to display a play speed control in a case where the first splicing window plays the first file, wherein the play speed control is used to adjust the play speed of the first file.
17. The video generating apparatus according to claim 13, wherein the generation module comprises:
a splicing unit, configured to perform image stitching on the at least one frame of first image and the at least one frame of second image to obtain at least one frame of target video image; and
a generating unit, configured to generate the target video based on the at least one frame of target video image.
18. The video generating apparatus according to claim 17, wherein the splicing unit is specifically configured to:
perform image stitching on a first target image in the at least one frame of first image with each of at least two frames of second images, respectively; or
perform image stitching on a second target image in the at least one frame of second image with each of at least two frames of first images, respectively.
19. The video generating apparatus according to claim 17, wherein the generation module further comprises:
a display unit, configured to display the at least one frame of first image and the at least one frame of second image;
a first receiving unit, configured to receive a third input of a user on the at least one frame of first image;
a first determining unit, configured to determine a first starting image frame in response to the third input;
a second receiving unit, configured to receive a fourth input of a user on the at least one frame of second image; and
a second determining unit, configured to determine a second starting image frame in response to the fourth input;
wherein the splicing unit is specifically configured to:
perform image stitching on a first target image frame sequence in the at least one frame of first image and a second target image frame sequence in the at least one frame of second image, wherein a starting frame of the first target image frame sequence is the first starting image frame, and a starting frame of the second target image frame sequence is the second starting image frame.
20. An electronic device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video generation method according to any one of claims 1 to 12.
CN202111462332.8A 2021-12-02 2021-12-02 Video generation method and device and electronic equipment Active CN114025237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111462332.8A CN114025237B (en) 2021-12-02 2021-12-02 Video generation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN114025237A CN114025237A (en) 2022-02-08
CN114025237B true CN114025237B (en) 2024-06-14

Family

ID=80067598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111462332.8A Active CN114025237B (en) 2021-12-02 2021-12-02 Video generation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114025237B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114520877A (en) * 2022-02-10 2022-05-20 维沃移动通信有限公司 Video recording method and device and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113596574A (en) * 2021-07-30 2021-11-02 维沃移动通信有限公司 Video processing method, video processing apparatus, electronic device, and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395441A (en) * 2020-03-13 2021-09-14 华为技术有限公司 Image color retention method and device
CN111601033A (en) * 2020-04-27 2020-08-28 北京小米松果电子有限公司 Video processing method, device and storage medium
CN112565844B (en) * 2020-12-04 2023-05-12 维沃移动通信有限公司 Video communication method and device and electronic equipment

Also Published As

Publication number Publication date
CN114025237A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
WO2023174223A1 (en) Video recording method and apparatus, and electronic device
CN114520877A (en) Video recording method and device and electronic equipment
CN111669495B (en) Photographing method, photographing device and electronic equipment
CN114520876A (en) Time-delay shooting video recording method and device and electronic equipment
CN114125179A (en) Shooting method and device
CN113259743A (en) Video playing method and device and electronic equipment
CN113794835A (en) Video recording method and device and electronic equipment
CN111866379A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114025237B (en) Video generation method and device and electronic equipment
WO2023093669A1 (en) Video filming method and apparatus, and electronic device and storage medium
CN114500844A (en) Shooting method and device and electronic equipment
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN112784081A (en) Image display method and device and electronic equipment
CN112492205A (en) Image preview method and device and electronic equipment
CN115334242B (en) Video recording method, device, electronic equipment and medium
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114173178B (en) Video playing method, video playing device, electronic equipment and readable storage medium
CN112672059B (en) Shooting method and shooting device
CN114143455B (en) Shooting method and device and electronic equipment
CN115174812A (en) Video generation method, video generation device and electronic equipment
CN114745506A (en) Video processing method and electronic equipment
CN114491090A (en) Multimedia file generation method and device and electronic equipment
CN114173178A (en) Video playing method, video playing device, electronic equipment and readable storage medium
CN114866694A (en) Photographing method and photographing apparatus
CN117395462A (en) Method and device for generating media content, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant