CN113099287A - Video production method and device - Google Patents

Info

Publication number: CN113099287A
Authority: CN (China)
Prior art keywords: image, video, determining, frame, range
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110350554.4A
Other languages: Chinese (zh)
Inventors: 姜山, 邵帅
Current and original assignee: Shanghai Bilibili Technology Co Ltd (listed assignees may be inaccurate)
Application filed by Shanghai Bilibili Technology Co Ltd
Priority: CN202110350554.4A

Classifications

    • H04N21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/47202: End-user interface for requesting content, additional data or services on demand, e.g. video on demand
    • H04N21/485: End-user interface for client configuration
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects


Abstract

The application provides a video production method and a video production device. The video production method comprises the following steps: displaying, in a production interface provided by the browser, an image timeline created based on the images to be produced, wherein the image timeline contains a range selection control; determining the relative position of the range selection control in the image timeline in response to a moving operation on the range selection control; determining the selected image range according to that relative position; and previewing the animation effect corresponding to the images in the image range and synthesizing the corresponding video according to the preview result. Video can therefore be produced through the browser: the threshold for producing video is low, production time is shortened, and production efficiency is improved.

Description

Video production method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video production method. The application also relates to a video production apparatus, a computing device, and a computer-readable storage medium.
Background
With the rapid development of computer and image-processing technology, video is increasingly popular. In the prior art, producing a video requires downloading and installing a dedicated application (often paid software) and then learning its tutorials before the desired video can be produced. This approach thus requires additionally installing an application and mastering its use, and the whole production process is cumbersome, so the threshold for producing video is high, production takes a long time, efficiency is low, and the rapid production and delivery requirements of advertisers cannot be met.
Disclosure of Invention
In view of this, the present application provides a video production method. The application also relates to a video production device, a computing device, and a computer-readable storage medium, so as to solve the problem of low video production efficiency in the prior art.
According to a first aspect of the embodiments of the present application, there is provided a video production method applied in a browser, including:
displaying, in a production interface provided by the browser, an image timeline created based on the images to be produced, wherein the image timeline contains a range selection control;
determining the relative position of the range selection control in the image timeline in response to a moving operation on the range selection control;
determining the selected image range according to the relative position of the range selection control in the image timeline;
previewing the animation effect corresponding to the images in the image range, and synthesizing the corresponding video according to the preview result.
According to a second aspect of the embodiments of the present application, there is provided a video production apparatus, which is applied to a browser, and includes:
a display module configured to display, in a production interface provided by the browser, an image timeline created based on the images to be produced, the image timeline containing a range selection control;
a first determination module configured to determine a relative position of the range selection control in the image timeline in response to a movement operation of the range selection control;
a second determination module configured to determine a selected image range according to a relative position of the range selection control in the image timeline;
and the synthesis module is configured to preview the animation effect corresponding to the image in the image range and synthesize the corresponding video according to the preview result.
According to a third aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
display, in a production interface provided by the browser, an image timeline created based on the images to be produced, wherein the image timeline contains a range selection control;
determine the relative position of the range selection control in the image timeline in response to a moving operation on the range selection control;
determine the selected image range according to the relative position of the range selection control in the image timeline;
preview the animation effect corresponding to the images in the image range, and synthesize the corresponding video according to the preview result.
According to a fourth aspect of the embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of any of the video production methods described above.
According to the video production method, an image timeline created based on the images to be produced can be displayed in a production interface provided by a browser, the timeline containing a range selection control; the relative position of the control in the timeline is determined in response to a moving operation on it; the selected image range is determined from that relative position; and the animation effect corresponding to the images in the range is previewed and the corresponding video synthesized from the preview result. The application thus provides a browser-based intelligent video production method: a video can be produced with nothing more than a browser installed on the computer, with no separate application to download and install, so the production threshold is low, production time is shortened, and efficiency is improved. In addition, the desired image range can be customized through the image timeline and the animation effect of the images in that range previewed, so the video the user wants can be synthesized simply and efficiently, further improving production efficiency.
Drawings
Fig. 1 is a flowchart of a video production method according to an embodiment of the present application;
FIG. 2 is a diagram of an image timeline provided in an embodiment of the present application;
FIG. 3 is a first schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 4 is a diagram of a crop box provided by an embodiment of the present application;
FIG. 5 is a second schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 6 is a third schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 7 is a fourth schematic diagram of a production interface provided by a browser according to an embodiment of the present application;
FIG. 8 is a flow chart of another video production method provided by an embodiment of the present application;
FIG. 9 is a flow chart of yet another video production method provided by an embodiment of the present application;
fig. 10 is a schematic structural diagram of a video production apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
GIF (Graphics Interchange Format): a public image file format standard used to display indexed-color images in hypertext markup, widely used on the Internet and other online service systems. GIFs may be static or animated, use the extension .gif, and are a compressed bitmap format that supports transparent backgrounds and works across operating systems. Because GIF files are small, many small animations on the web are in GIF format. An animated GIF stores multiple images in a single file and plays the frames in sequence to form an animation; despite the animation, GIF remains an image file format.
Web technology: technology implemented in a browser; all operations are completed within the browser.
Canvas: the browser canvas, a technology the browser uses to render images and graphics.
FPS (frames per second): in imaging, the number of frames transmitted or displayed per second, i.e. the number of pictures per second in an animation or video (e.g. a video may be 30 FPS or 24 FPS). FPS measures the amount of information used to store and display motion video; the more frames per second, the smoother the displayed motion. Some computer video formats provide only 15 FPS. Film is played at 24 pictures per second, i.e. 24 still pictures are projected continuously on the screen within one second. In FPS, F stands for Frame, P for Per, and S for Second; film at 24 FPS is often simply called "24 frames".
In the present application, a video production method is provided, and the present application relates to a video production apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
Fig. 1 shows a flowchart of a video production method according to an embodiment of the present application, which is applied in a browser and specifically includes the following steps:
step 102: and according to the uploaded image to be made and a preset display rule, establishing a corresponding image time axis for the image to be made in a making interface provided by the browser.
In practice, producing a video has required downloading and installing a dedicated application; the user must learn and master its tutorials, and the whole production process is cumbersome. The production threshold is therefore high, production takes a long time, efficiency is low, and the rapid production and delivery requirements of advertisers cannot be met.
The application therefore provides a browser-based intelligent video production method: an image timeline created based on the images to be produced is displayed in a production interface provided by the browser, the timeline containing a range selection control; the relative position of the control in the timeline is determined in response to a moving operation on it; the selected image range is determined from that relative position; and the animation effect corresponding to the images in the range is previewed and the corresponding video synthesized from the preview result. Video can thus be produced with only a browser installed on the computer, without downloading and installing a separate application; the production threshold is low, production time is shortened, and efficiency is improved.
Specifically, the images to be produced are the images subsequently used to synthesize the video; their number depends on how many images the user uploads, and different images may come from different sources. The preset display rule is an ordering rule for displaying images from different sources; for example, it may display images extracted from a video first, then static images, and finally images obtained from existing GIF files (animated or static).
In addition, the production interface provided by the browser is an interface in the browser for producing video, i.e. a video production interface, which may contain the various controls required for production. The image timeline displays the images to be produced as thumbnails in sequence; each unit of time on the timeline corresponds to one image, for example one image per 200 milliseconds, per 400 milliseconds, or per second.
In an optional implementation of this embodiment, creating the image timeline in the production interface provided by the browser, from the uploaded images to be produced and the preset display rule, may proceed as follows:
determining the display order of the images to be produced according to the preset display rule;
determining the total number of images to be produced, and determining their display size from that total;
generating a thumbnail for each image to be produced based on the display size;
displaying the thumbnails in timeline form in the display order to generate the image timeline.
It should be noted that after the uploaded images to be produced are acquired, the image timeline can be created from them, with each image's thumbnail displayed on it. That is, the thumbnail size is determined from the total number of images, and the thumbnails are created and displayed at that size to obtain the image timeline.
When the image timeline is first created, an appropriate zoom size is chosen intelligently from the total number of images to be produced (i.e. the total number of frames on the timeline), providing a better user experience.
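The timeline-creation steps above can be sketched as follows. This is a minimal illustration only: the width thresholds and function names are assumptions for the sketch, not values taken from the patent.

```javascript
// Hypothetical rule: pick a thumbnail width from the total frame count
// so the timeline stays readable (thresholds are illustrative).
function thumbnailWidth(totalFrames) {
  if (totalFrames <= 30) return 80;  // few frames: large thumbnails
  if (totalFrames <= 120) return 48; // medium
  return 24;                         // many frames: compact strip
}

// Build the timeline model: one entry per image, in display order,
// all sharing the size derived from the total count.
function buildTimeline(images) {
  const width = thumbnailWidth(images.length);
  return images.map((src, index) => ({ index, src, width }));
}
```

For example, `buildTimeline` over three images yields three entries at the largest thumbnail size, while a 200-image upload would fall back to the compact size.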
In an optional implementation of this embodiment, before the image timeline is created, the data source, i.e. the images to be produced that the synthesized video requires, must first be acquired. Thus, before creating the image timeline for the uploaded images in the production interface provided by the browser, the method further includes:
acquiring the uploaded images to be produced when a data selection instruction is received.
Specifically, the data selection instruction is an instruction triggered by the user through an upload control in the production interface provided by the browser, and is used to acquire the images to be produced uploaded by the user.
In an optional implementation of this embodiment, acquiring the uploaded images to be produced when the data selection instruction is received includes at least one of:
when a first data selection instruction is received, acquiring the target video corresponding to it and extracting the images to be produced from the target video;
when a second data selection instruction is received, acquiring the target image corresponding to it and determining the target image as an image to be produced;
when a third data selection instruction is received, acquiring the GIF image corresponding to it and determining the GIF image as an image to be produced.
Specifically, the first data selection instruction, the second data selection instruction, and the third data selection instruction may be instructions triggered by different uploading controls, where the first data selection instruction is used to upload a video, the second data selection instruction is used to upload a still picture, and the third data selection instruction is used to upload an existing GIF image (including a still GIF image and a GIF animation).
In addition, the three instructions may also be triggered by a single upload control: when it is triggered, the type of the file selected by the user is determined. If the file is in a video format, at least one video frame is extracted from it as the images to be produced; if it is in a picture format, the file itself is taken as the image to be produced; and if it is in GIF format, each frame of the file is taken as an image to be produced.
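The per-file-type dispatch just described can be sketched as below. The function names and the use of MIME type strings are assumptions for illustration; the patent does not specify how the file type is detected.

```javascript
// Classify an upload by its MIME type into the three cases in the text.
function classifyUpload(mimeType) {
  if (mimeType.startsWith("video/")) return "video";
  if (mimeType === "image/gif") return "gif"; // check GIF before generic image/
  if (mimeType.startsWith("image/")) return "image";
  return "unsupported";
}

// Map each classification to the handling described in the text
// (real handlers would extract frames / decode GIFs; stubs here).
function handleUpload(mimeType) {
  switch (classifyUpload(mimeType)) {
    case "video": return "extract at least one video frame";
    case "gif":   return "use every frame of the GIF";
    case "image": return "use the file directly";
    default:      return "reject the file";
  }
}
```

Note the GIF case must be checked before the generic `image/` prefix, since `image/gif` matches both.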
It should be noted that, after the user triggers the upload control and selects the corresponding file, the image in the selected file may be displayed in the image preview area, and then the upload control may be triggered again to select the file for upload, where the file types selected in the previous and subsequent two times may be the same or different, and when the confirmation upload control in the production interface provided by the browser is triggered, all the images selected by the user may be used as the images to be produced to create the corresponding image timeline.
Illustratively, a video uploading control, a static image uploading control and a GIF animation uploading control are arranged in a production interface provided by a browser, when a user clicks the video uploading control, the user can select a video file to upload, extract a corresponding video frame from the video file, and display the video frame in an image preview area; when a user clicks the static image uploading control, an image file can be selected for uploading, and the uploaded image is displayed in the image preview area; when the user clicks the GIF animation uploading control, the GIF file can be selected to be uploaded, and each frame of image in the uploaded GIF animation is displayed in the image preview area. And after the user clicks a confirmation uploading control in a production interface provided by the browser, generating a corresponding image time axis.
The application supports a variety of data sources for producing video: a new video can be synthesized from existing video resources, existing GIF animations, and static images. The user may select only one of these (a video, a GIF animation, or static images) as the data source, or may, after selecting a video, additionally add a GIF animation or static images to the timeline, so that images from different data sources are combined into the final synthesized video. When producing a video, data-source selection is therefore flexible and efficient, and the demand for rich GIF material is met.
In an optional implementation manner of this embodiment, if the file selected by the user to be uploaded is a video, at least one video frame needs to be extracted from the video uploaded by the user to be used as an image to be produced. In a specific implementation, the extraction of the image to be produced from the target video may be as follows:
determining the corresponding extraction fineness according to the playback frame rate (frames per second) of the target video;
determining the per-frame duration of the video frames to be extracted according to the extraction fineness;
extracting the images to be produced from the target video according to the per-frame duration.
Specifically, the extraction fineness is the number of frames expected to be acquired per second; from it the per-frame duration can be calculated, so that one video frame is extracted at each interval of that duration. In other words, the per-frame duration indicates how many seconds apart video frames are extracted. In practice, the extraction fineness can be determined from the playback frame rate of the target video.
In addition, different extraction finenesses can be used for target videos of different durations. For a longer video, the fineness can be made coarser, i.e. the per-frame duration longer (frames extracted at longer intervals), to limit the number of extracted frames; for a shorter video, the fineness can be made finer, i.e. the per-frame duration shorter, to ensure enough frames are extracted.
For example, if the target video plays at 24 frames per second, the extraction fineness may match it, i.e. frames are extracted at 24 per second, and the per-frame duration is 1000 ms / 24 frames. Alternatively, frames may be extracted at 30, 15, or 10 frames per second (FPS), depending on the duration of the target video and the requirements.
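The arithmetic above can be written out directly. The per-frame duration follows from the text (1000 ms divided by the extraction rate); the duration thresholds in `pickExtractionFps` are assumptions chosen only to illustrate the coarser-for-longer rule, not values from the patent.

```javascript
// Per-frame duration in milliseconds for a given extraction rate,
// e.g. 24 fps -> 1000/24 ≈ 41.7 ms between extracted frames.
function frameDurationMs(fps) {
  return 1000 / fps;
}

// Illustrative rule: longer videos get a coarser extraction rate to
// cap the number of extracted frames (thresholds are assumptions).
function pickExtractionFps(videoSeconds) {
  if (videoSeconds <= 10) return 24; // short clip: fine extraction
  if (videoSeconds <= 60) return 15; // medium
  return 10;                         // long video: coarse extraction
}
```

A 5-second clip would then be sampled every ~41.7 ms, while a 5-minute video would be sampled only every 100 ms.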
In an optional implementation of this embodiment, the images to be produced are extracted from the target video according to the per-frame duration; a specific implementation may be as follows:
playing the target video in real time, and calculating the time point of the next video frame in the target video;
jumping the playback progress of the target video to that time point, pausing the video, and acquiring the current video frame;
rendering the current video frame onto a browser canvas, converting the frame on the canvas into an image for storage to obtain an image to be produced, and returning to the step of calculating the time point of the next video frame until the target video finishes playing.
It should be noted that after the user uploads a target video in the production interface provided by the browser, it can be played in real time through the browser's video tag. During playback, the time point of the next frame to capture is calculated from the computed per-frame duration; the video tag jumps to that time point, playback is immediately paused, the paused picture is rendered onto a browser canvas, and the frame on the canvas is converted into image data stored in memory. These steps repeat until all the desired video frames have been extracted, yielding the images to be produced uploaded by the user.
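The seek-and-capture loop just described can be sketched as below. The function names are illustrative; `seekAndCapture` assumes a DOM `<video>` element and `<canvas>` and therefore only runs in a browser, so it is defined here but never invoked. The timestamp computation itself is plain arithmetic.

```javascript
// Timestamps (in seconds) at which frames should be captured, one
// every 1/fps seconds until the end of the video.
function frameTimestamps(durationSec, fps) {
  const step = 1 / fps;
  const times = [];
  for (let t = 0; t < durationSec; t += step) times.push(t);
  return times;
}

// Browser-only sketch: seek the <video> to each timestamp, wait for
// the "seeked" event, then draw the paused frame onto the canvas and
// keep it in memory as image data.
async function seekAndCapture(video, canvas, fps) {
  const ctx = canvas.getContext("2d");
  const frames = [];
  for (const t of frameTimestamps(video.duration, fps)) {
    video.currentTime = t; // jump playback to the frame's time point
    await new Promise((resolve) =>
      video.addEventListener("seeked", resolve, { once: true })
    );
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    frames.push(canvas.toDataURL("image/png")); // store in memory
  }
  return frames;
}
```

Waiting on the `seeked` event before drawing matters: `currentTime` assignment is asynchronous, and drawing immediately would capture the old frame.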
In an optional implementation of this embodiment, after the image timeline is created it can be further scaled; that is, after creating the image timeline for the uploaded images in the production interface provided by the browser, the method further includes:
when a zoom instruction for the image timeline is received, zooming the thumbnails displayed on it according to the zoom parameter carried by the instruction.
Specifically, the zoom instruction is triggered by a preset zoom operation and is used to enlarge or shrink the thumbnails displayed on the image timeline; the preset zoom operation may be scrolling the mouse wheel, clicking a zoom control, and so on. The zoom parameter is the magnitude of enlargement or reduction, carried by the instruction when it is triggered. For example, if the user places the mouse over the image timeline and scrolls the wheel up three notches, a zoom instruction for the timeline is received whose zoom parameter is a 30% enlargement. Alternatively, enlarge and shrink controls may be placed below the generated timeline: one click on the enlarge control enlarges by 10%, and one click on the shrink control shrinks by 10%.
For example, fig. 2 is a schematic diagram of an image timeline provided in an embodiment of the present application, and as shown in fig. 2, the image timeline is displayed with different scales (i.e., a pre-zoom image timeline and a post-zoom image timeline).
The image time axis in the present application also provides zoom-in and zoom-out functions, so that the user can clearly view the thumbnail corresponding to each image even when the number of frames is very large or very small.
In an optional implementation manner of this embodiment, the currently selected image may be rendered to a canvas area in the production interface provided by the browser for display and operation. That is, after creating a corresponding image time axis for the image to be produced in the production interface provided by the browser according to the uploaded image to be produced and the preset display rule, the method further includes:
determining a selected target image on the image time axis, wherein each unit time on the image time axis corresponds to one image;
and rendering the target image to a canvas area in a production interface provided by the browser according to the size of the target image.
It should be noted that the currently selected target image may be rendered to a canvas area in a production interface provided by the browser, so as to facilitate a user to clearly preview the image of each frame, and facilitate a subsequent user to perform operation editing on the image of each frame.
In an optional implementation manner of this embodiment, determining the selected target image on the image time axis includes:
determining an image to be produced selected by a selection operation as the target image when the selection operation for the image to be produced on the image time axis is received;
and under the condition that the selection operation of the image to be produced on the image time axis is not received, determining the first frame image on the image time axis as the target image.
It should be noted that, if the user selects an image on the image time axis, the image selected by the user may be used as the target image, and the target image is subsequently rendered to the canvas area in the production interface provided by the browser; that is, the image selected by the user is displayed in the canvas area for the user to preview and operate on. If the user has not selected an image on the image time axis, for example at initialization when the image time axis has just been generated and the user has not yet selected any image, the first frame image on the image time axis can be determined as the target image; that is, the first frame image on the image time axis is displayed in the canvas area in the production interface provided by the browser for the user to preview and operate on.
In an optional implementation manner of this embodiment, the rendering of the target image to a canvas area in a production interface provided by the browser according to the size of the target image may be implemented as follows:
determining the width and height of the target image;
determining a width and a height of the canvas area;
scaling the target image to the canvas area.
It should be noted that, in the present application, after the corresponding image time axis is created from the acquired images to be produced, the current frame can be rendered to a canvas area on the page in real time, and the canvas tool in the browser can automatically scale the image to the horizontal or vertical edge of the canvas according to the image's original proportions, so that the user can edit each frame of the image as a whole.
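The three steps above amount to a standard aspect-fit computation. The following sketch shows one way to derive the drawing size; the function name and the centering offsets are assumptions, since the patent only states that the image is scaled to the canvas edge at its original proportion.

```javascript
// Scale a target image to fit a canvas area while keeping the image's
// original aspect ratio (the "aspect fit" behavior described above).
// Returns the drawing size and the offsets that center the image.
function aspectFit(imageW, imageH, canvasW, canvasH) {
  const scale = Math.min(canvasW / imageW, canvasH / imageH);
  const drawW = imageW * scale;
  const drawH = imageH * scale;
  return {
    width: drawW,
    height: drawH,
    offsetX: (canvasW - drawW) / 2, // horizontal centering
    offsetY: (canvasH - drawH) / 2, // vertical centering
  };
}
```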
In an optional implementation manner of this embodiment, after creating, according to the uploaded image to be produced and the preset display rule, a corresponding image timeline for the image to be produced in a production interface provided by the browser, the method further includes:
and in the case of receiving a deletion operation, deleting the image indicated by the deletion operation in the image time axis.
It should be noted that, after the corresponding image time axis is created, a deletion control is arranged above each image on the image time axis, and when the deletion control on an image is detected to be triggered, that image is deleted. In this way, when consecutive frames are too similar, the user can move the mouse over an unnecessary frame and click the deletion control to delete the useless frame.
For example, fig. 3 is a first schematic view of a production interface provided by the browser according to an embodiment of the present application. As shown in fig. 3, a cross mark is displayed on each image thumbnail on the image time axis; the cross mark is the deletion control, and the user can delete the corresponding frame by clicking it.
In an optional implementation manner of this embodiment, after generating the image timeline, a user may select a certain image in the image timeline to perform editing processing, that is, after creating a corresponding image timeline for the image to be made in a making interface provided by the browser according to the uploaded image to be made and a preset display rule, the method further includes:
and receiving an operation instruction of a first image in the image to be made, and generating a corresponding composite image.
Specifically, the first image is a selected image to be created, which is to be edited, the composite image is an image generated by editing the first image, and the operation instruction is an instruction for performing an editing operation on the image to be created (i.e., the first image) rendered to a canvas area in a creation interface provided by the browser, where the operation instruction may be an operation such as a clipping operation, an operation for adding a special effect, and an operation for adding a character.
In an optional implementation manner of this embodiment, the operation instruction may be a clipping instruction, that is, the operation instruction may be to clip an image to be produced (that is, a first image) rendered in a canvas area to obtain a corresponding composite image, and if the image to be produced in the canvas area is to be clipped, it is necessary to display a clipping frame in the canvas area, that is, after the target image is rendered in the canvas area in a production interface provided by the browser according to the size of the target image, the method further includes:
displaying a cropping frame in the canvas area, the cropping frame being located within the image area of the target image, the cropping frame having an area no greater than the target image;
and receiving control operation aiming at the cropping frame, and generating a synthetic image corresponding to the target image according to the control operation.
It should be noted that, the user can select the image size and content of each frame of image of the generated video by using the dragging, zooming and reducing functions of the cropping frame, and the browser can intelligently limit the cropping area to be only within the range of the original target image according to the size of the target image and the dragging position of the user, so as to avoid that the generated video has blank areas due to the misoperation of the user.
For example, fig. 4 is a schematic diagram of a cropping frame according to an embodiment of the present application. As shown in fig. 4, the cropping frame is located within the actual area of the image and is constrained to that area, so that it cannot be dragged to an area outside the image.
In practical applications, in one possible implementation, displaying the cropping box in the canvas area includes:
determining the target image as a bottom image of the canvas area;
adding a cropping frame on the bottom layer image, wherein the cropping frame and the target image are positioned on different layers;
and adding a masking layer between the bottom layer image and the cropping frame, wherein the masking layer is used for distinguishing selected areas and unselected areas of the cropping frame.
It should be noted that the currently selected image (i.e., the target image) may be rendered on the canvas area (canvas) in real time as a base map, and a cropping frame may be added on the base map, the two being rendered on the canvas area as different layers. The size and position of the cropping frame can be dragged proportionally. A semi-transparent masking layer may further be disposed between the base map and the cropping frame to distinguish the selected area from the unselected area of the cropping frame; the unselected area may be rendered in a semi-transparent dark color.
In one possible implementation manner, receiving a control operation for the cropping frame, and generating a synthetic image corresponding to the target image according to the control operation includes:
determining the position of the cropping frame according to the control operation for the cropping frame;
determining a positional parameter of the cropping frame relative to the underlying image;
acquiring image data in the cropping frame in the target image according to the position parameter;
and determining the acquired image data as the composite image, displaying the composite image in the cropping frame, and displaying the composite image in a cropping preview area in a production interface provided by the browser.
Specifically, the position parameters required for copying the cropping frame may include: left (pixel distance to the left of the target image), top (pixel distance to the top of the target image), width (cropping width), height (cropping height). In practical application, left is equal to the distance between the clipping box and the left side of the canvas area minus the distance between the bottom layer image and the left side of the canvas area; top equals the distance of the cropping box from the top of the canvas area minus the distance of the bottom layer image from the top of the canvas area; the width is equal to the zoomed width of the clipping frame; height is equal to the scaled height of the cropping box. The image data on the bottom layer image corresponding to the clipping frame area can be determined through the parameters, so that the image data in the clipping frame is copied and displayed on the clipping frame, and is displayed in a clipping preview area in a production interface provided by a browser.
It should be noted that the position of the cropping frame may be obtained as the mouse is dragged in real time, and the position of the cropping frame in the canvas area relative to the underlying image is calculated. The image data at the corresponding position of the target image on the underlying image is copied out and pasted into the cropping frame area, achieving the brightness contrast between the selected area and the unselected area. At the same time, the image data copied out of the cropping frame may be placed separately in the cropping preview area in the production interface provided by the browser, so that the user can preview whether the finally cropped image meets the requirements.
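Under the formulas given above (left and top as pixel distances from the underlying image, width and height as the scaled crop size), the parameter computation can be sketched as follows. The `displayScale` argument, which maps canvas pixels back to original-image pixels when the image has been scaled onto the canvas, is an added assumption, as are the names.

```javascript
// Compute the crop parameters of the cropping frame relative to the
// underlying (bottom-layer) image, per the formulas above:
//   left = crop-frame distance from canvas left  - image distance from canvas left
//   top  = crop-frame distance from canvas top   - image distance from canvas top
//   width / height = the (scaled) size of the cropping frame
function cropParams(cropBox, imagePos, displayScale = 1) {
  return {
    left: (cropBox.left - imagePos.left) / displayScale,
    top: (cropBox.top - imagePos.top) / displayScale,
    width: cropBox.width / displayScale,
    height: cropBox.height / displayScale,
  };
}
```

The returned parameters identify the region of the underlying image whose data is copied into the cropping frame and into the crop preview area.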
In an optional implementation manner of this embodiment, each operation on the first image in the image to be produced may be further recorded, so as to facilitate subsequent further processing, that is, receiving a control operation on the cropping frame, and after generating the synthetic image corresponding to the target image according to the control operation, the method further includes:
storing the image data after the control operation;
determining an operation type of the control operation;
adding the control operation and the corresponding operation type into an operation list;
and setting an instruction index corresponding to the newly added control operation according to the instruction index of the control operation included in the operation list.
In an optional implementation manner of this embodiment, after receiving a control operation for the cropping frame and generating a synthetic image corresponding to the target image according to the control operation, the method further includes:
and under the condition of receiving a withdrawal instruction, withdrawing the operation corresponding to the current control operation, and restoring the synthetic image displayed in the canvas area in the production interface provided by the browser to the state before the operation.
In an optional embodiment, canceling an operation corresponding to a current control operation, and restoring a composite image displayed in a canvas area in a production interface provided by the browser to a state before the operation includes:
determining the operation type of the current control operation under the condition of receiving the withdrawal instruction;
determining a target control operation of the operation type in the operation list;
acquiring image data corresponding to the target control operation, and updating the instruction index of the current control operation into the instruction index of the target control operation;
and restoring the canvas area in the production interface provided by the browser into the image corresponding to the image data.
It should be noted that, in order to guard against user misoperation, each operation instruction on the image to be produced may be recorded, so as to implement quick undo of the previous operation and quick redo of an undone operation. That is, each time the user drags or zooms the cropping frame, the corresponding operation instruction is recorded, together with the image data after the operation, for subsequent image restoration. In particular, operations that can be undone/redone include changes to the size and position of the cropping frame, target image deletion operations, operations of selecting an image range, and the like. The user can click the undo control on the production interface provided by the browser to undo/redo, or perform the undo more conveniently through keyboard shortcuts (which can be consistent with the undo/redo shortcuts of other common applications, reducing the cost for the user of remembering the shortcuts).
In practical application, all effective operation instructions can be recorded through an operation list, each operation instruction has a corresponding operation type, so that when the operation is cancelled, the operation of the previous same type (such as the operation of changing the size and the position of the cropping frame) can be found, an instruction index can point to the current operation instruction, and the instruction index can point to any operation instruction in the operation list.
For example, suppose there are currently 3 operation instructions [a1, b1, a2], where a1 and a2 are operation instructions of the same type and b1 is an operation instruction of another type. When the user performs a new operation, say b2, the latest data related to b2 is appended to the tail of the operation list, giving [a1, b1, a2, b2]; at this point the instruction index of b2 is 3 (indexes start at 0, so the fourth instruction has index 3). When the user clicks undo, the instruction index of the previous operation instruction of the same type is found (that is, instruction index 1, corresponding to b1), the image data corresponding to that instruction index is obtained and used for restoration, and the instruction index of the current operation instruction is updated to 1, thereby completing one undo operation.
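The worked example above can be sketched as a small operation-history structure; the class and field names are illustrative, not the patent's implementation.

```javascript
// Operation list with an instruction index, as described above: every
// effective operation is appended with its type and the image data after the
// operation; undo jumps the index back to the previous operation of the same
// type and returns that operation's image data for restoring the canvas.
class OperationHistory {
  constructor() {
    this.ops = [];   // entries of the form { type, imageData }
    this.index = -1; // instruction index of the current operation
  }
  record(type, imageData) {
    this.ops.push({ type, imageData });
    this.index = this.ops.length - 1;
  }
  undo() {
    const current = this.ops[this.index];
    for (let i = this.index - 1; i >= 0; i--) {
      if (this.ops[i].type === current.type) {
        this.index = i;               // point the index at the older operation
        return this.ops[i].imageData; // image data to restore
      }
    }
    return null; // no earlier operation of this type to restore
  }
}
```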
In addition, the present application provides the capability of deleting unnecessary target image frames; after the corresponding images are deleted, the remaining images are updated to new indexes, and the rendered image in the canvas area is updated synchronously.
In an optional implementation manner of this embodiment, after rendering the target image to a canvas area in a production interface provided by the browser according to the size of the target image, the method further includes:
and when a switching operation is received, rendering an image indicated by the switching operation to a canvas area in a production interface provided by the browser.
It should be noted that the image rendered in the current canvas area may be switched through a switching control provided in the production interface provided by the browser, where the switching control may be a previous frame/next frame control, and the switched image is re-rendered to the canvas area in the production interface provided by the browser for the user to browse.
In addition, an image time axis is generated, after a current frame is rendered to a canvas area on a page, a user can perform editing operation such as cutting operation on an image to be made displayed in the canvas area, and in order to facilitate the user to quickly preview the effect of a subsequently generated video, a preview function can be provided in the application.
Step 104: displaying an image time axis created based on an image to be manufactured in a manufacturing interface provided by the browser, wherein the image time axis comprises a range selection control; in response to a moving operation of the range selection control, determining a relative position of the range selection control in the image timeline.
Specifically, on the basis of establishing a corresponding image time axis for the image to be manufactured in a manufacturing interface provided by a browser according to the uploaded image to be manufactured and a preset display rule, further, displaying an image time axis established based on the image to be manufactured in the manufacturing interface provided by the browser, wherein the image time axis comprises a range selection control; and determining the relative position of the range selection control in the image time axis in response to the moving operation of the range selection control. The range selection control is a control for selecting an image range, which is arranged on an image time axis in a production interface provided by the browser.
In an optional implementation manner of this embodiment, determining the relative position of the range selection control in the image timeline in response to the moving operation of the range selection control includes:
determining a first distance of the range selection control relative to a first boundary of a production interface provided by the browser, a scrolling distance of an image timeline, and a second distance of a region boundary of the image timeline relative to the first boundary of the production interface provided by the browser;
determining a relative position of the range selection control in the image timeline according to the first distance, the scroll distance, and the second distance.
Specifically, the first boundary of the production interface provided by the browser may be a left side boundary with reference to a horizontal direction, and accordingly, a region boundary of the image time axis is a left end of the image time axis, a scroll distance of the image time axis is a distance of an image that is not displayed before a currently displayed image along the scroll direction of the image time axis, that is, the scroll distance of the image time axis refers to a distance of forward scrolling of the image. It should be noted that, although the above-described calculation method is a calculation method in which the image time axis is scrolled leftward from the left side as the starting point, it is needless to say that, in practical applications, the image time axis may be scrolled rightward from the right side as the starting point, and in this case, the first boundary of the creation interface provided by the browser may be a right side boundary with reference to the horizontal direction, and accordingly, the region boundary of the image time axis is the right end of the image time axis.
Step 106: and determining the selected image range according to the relative position of the range selection control in the image time axis.
Specifically, on the basis of determining the relative position of the range selection control in the image time axis in response to the movement operation of the range selection control, further, the selected image range is determined according to the relative position of the range selection control in the image time axis.
It should be noted that one or two range selection controls may be provided, and when two range selection controls are provided, the operation in step 104 may be performed for each range selection control, the relative positions of the two range selection controls in the image time axis are respectively determined, and then the selected image range may be determined according to the determined relative positions of the two range selection controls in the image time axis.
In particular implementations, the range selection control may include a first selection control and a second selection control; determining a first relative position of the first selection control in the graphical timeline and a second relative position of the second selection control in the graphical timeline; determining an image to be produced corresponding to the first relative position as a starting image, and determining an image to be produced corresponding to the second relative position as an ending image; determining a range between the start image and the end image as the selected image range.
It should be noted that after the images to be made uploaded by the user are acquired, each image to be made is converted into a local picture link and displayed on the image time axis in the form of a picture thumbnail, and each block (i.e., each unit time) on the image time axis is each frame of image. The image time axis can be provided with two selection controls, a user can select the image range required by the user by dragging the two selection controls, one selection control is used for indicating the starting position of the selected image range, and the other selection control is used for indicating the ending position of the selected image range.
Along the above example, as shown in fig. 3, two range selection controls are further arranged on the image time axis, and an image between the two range selection controls is an image range selected by the user.
In one possible implementation, determining a first relative position of the first selection control in the image timeline includes:
determining a first distance of the first selection control relative to a first boundary of a production interface provided by the browser;
determining a scroll distance of the image timeline;
determining a second distance of a region boundary of the image timeline relative to a first boundary of a production interface provided by the browser;
determining a first relative position of the first selection control in the image timeline according to the first distance, the scrolled distance, and the second distance.
In practical application, after the user drags the first selection control, the browser can calculate in real time the position of the first selection control relative to the production interface provided by the browser, its position relative to the image time axis, and the length of the time axis, so as to calculate the first relative position of the first selection control in the image time axis (that is, the moving distance of the first selection control relative to the leftmost side of the scroll area of the time axis). Specifically, the moving distance d of the first selection control relative to the starting position of the image time axis is equal to the first distance d3 of the first selection control relative to the left side of the production interface provided by the browser, plus the scrolling distance d2 of the image time axis, minus the second distance d1 of the left end of the area of the image time axis relative to the left side of the production interface provided by the browser, that is, d = d3 + d2 - d1.
For example, fig. 5 is a second schematic diagram of a production interface provided by the browser according to an embodiment of the present application. As shown in fig. 5, d1 is the second distance from the left end of the region of the image time axis to the left side of the production interface provided by the browser, d2 is the scrolling distance of the image time axis, and d3 is the first distance from the first selection control to the left side of the production interface provided by the browser. As shown in fig. 5, if the length of the image time axis is 10 s and the production interface provided by the browser displays only the 0-800 ms portion, the user can display images not currently shown by scrolling the image time axis; after the image time axis has been scrolled, the time at the leftmost end of the image time axis displayed in the production interface no longer corresponds to the start time of 0 s.
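The formula d = d3 + d2 - d1 from the example above is direct to express in code; the function name is illustrative.

```javascript
// Distance of a selection control from the start of the image time axis:
//   d3 - distance of the control from the left edge of the production interface
//   d2 - scrolled distance of the image time axis
//   d1 - distance of the timeline region's left end from the interface's left edge
function controlOffsetOnTimeline(d3, d2, d1) {
  return d3 + d2 - d1;
}
```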
It should be noted that a specific implementation process for determining the second relative position of the second selection control in the image timeline is the same as the above-mentioned specific implementation process for determining the first relative position of the first selection control in the image timeline, and details are not repeated herein. By the method, the relative positions of the two selection controls in the image time axis can be respectively calculated, and the calculated relative positions can be stored so as to be convenient for direct reference in the next dragging.
In an optional implementation manner of this embodiment, the specific implementation process of determining the selected image range according to the relative position of the range selection control in the image time axis may be as follows:
mapping the relative position of the range selection control in the image time axis to the image time axis, and determining a corresponding frame index;
and determining the selected image range according to the frame index.
In practical application, for a case that the range selection control includes a first selection control and a second selection control, determining the image to be produced corresponding to the first relative position as a start image may be: mapping the first relative position to the image time axis, and determining a corresponding starting frame index; determining a corresponding image to be manufactured according to the starting frame index; and rendering the image to be manufactured to a canvas area in a manufacturing interface provided by the browser.
It should be noted that the first relative position may be mapped to a corresponding start frame index on the image time axis, and then the image to be produced corresponding to the start frame index is rendered to a canvas area in a production interface provided by the browser. Similarly, an end frame index corresponding to the second relative position may be determined, so that the image to be produced corresponding to the end frame index may be rendered to a canvas area in a production interface provided by the browser. In a specific implementation, when each image is displayed in the image time axis, each image occupies a certain width of the image time axis, and based on this, the first relative position may be divided by the width occupied by each image, and then the whole is taken, so that the corresponding starting frame index may be obtained.
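The mapping described above, dividing the relative position by the width each image occupies and rounding down, can be sketched as follows; clamping the result to a valid index is an added assumption.

```javascript
// Map a relative position on the image time axis to a frame index: divide by
// the width each thumbnail occupies and take the integer part, as described
// above. The clamp keeps positions past the last thumbnail on a valid frame.
function frameIndexAt(relativePos, frameWidth, frameCount) {
  const index = Math.floor(relativePos / frameWidth);
  return Math.max(0, Math.min(index, frameCount - 1));
}
```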
In addition, the determined start frame index and the end frame index in the image range can be synchronously updated to the canvas area, and the canvas area can render the image in the selected area in the image time axis through the new index.
In an optional implementation manner of this embodiment, the selected image range is determined according to the relative position of the range selection control in the image time axis, and a specific implementation process may also be as follows:
under the condition that the selection operation aiming at the target image to be manufactured is received, the range selection control is moved to the position corresponding to the target image to be manufactured;
under the condition of receiving a range selection operation, selecting an image in a preset range after the target image to be manufactured is selected by taking the target image to be manufactured as a starting image;
and determining the selected image as the selected image range.
It should be noted that the preset range is a preset image selection range, and the preset range may be 5 seconds, 10 seconds, 15 seconds, and the like. The range selection control may also include only one selection control. The user can move the range selection control to the position corresponding to a target image to be produced by clicking that target image on the image time axis, and then, by clicking the capture control, automatically select the images within the subsequent preset range, starting from the selected target image, as the image range selected by the user; if fewer images than the preset range remain after the selected target image, the selection extends only to the last image.
That is to say, when the image time axis is in an unselected state, the user can click any image to quickly jump to that frame position; the range selection control is moved to that position synchronously, and the canvas area above the image time axis also updates to the frame image clicked by the user in real time. After the user clicks the capture control below the image time axis, the target image frame clicked by the user is used as the initial position, and the images within the following preset range (a 5-second length range) are selected by default. In addition, after the images within the preset range are automatically selected starting from the target image, the user can fine-tune the selected image range by dragging the range selection control.
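The single-control selection described above can be sketched as follows. The `fps` parameter, used to convert the preset range in seconds into a frame count, and the function name are assumptions.

```javascript
// Select the clicked target frame plus the following preset range (e.g. a
// 5-second length range), stopping at the last image when fewer frames
// remain, as described above. Returns an inclusive [start, end] frame range.
function selectPresetRange(startIndex, presetSeconds, fps, frameCount) {
  const span = Math.round(presetSeconds * fps); // frames in the preset range
  const end = Math.min(startIndex + span - 1, frameCount - 1);
  return { start: startIndex, end };
}
```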
For example, fig. 6 is a schematic diagram of a production interface provided by a third browser according to an embodiment of the present application, and as shown in fig. 6, when the image timeline is in an unselected state, a user may click a certain target image on the image timeline (that is, click a certain image to jump to the certain frame), the canvas may synchronously render a frame corresponding to the target image, and the progress bar is moved to a corresponding position.
In an optional implementation manner of this embodiment, after determining the selected image range according to the movement information of the range selection control, the method further includes:
identifying the selected image range.
It should be noted that the image range selected by the user may be identified, so that the user can clearly see the image range selected by the user. In specific implementation, the image range selected by the user can be highlighted, and the transparency of the image in the image range selected by the user can be changed to be distinguished from the unselected image. Of course, other identification manners may be used in practical applications, and the present application does not limit this.
In an optional implementation manner of this embodiment, after determining the selected image range according to the relative position of the range selection control in the image time axis, the method further includes:
and in a case where a processing operation for the selected image range is received, determining an updated image range according to the relative position of the range selection control in the image timeline and the processing operation.
Specifically, the processing operation for the image range refers to an operation performed on all the images included in the selected image range as a whole. The processing operation may be an operation that updates the selected image range, such as dragging the whole selection frame, or an operation that deletes all the images included in the range. The updated image range refers to the image range finally selected by the user; the required video is synthesized based on the images in the updated range.
In an optional implementation manner of this embodiment, the updated image range is determined according to the relative position of the range selection control in the image timeline and the processing operation, and a specific implementation process may be as follows:
receiving a moving operation of a selection frame corresponding to the selected image range;
determining an update start image and an update end image according to the position of the moved selection frame;
and determining the updated image range according to the update start image and the update end image.
It should be noted that, after the user selects the image range, a selection frame is formed (the selection frame contains the images in the initial range). The user can drag the selection frame (i.e., the selected image range) and adjust the positions of the start image and the end image by adjusting the position of the frame, thereby updating the image range selected by the user.
In a specific implementation, after the selected image range is determined according to the relative position of the range selection control in the image timeline, the frame index of the start image and the frame index of the end image in the range can be recorded. Then, when a moving operation for the selection frame corresponding to the selected range is received, the moving distance and direction of the operation are determined, together with the number of image frames corresponding to that distance (i.e., how many frames the selection moved). Adding this frame count to, or subtracting it from, the frame indexes of the start image and the end image yields the frame indexes of the update start image and update end image, from which the updated image range is determined. Moving forward along the image timeline adds the frame count, and moving backward subtracts it.
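The index bookkeeping for dragging the selection frame can be sketched with a hypothetical `moveSelection` helper, assuming a forward drag maps to a positive frame delta and the selection is clamped to the timeline bounds:

```javascript
// Shift a recorded [startIndex, endIndex] selection by deltaFrames
// (positive = forward along the timeline, negative = backward),
// clamping so the selection stays within [0, totalFrames - 1].
function moveSelection(startIndex, endIndex, deltaFrames, totalFrames) {
  const length = endIndex - startIndex;
  let start = startIndex + deltaFrames;
  start = Math.max(0, Math.min(start, totalFrames - 1 - length));
  return { start, end: start + length };
}
```

For example, dragging a 3-frame selection at frames 2 to 4 forward by 3 frames yields frames 5 to 7; dragging it backward past the start of the timeline clamps it at frame 0.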
In an optional implementation manner of this embodiment, a processing operation for the initial image range is received, and a target image range is determined according to the processing operation; a specific implementation process may also be as follows:
receiving a deletion operation for an image included in the initial image range;
deleting images included in the initial image range in the image timeline;
and determining the remaining images in the image timeline as the target image range.
In a specific implementation, after the selected image range is determined according to the relative position of the range selection control in the image timeline, the frame index of the start image and the frame index of the end image in the range can be recorded. Then, when a deletion operation for the images included in the range is received, the number of image frames between the frame index of the start image and the frame index of the end image may be determined, and the updated frame index of the end image obtained by subtracting that number from the frame index of the end image, thereby determining the updated image range.
It should be noted that, after the user selects the image range, a selection frame is formed (containing the images in the range), and the user may directly delete all the images included in the selection frame (i.e., the selected range), with the remaining images determined as the updated range used for the final composite video. Therefore, when the frames to be deleted are contiguous and numerous, selecting the range and deleting them in one operation improves image processing efficiency and, in turn, video synthesis efficiency.
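Deleting a contiguous selected range in one operation amounts to splicing it out and letting the later frames shift down. A minimal sketch (the `deleteRange` name is illustrative):

```javascript
// Remove frames startIndex..endIndex (inclusive) in one operation;
// the remaining frames form the updated image range, re-indexed from 0.
function deleteRange(frames, startIndex, endIndex) {
  return frames.slice(0, startIndex).concat(frames.slice(endIndex + 1));
}
```

Deleting frames 1 through 3 of a 5-frame timeline leaves only the first and last frames, and every frame index after the deleted span drops by the number of deleted frames.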
Step 108: previewing the animation effect corresponding to the image in the image range, and synthesizing the corresponding video according to the previewing result.
Specifically, on the basis of determining the selected image range according to the relative position of the range selection control in the image timeline, the animation effect corresponding to the images in the range is previewed, and the corresponding video is synthesized according to the preview result.
It should be noted that, after the corresponding image timeline is created in the production interface provided by the browser for the images to be produced, an editing operation may be performed on a first image on the timeline to obtain a corresponding composite image. Therefore, when the video corresponding to the images in a determined image range needs to be previewed, the range may contain both processed composite images and unprocessed second images; what is actually previewed on the production interface provided by the browser is the video corresponding to the composite images and the second images in the range.
The second image is an image in the range other than the first image; that is, the range may include first images on which an editing operation was performed and second images on which no editing operation was performed. When previewing the images in the range, the composite image resulting from the editing operation is displayed for a first image, while a second image (i.e., the original image) is displayed directly, since no editing operation was performed on it.
In an optional implementation manner of this embodiment, previewing an animation effect corresponding to an image in the image range includes:
acquiring preview parameters under the condition of receiving a preview instruction;
determining the playing delay time length according to the preview parameter;
determining a frame index of a starting image and a frame index of an ending image in the image range, and determining the frame index of the starting image as a preview frame index;
rendering the composite image corresponding to the preview frame index in a preview window in a production interface provided by the browser;
and after the playing delay duration elapses, incrementing the preview frame index by 1 and returning to the step of rendering the composite image corresponding to the preview frame index in the preview window in the production interface provided by the browser, until the preview frame index equals the frame index corresponding to the end image.
Specifically, the preview parameter may refer to the speed at which the images are played. It should be noted that a user may directly click a play button in the production interface provided by the browser to preview the video effect corresponding to the composite images and second images in the selected image range, so as to determine whether the finally generated video meets expectations. When the browser plays the composite images and second images in the selected range, the playing speed may be determined according to a delay parameter (i.e., the preview parameter) set by the user. That is, when the image timeline is in the selected state, the preview playing function may be limited to the composite images and second images within the user-selected range; composite images corresponding to other, unselected images are not previewed.
Illustratively, the image timeline contains image 1, image 2, image 3, image 4, and image 5. Assuming that the user-selected range is images 2 through 4 and a cropping operation is performed on images 2 and 3, then images 2 and 3 are first images and image 4 is a second image. When a preview instruction is received, the composite images obtained by cropping images 2 and 3, together with image 4, are displayed on the production interface provided by the browser for the user to preview.
It should be noted that, to let the user preview the video effect quickly, the present application also provides a real-time animation preview function. The browser records the preview frame index of the image currently rendered in the canvas area (i.e., the position of the current image). When the user clicks play, the browser determines the play rate and the corresponding playing delay duration (i.e., how many frames per second to preview). After the current image is rendered and the delay duration elapses, the preview index is incremented by 1 and the next image is rendered, and so on, until all the images to be previewed have been rendered.
For example, when the preview parameter is 10 FPS, the playing delay duration is 100 ms: after the current image is rendered and a 100 ms delay elapses, the preview frame index is incremented by 1 and the next image is rendered, and so on, until all the required images are rendered.
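The delay-driven preview loop above can be sketched as two small helpers, assuming the delay is 1000 ms divided by the preview frame rate; the function names are illustrative.

```javascript
// Playing delay in milliseconds for a given preview rate:
// 10 FPS -> 100 ms between frames.
function playDelayMs(fps) {
  return 1000 / fps;
}

// The order in which frame indexes are rendered: start at the start
// image's index, self-increment by 1 after each delay, and stop once
// the index reaches the end image's index.
function previewOrder(startIndex, endIndex) {
  const order = [];
  for (let i = startIndex; i <= endIndex; i++) order.push(i);
  return order;
}
```

In the browser, each step of `previewOrder` would be spaced `playDelayMs(fps)` apart (for example via `setTimeout`), rendering the composite image for that index into the canvas preview window.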
In an optional implementation manner of this embodiment, after previewing a video corresponding to an image in the image range on a production interface provided by the browser according to the preview parameter, the method further includes:
in a case where a drag operation for a display progress bar is received, determining a stop position of the drag operation;
determining an image corresponding to the stop position;
and previewing the image on a production interface provided by the browser.
It should be noted that, in the present application, clicking or dragging the progress bar allows a quick jump to the image corresponding to a given time point. If that image is a first image on which an editing operation was performed, the composite image after editing is displayed; if it is a second image on which no editing operation was performed, the second image is displayed directly. In a specific implementation, clicking a frame on the timeline positions playback quickly at that frame, and the picture corresponding to the frame is rendered synchronously on the canvas via the index obtained from the click.
For example, fig. 7 is a fourth schematic view of a production interface provided by the browser according to an embodiment of the present application. As shown in fig. 7, the production interface includes a video/GIF/picture selection area, through which the user can upload the images to be produced; an undo/redo area, through which the user can cancel the current operation; a canvas area, used to render the image selected by the user; a playback progress bar, which can be dragged to position quickly at a given frame; an image timeline, which displays all the images to be produced uploaded by the user and contains a range selection control that the user can drag to customize the image range for the synthesized video; and a delay setting area, which controls the preview parameters.
In practical application, the user can define the image range to be previewed, then preview the video animation effect generated by the images in the range (composite images and/or second images). Based on the preview, the user can go back and adjust the selected range or the preview parameters, perform an editing operation again on a certain composite image, or withdraw the corresponding operation instructions. Once the preview meets expectations, the user clicks the generation control; the corresponding video is generated and exported based on the images in the currently selected range, and advertisement delivery of the video can then be performed through a corresponding platform.
For example, the user may select images 1 through 10 and preview the video effect of the composite and second images in that range, then select images 5 through 12 and preview that range, further select images 3 through 8 and preview that range, compare the several animation effects, and finally generate and export the video based on the composite and second images corresponding to images 5 through 12.
According to the video production method, an image timeline created based on the images to be produced can be displayed in a production interface provided by the browser, the image timeline including a range selection control; the relative position of the range selection control in the image timeline is determined in response to a moving operation on the control; the selected image range is determined according to that relative position; and the animation effect corresponding to the images in the range is previewed, with the corresponding video synthesized according to the preview result. In this way, an intelligent browser-based video production method is provided: video can be produced with only a browser installed on the computer, without downloading and installing a separate application, so the threshold for producing video is low, production time is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline, and the animation effect corresponding to the composite images in the customized range previewed, so the video the user needs can be synthesized simply and efficiently, improving video production efficiency. Moreover, the method provides rich functions such as intelligently cropping images, deleting useless images, withdrawing the current operation instruction, and previewing animation effects in real time; it is simple to operate and greatly improves the efficiency of producing video.
The following description will further describe the video production method provided by the present application in the application of GIF animation with reference to fig. 8. Fig. 8 shows a processing flow chart of a video production method applied to a GIF animation according to an embodiment of the present application, which specifically includes the following steps:
step 802: a user enters a GIF animation interface, selects a video/existing GIF animation through an uploading control on the GIF animation interface, and extracts an image of the video or the existing GIF animation; and/or selecting a plurality of still images, each still image being treated as a frame image.
Step 804: rendering all the acquired images to an image time axis; rendering the currently selected image in the canvas area, and performing undo/redo operation on the currently rendered image in the canvas area; in the canvas area, adjusting the size of the GIF through a clipping box; performing operations of selecting an image range, deleting frames and the like on an image time axis; and previewing the animation effect in real time.
Step 806: and determining whether the animation effect is in accordance with the expectation, if so, executing the following step 808, otherwise, returning to the step 804, or emptying the currently uploaded image and returning to the step 802 again.
Step 808: and generating a corresponding GIF animation, storing and putting.
The present application provides an intelligent browser-based video production method: GIF animation can be produced with only a browser installed on the computer, without downloading and installing a separate application, so the threshold for producing GIF animation is low, production time is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline, and the animation effect corresponding to the composite images in the customized range previewed, so the GIF animation the user needs can be synthesized simply and efficiently, improving GIF animation production efficiency. Moreover, the method provides rich functions such as intelligently cropping images, deleting useless images, withdrawing the current operation instruction, and previewing animation effects in real time; it is simple to operate and greatly improves the efficiency of producing GIF animation.
Fig. 9 is a flowchart illustrating another video production method according to an embodiment of the present application, which is applied to a browser, and specifically includes the following steps:
step 902: in a case where a moving operation for a range selection control is received, a first distance of the range selection control, a scroll distance of an image timeline, and a second distance of the image timeline are determined.
The range selection control is a control, arranged on the image timeline in the production interface provided by the browser, for selecting an image range. The first distance of the range selection control is the distance between the range selection control and a first boundary of the production interface provided by the browser; the second distance of the image timeline is the distance between a region boundary of the image timeline and the first boundary of the production interface provided by the browser.
Step 904: determining a relative position of the range selection control in the image timeline according to the first distance, the scroll distance, and the second distance.
Step 906: and determining the selected image range according to the relative position of the range selection control in the image time axis.
Step 908: previewing the animation effect corresponding to the image in the image range, and synthesizing the corresponding video according to the previewing result.
The present application provides an intelligent browser-based video production method: video can be produced with only a browser installed on the computer, without downloading and installing a separate application, so the threshold for producing video is low, production time is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline, and the animation effect corresponding to the composite images in the customized range previewed, so the video the user needs can be synthesized simply and efficiently, improving video production efficiency.
The above is a schematic scheme of the video production method shown in fig. 9. It should be noted that the technical solution of the video production method shown in fig. 9 belongs to the same concept as that of the video production method shown in fig. 1; for details not described in the technical solution of fig. 9, reference can be made to the description of the technical solution of fig. 1.
Corresponding to the above method embodiment, the present application further provides an embodiment of a video production apparatus, and fig. 10 shows a schematic structural diagram of a video production apparatus according to an embodiment of the present application. As shown in fig. 10, the apparatus includes:
a display module 1002, configured to display, in a production interface provided by the browser, an image timeline created based on an image to be produced, where the image timeline includes a range selection control;
a first determination module 1004 configured to determine a relative position of the range selection control in the image timeline in response to a movement operation of the range selection control;
a second determination module 1006 configured to determine a selected image range according to a relative position of the range selection control in the image timeline;
and the composition module 1008 is configured to preview the animation effect corresponding to the image in the image range, and compose a corresponding video according to the preview result.
Optionally, the first determining module 1004 is further configured to:
determining a first distance of the range selection control relative to a first boundary of a production interface provided by the browser, a scrolling distance of an image timeline, and a second distance of a region boundary of the image timeline relative to the first boundary of the production interface provided by the browser;
determining a relative position of the range selection control in the image timeline according to the first distance, the scroll distance, and the second distance.
Optionally, the apparatus further comprises a third determining module configured to:
and under the condition that the processing operation aiming at the selected image range is received, determining to update the image range according to the relative position of the range selection control in the image time axis and the processing operation.
Optionally, the apparatus further comprises an upload module configured to:
acquiring an uploaded image to be made under the condition that a data selection instruction is received;
and according to the uploaded image to be made and a preset display rule, establishing a corresponding image time axis for the image to be made in a making interface provided by the browser.
Optionally, the upload module is further configured to at least one of:
under the condition of receiving a first data selection instruction, acquiring a target video corresponding to the first data selection instruction, and extracting the image to be produced from the target video;
under the condition that a second data selection instruction is received, acquiring a target image corresponding to the second data selection instruction, and determining the target image as the image to be manufactured;
and under the condition of receiving a third data selection instruction, acquiring a GIF image corresponding to the third data selection instruction, and determining the GIF image as the image to be made.
Optionally, the upload module is further configured to:
determining corresponding extraction fineness according to the playing frame number per second of the target video;
determining the time length of each frame of video to be extracted according to the extraction fineness;
and extracting the image to be made from the target video according to the time length of each frame of video.
Optionally, the upload module is further configured to:
playing the target video in real time, and calculating the time point of the next video frame in the target video;
skipping the playing progress of the target video to the time point, pausing the target video, and acquiring a current video frame;
rendering the current video frame to a canvas of the browser, converting the video frame on the canvas into an image for storage to obtain the image to be manufactured, and returning to the operation step of calculating the time point of the next video frame in the target video until the target video is played.
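The seek-and-capture extraction described above can be sketched as a schedule of time points plus browser-side capture steps. This is a sketch under the assumption that each extracted frame covers `1 / fps` seconds of video; the browser APIs referenced in the comments (`HTMLMediaElement.currentTime`, `CanvasRenderingContext2D.drawImage`, `HTMLCanvasElement.toDataURL`) are standard, but the surrounding function names are illustrative.

```javascript
// Time points at which frames are captured: one every perFrameSeconds,
// from the start of the video until its duration is reached.
function frameTimePoints(durationSeconds, perFrameSeconds) {
  const points = [];
  for (let t = 0; t < durationSeconds; t += perFrameSeconds) {
    points.push(Number(t.toFixed(6))); // round away float drift
  }
  return points;
}

// In the browser, for each time point t:
//   video.currentTime = t;            // jump playback progress to t
//   /* wait for the 'seeked' event */ // the frame is now paused at t
//   ctx.drawImage(video, 0, 0);       // render the paused frame to canvas
//   images.push(canvas.toDataURL());  // store the frame as an image
```

For a 1-second video extracted at 4 frames per second (0.25 s per frame), the capture schedule is 0 s, 0.25 s, 0.5 s, and 0.75 s.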
Optionally, the synthesis module 1008 is further configured to:
acquiring preview parameters under the condition of receiving a preview instruction;
determining the playing delay time length according to the preview parameter;
determining a frame index of a starting image and a frame index of an ending image in the image range, and determining the frame index of the starting image as a preview frame index;
rendering the composite image corresponding to the preview frame index in a preview window in a production interface provided by the browser;
and after the playing delay time, enabling the preview frame index to increase by 1, and returning to execute the operation step of rendering the composite image corresponding to the preview frame index in a preview window in a production interface provided by the browser until the preview frame index is equal to the frame index corresponding to the ending image.
Optionally, the second determining module 1006 is further configured to:
determining a relative position of the range selection control in the image timeline as a stop position of the range selection control;
mapping the stopping position to the image time axis, and determining a corresponding frame index;
and determining the selected image range according to the frame index.
Optionally, the upload module is further configured to:
determining a selected target image on the image time axis, wherein each unit time on the image time axis corresponds to one image;
and rendering the target image to a canvas area in a production interface provided by the browser according to the size of the target image.
Optionally, the upload module is further configured to:
displaying a cropping frame in the canvas area, the cropping frame being located within the image area of the target image, the cropping frame having an area no greater than the target image;
and receiving control operation aiming at the cropping frame, and generating a synthetic image corresponding to the target image according to the control operation.
Optionally, the upload module is further configured to:
determining the target image as a bottom image of the canvas area;
adding a cropping frame on the bottom layer image, wherein the cropping frame and the target image are positioned on different layers;
and adding a masking layer between the bottom layer image and the cropping frame, wherein the masking layer is used for distinguishing selected areas and unselected areas of the cropping frame.
Optionally, the upload module is further configured to:
determining the position of the cropping frame according to the control operation for the cropping frame;
determining a positional parameter of the cropping frame relative to the underlying image;
acquiring corresponding image data in the target image according to the position parameters;
and determining the acquired image data as the composite image, and displaying the composite image in a cutting preview area in a production interface provided by the browser.
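The position parameters of the cropping frame relative to the underlying image reduce to plain rectangle math; the names below are illustrative, and in the browser the final extraction would be `ctx.getImageData(x, y, width, height)` on the canvas holding the bottom-layer image.

```javascript
// Position parameters of the cropping frame relative to the underlying
// image; both rectangles are given in canvas coordinates.
function cropParams(cropFrame, bottomImage) {
  return {
    x: cropFrame.x - bottomImage.x,   // horizontal offset into the image
    y: cropFrame.y - bottomImage.y,   // vertical offset into the image
    width: cropFrame.width,
    height: cropFrame.height,
  };
}
```

For example, a 200×100 crop frame at canvas position (150, 120) over an image whose top-left corner sits at (50, 20) selects the 200×100 region starting at (100, 100) inside the image.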
Optionally, the upload module is further configured to:
storing the image data after the control operation;
determining an operation type of the control operation;
adding the control operation and the corresponding operation type into an operation list;
and setting an instruction index corresponding to the newly added control operation according to the instruction index of the control operation included in the operation list.
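The operation list with instruction indexes can be sketched as an append-only history plus a cursor, which also supports the withdrawal (undo) behavior described later; the class shape is an assumption for illustration.

```javascript
// Append-only history of control operations; instructionIndex points at
// the operation whose image data is currently shown on the canvas.
class OperationList {
  constructor() {
    this.ops = [];
    this.instructionIndex = -1; // -1: no operation applied yet
  }
  // Record a control operation together with its type and resulting
  // image data; the new operation's index becomes the current index.
  push(type, imageData) {
    this.ops.push({ type, imageData });
    this.instructionIndex = this.ops.length - 1;
  }
  // Withdraw the current operation: step the index back and return the
  // image data to restore in the canvas area (null = original state).
  undo() {
    if (this.instructionIndex < 0) return null;
    this.instructionIndex -= 1;
    return this.instructionIndex >= 0
      ? this.ops[this.instructionIndex].imageData
      : null;
  }
}
```

After two crop operations, the instruction index points at the second; one withdrawal restores the first operation's stored image data, and a further withdrawal restores the original, pre-operation state.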
Optionally, the upload module is further configured to:
and under the condition of receiving a withdrawal instruction, withdrawing the operation corresponding to the current control operation, and restoring the synthetic image displayed in the canvas area in the production interface provided by the browser to the state before the operation.
Optionally, the upload module is further configured to:
determining the operation type of the current control operation under the condition of receiving the withdrawal instruction;
determining a target control operation of the operation type in the operation list;
acquiring image data corresponding to the target control operation, and updating the instruction index of the current control operation into the instruction index of the target control operation;
and restoring the canvas area in the production interface provided by the browser into the image corresponding to the image data.
The present application provides an intelligent browser-based video production apparatus: video can be produced with only a browser installed on the computer, without downloading and installing a separate application, so the threshold for producing video is low, production time is reduced, and production efficiency is improved. In addition, the required image range can be customized through the image timeline, and the animation effect corresponding to the composite images in the customized range previewed, so the video the user needs can be synthesized simply and efficiently, improving video production efficiency. Moreover, the apparatus provides rich functions such as intelligently cropping images, deleting useless images, withdrawing the current operation instruction, and previewing animation effects in real time; it is simple to operate and greatly improves the efficiency of producing video.
The above is a schematic scheme of a video production apparatus of the present embodiment. It should be noted that the technical solution of the video creation apparatus and the technical solution of the video creation method belong to the same concept, and details that are not described in detail in the technical solution of the video creation apparatus can be referred to the description of the technical solution of the video creation method.
FIG. 11 illustrates a block diagram of a computing device 1100 provided in accordance with an embodiment of the present application. The components of the computing device 1100 include, but are not limited to, memory 1110 and a processor 1120. The processor 1120 is coupled to the memory 1110 via a bus 1130 and the database 1150 is used to store data.
The computing device 1100 also includes an access device 1140, the access device 1140 enabling the computing device 1100 to communicate via one or more networks 1160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. The access device 1140 may include one or more of any type of network interface, e.g., a Network Interface Card (NIC), wired or wireless, such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the application, the above-described components of computing device 1100, as well as other components not shown in FIG. 11, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 11 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 1100 can be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 1100 can also be a mobile or stationary server.
The processor 1120 is configured to execute the following computer-executable instructions:
displaying, in a production interface provided by the browser, an image timeline created based on images to be produced, wherein the image timeline includes a range selection control;
determining a relative position of the range selection control in the image timeline in response to a moving operation on the range selection control;
determining a selected image range according to the relative position of the range selection control in the image timeline;
previewing an animation effect corresponding to the images within the image range, and synthesizing a corresponding video according to the preview result.
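As a minimal, browser-independent sketch, the four instruction steps above can be wired together as follows; every function parameter here is a hypothetical stand-in for the corresponding step, not an API from the application itself:

```javascript
// Orchestrate the four steps the processor executes:
// display timeline -> read control position -> derive range -> preview and compose.
function makeVideo({ showTimeline, getControlPosition, rangeFromPosition, previewAndCompose }) {
  showTimeline();                       // 1. display the image timeline
  const pos = getControlPosition();     // 2. control's relative position in the timeline
  const range = rangeFromPosition(pos); // 3. selected image range
  return previewAndCompose(range);      // 4. preview the range, then synthesize the video
}
```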
The above is an illustrative scheme of the computing device of this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the video production method belong to the same concept; for details not described in the technical solution of the computing device, refer to the description of the technical solution of the video production method.
An embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the operation steps of the video production method.
The above is an illustrative scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the video production method belong to the same concept; for details not described in the technical solution of the storage medium, refer to the description of the technical solution of the video production method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in explaining the application. Alternative embodiments are not described exhaustively, and the application is not limited to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, thereby enabling others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (19)

1. A video production method, applied to a browser, comprising:
displaying, in a production interface provided by the browser, an image timeline created based on images to be produced, wherein the image timeline includes a range selection control;
determining a relative position of the range selection control in the image timeline in response to a moving operation on the range selection control;
determining a selected image range according to the relative position of the range selection control in the image timeline; and
previewing an animation effect corresponding to the images within the image range, and synthesizing a corresponding video according to the preview result.
2. The video production method according to claim 1, wherein determining the relative position of the range selection control in the image timeline in response to the moving operation on the range selection control comprises:
determining a first distance of the range selection control relative to a first boundary of the production interface provided by the browser, a scroll distance of the image timeline, and a second distance of a region boundary of the image timeline relative to the first boundary of the production interface provided by the browser; and
determining the relative position of the range selection control in the image timeline according to the first distance, the scroll distance, and the second distance.
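A minimal sketch of the computation recited in claim 2. The claim only says the relative position is determined "according to" the three distances; the particular combination below (control offset plus scrolled distance minus the timeline region's own offset) is an assumption:

```javascript
// Position of the range selection control relative to the image timeline,
// from: firstDistance  - control's distance to the interface's first boundary,
//       scrollDistance - how far the timeline has been scrolled,
//       secondDistance - timeline region's distance to the same boundary.
// ASSUMPTION: the three distances combine additively as shown.
function relativePositionInTimeline(firstDistance, scrollDistance, secondDistance) {
  return firstDistance + scrollDistance - secondDistance;
}
```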
3. The video production method according to claim 1, wherein after determining the selected image range according to the relative position of the range selection control in the image timeline, the method further comprises:
in a case where a processing operation for the selected image range is received, determining an updated image range according to the relative position of the range selection control in the image timeline and the processing operation.
4. The video production method according to claim 1, wherein before displaying, in the production interface provided by the browser, the image timeline created based on the images to be produced, the image timeline including the range selection control, the method further comprises:
acquiring uploaded images to be produced in a case where a data selection instruction is received; and
creating, in the production interface provided by the browser, a corresponding image timeline for the images to be produced according to the uploaded images to be produced and a preset display rule.
5. The video production method according to claim 4, wherein acquiring the uploaded images to be produced in the case where the data selection instruction is received comprises at least one of:
in a case where a first data selection instruction is received, acquiring a target video corresponding to the first data selection instruction, and extracting the images to be produced from the target video;
in a case where a second data selection instruction is received, acquiring a target image corresponding to the second data selection instruction, and determining the target image as an image to be produced; and
in a case where a third data selection instruction is received, acquiring a GIF image corresponding to the third data selection instruction, and determining the GIF image as an image to be produced.
6. The video production method according to claim 5, wherein extracting the images to be produced from the target video comprises:
determining a corresponding extraction fineness according to the number of frames played per second of the target video;
determining a duration of each video frame to be extracted according to the extraction fineness; and
extracting the images to be produced from the target video according to the duration of each video frame.
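The fineness derivation recited in claim 6 can be sketched as follows. The cap of ten extracted frames per second is an assumed policy for illustration only; the claim does not fix how the fineness follows from the frame rate:

```javascript
// Derive an extraction fineness (frames extracted per second of video)
// from the target video's playback frame rate.
// ASSUMPTION: fineness is the frame rate capped at maxPerSecond.
function extractionFineness(framesPerSecond, maxPerSecond = 10) {
  return Math.min(framesPerSecond, maxPerSecond);
}

// Duration (in seconds of video) covered by each extracted frame.
function frameDuration(framesPerSecond) {
  return 1 / extractionFineness(framesPerSecond);
}
```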
7. The video production method according to claim 6, wherein extracting the images to be produced from the target video according to the duration of each video frame comprises:
playing the target video in real time, and calculating a time point of a next video frame in the target video;
jumping the playing progress of the target video to the time point, pausing the target video, and acquiring a current video frame; and
rendering the current video frame to a canvas of the browser, converting the video frame on the canvas into an image for storage to obtain an image to be produced, and returning to the operation step of calculating the time point of the next video frame in the target video until the target video finishes playing.
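The seek-and-capture loop of claim 7 can be sketched with a plain mock object in place of a real HTMLVideoElement and canvas, which exist only in a browser. `captureFrame` is a hypothetical stand-in for rendering the paused frame to the canvas and storing it as an image:

```javascript
// Walk the video by jumping playback to each successive time point,
// capturing the paused frame at each stop (claim 7's loop).
function extractFrames(video, frameDurationSec, captureFrame) {
  const images = [];
  let t = 0;
  while (t < video.duration) {
    video.currentTime = t;            // jump playing progress to the time point
    images.push(captureFrame(video)); // store the paused frame as an image
    t += frameDurationSec;            // time point of the next video frame
  }
  return images;
}
```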
8. The video production method according to any one of claims 1 to 7, wherein previewing the animation effect corresponding to the images within the image range comprises:
acquiring a preview parameter in a case where a preview instruction is received;
determining a playback delay duration according to the preview parameter;
determining a frame index of a starting image and a frame index of an ending image in the image range, and determining the frame index of the starting image as a preview frame index;
rendering a composite image corresponding to the preview frame index in a preview window in the production interface provided by the browser; and
after the playback delay duration elapses, incrementing the preview frame index by 1, and returning to the operation step of rendering the composite image corresponding to the preview frame index in the preview window in the production interface provided by the browser until the preview frame index equals the frame index corresponding to the ending image.
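The render/delay/increment loop of claim 8 can be modeled as computing a playback schedule. Deriving the delay as 1000 ms divided by a frames-per-second preview parameter is an assumption; the claim only says the delay is determined from the preview parameter:

```javascript
// Playback schedule for previewing a frame range: the per-frame delay in
// milliseconds, plus the sequence of preview frame indexes, starting at the
// start image's index and incrementing by 1 until the end image's index.
function previewPlan(startIndex, endIndex, fps) {
  const delayMs = 1000 / fps; // ASSUMPTION: preview parameter is an fps value
  const frames = [];
  for (let i = startIndex; i <= endIndex; i++) {
    frames.push(i); // each index's composite image is rendered in turn
  }
  return { delayMs, frames };
}
```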
9. The video production method according to any one of claims 1 to 7, wherein determining the selected image range according to the relative position of the range selection control in the image timeline comprises:
mapping the relative position of the range selection control in the image timeline to a frame on the image timeline, and determining a corresponding frame index; and
determining the selected image range according to the frame index.
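A sketch of the mapping in claim 9, assuming each frame occupies a fixed pixel width on the timeline (`frameWidthPx` is a hypothetical parameter; the claim does not specify the mapping's geometry):

```javascript
// Map a pixel position on the timeline to a frame index, clamped to the
// valid range of frames.
function positionToFrameIndex(relativePx, frameWidthPx, frameCount) {
  const index = Math.floor(relativePx / frameWidthPx);
  return Math.max(0, Math.min(frameCount - 1, index));
}

// Selected image range from the two handles of the range selection control.
function selectedRange(leftHandlePx, rightHandlePx, frameWidthPx, frameCount) {
  return {
    start: positionToFrameIndex(leftHandlePx, frameWidthPx, frameCount),
    end: positionToFrameIndex(rightHandlePx, frameWidthPx, frameCount),
  };
}
```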
10. The video production method according to claim 4, wherein after creating, in the production interface provided by the browser, the corresponding image timeline for the images to be produced according to the uploaded images to be produced and the preset display rule, the method further comprises:
determining a selected target image on the image timeline, wherein each unit of time on the image timeline corresponds to one image; and
rendering the target image to a canvas area in the production interface provided by the browser according to the size of the target image.
11. The video production method according to claim 10, wherein after rendering the target image to the canvas area in the production interface provided by the browser according to the size of the target image, the method further comprises:
displaying a cropping frame in the canvas area, the cropping frame being located within the image area of the target image, the area of the cropping frame being no greater than that of the target image; and
receiving a control operation for the cropping frame, and generating a composite image corresponding to the target image according to the control operation.
12. The video production method according to claim 11, wherein displaying the cropping frame in the canvas area comprises:
determining the target image as a bottom-layer image of the canvas area;
adding a cropping frame on the bottom-layer image, wherein the cropping frame and the target image are located on different layers; and
adding a mask layer between the bottom-layer image and the cropping frame, wherein the mask layer is used to distinguish the selected area and the unselected area of the cropping frame.
13. The video production method according to claim 12, wherein receiving the control operation for the cropping frame and generating the composite image corresponding to the target image according to the control operation comprises:
determining a position of the cropping frame according to the control operation for the cropping frame;
determining position parameters of the cropping frame relative to the bottom-layer image;
acquiring corresponding image data in the target image according to the position parameters; and
determining the acquired image data as the composite image, and displaying the composite image in a cropping preview area in the production interface provided by the browser.
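A browser-independent sketch of claim 13's extraction step: given the cropping frame's position parameters relative to the underlying image, copy out the covered pixels as the composite image. Pixels are modeled as a 2-D array rather than canvas ImageData so the logic runs anywhere:

```javascript
// Copy the pixel region covered by the cropping frame out of the target
// image. {x, y} is the frame's top-left corner relative to the image.
function cropImageData(pixels, { x, y, width, height }) {
  const out = [];
  for (let row = y; row < y + height; row++) {
    out.push(pixels[row].slice(x, x + width)); // one cropped row at a time
  }
  return out;
}
```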
14. The video production method according to claim 11, wherein after receiving the control operation for the cropping frame and generating the composite image corresponding to the target image according to the control operation, the method further comprises:
storing the image data resulting from the control operation;
determining an operation type of the control operation;
adding the control operation and the corresponding operation type to an operation list; and
setting an instruction index corresponding to the newly added control operation according to the instruction indexes of the control operations included in the operation list.
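Claim 14's bookkeeping can be sketched as an operation list that records each control operation with its type and resulting image data, assigning instruction indexes in sequence. Snapshotting full image data per operation is an assumption; the claim only says the data is stored:

```javascript
// Operation list per claim 14: each recorded entry gets an instruction
// index one past the currently applied operation's index.
class OperationList {
  constructor() {
    this.ops = [];
    this.currentIndex = -1; // instruction index of the current operation
  }
  record(type, imageData) {
    // Discard any entries past the current index (they were undone).
    this.ops.length = this.currentIndex + 1;
    this.ops.push({ type, imageData });
    this.currentIndex = this.ops.length - 1;
    return this.currentIndex; // the new operation's instruction index
  }
}
```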
15. The video production method according to claim 14, wherein after receiving the control operation for the cropping frame and generating the composite image corresponding to the target image according to the control operation, the method further comprises:
in a case where an undo instruction is received, undoing the operation corresponding to the current control operation, and restoring the composite image displayed in the canvas area in the production interface provided by the browser to its state before the operation.
16. The video production method according to claim 15, wherein undoing the operation corresponding to the current control operation and restoring the composite image displayed in the canvas area in the production interface provided by the browser to its state before the operation comprises:
determining the operation type of the current control operation in the case where the undo instruction is received;
determining a target control operation of the operation type in the operation list;
acquiring the image data corresponding to the target control operation, and updating the instruction index of the current control operation to the instruction index of the target control operation; and
restoring the canvas area in the production interface provided by the browser to the image corresponding to the image data.
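A sketch of claim 16's undo step, assuming an operation list of the shape `{ ops: [{ imageData }], currentIndex }` with per-operation image snapshots. `initialImage` is a hypothetical fallback for the state before any operation; the claim does not name it:

```javascript
// Undo the current control operation: step the instruction index back to
// the previous operation and return its stored image data to restore.
function undo(list, initialImage) {
  if (list.currentIndex < 0) return initialImage; // nothing left to undo
  list.currentIndex -= 1; // update instruction index to the target operation
  return list.currentIndex >= 0
    ? list.ops[list.currentIndex].imageData // restore target's image data
    : initialImage;                         // back past the first operation
}
```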
17. A video production apparatus, applied to a browser, comprising:
a display module configured to display, in a production interface provided by the browser, an image timeline created based on images to be produced, wherein the image timeline includes a range selection control;
a first determination module configured to determine a relative position of the range selection control in the image timeline in response to a moving operation on the range selection control;
a second determination module configured to determine a selected image range according to the relative position of the range selection control in the image timeline; and
a synthesis module configured to preview an animation effect corresponding to the images within the image range and synthesize a corresponding video according to the preview result.
18. A computing device, comprising:
a memory and a processor;
wherein the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to implement the following method:
displaying, in a production interface provided by a browser, an image timeline created based on images to be produced, wherein the image timeline includes a range selection control;
determining a relative position of the range selection control in the image timeline in response to a moving operation on the range selection control;
determining a selected image range according to the relative position of the range selection control in the image timeline; and
previewing an animation effect corresponding to the images within the image range, and synthesizing a corresponding video according to the preview result.
19. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, perform the steps of the video production method of any one of claims 1 to 16.
CN202110350554.4A 2021-03-31 2021-03-31 Video production method and device Pending CN113099287A (en)

Publications (1)

Publication Number Publication Date
CN113099287A true CN113099287A (en) 2021-07-09



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination