CN114666669B - Video processing method, device, equipment and storage medium


Info

Publication number
CN114666669B
Authority
CN
China
Prior art keywords
video
visual information
template
target template
interface
Prior art date
Legal status
Active
Application number
CN202210234091.XA
Other languages
Chinese (zh)
Other versions
CN114666669A (en)
Inventor
汪谷
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210234091.XA
Publication of CN114666669A
Application granted
Publication of CN114666669B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • H04N21/8173End-user applications, e.g. Web browser, game

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a video processing method, apparatus, device and storage medium. In the embodiments of the application, N template videos are preset for a first video, and each template video corresponds to visual information. A target template video is selected from the N template videos and spliced with the first video to obtain a second video. In the process of previewing the second video, second visual information corresponding to the first video is generated through an editing operation on the first visual information corresponding to the target template video, and the second video with the second visual information is displayed on a social platform. The second visual information in the second video can improve the viewing experience and interest of viewing users, thereby increasing the activity and platform stickiness of viewing users.

Description

Video processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a video processing method, apparatus, device, and storage medium.
Background
Currently, a content publisher may publish video works on a social platform and present the video works to viewing users. A content publisher who wants to show a video work to more viewing users can set a cover for the video content to attract viewing users to watch it. However, the cover of the video content may fail to match the viewing interests of the viewing users, which affects the activity and platform stickiness of the viewing users.
Disclosure of Invention
The application provides a video processing method, apparatus, device and storage medium, so as to at least improve the viewing experience and interest of viewing users and increase the activity and platform stickiness of viewing users. The technical scheme of the application is as follows:
According to a first aspect of the embodiments of the present application, a video processing method is provided, in which a video editing interface is provided through a terminal device, where the video editing interface includes a preview interface, a generation control, and N template videos, N is a positive integer, and N is greater than or equal to 1. The method includes the following steps: in response to a trigger operation on a target template video, updating a first video displayed in the preview interface to the target template video, where the target template video includes first visual information, and displaying a target adding control corresponding to the target template video on the video editing interface; in response to a trigger operation on the target adding control, splicing the target template video with the first video to obtain a second video matched with the first video, and previewing the second video through the preview interface; in the process of previewing the first visual information, generating second visual information corresponding to the first video in response to an editing operation on the first visual information; and in response to a trigger operation on the generation control, displaying a video display interface, where the video display interface includes the second video with the second visual information.
According to a second aspect of the embodiments of the present application, a video processing apparatus is provided. The video processing apparatus provides a video editing interface, where the video editing interface includes a preview interface, a generation control, and N template videos, N is a positive integer, and N is greater than or equal to 1. The video processing apparatus includes an updating module, a splicing module, a generating module and a display module. The updating module is configured to update, in response to a trigger operation on a target template video, a first video displayed in the preview interface to the target template video, where the target template video includes first visual information, and to display a target adding control corresponding to the target template video on the video editing interface. The splicing module is configured to splice, in response to a trigger operation on the target adding control, the target template video with the first video to obtain a second video matched with the first video, and to preview the second video through the preview interface. The generating module is configured to generate, in the process of previewing the first visual information and in response to an editing operation on the first visual information, second visual information corresponding to the first video. The display module is configured to display, in response to a trigger operation on the generation control, a video display interface, where the video display interface includes the second video with the second visual information.
According to a third aspect of the embodiments of the present application, a video processing device is provided. The video processing device provides a video editing interface, where the video editing interface includes a preview interface, a generation control, and N template videos, N is a positive integer, and N is greater than or equal to 1. The video processing device includes a memory and a processor; the memory is configured to store a computer program, and the processor, coupled to the memory, is configured to execute the computer program to implement the steps of the video processing method provided in the embodiments of the present application.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium storing a computer program, which when executed by a processor causes the processor to implement steps in a video processing method provided by embodiments of the present application.
According to a fifth aspect of embodiments of the present application, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, cause the processor to carry out the steps of the video processing method provided by the embodiments of the present application.
The technical scheme provided by the embodiment of the application at least brings the following beneficial effects:
In the embodiments of the application, N template videos are preset for a first video, and each template video corresponds to visual information. A target template video is selected from the N template videos and spliced with the first video to obtain a second video. In the process of previewing the second video, second visual information corresponding to the first video is generated through an editing operation on the first visual information corresponding to the target template video, and the second video with the second visual information is displayed on a social platform. The second visual information in the second video can improve the viewing experience and interest of viewing users, thereby increasing the activity and platform stickiness of viewing users.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application and do not constitute an undue limitation on the application.
FIG. 1 is a flow chart illustrating a video processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of a video editing interface shown in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram of another video editing interface shown in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram of yet another video editing interface shown in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram of yet another video editing interface shown in accordance with an exemplary embodiment;
FIG. 6 is a schematic diagram of a structure of a video processing apparatus shown in accordance with an exemplary embodiment;
FIG. 7 is a schematic diagram of a structure of a video processing device shown in accordance with an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The prior art sets a cover for a video work, which may fail to match the viewing interests of viewing users and affects the activity and platform stickiness of viewing users. To address this, in the embodiments of the application, N template videos are preset for a first video, and each template video corresponds to visual information. A target template video is selected from the N template videos and spliced with the first video to obtain a second video. In the process of previewing the second video, second visual information corresponding to the first video is generated through an editing operation on the first visual information corresponding to the target template video, and the second video with the second visual information is displayed on a social platform. The second visual information in the second video can improve the viewing experience and interest of viewing users, thereby increasing the activity and platform stickiness of viewing users.
Fig. 1 is a flow chart of a video processing method according to an exemplary embodiment of the present application, where the video processing method provides a video editing interface through a terminal device, where the video editing interface includes a preview interface, a generation control, and N template videos; as shown in fig. 1, the method includes:
101. In response to a trigger operation on the target template video, updating the first video displayed in the preview interface to the target template video, where the target template video includes first visual information, and displaying a target adding control corresponding to the target template video on the video editing interface;
102. in response to a trigger operation on the target adding control, splicing the target template video with the first video to obtain a second video matched with the first video, and previewing the second video through the preview interface;
103. in the process of previewing the first visual information, generating second visual information corresponding to the first video in response to an editing operation on the first visual information;
104. in response to a trigger operation on the generation control, displaying a video display interface, where the video display interface includes the second video with the second visual information.
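For illustration only, the interaction flow of steps 101-104 can be summarized as a small editor controller. The Python sketch below is not taken from the patent: all names (VideoEditor, EditorState, splice, publish) are hypothetical, and the actual splicing and publishing back-ends are assumed to be provided elsewhere.

```python
# Illustrative sketch of the step 101-104 interaction flow (hypothetical names,
# not taken from the patent; splice() and publish() are assumed helpers).
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class TemplateVideo:
    clip_path: str
    visual_info: Dict[str, str]   # first visual information (text, style, position, ...)


@dataclass
class EditorState:
    first_video: str                          # path of the user's imported video
    templates: List[TemplateVideo]            # the N preset template videos
    preview: str = ""                         # what the preview interface currently shows
    selected: Optional[TemplateVideo] = None
    second_video: Optional[str] = None
    second_visual_info: Optional[Dict[str, str]] = None


class VideoEditor:
    def __init__(self, state: EditorState):
        self.state = state
        self.state.preview = state.first_video   # before any trigger, preview the first video

    def on_template_selected(self, index: int) -> None:
        # Step 101: the preview switches from the first video to the target template video.
        self.state.selected = self.state.templates[index]
        self.state.preview = self.state.selected.clip_path

    def on_add_control(self, splice: Callable[[str, str], str]) -> None:
        # Step 102: splice the target template video with the first video into a second video.
        assert self.state.selected is not None
        self.state.second_video = splice(self.state.selected.clip_path, self.state.first_video)
        self.state.preview = self.state.second_video

    def on_edit_visual_info(self, edits: Dict[str, str]) -> None:
        # Step 103: editing the first visual information yields the second visual information.
        assert self.state.selected is not None
        self.state.second_visual_info = {**self.state.selected.visual_info, **edits}

    def on_generate(self, publish: Callable[[Optional[str], Optional[Dict[str, str]]], None]) -> None:
        # Step 104: show the second video carrying the second visual information.
        publish(self.state.second_video, self.state.second_visual_info)
```

In a real APP these handlers would be driven by UI events rather than direct method calls; the sketch only mirrors the state transitions described in steps 101-104.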
In this embodiment, an APP corresponding to a social platform may be installed on the terminal device used by a user. Based on the APP, the first video may be a short video, a movie fragment, a variety program, or the like, which is not limited here. The user can publish the first video through the APP, and before publishing the first video, the user can edit it, for example by adding a template video to the first video and generating, on the basis of the template video, a video for the first video so as to arouse the viewing interest of viewers. The social platform provides a video editing interface, and the video editing interface includes the preview interface and the N template videos, where N is a positive integer and N is greater than or equal to 1. In fig. 2 to fig. 5, the template videos are illustrated by taking 5 template videos as an example, but this is not limiting: the 5 template videos are template video A1, template video A2, template video A3, template video A4 and template video A5, respectively. The preview interface is used for previewing a video. The video previewed on the preview interface can be the first video imported by the user, a template video, or the second video obtained by splicing the first video and a template video, depending on the specific case. For example, the first video may be presented on the preview interface if the user has not triggered a template video, and a template video may be presented if the user has triggered that template video. As shown in fig. 2, in the case where the user does not trigger a template video, the first video is shown on the preview interface.
In this embodiment, the user may perform a trigger operation on the target template video on the video editing interface, where the trigger operation may include, but is not limited to, a click trigger, a slide trigger, a voice trigger, or the like. In the case where the target template video is triggered, the target template video can be highlighted, for example by adding a frame to the target template video; further, different colors, line styles and line widths may be set for the frame. As shown in fig. 3, in the case where template video A1 is triggered, template video A1 is displayed with a frame to distinguish it from the other, non-triggered template videos; as shown in fig. 5, in the case where template video A2 is triggered, template video A2 is displayed with a frame. The manner in which a template video is highlighted when triggered in fig. 3 and fig. 5 is merely exemplary and is not limiting.
In this embodiment, when the target template video is triggered, the first video displayed in the preview interface may be updated to the target template video in response to the trigger operation on the target template video, and a target adding control corresponding to the target template video may be displayed on the video editing interface; in fig. 2, the target adding control is illustrated with "+", but is not limited thereto. In the case where the target adding control is triggered, the target template video and the first video can be spliced in response to the trigger operation on the target adding control to obtain a second video matched with the first video, and the second video is previewed through the preview interface.
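As one possible realization of the splicing described above, the template clip can simply be concatenated with the user's first video. The sketch below assumes the classic moviepy.editor API; the file paths, the output name and the choice to place the template video before the first video are illustrative assumptions, not requirements of this embodiment.

```python
# Hedged sketch: concatenating a template clip with the first video using moviepy
# (assumes the classic moviepy.editor API; paths are placeholders).
from moviepy.editor import VideoFileClip, concatenate_videoclips


def splice(template_path: str, first_video_path: str, out_path: str = "second_video.mp4") -> str:
    template_clip = VideoFileClip(template_path)
    first_clip = VideoFileClip(first_video_path)
    # The template video is placed before the first video here; the embodiment does not
    # mandate an order, so this is only one possible arrangement.
    second_video = concatenate_videoclips([template_clip, first_clip], method="compose")
    second_video.write_videofile(out_path)
    return out_path
```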
In this embodiment, the target template video includes first visual information, and an editing operation may be performed on the first visual information in the process of previewing the second video. The manner of editing the first visual information is not limited in this embodiment. In an alternative embodiment, the first visual information includes text, a picture or a background, and the editing operation on the first visual information may be adding, deleting or modifying text content; modifying the text style, arrangement (such as horizontal or vertical), text color or text background; or modifying the display position of the picture or text. In another alternative embodiment, the visual information of a template video includes editable information and non-editable information, different template videos correspond to different non-editable information, and the editable information of different template videos may be the same or different. For example, the editable information includes the content, style and font color of the text, the hue, saturation and brightness of the picture, and the display position of the text or picture, while the non-editable information includes the background of the text, which may be a solid-color background or a picture background, without limitation. In this case the editing operation on the first visual information may be an editing operation on the editable information. It should be noted that, besides visual information, the template video may further include a speaking tone, and the text content in the template video is read aloud in that speaking tone; the speaking tone may be a male voice or a female voice, and different template videos have different speaking tones, so a user may select a speaking tone by selecting a template video. Alternatively, in the process of previewing the second video, a tone selection control is displayed on the video editing interface, the speaking tone corresponding to the tone selection control is obtained in response to a trigger operation on the tone selection control, and the text content in the template video is read aloud based on that speaking tone.
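The split between editable information, non-editable information and the optional speaking tone described above can be modelled as a simple data structure. The sketch below is only an assumed organization; none of the field names or default values come from the patent.

```python
# Hypothetical model of a template video's visual information: editable fields may be
# changed by the user during preview, non-editable fields are fixed per template.
from dataclasses import dataclass, field
from typing import Optional, Tuple


@dataclass
class EditableInfo:
    text: str = "my video diary"
    style: str = "bold"                  # e.g. bold / italic
    color: str = "#FFFFFF"
    position: Tuple[int, int] = (0, 0)   # display position of the text or picture


@dataclass
class NonEditableInfo:
    text_background: str = "solid:#FFFFFF"   # a solid-color background or a picture background


@dataclass
class TemplateVisualInfo:
    editable: EditableInfo = field(default_factory=EditableInfo)
    non_editable: NonEditableInfo = field(default_factory=NonEditableInfo)
    speaking_tone: Optional[str] = None       # e.g. "male" or "female"; used to read the text aloud

    def apply_edits(self, **changes) -> "TemplateVisualInfo":
        # Only editable fields may change; the result plays the role of the second visual information.
        edited = EditableInfo(**{**self.editable.__dict__, **changes})
        return TemplateVisualInfo(edited, self.non_editable, self.speaking_tone)
```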
In the present embodiment, in the case where the first visual information is edited, the second visual information corresponding to the first video may be generated in response to the editing operation on the first visual information. In addition to the preview interface and the template videos, the video editing interface further includes a generation control. In the case where the generation control is triggered, a video display interface may be displayed in response to the trigger operation on the generation control, where the video display interface includes the second video with the second visual information, so that other users on the social platform can watch the second video with the second visual information. The generation controls in fig. 2 to fig. 3 and fig. 5 are illustrated by way of example and not limitation, and in fig. 4 the generation control is illustrated as a circular button by way of example. In addition to the generation control, the video editing interface includes a cancel control for canceling the setting of the template video (e.g., the title) for the first video; the cancel control in fig. 2 to fig. 3 and fig. 5 is illustrated with "x" by way of example, but is not limited thereto.
In an optional embodiment, the video display interface includes an upload control; when the upload control is triggered, the first video uploaded by the user can be acquired, in response to the trigger operation on the upload control, and displayed on the preview interface of the video editing interface.
In the embodiments of the application, N template videos are preset for a first video, and each template video corresponds to visual information. A target template video is selected from the N template videos and spliced with the first video to obtain a second video. In the process of previewing the second video, second visual information corresponding to the first video is generated through an editing operation on the first visual information corresponding to the target template video, and the second video with the second visual information is displayed on a social platform. The second visual information in the second video can improve the viewing experience and interest of viewing users, thereby increasing the activity and platform stickiness of viewing users.
In an optional embodiment, after the target template video is spliced with the first video in response to the trigger operation on the target adding control to obtain the second video adapted to the first video, the method further includes: adding third visual information corresponding to the target template video, where the third visual information represents that the target template video has been spliced with the first video to obtain the second video. The third visual information may be any visual information capable of highlighting the target template video; for example, the third visual information may be a line of a particular color, line style and line width that outlines the target template video, or a graph of a particular shape, for example a triangle, a circle or a star, added to the target template video and used as the third visual information. Fig. 4 illustrates the third visual information as a circle, but is not limited thereto.
In an optional embodiment, after the second video is generated, replacement of the target template video in the second video is supported. Specifically, in the process of previewing the second video through the preview interface, in response to a trigger operation on a new target template video, the second video displayed in the preview interface is updated to the new target template video, and a replacement control is displayed on the video editing interface, as shown in fig. 5. In the case where the replacement control is triggered, the first video and the new target template video can be spliced in response to the trigger operation on the replacement control to obtain a new second video corresponding to the first video, and the new second video is previewed through the preview interface; the third visual information is deleted, and fourth visual information is added on the new target template video. The fourth visual information and the third visual information may be the same or different, and the fourth visual information may be a line of a particular color, line style and line width, a pattern of a particular shape, or the like.
In an optional embodiment, in addition to supporting replacement of the target template video in the second video, deletion of the target template video in the second video is supported. Specifically, in the process of previewing the second video through the preview interface, the third visual information may be deleted in response to a trigger operation on the target template video, so that the second video displayed in the preview interface is updated to the first video.
In an alternative embodiment, after the second video is generated, a play control and a time axis corresponding to the second video may also be displayed on the video editing interface. The play control is used for playing the second video, for example starting or pausing playback. The time axis distinguishes the target template video from the first video through different visual information. For example, the portion of the time axis corresponding to the first video is displayed using thumbnails of the first video, and the portion of the time axis corresponding to the target template video is displayed using a video frame, as shown in fig. 4.
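The time axis described above can be thought of as a list of segments, each rendered differently depending on whether it comes from the target template video or from the first video. The sketch below illustrates that idea under assumed names; the segment fields and rendering labels are not taken from the patent.

```python
# Illustrative timeline model: the template part is shown as a video frame and the
# first-video part as thumbnails, matching the distinction described above.
from dataclasses import dataclass
from typing import List


@dataclass
class TimelineSegment:
    source: str       # "template" or "first_video"
    start: float      # seconds, measured within the second video
    end: float
    rendering: str    # e.g. "video_frame" or "thumbnails"


def build_timeline(template_duration: float, first_video_duration: float) -> List[TimelineSegment]:
    return [
        TimelineSegment("template", 0.0, template_duration, rendering="video_frame"),
        TimelineSegment("first_video", template_duration,
                        template_duration + first_video_duration, rendering="thumbnails"),
    ]
```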
In this embodiment, the manner of generating the template videos is not limited: a template video may be a video clip of the first video, or may not be a video clip of the first video. The two cases are described below.
Case B1: the template video is not a video clip of the first video. N different template videos may be preset, where different template videos have different visual information; for example, the visual information corresponding to different template videos may have different editable information, different non-editable information, or both. For example, there are two template videos, namely template video C1 and template video C2; the editable information of the template videos is the text content, text style and text color, and the non-editable information is the text background. As shown in fig. 3 and fig. 4, the editable information of template video C1 is the bold white text "my video diary" and its non-editable information is a picture background; the editable information of template video C2 is the italic black text "my video diary" and its non-editable information is a white background. The duration of a template video is not limited and may be, for example, 3-5 seconds.
Case B2: the template video is a video clip of the first video. In the case where the first video is displayed on the preview interface, the first video can be intercepted to obtain N video clips corresponding to the first video, and visual information is added to each video clip to obtain the N template videos corresponding to the first video.
Optionally, intercepting the first video to obtain the N video clips includes at least one of the following operations (an illustrative sketch is given after operation E3 below):
Operation E1: according to the duration of the first video, the first video is equally divided into N first candidate segments, and a video segment of a set first duration is intercepted from each first candidate segment to obtain the N video segments corresponding to the first video, where the first duration is smaller than the duration of a candidate segment; the first duration may be the duration of the template video, for example 3-5 seconds.
Operation E2: basic attribute information of the image frames in the first video is acquired, where the basic attribute information includes, but is not limited to, pixels, resolution, size, color, bit depth, hue, saturation, brightness, color channels, the hierarchical composition of the image, and the like. Consecutive image frames whose basic attribute information meets a set attribute condition are determined according to the basic attribute information of the image frames in the first video, and the N video segments corresponding to the first video are obtained according to the second candidate segments formed by these consecutive image frames. The set attribute condition may be, for example, that the brightness satisfies a set brightness threshold or that the saturation satisfies a set saturation threshold, which is not limited here. If the number M of second candidate segments is greater than or equal to N, N second candidate segments can be selected directly, and a video segment of the set first duration is intercepted from each of the N second candidate segments to obtain the N video segments corresponding to the first video. If the number M of second candidate segments is smaller than N, a video segment of the set first duration is intercepted from each second candidate segment to obtain M video segments corresponding to the first video, and N-M further video segments of the set first duration are intercepted from random positions in the first video, so that the N video segments corresponding to the first video are finally obtained.
Operation E3: a third candidate segment including a face area is identified from the first video, fourth candidate segments whose face area ratio exceeds a set proportion threshold are determined in the third candidate segment, and the N video segments corresponding to the first video are determined according to the fourth candidate segments. The face area ratio represents the ratio of the face area to the area occupied by the preview interface, or the ratio of the face area appearing on the preview interface to the total face area. If the number L of fourth candidate segments is greater than or equal to N, N fourth candidate segments are selected, and a video segment of the first duration is intercepted from each of the N fourth candidate segments to obtain the N video segments corresponding to the first video. If the number L of fourth candidate segments is smaller than N, a video segment of the first duration is intercepted from each of the L fourth candidate segments to obtain L video segments corresponding to the first video, and N-L further video segments of the first duration are intercepted from the third candidate segment, so that the N video segments corresponding to the first video are finally obtained.
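Operations E1-E3 describe selection logic rather than any concrete implementation. The Python sketch below shows one possible realization under assumed inputs: the caller supplies the video duration, per-frame attribute flags and face-area measurements, and all function and parameter names are hypothetical.

```python
# Hedged sketches of operations E1-E3 (hypothetical helpers; the embodiment only
# specifies the selection logic, not an implementation).
import random
from typing import List, Tuple

Segment = Tuple[float, float]   # (start_time, end_time) in seconds


def clip_e1(video_duration: float, n: int, first_duration: float) -> List[Segment]:
    # E1: divide the first video equally into N first candidate segments and cut a clip of
    # the set first duration out of each (first_duration < candidate segment duration).
    candidate_len = video_duration / n
    return [(i * candidate_len, i * candidate_len + first_duration) for i in range(n)]


def consecutive_runs(frame_ok: List[bool], fps: float) -> List[Segment]:
    # Group consecutive frames whose basic attribute (e.g. brightness) meets the set
    # condition into second candidate segments; frame_ok[i] flags frame i.
    runs, start = [], None
    for i, ok in enumerate(frame_ok + [False]):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            runs.append((start / fps, i / fps))
            start = None
    return runs


def clip_e2(video_duration: float, frame_ok: List[bool], fps: float,
            n: int, first_duration: float) -> List[Segment]:
    # E2: cut clips from the attribute-based second candidate segments; if there are fewer
    # than N of them, top up with clips cut from random positions in the first video.
    candidates = [c for c in consecutive_runs(frame_ok, fps) if c[1] - c[0] >= first_duration]
    clips = [(s, s + first_duration) for s, _ in candidates[:n]]
    while len(clips) < n:
        s = random.uniform(0.0, max(video_duration - first_duration, 0.0))
        clips.append((s, s + first_duration))
    return clips


def clip_e3(face_segments: List[Segment], face_ratios: List[float], ratio_threshold: float,
            n: int, first_duration: float) -> List[Segment]:
    # E3: prefer the fourth candidate segments whose face-area ratio exceeds the set
    # threshold; if there are fewer than N, top up from the remaining face-containing
    # (third candidate) segments.
    preferred = [seg for seg, r in zip(face_segments, face_ratios) if r > ratio_threshold]
    remaining = [seg for seg in face_segments if seg not in preferred]
    pool = preferred + remaining
    return [(s, s + first_duration) for s, _ in pool[:n]]
```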
Fig. 6 is a schematic structural diagram of a video processing apparatus according to an exemplary embodiment. The video processing apparatus provides a video editing interface, where the video editing interface includes a preview interface, a generation control, and N template videos, N is a positive integer, and N is greater than or equal to 1. As shown in fig. 6, the video processing apparatus includes: an updating module 61, a splicing module 62, a generating module 63 and a display module 64.
The updating module 61 is configured to update a first video displayed in the preview interface to a target template video in response to a triggering operation on the target template video, where the target template video includes first visualization information, and display a target adding control corresponding to the target template video on the video editing interface;
the splicing module 62 is configured to splice the target template video and the first video in response to a trigger operation on the target adding control, obtain a second video adapted to the first video, and preview the second video through the preview interface;
the generating module 63 is configured to generate second visual information corresponding to the first video in response to an editing operation on the first visual information during previewing the first visual information;
the display module 64 is configured to display a video display interface in response to a trigger operation on the generation control, where the video display interface includes the second video with the second visual information.
In an alternative embodiment, the video processing apparatus further includes an adding module, configured to add third visual information corresponding to the target template video after the target template video is spliced with the first video, in response to the trigger operation on the target adding control, to obtain the second video matched with the first video, where the third visual information represents that the target template video has been spliced with the first video to obtain the second video.
In an alternative embodiment, in the process of previewing the second video through the preview interface, the updating module 61 is further configured to update the second video displayed in the preview interface to a new target template video in response to a trigger operation on the new target template video, and to display a replacement control on the video editing interface; the splicing module 62 is further configured to splice the first video and the new target template video in response to a trigger operation on the replacement control, obtain a new second video corresponding to the first video, and preview the new second video through the preview interface; the video processing apparatus further includes a deleting module, where the deleting module is configured to delete the third visual information, and the adding module is further configured to add fourth visual information on the new target template video.
In an optional embodiment, in the process of previewing the second video through the preview interface, the deleting module is further configured to delete the third visual information in response to a trigger operation on the target template video, and the updating module 61 is further configured to update the second video displayed in the preview interface to the first video.
In an alternative embodiment, during the process of previewing the second video through the preview interface, the display module 64 is further configured to display a play control and a time axis corresponding to the second video on the video editing interface, where the play control is used to play the second video, and the time axis distinguishes the target template video and the first video through different visual information.
In an alternative embodiment, the visual information corresponding to each template video includes editable information and non-editable information, and different template videos correspond to different non-editable information; the editable information comprises text contents, text styles, text colors and display positions of texts or pictures; the non-editable information includes: text background.
In an optional embodiment, the video processing apparatus includes an intercepting module, where the intercepting module is configured to intercept the first video to obtain N video clips corresponding to the first video; the adding module is further configured to add visual information to each video clip to obtain the N template videos corresponding to the first video.
In an alternative embodiment, the intercepting module is specifically configured to perform at least one of the following operations: dividing the first video equally into N first candidate segments according to the duration of the first video, and intercepting video segments of a set first duration from the N first candidate segments to obtain the N video segments corresponding to the first video, where the first duration is smaller than the duration of a candidate segment; determining, according to the basic attribute information of the image frames in the first video, consecutive image frames whose basic attribute information meets a set attribute condition, and obtaining the N video segments corresponding to the first video according to the M second candidate segments formed by the consecutive image frames, where M is greater than or equal to N; identifying a third candidate segment comprising a face area from the first video, determining, in the third candidate segment, L fourth candidate segments whose face area ratio exceeds a set proportion threshold, and determining the N video segments corresponding to the first video according to the L fourth candidate segments, where L is greater than or equal to N; the face area ratio represents the ratio of the face area to the area occupied by the preview interface, or the ratio of the face area appearing on the preview interface to the total face area.
According to the video processing apparatus provided by the embodiments of the application, N template videos are preset for a first video, and each template video corresponds to visual information. A target template video is selected from the N template videos and spliced with the first video to obtain a second video. In the process of previewing the second video, second visual information corresponding to the first video is generated through an editing operation on the first visual information corresponding to the target template video, and the second video with the second visual information is displayed on a social platform. The second visual information in the second video can improve the viewing experience and interest of viewing users, thereby increasing the activity and platform stickiness of viewing users.
FIG. 7 is a schematic diagram of a video processing device according to an exemplary embodiment, the video processing device providing a video editing interface, the video editing interface including a preview interface, a generation control, and N template videos, N being a positive integer, and N being greater than or equal to 1; as shown in fig. 7, the video processing apparatus includes: a memory 74 and a processor 75.
Memory 74 is used to store computer programs and may be configured to store various other data to support operations on the video processing device. Examples of such data include instructions for any application or method operating on the video processing device, contact data, phonebook data, messages, pictures, video, and the like.
The memory 74 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
A processor 75 coupled to the memory 74 for executing the computer program in the memory 74 for: responding to triggering operation on a target template video, updating a first video displayed in the preview interface into the target template video, wherein the target template video comprises first visual information, and displaying a target adding control corresponding to the target template video on the video editing interface; responding to the triggering operation of the target adding control, splicing the target template video with the first video to obtain a second video matched with the first video, and previewing the second video through the preview interface; in the process of previewing the first visual information, responding to editing operation of the first visual information, and generating second visual information corresponding to the first video; and responding to the triggering operation of the generation control, displaying a video display interface, wherein the video display interface comprises a second video with the second visual information.
In an alternative embodiment, the processor 75 is further configured, after splicing the target template video with the first video in response to the triggering operation of the target adding control, to obtain a second video adapted to the first video: and adding third visual information corresponding to the target template video, wherein the third visual information represents that the target template video is spliced with the first video to obtain a second video.
In an alternative embodiment, the processor 75 is further configured to, during previewing of the second video through the preview interface: in response to a trigger operation on a new target template video, update the second video displayed in the preview interface to the new target template video and display a replacement control on the video editing interface; in response to a trigger operation on the replacement control, splice the first video and the new target template video to obtain a new second video corresponding to the first video, and preview the new second video through the preview interface; and delete the third visual information and add fourth visual information on the new target template video.
In an alternative embodiment, the processor 75 is further configured to, during previewing of the second video through the preview interface: and deleting the third visual information in response to the triggering operation of the target template video, and updating the second video displayed in the preview interface to the first video.
In an alternative embodiment, the processor 75 is further configured to, during previewing of the second video through the preview interface: and displaying a playing control and a time axis corresponding to the second video on the video editing interface, wherein the playing control is used for playing the second video, and the time axis distinguishes the target template video from the first video through different visual information.
In an alternative embodiment, the visual information corresponding to each template video includes editable information and non-editable information, and different template videos correspond to different non-editable information; the editable information includes the text content, text style, text color and the display position of the text or picture, and the non-editable information includes the text background.
In an alternative embodiment, processor 75 is further configured to: intercepting the first video to obtain N video clips corresponding to the first video, and adding visual information for each video clip to obtain N template videos corresponding to the first video.
In an alternative embodiment, when the first video is intercepted to obtain the N video segments corresponding to the first video, the processor 75 is specifically configured to perform at least one of the following operations: dividing the first video equally into N first candidate segments according to the duration of the first video, and intercepting video segments of a set first duration from the N first candidate segments to obtain the N video segments corresponding to the first video, where the first duration is smaller than the duration of a candidate segment; determining, according to the basic attribute information of the image frames in the first video, consecutive image frames whose basic attribute information meets a set attribute condition, and obtaining the N video segments corresponding to the first video according to the M second candidate segments formed by the consecutive image frames, where M is greater than or equal to N; identifying a third candidate segment comprising a face area from the first video, determining, in the third candidate segment, L fourth candidate segments whose face area ratio exceeds a set proportion threshold, and determining the N video segments corresponding to the first video according to the L fourth candidate segments, where L is greater than or equal to N; the face area ratio represents the ratio of the face area to the area occupied by the preview interface, or the ratio of the face area appearing on the preview interface to the total face area.
According to the video processing device provided by the embodiments of the application, N template videos are preset for a first video, and each template video corresponds to visual information. A target template video is selected from the N template videos and spliced with the first video to obtain a second video. In the process of previewing the second video, second visual information corresponding to the first video is generated through an editing operation on the first visual information corresponding to the target template video, and the second video with the second visual information is displayed on a social platform. The second visual information in the second video can improve the viewing experience and interest of viewing users, thereby increasing the activity and platform stickiness of viewing users.
Further, as shown in fig. 7, the video processing device further includes: a communication component 76, a display 77, a power component 78, an audio component 79, and the like. Only some of the components are schematically shown in fig. 7, which does not mean that the video processing device comprises only the components shown in fig. 7. It should be noted that the components within the dashed box in fig. 7 are optional rather than mandatory components, depending on the specific product form of the video processing device.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing a computer program, which when executed by a processor, causes the processor to implement the steps in the method of fig. 1 provided by the embodiments of the present application.
Accordingly, embodiments of the present application also provide a computer program product comprising a computer program/instructions which, when executed by a processor, cause the processor to carry out the steps of the method of fig. 1 provided by the embodiments of the present application.
The communication assembly of fig. 7 is configured to facilitate wired or wireless communication between the device in which the communication assembly is located and other devices. The device where the communication component is located can access a wireless network based on a communication standard, such as a mobile communication network of WiFi,2G, 3G, 4G/LTE, 5G, etc., or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further comprises a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
The display in fig. 7 described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The power supply assembly shown in fig. 7 provides power to various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the devices in which the power components are located.
The audio component of fig. 7 described above may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a speech recognition mode. The received audio signal may be further stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (13)

1. A video processing method, characterized in that a video editing interface is provided through a terminal device, wherein the video editing interface comprises a preview interface, a generation control and N template videos, N is a positive integer, and N is greater than or equal to 1; the method comprises the following steps:
in response to a triggering operation on a target template video, updating a first video displayed in the preview interface to the target template video, wherein the target template video comprises first visual information, and displaying a target adding control corresponding to the target template video on the video editing interface;
in response to a triggering operation on the target adding control, splicing the target template video with the first video to obtain a second video adapted to the first video, and previewing the second video through the preview interface;
in the process of previewing the first visual information, in response to an editing operation on the first visual information, generating second visual information corresponding to the first video;
in response to a triggering operation on the generation control, displaying a video display interface, wherein the video display interface comprises the second video with the second visual information;
the method further comprises: intercepting the first video to obtain N video segments corresponding to the first video, and adding visual information to each video segment to obtain the N template videos corresponding to the first video;
the method comprises the steps of intercepting the first video to obtain N video clips corresponding to the first video, wherein the N video clips comprise at least one of the following operations:
dividing the first video into N first candidate segments according to the duration of the first video, and intercepting video segments of a set first duration from the N first candidate segments to obtain the N video segments corresponding to the first video, wherein the first duration is less than the duration of the candidate segments;
determining, according to basic attribute information of image frames in the first video, consecutive image frames whose basic attribute information meets a set attribute condition, and obtaining the N video segments corresponding to the first video according to M second candidate segments formed by the consecutive image frames, wherein M is greater than or equal to N;
identifying, from the first video, a third candidate segment comprising a face area, determining, in the third candidate segment, L fourth candidate segments whose face area ratio exceeds a set proportion threshold, and determining the N video segments corresponding to the first video according to the L fourth candidate segments, wherein L is greater than or equal to N; the face area ratio represents the ratio of the face area to the area occupied by the preview interface, or the ratio of the face area appearing in the preview interface to the total face area.
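Purely for illustration (this sketch is not part of the claims and does not describe the patented implementation; the function, class, and field names, the frame-metadata format, and the selection rules, such as clipping from the head of each candidate or keeping the longest runs, are all assumptions), the three segment-extraction operations recited in claim 1 above could be sketched in Python over simple frame and segment metadata as follows:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Segment:
    start: float  # seconds
    end: float    # seconds

    @property
    def duration(self) -> float:
        return self.end - self.start


def split_by_duration(total_duration: float, n: int, first_duration: float) -> List[Segment]:
    """Operation 1: divide the video into N equal first candidate segments and
    clip a segment of the set first duration from each candidate (here, from its head)."""
    candidate_len = total_duration / n
    assert first_duration < candidate_len, "first duration must be shorter than a candidate segment"
    return [Segment(i * candidate_len, i * candidate_len + first_duration) for i in range(n)]


def segments_from_attribute(frames: List[dict], n: int,
                            meets_condition: Callable[[dict], bool],
                            fps: float = 30.0) -> List[Segment]:
    """Operation 2: collect runs of consecutive frames whose basic attribute
    information meets the set condition (M >= N second candidate segments),
    then keep N of them (here, the N longest)."""
    runs: List[Segment] = []
    run_start = None
    for i, frame in enumerate(frames):
        if meets_condition(frame):
            if run_start is None:
                run_start = i
        elif run_start is not None:
            runs.append(Segment(run_start / fps, i / fps))
            run_start = None
    if run_start is not None:
        runs.append(Segment(run_start / fps, len(frames) / fps))
    runs.sort(key=lambda s: s.duration, reverse=True)
    return runs[:n]


def segments_by_face_ratio(frames: List[dict], n: int, ratio_threshold: float,
                           fps: float = 30.0) -> List[Segment]:
    """Operation 3: keep runs whose per-frame face area ratio (face area over the
    preview-interface area, or visible face area over total face area, assumed
    here to be precomputed per frame) exceeds the set proportion threshold."""
    return segments_from_attribute(
        frames, n, lambda f: f.get("face_ratio", 0.0) > ratio_threshold, fps)
```

For example, under these assumptions split_by_duration(60.0, 4, 5.0) would yield four 5-second segments, one taken from the head of each 15-second candidate.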
2. The method of claim 1, wherein, after the target template video is spliced with the first video in response to the triggering operation on the target adding control to obtain the second video adapted to the first video, the method further comprises:
adding third visual information corresponding to the target template video, wherein the third visual information indicates that the target template video has been spliced with the first video to obtain the second video.
3. The method of claim 2, wherein, in the process of previewing the second video through the preview interface, the method further comprises:
in response to a triggering operation on a new target template video, updating the second video displayed in the preview interface to the new target template video, and displaying a replacement control on the video editing interface;
in response to a triggering operation on the replacement control, splicing the first video with the new target template video to obtain a new second video corresponding to the first video, and previewing the new second video through the preview interface; and deleting the third visual information and adding fourth visual information to the new target template video.
4. The method of claim 2, wherein, in the process of previewing the second video through the preview interface, the method further comprises:
in response to a triggering operation on the target template video, deleting the third visual information, and updating the second video displayed in the preview interface to the first video.
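A minimal sketch, again for illustration only, of the marker bookkeeping described in claims 2 to 4: the third visual information marks the template currently spliced with the first video, replacing the template removes that marker and adds fourth visual information to the new template, and tapping the spliced template again cancels the splice. The class, state, and marker names are hypothetical.

```python
class EditorState:
    """Hypothetical preview state; string markers stand in for the third/fourth visual information."""

    def __init__(self, first_video: str):
        self.first_video = first_video
        self.spliced_template = None   # template currently spliced with the first video
        self.markers = {}              # template id -> visual marker shown on that template

    def splice(self, template: str):
        # Claim 2: mark the template once it has been spliced with the first video.
        self.spliced_template = template
        self.markers[template] = "third_visual_information"
        return (self.first_video, template)            # the "second video"

    def replace(self, new_template: str):
        # Claim 3: delete the old marker, splice the new template, and mark it.
        if self.spliced_template is not None:
            self.markers.pop(self.spliced_template, None)
        self.spliced_template = new_template
        self.markers[new_template] = "fourth_visual_information"
        return (self.first_video, new_template)        # the "new second video"

    def cancel(self):
        # Claim 4: tapping the spliced template removes the marker and the
        # preview falls back to the first video alone.
        if self.spliced_template is not None:
            self.markers.pop(self.spliced_template, None)
            self.spliced_template = None
        return self.first_video
```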
5. The method of claim 1, wherein, in the process of previewing the second video through the preview interface, the method further comprises:
displaying, on the video editing interface, a playing control and a timeline corresponding to the second video, wherein the playing control is used for playing the second video, and the timeline distinguishes the target template video from the first video through different visual information.
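As a rough illustration of the timeline in claim 5 (hypothetical names; it is assumed here, purely for the sketch, that the target template video is appended after the first video), the two portions of the second video could be modelled as ranges carrying different visual styles:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TimelineRange:
    start: float   # seconds from the start of the second video
    end: float
    source: str    # "first_video" or "target_template_video"
    style: str     # visual style used to tell the two sources apart


def build_timeline(first_duration: float, template_duration: float) -> List[TimelineRange]:
    """Return timeline ranges whose styles distinguish the first video from the template."""
    return [
        TimelineRange(0.0, first_duration, "first_video", "plain"),
        TimelineRange(first_duration, first_duration + template_duration,
                      "target_template_video", "highlighted"),
    ]
```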
6. The method of claim 1, wherein:
the visual information corresponding to each template video comprises editable information and non-editable information, and different template videos correspond to different non-editable information; the editable information comprises text content, text style, text color, and the display position of text or a picture; and the non-editable information comprises a text background.
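One possible, purely illustrative way to model the editable and non-editable parts of a template's visual information from claim 6 (field names and default values are assumptions):

```python
from dataclasses import dataclass, field
from typing import Tuple


@dataclass(frozen=True)
class NonEditableInfo:
    text_background: str  # fixed per template; differs between templates


@dataclass
class EditableInfo:
    text_content: str = ""
    text_style: str = "regular"
    text_color: str = "#FFFFFF"
    display_position: Tuple[float, float] = (0.5, 0.9)  # normalized position of the text or picture


@dataclass
class TemplateVisualInfo:
    non_editable: NonEditableInfo
    editable: EditableInfo = field(default_factory=EditableInfo)
```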
7. A video processing apparatus, characterized in that the video processing apparatus provides a video editing interface, wherein the video editing interface comprises a preview interface, a generation control and N template videos, N is a positive integer, and N is greater than or equal to 1; the video processing apparatus comprises an updating module, a splicing module, a generating module and a display module;
the updating module is configured to, in response to a triggering operation on a target template video, update a first video displayed in the preview interface to the target template video, wherein the target template video comprises first visual information, and display a target adding control corresponding to the target template video on the video editing interface;
the splicing module is configured to, in response to a triggering operation on the target adding control, splice the target template video with the first video to obtain a second video adapted to the first video, and preview the second video through the preview interface;
the generating module is configured to, in the process of previewing the first visual information, generate second visual information corresponding to the first video in response to an editing operation on the first visual information;
the display module is configured to, in response to a triggering operation on the generation control, display a video display interface, wherein the video display interface comprises the second video with the second visual information;
the video processing apparatus further comprises an intercepting module;
the intercepting module is configured to intercept the first video to obtain N video segments corresponding to the first video, and to add visual information to each video segment to obtain the N template videos corresponding to the first video;
the intercepting module is specifically configured to perform at least one of the following operations:
dividing the first video into N first candidate segments according to the duration of the first video, and intercepting video segments of a set first duration from the N first candidate segments to obtain the N video segments corresponding to the first video, wherein the first duration is less than the duration of the candidate segments;
determining, according to basic attribute information of image frames in the first video, consecutive image frames whose basic attribute information meets a set attribute condition, and obtaining the N video segments corresponding to the first video according to M second candidate segments formed by the consecutive image frames, wherein M is greater than or equal to N;
identifying, from the first video, a third candidate segment comprising a face area, determining, in the third candidate segment, L fourth candidate segments whose face area ratio exceeds a set proportion threshold, and determining the N video segments corresponding to the first video according to the L fourth candidate segments, wherein L is greater than or equal to N; the face area ratio represents the ratio of the face area to the area occupied by the preview interface, or the ratio of the face area appearing in the preview interface to the total face area.
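For illustration only, the modules recited in claim 7 might be composed as plain objects along the following lines; the method names and signatures are assumptions and not part of the claim:

```python
class VideoProcessingApparatus:
    """Hypothetical wiring of the claim-7 modules."""

    def __init__(self, updating_module, splicing_module, generating_module,
                 display_module, intercepting_module):
        self.updating_module = updating_module          # swaps the preview to the tapped template
        self.splicing_module = splicing_module          # joins the template video with the first video
        self.generating_module = generating_module      # turns edits into second visual information
        self.display_module = display_module            # shows the video display interface
        self.intercepting_module = intercepting_module  # clips the N template videos from the first video

    def prepare_templates(self, first_video, n):
        # Clip N segments from the first video and decorate each with visual information.
        segments = self.intercepting_module.intercept(first_video, n)
        return [self.intercepting_module.add_visual_info(seg) for seg in segments]
```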
8. The apparatus of claim 7, wherein the video processing apparatus further comprises an adding module;
after the target template video is spliced with the first video in response to the triggering operation on the target adding control and the second video adapted to the first video is obtained, the adding module is configured to:
add third visual information corresponding to the target template video, wherein the third visual information indicates that the target template video has been spliced with the first video to obtain the second video.
9. The apparatus of claim 8, wherein the video processing apparatus further comprises a deleting module;
in the process of previewing the second video through the preview interface, the updating module is further configured to, in response to a triggering operation on a new target template video, update the second video displayed in the preview interface to the new target template video; the display module is further configured to display a replacement control on the video editing interface;
the splicing module is further configured to splice the first video with the new target template video in response to a triggering operation on the replacement control to obtain a new second video corresponding to the first video, and the display module is further configured to preview the new second video through the preview interface; the deleting module is configured to delete the third visual information and add fourth visual information to the new target template video.
10. The apparatus of claim 9, wherein, in the process of previewing the second video through the preview interface, the deleting module is further configured to:
delete the third visual information in response to a triggering operation on the target template video, and update the second video displayed in the preview interface to the first video.
11. The apparatus of claim 7, wherein, in the process of previewing the second video through the preview interface, the display module is further configured to:
display, on the video editing interface, a playing control and a timeline corresponding to the second video, wherein the playing control is used for playing the second video, and the timeline distinguishes the target template video from the first video through different visual information.
12. Video processing equipment, characterized in that the video processing equipment provides a video editing interface, wherein the video editing interface comprises a preview interface, a generation control and N template videos, N is a positive integer, and N is greater than or equal to 1; the video processing equipment comprises a memory and a processor; the memory is configured to store a computer program; and the processor, coupled to the memory, is configured to execute the computer program to implement the video processing method of any one of claims 1 to 6.
13. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, causes the processor to carry out the steps of the method of any one of claims 1 to 6.
CN202210234091.XA 2022-03-10 2022-03-10 Video processing method, device, equipment and storage medium Active CN114666669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210234091.XA CN114666669B (en) 2022-03-10 2022-03-10 Video processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114666669A CN114666669A (en) 2022-06-24
CN114666669B true CN114666669B (en) 2024-03-19

Family

ID=82029485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210234091.XA Active CN114666669B (en) 2022-03-10 2022-03-10 Video processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114666669B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001197430A (en) * 1999-11-05 2001-07-19 Matsushita Electric Ind Co Ltd Moving image editing device and editing method
CN104349175A (en) * 2014-08-18 2015-02-11 周敏燕 Video producing system and video producing method based on mobile phone terminal
CN111787188A (en) * 2019-04-04 2020-10-16 百度在线网络技术(北京)有限公司 Video playing method and device, terminal equipment and storage medium
CN111918131A (en) * 2020-08-18 2020-11-10 北京达佳互联信息技术有限公司 Video generation method and device
CN112291484A (en) * 2019-07-23 2021-01-29 腾讯科技(深圳)有限公司 Video synthesis method and device, electronic equipment and storage medium
CN113452941A (en) * 2021-05-14 2021-09-28 北京达佳互联信息技术有限公司 Video generation method and device, electronic equipment and storage medium
CN113691836A (en) * 2021-10-26 2021-11-23 阿里巴巴达摩院(杭州)科技有限公司 Video template generation method, video generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN114666669A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN112073649B (en) Multimedia data processing method, multimedia data generating method and related equipment
US20210160435A1 (en) Fast and/or slow motion compensating timer display
US20170024110A1 (en) Video editing on mobile platform
CN108924622B (en) Video processing method and device, storage medium and electronic device
CN111356000A (en) Video synthesis method, device, equipment and storage medium
CN111866596A (en) Bullet screen publishing and displaying method and device, electronic equipment and storage medium
WO2022022262A1 (en) Processing method for multimedia resource, publishing method and electronic device
CN111918131A (en) Video generation method and device
CN110633380B (en) Control method and device for picture processing interface, electronic equipment and readable medium
CN113099287A (en) Video production method and device
CN110704059A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113238752A (en) Code generation method and device, electronic equipment and storage medium
CN111835985A (en) Video editing method, device, apparatus and storage medium
CN113099288A (en) Video production method and device
CN113259776B (en) Binding method and device of caption and sound source
CN113157181B (en) Operation guiding method and device
CN111736746A (en) Multimedia resource processing method and device, electronic equipment and storage medium
CN113365010B (en) Volume adjusting method, device, equipment and storage medium
KR20180129265A (en) APP system having a function of editing motion picture
CN114666669B (en) Video processing method, device, equipment and storage medium
CN113364999B (en) Video generation method and device, electronic equipment and storage medium
CN113946246A (en) Page processing method and device, electronic equipment and computer readable storage medium
CN112115696A (en) Data processing method and device and recording equipment
KR20080017747A (en) A device having function editting of image and method thereof
CN117714774B (en) Method and device for manufacturing video special effect cover, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant