CN114339075A - Video editing method and device, electronic equipment and storage medium - Google Patents

Video editing method and device, electronic equipment and storage medium

Info

Publication number
CN114339075A
CN114339075A (application CN202111565166.4A)
Authority
CN
China
Prior art keywords: video, target, clipped, frame images, target frame
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111565166.4A
Other languages
Chinese (zh)
Inventor
文联
景欣
李达
吴昌桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202111565166.4A
Publication of CN114339075A
Legal status: Pending

Landscapes

  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a video clipping method and apparatus, an electronic device, and a storage medium, which can improve video clipping efficiency. The scheme comprises the following steps: receiving a video clipping request and a video to be clipped, the video clipping request being used to request that the video to be clipped be clipped; determining a plurality of target frame images in the video to be clipped, and acquiring at least one video segment from the video to be clipped according to the plurality of target frame images, wherein the at least one video segment corresponds to the plurality of target frame images and each video segment comprises at least one target frame image; performing video synthesis processing on the at least one video segment to obtain a synthesized video, and adding a multimedia resource to the synthesized video to obtain a target video, the multimedia resource comprising at least one of: background music, a background picture, description text, and voice playing information; and sending the target video.

Description

Video editing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a video editing method and apparatus, an electronic device, and a storage medium.
Background
In current network applications, a user can turn gameplay footage into a short video for sharing. Specifically, the corresponding game video material first needs to be saved, then edited with a video clipping tool to obtain a clipped short video, and the short video is shared on a short-video platform.
However, in the above solution, the user needs to install a video clipping tool on the electronic device in advance and possess basic video clipping skills. When the user's video clipping skills are limited, it is difficult to produce a high-quality short video from the original game video material. Therefore, for users without basic video clipping skills, the short videos uploaded to the short-video platform are only relatively raw videos (i.e., videos without clipping, dubbing, or subtitles); such short videos are not enjoyable to watch and are of low quality, which leaves viewers browsing low-quality videos. In short, clipping a video on an electronic device places high demands on the user, yields low quality and efficiency, and dampens the user's enthusiasm for video creation.
Disclosure of Invention
The present disclosure provides a video clipping method and apparatus, an electronic device, and a storage medium, which can improve video clipping efficiency. The technical scheme of the disclosure is as follows:
According to a first aspect of the present disclosure, there is provided a video clipping method, the method comprising: receiving a video clipping request and a video to be clipped, the video clipping request being used to request that the video to be clipped be clipped; determining a plurality of target frame images in the video to be clipped, and acquiring at least one video segment from the video to be clipped according to the plurality of target frame images, wherein the at least one video segment corresponds to the plurality of target frame images and each video segment comprises at least one target frame image; performing video synthesis processing on the at least one video segment to obtain a synthesized video, and adding a multimedia resource to the synthesized video to obtain a target video, the multimedia resource comprising at least one of: background music, a background picture, description text, and voice playing information; and sending the target video.
As can be seen from the above, the electronic device may determine a plurality of target frame images in the video to be clipped, and then acquire, from the video to be clipped, at least one video segment corresponding to the plurality of target frame images. Further, video synthesis processing is performed on the at least one video segment to obtain a synthesized video, and a multimedia resource is added to the synthesized video to obtain a target video. With this implementation, when a video clipping request and a video to be clipped are received, the video to be clipped is processed automatically: highlight video segments are determined from the video to be clipped and synthesized into a video, so that a clipped target video corresponding to the video to be clipped is generated quickly and efficiently, which improves both the efficiency of video clipping and the quality of the clipped video.
Optionally, the method of determining a plurality of target frame images in the video to be clipped specifically includes: determining the plurality of target frame images in the video to be clipped through a target algorithm, wherein the video to be clipped is a video of a target type, the target algorithm is an algorithm corresponding to the target type, and different types of videos correspond to different algorithms.
Therefore, the plurality of target frame images can be determined through the target algorithm corresponding to the target type to which the video to be clipped belongs; with this implementation, the target frame images can be accurately determined from the video to be clipped.
Optionally, the method of determining a plurality of target frame images in the video to be clipped through a target algorithm specifically includes: when target content in the video to be clipped is detected, determining the frame images corresponding to the target content as the plurality of target frame images, the target content including at least one of: target text, a target picture, and target audio, where the target content is preset content corresponding to the target algorithm.
As can be seen from the above, frame images in the video to be clipped that contain the target content corresponding to the target algorithm can be detected, and those frame images are determined as the plurality of target frame images.
Optionally, the method of acquiring at least one video segment from the video to be clipped according to the plurality of target frame images specifically includes: determining at least one playing period according to the playing time corresponding to each of the plurality of target frame images; and determining the video segments corresponding to the at least one playing period in the video to be clipped as the at least one video segment.
As can be seen from the above, after the plurality of target frame images in the video to be clipped are determined, the corresponding at least one video segment can be accurately located in the video to be clipped according to the playing time of each target frame image; with this implementation, the at least one video segment can be accurately determined from the video to be clipped according to the plurality of target frame images.
Optionally, the multimedia resource comprises background music, and the method of adding the multimedia resource to the synthesized video to obtain the target video specifically includes: determining, through the target algorithm, target background music satisfying a first preset condition from the background music corresponding to historical videos, the historical videos being a plurality of videos of the target type published before the current moment, and the first preset condition being any one of: the background music used most often among the historical videos, or the background music used in the most recently published historical videos; and determining the target background music as the background music of the synthesized video, and clipping the synthesized video to obtain the target video.
Therefore, target background music satisfying the first preset condition can be determined, through the target algorithm, from the background music corresponding to the historical videos, so that background music is added to the synthesized video to obtain the target video; with this implementation, the quality of the clipped video can be improved.
Optionally, the method of determining the target background music as the background music of the synthesized video and clipping the synthesized video to obtain the target video specifically includes: identifying beat features and sound features of the target background music, and determining a plurality of target points in the target background music according to the beat features and the sound features, the target points indicating beat-synchronization moments in the target background music; and matching the at least one video segment included in the synthesized video with the music segments corresponding to the plurality of target points to obtain the target video.
As can be seen from the above, the beat features and sound features of the target background music can be identified, and the plurality of target points in the target background music determined accordingly. The at least one video segment included in the synthesized video is matched with the music segments corresponding to the target points, yielding a target video in which the video segments are accurately aligned with the beat points of the background music. With this implementation, the quality of the clipped video can be improved.
Optionally, the method of performing video synthesis processing on the at least one video segment to obtain the synthesized video specifically includes: performing video synthesis processing on the at least one video segment, and performing target processing on the plurality of target frame images in the at least one video segment to obtain the synthesized video, the target processing including at least one of: adding a shake effect, enlarging the picture, shrinking the picture, and adding a color-rendering effect.
As can be seen from the above, while the video synthesis processing is performed on the at least one video segment, target processing may be performed on the plurality of target frame images in the at least one video segment to obtain a synthesized video with various effects added. With this implementation, the quality of the clipped video can be improved.
Optionally, the multimedia resource comprises a background picture, and the method of adding the multimedia resource to the synthesized video to obtain the target video specifically includes: acquiring a target background picture and determining the target background picture as the background picture of the synthesized video; and performing masking processing on the synthesized video based on the target background picture to obtain the target video, wherein the aspect ratio of the target video differs from that of the synthesized video.
Therefore, a target background picture can be acquired, so that the target video is obtained by masking the synthesized video with the target background picture. With this implementation, the quality of the clipped video can be improved.
Optionally, the at least one video segment is a game-type video segment, and after performing the video synthesis processing on the at least one video segment to obtain the synthesized video and adding the multimedia resource to the synthesized video to obtain the target video, the method further includes: determining a key frame image from the plurality of target frame images and determining the key frame image as the video cover of the target video, the key frame image being an image among the plurality of target frame images that satisfies a second preset condition, the second preset condition being any one of: the frame image with the highest highlight score, any randomly selected frame image, the first frame image, or the last frame image; and determining a target caption text according to the picture content included in the target video, and displaying the target caption text on the video cover in a target font, the target caption text being used to describe the target video.
As described above, a key frame image satisfying the second preset condition may also be determined from the plurality of target frame images and used as the video cover of the target video, and a target caption text determined from the picture content of the target video may be displayed on the video cover in a target font to describe the target video. With this implementation, the quality of the clipped video can be improved.
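By way of illustration only, the following Python sketch shows one possible realization of the cover-and-caption step described above. The frame-scoring function, the font file path, and the caption string are hypothetical placeholders rather than part of the disclosure, and Pillow is assumed to be available.

```python
from PIL import Image, ImageDraw, ImageFont  # Pillow assumed available
import random

def make_cover(target_frames, condition="highest_score", score_fn=None,
               caption="", font_path="font.ttf"):
    """Pick a key frame per the second preset condition and draw the caption on it.

    target_frames: list of PIL.Image objects (the plurality of target frame images).
    condition:     "highest_score" | "random" | "first" | "last".
    score_fn:      hypothetical highlight-scoring function, needed for "highest_score".
    """
    if condition == "highest_score" and score_fn is not None:
        key_frame = max(target_frames, key=score_fn)
    elif condition == "random":
        key_frame = random.choice(target_frames)
    elif condition == "last":
        key_frame = target_frames[-1]
    else:
        key_frame = target_frames[0]

    cover = key_frame.copy()
    draw = ImageDraw.Draw(cover)
    font = ImageFont.truetype(font_path, size=48)           # the "target font"
    draw.text((40, 40), caption, font=font, fill="white")   # the "target caption text"
    return cover
```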
According to a second aspect of the present disclosure, there is provided a video clipping method, the method comprising: acquiring a video to be clipped specified by a user; sending the video to be clipped; and receiving a target video, the target video being a video obtained by clipping the video to be clipped; the target video is obtained by adding a multimedia resource to a synthesized video, the synthesized video is obtained by performing video synthesis processing on at least one video segment, and the at least one video segment corresponds to a plurality of target frame images in the video to be clipped.
As can be seen from the above, after the client acquires the video to be clipped specified by the user, the client sends the video to be clipped to the server, so that the server clips it to obtain the target video, and the client then receives the clipped target video from the server. With this implementation, the client obtains the clipped target video simply by sending the video to be clipped to the server, so the video does not need to be clipped on the client; this lowers the demands on the user's video clipping skills and improves both the clipping efficiency and the quality of the resulting video.
Optionally, before the "acquiring the video to be clipped specified by the user", the method further includes: displaying a clipping description interface, wherein the clipping description interface is used for prompting the video clipping process; the method for acquiring the video to be edited specified by the user specifically comprises the following steps: responding to the triggering operation of the user based on the editing description interface, and displaying a video selection interface; and responding to the operation of selecting the video, which is executed by the user based on the video selection interface, and acquiring the video to be edited, which is specified by the user.
Therefore, the client can display the editing description interface in advance to prompt the user of the video editing process through the editing description interface, so that the user can perform triggering operation according to the editing description interface to enable the client to display the video selection interface, and obtain the video to be edited, which is specified by the user, through the operation of selecting the video by the user. Through the implementation mode, a user can know the video clipping process through the clipping description interface displayed by the client side, so that the use experience of the user is improved.
Optionally, after acquiring the video to be clipped specified by the user, the method further includes: displaying a video processing interface, the video processing interface being used to prompt the user that the video to be clipped is being clipped to obtain the target video.
As can be seen from the above, after the client acquires the video to be clipped specified by the user and sends it to the server, a video processing interface may be displayed to prompt the user that the video to be clipped is being clipped to obtain the target video. With this implementation, the user is kept informed of the clipping progress, which improves the user experience.
Optionally, after the "receiving the target video", the method further includes: displaying a video preview interface, wherein the video preview interface comprises a cover of the target video; responding to the playing operation of the user for the target video, and playing the target video; displaying a sharing interface after the playing is finished; and responding to the sending operation of the user based on the sharing interface, and publishing the target video.
Therefore, after the client receives the target video, the client can display the video preview interface to play the target video according to the playing operation of the user, so that the user can browse the effect of the target video in advance, and after the target video reaches the satisfaction degree of the user, the target video is released according to the sending operation of the user. Through the implementation mode, the user can know the video clipping process, and the use experience of the user is improved.
Optionally, in the process of playing the target video, when any one of the plurality of target frame images is played, the image after target processing is displayed, the target processing including at least one of: adding a shake effect, enlarging the picture, shrinking the picture, and adding a color-rendering effect.
As can be seen from the above, the plurality of target frame images included in the target video are images that have undergone target processing, that is, the video has various effects added, so the quality of the clipped video can be improved.
Optionally, the target video comprises at least one of: a target background picture, a video cover, and a target caption text; the target video is obtained by masking the synthesized video based on the target background picture, the video cover is a key frame image determined from the plurality of target frame images, and the target caption text is determined according to the picture content included in the target video, is displayed on the video cover in a target font, and is used to describe the target video.
Therefore, the resulting target video includes the target background picture, the video cover, and the target caption text, which improves the quality of the clipped video.
Optionally, when the target video is published, a target link is sent synchronously, the target link being a link to the application program that generated the video to be clipped.
Therefore, when the target video is published, a link to the application program that generated the video to be clipped is sent synchronously, which enriches the published content.
According to a third aspect of the present disclosure, there is provided a video clipping device including: the device comprises a receiving unit, a determining unit, an acquiring unit, a processing unit and a sending unit; a receiving unit configured to perform receiving a video clip request and a video to be clipped, the video clip request being for requesting to clip the video to be clipped; a determination unit configured to perform determination of a plurality of target frame images in a video to be clipped; an acquisition unit configured to perform acquisition of at least one video segment from a video to be clipped according to a plurality of target frame images; at least one video clip is a video clip corresponding to a plurality of target frame images, and one video clip comprises at least one target frame image; the processing unit is configured to execute video synthesis processing on at least one video segment to obtain a synthesized video, and add multimedia resources in the synthesized video to obtain a target video; the multimedia resource includes at least one of: background music, background pictures, description texts and voice playing information; a transmission unit configured to perform transmission of the target video.
Optionally, the determining unit is configured to determine the plurality of target frame images in the video to be clipped through a target algorithm, wherein the video to be clipped is a video of a target type, the target algorithm is an algorithm corresponding to the target type, and different types of videos correspond to different algorithms.
Optionally, the determining unit is configured to determine, when target content in the video to be clipped is detected, the frame images corresponding to the target content as the plurality of target frame images, the target content including at least one of: target text, a target picture, and target audio, where the target content is preset content corresponding to the target algorithm.
Optionally, the determining unit is configured to determine at least one playing period according to a playing time corresponding to each of the plurality of target frame images; the determining unit is configured to execute the determination of the video segment corresponding to the at least one playing period in the video to be clipped as the at least one video segment.
Optionally, the multimedia resource comprises background music; the determining unit is configured to determine, through the target algorithm, target background music satisfying a first preset condition from the background music corresponding to historical videos, the historical videos being a plurality of videos of the target type published before the current moment, and the first preset condition including any one of: the background music used most often among the historical videos, or the background music used in the most recently published historical videos; the determining unit is configured to determine the target background music as the background music of the synthesized video; and the processing unit is configured to clip the synthesized video to obtain the target video.
Optionally, the processing unit is configured to identify beat features and sound features of the target background music; the determining unit is configured to determine a plurality of target points in the target background music according to the beat features and the sound features, the target points indicating beat-synchronization moments in the target background music; and the processing unit is configured to match the at least one video segment included in the synthesized video with the music segments corresponding to the plurality of target points to obtain the target video.
Optionally, the processing unit is configured to perform video synthesis processing on the at least one video segment and to perform target processing on the plurality of target frame images in the at least one video segment to obtain the synthesized video, the target processing including at least one of: adding a shake effect, enlarging the picture, shrinking the picture, and adding a color-rendering effect.
Optionally, the multimedia resource comprises a background picture; an acquisition unit configured to perform acquisition of a target background picture and determine the target background picture as a background picture of a composite video; and the processing unit is configured to perform masking processing on the synthesized video based on the target background picture to obtain a target video, wherein the aspect ratio of the target video to the synthesized video is different.
Optionally, the at least one video segment is a game-type video segment; the determining unit is configured to determine a key frame image from the plurality of target frame images and determine the key frame image as the video cover of the target video, the key frame image being an image among the plurality of target frame images that satisfies a second preset condition, the second preset condition being any one of: the frame image with the highest highlight score, any randomly selected frame image, the first frame image, or the last frame image; the determining unit is configured to determine a target caption text according to the picture content included in the target video; and the processing unit is configured to display the target caption text on the video cover in a target font, the target caption text being used to describe the target video.
According to a fourth aspect of the present disclosure, there is provided a video clipping device including: an acquisition unit, a transmission unit and a reception unit; an acquisition unit configured to perform acquisition of a video to be clipped specified by a user; a transmitting unit configured to perform transmitting a video to be clipped; a receiving unit configured to perform receiving a target video; the target video is a video obtained after the video to be clipped is clipped; the target video is a video obtained by adding multimedia resources in the composite video, the composite video is a video obtained by performing video composite processing on at least one video segment, and the at least one video segment is a video segment corresponding to a plurality of target frame images in the video to be clipped.
Optionally, a display unit configured to execute displaying a clipping description interface, the clipping description interface being used for prompting a video clipping process; a display unit configured to perform a trigger operation based on the clip description interface in response to a user, and display a video selection interface; and the acquisition unit is configured to execute the operation of responding to the video selection executed by the user based on the video selection interface and acquire the video to be clipped specified by the user.
Optionally, the display unit is configured to execute displaying a video processing interface, where the video processing interface is used to prompt a user that a target video is obtained by performing a clipping process on a video to be clipped.
Optionally, a display unit configured to perform displaying a video preview interface, the video preview interface including a cover of the target video; a display unit configured to perform a play operation of the target video in response to a user's play operation for the target video; the display unit is configured to display a sharing interface after the playing is finished; and the sending unit is configured to execute sending operation based on the sharing interface of the user and publish the target video.
Optionally, in the process of playing the target video, when any one of the plurality of target frame images is played, the image after target processing is displayed, the target processing including at least one of: adding a shake effect, enlarging the picture, shrinking the picture, and adding a color-rendering effect.
Optionally, the target video comprises at least one of: a target background picture, a video cover, and a target caption text; the target video is obtained by masking the synthesized video based on the target background picture, the video cover is a key frame image determined from the plurality of target frame images, and the target caption text is determined according to the picture content included in the target video, is displayed on the video cover in a target font, and is used to describe the target video.
Optionally, when the target video is published, a target link is sent synchronously, the target link being a link to the application program that generated the video to be clipped.
According to a fifth aspect of the present disclosure, there is provided an electronic apparatus including:
a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the video clipping method of any one of the first aspect or the second aspect and their optional implementations described above.
According to a sixth aspect of the present disclosure, there is provided a computer-readable storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform the video clipping method of any one of the first aspect or the second aspect and their optional implementations.
According to a seventh aspect of the present disclosure, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the video clipping method of any one of the first aspect or the second aspect and their optional implementations.
According to an eighth aspect of the present disclosure, there is provided a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute a computer program or instructions to implement the video clipping method of any one of the first aspect or the second aspect and their optional implementations.
The technical scheme provided by the disclosure at least brings the following beneficial effects:
Based on any one of the above aspects, in the present disclosure, the electronic device may determine a plurality of target frame images in the video to be clipped, and then acquire, from the video to be clipped, at least one video segment corresponding to the plurality of target frame images. Further, video synthesis processing is performed on the at least one video segment to obtain a synthesized video, and a multimedia resource is added to the synthesized video to obtain a target video. With this implementation, when a video clipping request and a video to be clipped are received, the video to be clipped is processed automatically: highlight video segments are determined from the video to be clipped and synthesized, so that a clipped target video corresponding to the video to be clipped is generated quickly and efficiently, which improves both the efficiency of video clipping and the quality of the clipped video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a schematic diagram illustrating a video clipping system in accordance with an embodiment of the present disclosure;
FIG. 2 is a schematic flow diagram illustrating a video clipping method in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic flow diagram illustrating another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 6 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 8 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating a client interface in accordance with an embodiment of the present disclosure;
FIG. 10 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 11 is a schematic diagram illustrating another client interface in accordance with an embodiment of the present disclosure;
FIG. 12 is a schematic flow chart diagram illustrating yet another video clipping method in accordance with an embodiment of the present disclosure;
FIG. 13 is a schematic diagram illustrating yet another client interface in accordance with an embodiment of the present disclosure;
FIG. 14 is a schematic diagram illustrating yet another client interface in accordance with an embodiment of the present disclosure;
FIG. 15 is a schematic diagram illustrating yet another client interface in accordance with an embodiment of the present disclosure;
FIG. 16 is a schematic diagram illustrating a configuration of a video clipping device according to an embodiment of the present disclosure;
FIG. 17 is a schematic diagram illustrating a structure of yet another video clipping device according to an embodiment of the present disclosure;
FIG. 18 is a schematic structural diagram illustrating another video clipping device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
First, an application scenario of the embodiments of the present disclosure is described. Users often want to share game videos of their gameplay, so after a game ends, the corresponding game video can be saved and shared through a short-video platform, allowing more users to browse the game content. However, the original game video saved by the user is not very watchable, and the user may only want to share part of the gameplay, which requires the user to clip the original game video with a video clipping tool. For most users, a video clipping tool must first be downloaded and installed, and the user must have video clipping skills; for users without such skills, it is difficult to produce a good-quality short video from the original game video. As a result, many short videos shared on short-video platforms are actually relatively raw videos, i.e., videos without clipping, dubbing, or subtitles; their quality is low, which dampens users' enthusiasm for creation and leaves viewers browsing low-quality videos.
In order to solve the above problem, an embodiment of the present disclosure provides a video clipping method in which an electronic device may determine a plurality of target frame images in a video to be clipped and acquire, from the video to be clipped, at least one video segment corresponding to the plurality of target frame images. Further, video synthesis processing is performed on the at least one video segment to obtain a synthesized video, and a multimedia resource is added to the synthesized video to obtain a target video. With this implementation, when a video clipping request and a video to be clipped are received, the video to be clipped is processed automatically: highlight video segments are determined from the video to be clipped and synthesized, so that a clipped target video corresponding to the video to be clipped is generated quickly and efficiently, which improves both the efficiency of video clipping and the quality of the clipped video.
The video clipping method provided by the embodiments of the present disclosure is described below by way of example with reference to the accompanying drawings:
fig. 1 is a schematic diagram of a video clip system provided by an embodiment of the present disclosure, and as shown in fig. 1, the video clip system may include a server 11, a client 12 (only one client 12 is shown in fig. 1 by way of example, and there may be more clients in a specific implementation), and a database 13. Wherein a communication connection can be established between the server 11, the client 12 and the database 13. The server 11, the client 12 and the database 13 may be connected in a wired manner or in a wireless manner, which is not limited in the embodiment of the present disclosure.
The server 11 is configured to receive a video clipping request and a video to be clipped sent by the client 12, acquire corresponding data from the database 13, clip the video to be clipped to obtain a target video, and send the clipped target video to the client 12.
And the client 12 is used for determining a video to be clipped, sending the video to be clipped to the server 11, and receiving the clipped target video sent by the server. And displaying the target video content to the user and publishing the target video to the short video platform.
And the database 13 is used for storing the data information corresponding to the server 11 and providing required resource materials in the process of performing clipping processing on the video to be clipped by the server 11.
In an implementation manner, the server 11 may be one server, a server cluster composed of a plurality of servers, or a cloud computing service center. The server 11 may include a processor, memory, and a network interface, among others.
In one implementable manner, the client 12 is used to provide voice and/or data connectivity services to users. The client 12 may be variously named, for example, a UE end, a terminal unit, a terminal station, a mobile station, a remote terminal, a mobile device, a wireless communication device, a vehicular user equipment, a terminal agent, or a terminal device, etc.
Alternatively, the client 12 may be a handheld device, an in-vehicle device, a wearable device, or a computer with various communication functions, which is not limited in this disclosure. For example, the handheld device may be a smartphone. The in-vehicle device may be an in-vehicle navigation system. The wearable device may be a smart bracelet. The computer may be a Personal Digital Assistant (PDA) computer, a tablet computer, and a laptop computer.
The video clipping method provided by the embodiments of the present disclosure can be applied to the video clipping system shown in fig. 1, which comprises the server 11, the client 12, and the database 13. The electronic device to which the present disclosure relates may be the server 11 or the client 12. The video clipping method provided by the embodiments of the present disclosure is described in detail below, taking as an example the case where the method is applied to the server in the course of executing a service.
After introducing the application scenario and the video clipping system of the embodiment of the present disclosure, the video clipping method provided by the embodiment of the present disclosure is described in detail below with reference to the video clipping system shown in fig. 1.
As shown in fig. 2, a flow chart of a video clipping method is shown, according to an exemplary embodiment, for application to an electronic device. The video clipping method may include S201-S204.
S201, receiving a video clipping request and a video to be clipped;
wherein the video clipping request is used to request that the video to be clipped be clipped.
Optionally, the embodiments of the present disclosure may be applied to an electronic device, which may be a client or a server; whether the method is applied to the client or the server depends on the specific situation, and the present disclosure does not limit this.
It should be noted that, in the case that the embodiment of the present disclosure is applied to a client, receiving a video clip request and a video to be clipped may be understood as: the client acquires the video to be clipped from the local according to the video clipping instruction (namely the video clipping request); in the case where the embodiments of the present disclosure are applied to a server, the server may receive a video clip request and a video to be clipped sent by a client.
That is, it can be understood that, in the case where the embodiments of the present disclosure are applied to a client, the video clip request described above may be a locally generated video clip instruction.
S202, determining a plurality of target frame images in the video to be clipped, and acquiring at least one video segment from the video to be clipped according to the plurality of target frame images.
The at least one video segment is a video segment corresponding to the plurality of target frame images, and one video segment comprises at least one target frame image.
It should be noted that the plurality of target frame images are the frame images corresponding to highlight content in the video to be clipped, that is, the at least one video segment is a highlight segment of the video to be clipped. For example, when the video to be clipped is a saved game video, the plurality of target frame images are: frame images corresponding to moments at which an opponent is eliminated, frame images corresponding to the moment the game is won, and the like.
Optionally, after the video to be clipped is obtained, the electronic device may traverse each frame image in the video to be clipped, so as to screen out a plurality of target frame images that satisfy the condition, and correspondingly obtain at least one video segment corresponding to the plurality of target frame images.
It should be noted that a video segment containing at least one target frame image may be understood as follows: when the time interval between any two of the target frame images is smaller than a preset time interval, a single video segment can be determined from those two target frame images. That is, the plurality of target frame images and the at least one video segment do not necessarily correspond one to one.
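As a minimal illustration (not part of the disclosure), the Python sketch below groups target-frame timestamps into segments whenever adjacent timestamps are closer than a preset interval; the timestamp values and the threshold are hypothetical.

```python
def group_into_segments(target_times, max_gap=2.0):
    """Group sorted target-frame timestamps (seconds) into segments.

    Two target frames whose timestamps differ by less than max_gap fall into
    the same segment, so segments and target frames need not correspond 1:1.
    """
    segments = []
    for t in sorted(target_times):
        if segments and t - segments[-1][1] < max_gap:
            segments[-1][1] = t          # extend the current segment
        else:
            segments.append([t, t])      # start a new segment
    return [(start, end) for start, end in segments]

# Example: five target frames yield three segments.
print(group_into_segments([3.0, 4.1, 20.0, 21.5, 50.2]))
# [(3.0, 4.1), (20.0, 21.5), (50.2, 50.2)]
```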
S203, carrying out video synthesis processing on at least one video clip to obtain a synthesized video, and adding multimedia resources in the synthesized video to obtain a target video.
Wherein the multimedia resources include at least one of: background music, background pictures, description texts and voice playing information.
Optionally, after at least one video segment is obtained from the video to be edited, the at least one video segment may be synthesized into a complete synthesized video, and the synthesized video is modified by the multimedia resource to obtain a target video with higher video quality.
And S204, transmitting the target video.
Optionally, in a case where the embodiment of the present disclosure is applied to a client, sending the target video may be understood as: saving the target video to save the target video to a local storage space; in the case where the embodiment of the present disclosure is applied to a server, sending a target video may be understood as: and the server sends the target video to the client.
The technical scheme provided by the embodiment at least has the following beneficial effects: the electronic device can determine a plurality of target frame images in the video to be clipped, and acquire, from the video to be clipped, at least one video segment corresponding to the plurality of target frame images. Further, video synthesis processing is performed on the at least one video segment to obtain a synthesized video, and a multimedia resource is added to the synthesized video to obtain a target video. With this implementation, when a video clipping request and a video to be clipped are received, the video to be clipped is processed automatically: highlight video segments are determined from the video to be clipped and synthesized, so that a clipped target video corresponding to the video to be clipped is generated quickly and efficiently, which improves both the efficiency of video clipping and the quality of the clipped video.
In an implementable manner, referring to fig. 2, as shown in fig. 3, the method of "determining a plurality of target frame images in a video to be clipped" in S202 may specifically include S2021.
S2021, determining a plurality of target frame images in the video to be clipped through a target algorithm.
The video to be clipped is a video of a target type, the target algorithm is an algorithm corresponding to the target type, and different types of videos correspond to different algorithms.
Optionally, the video type may be any of: a sports game video, a card game video, a tower-defense game video, a shooting game video, and so on. Different algorithms can be built in advance for different types of videos, so that when a video of a given type is clipped, the required frame images (or video segments) can be determined from the video to be clipped by the corresponding algorithm.
For example, different algorithm models need to be established for different types of games so as to accurately identify effective pictures (highlight pictures) in the original video to be clipped. For instance, the moment character a eliminates character b in the video may be treated as a highlight moment, so when the corresponding algorithm detects, in a video picture, the key text indicating that character a eliminated character b, the current frame image can be regarded as a highlight moment, and the N frame images before and after that frame (or the M seconds of video before and after it) can be determined as an effective picture (effective video segment).
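For illustration only, the sketch below shows one way such a text-based detector could mark target frames and expand them into windows of N frames. The extract_text helper stands in for whatever text-recognition routine the target algorithm uses, and the key phrase is a hypothetical example, not a value defined by the disclosure.

```python
def find_target_frames(frames, extract_text, key_phrase="player_a eliminated player_b"):
    """Return indices of frames whose on-screen text contains the key phrase.

    frames:       decoded frame images of the video to be clipped.
    extract_text: stand-in for the text-recognition step of the target algorithm.
    """
    return [i for i, frame in enumerate(frames)
            if key_phrase in extract_text(frame)]

def expand_to_windows(target_indices, total_frames, n_before=30, n_after=30):
    """Expand each target frame index to the N frames before and after it."""
    windows = []
    for idx in target_indices:
        start = max(0, idx - n_before)
        end = min(total_frames - 1, idx + n_after)
        windows.append((start, end))
    return windows
```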
The technical scheme provided by the embodiment at least has the following beneficial effects: through the implementation mode, a plurality of target frame images in the video to be clipped can be accurately determined from the video to be clipped.
In an implementable manner, as shown in fig. 4 in conjunction with fig. 3, the method in S2021 may specifically include S2022.
S2022, when the target content in the video to be clipped is detected, determining the frame images corresponding to the target content as a plurality of target frame images.
Wherein the targeted content includes at least one of: the target text, the target picture and the target audio are preset contents corresponding to the target algorithm.
Optionally, the target text may be system prompt content that pops up on screen in the video to be clipped; the target picture may be a picture satisfying a preset condition, for example a picture in which a character is eliminated; the target audio may be a system prompt tone announced by voice in the video to be clipped.
It should be noted that, when the video to be clipped is of a different type, the specific content included in the target content also differs (i.e., the algorithm differs).
The technical scheme provided by the embodiment at least has the following beneficial effects: the frame images in the video to be clipped that contain the target content corresponding to the target algorithm can be detected, so that those frame images are determined as the plurality of target frame images.
In an implementable manner, referring to fig. 2, as shown in fig. 5, the method of "acquiring at least one video clip from a video to be clipped according to a plurality of target frame images" in S202 may specifically include S301 to S302.
S301, determining at least one playing time interval according to the playing time corresponding to each target frame image in the plurality of target frame images.
Optionally, after the plurality of target frame images are determined, the playing time of the plurality of target frame images in the video to be clipped needs to be determined, so that at least one playing time period corresponding to at least one video clip is determined from the video to be clipped according to the playing time corresponding to each target frame image.
In one implementation, a period corresponding to a duration of M seconds before and/or after the playing time corresponding to each target frame image may be determined as at least one playing period.
In another implementation manner, a period corresponding to N frame images before and/or after the playing time corresponding to each target frame image may be determined as at least one playing period.
S302, determining the video segment corresponding to the at least one playing time interval in the video to be clipped as at least one video segment.
It should be understood that, after the at least one playing period is determined, the video pictures corresponding to the at least one playing period may be determined as the at least one video segment; each video segment then has a duration of 2M seconds, or alternatively each video segment includes 2N+1 frame images.
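As an illustrative sketch only, the following Python code turns a target frame's playing time into a 2M-second playing period and cuts the corresponding segment; it assumes the ffmpeg command-line tool is available, and the file names, playing times, and M are hypothetical.

```python
import subprocess

def cut_segment(src, start, duration, dst):
    """Cut a clip of `duration` seconds starting at `start` out of src.

    Stream copy ("-c copy") avoids re-encoding the video to be clipped;
    ffmpeg must be installed and on PATH.
    """
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", src,
         "-t", str(duration), "-c", "copy", dst],
        check=True,
    )

# Each target frame at playing time t yields a 2M-second segment [t - M, t + M].
M = 3.0
for k, t in enumerate([12.5, 47.0]):            # hypothetical playing times
    cut_segment("to_be_clipped.mp4", max(0.0, t - M), 2 * M, f"segment_{k}.mp4")
```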
The technical scheme provided by the embodiment at least has the following beneficial effects: after a plurality of target frame images in a video to be clipped are determined, at least one corresponding video segment can be accurately determined from the video to be clipped according to the corresponding playing time of each target frame image in the video to be clipped.
In one implementable manner, the multimedia asset includes background music; referring to fig. 3, as shown in fig. 6, the method for "adding multimedia resources to the composite video to obtain the target video" in S203 may specifically include S401 to S402.
S401, determining target background music meeting a first preset condition from background music corresponding to the historical video through a target algorithm.
The historical videos are a plurality of videos of the target type published before the current moment, and the first preset condition includes any one of the following: the background music used most often among the historical videos, or the background music used in the most recently published historical videos.
It should be noted that the historical videos are videos of the target type published by a plurality of users on the short-video platform, and the background music corresponding to the historical videos is the background music applied in each historical video.
It can be understood that, from all the historical videos of the target type, commonly used and currently popular background music can be screened out by the target algorithm as the required target background music. The background music of a video clipped in this way follows real-time trends, which makes it easier for the video to attract views.
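For illustration, the sketch below selects target background music from a hypothetical list of historical-video records according to either preset condition; the record fields are assumptions, not a data model defined by the disclosure.

```python
from collections import Counter

def pick_target_music(history, condition="most_used"):
    """Pick background music from historical videos of the target type.

    history:   list of dicts like {"music_id": ..., "published_at": ...}
               (hypothetical record format).
    condition: "most_used"   -> the music applied most often,
               "most_recent" -> the music of the most recently published video.
    """
    if condition == "most_recent":
        latest = max(history, key=lambda v: v["published_at"])
        return latest["music_id"]
    counts = Counter(v["music_id"] for v in history)
    return counts.most_common(1)[0][0]

history = [
    {"music_id": "bgm_a", "published_at": 1},
    {"music_id": "bgm_b", "published_at": 2},
    {"music_id": "bgm_a", "published_at": 3},
]
print(pick_target_music(history))                  # bgm_a (used most often)
print(pick_target_music(history, "most_recent"))   # bgm_a (latest video's music)
```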
S402, determining the target background music as the background music of the synthesized video, and clipping the synthesized video to obtain the target video.
Optionally, after the target background music is determined from the background music corresponding to the historical videos, it is set as the background music of the synthesized video, and the synthesized video and the target background music are combined to obtain the target video.
The technical scheme provided by the embodiment at least has the following beneficial effects: the target background music meeting the first preset condition can be determined from the background music corresponding to the historical videos through a target algorithm, so that the background music is added to the synthesized video to obtain the target video.
In an implementable manner, the method in S402 may specifically include S4021 to S4022.
S4021, identifying the beat characteristics and the sound characteristics of the target background music, and determining a plurality of target point locations in the target background music according to the beat characteristics and the sound characteristics.
The plurality of target points are used to indicate the beat-synchronization moments (the on-beat points) in the target background music.
Alternatively, the beat feature may be used to represent the energy amplitude of the target background music, and the sound feature may be used to represent the harmonic content of the target background music.
It should be noted that the plurality of target point locations may be understood as a plurality of playing time instants or a plurality of playing time periods in the target background music. It is understood that the music pieces corresponding to the plurality of target points may be music pieces of a chorus part (i.e., a climax part).
S4022, matching at least one video segment included in the synthesized video with the music segments corresponding to the target points to obtain a target video.
Optionally, the at least one video segment included in the synthesized video is aligned, one by one, with the plurality of target points determined in the target background music, and the target video is obtained through synthesis processing.
Optionally, transition frames may be inserted between the video segments included in the synthesized video so that the segments flow into one another in an orderly way.
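For illustration only, the sketch below estimates candidate target points from the music and assigns each video segment to an inter-beat interval. The disclosure does not name any library; librosa is used merely as a plausible stand-in for the beat/sound feature analysis, and the matching heuristic is an illustrative assumption.

```python
import librosa  # a plausible stand-in for the beat/sound feature analysis

def beat_points(music_path):
    """Estimate candidate target points (beat times, in seconds) of the music."""
    y, sr = librosa.load(music_path)
    _tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    return librosa.frames_to_time(beat_frames, sr=sr)

def match_segments_to_beats(segment_durations, beats):
    """Assign each video segment to one inter-beat music interval.

    segment_durations: durations (seconds) of the at least one video segment.
    beats:             ascending beat times of the target background music.
    Returns (music_start, music_end) for each segment; a full implementation
    would also trim or stretch each segment to fit its interval exactly.
    """
    plan, i = [], 0
    for dur in segment_durations:
        start = beats[i]
        # advance to the first beat that leaves at least `dur` seconds of music
        while i + 1 < len(beats) and beats[i + 1] - start < dur:
            i += 1
        end = beats[min(i + 1, len(beats) - 1)]
        plan.append((start, end))
        i = min(i + 1, len(beats) - 1)
    return plan
```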
The technical scheme provided by the embodiment at least has the following beneficial effects: the beat features and sound features of the target background music can be identified, and the plurality of target points in the target background music determined accordingly. The at least one video segment included in the synthesized video is matched with the music segments corresponding to the target points, yielding a target video in which each video segment is accurately aligned with a beat point of the background music. With this implementation, the quality of the clipped video can be improved.
In one implementable manner, the multimedia resource includes background music; the method of "performing video synthesis processing on at least one video segment to obtain a synthesized video" in S203 may specifically include S2031.
S2031, video synthesis processing is carried out on at least one video clip, and target processing is carried out on a plurality of target frame images in at least one video clip to obtain a synthesized video.
Wherein the target processing comprises at least one of: adding a shake (jitter) effect, performing picture enlargement processing, performing picture reduction processing, and adding a color rendering effect.
Optionally, when the composite video is obtained by performing composite processing on at least one video segment, target processing may be performed on a plurality of target frame images to add a special effect to the plurality of target frame images.
It is understood that after a special effect is added to the plurality of target frame images, when the composite video is played, a corresponding effect may be displayed when the plurality of target frame images are played.
The technical scheme provided by the embodiment at least has the following beneficial effects: while the video composition processing is performed on the at least one video clip, the target processing may be performed on a plurality of target frame images in the at least one video clip to obtain a composite video to which various effects are added. By this implementation, the quality of the edited video can be improved.
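The picture enlargement and reduction effects in the target processing can be sketched as a per-frame transform. The example below uses OpenCV as an assumed choice and shows only these two of the listed effects; it is an illustration, not the disclosed implementation.

```python
import cv2


def zoom_frame(frame, scale: float = 1.2):
    """Enlarge (scale > 1) or reduce (scale < 1) a frame around its center,
    keeping the original resolution."""
    h, w = frame.shape[:2]
    resized = cv2.resize(frame, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    rh, rw = resized.shape[:2]
    if scale >= 1.0:
        # Picture enlargement: crop the center region back to the original size.
        y0, x0 = (rh - h) // 2, (rw - w) // 2
        return resized[y0:y0 + h, x0:x0 + w]
    # Picture reduction: pad the smaller picture back to the original size.
    top, left = (h - rh) // 2, (w - rw) // 2
    return cv2.copyMakeBorder(resized, top, h - rh - top, left, w - rw - left,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))
```

Applying such a transform to the target frame images while leaving other frames untouched yields the added special effect described above.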
In one implementable manner, the multimedia resource includes a background picture; the method of "adding multimedia resources to the composite video to obtain the target video" in S203 may specifically include S501-S502.
S501, obtaining a target background picture, and determining the target background picture as a background picture of a synthesized video.
Optionally, the target background picture may be a picture corresponding to the video to be clipped; for example, it may be a promotional image of the target game (i.e., the game from which the video to be clipped is generated).
And S502, performing masking processing on the synthesized video based on the target background picture to obtain the target video.
The aspect ratio of the target video is different from that of the composite video; that is, after the composite video is masked based on the target background picture, the obtained target video has a different aspect ratio from the composite video.
Optionally, a mask pattern may be automatically added to the composite video according to the target background picture, converting the landscape video into a portrait video better suited to the short video platform.
It should be noted that a game interface is usually a landscape interface, so the obtained video to be clipped is a landscape video; to make browsing easier for users, the landscape video can be converted into a portrait video better suited to the short video platform.
Illustratively, the composite video is a landscape video with an aspect ratio of 16:9 that must be played in landscape orientation; after the masking processing, a target video with an aspect ratio of 9:16 (or another ratio) can be obtained, and this target video can be played in portrait orientation.
The technical scheme provided by the embodiment at least has the following beneficial effects: the target background picture can be acquired, so that the target video is obtained by performing masking processing on the synthesized video based on the target background picture while performing video synthesis processing on at least one video clip. By this implementation, the quality of the edited video can be improved.
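A minimal sketch of the masking step, assuming OpenCV and a blurred background (the blur is an illustrative detail, not required by the scheme): a 16:9 frame is placed on a 9:16 canvas built from the target background picture.

```python
import cv2


def mask_to_portrait(frame, background, out_w: int = 1080, out_h: int = 1920):
    """Place a landscape frame on a portrait canvas derived from the target background picture."""
    # Build the 9:16 canvas from the target background picture (blurred here for contrast).
    canvas = cv2.resize(background, (out_w, out_h))
    canvas = cv2.GaussianBlur(canvas, (51, 51), 0)
    # Scale the landscape frame to the canvas width and center it vertically.
    fh, fw = frame.shape[:2]
    scaled_h = int(fh * out_w / fw)
    scaled = cv2.resize(frame, (out_w, scaled_h))
    y0 = (out_h - scaled_h) // 2
    canvas[y0:y0 + scaled_h, :] = scaled
    return canvas
```

Running this per frame converts the 16:9 composite video into a 9:16 target video as described in the example above.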
In one possible implementation, at least one of the video segments is a game-type video segment; after S203 above, the method may further include S601-S602.
S601, determining a key frame image from the plurality of target frame images, and determining the key frame image as a video cover of the target video.
The key frame image is an image, among the plurality of target frame images, that satisfies a second preset condition, and the second preset condition is any one of the following: the frame image with the highest highlight score, a randomly extracted frame image, the first frame image, or the last frame image.
It is understood that the video cover is a picture displayed in the interface before the video starts to be played, i.e., in the case where the video is not played.
Optionally, the key frame image is a target frame image determined from a plurality of target frame images according to a target algorithm, the target algorithm defines priorities of the plurality of target frame images, and a frame image with the highest priority may be determined as the key frame image.
It will be appreciated that the target algorithm automatically identifies the best of the plurality of target frame images as the video cover. For example, if the plurality of target frame images include a championship-winning picture, an opponent-elimination picture, a team-fight-victory picture, and an opponent-hit picture, the championship-winning picture may be determined as the video cover of the target video according to the priorities that the target algorithm defines for these pictures.
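The cover selection can be read as a simple maximum over algorithm-defined priorities. The label names and scores below are hypothetical illustrations of the priorities in the example above, not values given in the disclosure.

```python
from typing import Dict, List


def select_video_cover(target_frames: List[Dict]) -> Dict:
    """Pick the key frame image with the highest priority as the video cover.

    Each frame record is assumed to carry a label such as "champion",
    "team_fight_win", "eliminate_opponent" or "hit_opponent".
    """
    # Hypothetical priorities defined by the target algorithm (higher is better).
    priority = {"champion": 4, "team_fight_win": 3, "eliminate_opponent": 2, "hit_opponent": 1}
    return max(target_frames, key=lambda frame: priority.get(frame["label"], 0))
```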
S602, determining a target copy according to the picture content included in the target video, and displaying the target copy on the video cover in a target font.
Wherein the target copy is used to describe the target video.
Optionally, the video content of the target video may be analyzed by the target algorithm to generate the target copy describing the target video.
It can be understood that the target algorithm can automatically recognize the basic semantics of the video pictures and expand on them. For example, if a championship picture is recognized in the target video corresponding to Game A, a target copy such as 'Championship celebration in Game A' or 'Game A championship recap' may be generated.
Optionally, the target copy may further include a description of the target video itself, copy for guiding users to download Game A (such as a game link), publishing copy (the text description attached when the video is published), and the like.
Optionally, when the target copy is displayed on the video cover, a font style may be set so that the copy is shown in the font the user prefers. The font may be any of the following: Source Han Sans, a ZCOOL 'butter'-style font, YouShe Title Black, a cartoon 'XiaoBai'-style font, and the like.
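Rendering the target copy on the cover in the chosen font can be sketched with Pillow. The font path, position, and size are placeholders; the copy text is assumed to come from the content-analysis step above.

```python
from PIL import Image, ImageDraw, ImageFont


def draw_copy_on_cover(cover_path: str, copy_text: str, font_path: str, out_path: str) -> None:
    """Overlay the target copy on the video cover using the selected font."""
    cover = Image.open(cover_path).convert("RGB")
    draw = ImageDraw.Draw(cover)
    font = ImageFont.truetype(font_path, size=64)  # e.g. a Source Han Sans .otf file (placeholder path)
    # Place the copy near the top of the cover; the position is illustrative.
    draw.text((40, 40), copy_text, font=font, fill=(255, 255, 255))
    cover.save(out_path)
```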
The technical scheme provided by the embodiment at least has the following beneficial effects: a key frame image satisfying the second preset condition can be determined from the plurality of target frame images and used as the video cover of the target video. A target copy is then determined according to the picture content included in the target video and displayed on the video cover in the target font to describe the target video. Through this implementation, the quality of the clipped video can be improved.
Fig. 7 is a flowchart of a video clipping method applied to a client according to an exemplary embodiment. The video clipping method may include S701-S703.
S701, acquiring a video to be clipped specified by a user.
Optionally, when the user needs to clip a video, the client may obtain the video to be clipped specified by the user from the local storage space according to the user's operation.
Optionally, the user may designate one or more original videos as the video to be edited, so as to synthesize the multiple original videos to obtain the target video.
And S702, sending the video to be clipped.
Optionally, the client may send the video to be clipped to the server, so that the server clips the video to be clipped to obtain the target video.
It can be understood that after the user triggers the client to upload the video to be clipped, the video is transmitted to the server in the cloud, where the target algorithm can automatically analyze and synthesize the key effective frames in the video to be clipped and configure suitable music, copy, voice broadcast, and the like. For the specific processing procedure, reference may be made to the foregoing embodiments, which are not repeated here.
And S703, receiving the target video.
The target video is a video obtained after the video to be clipped is clipped; the target video is a video obtained by adding multimedia resources in the composite video, the composite video is a video obtained by performing video composite processing on at least one video segment, and the at least one video segment is a video segment corresponding to a plurality of target frame images in the video to be clipped.
Optionally, after the server clips the video to be clipped to obtain the target video, the server sends the target video back to the client, so that the client receives the target video.
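The client-server exchange in S701-S703 can be sketched as a plain HTTP upload followed by downloading the clipped result. The endpoint URLs, field names, and task-id response format below are placeholders for illustration and are not part of the disclosed scheme.

```python
import requests


def clip_video_via_server(video_path: str, server: str = "https://example.com") -> bytes:
    """Send the video to be clipped to the server and receive the target video."""
    with open(video_path, "rb") as f:
        # S702: send the video to be clipped together with the video clip request.
        resp = requests.post(f"{server}/clip", files={"video": f}, timeout=600)
    resp.raise_for_status()
    task_id = resp.json()["task_id"]  # placeholder response field
    # S703: receive the target video once the server finishes clipping.
    result = requests.get(f"{server}/clip/{task_id}/result", timeout=600)
    result.raise_for_status()
    return result.content  # bytes of the clipped target video
```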
The technical scheme provided by the embodiment at least has the following beneficial effects: after acquiring the video to be clipped specified by the user, the client can send it to the server so that the server clips the video to be clipped to obtain the target video, and the client then receives the clipped target video from the server. Through this implementation, the client only needs to send the video to be clipped to the server to obtain the clipped target video, so the video does not need to be clipped on the client itself; this lowers the requirement on the user's video clipping skills and improves both the clipping efficiency and the quality of the clipped video.
In an implementable manner, as shown in fig. 8 in conjunction with fig. 7, before S701 above, the method further includes S801; the method in S701 may specifically include S7011 to S7012.
S801, displaying a clipping description interface.
Wherein the clip description interface is used to prompt the video clipping process.
Optionally, as shown in fig. 9, before the user triggers the client to determine the video to be clipped, the client may display a clipping description interface presenting pre-established steps that describe the intelligent clipping, so as to explain to the user the process, effects, requirements, finished results, and so on of intelligent clipping, thereby attracting the user to participate.
S7011, responding to the triggering operation of the user based on the editing description interface, and displaying a video selection interface.
Optionally, after the user refers to the clipping description interface, an operation may be performed to trigger the client to display the video selection interface, so that the user may select a desired video as the video to be clipped on the video selection interface.
S7012, responding to the operation of selecting the video executed by the user based on the video selection interface, and acquiring the video to be edited, which is specified by the user.
Optionally, the client determines, according to the selection operation of the user, the video specified by the user as the video to be clipped from the plurality of videos.
The technical scheme provided by the embodiment at least has the following beneficial effects: the client can display the editing description interface in advance to prompt the user of the video editing process through the editing description interface, so that the user can perform triggering operation according to the editing description interface to enable the client to display the video selection interface, and obtain the video to be edited, which is specified by the user, through the operation of selecting the video by the user. Through the implementation mode, a user can know the video clipping process through the clipping description interface displayed by the client side, so that the use experience of the user is improved.
In an implementable manner, as shown in fig. 10 in conjunction with fig. 7, after S701 above, the method further includes S802.
And S802, displaying a video processing interface.
The video processing interface is used for prompting a user to clip a video to be clipped to obtain a target video.
Alternatively, as shown in fig. 11, after the user triggers the client to send the video to be clipped to the server for clipping, the client may display a video processing interface to prompt the user that the video to be clipped is being processed.
The technical scheme provided by the embodiment at least has the following beneficial effects: after the client acquires the video to be edited specified by the user and sends the video to be edited to the server, a video processing interface can be displayed to prompt the user that the video to be edited is being edited to obtain the target video. Through the implementation mode, the user can know the video clipping process, and the use experience of the user is improved.
In an implementable manner, as shown in fig. 12 in conjunction with fig. 7, after the above S703, the method further includes S901-S904.
And S901, displaying a video preview interface.
Wherein the video preview interface includes a cover of the target video.
Optionally, after the server clips the video to be clipped to obtain the target video and sends the target video to the client, the client may display a video preview interface, so that the user may view the target video, and if the target video meets the expected effect of the user, the target video may be released to the short video platform.
And S902, responding to the playing operation of the user for the target video, and playing the target video.
Optionally, the user may trigger the client to play the target video through a play operation, so as to browse the target video and determine whether the target video meets an expected effect of the user.
And S903, displaying the sharing interface after the playing is finished.
And S904, responding to the sending operation of the user based on the sharing interface, and publishing the target video.
Optionally, after the target video is played, a sharing interface may be displayed, and in the sharing interface, the user may trigger the client to publish the target video to the short video platform through operation.
It is understood that after a user triggers a client to publish a target video to a short video platform, other users may browse the target video.
The technical scheme provided by the embodiment at least has the following beneficial effects: after receiving the target video, the client can display a video preview interface to play the target video according to the playing operation of the user, so that the user can browse the effect of the target video in advance, and release the target video according to the sending operation of the user after the target video reaches the satisfaction degree of the user. Through the implementation mode, the user can know the video clipping process, and the use experience of the user is improved.
In an implementable manner, in the process of playing the target video, when any one of the plurality of target frame images is played, the image after the target processing is displayed; the target processing includes at least one of: adding a shake (jitter) effect, performing picture enlargement processing, performing picture reduction processing, and adding a color rendering effect.
The technical scheme provided by the embodiment at least has the following beneficial effects: the plurality of target frame images included in the target video are images that have undergone the target processing, that is, the video carries various added effects, so the quality of the clipped video can be improved.
In one implementable manner, the target video includes at least one of: a target background picture, a video cover, and a target copy; the target video is obtained by masking the synthesized video based on the target background picture, the video cover is a key frame image determined from the plurality of target frame images, the target copy is copy determined according to the picture content included in the target video and displayed on the video cover in the target font, and the target copy is used to describe the target video.
For example, as shown in fig. 13, a target background picture of a target video is shown, and as shown in fig. 14, a video cover of the target video is shown.
The technical scheme provided by the embodiment at least has the following beneficial effects: the obtained target video includes the target background picture, the video cover, and the target copy, so the quality of the clipped video is improved.
In an implementable manner, when the target video is published, a target link is sent synchronously, the target link being a link corresponding to the application program that generates the video to be clipped.
Optionally, a download link of the game corresponding to the target video may be attached automatically when the target video is published, so that other users can download and install the corresponding game through the link; after they do so, the user (i.e., the author of the target video) can earn revenue.
Illustratively, fig. 15 is a schematic diagram of the effect after the target video is published to the short video platform.
The technical scheme provided by the embodiment at least has the following beneficial effects: when the target video is published, the link of the application program corresponding to the video to be clipped is sent synchronously, so the diversity of the published video can be improved.
The embodiments of the present disclosure specifically disclose that, after a user saves a game video, a high-quality clipped video can be obtained without manually clipping the game video, so that the user's clipped work can be uploaded to the short video platform with a game download link attached; after other users download the game through the link, the author can earn revenue. Meanwhile, the short video platform can obtain more high-quality videos through this method, so that more users see the corresponding promotional videos.
It will be appreciated that the above method may be implemented by a video clipping device. The video editing apparatus includes hardware structures and/or software modules for performing the respective functions in order to realize the above functions. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments.
The video editing apparatus and the like can be divided into functional modules according to the method example, for example, the functional modules can be divided into corresponding functions, or two or more functions can be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the embodiments of the present disclosure is illustrative, and is only one division of logic functions, and there may be another division in actual implementation.
FIG. 16 is a schematic diagram illustrating a structure of a video clipping device according to an example embodiment. Referring to fig. 16, the video clipping device 100 may include: a receiving unit 1001, a determining unit 1002, an acquiring unit 1003, a processing unit 1004, and a transmitting unit 1005.
A receiving unit 1001 configured to perform receiving a video clip request and a video to be clipped, the video clip request being for requesting to clip the video to be clipped; for example, the receiving unit 1001 may be configured to perform the step in step 201 in fig. 2.
A determining unit 1002 configured to perform determining a plurality of target frame images in a video to be clipped; for example, the determining unit 1002 may be configured to perform the step in step 202 in fig. 2.
An acquiring unit 1003 configured to perform acquiring at least one video segment from the video to be clipped according to the plurality of target frame images; the at least one video segment is a video segment corresponding to the plurality of target frame images, and one video segment comprises at least one target frame image; for example, the acquiring unit 1003 may be configured to perform the step in step 202 in fig. 2.
The processing unit 1004 is configured to perform video synthesis processing on at least one video segment to obtain a synthesized video, and add multimedia resources to the synthesized video to obtain a target video; the multimedia resource includes at least one of: background music, background pictures, description texts and voice playing information; for example, the processing unit 1004 may be configured to perform the steps in step 203 in fig. 2.
A transmission unit 1005 configured to perform transmission of the target video. For example, the transmitting unit 1005 may be configured to perform the step in step 203 in fig. 2.
Optionally, the determining unit 1002 is configured to perform determining a plurality of target frame images in the video to be clipped through a target algorithm; the video to be edited is a target type video, the target algorithm is an algorithm corresponding to the target type video, and different types of videos correspond to different algorithms. For example, the determining unit 1002 may be configured to perform the step in step 2021 in fig. 3.
Optionally, the determining unit 1002 is configured to determine, when target content in the video to be clipped is detected, frame images corresponding to the target content as the plurality of target frame images; the target content includes at least one of: target text, a target picture, and target audio, and the target content is preset content corresponding to the target algorithm. For example, the determining unit 1002 may be configured to perform the step in step 2022 in fig. 4.
Optionally, the determining unit 1002 is configured to determine at least one playing period according to a playing time corresponding to each of the plurality of target frame images; for example, the determining unit 1002 may be configured to perform the step in step 301 in fig. 5.
A determining unit 1002 configured to perform determining at least one video segment corresponding to at least one playing period in the video to be clipped as the at least one video segment. For example, the determining unit 1002 may be configured to perform the steps in step 302 in fig. 5.
Optionally, the multimedia resource comprises background music; a determining unit 1002 configured to execute determining, by the target algorithm, target background music satisfying a first preset condition from the background music corresponding to the historical videos; the historical videos are a plurality of target type videos published before the current moment, and the first preset condition includes any one of the following: the background music applied most frequently among the historical videos, or the background music applied in the most recently published historical video; for example, the determining unit 1002 may be configured to perform the step in step 401 in fig. 6.
A determination unit 1002 configured to perform determination of target background music as background music of a composite video; for example, the determining unit 1002 may be configured to perform the steps in step 402 in fig. 6.
And the processing unit 1004 is configured to execute and clip the composite video to obtain the target video. For example, the processing unit 1004 may be used to perform the steps in step 402 in fig. 6.
Optionally, the processing unit 1004 is configured to perform identifying a beat feature and a sound feature of the target background music.
A determination unit 1002 configured to perform determination of a plurality of target point locations in the target background music according to the beat feature and the sound feature; the plurality of target points are used for indicating the time of the stuck point in the target background music.
And the processing unit 1004 is configured to perform matching of at least one video segment included in the composite video and the music segments corresponding to the plurality of target points to obtain a target video.
Optionally, the processing unit 1004 is configured to perform video synthesis processing on at least one video segment, and perform target processing on a plurality of target frame images in the at least one video segment to obtain a synthesized video; the target processing includes at least one of: adding a shake (jitter) effect, performing picture enlargement processing, performing picture reduction processing, and adding a color rendering effect.
Optionally, the multimedia resource comprises a background picture; an acquiring unit 1003 configured to perform acquiring a target background picture and determine the target background picture as a background picture of the composite video.
And a processing unit 1004 configured to perform masking processing on the composite video based on the target background picture to obtain the target video, wherein the aspect ratio of the target video is different from that of the composite video.
Optionally, the at least one video clip is a game type video clip; a determination unit 1002 configured to perform determination of a key frame image from among the plurality of target frame images, and determine the key frame image as the video cover of the target video; the key frame image is an image, among the plurality of target frame images, that satisfies a second preset condition, and the second preset condition is any one of the following: the frame image with the highest highlight score, a randomly extracted frame image, the first frame image, or the last frame image.
A determining unit 1002 configured to perform determining a target copy from picture contents included in the target video.
A processing unit 1004 configured to execute displaying the target copy on the video cover in a target font; the target copy is used to describe the target video.
FIG. 17 is a schematic diagram illustrating a structure of a video clipping device according to an example embodiment. Referring to fig. 17, the video clipping device 110 may include: an acquisition unit 1101, a transmission unit 1102, a reception unit 1103, and a display unit 1104.
An acquisition unit 1101 configured to perform acquisition of a video to be clipped specified by a user; for example, the obtaining unit 1101 may be configured to perform the steps in step 701 in fig. 7.
A transmitting unit 1102 configured to perform transmitting a video to be clipped; for example, the sending unit 1102 may be configured to perform the steps in step 702 in fig. 7.
A receiving unit 1103 configured to perform receiving a target video; for example, the receiving unit 1103 may be configured to perform the step in step 703 in fig. 7.
Optionally, a display unit 1104 configured to perform displaying the clipping specification interface; the editing description interface is used for prompting the video editing process; for example, the display unit 1104 may be used to perform the steps in step 801 in fig. 8.
A display unit 1104 configured to perform display of a video selection interface in response to a user's trigger operation based on the clip description interface; for example, display unit 1104 may be used to perform the steps in step 7011 in fig. 8.
An obtaining unit 1101 configured to perform, in response to an operation of selecting a video performed by the user based on the video selection interface, acquiring the video to be clipped specified by the user. For example, the obtaining unit 1101 may be configured to perform the step in step 7012 in fig. 8.
Optionally, the display unit 1104 is configured to execute displaying a video processing interface, where the video processing interface is used for prompting a user that a target video is obtained by performing a clipping process on a video to be clipped. For example, the display unit 1104 may be used to perform the steps in step 802 in fig. 10.
Optionally, a display unit 1104 configured to perform displaying a video preview interface, the video preview interface including a cover of the target video; for example, the display unit 1104 may be used to perform the steps in step 901 in fig. 12.
A display unit 1104 configured to perform playing of the target video in response to a playing operation of the target video by the user; for example, the display unit 1104 may be used to perform the steps in step 902 in fig. 12.
A display unit 1104 configured to perform displaying a sharing interface after the playing is completed; for example, the display unit 1104 may be used to perform the steps in step 903 in fig. 12.
A sending unit 1102 configured to perform, in response to a sending operation performed by the user based on the sharing interface, publishing the target video. For example, the sending unit 1102 may be configured to perform the step in step 904 in fig. 12.
Optionally, in the process of playing the target video, when any one of the plurality of target frame images is played, the image after the target processing is displayed; the target processing includes at least one of: adding a shake (jitter) effect, performing picture enlargement processing, performing picture reduction processing, and adding a color rendering effect.
Optionally, the target video comprises at least one of: a target background picture, a video cover, and a target copy; the target video is obtained by masking the synthesized video based on the target background picture, the video cover is a key frame image determined from the plurality of target frame images, the target copy is copy determined according to the picture content included in the target video and displayed on the video cover in the target font, and the target copy is used to describe the target video.
Optionally, when the target video is published, a target link is sent synchronously, the target link being a link corresponding to the application program that generates the video to be clipped.
As above, the embodiment of the present disclosure may perform division of functional modules on an electronic device according to the above method example. The integrated module can be realized in a hardware form, and can also be realized in a software functional module form. In addition, it should be further noted that the division of the modules in the embodiments of the present disclosure is schematic, and is only a logic function division, and there may be another division manner in actual implementation. For example, the functional blocks may be divided for the respective functions, or two or more functions may be integrated into one processing block.
With regard to the video clipping apparatus in the above-described embodiment, the specific manner in which the respective modules perform operations has been described in detail in the embodiment related to the method, and will not be elaborated upon here.
Fig. 18 is a schematic structural diagram of a video clip device 60 provided by the present disclosure. As shown in fig. 18, the video clipping device 60 may include at least one processor 601 and a memory 603 for storing instructions executable by the processor 601. Wherein the processor 601 is configured to execute instructions in the memory 603 to implement the video clipping method in the above embodiments.
Additionally, video clip device 60 may also include a communication bus 602 and at least one communication interface 604.
The processor 601 may be a GPU, a micro-processing unit, an ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the disclosed aspects.
The communication bus 602 may include a path that conveys information between the aforementioned components.
The communication interface 604 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), etc.
The memory 603 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and connected to the processing unit by a bus. The memory may also be integrated with the processing unit as a volatile storage medium in the GPU.
The memory 603 is used for storing instructions for executing the disclosed solution, and is controlled by the processor 601. The processor 601 is configured to execute instructions stored in the memory 603 to implement the functions of the disclosed method.
In particular implementations, processor 601 may include one or more GPUs, such as GPU0 and GPU1 in fig. 18, as one embodiment.
In particular implementations, video clipping device 60 may include multiple processors, such as processor 601 and processor 607 in FIG. 18, as an example. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In particular implementations, video clipping device 60 may also include an output device 605 and an input device 606, as one embodiment. Output device 605 is in communication with processor 601 and may display information in a variety of ways. For example, the output device 605 may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display device, a Cathode Ray Tube (CRT) display device, a projector (projector), or the like. The input device 606 is in communication with the processor 601 and may accept user input in a variety of ways. For example, the input device 606 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
Those skilled in the art will appreciate that the configuration shown in FIG. 18 does not constitute a limitation of video clipping device 60, and may include more or fewer components than shown, or combine certain components, or employ a different arrangement of components.
The present disclosure also provides a computer-readable storage medium having instructions stored thereon, where the instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video clipping method provided by the embodiments of the present disclosure.
The disclosed embodiments also provide a computer program product containing instructions, which when run on an electronic device, cause the electronic device to perform the video clipping method provided by the disclosed embodiments.
The embodiment of the present disclosure also provides a communication system, as shown in fig. 1, the system includes a server 11, a client 12, and a database 13. The server 11, the client 12, and the database 13 are respectively configured to execute corresponding steps in the foregoing embodiments of the present disclosure, so that the communication system solves technical problems solved by the embodiments of the present disclosure and achieves technical effects achieved by the embodiments of the present disclosure, which are not described herein again.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of video clipping, the method comprising:
receiving a video clip request and a video to be clipped, wherein the video clip request is used for requesting to clip the video to be clipped;
determining a plurality of target frame images in the video to be edited, and acquiring at least one video segment from the video to be edited according to the plurality of target frame images; the at least one video segment is a video segment corresponding to the plurality of target frame images, and one video segment comprises at least one target frame image;
performing video synthesis processing on the at least one video clip to obtain a synthesized video, and adding multimedia resources in the synthesized video to obtain a target video; the multimedia resource includes at least one of: background music, background pictures, description texts and voice playing information;
and sending the target video.
2. The method of claim 1, wherein the determining a plurality of target frame images in the video to be edited comprises:
determining the plurality of target frame images in the video to be edited through a target algorithm; the video to be edited is a target type video, the target algorithm is an algorithm corresponding to the target type video, and different types of videos correspond to different algorithms.
3. The method of claim 2, wherein the determining the plurality of target frame images in the video to be edited by a target algorithm comprises:
when target content in the video to be clipped is detected, determining frame images corresponding to the target content as the plurality of target frame images; the target content includes at least one of: target text, a target picture, and target audio, and the target content is preset content corresponding to the target algorithm.
4. The method according to claim 1, wherein the obtaining at least one video segment from the video to be edited according to the plurality of target frame images comprises:
determining at least one playing time interval according to the playing time corresponding to each target frame image in the plurality of target frame images;
and determining the video segment corresponding to the at least one playing period in the video to be edited as the at least one video segment.
5. A method of video clipping, the method comprising:
acquiring a video to be edited, which is specified by a user;
sending the video to be edited;
receiving a target video;
the target video is a video obtained after the video to be clipped is clipped; the target video is a video obtained by adding multimedia resources in a composite video, the composite video is a video obtained by performing video composite processing on at least one video segment, and the at least one video segment is a video segment corresponding to a plurality of target frame images in the video to be clipped.
6. A video clipping apparatus, comprising:
a receiving unit configured to perform receiving a video clip request and a video to be clipped, the video clip request requesting to clip the video to be clipped;
a determining unit configured to perform determining a plurality of target frame images in the video to be clipped;
an acquisition unit configured to perform acquisition of at least one video segment from the video to be clipped according to the plurality of target frame images; the at least one video segment is a video segment corresponding to the plurality of target frame images, and one video segment comprises at least one target frame image;
the processing unit is configured to perform video synthesis processing on the at least one video segment to obtain a synthesized video, and add multimedia resources in the synthesized video to obtain a target video; the multimedia resource includes at least one of: background music, background pictures, description texts and voice playing information;
a transmitting unit configured to perform transmitting the target video.
7. A video clipping apparatus, comprising:
an acquisition unit configured to perform acquisition of a video to be clipped specified by a user;
a transmitting unit configured to perform transmitting the video to be clipped;
a receiving unit configured to perform receiving a target video;
the target video is a video obtained after the video to be clipped is clipped; the target video is a video obtained by adding multimedia resources in a composite video, the composite video is a video obtained by performing video composite processing on at least one video segment, and the at least one video segment is a video segment corresponding to a plurality of target frame images in the video to be clipped.
8. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video clipping method of any of claims 1-4 or claim 5.
9. A computer-readable storage medium having instructions stored thereon, wherein the instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video clipping method of any of claims 1-4 or claim 5.
10. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the video clipping method of any of claims 1-4 or claim 5.

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination