CN109089127B - Video splicing method, device, equipment and medium


Info

Publication number
CN109089127B
Authority
CN
China
Prior art keywords: video, target, wonderful, highlight, segments
Legal status
Active
Application number
CN201810752191.5A
Other languages
Chinese (zh)
Other versions
CN109089127A (en)
Inventor
郑伟
张文明
陈少杰
Current Assignee
Wuhan Douyu Network Technology Co Ltd
Original Assignee
Wuhan Douyu Network Technology Co Ltd
Application filed by Wuhan Douyu Network Technology Co Ltd
Priority to CN201810752191.5A
Publication of CN109089127A
Application granted
Publication of CN109089127B

Classifications

    • H04N21/2187 Live feed
    • H04N21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/23424 Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • H04N21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play
    • H04N21/4781 Supplemental services: Games
    • H04N21/8456 Structuring of content by decomposing the content in the time domain, e.g. in time segments
    • H04N21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a video splicing method, device, equipment and medium. The method comprises the following steps: determining N highlight video segments from a target video by feature matching and/or bullet screen (danmaku) information analysis, where N is greater than 1; extracting the N highlight video segments from the target video; splicing the N highlight video segments into one video to form a spliced video; and, when a request for the spliced video sent by a client is received, sending the spliced video to the client for playback. The method, device, equipment and medium solve the prior-art technical problems that viewers watching historical game live videos waste their viewing time and have a low probability of obtaining the highlight video clips, achieving the technical effect of saving viewing time.

Description

Video splicing method, device, equipment and medium
Technical Field
The invention relates to the technical field of computers, in particular to a video splicing method, a video splicing device, video splicing equipment and a video splicing medium.
Background
Currently, with the progress of network communication technology and the increasing speed of broadband networks, webcast live streaming is ever more widely developed and applied. To prevent users from missing an anchor's highlight live moments, video websites often record the anchor's historical live videos and provide them for users to watch.
Game live streams contain some highlight game moments, such as clips of a successful kill, clips of a successful collection, or in-game wedding clips in social games. These highlight video segments are usually the most exciting parts of the live stream, yet to see them a viewer typically has to watch the whole historical live video from the beginning to make sure no highlight is missed. As a result, the viewer wastes considerable time on video content of little interest to them, and can still easily miss the highlight moments.
The prior art therefore suffers from the technical problems that, when watching historical game live videos, viewers' watching time is wasted and the probability of a viewer obtaining the highlight video clips is low.
Disclosure of Invention
The invention provides a video splicing method, device, equipment and medium to solve the prior-art technical problems that, when watching historical game live videos, viewers' watching time is wasted and the probability of a viewer obtaining the highlight video clips is low.
In a first aspect, the present invention provides a video splicing method, including:
determining N highlight video segments from a target video by feature matching and/or bullet screen information analysis, where N is greater than 1;
extracting the N highlight video segments from the target video;
splicing the N highlight video segments into one video to form a spliced video;
and, when a request for the spliced video sent by a client is received, sending the spliced video to the client for playback.
Optionally, determining the N highlight video segments in the target video includes: setting feature information according to the video category of the target video; performing feature matching on the target video to determine target frames in the target video that match the feature information; and determining the N highlight video segments in the target video according to the target frames and a preset highlight video capture rule, where each highlight video segment contains a target frame and the capture rule corresponds to the feature information. Alternatively: acquiring the target video and bullet screen information, where the bullet screen information includes the bullet screen counts of the target video during historical playback; and determining, according to the bullet screen information, the N highlight video segments of the target video whose bullet screen conditions meet preset requirements.
Optionally, extracting the N highlight video segments from the target video includes: acquiring attribute information of the target video; judging from the attribute information whether the target video requires the precise timestamp extraction mode; if so, decoding the target video and extracting the N highlight video segments from the decoded target video according to the highlight video capture rule and the timestamp information of the decoded video; if not, searching the undecoded target video for the video units whose timestamps are closest to the timestamps of the target frames, where the target video comprises M video units and M is a positive integer greater than 1, and determining and extracting the highlight video segments from the closest units.
Optionally, splicing the N highlight video segments into one video to form a spliced video includes one of the following: splicing the N highlight video segments into one video and inserting a prompt video before each highlight video segment, where the prompt video describes the highlight video segment about to be played; splicing the N highlight video segments into one video and inserting an interval video between every two highlight video segments, where the interval video indicates that the previous highlight segment has finished playing and the next is about to play; or splicing the N highlight video segments into one video and superimposing prompt information on the opening portion of each highlight video segment, where the prompt information describes the highlight video segment being played.
Optionally, determining the N highlight video segments in the target video is implemented in a GCR-word layer, and extracting the N highlight video segments from the target video is implemented in a Media-Worker layer.
In a second aspect, a video splicing device is provided, including:
a determining unit, configured to determine N highlight video segments from a target video by feature matching and/or bullet screen information analysis, where N is greater than 1;
an extracting unit, configured to extract the N highlight video segments from the target video;
a splicing unit, configured to splice the N highlight video segments into one video to form a spliced video;
and a sending unit, configured to send the spliced video to a client for playback when a request for the spliced video sent by the client is received.
Optionally, the splicing unit is further configured to do one of the following: splice the N highlight video segments into one video and insert a prompt video before each highlight video segment, where the prompt video describes the highlight video segment about to be played; splice the N highlight video segments into one video and insert an interval video between every two highlight video segments, where the interval video indicates that the previous highlight segment has finished playing and the next is about to play; or splice the N highlight video segments into one video and superimpose prompt information on the opening portion of each highlight video segment, where the prompt information describes the highlight video segment being played.
Optionally, the extracting unit is further configured to: acquire attribute information of the target video; judge from the attribute information whether the target video requires the precise timestamp extraction mode; if so, decode the target video and extract the N highlight video segments from the decoded target video according to the highlight video capture rule and the timestamp information of the decoded video; if not, search the undecoded target video for the video units whose timestamps are closest to the timestamps of the target frames, where the target video comprises M video units and M is a positive integer greater than 1, and determine and extract the highlight video segments from the closest units.
One or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:
The method, device, equipment and medium provided by the embodiments of the application determine and extract N highlight video segments from the target video by feature matching and/or bullet screen information analysis, splice the N highlight video segments into one video to form a spliced video, and, when a request for the spliced video sent by a client is received, send the spliced video to the client for playback. A viewer therefore need not watch the entire target video and yet misses none of the highlight video segments, which effectively saves the viewer's watching time and lets the viewer obtain all the highlight segments in a short time.
Further, whether the target video requires the precise timestamp extraction mode is judged from its attribute information. When it does, the target video is decoded and the highlight video segments are extracted according to the timestamp information of the decoded video; when it does not, the video units whose timestamps are closest to those of the target frames are looked up directly in the undecoded target video. This effectively shortens the extraction time for videos that need no precise extraction while preserving accuracy for the videos that do.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the invention clearer, and to make the above and other objects, features and advantages more readily understandable, embodiments of the invention are described below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of a video splicing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an extraction method that does not use the precise timestamp extraction mode, according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a video splicing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
The embodiments of the application provide a video splicing method, device, equipment and medium to solve the prior-art technical problems that, when watching historical game live videos, viewers' watching time is wasted and the probability of a viewer obtaining the highlight video clips is low. The technical effect achieved is that the viewer's watching time is saved and the viewer can obtain all the highlight video clips in a short time.
The technical scheme in the embodiment of the application has the following general idea:
the method comprises the steps of determining and extracting N wonderful video segments from a target video by adopting feature matching and/or bullet screen information analysis, splicing the N wonderful video segments into one video to form a spliced video, and sending the spliced video to a client for playing when receiving a request sent by the client for requesting to acquire the spliced video, so that a viewer can see all most wonderful video segments which are not missed at all only by directly watching the spliced video without completely watching the whole target video, thereby effectively saving the watching time of the viewer and enabling the viewer to acquire all wonderful video segments in a short time.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
The present embodiment provides a video splicing method, as shown in FIG. 1, including:
Step S101, determining N highlight video segments from the target video by feature matching and/or bullet screen information analysis, where N is greater than 1;
Step S102, extracting the N highlight video segments from the target video;
Step S103, splicing the N highlight video segments into one video to form a spliced video;
Step S104, when a request for the spliced video sent by a client is received, sending the spliced video to the client for playback.
In the embodiment of the present application, the method may be applied on a server, or on a viewer side or an anchor side, which is not limited herein; the implementing device may be an electronic device such as a smartphone, desktop computer, notebook computer or tablet computer, which is likewise not limited herein.
The specific implementation steps of the method provided in this embodiment are described in detail below with reference to FIG. 1:
First, step S101 is executed: N highlight video segments are determined from the target video by feature matching and/or bullet screen information analysis, where N is greater than 1.
To determine N highlight video segments from the target video by feature matching, the specific implementation is as follows:
setting feature information according to the video category of the target video; performing feature matching on the target video to determine target frames in the target video that match the feature information; and determining the N highlight video segments in the target video according to the target frames and a preset highlight video capture rule, where each highlight video segment contains a target frame and the capture rule corresponds to the feature information.
Specifically, first, feature information is set according to the video category of the target video.
It should be noted that the target video may be a video uploaded by the anchor side, a video stored by the server during an earlier live broadcast, or a live video currently being broadcast. If the target video is a live video currently being broadcast, the method provided by this embodiment performs target-frame matching and highlight-segment extraction on the received live video stream in real time during the broadcast.
In a specific implementation, different video categories of the target video correspond to different feature information. The feature information may be voice feature information or image feature information, which is not limited herein; the two cases are illustrated separately below.
first, the feature information is image feature information.
According to the video category of the target video, feature information corresponding to that category is determined from a preset feature information base, where the feature information is extracted from highlight images, i.e. images taken from videos of that category. In other words, highlight video clips of the same type in the target video tend by default to contain certain common pictures, and the feature information may be a common image feature extracted from those pictures.
For example, when the target video is a game video containing kill scenarios, the feature information is set to information extracted from the game's kill-success screen. Specifically, after a successful kill, an image prompting the success, such as the text "KO", a "kills +1" indicator, or a blood-splatter pattern, is often displayed in the video, and the features of such images can be used as the feature information.
Likewise, when the target video is a game video containing collection scenarios, the feature information is set to information extracted from the collection-success screen. Specifically, after a successful collection, images indicating the success, such as an "items +1" indicator or the pattern of the collected item, are often displayed in the video, and these image features can be used as the feature information.
Second, the feature information is voice feature information.
According to the video category of the target video, feature information corresponding to that category is determined from a preset feature information base, where the feature information is extracted from the video's audio. In other words, highlight video clips of the same type in the target video tend by default to contain certain common voice information, and the feature information may be a common voice feature extracted from that information.
For example, when the target video is a game video containing kill scenarios, the feature information is set to information extracted from the game's kill-success audio. Specifically, after a successful kill, a sound prompting the success is often played with the video, for example a "KO" announcement, a scream, or another characteristic sound effect, and such sounds can be used as the feature information.
Similarly, when the target video is a lottery-draw video, the feature information is set to voice information extracted from the lottery-draw video. Specifically, during a draw, audio prompting the draw, such as a specific piece of music or a phrase like "about to be revealed", may be played with the video, and these voice features can be used as the feature information.
Of course, in a specific implementation the feature information is not limited to the above two types; it may also be, for example, time information, which is not limited herein and not enumerated further.
In a specific implementation, one or more kinds of feature information can be set for a target video according to its category and content requirements, so that highlight video segments of one or more kinds of content can subsequently be extracted.
Next, feature matching is performed on the target video to determine the target frames in the target video that match the feature information.
In a specific implementation, different feature information calls for different matching methods:
If the feature information is image feature information, it is matched against every frame image of the target video, or against frame images sampled at intervals, and when a matched frame image contains an image corresponding to the feature information, that frame is determined to be a target frame. For example, if the feature information is a blood-splatter pattern and a frame containing that pattern is matched, the frame image is used as a target frame.
If the feature information is voice feature information, it is matched against the audio of the target video, and when a piece of audio is matched to the feature information, the frame corresponding to that audio, specifically the frame whose timestamp information is consistent with the timestamp information of the audio, is determined to be a target frame. For example, if the feature information is a specific prompt sound and audio containing that sound is matched, the frame with the same timestamp as that audio is used as a target frame.
Of course, feature matching is not limited to the above two methods; other methods are possible and are not enumerated here.
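As a concrete illustration of the image-matching branch, the following is a minimal sketch assuming OpenCV (cv2) is used, that frames are sampled at roughly one-second intervals rather than exhaustively, and that template.png holds a hypothetical feature image such as a "KO" banner; the patent does not prescribe a library, a sampling scheme, or template matching specifically.

```python
import cv2

def find_target_frames(video_path, template_path, threshold=0.8, step_sec=1.0):
    """Return timestamps (seconds) of sampled frames matching the feature image."""
    # the template must be smaller than a video frame for matchTemplate to work
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(1, int(fps * step_sec))  # sample at interval frames, not every frame
    hits = []
    frame_idx = 0
    while frame_idx < total:
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # normalized cross-correlation against the feature image
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        if max_val >= threshold:
            hits.append(frame_idx / fps)  # this sampled frame is a target frame
        frame_idx += step
    cap.release()
    return hits
```

Sampling at intervals trades a little temporal precision for far less decoding work, matching the interval-frame option mentioned above.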
Next, according to the target frame and the preset highlight video capture rule, a highlight video segment containing the target frame is determined in the target video, where the capture rule corresponds to the feature information.
In the embodiment of the present application, the preset highlight video capture rule determines how long, in playing time, the start frame of the highlight segment lies before the target frame and how long its end frame lies after it; in the target video, the playing position of the start frame is before or equal to the target frame, and the playing position of the end frame is after or equal to the target frame.
Specifically, the highlight video capture rule corresponds to the feature information; that is, each kind of feature information may have its own capture rule. For example:
Suppose the feature information indicates a successful kill in a game video containing kill scenarios. Considering that the exciting aiming and fighting probably occur within about one minute before the kill succeeds, the capture rule for this feature information can be set as: determine the video from 60 s before the target frame up to the target frame as the highlight video segment.
Suppose the feature information indicates the start of a draw in a lottery-type video. Considering that a draw takes about 180 s, the capture rule for this feature information can be set as: determine the video from the target frame to 180 s after it as the highlight video segment.
Of course, besides fixing the segment length and its temporal relation to the target frame by feature information type, a capture rule can be determined in other ways. For example, several kinds of feature information may be set, and the video between the target frames of two particular kinds may be taken as the highlight segment. If a lottery-type video is given feature information A indicating the start of a draw and feature information B indicating its end, and target frame A and target frame B are matched for them respectively, the corresponding capture rule can be set as: determine the video between target frame A and target frame B as the highlight video segment.
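To make the boundary arithmetic concrete, here is a minimal sketch of applying such capture rules to matched target frames; the rule table, the clamping to the video's duration, and the merging of overlapping segments are illustrative assumptions rather than requirements of the method.

```python
# Hypothetical capture rules: feature kind -> (seconds before target, seconds after target)
CAPTURE_RULES = {
    "kill_success": (60.0, 0.0),    # highlight ends at the target frame
    "lottery_start": (0.0, 180.0),  # highlight starts at the target frame
}

def segments_from_targets(targets, duration):
    """targets: list of (feature_kind, target_time_sec); returns merged (start, end) pairs."""
    raw = []
    for kind, t in targets:
        before, after = CAPTURE_RULES[kind]
        start = max(0.0, t - before)      # clamp to the beginning of the video
        end = min(duration, t + after)    # clamp to the end of the video
        raw.append((start, end))
    raw.sort()
    merged = []
    for start, end in raw:
        if merged and start <= merged[-1][1]:  # overlapping segments become one
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```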
To determine N highlight video segments from the target video by bullet screen information analysis, the specific implementation is as follows:
acquiring the target video and bullet screen information, where the bullet screen information includes the bullet screen counts of the target video during historical playback; and determining, according to the bullet screen information, the N highlight video segments of the target video whose bullet screen conditions meet preset requirements.
Specifically, the target video and the bullet screen information are first acquired, the bullet screen information including the bullet screen counts of the target video during historical playback.
It should be noted that the target video may be a video uploaded by the anchor side, a video stored by the server during an earlier live broadcast, or a live video currently being broadcast. If the target video is a live video currently being broadcast, the method provided by this embodiment acquires and evaluates the real-time bullet screen information of the received live video stream and extracts highlight video segments during the broadcast.
In a specific implementation, the bullet screen information may include, for each frame of the target video as it was broadcast, the bullet screen count, the bullet screen content, the number of bullet screen senders, the number of characters sent, and so on.
Next, the highlight video segments of the target video whose bullet screen conditions meet the preset requirements are determined according to the bullet screen information.
In the embodiment of the application, a highlight video segment is determined by finding, from the bullet screen information, the target frames of the target video that meet a preset requirement, and then determining the highlight video segment, which contains those target frames, according to the target frames and the preset highlight video capture rule.
The preset requirement may be that the number of bullet screens displayed while the target frame plays exceeds a preset value, or that the growth rate of the bullet screen count exceeds a preset value, which is not limited herein.
In the embodiment of the present application there are various ways to determine a highlight video segment from the bullet screen information; three are listed as examples below, with a sketch of the third following the list.
First, the bullet screen count exceeds a preset value.
That is, according to the bullet screen information, the highlight video segments of the target video whose bullet screen count exceeds the preset number are determined.
Specifically, the frames during which the number of displayed bullet screens exceeds the preset value can be identified, extracted, and arranged in time order into a highlight video segment.
Determining highlights by a bullet screen count above a preset value effectively identifies the segments with high user participation.
Second, the target frame with the largest bullet screen count.
That is, according to the bullet screen information, the target frame with the largest bullet screen count in the target video is determined, and the highlight video segment, which contains that target frame, is then determined from it.
Specifically, to avoid the discontinuity caused by extracting only isolated frames, the target frame whose bullet screen count is the maximum, or exceeds a certain value, can be determined, and the target frame together with the video for a period before and after it is taken as the highlight video segment. For example, the target frame and the 30 s of video on each side of it can be taken as the highlight segment.
Third, the bullet screen count grows quickly.
That is, according to the bullet screen information, the highlight video segments of the target video whose bullet screen count grows faster than a preset rate are determined.
Specifically, a bullet screen growth rate can be computed for each frame from its own bullet screen count and the counts of the frames around it; the frames whose growth rate exceeds the preset rate are taken as target frames, extracted, and arranged in time order into a highlight video segment. For example, the growth rate of a frame may be defined as the ratio of the bullet screen count of the following frame to that of the frame itself, or as the ratio of the total bullet screens displayed in the 5 seconds after the frame to the total displayed in the 5 seconds before it, which is not limited herein.
Determining highlight segments by bullet screen growth effectively identifies the key video segments that prompted large numbers of users to send bullet screens.
Of course, the determination is not limited to these three methods; a highlight segment may also be determined, for example, from the total number of characters sent as bullet screens, which is not limited herein and not enumerated further.
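As promised above, a minimal sketch of the third method, assuming per-second bullet screen counts are available as a plain list; the 5-second windows echo the example in the text, while the ratio threshold is an assumed parameter.

```python
def danmaku_spike_times(counts_per_sec, window=5, min_ratio=3.0):
    """counts_per_sec[i] = bullet screens sent during second i of the video.
    Returns the seconds at which the count in the following window is at
    least min_ratio times the count in the preceding window."""
    spikes = []
    for t in range(window, len(counts_per_sec) - window):
        before = sum(counts_per_sec[t - window:t])
        after = sum(counts_per_sec[t:t + window])
        # avoid division by zero on quiet stretches of the stream
        if before > 0 and after / before >= min_ratio:
            spikes.append(t)
    return spikes

# Each spike time can then be fed to a capture rule, e.g. taking the
# 30 s of video on either side of the spike as a highlight segment.
```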
Then, step S102 is executed: the N highlight video segments are extracted from the target video.
In a specific implementation, once a highlight video segment is determined, its start timestamp and end timestamp are known, so the segment between the two timestamps is extracted from the target video.
Considering that extracting highlight segments can consume substantial computing and processing resources, this embodiment further provides a low-resource extraction method, detailed as follows:
Referring to FIG. 2: since the target video is a live or historically live video, it is transmitted as interleaved video units and audio units, each carrying its own timestamp information. In this embodiment the target video segment is therefore not decoded. The live video stream is pulled directly through steps S201 to S204; after stream demultiplexing, the video unit whose timestamp information is closest to the timestamp information of the target frame is looked up in the undecoded target video, the highlight segment is determined and extracted from that closest unit, and the extracted segment is then assembled and stored by stream multiplexing in steps S205 to S206. For example, as shown in FIG. 2, suppose the timestamps of video unit 3 and video unit 4 are closest to the determined timestamp information of the highlight segment: video units 3 and 4 are extracted by demultiplexing, the audio units whose timestamp information corresponds to them are extracted as well, and the video and audio units are recombined by multiplexing into the complete extracted highlight segment.
With this extraction method the whole video does not need to be decoded, which saves considerable computing and processing resources and increases processing speed.
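The nearest-unit lookup itself reduces to a binary search over the demuxed units' timestamps; the sketch below assumes the unit start times have been collected into a sorted list, which is an illustrative data layout the patent does not specify.

```python
import bisect

def nearest_unit_index(unit_start_times, target_time):
    """unit_start_times: sorted start timestamps (seconds) of the demuxed
    video units; returns the index of the unit whose timestamp is closest
    to the target frame's timestamp."""
    i = bisect.bisect_left(unit_start_times, target_time)
    if i == 0:
        return 0
    if i == len(unit_start_times):
        return len(unit_start_times) - 1
    # compare the neighbours on each side of the insertion point
    before, after = unit_start_times[i - 1], unit_start_times[i]
    return i if after - target_time < target_time - before else i - 1
```

The selected video units, plus the audio units sharing their timestamps, can then be remuxed directly without decoding, as the FIG. 2 flow describes.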
Furthermore, considering that some highlight segments have strict timing requirements, the attribute information of the target video can be acquired before extraction, and whether the target video requires the precise timestamp extraction mode is judged from that attribute information. If so, the target video is decoded, and the highlight segments are extracted from the decoded target video according to the highlight video capture rule and the timestamp information of the decoded video. If not, the video units whose timestamp information is closest to the timestamp information of the target frames are looked up in the undecoded target video, the target video comprising M video units where M is a positive integer greater than 1, and the highlight segments are determined and extracted from the closest units.
Specifically, according to the nature of the highlight segments corresponding to each kind of feature information, an operator presets, in the attribute information of the target video, extraction information indicating whether the precise timestamp extraction mode is required; for example, the digit after the Ti tag of the attribute information is set to 1 if precise extraction is required and to 0 if not. Before extraction, whether the target video needs the precise mode is judged from this preset information: if yes, the target video is decoded and then extracted precisely by per-frame timestamps; if no, the target video is left undecoded and the low-resource extraction by video-unit timestamps is performed directly.
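The two modes map naturally onto the two seek behaviours of a tool such as ffmpeg; the following sketch, driven from Python, uses stream copy for the fast path and re-encoding for the precise path, which is this sketch's choice rather than anything the patent mandates.

```python
import subprocess

def extract_segment(src, dst, start, duration, precise):
    """Cut [start, start+duration] out of src.
    precise=False: with -ss before -i and -c copy, ffmpeg seeks to the
    nearest keyframe and copies packets without decoding, analogous to
    the video-unit lookup above.
    precise=True: decode and re-encode, giving frame-accurate boundaries
    at the cost of more computation."""
    if precise:
        cmd = ["ffmpeg", "-y", "-i", src, "-ss", str(start), "-t", str(duration),
               "-c:v", "libx264", "-c:a", "aac", dst]
    else:
        cmd = ["ffmpeg", "-y", "-ss", str(start), "-i", src, "-t", str(duration),
               "-c", "copy", dst]
    subprocess.run(cmd, check=True)
```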
Then, step S103 is executed: the N highlight video segments are spliced into one video to form the spliced video.
In a specific implementation there may be several ways to splice the highlight video segments; three are listed as examples below, with a concatenation sketch following the list.
The N highlight segments may be spliced into one video with a prompt video inserted before each segment, the prompt video describing the highlight segment about to play, to form the spliced video. That is, a prepared prompt video is inserted before each highlight segment; it may include the playing time of the upcoming segment within the original target video, a description of its content, its content type, and so on.
Alternatively, the N highlight segments may be spliced into one video with an interval video inserted between every two segments, the interval video indicating that the previous highlight segment has finished playing and the next is about to play, to form the spliced video. That is, a prepared interval video is inserted between the highlight segments; it may be a blank video, a preset subtitle video, the anchor's self-introduction video, or the like.
Alternatively, the N highlight segments may be spliced into one video with prompt information superimposed on the opening portion of each segment, the prompt information describing the highlight segment being played, to form the spliced video. That is, to avoid adding redundant playing time, preset prompt information is composited into the first frame or frames of each highlight segment; the prompt may be a picture or a voice, which is not limited herein. A prompt picture may be shown picture-in-picture or as a semi-transparent overlay, which is likewise not limited herein.
Of course, the splicing manner is not limited to these three; the highlight videos may also be seamlessly spliced in timestamp order to reduce playing and processing time, which is not limited herein and not enumerated further.
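For the concatenation step referenced above, here is a minimal sketch using ffmpeg's concat demuxer; interleaving an optional interval clip mirrors the second splicing option, and all file names are illustrative. Stream copy assumes the clips share codecs and encoding parameters, as they would when cut from one target video.

```python
import subprocess

def splice(segments, output, interval_clip=None):
    """segments: paths of the extracted highlight clips, in play order.
    If interval_clip is given, it is inserted between consecutive
    segments, as in the second splicing option above."""
    entries = []
    for i, seg in enumerate(segments):
        if i > 0 and interval_clip:
            entries.append(interval_clip)
        entries.append(seg)
    with open("list.txt", "w") as f:
        for path in entries:
            f.write(f"file '{path}'\n")  # concat demuxer playlist format
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "list.txt", "-c", "copy", output], check=True)

# splice(["kill1.mp4", "kill2.mp4"], "spliced.mp4", interval_clip="interlude.mp4")
```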
Then step S104 is executed: when a request for the spliced video sent by a client is received, the spliced video is sent to the client for playback.
In the embodiment of the application, after the spliced video is formed, a link to it can be placed on the page of the anchor's room corresponding to the target video on the live streaming website, so that viewers can simply activate the link to play the spliced video.
Of course, in the embodiment of the present application, the highlight video segments may also be marked on the playback progress bar.
After a highlight video segment is determined, its playing time information within the target video is acquired, and the segment is marked, according to that information, at the corresponding target position on the playback progress bar of the target video.
Specifically, the highlight segment may be marked at the target position on the progress bar by dotting: the color of the progress bar at that position may be changed, the width of the bar may be changed, or a marker line may be added.
When an operation acting on the target position is received, a picture or video representing the highlight segment is displayed: for example, the image of the target frame, another picture from the segment, the segment itself (triggered to play), or a preset introduction picture describing the segment.
In a specific implementation, the displayed picture or video may be shown in a separate window, directly in the playing window of the target video, or superimposed on that window, which is not limited herein. A superimposed display may be picture-in-picture or semi-transparent, which is likewise not limited herein.
Furthermore, in the embodiment of the application, considering the resources consumed by feature matching and by highlight segment extraction, and to keep the tasks from interfering with each other and competing for resources, determining the highlight segments from the target video by feature matching may be implemented in the GCR-word layer, while extracting the N highlight segments from the target video, splicing the video, and marking the highlight segments on the progress bar according to the playing time information are implemented in the Media-Worker layer.
In summary, N highlight video segments are determined and extracted from the target video by feature matching and/or bullet screen information analysis, the N highlight segments are spliced into one video to form a spliced video, and, when a request for the spliced video sent by a client is received, the spliced video is sent to the client for playback. By watching the spliced video directly, a viewer sees every highlight segment without missing any and without having to watch the entire target video, which effectively saves the viewer's watching time and lets the viewer obtain all the highlight segments in a short time.
Based on the same inventive concept, an embodiment of the invention also provides a device corresponding to the video splicing method of the first embodiment; see the second embodiment.
Example two
The present embodiment provides a video splicing apparatus, as shown in FIG. 3, including:
a determining unit 301, configured to determine N highlight video segments from the target video by feature matching and/or bullet screen information analysis, where N is greater than 1;
an extracting unit 302, configured to extract the N highlight video segments from the target video;
a splicing unit 303, configured to splice the N highlight video segments into one video to form a spliced video;
and a sending unit 304, configured to send the spliced video to a client for playback when a request for the spliced video sent by the client is received.
In the embodiment of the present application, the apparatus may be an electronic device such as a smart phone, a desktop computer, a notebook computer, or a tablet computer, and is not limited herein.
In this embodiment of the present application, the device may run an Android, iOS, or Windows operating system, which is not limited herein.
Since the apparatus described in this second embodiment is the apparatus used to implement the method of the first embodiment of the present invention, a person skilled in the art can understand its specific structure and variations based on the method described in the first embodiment, so details are not repeated here. All apparatuses used in the method of the first embodiment of the present invention fall within the intended protection scope of the present invention.
Based on the same inventive concept, the application provides a corresponding electronic device embodiment; see the third embodiment.
Example three
The present embodiment provides an electronic device, as shown in FIG. 4, including a memory 410, a processor 420, and a computer program 411 stored in the memory 410 and executable on the processor 420, where the processor 420, when executing the computer program 411, implements the method of any one of the foregoing embodiments.
Since the electronic device described in this embodiment is the device used to implement the method of the first embodiment of the present application, a person skilled in the art can understand its specific implementation and variations based on that method, so how the electronic device implements the method of the first embodiment is not detailed here. Any device used by those skilled in the art to implement the method of the embodiments of the present application falls within the intended protection scope of the present application.
Based on the same inventive concept, the application provides a corresponding storage medium embodiment; see the fourth embodiment.
Example four
The present embodiment provides a computer-readable storage medium 500, as shown in FIG. 5, on which a computer program 511 is stored, where the computer program 511, when executed by a processor, implements the method of any one of the foregoing embodiments.
The technical scheme provided in the embodiment of the application at least has the following technical effects or advantages:
The method, device, equipment and medium provided by the embodiments of the application determine and extract N highlight video segments from the target video by feature matching and/or bullet screen information analysis, splice the N highlight video segments into one video to form a spliced video, and, when a request for the spliced video sent by a client is received, send the spliced video to the client for playback. A viewer therefore need not watch the entire target video and yet misses none of the highlight video segments, which effectively saves the viewer's watching time and lets the viewer obtain all the highlight segments in a short time.
Further, whether the target video requires the precise timestamp extraction mode is judged from its attribute information. When it does, the target video is decoded and the highlight video segments are extracted according to the timestamp information of the decoded video; when it does not, the video units whose timestamps are closest to those of the target frames are looked up directly in the undecoded target video. This effectively shortens the extraction time for videos that need no precise extraction while preserving accuracy for the videos that do.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (9)

1. A video splicing method, comprising:
determining N highlight video segments from a target video by feature matching analysis, wherein N is greater than 1, and the target video is a game video;
extracting the N highlight video segments from the target video;
splicing the N highlight video segments into one video to form a spliced video;
when a request to acquire the spliced video is received from a client, sending the spliced video to the client for playback;
wherein determining the N highlight video segments from the target video by feature matching analysis comprises:
setting feature information according to the video category of the target video; performing feature matching on the target video to determine a target frame in the target video that matches the feature information; and determining the N highlight video segments in the target video according to the target frame and a preset highlight video interception rule, wherein each highlight video segment contains the target frame and the interception rule corresponds to the feature information;
wherein, when the feature information is image feature information, it consists of common image features extracted from highlight video clips of the same type as the target video; and when the feature information is voice feature information, it consists of common voice features extracted from highlight video clips of the same type as the target video.
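Purely as an illustration of the determination step (the patent itself contains no code), a minimal Python sketch assuming OpenCV template matching against a category-specific image feature such as a game victory banner; the function name, the 0.8 match threshold, and the 10 s / 20 s interception window are all hypothetical and not taken from the claim:

```python
# Hypothetical sketch of claim 1's determination step: template matching
# against a category-specific image feature. All names and the pre/post
# interception window are illustrative assumptions.
import cv2

def find_highlight_segments(video_path, template_path,
                            pre_sec=10.0, post_sec=20.0, threshold=0.8):
    """Return (start, end) times, in seconds, of candidate highlight segments."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    segments, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Normalized cross-correlation score of the best template position.
        score = cv2.minMaxLoc(cv2.matchTemplate(gray, template,
                                                cv2.TM_CCOEFF_NORMED))[1]
        if score >= threshold:
            t = frame_idx / fps  # timestamp of the matched "target frame"
            start, end = max(0.0, t - pre_sec), t + post_sec
            # Merge with the previous segment if the rule windows overlap.
            if segments and start <= segments[-1][1]:
                segments[-1] = (segments[-1][0], end)
            else:
                segments.append((start, end))
        frame_idx += 1
    cap.release()
    return segments
```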
2. The method of claim 1, wherein extracting the N highlight video segments from the target video comprises:
acquiring attribute information of the target video;
judging, according to the attribute information, whether the target video requires a timestamp-accurate extraction mode;
if so, decoding the target video, and extracting the N highlight video segments from the decoded target video according to the highlight video interception rule and the timestamp information of the decoded target video;
if not, searching the undecoded target video for the video unit whose timestamp information is closest to that of the target frame, wherein the undecoded target video comprises a plurality of video units; and determining and extracting the highlight video segments according to that closest video unit.
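A hedged sketch of claim 2's two extraction branches using the ffmpeg CLI: re-encoding after a decode gives frame-accurate cuts (the timestamp-accurate mode), while stream copy cuts at the nearest preceding keyframe, which plays the role of the "closest video unit". The helper name and codec choices are assumptions:

```python
# Hypothetical sketch of the two extraction modes in claim 2.
import subprocess

def extract_segment(src, dst, start, end, accurate):
    if accurate:
        # Output-side seek: decode from the start of the file and re-encode,
        # so the cut lands exactly on the requested timestamps.
        cmd = ["ffmpeg", "-y", "-i", src, "-ss", str(start), "-to", str(end),
               "-c:v", "libx264", "-c:a", "aac", dst]
    else:
        # Input-side seek with stream copy: no decoding, so ffmpeg starts at
        # the nearest preceding keyframe -- the "closest video unit" branch.
        cmd = ["ffmpeg", "-y", "-ss", str(start), "-i", src,
               "-t", str(end - start), "-c", "copy", dst]
    subprocess.run(cmd, check=True)
```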
3. The method of claim 1, wherein splicing the N highlight video segments into one video to form a spliced video comprises:
splicing the N highlight video segments into one video and inserting a prompt video before each highlight video segment, wherein the prompt video describes the highlight video segment about to be played; or
splicing the N highlight video segments into one video and inserting an interval video between every two adjacent highlight video segments, wherein the interval video indicates that the previous highlight video segment has finished playing and the next highlight video segment is about to play; or
splicing the N highlight video segments into one video and superimposing playback prompt information on the initial portion of each highlight video segment, wherein the prompt information describes the highlight video segment currently being played.
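For the first two splicing variants above, a minimal sketch using ffmpeg's concat demuxer; `prompt_clips` and `interval_clip` stand in for pre-rendered prompt and interval videos and are assumptions, as is the whole file-based design:

```python
# Hypothetical sketch of claim 3's splicing variants 1 and 2 via ffmpeg's
# concat demuxer. All paths and parameter names are illustrative.
import os
import subprocess
import tempfile

def splice(segments, out, prompt_clips=None, interval_clip=None):
    """segments: list of clip paths; prompt_clips: one per segment, or None."""
    playlist = []
    for i, seg in enumerate(segments):
        if prompt_clips:                      # variant 1: prompt before each
            playlist.append(prompt_clips[i])
        elif interval_clip and i > 0:         # variant 2: interval between
            playlist.append(interval_clip)
        playlist.append(seg)
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.writelines(f"file '{os.path.abspath(p)}'\n" for p in playlist)
        list_path = f.name
    # The concat demuxer joins clips without re-encoding (inputs must share
    # codec and resolution); variant 3 (overlaid prompt text) would instead
    # need a re-encode with ffmpeg's drawtext filter.
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", list_path, "-c", "copy", out], check=True)
    os.unlink(list_path)
```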
4. The method of claim 1, wherein determining the N highlight video segments from the target video is performed in a GCR-Worker layer, and extracting the N highlight video segments from the target video is performed in a Media-Worker layer.
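One plausible, but assumed, realization of the claim-4 split, reusing the helpers sketched under claims 1 and 2: an analysis worker decides segment boundaries and hands them over a queue to a media worker that performs the extraction. The patent names the two layers but does not prescribe this handoff:

```python
# Hypothetical sketch of the claim-4 layer split. The queue-based handoff,
# file names, and worker functions are assumptions.
import queue
import threading

boundary_queue = queue.Queue()

def gcr_worker(video_path):
    # Analysis layer: feature matching only; produces segment boundaries.
    for start, end in find_highlight_segments(video_path, "feature.png"):
        boundary_queue.put((video_path, start, end))
    boundary_queue.put(None)  # sentinel: analysis finished

def media_worker(accurate=False):
    # Media layer: consumes boundaries and cuts the segments.
    i = 0
    while (item := boundary_queue.get()) is not None:
        src, start, end = item
        extract_segment(src, f"segment_{i}.mp4", start, end, accurate)
        i += 1

threading.Thread(target=gcr_worker, args=("match.mp4",)).start()
media_worker()
```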
5. A video splicing device, comprising:
a determining unit, configured to determine N highlight video segments from a target video by feature matching analysis, wherein N is greater than 1, and the target video is a game video;
an extracting unit, configured to extract the N highlight video segments from the target video;
a splicing unit, configured to splice the N highlight video segments into one video to form a spliced video;
a sending unit, configured to send the spliced video to a client for playback when a request to acquire the spliced video is received from the client;
wherein the determining unit is further configured to: set feature information according to the video category of the target video; perform feature matching on the target video to determine a target frame in the target video that matches the feature information; and determine the N highlight video segments in the target video according to the target frame and a preset highlight video interception rule, wherein each highlight video segment contains the target frame and the interception rule corresponds to the feature information; and wherein, when the feature information is image feature information, it consists of common image features extracted from highlight video clips of the same type as the target video, and when the feature information is voice feature information, it consists of common voice features extracted from highlight video clips of the same type as the target video.
6. The apparatus of claim 5, wherein the splicing unit is further configured to:
splice the N highlight video segments into one video and insert a prompt video before each highlight video segment, wherein the prompt video describes the highlight video segment about to be played; or
splice the N highlight video segments into one video and insert an interval video between every two adjacent highlight video segments, wherein the interval video indicates that the previous highlight video segment has finished playing and the next highlight video segment is about to play; or
splice the N highlight video segments into one video and superimpose playback prompt information on the initial portion of each highlight video segment, wherein the prompt information describes the highlight video segment currently being played.
7. The apparatus of claim 5, wherein the extracting unit is further configured to:
acquire attribute information of the target video;
judge, according to the attribute information, whether the target video requires a timestamp-accurate extraction mode;
if so, decode the target video, and extract the N highlight video segments from the decoded target video according to the highlight video interception rule and the timestamp information of the decoded target video;
if not, search the undecoded target video for the video unit whose timestamp information is closest to that of the target frame, wherein the undecoded target video comprises a plurality of video units; and determine and extract the highlight video segments according to that closest video unit.
8. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-4 when executing the program.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the method of any one of claims 1-4.
CN201810752191.5A 2018-07-10 2018-07-10 Video splicing method, device, equipment and medium Active CN109089127B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810752191.5A CN109089127B (en) 2018-07-10 2018-07-10 Video splicing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810752191.5A CN109089127B (en) 2018-07-10 2018-07-10 Video splicing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN109089127A CN109089127A (en) 2018-12-25
CN109089127B true CN109089127B (en) 2021-05-28

Family

ID=64837508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810752191.5A Active CN109089127B (en) 2018-07-10 2018-07-10 Video splicing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN109089127B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109618180B (en) * 2019-01-30 2021-06-22 网宿科技股份有限公司 Live broadcast data processing method, system and server
US11025984B2 (en) 2019-01-30 2021-06-01 Wangsu Science & Technology Co., Ltd. Method, system for processing a live-broadcasting data, and server thereof
CN113411681A (en) * 2019-07-17 2021-09-17 刘彩霞 Streaming media internet big data bullet screen processing system
CN110798744A (en) 2019-11-08 2020-02-14 北京字节跳动网络技术有限公司 Multimedia information processing method, device, electronic equipment and medium
CN110933511B (en) * 2019-11-29 2021-12-14 维沃移动通信有限公司 Video sharing method, electronic device and medium
CN110958465A (en) * 2019-12-17 2020-04-03 广州酷狗计算机科技有限公司 Video stream pushing method and device and storage medium
CN111083525B (en) * 2019-12-27 2022-01-11 恒信东方文化股份有限公司 Method and system for automatically generating intelligent image
CN113542845B (en) * 2020-04-16 2024-02-02 腾讯科技(深圳)有限公司 Information display method, device, equipment and storage medium
CN111711861B (en) * 2020-05-15 2022-04-12 北京奇艺世纪科技有限公司 Video processing method and device, electronic equipment and readable storage medium
CN113055741B (en) * 2020-12-31 2023-05-30 科大讯飞股份有限公司 Video abstract generation method, electronic equipment and computer readable storage medium
CN113473224B (en) * 2021-06-29 2023-05-23 北京达佳互联信息技术有限公司 Video processing method, video processing device, electronic equipment and computer readable storage medium
CN114339304A (en) * 2021-12-22 2022-04-12 中国电信股份有限公司 Live video processing method and device and storage medium
CN115174947A (en) * 2022-06-28 2022-10-11 广州博冠信息科技有限公司 Live video extraction method and device, storage medium and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160112727A1 (en) * 2014-10-21 2016-04-21 Nokia Technologies Oy Method, Apparatus And Computer Program Product For Generating Semantic Information From Video Content

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101268505A (en) * 2006-01-06 2008-09-17 三菱电机株式会社 Method and system for classifying a video
CN102902756A (en) * 2012-09-24 2013-01-30 南京邮电大学 Video abstraction extraction method based on story plots
CN105847993A (en) * 2016-04-19 2016-08-10 乐视控股(北京)有限公司 Method and device for sharing video clip
CN107154264A (en) * 2017-05-18 2017-09-12 北京大生在线科技有限公司 The method that online teaching wonderful is extracted
CN107438204A (en) * 2017-07-26 2017-12-05 维沃移动通信有限公司 A kind of method and mobile terminal of media file loop play

Also Published As

Publication number Publication date
CN109089127A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN109089154B (en) Video extraction method, device, equipment and medium
CN109089127B (en) Video splicing method, device, equipment and medium
CN108924576A (en) A kind of video labeling method, device, equipment and medium
CN106658200B (en) Live video sharing and acquiring method and device and terminal equipment thereof
US10425679B2 (en) Method and device for displaying information on video image
CN109089128A (en) A kind of method for processing video frequency, device, equipment and medium
CN108989883B (en) Live broadcast advertisement method, device, equipment and medium
CN108495152B (en) Video live broadcast method and device, electronic equipment and medium
CN110300307B (en) Live broadcast interaction method and device, live broadcast server and storage medium
CN109714622B (en) Video data processing method and device and electronic equipment
CN111050205A (en) Video clip acquisition method, device, apparatus, storage medium, and program product
WO2019214371A1 (en) Image display method and generating method, device, storage medium and electronic device
WO2019114330A1 (en) Video playback method and apparatus, and terminal device
CN111277854A (en) Display method and device of virtual live broadcast room, electronic equipment and storage medium
CN108521584B (en) Interactive information processing method, device, anchor side equipment and medium
CN109803151B (en) Multimedia data stream switching method and device, storage medium and electronic device
CN109040773A (en) A kind of video improvement method, apparatus, equipment and medium
CN110996157A (en) Video playing method and device, electronic equipment and machine-readable storage medium
TWI620438B (en) Method, device for calibrating interactive time in a live program and a computer-readable storage device
CN111147911A (en) Video clipping method and device, electronic equipment and storage medium
CN108881938B (en) Live broadcast video intelligent cutting method and device
CN108616769B (en) Video-on-demand method and device
CN111050204A (en) Video clipping method and device, electronic equipment and storage medium
CN112422844A (en) Method, device and equipment for adding special effect in video and readable storage medium
CN108271050B (en) Live broadcast room program recommendation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant