CN109089127A - Video splicing method, apparatus, device and medium - Google Patents
- Publication number: CN109089127A (application CN201810752191.5A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption, not a legal conclusion)
Classifications

- H04N21/2187 — Live feed
- H04N21/23418 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
- H04N21/23424 — Splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
- H04N21/2387 — Stream processing in response to a playback request from an end-user, e.g. for trick-play
- H04N21/4781 — Games
- H04N21/8456 — Structuring of content by decomposing the content in the time domain, e.g. in time segments
- H04N21/8547 — Content authoring involving timestamps for synchronizing content
Abstract
The present invention discloses a video splicing method, apparatus, device and medium. The method comprises: determining N highlight video segments from a target video using feature matching and/or bullet-screen (danmaku) information analysis, N being greater than 1; extracting the N highlight segments from the target video; splicing the N highlight segments into a single video to form a spliced video; and, upon receiving a client request for the spliced video, sending the spliced video to the client for playback. The method, apparatus, device and medium provided by the present application address the prior-art problems that watching historical game live videos wastes viewers' time and lowers the probability that viewers obtain the highlight clips, achieving the technical effect of saving viewing time.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a video splicing method, apparatus, device and medium.
Background art

At present, with the progress of network communication technology and the speed increases of broadband networks, network live streaming has seen ever wider development and application. So that users do not miss a streamer's exciting live videos, video websites often record the streamer's historical live videos and make them available for later viewing.

Game live streams frequently contain exciting scenarios, for example, successful-kill clips in kill-type games, successful-acquisition clips in collection-type games, or marriage clips in dating-type games. These highlight segments are usually the most exciting, most-worth-watching parts of a live stream, yet to be sure of not missing them, a viewer generally has to watch the entire historical live video from the beginning. As a result, viewers waste considerable time watching video they are less interested in, and can still easily miss the highlight moments.

It can be seen that, in the prior art, watching historical game live videos wastes viewers' time and lowers the probability that viewers obtain the highlight clips.
Summary of the invention
The present invention provides a video splicing method, apparatus, device and medium, so as to solve the prior-art technical problems that watching historical game live videos wastes viewers' time and lowers the probability that viewers obtain the highlight clips.
In a first aspect, the present invention provides a video splicing method, comprising:

determining N highlight video segments from a target video using feature matching and/or bullet-screen information analysis, N being greater than 1;

extracting the N highlight segments from the target video;

splicing the N highlight segments into a single video to form a spliced video; and

upon receiving a client request for the spliced video, sending the spliced video to the client for playback.
Optionally, determining the N highlight segments in the target video comprises: setting feature information according to the video category of the target video; performing feature matching on the target video to determine the target frames in the target video that match the feature information; and determining the N highlight segments in the target video according to the target frames and a preset highlight interception rule, wherein each highlight segment contains a target frame and the interception rule corresponds to the feature information. Alternatively: acquiring the target video and bullet-screen information, the bullet-screen information comprising the bullet-screen counts recorded during the target video's historical playback; and, according to the bullet-screen information, determining as the N highlight segments those portions of the target video whose bullet-screen activity meets a preset requirement.
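As a hedged illustration of the bullet-screen alternative — the per-second windowing, the threshold, and the minimum-length policy below are assumptions for the sketch, not prescribed by the patent — segments whose comment activity meets a "preset requirement" could be located like this:

```python
def highlight_segments(danmaku_counts, threshold, min_len=1):
    """Find (start, end) index ranges where per-second bullet-screen
    counts stay at or above `threshold` (illustrative policy only)."""
    segments, start = [], None
    for i, count in enumerate(danmaku_counts):
        if count >= threshold and start is None:
            start = i                      # a busy stretch opens
        elif count < threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None                   # the stretch closes
    if start is not None and len(danmaku_counts) - start >= min_len:
        segments.append((start, len(danmaku_counts)))
    return segments

counts = [2, 3, 50, 60, 55, 4, 1, 70, 80, 3]
print(highlight_segments(counts, threshold=40))  # → [(2, 5), (7, 9)]
```

A production system would likely smooth the counts and merge nearby windows, but the core of the analysis is this threshold scan.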
Optionally, extracting the N highlight segments from the target video comprises: obtaining attribute information of the target video; judging from the attribute information whether the target video requires a timestamp-precise extraction mode; if it does, decoding the target video and, according to the highlight interception rule and the timestamp information of the decoded video, extracting the N highlight segments from the decoded target video; if it does not, searching the undecoded target video for the video units whose timestamps are closest to the timestamps of the target frames, wherein the target video comprises N video units and N is a positive integer greater than 1, and determining and extracting the highlight segments from those closest video units.
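The "no decode" branch amounts to a nearest-timestamp search over unit boundaries read from the container index. A minimal sketch — the unit boundaries and their source are assumptions for illustration — might look like:

```python
import bisect

def nearest_unit(unit_start_times, target_ts):
    """Return the index of the video unit whose start timestamp is
    closest to `target_ts`, without decoding any video data.
    `unit_start_times` must be sorted ascending (e.g. GOP or
    transport-segment boundaries from the container index)."""
    i = bisect.bisect_left(unit_start_times, target_ts)
    if i == 0:
        return 0
    if i == len(unit_start_times):
        return len(unit_start_times) - 1
    before, after = unit_start_times[i - 1], unit_start_times[i]
    return i if after - target_ts < target_ts - before else i - 1

starts = [0.0, 4.0, 8.0, 12.0]      # hypothetical unit boundaries (seconds)
print(nearest_unit(starts, 7.1))    # → 2 (the unit starting at 8.0)
```

This trades frame accuracy for speed, which matches the stated effect: skip decoding when precision is not required, decode and cut by timestamp when it is.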
Optionally, splicing the N highlight segments into a single video to form the spliced video comprises one of the following: splicing the N highlight segments into one video and inserting, before each highlight segment, a prompt video describing the segment about to play, forming the spliced video; or splicing the N highlight segments into one video and inserting, between every two highlight segments, an interval video indicating that the previous highlight segment has finished playing and the next is about to play, forming the spliced video; or splicing the N highlight segments into one video and superimposing, on the opening portion of each segment, prompt information describing the segment currently playing, forming the spliced video.
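One way to picture the first variant — the clip names and the prompt-building helper are hypothetical, and a real implementation would hand the resulting list to a concatenation tool such as a muxer — is as a playlist built by interleaving prompts with segments:

```python
def build_splice_list(segments, make_prompt):
    """Interleave a prompt clip before each highlight segment,
    yielding the ordered clip list for concatenation."""
    playlist = []
    for seg in segments:
        playlist.append(make_prompt(seg))  # describes the upcoming segment
        playlist.append(seg)
    return playlist

segs = ["kill_01.mp4", "kill_02.mp4"]       # hypothetical extracted clips
print(build_splice_list(segs, lambda s: f"prompt_for_{s}"))
# → ['prompt_for_kill_01.mp4', 'kill_01.mp4', 'prompt_for_kill_02.mp4', 'kill_02.mp4']
```

The interval-video variant differs only in placing the extra clip between segments rather than before each one.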
Optionally, determining the N highlight segments in the target video is implemented at the GCR-Work layer, and extracting the N highlight segments from the target video is implemented at the Media-Worker layer.
In a second aspect, a video splicing device is provided, comprising:

a determination unit, configured to determine N highlight video segments from the target video using feature matching and/or bullet-screen information analysis, N being greater than 1;

an extraction unit, configured to extract the N highlight segments from the target video;

a splicing unit, configured to splice the N highlight segments into a single video to form a spliced video; and

a transmission unit, configured to, upon receiving a client request for the spliced video, send the spliced video to the client for playback.

Optionally, the splicing unit is further configured to: splice the N highlight segments into one video and insert, before each highlight segment, a prompt video describing the segment about to play, forming the spliced video; or splice the N highlight segments into one video and insert, between every two highlight segments, an interval video indicating that the previous highlight segment has finished playing and the next is about to play, forming the spliced video; or splice the N highlight segments into one video and superimpose, on the opening portion of each segment, prompt information describing the segment currently playing, forming the spliced video.

Optionally, the extraction unit is further configured to: obtain attribute information of the target video; judge from the attribute information whether the target video requires a timestamp-precise extraction mode; if it does, decode the target video and, according to the highlight interception rule and the timestamp information of the decoded video, extract the N highlight segments from the decoded target video; if it does not, search the undecoded target video for the video units whose timestamps are closest to the timestamps of the target frames, wherein the target video comprises N video units and N is a positive integer greater than 1, and determine and extract the highlight segments accordingly.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:

The method, apparatus, device and medium provided by the embodiments of the present application determine and extract N highlight video segments from the target video using feature matching and/or bullet-screen information analysis, splice the N highlight segments into a single video to form a spliced video, and, upon receiving a client request for the spliced video, send the spliced video to the client for playback. Viewers therefore need not watch the entire target video: by directly watching the spliced video they see all of the most exciting clips they would least want to miss, which effectively saves viewing time and lets viewers obtain all highlight clips within a short period.

Further, by judging from the target video's attribute information whether the timestamp-precise extraction mode is required, decoding the video and extracting the highlight segments by the timestamps of the decoded video when it is, and otherwise directly locating in the undecoded video the video units whose timestamps are closest to those of the target frames, the solution effectively shortens extraction time for videos that do not need precise extraction while preserving extraction accuracy for the videos that do.

The above is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be better understood and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more clearly understood, specific embodiments of the present invention are set forth below.
Brief description of the drawings

In order to more clearly explain the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is the flowchart of the video splicing method in an embodiment of the present invention;

Fig. 2 is a schematic diagram of extraction without the timestamp-precise extraction mode in an embodiment of the present invention;

Fig. 3 is a structural schematic diagram of the video splicing device in an embodiment of the present invention;

Fig. 4 is a structural schematic diagram of the electronic device in an embodiment of the present invention;

Fig. 5 is a structural schematic diagram of the storage medium in an embodiment of the present invention.
Specific embodiment
The embodiments of the present application provide a video splicing method, apparatus, device and medium, so as to solve the prior-art technical problems that watching historical game live videos wastes viewers' time and lowers the probability that viewers obtain the highlight clips, achieving the technical effect of saving viewers' time and letting viewers obtain all highlight clips within a short period.

The general idea of the technical solutions in the embodiments of the present application is as follows:

N highlight video segments are determined and extracted from the target video using feature matching and/or bullet-screen information analysis, the N highlight segments are spliced into a single video to form a spliced video, and, upon receiving a client request for the spliced video, the spliced video is sent to the client for playback. Viewers therefore need not watch the entire target video: by directly watching the spliced video they see all of the most exciting clips they would least want to miss, which effectively saves viewing time and lets them obtain all highlight clips within a short period.

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment one
The present embodiment provides a video splicing method, as shown in Fig. 1, comprising:

Step S101: determining N highlight video segments from the target video using feature matching and/or bullet-screen information analysis, N being greater than 1;

Step S102: extracting the N highlight segments from the target video;

Step S103: splicing the N highlight segments into a single video to form a spliced video;

Step S104: upon receiving a client request for the spliced video, sending the spliced video to the client for playback.
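Steps S101–S104 can be pictured as a minimal server-side pipeline. Every function body below is a stand-in (the patent does not prescribe implementations); the sketch only fixes the order of operations:

```python
def make_spliced_video(target_video, detect, extract, splice):
    """Skeleton of steps S101-S103: detect highlight segments,
    extract them, and splice them into one video. `detect`,
    `extract` and `splice` are injected stand-in callables;
    step S104 would then serve the returned result on request."""
    segments = detect(target_video)            # S101: feature/danmaku analysis
    assert len(segments) > 1, "method operates on N > 1 segments"
    clips = [extract(target_video, s) for s in segments]   # S102
    return splice(clips)                       # S103

spliced = make_spliced_video(
    "match_replay",
    detect=lambda v: [(10, 70), (300, 480)],   # hypothetical segments (s)
    extract=lambda v, s: f"{v}[{s[0]}:{s[1]}]",
    splice=lambda clips: "+".join(clips),
)
print(spliced)  # → match_replay[10:70]+match_replay[300:480]
```

The remaining subsections of the embodiment refine each of these stages in turn.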
In the embodiments of the present application, the method may be applied to a server, and may also be applied to a viewer end or a streamer end, which is not limited here. The implementing device may be an electronic device such as a smartphone, desktop computer, laptop or tablet computer, which is likewise not limited here.
The specific implementation steps of the method provided by this embodiment are described in detail below with reference to Fig. 1:

First, step S101 is executed: N highlight video segments are determined from the target video using feature matching and/or bullet-screen information analysis, N being greater than 1.
For determining the N highlight segments from the target video using feature matching, the specific implementation is: setting feature information according to the video category of the target video; performing feature matching on the target video to determine the target frames in the target video that match the feature information; and determining the N highlight segments in the target video according to the target frames and a preset highlight interception rule, wherein each highlight segment contains a target frame and the interception rule corresponds to the feature information.
Specifically, first, the feature information is set according to the video category of the target video.

It should be noted that the target video may be a video uploaded by the streamer end, a video stored by the server during a previous live broadcast, or the live video currently being broadcast. If the target video is the live video currently being broadcast, the method provided by this embodiment performs real-time target-frame matching and highlight-segment extraction on the received live video stream during the broadcast.
In the specific implementation process, different video categories of the target video correspond to different feature information. The feature information may be voice feature information or image feature information, which is not limited here. Examples of each follow:

First, the feature information is image feature information.

That is, according to the video category of the target video, the feature information corresponding to that category is determined from a preset feature-information library, the feature information being extracted from highlight images, i.e. images appearing in videos of that category. In other words, similar highlight clips in a target video often share certain characteristic image frames, and the feature information can be common image features extracted from those frames.

For example, when the target video is a game video containing kill scenarios, the feature information is set to information extracted from the game's successful-kill frames. Specifically, after a successful kill, the video often displays an image prompting the success, such as a "KO" caption, a "count plus 1" caption, or a blood-splatter pattern; these image features can then be used as the feature information.

When the target video is a game video containing acquisition scenarios, the feature information is set to information extracted from successful-acquisition frames. Specifically, after a successful acquisition, the video often displays an image prompting the success, such as a "plus 1" caption or the pattern of the acquired item; these image features can then be used as the feature information.
Second, the feature information is voice feature information.

That is, according to the video category of the target video, the feature information corresponding to that category is determined from a preset feature-information library, the feature information being extracted from the video's speech files. In other words, similar highlight clips in a target video often share certain characteristic voice information, and the feature information can be common voice features extracted from that audio.

For example, when the target video is a game video containing kill scenarios, the feature information is set to information extracted from the game's successful-kill audio. Specifically, after a successful kill, the video often plays a voice prompt of the success, such as a "KO" announcement, a "kill succeeded" announcement, or a scream; these voice features can then be used as the feature information.

When the target video is a lottery-type video, the feature information is set to voice information extracted from the prize-announcement audio. Specifically, when winners are announced, the video often plays prompt audio such as a specific jingle or an "announcing now" voice; these voice features can then be used as the feature information.

Of course, in the specific implementation process, the feature information is not limited to the above two kinds; it may also be time information or others, which is not limited here and will not be enumerated further.
In the specific implementation process, according to the needs of the video type and video content, one target video may be given one or more kinds of feature information, so that highlight segments of one or more kinds of content can subsequently be extracted.
Then, feature matching is performed on the target video to determine the target frames in the target video that match the feature information.
In the specific implementation process, different feature information calls for different matching methods:

If the feature information is image feature information, image matching is performed between the feature information and every frame of the target video, or between the feature information and frames sampled at intervals; when a frame is found to contain an image corresponding to the feature information, that frame is determined to be a target frame. For example, if the feature information is a blood-splatter pattern, then when a frame containing that pattern is matched, that frame image is taken as the target frame.

If the feature information is voice feature information, audio matching is performed between the feature information and the audio file of the target video; when some stretch of audio matches the feature information, the frame corresponding to that audio is a target frame — specifically, the frame whose timestamp coincides with the timestamp of the matched audio. For example, if the feature information is the "kill succeeded" announcement, then when an audio passage containing that sound is matched, the frame sharing the timestamp of that audio passage is taken as the target frame.

Of course, the methods of feature matching are not limited to the above two, which are not limited here and will not be enumerated further.
Next, the highlight segments are determined in the target video according to the target frames and the preset highlight interception rule; each highlight segment contains a target frame, and the interception rule corresponds to the feature information.

In the embodiments of the present application, the preset highlight interception rule determines the playing duration from the start frame of a highlight segment to its target frame and from the target frame to the segment's end frame, wherein, in the target video, the play position of the start frame is at or before the target frame, and the play position of the end frame is at or after the target frame.
Specifically, saying that the featured-video interception rule corresponds to the feature information means that different feature information has its own corresponding featured-video interception rule. For example:
Assume the feature information characterizes a successful kill in a game video containing kill scenes. Considering that the skillful aiming and the kill itself mostly occur within about one minute before the kill succeeds, the interception rule corresponding to this kind of feature information can be set as: the video from 60 s before the target frame up to the target frame is determined to be the featured video segment.
Assume the feature information characterizes the start of a winner announcement in a lottery video. Considering that the announcement generally lasts about 180 s, the interception rule corresponding to this kind of feature information can be set as: the video from the target frame to 180 s after the target frame is determined to be the featured video segment.
Of course, besides determining the interception rule through the type of the feature information, that is, through the segment duration and its temporal relation to the target frame, there are other methods of determining the interception rule. For example, multiple pieces of feature information can be set, and the video between the target frames corresponding to two of them taken as the featured video segment. For instance, assume that in a lottery video, feature information A characterizing the start of the announcement and feature information B characterizing its end are set; A matches target frame A and B matches target frame B, and the corresponding interception rule can be set as: the video between target frame A and target frame B is determined to be the featured video segment.
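The interception rules just described can be sketched as a mapping from a feature label to offsets around the target frame, plus the two-marker variant. The labels, offsets, and function names below are illustrative assumptions based only on the two examples in the text (60 s before a kill; 180 s after an announcement starts).

```python
# Illustrative sketch of featured-video interception rules: each kind of
# feature information maps to (seconds before, seconds after) the target
# frame. All names and values are assumptions taken from the examples.

INTERCEPTION_RULES = {
    "kill_succeeded": (60, 0),     # 60 s before the target frame up to it
    "lottery_announce": (0, 180),  # target frame to 180 s after it
}

def intercept_segment(target_ts, feature, video_duration):
    """Return (start, end) of the featured video segment in seconds,
    clamped to the bounds of the target video."""
    before, after = INTERCEPTION_RULES[feature]
    start = max(0.0, target_ts - before)
    end = min(video_duration, target_ts + after)
    return start, end

def intercept_between(target_ts_a, target_ts_b):
    """Two-marker rule: the video between target frame A (announcement
    starts) and target frame B (announcement ends) is the segment."""
    return min(target_ts_a, target_ts_b), max(target_ts_a, target_ts_b)

seg = intercept_segment(90.0, "kill_succeeded", 600.0)
draw = intercept_between(120.0, 300.0)
```

The clamping keeps a rule valid even when the target frame sits near the start or end of the target video.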
As for determining N featured video segments from the target video using barrage information analysis, a specific implementation is as follows:
the target video and barrage information are obtained, the barrage information including barrage quantity information of the target video during historical playback; and, according to the barrage information, N featured video segments in the target video whose barrage situation meets a preset requirement are determined.
Specifically, first, the target video and the barrage information are obtained, the barrage information including the barrage quantity information of the target video during historical playback.
It should be noted that the target video may be a video uploaded by the anchor, a video stored by the server during a previous live broadcast, or the live video currently being broadcast. If the target video is the live video currently being broadcast, the method provided by this embodiment performs, while the broadcast is live, real-time barrage information acquisition and judgment on the received live video stream and extracts the featured video segments in real time.
In a specific implementation, the barrage information may include, for each frame of the target video, the barrage quantity obtained during the live broadcast, the barrage content, the number of users who sent barrages, the barrage word count, and so on.
Then, according to the barrage information, the featured video segments in the target video whose barrage situation meets the preset requirement are determined.
In the embodiment of the present application, a featured video segment is determined by first determining, according to the barrage information, a target frame in the target video that meets the preset requirement, and then determining the featured video segment, which includes the target frame, in the target video according to the target frame and the preset featured-video interception rule.
The preset requirement may be that the barrage quantity displayed when the target frame is played is greater than a preset value, or that the growth rate of the barrage quantity is greater than a preset value; no restriction is imposed here.
In the embodiment of the present application, there may be many methods of determining featured video segments according to the barrage information; three are listed below:
First, barrage quantity greater than a preset value.
That is, according to the barrage information, the featured video segments in the target video whose barrage quantity is greater than a preset quantity are determined.
Specifically, the frames displayed while the barrage quantity exceeds the preset value can be determined first; all such frames are extracted and arranged in chronological order as the featured video segments.
Determining featured video segments by the barrage quantity exceeding a preset value effectively identifies the segments with high user participation.
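The first method can be sketched as follows: per-frame barrage counts are scanned, and consecutive frames above the preset value are merged into chronologically ordered segments. The list-of-counts representation is an assumption for illustration only.

```python
# Sketch of the first method (assumed data layout): per-frame barrage
# counts; runs of consecutive frames whose count exceeds the preset
# value become featured video segments, in chronological order.

def segments_over_threshold(counts, preset):
    """Return [(start_frame, end_frame)] runs where counts > preset."""
    segments, run_start = [], None
    for i, c in enumerate(counts):
        if c > preset and run_start is None:
            run_start = i                       # a run begins
        elif c <= preset and run_start is not None:
            segments.append((run_start, i - 1))  # the run just ended
            run_start = None
    if run_start is not None:                    # run reaches video end
        segments.append((run_start, len(counts) - 1))
    return segments

counts = [1, 2, 9, 8, 7, 1, 0, 6, 6, 1]
segs = segments_over_threshold(counts, 5)
```

Merging adjacent frames into runs avoids emitting one "segment" per frame, matching the text's chronological arrangement of the extracted frames.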
Second, the frame with the maximum barrage quantity.
That is, according to the barrage information, the target frame with the maximum barrage quantity in the target video is determined, and the featured video segment, which includes the target frame, is then determined according to the target frame.
Specifically, in order to avoid the discontinuity of the featured video segment caused by extracting only scattered frames, the target frame with the maximum barrage quantity (or with a quantity greater than some value) in the target video can be determined first, and the target frame together with the video within a period before and after it taken as the featured video segment. For example, the video spanning 30 s before and after the target frame may be taken as the featured video segment.
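The second method, in sketch form: find the peak-barrage frame and take a window around it (30 s each way, as in the example). The frame rate, duration, and names are assumptions for illustration.

```python
# Sketch of the second method: the frame with the maximum barrage
# quantity anchors a +/- window_s segment, clamped to the video bounds.
# fps, window, and duration are illustrative assumptions.

def segment_around_peak(counts, fps, window_s, duration_s):
    peak = max(range(len(counts)), key=counts.__getitem__)
    t = peak / fps                     # peak frame's play time in seconds
    return max(0.0, t - window_s), min(duration_s, t + window_s)

counts = [0, 1, 5, 40, 5, 1] + [0] * 94   # 100 frames at 1 fps
seg = segment_around_peak(counts, fps=1, window_s=30, duration_s=100.0)
```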
Third, barrage quantity growth rate.
That is, according to the barrage information, the featured video segments in the target video where the growth rate of the barrage quantity is greater than a preset rate are determined.
Specifically, the growth rate corresponding to each frame can be determined from the barrage quantities of the frame and its neighboring frames; the frames whose growth rate exceeds the preset rate are taken as target frames, all extracted and arranged in chronological order as the featured video segments. For example, the growth rate of a frame may be set as the ratio of the barrage quantity displayed at the frame after it to the barrage quantity displayed at the frame itself; alternatively, it may be set as the ratio of the total barrages displayed in the 5 seconds after the frame to the total displayed in the 5 seconds before it; no restriction is imposed here.
By determining featured video segments according to the barrage growth rate, the key video segments that prompt users to send large numbers of barrages can be effectively identified.
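The first of the two growth-rate definitions above (next frame's count divided by the current frame's count) can be sketched as follows. The handling of a zero denominator is my own assumption, since the text leaves that case open.

```python
# Sketch of the third method: a frame's growth rate is the ratio of the
# barrages shown at the next frame to those shown at the frame itself.
# A jump from zero is treated as an arbitrarily large rate (assumption).

def fast_growing_frames(counts, preset_rate):
    targets = []
    for i in range(len(counts) - 1):
        prev, nxt = counts[i], counts[i + 1]
        rate = nxt / prev if prev else float("inf") if nxt else 0.0
        if rate > preset_rate:
            targets.append(i)
    return targets  # already in chronological order

counts = [2, 2, 10, 12, 0, 8]
targets = fast_growing_frames(counts, preset_rate=2.0)
```

The 5-second-window variant mentioned in the text would replace the per-frame counts with sums over the frames in the windows before and after each frame.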
Of course, the methods of determining featured video segments are not limited to the above three; the segments can also be determined according to the total word count of the barrages sent, and so on; no restriction is imposed here, and the methods will not be enumerated one by one.
Then, step S102 is executed: the N featured video segments are extracted from the target video.
In a specific implementation, once a featured video segment has been determined, its start timestamp and end timestamp can be determined, and the featured video segment between the start timestamp and the end timestamp extracted from the target video.
Considering that the extraction of featured video segments consumes considerable computing and processing resources, this embodiment also provides a low-resource extraction method, described in detail as follows:
Referring to FIG. 2, since the target video is a live video (currently live or a historical broadcast) and the video is transmitted as interleaved video units and audio units, each unit carrying its own timestamp information, this embodiment does not decode the target video segment. Through steps S201 to S204, the live stream is pulled directly and demultiplexed, the video units whose timestamps are closest to the timestamp information of the target frame are searched for in the undecoded target video, and the featured video segment is determined and extracted according to those closest video units; then, through steps S205 and S206, the extracted featured video segment is re-multiplexed, synthesized, and saved. For example, as illustrated in FIG. 2, assume the timestamps of video unit 3 and video unit 4 are closest to the timestamp information of the determined featured video segment; the stream is then demultiplexed, video unit 3 and video unit 4 are extracted together with the audio units whose timestamps correspond to them, and the video units and audio units are re-multiplexed and synthesized to form the complete extracted featured video segment.
With this extraction method, since the entire video does not need to be decoded, considerable computing and processing resources are saved and the processing speed is improved.
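The closest-video-unit lookup at the heart of this low-resource extraction can be sketched as follows. The `(timestamp, payload)` tuple layout is an assumption for illustration; real streams would carry multiplexed media units, and the selected units would then be re-multiplexed as described.

```python
# Sketch (assumed data layout): the undecoded stream as a list of
# (timestamp, payload) video units; the units closest to the segment's
# start and end timestamps bound the extraction, with no decoding.

def closest_unit(units, ts):
    return min(range(len(units)), key=lambda i: abs(units[i][0] - ts))

def extract_units(units, start_ts, end_ts):
    lo = closest_unit(units, start_ts)
    hi = closest_unit(units, end_ts)
    return units[lo:hi + 1]

# Five video units, one every 2 s (timestamps in seconds).
units = [(0.0, "u0"), (2.0, "u1"), (4.0, "u2"), (6.0, "u3"), (8.0, "u4")]
picked = extract_units(units, start_ts=3.1, end_ts=6.4)
```

Because only unit timestamps are compared, the cut is approximate to the nearest unit boundary, which is exactly the trade-off the timestamp-precise mode below exists to avoid.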
Further, considering that some featured video segments have strict timing requirements, it can also be arranged that, before the featured-video extraction, the attribute information of the target video is first obtained; whether the target video needs the timestamp-precise extraction mode is judged according to the attribute information; if needed, the target video is decoded, and the featured video segment is extracted from the decoded target video according to the featured-video interception rule and the timestamp information of the decoded target video; if not needed, the video units whose timestamp information is closest to that of the target frame are searched for in the undecoded target video, wherein the target video includes N video units and N is a positive integer greater than 1, and the featured video segment is determined and extracted according to the closest video units.
That is, according to the featured video segments corresponding to each category of feature information, the operator sets in advance, in the attribute information of the target video, an extraction flag characterizing whether the timestamp-precise extraction mode is needed; for example, if timestamp-precise extraction is needed, the number after the Ti flag of the attribute information is set to 1; if not, it is set to 0. Before the subsequent extraction, whether the target video needs the timestamp-precise extraction mode is first judged according to the preset flag in the attribute information: if so, the target video is first decoded and then extracted precisely by per-frame timestamps; if not, the target video is not decoded, and the low-resource extraction by the timestamps of the video units is performed directly.
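The flag-based dispatch can be sketched as a one-line decision; the dictionary key `"precise"` stands in for the Ti flag described above and is a naming assumption, as is treating an absent flag as the cheap path.

```python
# Sketch of the attribute-flag dispatch: a hypothetical "precise" key in
# the attribute information (standing in for the Ti flag) selects the
# timestamp-precise mode or the low-resource mode. Names are assumptions.

def choose_extraction_mode(attributes):
    """Return the name of the extraction mode the target video needs."""
    return "precise" if attributes.get("precise") == 1 else "low_resource"

modes = [
    choose_extraction_mode({"precise": 1}),  # strict-timing segments
    choose_extraction_mode({"precise": 0}),
    choose_extraction_mode({}),              # flag absent: cheap path
]
```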
Then, step S103 is executed: the N featured video segments are spliced into one video to form a spliced video.
In a specific implementation, there may be many methods of splicing the multiple featured video segments; three are listed below:
The N featured video segments can be spliced into one video with a prompt video inserted before each featured video segment; the prompt video describes the featured video segment about to be played, forming the spliced video. That is, a pre-prepared prompt video is inserted before each featured video segment; the prompt video may include the play time of the next featured video segment in the original target video, a description of its content, its content type, and so on.
The N featured video segments can also be spliced into one video with an interval video inserted between every two featured video segments; the interval video characterizes that the previous featured video segment has finished playing and the next featured video segment is about to play, forming the spliced video. That is, a pre-prepared interval video is inserted between the featured video segments; the interval video may be a blank video, a default credits video, a self-introduction video of the anchor, and so on.
The N featured video segments can also be spliced into one video with playback prompt information superimposed on the opening video of each featured video segment; the prompt information describes the featured video segment being played, forming the spliced video. That is, in order not to lengthen the playback, preset prompt information is composited into the first frame or frames of each featured video segment. The prompt information may be a prompt picture or a prompt voice; a prompt picture may be presented picture-in-picture or as a semi-transparent overlay; no restriction is imposed here.
Of course, in a specific implementation, the splicing modes are not limited to the above three; the multiple featured video segments can also be seamlessly spliced in timestamp order to reduce playback and processing time; no restriction is imposed here, and the modes will not be enumerated one by one.
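The first splicing variant (a prompt clip before each segment) can be sketched as assembling an ordered play list. Clip identifiers are purely illustrative; an actual implementation would concatenate the media files themselves, for example with a stream-copy concatenation tool.

```python
# Sketch of the splicing step as a play list (assumed representation):
# a prompt clip is interleaved before every featured video segment,
# per the first splicing variant. Clip names are illustrative.

def splice_with_prompts(segments, make_prompt):
    """Interleave a prompt clip before every featured segment."""
    spliced = []
    for seg in segments:
        spliced.append(make_prompt(seg))  # describes the next segment
        spliced.append(seg)
    return spliced

segments = ["seg_kill", "seg_lottery"]
spliced = splice_with_prompts(segments, lambda s: f"prompt_for_{s}")
```

The interval-video variant is the same pattern with the inserted clip placed between segments rather than before each one.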
Subsequently, step S104 is executed: when a request sent by a client for requesting the spliced video is received, the spliced video is sent to the client for playback.
In the embodiment of the present application, after the spliced video is formed, an open link to the spliced video can also be placed on the room page of the anchor corresponding to the target video on the live-streaming website, so that viewers can directly trigger the link and choose to play the spliced video.
Of course, in the embodiment of the present application, the featured video segments can also be marked on the playback progress bar.
That is, after a featured video segment has been determined, its play time information in the target video is obtained; according to the play time information, the featured video segment is marked at the target position corresponding to the play time information on the playback progress bar of the target video.
Specifically, the featured video segment can be marked, in a special-marking manner, at the target position corresponding to the play time information on the playback progress bar of the target video: the target position may be given a color change, the width of the progress bar may be changed there, or marking lines may be added.
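Placing the mark requires converting the segment's play time into a position on the bar. A minimal sketch, assuming the position is expressed as fractions of the bar's width in [0, 1]; that representation and the names are illustrative, not from the patent.

```python
# Sketch: convert a featured segment's play times into marker positions
# on the progress bar, as fractions of the bar width (assumption).

def marker_fractions(segment_start_s, segment_end_s, duration_s):
    return (segment_start_s / duration_s, segment_end_s / duration_s)

# A 30 s - 90 s featured segment on a 600 s target video.
frac = marker_fractions(30.0, 90.0, 600.0)
```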
When an operation acting on the target position is received, a picture or video characterizing the featured video segment can be displayed. Specifically, the image of the target frame may be shown, other pictures in the featured video segment may be shown, playback of the featured video segment may be triggered, or a preset introduction picture describing the segment may be shown.
In a specific implementation, the picture or video characterizing the featured video segment may be displayed in a separate window, directly in the playback window of the target video, or superimposed on the playback window of the target video; no restriction is imposed here. The superimposed display may use picture-in-picture or a semi-transparent overlay; no restriction is imposed here either.
Further, in the embodiment of the present application, considering that feature-information matching and featured-video-segment extraction both occupy resources, in order to avoid the tasks interfering with each other and contending for resources during execution, it can also be arranged that the determination of featured video segments from the target video by feature matching is implemented at the GCR-Work layer; the extraction of the N featured video segments from the target video is implemented at the Media-Worker layer; and the video splicing, together with the marking of the featured video segments at the target positions corresponding to the play time information on the playback progress bar of the target video, is implemented at the Media-Worker layer.
Specifically, N featured video segments are determined and extracted from the target video using feature matching and/or barrage information analysis and spliced into one video to form a spliced video; when a request sent by a client for requesting the spliced video is received, the spliced video is sent to the client for playback, so that viewers need not watch the entire target video in full: by directly watching the spliced video they can see all the most exciting clips they would least want to miss, which effectively saves viewing time and enables viewers to obtain all the featured video clips within a short period of time.
Based on the same inventive concept, an embodiment of the present invention also provides a device corresponding to the video splicing method of Embodiment One; see Embodiment Two.
Embodiment Two
This embodiment provides a video splicing device. As shown in FIG. 3, the device includes:
a determination unit 301, configured to determine N featured video segments from the target video using feature matching and/or barrage information analysis, N being greater than 1;
an extraction unit 302, configured to extract the N featured video segments from the target video;
a splicing unit 303, configured to splice the N featured video segments into one video to form a spliced video; and
a transmission unit 304, configured to, when a request sent by a client for requesting the spliced video is received, send the spliced video to the client for playback.
In the embodiment of the present application, the device may be an electronic device such as a smartphone, desktop computer, notebook, or tablet computer; no restriction is imposed here.
In the embodiment of the present application, the device may run the Android, iOS, or Windows system; no restriction is imposed here.
Since the device introduced in Embodiment Two of the present invention is the device used to implement the method of Embodiment One of the present invention, based on the method introduced in Embodiment One, those skilled in the art can understand the specific structure and variants of the device, so details are not described here. All devices used in the method of Embodiment One of the present invention belong to the scope the present invention intends to protect.
Based on the same inventive concept, the present application provides an electronic-device embodiment corresponding to Embodiment One; see Embodiment Three for details.
Embodiment Three
This embodiment provides an electronic device. As shown in FIG. 4, the electronic device includes a memory 410, a processor 420, and a computer program 411 stored on the memory 410 and runnable on the processor 420; when the processor 420 executes the computer program 411, any implementation of Embodiment One can be realized.
Since the electronic device introduced in this embodiment is the device used to implement the method of Embodiment One of the present application, based on the method described in Embodiment One, those skilled in the art can understand the specific implementation of the electronic device of this embodiment and its various variants, so how the electronic device realizes the method of the embodiment of the present application is not discussed in detail here. Any device used by those skilled in the art to implement the method of the embodiment of the present application belongs to the scope to be protected by the present application.
Based on the same inventive concept, the present application provides a storage medium corresponding to Embodiment One; see Embodiment Four for details.
Embodiment Four
This embodiment provides a computer-readable storage medium 500. As shown in FIG. 5, a computer program 511 is stored on the medium; when the program 511 is executed by a processor, any implementation of Embodiment One can be realized.
The technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
The method, device, equipment, and medium provided by the embodiments of the present application determine and extract N featured video segments from the target video using feature matching and/or barrage information analysis, splice the N featured video segments into one video to form a spliced video, and, when a request sent by a client for requesting the spliced video is received, send the spliced video to the client for playback, so that viewers need not watch the entire target video in full: by directly watching the spliced video they can see all the most exciting clips they would least want to miss, which effectively saves viewing time and enables viewers to obtain all the featured video clips within a short period of time.
Further, by judging from the attribute information of the target video whether the timestamp-precise extraction mode is needed and, when needed, decoding the target video and extracting the featured video segment according to the timestamp information of the decoded target video, while, when not needed, directly searching the undecoded target video for the video units whose timestamps are closest to those of the target frame and extracting the featured video segment accordingly, the extraction time of the videos that do not need precise extraction is effectively reduced while the extraction accuracy of the videos that do need precise extraction is guaranteed.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, equipment (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present invention have been described, once persons skilled in the art know the basic creative concept, additional changes and modifications may be made to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (10)
1. A video splicing method, characterized by comprising:
determining N featured video segments from a target video using feature matching and/or barrage information analysis, N being greater than 1;
extracting the N featured video segments from the target video;
splicing the N featured video segments into one video to form a spliced video; and
when a request sent by a client for requesting the spliced video is received, sending the spliced video to the client for playback.
2. The method according to claim 1, characterized in that the determining N featured video segments in the target video comprises:
setting feature information according to the video category of the target video; performing feature matching on the target video to determine target frames in the target video that match the feature information; and determining the N featured video segments in the target video according to the target frames and a preset featured-video interception rule, wherein each featured video segment includes a target frame and the featured-video interception rule corresponds to the feature information; or,
obtaining the target video and barrage information, the barrage information including barrage quantity information of the target video during historical playback; and determining, according to the barrage information, N featured video segments in the target video whose barrage situation meets a preset requirement.
3. The method according to claim 1, characterized in that the extracting the N featured video segments from the target video comprises:
obtaining attribute information of the target video;
judging, according to the attribute information, whether the target video needs a timestamp-precise extraction mode;
if needed, performing video decoding on the target video, and extracting the N featured video segments from the decoded target video according to the featured-video interception rule and the timestamp information of the decoded target video;
if not needed, searching the undecoded target video for the video units whose timestamp information is closest to the timestamp information of the target frame, wherein the target video includes N video units and N is a positive integer greater than 1; and determining and extracting the featured video segments according to the closest video units.
4. The method according to claim 1, characterized in that the splicing the N featured video segments into one video to form the spliced video comprises:
splicing the N featured video segments into one video and inserting a prompt video before each featured video segment, the prompt video describing the featured video segment to be played, to form the spliced video; or,
splicing the N featured video segments into one video and inserting an interval video between every two featured video segments, the interval video characterizing that the previous featured video segment has finished playing and the next featured video segment is about to play, to form the spliced video; or,
splicing the N featured video segments into one video and superimposing playback prompt information on the opening video of each featured video segment, the prompt information describing the featured video segment being played, to form the spliced video.
5. The method according to claim 1, characterized in that the determining N featured video segments in the target video is implemented at a GCR-Work layer, and the extracting the N featured video segments from the target video is implemented at a Media-Worker layer.
6. A video splicing device, characterized by comprising:
a determination unit, configured to determine N featured video segments from a target video using feature matching and/or barrage information analysis, N being greater than 1;
an extraction unit, configured to extract the N featured video segments from the target video;
a splicing unit, configured to splice the N featured video segments into one video to form a spliced video; and
a transmission unit, configured to, when a request sent by a client for requesting the spliced video is received, send the spliced video to the client for playback.
7. The device of claim 6, wherein the concatenation unit is further configured to:
splice the N featured video segments into one video, and insert a prompt video before each featured video segment, the prompt video describing the featured video segment about to play, to form the spliced video; or
splice the N featured video segments into one video, and insert an interval video between every two featured video segments, the interval video indicating that the previous featured video segment has finished playing and the next featured video segment is about to play, to form the spliced video; or
splice the N featured video segments into one video, and superimpose playback prompt information on the opening portion of each featured video segment, the prompt information describing the featured video segment currently being played, to form the spliced video.
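The three splicing alternatives recited above can be sketched as simple list operations. The function names and the string stand-ins for clips are assumptions for illustration only; actual splicing would happen at the container or frame level:

```python
def splice_with_prompt_videos(segments, make_prompt):
    # Alternative 1: insert a prompt clip before each featured segment,
    # describing the segment about to play.
    out = []
    for seg in segments:
        out.append(make_prompt(seg))
        out.append(seg)
    return out

def splice_with_interval_videos(segments, interval_clip):
    # Alternative 2: insert an interval clip between every two featured
    # segments, marking the end of one and the start of the next.
    out = []
    for i, seg in enumerate(segments):
        if i > 0:
            out.append(interval_clip)
        out.append(seg)
    return out

def splice_with_overlays(segments, describe):
    # Alternative 3: no extra clips; prompt text is superimposed on the
    # opening of each segment (represented here as a (clip, text) pair).
    return [(seg, describe(seg)) for seg in segments]
```

The first two alternatives lengthen the spliced video, while the third keeps its duration unchanged at the cost of compositing text onto frames.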
8. The device of claim 6, wherein the extraction unit is further configured to:
obtain attribute information of the target video;
judge, according to the attribute information, whether the target video requires the timestamp-precise extraction mode;
if required, decode the target video, and extract the N featured video segments from the decoded target video according to the featured-video interception rule and the timestamp information of the decoded target video;
if not required, find, in the undecoded target video, the video unit whose timestamp information is closest to the timestamp information of the target frame, wherein the target video comprises N video units, N being a positive integer greater than 1; and determine and extract the featured video segments according to the closest video unit.
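The two extraction branches above can be sketched as follows. The function, its parameters, and `unit_starts` (assumed sorted start times of independently extractable video units, such as GOPs or transport-stream chunks) are illustrative assumptions rather than the claimed logic:

```python
import bisect

def extract_boundaries(start, end, unit_starts, needs_precise):
    """Pick extraction boundaries for one featured video segment."""
    if needs_precise:
        # Timestamp-precise mode: decode the whole video first, then cut
        # exactly at the requested timestamps (frame-accurate but costly).
        return ("decoded", start, end)

    # Coarse mode: no decoding; snap each boundary to the video unit
    # whose start time is closest to the requested timestamp.
    def nearest(t):
        i = bisect.bisect_left(unit_starts, t)
        candidates = unit_starts[max(0, i - 1): i + 1]
        return min(candidates, key=lambda u: abs(u - t))

    return ("units", nearest(start), nearest(end))
```

The trade-off the claim encodes: decoding buys frame-accurate boundaries at the cost of a full decode, while snapping to the nearest video unit avoids decoding but may shift each boundary by up to one unit.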
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 6.
10. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810752191.5A CN109089127B (en) | 2018-07-10 | 2018-07-10 | Video splicing method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109089127A true CN109089127A (en) | 2018-12-25 |
CN109089127B CN109089127B (en) | 2021-05-28 |
Family
ID=64837508
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810752191.5A Active CN109089127B (en) | 2018-07-10 | 2018-07-10 | Video splicing method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109089127B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101268505A (en) * | 2006-01-06 | 2008-09-17 | 三菱电机株式会社 | Method and system for classifying a video |
CN102902756A (en) * | 2012-09-24 | 2013-01-30 | 南京邮电大学 | Video abstraction extraction method based on story plots |
US20160112727A1 (en) * | 2014-10-21 | 2016-04-21 | Nokia Technologies Oy | Method, Apparatus And Computer Program Product For Generating Semantic Information From Video Content |
CN105847993A (en) * | 2016-04-19 | 2016-08-10 | 乐视控股(北京)有限公司 | Method and device for sharing video clip |
CN107154264A (en) * | 2017-05-18 | 2017-09-12 | 北京大生在线科技有限公司 | The method that online teaching wonderful is extracted |
CN107438204A (en) * | 2017-07-26 | 2017-12-05 | 维沃移动通信有限公司 | A kind of method and mobile terminal of media file loop play |
2018-07-10: CN201810752191.5A filed in China; granted as CN109089127B (status: Active)
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020155295A1 (en) * | 2019-01-30 | 2020-08-06 | 网宿科技股份有限公司 | Live data processing method and system, and server |
US11025984B2 (en) | 2019-01-30 | 2021-06-01 | Wangsu Science & Technology Co., Ltd. | Method, system for processing a live-broadcasting data, and server thereof |
CN110505530B (en) * | 2019-07-17 | 2021-07-06 | 深圳市中鹏教育科技股份有限公司 | Streaming media internet big data bullet screen processing system |
CN110505530A (en) * | 2019-07-17 | 2019-11-26 | 刘彩霞 | A kind of Streaming Media internet big data barrage processing system and method |
US11893054B2 (en) | 2019-11-08 | 2024-02-06 | Beijing Bytedance Network Technology Co., Ltd. | Multimedia information processing method, apparatus, electronic device, and medium |
WO2021089002A1 (en) * | 2019-11-08 | 2021-05-14 | 北京字节跳动网络技术有限公司 | Multimedia information processing method, apparatus, electronic device, and medium |
CN110933511A (en) * | 2019-11-29 | 2020-03-27 | 维沃移动通信有限公司 | Video sharing method, electronic device and medium |
CN110958465A (en) * | 2019-12-17 | 2020-04-03 | 广州酷狗计算机科技有限公司 | Video stream pushing method and device and storage medium |
CN111083525B (en) * | 2019-12-27 | 2022-01-11 | 恒信东方文化股份有限公司 | Method and system for automatically generating intelligent image |
CN111083525A (en) * | 2019-12-27 | 2020-04-28 | 恒信东方文化股份有限公司 | Method and system for automatically generating intelligent image |
CN113542845A (en) * | 2020-04-16 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and storage medium |
CN113542845B (en) * | 2020-04-16 | 2024-02-02 | 腾讯科技(深圳)有限公司 | Information display method, device, equipment and storage medium |
CN111711861A (en) * | 2020-05-15 | 2020-09-25 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN111711861B (en) * | 2020-05-15 | 2022-04-12 | 北京奇艺世纪科技有限公司 | Video processing method and device, electronic equipment and readable storage medium |
CN113055741A (en) * | 2020-12-31 | 2021-06-29 | 科大讯飞股份有限公司 | Video abstract generation method, electronic equipment and computer readable storage medium |
CN113055741B (en) * | 2020-12-31 | 2023-05-30 | 科大讯飞股份有限公司 | Video abstract generation method, electronic equipment and computer readable storage medium |
CN113473224A (en) * | 2021-06-29 | 2021-10-01 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and computer readable storage medium |
CN113473224B (en) * | 2021-06-29 | 2023-05-23 | 北京达佳互联信息技术有限公司 | Video processing method, video processing device, electronic equipment and computer readable storage medium |
CN114339304A (en) * | 2021-12-22 | 2022-04-12 | 中国电信股份有限公司 | Live video processing method and device and storage medium |
CN115174947A (en) * | 2022-06-28 | 2022-10-11 | 广州博冠信息科技有限公司 | Live video extraction method and device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109089127B (en) | 2021-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108924576A (en) | A kind of video labeling method, device, equipment and medium | |
CN109089154A (en) | A kind of video extraction method, apparatus, equipment and medium | |
CN109089127A (en) | A kind of video-splicing method, apparatus, equipment and medium | |
CN109089128A (en) | A kind of method for processing video frequency, device, equipment and medium | |
US20220053160A1 (en) | System and methods providing sports event related media to internet-enabled devices synchronized with a live broadcast of the sports event | |
US20160316233A1 (en) | System and method for inserting, delivering and tracking advertisements in a media program | |
CN110446115A (en) | Living broadcast interactive method, apparatus, electronic equipment and storage medium | |
US10469902B2 (en) | Apparatus and method for confirming content viewing | |
US10981056B2 (en) | Methods and systems for determining a reaction time for a response and synchronizing user interface(s) with content being rendered | |
CN109040773A (en) | A kind of video improvement method, apparatus, equipment and medium | |
WO2020072820A1 (en) | Overlaying content within live streaming video | |
CN105872786B (en) | A kind of method and device for launching advertisement by barrage in a program | |
CN108292314B (en) | Information processing apparatus, information processing method, and program | |
CN113490004B (en) | Live broadcast interaction method and related device | |
GB2503878A (en) | Generating interstitial scripts for video content, based on metadata related to the video content | |
CN109714622B (en) | Video data processing method and device and electronic equipment | |
CN106851326B (en) | Playing method and device | |
CN108133385A (en) | A kind of advertisement placement method and device | |
CN114025188B (en) | Live advertisement display method, system, device, terminal and readable storage medium | |
CN107635153B (en) | Interaction method and system based on image data | |
CN110784751A (en) | Information display method and device | |
CN113824983B (en) | Data matching method, device, equipment and computer readable storage medium | |
US20170311009A1 (en) | Promotion information processing method, device and apparatus, and non-volatile computer storage medium | |
CN105848005A (en) | Video subtitle display method and video subtitle display device | |
CN110602528B (en) | Video processing method, terminal, server and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |